\section{The four vertex theorem} The classic 4-vertex theorem states that {\it the curvature of a smooth closed convex planar curve has at least four critical points}, see Figure \ref{ellipse} for an illustration. \begin{figure}[hbtp] \centering \includegraphics[height=2in]{ellipse} \caption{An ellipse and its evolute, the envelope of its normals or, equivalently, the locus of the centers of curvature. The cusps of the evolute correspond to the vertices of the curve.} \label{ellipse} \end{figure} Since its discovery by S. Mukhopadhyaya in 1909, this theorem has generated a large literature, comprising various generalizations and variants of this result; see \cite{DGPV,GTT,OT3,Pak} for a sampler. One approach to the proof of the 4-vertex theorem is based on the following observation: if a $2\pi$-periodic function $f(x)$ is $L^2$-orthogonal to the first harmonics, that is, to the functions $1, \sin x, \cos x$, then $f(x)$ must have at least four sign changes over the period. The proof is simple: since $\int_0^{2\pi} f(x) dx =0$, the function $f(x)$ must change sign. If there are only two sign changes, one can find a linear combination $g(x)=c+a\cos x + b \sin x$ that changes sign at the same points as $f(x)$. Since the first harmonic $g(x)$ cannot have more than two sign changes, $f(x) g(x)$ has a constant sign, and $\int_0^{2\pi} f(x) g(x) dx \neq 0$, a contradiction. Discrete versions of this argument are at the heart of the proofs presented below. (In the 4-vertex theorem, one takes $f(x)=p'(x)+p'''(x)$, where $p(x)$ is the support function of the curve; then $p(x)+p''(x)$ is the curvature radius.) The above observation is a particular case of the Sturm-Hurwitz theorem: {\it the number of zeros of a periodic function is not less than the number of zeros of its first non-trivial harmonic}, see \cite{OT3} for five proofs and applications of this remarkable result.
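The counting argument above is easy to test numerically. The following sketch (an illustration only, not part of the proof; the function names are ad hoc) samples a $2\pi$-periodic function whose lowest harmonic is the second, confirms its discrete orthogonality to $1, \sin x, \cos x$, and counts its cyclic sign changes:

```python
import math

def cyclic_sign_changes(values):
    """Count sign changes in a cyclic sequence, skipping zeros."""
    signs = [v for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:] + signs[:1]) if a * b < 0)

# A function orthogonal to the first harmonics: its lowest harmonic is 2.
N = 2000
xs = [2 * math.pi * i / N for i in range(N)]
f = [math.sin(2 * x) + 0.3 * math.cos(3 * x) for x in xs]

# Discrete orthogonality to 1, sin x, cos x (the Riemann sums vanish).
for g in ([1.0] * N, [math.sin(x) for x in xs], [math.cos(x) for x in xs]):
    assert abs(sum(u * v for u, v in zip(f, g)) / N) < 1e-9

assert cyclic_sign_changes(f) >= 4  # at least four, as the argument predicts
```

Any trigonometric polynomial starting at the second harmonic behaves the same way, in line with the Sturm-Hurwitz theorem.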
\begin{figure}[hbtp] \centering \includegraphics[height=2in]{Mukhopadhyaya}\quad \includegraphics[height=2in]{Sturm}\quad \includegraphics[height=2in]{Hurwitz} \caption{Syamadas Mukhopadhyaya, Jacques Charles Fran\c{c}ois Sturm, and Adolf Hurwitz.} \label{MSH} \end{figure} \section{Frieze patterns} A frieze pattern is an array of numbers consisting of finitely many bi-infinite rows, each row offset by half a step from the previous one. The top two rows consist of 0s and of 1s, respectively, the bottom two rows consist of 1s and of 0s as well, and every elementary diamond $$ \begin{matrix} &N& \\ W&&E \\ &S& \end{matrix} $$ satisfies the unimodular relation $EW-NS=1$. The number of non-trivial rows is called the width of a frieze pattern. Denote the width by $w$ and set $n=w+3$. For example, a general frieze pattern with $w=2, n=5$ looks like this: $$ \begin{array}{ccccccccccc} \cdots&&1&& 1&&1&&\cdots \\[4pt] &x_1&&\frac{x_2+1}{x_1}&&\frac{x_1+1}{x_2}&&x_2&& \\[4pt] \cdots&&x_2&&\frac{x_1+x_2+1}{x_1x_2}&&x_1&&\cdots \\[4pt] &1&&1&&1&&1&& \end{array} $$ where the rows of 0s are omitted. These formulas appeared in the paper by Gauss ``Pentagramma Mirificum", published posthumously; Gauss calculated geometric quantities characterizing spherical self-polar pentagons, see Figure \ref{miri}. See also A. Cayley's paper \cite{Cay}. (According to Coxeter \cite{Cox} -- the very paper where frieze patterns were introduced -- the story goes further back, to N. Torporley, who in 1602 investigated the five ``parts" of a right-angled spherical triangle, anticipating by a dozen years the rule of J. Napier in spherical trigonometry.)
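The diamond rule determines the whole pattern from one period of the first non-trivial row. Here is a short computational sketch (exact rational arithmetic; the function name and interface are ad hoc) that propagates the rule $S=(EW-1)/N$ downward and checks that the pattern closes up with a row of 1s:

```python
from fractions import Fraction

def frieze(first_row, width):
    """Build a frieze pattern from one period of its first nontrivial row,
    using the diamond rule E*W - N*S = 1, i.e. S = (E*W - 1)/N.
    Returns the `width` nontrivial rows followed by the closing row,
    which consists of 1s exactly when the input is a genuine frieze row."""
    n = len(first_row)                      # the period; n = width + 3
    rows = [[Fraction(1)] * n, [Fraction(x) for x in first_row]]
    for _ in range(width):
        north, mid = rows[-2], rows[-1]
        rows.append([(mid[i] * mid[(i + 1) % n] - 1) / north[(i + 1) % n]
                     for i in range(n)])
    return rows[1:]

# One period of the first row of the width-four integer frieze shown below
# (the numbers of triangles at the vertices of a triangulated heptagon):
rows = frieze([1, 3, 2, 2, 1, 4, 2], width=4)
assert all(x == 1 for x in rows[-1])        # the pattern closes up
assert all(x > 0 and x.denominator == 1 for row in rows[:-1] for x in row)
```

The two assertions confirm both the closing row of 1s and the (surprising) integrality of all entries for this particular first row.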
\begin{figure}[hbtp] \centering \includegraphics[height=2.5in]{mirificum1} \caption{Pentagramma mirificum of Carl Friedrich Gauss.} \label{miri} \end{figure} And here is a frieze pattern of width four whose entries are natural numbers: $$ \begin{array}{ccccccccccccccccccccccc} &&1&&1&&1&&1&&1&&1&&1&&1 \\[4pt] &1&&3&&2&&2&&1&&4&&2&&1& \\[4pt] &&2&&5&&3&&1&&3&&7&&1&&2& \\[4pt] &1&&3&&7&&1&&2&&5&&3&&1& \\[4pt] &&1&&4&&2&&1&&3&&2&&2&&1& \\[4pt] &1&&1&&1&&1&&1&&1&&1&&1 \end{array} $$ The very existence of such frieze patterns is surprising: the unimodular rule $EW-NS=1$ does not agree easily with the property of being a positive integer! The frieze patterns consisting of positive integers were classified by Conway and Coxeter \cite{CoCo}: they are in 1-1 correspondence with the triangulations of a convex $n$-gon by diagonals, and there are $\frac{(2(w+1))!}{(w+1)!(w+2)!}$ (the Catalan number) of them; see \cite{Hen} for an exposition of this beautiful theorem. For example, the above frieze pattern corresponds to the triangulation in Figure \ref{heptagon}. \begin{figure}[hbtp] \centering \includegraphics[height=2in]{heptagon} \caption{A triangulation of a heptagon: the labels are the numbers of triangles adjacent to each vertex. These numbers comprise the first row of the frieze pattern.} \label{heptagon} \end{figure} For a while, frieze patterns remained a relatively esoteric subject, but recently they have attracted much attention due to their significance in algebraic combinatorics and the theory of cluster algebras. I recommend \cite{Mor} for a comprehensive contemporary survey of this subject. \begin{figure}[hbtp] \centering \includegraphics[height=1.5in]{Conway}\quad \includegraphics[height=1.5in]{Coxeter} \caption{John Horton Conway and Harold Scott MacDonald Coxeter.} \label{ConCox} \end{figure} Let us summarize the basic properties of frieze patterns relevant to this article. Denote by $a_i$ the entries of the first non-trivial row.
\begin{enumerate} \item The NE diagonals of a frieze pattern satisfy the 2nd order linear recurrence (discrete Hill's equation) $$V_{i+1} = a_i V_i - V_{i-1}$$ with $n$-periodic coefficients, all of whose solutions are antiperiodic, i.e., $V_{i+n}=-V_i$ for all $i$: $$ \begin{array}{ccccccccccc} 0\ \ &&0&&0&&0\\ &1&&1&&1\\ &&a_1&&a_{2}&&a_{3}\\ &&&a_1a_2-1&&a_2a_3-1\\ &&\cdots&&a_1a_2a_3-a_1-a_3&&\cdots\\ \end{array} $$ \item The solutions of the discrete Hill's equation can be thought of as polygonal lines $\ldots,V_1, V_2, \ldots$ in ${\mathbb R}^2$, with $\det (V_i,V_{i+1})=1$ and $V_{i+n}=-V_i$. Such a polygonal line is well defined up to the $\operatorname{SL}(2,{\mathbb R})$-action. The projections of the vectors $V_i$ to ${\mathbb {RP}}^1$ form an $n$-gon therein, well-defined up to a M\"obius transformation. For odd $n$, this correspondence between frieze patterns of width $n-3$ and projective equivalence classes of $n$-gons in the projective line is 1-1. \item Label the entries as follows: $$ \begin{array}{ccc} &v_{i,j}&\\ v_{i,j-1}&&v_{i+1,j}\\ &v_{i+1,j-1}& \end{array} $$ with $a_i=v_{i-1,i+1}$. Then one has $v_{i,j}=\det(V_i,V_j)$, explaining the glide reflection symmetry of the entries: $v_{i,j} = v_{j,i+n}$. The Conway-Coxeter article \cite{CoCo} starts with a description of the seven ornamental frieze patterns where the glide reflection symmetry is represented by $ {\bf \large \ldots b\quad p\quad b\quad p\quad b\quad p\ldots } $ and described as ``the relation between successive footprints when one walks along a straight path covered with snow". In Conway's nomenclature, this ornamental frieze pattern is called ``step", see Figure \ref{step}.
\begin{figure}[hbtp] \centering \includegraphics[height=.7in]{step} \caption{An ornamental frieze pattern with the glide reflection symmetry.} \label{step} \end{figure} \item The entries of a frieze pattern are given by the tridiagonal determinants $$ \det\left|\begin{array}{ccccc} a_{j}&1&&&\\ 1&a_{j+1}&1&&\\ &\ddots&\ddots&\ddots&\\ &&1&a_{i-1}&1\\ &&&1&a_{i} \end{array}\right|, $$ the continuants (called so because of their relation with continued fractions; see \cite{CO} for an intriguing history of this name). \end{enumerate} \section{A problem, a theorem, and a counter-example} I shall be concerned with frieze patterns whose entries are positive real numbers. Given two such frieze patterns of the same width $w$, choose a row and consider the $n$-periodic sequence of the differences of the respective entries of the two friezes. I am interested in the number of sign changes in this sequence over the period. More precisely, let $1\le k \le [w/2]$ be the index of a row (we don't need to go beyond $[w/2]$ due to the glide symmetry), and let $v_{i,i+k+1}$ and $u_{i,i+k+1}$ be the entries of the $k$th rows of the two frieze patterns. I am interested in the sign changes of $v_{i,i+k+1}-u_{i,i+k+1}$ as $i$ increases by 1 (not excluding the case when some of these differences vanish). \medskip \noindent {\bf Problem 1.} {\it For which $k$ must the cyclic sequence $v_{i,i+k+1}-u_{i,i+k+1}$ have at least four sign changes? } \medskip As a partial answer, one has \medskip \noindent {\bf Theorem.} {\it Four sign changes must occur for $k=1$ and for $k=2$. In addition, for every $k$, four sign changes must occur in the infinitesimal version of the problem. } \medskip Let me explain the last statement. Consider a frieze pattern whose first row is constant: $a_i=2x$ for all $i$. Then each subsequent row is constant as well, and their entries, denoted by $U_k(x)$, satisfy $U_{k+1}(x) = 2x U_k(x) - U_{k-1}(x)$ with $U_0(x)=1, U_1(x)=2x$.
That is, $U_k(x)$ is the Chebyshev polynomial of the second kind: $$ U_k(\cos \alpha) = \frac{\sin(k+1)\alpha}{\sin \alpha}. $$ For this constant frieze pattern to have width $n-3$, set $\alpha=\pi/n$. For the infinitesimal version of Problem 1, take this constant frieze pattern and its infinitesimal deformation in the class of frieze patterns. Originally, I hoped that Problem 1 had an affirmative answer for all values of $k$. However, this conjecture was over-optimistic. The following counter-example was provided by Michael Cuntz; in this example, $w=5$ (the smallest width not ruled out by the Theorem), all entries are positive rational numbers, and the differences of the entries of the third row are all positive (this row is 4-periodic due to the glide symmetry). I present only the first rows of the two frieze patterns; these are 8-periodic sequences: $$ \left(2,\ 2,\ 4,\ 2,\ 3,\ \frac{18}{41},\ 41,\ \frac{30}{41}\right) \ \ {\rm and}\ \ \left( 5,\ \frac{21}{97},\ 194,\ \frac{36}{97},\ 3,\ 5,\ 1,\ 5 \right). $$ It may still be possible that the bold conjecture holds for Conway-Coxeter frieze patterns that consist of positive integers. \section{Proofs} \paragraph{Case $k=1$.} Let $a_i$ and $b_i$ be the entries of the first rows of the two frieze patterns. Consider the respective discrete Hill's equations $$ V_{i+1} = a_i V_i - V_{i-1},\ U_{i+1} = b_i U_i - U_{i-1}. $$ Let $U_i$ and $V_i$ be some solutions. I claim that the sequence $a_i-b_i$ is $\ell_2$-orthogonal to $U_i V_i$: \begin{equation} \label{orth} \sum _1^n (a_i - b_i) U_i V_i = 0. \end{equation} Indeed, $$ \sum _1^n (a_i - b_i) U_i V_i = \sum _1^n U_i [V_{i+1} + V_{i-1}] - [U_{i+1} + U_{i-1}] V_i = 0, $$ due to antiperiodicity. Note that the space of solutions of a discrete Hill equation is 2-dimensional, and that its solutions are non-oscillating in the sense that they change sign only once over the period (since the entries of the frieze pattern are positive).
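Both the counter-example and the orthogonality relation (\ref{orth}) lend themselves to verification in exact arithmetic. The following sketch (my own check, with ad hoc helper names, not Cuntz's computation) iterates the discrete Hill's equation for the two first rows above, confirms antiperiodicity and the relation (\ref{orth}), and confirms that the third-row differences are indeed all positive:

```python
from fractions import Fraction as F

# First rows of the two width-5 (n = 8) frieze patterns above.
a = [F(2), F(2), F(4), F(2), F(3), F(18, 41), F(41), F(30, 41)]
b = [F(5), F(21, 97), F(194), F(36, 97), F(3), F(5), F(1), F(5)]
n = 8

def hill(coeffs, v0, v1, steps):
    """Iterate V_{i+1} = a_i V_i - V_{i-1} starting from V_0, V_1."""
    V = [v0, v1]
    for i in range(1, steps):
        V.append(coeffs[i % n] * V[i] - V[i - 1])
    return V

def entry(coeffs, i, j):
    """Frieze entry v_{i,j}: the solution with v_{i,i} = 0, v_{i,i+1} = 1."""
    prev, cur = F(0), F(1)
    for k in range(i + 1, j):
        prev, cur = cur, coeffs[k % n] * cur - prev
    return cur

# All solutions are antiperiodic: V_{i+n} = -V_i.
V = hill(a, F(0), F(1), 2 * n + 1)
U = hill(b, F(1), F(2), 2 * n + 1)
assert all(V[i + n] == -V[i] for i in range(n + 1))
assert all(U[i + n] == -U[i] for i in range(n + 1))

# Orthogonality: the sum of (a_i - b_i) U_i V_i over a period vanishes.
assert sum((a[i % n] - b[i % n]) * U[i] * V[i] for i in range(1, n + 1)) == 0

# Third row (k = 3): the differences u_{i,i+4} - v_{i,i+4} are all positive.
diffs = [entry(b, i, i + 4) - entry(a, i, i + 4) for i in range(n)]
assert all(d > 0 for d in diffs)
```

With exact rationals the differences come out as $3,\ 2,\ \tfrac{3}{3977},\ 1$, repeated with period four, so this sequence indeed has no sign changes.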
Assume that $a_i - b_i$ does not change sign at all. Choose the initial conditions for solutions $U_{i}$ and $V_i$ as follows: $$ U_1=-1,U_2=1,V_1=-1,V_2=1. $$ That is, both solutions change sign from $i=1$ to $i=2$, and then, due to the non-oscillating property, there are no other sign changes. Hence $U_i V_i >0$ for all $i$, contradicting (\ref{orth}). Likewise, if $a_i - b_i$ changes sign only twice, from $i_1$ to $i_1+1$, and from $i_2$ to $i_2+1$, choose the initial conditions for solutions $U_i$ and $V_i$ as follows: $$ U_{i_1}=-1,U_{i_1+1}=1,V_{i_2}=-1,V_{i_2+1}=1. $$ Then $(a_i - b_i) U_i V_i$ has a constant sign, again contradicting (\ref{orth}). $\Box$\bigskip This result, along with its proof, is a discrete version of the following theorem from \cite{OT1} concerning Hill's equations $\varphi''(x) = k(x) \varphi(x)$ whose solutions are $\pi$-antiperiodic (and hence the potential $k(x)$ is $\pi$-periodic) and disconjugate, meaning that every solution changes sign only once on the period $[0,\pi)$. The claim is that, {\it given two such equations, the function $k_1(x) - k_2(x)$ has at least four zeroes on $[0,\pi)$}. This theorem is equivalent to the beautiful theorem of E. Ghys: {\it the Schwarzian derivative of a diffeomorphism of ${\mathbb {RP}}^1$ has at least four distinct zeroes}, see \cite{OT3} for the relation of the Schwarzian derivative with the Hill equation, and an explanation of why the zeroes of the Schwarzian derivative are the vertices of a curve in Lorentzian geometry. \paragraph{Case $k=2$.} As I mentioned, to a frieze pattern there corresponds an $n$-gon in ${\mathbb {RP}}^1$. The entries of the second row of the frieze pattern are the cross-ratios of the consecutive quadruples of the vertices of this $n$-gon, where the cross-ratio is defined as $$ [a,b,c,d]_1 = \frac{(d-a)(c-b)}{(d-c)(b-a)}, $$ see \cite{Mor}.
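The key observation in the next step is that the two common normalizations of the cross-ratio differ by a constant: writing $[a,b,c,d]_2=\frac{(d-b)(c-a)}{(d-c)(b-a)}$ for the variant used there, one has $[a,b,c,d]_2-[a,b,c,d]_1=1$ identically, since the two numerators differ by exactly $(d-c)(b-a)$. A quick exact-arithmetic check (illustration only):

```python
from fractions import Fraction
from itertools import permutations

def cr1(a, b, c, d):
    """Cross-ratio [a,b,c,d]_1 = (d-a)(c-b) / ((d-c)(b-a))."""
    return Fraction((d - a) * (c - b), (d - c) * (b - a))

def cr2(a, b, c, d):
    """Cross-ratio [a,b,c,d]_2 = (d-b)(c-a) / ((d-c)(b-a))."""
    return Fraction((d - b) * (c - a), (d - c) * (b - a))

# The two normalizations differ by 1 for every quadruple of distinct points.
for a, b, c, d in permutations([0, 1, 3, 7]):
    assert cr2(a, b, c, d) - cr1(a, b, c, d) == 1
```

In particular, the two normalizations have the same differences, so sign-change counts transfer between them.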
On the other hand, one of the results in \cite{OT2}, another discretization of Ghys's theorem, states that, given two cyclically ordered $n$-tuples of points $x_i$ and $y_i$ in ${\mathbb {RP}}^1$, the difference of the cross-ratios $[x_i,x_{i+1},x_{i+2},x_{i+3}]_2 - [y_i,y_{i+1},y_{i+2},y_{i+3}]_2$ changes sign at least four times; here the cross-ratio is defined by $$ [a,b,c,d]_2 = \frac{(d-b)(c-a)}{(d-c)(b-a)}. $$ To complete the proof, observe that $[a,b,c,d]_2 - [a,b,c,d]_1 =1$. $\Box$\bigskip \paragraph{Infinitesimal version, $k$ arbitrary.} Consider the polygonal line $$ V_i = \frac{1}{\sqrt{\sin \frac{\pi}{n}}} \left(\cos \frac{i\pi}{n}, \sin \frac{i\pi}{n} \right), $$ so that $[V_i,V_{i+1}]=1$ and $V_{i+n}=-V_i$ hold. Let $$ W_i = V_i + {\varepsilon} E_i,\ [W_i,W_{i+1}]=1 $$ be an infinitesimal deformation of this polygon $V_i$. I assume in the calculations that ${\varepsilon}^2=0$. Let \begin{equation} \label{EtoV} E_i=p_iV_i+\bar p_i V_{i+1} = q_i V_i + \bar q_i V_{i-1}. \end{equation} We shall express the $n$-periodic coefficients $p_i,\bar p_i, \bar q_i$ via the coefficients $q_i$, which alone determine the deformation. To do so, use the fact that $V_{i+1}=cV_i-V_{i-1}$ with $c= 2 \cos (\pi/{n})$. This linear relation must be equivalent to the second equality in (\ref{EtoV}), hence $$ q_i-p_i=c \bar p_i,\ \bar p_i = - \bar q_i. $$ We also have $[W_i,W_{i+1}]=1$, implying $[V_i,E_{i+1}]+[E_i,V_{i+1}]=0$ and, using (\ref{EtoV}), $p_i=-q_{i+1}$. Thus \begin{equation} \label{viaq} p_i = -q_{i+1},\ \bar p_i = \frac{1}{c} (q_i+q_{i+1}),\ \bar q_i = -\frac{1}{c} (q_i+q_{i+1}). \end{equation} Now fix $k$ and consider the deformation of the $(k-1)$st row of the frieze pattern: $$ [W_i,W_{i+k}]=[V_i,V_{i+k}] + {\varepsilon} ([V_i,E_{i+k}]+[E_i,V_{i+k}]).
$$ Using (\ref{EtoV}) and (\ref{viaq}), one finds \begin{equation*} \begin{split} [V_i,E_{i+k}]+[E_i,V_{i+k}] = (q_{i+k}-q_{i+1}) \left([V_i,V_{i+k}] - \frac{1}{c} [V_i,V_{i+k-1}]\right)&\\ - \frac{1}{c} (q_{i+k+1}-q_i) [V_i,V_{i+k-1}]&\\ = \frac{1}{\sin \frac{2\pi}{n}} \left[ (q_{i+k}-q_{i+1}) \sin \frac{\pi (k+1)}{n} - (q_{i+k+1}-q_i) \sin \frac{\pi (k-1)}{n} \right]&. \end{split} \end{equation*} We want to show that the sequence \begin{equation} \label{test} c_i := (q_{i+k}-q_{i+1}) \sin \frac{\pi (k+1)}{n} - (q_{i+k+1}-q_i) \sin \frac{\pi (k-1)}{n} \end{equation} must change sign at least four times. First, observe that $c_i$ is $\ell_2$-orthogonal to the constant sequence $(1,\ldots,1)$, that is, $\sum_{i=1}^n c_i =0$; hence $c_i$ must have sign changes. Next, I claim that $c_i$ is $\ell_2$-orthogonal to the sequence $\sin (2\pi i/n)$. Indeed \begin{equation*} \begin{split} \sum_{i=1}^{n} c_i \sin \frac{2\pi i}{n} = \sum_{i=1}^{n} &q_i \sin \frac{\pi (k+1)}{n} \left( \sin \frac{2\pi (i-k)}{n} - \sin \frac{2\pi (i-1)}{n} \right)\\ - &q_i \sin \frac{\pi (k-1)}{n} \left( \sin \frac{2\pi (i-k-1)}{n} + \sin \frac{2\pi i}{n} \right). \end{split} \end{equation*} Hence twice the coefficient of $q_i$ on the right hand side equals \begin{equation*} \begin{split} &\sin \frac{\pi (k+1)}{n} \sin \frac{\pi (1-k)}{n} \cos \frac{\pi (2i-k-1)}{n}\\ &+ \sin \frac{\pi (k-1)}{n} \sin \frac{\pi (1+k)}{n} \cos \frac{\pi (2i-k-1)}{n} =0, \end{split} \end{equation*} as needed. Similarly, $c_i$ is $\ell_2$-orthogonal to the sequence $\cos (2\pi i/n)$. Finally, if $c_i$ changes sign only twice, one can find a linear combination $$ c+ a \sin \frac{2\pi i}{n} + b \cos \frac{2\pi i}{n}, $$ a discrete first harmonic, that changes sign at the same positions as $c_i$. This ``first harmonic" has no other sign changes, so its signs coincide with those of $c_i$. But it is also orthogonal to $c_i$, a contradiction. 
$\Box$\bigskip \section{Back to four vertices, and another problem} Perhaps the oldest result in the spirit of the four vertex theorem is the Legendre-Cauchy Lemma (which is about 100 years older than the theorem of Mukhopadhyaya): {\it if two convex polygons in the plane have equal respective side lengths, then the cyclic sequence of the differences of their respective angles has at least four sign changes}. A version of this lemma in spherical geometry is the main ingredient of the proof of the Cauchy rigidity theorem ({\it convex polytopes with congruent corresponding faces are congruent to each other}); interestingly, its original proof contained an error that remained unnoticed for nearly a century, see, e.g., chapters 22 and 26 of \cite{Pak}. The values of the angles in the formulation of the Legendre-Cauchy Lemma can be replaced by the lengths of the short, skip-a-vertex, diagonals of the respective polygons: with fixed side lengths, the angles depend monotonically on these diagonals. In particular, one may assume that the polygons are equilateral, e.g., each side has unit length. In this formulation, the Legendre-Cauchy Lemma becomes an analog of the $k=1$ case of the Theorem above, with the determinants $a_i=\det (V_{i-1},V_{i+1})$ replaced by the lengths $|V_{i+1}-V_{i-1}|$. This prompts another question. \medskip \noindent {\bf Problem 2.} {\it Given two equilateral convex $n$-gons with vertices $V_i$ and $U_i$, for which $k$ must the cyclic sequence of differences $|V_{i+k}-V_{i-1}| - |U_{i+k}-U_{i-1}|$ have at least four sign changes? } \bigskip {\bf Acknowledgements}. It is a pleasure to acknowledge stimulating discussions with S. Morier-Genoud, V. Ovsienko, I. Pak, and R. Schwartz. Many thanks to M. Cuntz for providing his (counter)-examples. This work was supported by NSF grant DMS-1510055.
\section{Introduction} This paper considers the question of determining the number of covers between genus-$0$ curves with fixed ramification in positive characteristic. More concretely, we consider covers $f:{\mathbb P}^1\to {\mathbb P}^1$ branched at $r$ ordered points $Q_1, \ldots, Q_r$ of fixed {\em ramification type} $(d; C_1, \ldots, C_r)$, where $d$ is the degree of $f$ and $C_i=e_1(i)\text{-}\cdots\text{-}e_{s_i}(i)$ is a conjugacy class in $S_d$. This notation indicates that there are $s_i$ ramification points in the fiber $f^{-1}(Q_i)$, with ramification indices $e_j(i)$. The {\em Hurwitz number} $h(d; C_1, \ldots, C_r)$ is the number of covers of fixed ramification type over ${\mathbb C}$, up to isomorphism. This number does not depend on the position of the branch points. If $p$ is a prime not dividing any of the ramification indices $e_j(i)$, the {\em $p$-Hurwitz number} $h_p(d; C_1, \ldots, C_r)$ is the number of covers of fixed ramification type whose branch points are generic over an algebraically closed field $k$ of characteristic $p$. The genericity hypothesis is necessary because in positive characteristic the number of covers often depends on the position of the branch points. The only general result on $p$-Hurwitz numbers is that they are always less than or equal to the Hurwitz number, with equality when the degree of the Galois closure is prime to $p$. This is because every tame cover in characteristic $p$ can be lifted to characteristic $0$, and in the prime-to-$p$ case, every cover in characteristic $0$ specializes to a cover in characteristic $p$ with the same ramification type (see Corollaire 2.12 of Expos\'e XIII in \cite{sga1}). We say a cover has {\em good reduction} when such a specialization exists. However, in the general case, some covers in characteristic $0$ specialize to inseparable covers in characteristic $p$; these covers are said to have {\em bad reduction}. 
Thus, the difference $h(d;C_1,\dots,C_r)-h_p(d;C_1,\dots,C_r)$ is the number of covers in characteristic $0$ with generic branch points and bad reduction. In \cite{os7} and \cite{os12}, the value $h_p(d; e_1, e_2, e_3)$ is computed for genus $0$ covers and any $e_i$ prime to $p$ using linear series techniques. In this paper, we treat the considerably more difficult case of genus-$0$ covers of type $(p; e_1, e_2, e_3, e_4)$. Our main result is the following. \begin{thm}\label{thm:main} Given $e_1,\dots,e_4$ all less than $p$, with $\sum_i e_i=2p+2$, we have \[ h_p(p; e_1, e_2, e_3, e_4)=h(p; e_1, e_2, e_3, e_4)-p. \] \end{thm} An important auxiliary result is the computation of the $p$-Hurwitz number $h_p(p; e_1\text{-}e_2, e_3, e_4)$. \begin{thm}\label{thm:3-hurwitz} Given odd integers $e_1, e_2, e_3, e_4 < p$, with $e_1 + e_2 \leq p$ and $\sum_i e_i=2p+2$, we have \[ h_p(p; e_1\text{-}e_2, e_3, e_4)= \begin{cases}h(p; e_1\text{-}e_2, e_3, e_4)-(p+1-e_1-e_2): & e_1 \neq e_2, \\ h(p; e_1\text{-}e_2, e_3, e_4)-(p+1-e_1-e_2)/2: & e_1 =e_2. \end{cases} \] \end{thm} Corollary \ref{cor:2cyclebad} gives a more general result including the case that some of the $e_i$ are even, but in some of these cases we compute the $p$-Hurwitz number only up to a factor of $2$. Note that there is an explicit formula for $h(p;e_1,e_2,e_3,e_4)$ and $h(p;e_1\text{-}e_2,e_3,e_4)$; see Theorem \ref{hurwitzlem} and Lemma \ref{lem:badtype} below. Our technique involves the use of ``admissible covers,'' which are certain covers between degenerate curves (see Section \ref{sec:char0}). Admissible covers provide a compactification of the space of covers of smooth curves in characteristic $0$, but in positive characteristic this is not the case, and it is an interesting question when, under a given degeneration of the base, a cover of smooth curves does in fact have an admissible cover as a limit. In this case we say the smooth cover has {\em good degeneration}.
In \cite{bo3} one finds examples of covers with generic branch points without good degeneration. In contrast, our technique for proving Theorem \ref{thm:main} simultaneously shows that many of the examples we consider have good degeneration. \begin{thm}\label{thm:good-degen} Given odd integers $1< e_1 \leq e_2 \leq e_3 \leq e_4<p$ with $\sum_i e_i=2p+2$, every cover of type $(p;e_1,e_2,e_3,e_4)$ with generic branch points $(0,1,\lambda,\infty)$ has good degeneration under the degeneration sending $\lambda$ to $\infty$. \end{thm} As with Theorem \ref{thm:3-hurwitz}, our methods do not give a complete answer in some cases with even $e_i$, but we do prove a more general result in Theorem \ref{thm:main2}. Building on the work of Raynaud \cite{ra3}, Wewers uses the theory of stable reduction in \cite{we1} to give formulas for the number of covers with three branch points and having Galois closure of degree strictly divisible by $p$ which have bad reduction to characteristic $p$. In \cite{b-w6}, some $p$-Hurwitz numbers are calculated using the existence portion of Wewers' theorems, but these are in cases which are rigid (meaning the classical Hurwitz number is $1$) or very close to rigid, so one does not have to carry out calculations with Wewers' formulas. In \cite{se6}, Selander uses the full statement of Wewers' formulas to compute some examples in small degree. Our result in Theorem \ref{thm:3-hurwitz} is the first explicit calculation of an infinite family of $p$-Hurwitz numbers which fully uses Wewers' formulas, and its proof occupies the bulk of the present paper. We begin in Sections \ref{sec:char0} and \ref{sec:group} by reviewing the situation in characteristic $0$ and some group-theoretic background. We then recall the theory of stable reduction in Section \ref{sec:stable}. 
In order to apply Wewers' formulas, in Section \ref{sec:tail} we analyze the possible structures of the stable reductions which arise, and then in Section \ref{sec:3pt} we apply Wewers' formulas to compute the number of smooth covers with a given stable reduction. Here we are forced to use a trick comparing the number of covers in the case of interest to the number in a related case where we know all covers have bad reduction. In Section \ref{sec:adm} we then apply Corollary \ref{cor:2cyclebad} as well as the formulas for $h_p(d;e_1,e_2,e_3)$ of \cite{os7} and the classical Hurwitz number calculations in \cite{o-l2} to estimate the number of admissible covers in characteristic $p$. This provides a sufficient lower bound on $h_p(p;e_1,e_2,e_3,e_4)$. Finally, we use the techniques of \cite{bo4}, again based on stable reduction, to directly prove in Section \ref{sec:4pt} that $h_p(p;e_1,e_2,e_3,e_4)$ is bounded above by $h(p;e_1,e_2,e_3,e_4)-p$. We thus conclude Theorems \ref{thm:main} and \ref{thm:good-degen}. We would like to thank Peter M\"uller, Bj\"orn Selander and Robert Guralnick for helpful discussions. \section{The characteristic-$0$ situation}\label{sec:char0} In this paper, we consider covers $f:{\mathbb P}^1\to {\mathbb P}^1$ branched at $r$ ordered points $Q_1, \ldots, Q_r$ of fixed {\em ramification type} $(d; C_1, \ldots, C_r)$, where $d$ is the degree of $f$ and $C_i=e_1(i)\text{-}\cdots\text{-}e_{s_i}(i)$ is a conjugacy class in $S_d$. This means that there are $s_i$ ramification points in the fiber $f^{-1}(Q_i)$, with ramification indices $e_j(i)$. The {\em Hurwitz number} $h(d; C_1, \ldots, C_r)$ is the number of covers of fixed ramification type over ${\mathbb C}$, up to isomorphism. This number does not depend on the position of the branch points. 
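For very small degree, such counts can be verified by brute force using the monodromy description made precise in the next paragraph: one enumerates tuples of permutations in the prescribed conjugacy classes, with product $1$, generating a transitive subgroup, up to simultaneous conjugation. A naive sketch for the pure-cycle case (an illustration only, with ad hoc helper names; it is infeasible beyond tiny $d$):

```python
from itertools import combinations, permutations, product

def all_cycles(d, e):
    """All e-cycles in S_d, encoded as image tuples (p[i] = image of i)."""
    cycles = []
    for supp in combinations(range(d), e):
        for rest in permutations(supp[1:]):
            cyc = (supp[0],) + rest
            p = list(range(d))
            for i in range(e):
                p[cyc[i]] = cyc[(i + 1) % e]
            cycles.append(tuple(p))
    return cycles

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))  # (p*q)(i) = p(q(i))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def transitive(gens, d):
    orbit, stack = {0}, [0]
    while stack:
        i = stack.pop()
        for g in gens:
            if g[i] not in orbit:
                orbit.add(g[i])
                stack.append(g[i])
    return len(orbit) == d

def hurwitz(d, es):
    """Pure-cycle Hurwitz number h(d; e_1, ..., e_r) by enumeration."""
    sym = [tuple(s) for s in permutations(range(d))]
    last = set(all_cycles(d, es[-1]))
    classes = set()
    for gs in product(*(all_cycles(d, e) for e in es[:-1])):
        prod_so_far = tuple(range(d))
        for g in gs:
            prod_so_far = compose(prod_so_far, g)
        g_r = inverse(prod_so_far)          # forces g_1 * ... * g_r = 1
        if g_r not in last:
            continue
        tup = gs + (g_r,)
        if not transitive(tup, d):
            continue
        # canonical representative under simultaneous conjugation by S_d
        canon = min(tuple(compose(compose(s, g), inverse(s)) for g in tup)
                    for s in sym)
        classes.add(canon)
    return len(classes)

assert hurwitz(4, (3, 3, 3)) == 1           # h(d; e1, e2, e3) = 1
assert hurwitz(5, (3, 3, 3, 3)) == 9        # min_i e_i (d + 1 - e_i)
```

Both assertions agree with the closed formulae recalled below in Theorem \ref{hurwitzlem}.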
Riemann's Existence Theorem implies that the Hurwitz number $h(d; C_1, \ldots, C_r)$ is the cardinality of the set of {\em Hurwitz factorizations} defined as \[ \{(g_1, \cdots, g_r)\in C_1\times \cdots \times C_r\mid \langle g_i\rangle\subset S_d\, {\rm transitive },\, \prod_i g_i=1\}/\sim, \] where $\sim$ denotes uniform conjugacy by $S_d$. The group $\langle g_i \rangle$ is called the {\em monodromy group} of the corresponding cover. For a fixed monodromy group $G$, a variant equivalence relation is given by {\em $G$-Galois covers}, where we work with Galois covers together with a fixed isomorphism of the Galois group to $G$. The group-theoretic interpretation is then that the $g_i$ are in $G$ (with the action on a fiber recovered by considering $G$ as a subgroup of $S_{|G|}$), and the equivalence relation $\sim_G$ is uniform conjugacy by $G$. To contrast with the $G$-Galois case, we sometimes emphasize that we are working up to $S_d$-conjugacy by referring to the corresponding covers as {\em mere covers}. In this paper, we are mainly interested in the {\em pure-cycle} case, where every $C_i$ is the conjugacy class in $S_d$ of a single cycle. In this case, we write $C_i= e_i$, where $e_i$ is the length of the cycle. A cover $f:Y\to {\mathbb P}^1$ over ${\mathbb C}$ of ramification type $(d; e_1, e_2, \cdots, e_r)$ has genus $g(Y)=0$ if and only if $\sum_{i=1}^r e_i=2d-2+r$. Giving closed formulae for Hurwitz numbers may get very complicated, even in characteristic zero. The following result from \cite{o-l2} illustrates that the genus-$0$ pure-cycle case is more tractable than the general case, as one may give closed formulae for the Hurwitz numbers, at least if the number $r$ of branch points is at most $4$. \begin{thm}\label{hurwitzlem} Under the hypothesis $\sum_{i=1}^r e_i=2d-2+r$, we have the following. \begin{itemize} \item[(a)] $h(d; e_1, e_2, e_3)=1$. 
\item[(b)] $h(d; e_1, e_2, e_3, e_4)=\min_i(e_i(d+1-e_i)).$ \item[(c)] Let $f:{\mathbb P}^1_{\mathbb C}\to {\mathbb P}^1_{\mathbb C}$ be a cover of ramification type $(d; e_1, e_2, \ldots, e_r)$ with $r\geq 3$. The Galois group of the Galois closure of $f$ is either $S_d$ or $A_d$ unless $(d; e_1, e_2, \ldots, e_r)=(6; 4,4,5)$ in which case the Galois group is $S_5$ acting transitively on $6$ letters. \end{itemize} \end{thm} These statements are Lemma 2.1, Theorem 4.2, and Theorem 5.3 of \cite{o-l2}. We mention that Boccara (\cite{bo5}) proves a partial generalization of Theorem \ref{hurwitzlem}.(a). He gives a necessary and sufficient condition for $h(d; C_1, C_2, \ell)$ to be nonzero in the case that $C_1, C_2$ are arbitrary conjugacy classes of $S_d$ and only $C_3=\ell$ is assumed to be the conjugacy class of a single cycle. Later in our analysis we will be required to study covers of type $(d;e_1\text{-}e_2, e_3,e_4)$, so we mention a result which is not stated explicitly in \cite{o-l2}, but which follows easily from the arguments therein. We will only use the case that $e_4=d$, but we state the result in general since the argument is the same. \begin{lem}\label{lem:badtype} Given $e_1,e_2,e_3,e_4$ and $d$ with $2d+2=\sum_i e_i$ and $e_1 + e_2 \leq d$, if $e_1 \neq e_2$ we have \[ h(d;e_1\text{-}e_2, e_3, e_4)= (d+1-e_1-e_2)\min(e_1,e_2,d+1-e_3,d+1-e_4), \] and if $e_1=e_2$ we have \[ h(d;e_1\text{-}e_2, e_3, e_4)= \lceil\frac{1}{2}(d+1-e_1-e_2)\min(d+1-e_3,d+1-e_4)\rceil. \] Note that this number is always positive. In particular, when $e_4=d$ we have \[ h(d;e_1\text{-}e_2, e_3, d)= \begin{cases} d+1-e_1-e_2& \text{ if }e_1\neq e_2,\\ (d+2-e_1-e_2)/2&\text{ if }e_1=e_2, d \text{ even},\\ (d+1-e_1-e_2)/2&\text{ if }e_1=e_2, d \text{ odd}. \end{cases} \] \end{lem} \begin{proof} Without loss of generality, we may assume that $e_1\leq e_2$ and $e_3 \leq e_4$. 
Thus, we want to prove that $h(d;e_1\text{-}e_2, e_3, e_4)$ is given by the smaller of $e_1(d+1-e_1-e_2)$ and $(d+1-e_4)(d+1-e_1-e_2)$ when $e_1\neq e_2$, by $((d+1-e_4)(d+1-e_1-e_2)+1)/2$ when $e_1=e_2$ and all of $d,e_3,e_4$ are even, and by $(d+1-e_4)(d+1-e_1-e_2)/2$ otherwise. Even though we do not assume $e_2 \leq e_3$, this formula still follows from the argument of Theorem 4.2.(ii) of \cite{o-l2}. The first observations to make are that since $e_1+e_2 \leq d$, we have $e_3+e_4 \geq d+2$, and it follows that although we may not have $e_2 \leq e_3$, we have $e_1 < e_4$. Moreover, we have $e_1+e_3 \leq d+1$ and $e_2+e_4 \geq d+1$. We are then able to check that the Hurwitz factorizations $(\sigma_1, \sigma_2, \sigma_3, \sigma_4)$ described in case (ii) of {\it loc.\ cit.}\ still give valid Hurwitz factorizations $(g_1,g_2,g_3)$ by setting $g_1=\sigma_1\sigma_2$, just as in Proposition 4.7 of {\it loc.\ cit.} Moreover, just as in Proposition 4.9 of {\it loc.\ cit.} we find that every Hurwitz factorization must be one of the enumerated ones. It remains to consider when two of the described possibilities yield equivalent Hurwitz factorizations. If $e_1\neq e_2$, we can extract $\sigma_1$ and $\sigma_2$ as the disjoint cycles (of distinct orders) in $g_1$, so we easily see that the proof of Proposition 4.8 of {\it loc. cit.} is still valid. Thus the Hurwitz number is simply the number of possibilities enumerated in Theorem 4.2 (ii) of \cite{o-l2}, which is the minimum of $e_1(d+1-e_1-e_2)$ and $(d+1-e_4)(d+1-e_1-e_2)$, as desired. Now suppose $e_1=e_2$. We then check easily that $e_1+e_4 \geq d+1$, so that the number of enumerated possibilities is $(d+1-e_4)(d+1-e_1-e_2)$. Here, we see that we potentially have a given Hurwitz factorization $(g_1,g_2,g_3)$ being simultaneously equivalent to two of the enumerated possibilities, since $\sigma_1$ and $\sigma_2$ can be switched. 
Indeed, the argument of Proposition 4.8 of {\it loc.\ cit.}\ describing how to intrinsically recover the parameters $m,k$ of Theorem 4.2 (ii) of {\it loc.\ cit.}\ lets us compute how $m,k$ change under switching $\sigma_1$ and $\sigma_2$, and we find that the pair $(m,k)$ is sent to $(e_3+2e_4-d-m, e_3+e_4-d-k)$. We thus find that each Hurwitz factorization is equivalent to two distinct enumerated possibilities, with the exception that if $d$ and $e_4$ (and therefore necessarily $e_3$) are even, the Hurwitz factorization corresponding to $(m,k)=((e_3+2e_4-d)/2,(e_3+e_4-d)/2)$ is not equivalent to any other. We therefore conclude the desired statement. \end{proof} We now explain how Theorem 4.2 of \cite{o-l2} can be understood in terms of degenerations. Harris and Mumford \cite{h-m2} developed the theory of {\em admissible covers}, giving a description of the behavior of branched covers under degeneration. Admissible covers in the case we are interested in may be described geometrically as follows: let $X_0$ be the reducible curve consisting of two smooth rational components $X_0^1$ and $X_0^2$ joined at a single node $Q$. We suppose we have points $Q_1,Q_2$ on $X_0^1$, and $Q_3,Q_4$ on $X_0^2$. An {\em admissible cover} of type $(d;C_1,C_2,\ast,C_3,C_4)$ is then a connected, finite flat cover $f_0:Y_0 \to X_0$ which is \'etale away from the preimage of $Q$ and the $Q_i$, and if we denote by $Y_0^1\to X_0^1$ and $Y_0^2\to X_0^2$ the (possibly disconnected) covers of $X_0^1$ and $X_0^2$ induced by $f_0$, we require also that $Y_0^1\to X_0^1$ has ramification type $(d;C_1,C_2,C)$ for $Q_1,Q_2,Q$ and $Y_0^2\to X_0^2$ has ramification type $(d;C,C_3,C_4)$ for $Q,Q_3,Q_4$, for some conjugacy class $C$ in $S_d$, and furthermore that for $P \in f_0^{-1}(Q)$, the ramification index of $f_0$ at $P$ is the same on $Y_0^1$ and $Y_0^2$. In characteristic $p$, we further have to require that ramification above the node is tame. 
We refer to $Y_0^1 \to X_0^1$ and $Y_0^2 \to X_0^2$ as the {\em component covers} determining $f_0$. When we wish to specify the class $C$, we say the admissible cover is of type $(d,C_1,C_2,\ast_C,C_3,C_4)$. The two basic theorems on admissible covers concern degeneration and smoothing. First, in characteristic $0$, or when the monodromy group has order prime to $p$, if a family of smooth covers of type $(d;C_1,C_2,C_3,C_4)$ with branch points $(Q_1,Q_2,Q_3,Q_4)$ is degenerated by sending $Q_3$ to $Q_4$, the limit is an admissible cover of type $(d;C_1,C_2,\ast,C_3,C_4)$. On the other hand, given an admissible cover of type $(d;C_1,C_2,\ast,C_3,C_4)$, irrespective of characteristic there is a deformation to a cover of smooth curves, which then has type $(d;C_1,C_2,C_3,C_4)$. Such a deformation is not unique in general; we call the number of smooth covers arising as smoothings of a given admissible cover (for a fixed smoothing of the base) the {\em multiplicity} of the admissible cover. Suppose we have a family of covers $f:X \to Y$, with smooth generic fiber $f_1:X_1 \to Y_1$, and admissible special fiber $f_0:X_0 \to Y_0$. If we choose local monodromy generators for $\pi_1^{\operatorname{tame}}(Y_1 \smallsetminus \{Q_1,Q_2,Q_3,Q_4\})$ which are compatible with the degeneration to $Y_0$, we then find that if we have a branched cover of $Y_1$ corresponding to a Hurwitz factorization $(g_1,g_2,g_3,g_4)$, the induced admissible cover of $Y_0$ will have monodromy given by $(g_1,g_2,\rho)$ over $Y_0^1$ and $(\rho^{-1},g_3,g_4)$ over $Y_0^2$, where $\rho=g_3 g_4$. The multiplicity of the admissible cover arises because it may be possible to apply different simultaneous conjugations to $(g_1,g_2,\rho)$ and to $(\rho^{-1},g_3,g_4)$ while maintaining the relationship between $\rho$ and $\rho^{-1}$. 
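To make the splitting concrete (an illustrative sketch, with permutations chosen for this purpose rather than taken from the sources): a Hurwitz factorization of type $(5;3,3,3,3)$ degenerates into two component factorizations glued along $\rho=g_3g_4$, which in this example is a single $5$-cycle.

```python
from functools import reduce

def compose(p, q):
    """Apply p first, then q."""
    return tuple(q[i] for i in p)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

identity = tuple(range(5))

# A pure-cycle Hurwitz factorization of type (5;3,3,3,3):
g1 = (1, 2, 0, 3, 4)  # the 3-cycle (0 1 2)
g2 = (0, 1, 3, 4, 2)  # the 3-cycle (2 3 4)
g3 = (2, 1, 4, 3, 0)  # the 3-cycle (0 2 4)
g4 = (3, 0, 2, 1, 4)  # the 3-cycle (0 3 1)
assert reduce(compose, [g1, g2, g3, g4]) == identity

# Degenerating Q_3 to Q_4 splits this into two component factorizations
# glued along rho = g3*g4:
rho = compose(g3, g4)
assert reduce(compose, [g1, g2, rho]) == identity            # over X_0^1
assert reduce(compose, [inverse(rho), g3, g4]) == identity   # over X_0^2
print(rho)  # prints (2, 0, 4, 1, 3): a single 5-cycle, so type *_5
```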
It is well-known that when $\rho$ is a pure-cycle of order $m$, the admissible cover has multiplicity $m$, although we recover this fact independently in our situation as part of the Hurwitz number calculation of \cite{o-l2}. To calculate more generally the multiplicity of an admissible cover of the above type, we define the action of the braid operator $Q_3$ on the set of Hurwitz factorizations as \[ Q_3\cdot (g_1, g_2, g_3, g_4)=(g_1g_2g_1g_2^{-1}g_1^{-1}, g_1 g_2g_1^{-1}, g_3, g_4). \] One easily checks that $Q_3\cdot\bar{g}$ is again a Hurwitz factorization of the same ramification type as $\bar{g}$. The multiplicity of a given admissible cover is the length of the orbit of $Q_3$ acting on the corresponding Hurwitz factorization. In this context, we can give the following sharper statement of Theorem \ref{hurwitzlem} (b), phrased in somewhat different language in \cite{o-l2}. \begin{thm}\label{degenerationlem} Given a genus-$0$ ramification type $(d; e_1, e_2, e_3, e_4)$, with $e_1 \leq e_2 \leq e_3 \leq e_4$ the only possibilities for an admissible cover of type $(d;e_1,e_2,\ast,e_3,e_4)$ are type $(d;e_1,e_2,\ast_{m},e_3,e_4)$ or type $(d;e_1,e_2,\ast_{e_1\text{-}e_2},e_3,e_4)$. \begin{itemize} \item[(a)] Fix $m \geq 1$. There is at most one admissible cover of type $(d;e_1,e_2,\ast_{m},e_3,e_4)$, and if such a cover exists, it has multiplicity $m$. \begin{itemize} \item[(i)] Suppose that $d+1\leq e_2+e_3$. There exists an admissible cover of type $(d;e_1,e_2,\ast_{m},e_3,e_4)$ if and only if \[ e_2-e_1+1\leq m\leq 2d+1-e_3-e_4, \qquad m\equiv e_2-e_1+1 \pmod{2}. \] \item[(ii)] Suppose that $d+1\geq e_2+e_3$. There exists an admissible cover of type $(d;e_1,e_2,\ast_{m},e_3,e_4)$ if and only if \[ e_4-e_3+1\leq m\leq 2d+1-e_3-e_4,\qquad m\equiv e_2-e_1+1 \pmod{2}. \] \end{itemize} \item[(b)] Admissible covers of type $(d;e_1,e_2,\ast_{e_1\text{-}e_2},e_3,e_4)$ have multiplicity $1$. 
The component cover of type $(d;e_1,e_2,e_1\text{-}e_2)$ is uniquely determined, so the admissible cover is determined by its second component cover and the gluing over the node. Moreover, the gluing over the node is unique when $e_1 \neq e_2$. When $e_1=e_2$, there are always two possibilities for gluing except for a single admissible cover in the case that $e_3$, $e_4$, and $d$ are all even. The number of admissible covers of this type is \[ \begin{cases} e_1(d+1-e_1-e_2)& \text{ if }d+1\leq e_2+e_3,\\ (e_3+e_4-d-1)(d+1-e_4)&\text{ if } d+1\geq e_2+e_3. \end{cases} \] \end{itemize} \end{thm} \begin{proof} We briefly explain how this follows from Theorem 4.2 of \cite{o-l2}. As stated above, the possibilities for admissible covers are determined by pairs $(g_1,g_2,\rho)$, $(\rho^{-1},g_3,g_4)$ where $(g_1,g_2,g_3,g_4)$ is a Hurwitz factorization of type $(d;e_1,e_2,e_3,e_4)$. {\it Loc.\ cit.}\ immediately implies that $\rho$ is always either a single cycle of length $m \geq 1$ or the product of two disjoint cycles. For (a), we find from part (i) of {\it loc.\ cit.}\ that the ranges for $m$ (which is $e_3+e_4-2k$ in the notation of {\it loc.\ cit.}) are as asserted, and that for a given $m$, the number of possibilities with $\rho$ an $m$-cycle is precisely $m$, when counted with multiplicity. On the other hand, in this case both component covers are three-point pure-cycle covers, and thus uniquely determined (see Theorem \ref{hurwitzlem} (a)). Thus the admissible cover is unique in this case, with multiplicity $m$. For (b), we see by inspection of the description of part (ii) of {\it loc.\ cit.}\ that $g_1$ is disjoint from $g_2$.
It immediately follows that the braid action is trivial, so the multiplicity is always $1$, and the asserted count of covers follows immediately from the proof of Proposition 4.10 of {\it loc.\ cit.} Moreover, the component cover of type $(d;e_1,e_2,e_1\text{-}e_2)$ is a disjoint union of covers of type $(e_1;e_1,e_1)$ and $(e_2;e_2,e_2)$ (as well as $d-e_1-e_2$ copies of the trivial cover), so it is uniquely determined, as asserted. Furthermore, we see that the second component cover is always a single connected cover of degree $d$, and $g_1,g_2$ are recovered as the disjoint cycles of $\rho^{-1}$, so the gluing is unique when $e_1 \neq e_2$. When $e_1=e_2$, it is possible to swap $g_1$ and $g_2$, so we see that there are two possibilities for gluing. The argument of Lemma \ref{lem:badtype} shows that we do in fact obtain two distinct admissible covers in this way, except for a single cover occurring when $e_3,e_4$ and $d$ are all even. \end{proof} \section{Group theory}\label{sec:group} In several contexts, we will have to calculate monodromy groups other than those treated by Theorem \ref{hurwitzlem} (c). We will also have to pass between counting mere covers and counting $G$-Galois covers. In this section, we give basic group-theoretic results to address these topics. Since we restrict our attention to covers of prime degree, the following proposition will be helpful. \begin{prop}\label{prop:group} Let $p$ be a prime number and $G$ a transitive group on $p$ letters. Suppose that $G$ contains a pure cycle of length $1<e < p-2$. Then $G$ is either $A_p$ or $S_p$. Moreover, if $e=p-2$, and $G$ is neither $A_p$ nor $S_p$, then $p=2^r+1$ for some $r$, and $G$ contains a unique minimal normal subgroup isomorphic to $\operatorname{PSL}_2(2^r)$, and is contained in $\operatorname{P\Gamma L}_2(2^r)\simeq \operatorname{PSL}_2(2^r)\rtimes {\mathbb Z}/r{\mathbb Z}$. If $e=p-1$, and $G$ is not $S_p$, then $G={\mathbb F}_p \rtimes {\mathbb F}_p^*$. 
\end{prop} Note that this does not contradict the exceptional case $d=6$ and $G=S_5$ in Theorem \ref{hurwitzlem} (c), since we assume that the degree $d$ is prime. \begin{proof} Since $p$ is prime, $G$ is necessarily primitive, and a theorem usually attributed to Marggraff (\cite{l-t1}) then states that $G$ is at least $(p-e+1)$-transitive. When $e\leq p-2$, we have that $p-e+1\geq 3$. The $2$-transitive permutation groups have been classified by Cameron (Section 5 of \cite{ca2}). Specifically, $G$ has a unique minimal normal subgroup which is either elementary abelian or one of several possible simple groups. Since $G$ is at least $3$-transitive, one easily checks that the elementary abelian case is not possible: indeed, one checks directly that if a subgroup of a $3$-transitive group inside $S_p$ contains an element of prime order $\ell$, then it is not possible for all its conjugates to commute with one another. Similarly, most possibilities in the simple case cannot be $3$-transitive. If $G$ is not $S_p$ or $A_p$, then $G$ must have a unique minimal normal subgroup $N$ which is isomorphic to a Mathieu group $M_{11}, M_{23}$, or to $N=\operatorname{PSL}_2(2^r)$. We then have that $G$ is a subgroup of $\operatorname{Aut}(N)$. For $N=M_{11},M_{23}$, we have $N=G=\operatorname{Aut}(N)$, and it is easy to check that the Mathieu groups $M_{11}$ and $M_{23}$ do not contain any single cycles of order less than $p$, for example with the computer algebra package GAP. Therefore these cases do not occur. The group $\operatorname{PSL}_2(2^r)$ can only occur if $p=2^r+1$. In this case, we have that $G$ is a subgroup of $\operatorname{Aut}(\operatorname{PSL}_2(2^r))= \operatorname{P\Gamma L}_2(2^r)$ and $G$ is at most $3$-transitive, so we have $e=p-2$, as desired. 
Finally, if $e=p-1$, M\"uller has classified transitive permutation groups containing $(p-1)$-cycles in Theorem 6.2 of \cite{mu3}, and we see that the only possibility in prime degree other than $S_p$ is ${\mathbb F}_p \rtimes {\mathbb F}_p^*$, as asserted. \end{proof} We illustrate the utility of the proposition with: \begin{cor}\label{3pt-monodromy} Fix $e_1,e_2,e_3,e_4$ with $2 \leq e_i \leq p$ for each $i$, and $e_1+e_2 \leq p$. For $p>5$, any genus-$0$ cover of type $(p;e_1\text{-}e_2,e_3,e_4)$ has monodromy group $S_p$ or $A_p$, with the latter case occurring precisely when $e_3$ and $e_4$ are odd, and $e_1+e_2$ is even. For $p=5$, the only exceptional case is type $(5;2\text{-}2,4,4)$, where the monodromy group is ${\mathbb F}_5 \rtimes {\mathbb F}_5^*$. \end{cor} \begin{proof} Without loss of generality, we assume $e_1\leq e_2$ and $e_3 \leq e_4$. Applying Proposition \ref{prop:group}, it is clear that the only possible exception occurs for types with $e_3, e_4 \geq p-2$. We thus have to treat types $(p;3\text{-}3,p-2,p-2)$, $(p;2\text{-}4,p-2,p-2)$, $(p;2\text{-}2,p-2,p)$, $(p;2\text{-}3,p-2,p-1)$, and $(p;2\text{-}2,p-1,p-1)$. The fourth case cannot be exceptional since $G$ contains both a $(p-2)$-cycle and a $(p-1)$-cycle, and the last case also is ruled out for $p>5$ because ${\mathbb F}_p \rtimes {\mathbb F}_p^*$ does not contain a $2$-$2$-cycle. For the first three cases, we must have that $p=2^r+1$ for some $r$ and $G$ is a subgroup of $\Gamma:=\operatorname{P\Gamma L}_2(2^r)$. Since $p=2^r+1$ is a Fermat prime number, we have that $r$ is a power of $2$. Moreover, since $\operatorname{PSL}_2(4)=A_5$ as permutation groups in $S_5$, we may assume $r \geq 4$. Since $r$ is even, any element of order $3$ in $\Gamma \cong \operatorname{PSL}_2(2^r) \rtimes {\mathbb Z}/r{\mathbb Z}$ must lie inside $\operatorname{PSL}_2(2^r)$, and a non-trivial element of this group can fix at most $2$ letters. 
Thus, in order to contain a $3\text{-}3$-cycle, we would have to have $6 \leq p=2^r+1 \leq 8$, which contradicts the hypothesis $r \geq 4$. This rules out the first case. In the second case, if we square the $2$-$4$-cycle we obtain a $2$-$2$-cycle. To complete the argument for both the second and third cases, it is thus enough to check directly that if $r>4$, an element of order $2$ cannot fix precisely $p-4$ letters, ruling out a $2$-$2$-cycle in this case. It remains only to check that $\operatorname{P\Gamma L}_2(16)$ does not contain a $2$-$2$-cycle, which one can do directly with GAP. \end{proof} Because the theory of stable reduction is developed in the $G$-Galois context, it is convenient to be able to pass back and forth between the context of mere covers and of $G$-Galois covers. The following easy result relates the number of mere covers to the number of $G$-Galois covers in the case we are interested in. \begin{lem}\label{Gallem} Let $f:{\mathbb P}^1 \to {\mathbb P}^1$ be a (mere) cover of degree $d$ with monodromy group $G=A_d$ (respectively, $S_d$). Then the number of $G$-Galois structures on the Galois closure of $f$ is exactly $2$ (respectively, $1$). \end{lem} \begin{proof} The case that $G=S_d$ is clear, since conjugacy by $S_d$ is then the same as conjugacy by $G$. Suppose $G=A_d$, and let $X=\{(g_1, \ldots, g_r)\mid \prod_i g_i=1, \langle g_1, \ldots, g_r\rangle=G\}$. Since the centralizer $C_{S_d}(A_d)$ of $A_d$ in $S_d$ is trivial, it follows that $S_d$ acts freely on $X$, so the number of elements in $X_f \subseteq X$ corresponding to $f$ as a mere cover is $|S_d|$. Since the center of $G=A_d$ is trivial, $G$ also acts freely on $X$, and $X_f$ breaks into two equivalence classes of $G$-Galois covers, each of size $|A_d|$. \end{proof} \section{Stable reduction}\label{sec:stable} In this section, we recall some generalities on stable reduction of Galois covers of curves, and prove a few simple lemmas as a prelude to our main calculations.
The main references for this section are \cite{we1} and \cite{bo4}. Since these sources only consider the case of $G$-Galois covers, we restrict to this situation here as well. Lemma \ref{Gallem} implies that we may translate results on good or bad reduction of $G$-Galois covers to results on the stable reduction of the mere covers, so this is no restriction. Let $R$ be a discrete valuation ring with fraction field $K$ of characteristic zero and residue field an algebraically closed field $k$ of characteristic $p>0$. Let $f:V={\mathbb P}^1_K \to X={\mathbb P}^1_K$ be a degree-$p$ cover branched at $r$ points $Q_1=0,Q_2=1,\ldots, Q_r=\infty$ over $K$ with ramification type $(p; C_1, \ldots, C_r).$ For the moment, we do not assume that the $C_i$ are the conjugacy classes of a single cycle. We denote the Galois closure of $f$ by $g:Y\to {\mathbb P}^1$ and let $G$ be its Galois group. Note that $G$ is a transitive subgroup of $S_p$, and thus has order divisible by $p$. Write $H:={\mathop{\rm Gal}}(Y, V)$, a subgroup of index $p$. We suppose that $Q_i\not\equiv Q_j\pmod{p}$, for $i\neq j$, in other words, that $(X; \{Q_i\})$ has good reduction as a marked curve. We assume moreover that $g$ has bad reduction to characteristic $p$, and denote by $\bar{g}:\bar{Y}\to \bar{X}$ its {\em stable reduction}. The stable reduction $\bar{g}$ is defined as follows. After replacing $K$ by a finite extension, there exists a unique stable model ${\mathcal Y}$ of $Y$ as defined in \cite{we1}. We define ${\mathcal X}={\mathcal Y}/G$. The stable reduction $\bar{g}:\bar{Y}:={\mathcal Y}\otimes_R k\to \bar{X}:={\mathcal X}\otimes_R k$ is a finite map between semistable curves in characteristic $p$; we call such maps {\em stable $G$-maps}. We refer to \cite{we1}, Definition 2.14, for a precise definition. 
Roughly speaking, the theory of stable reduction proceeds in two steps: first, one understands the possibilities for stable $G$-maps, and then one counts the number of characteristic-$0$ covers reducing to each stable $G$-map. We begin by describing what the stable reduction must look like. Since $(X; Q_i)$ has good reduction to characteristic $p$, there exists a model ${\mathcal X}_0\to \operatorname{Spec}(R)$ such that the $Q_i$ extend to disjoint sections. There is a unique irreducible component $\bar{X}_0$ of $\bar{X}$, called the {\em original component}, on which the natural contraction map $\bar{X}\to {{\mathcal X}}_0\otimes_Rk$ is an isomorphism. The restriction of $\bar{g}$ to $\bar{X}_0$ is inseparable. Let ${\mathbb B} \subseteq \{1,2,\ldots,r\}$ consist of those indices $i$ such that $C_i$ is not the conjugacy class of a $p$-cycle. For $i \in {\mathbb B}$, we have that $Q_i$ specializes to an irreducible component $\bar{X}_i\neq \bar{X}_0$ of $\bar{X}$. The restriction of $\bar{g}$ to $\bar{X}_i$ is separable, and $\bar{X}_i$ intersects the rest of $\bar{X}$ in a single point $\xi_i$. Let $\bar{Y}_i$ be an irreducible component of $\bar{Y}$ above $\bar{X}_i$, and write $\bar{g}_i:\bar{Y}_i\to \bar{X}_i$ for the restriction of $\bar{g}$ to $\bar{Y}_i$. We denote by $G_i$ the decomposition group of $\bar{Y}_i$. We call the components $\bar{X}_i$ (resp.\ the covers $\bar{g}_i$) for $i\in{\mathbb B}$ the {\em primitive tails} (resp.\ the {\em primitive tail covers}) associated with the stable reduction. The following definition gives a characterization of those covers that can arise as primitive tail covers (compare to \cite{we1}, Section 2.2). \begin{defn}\label{def:tail} Let $k$ be an algebraically closed field of characteristic $p>0$. Let $C$ be a conjugacy class of $S_p$ which is not the class of a $p$-cycle. 
A {\em primitive tail cover} of ramification type $C$ is a $G$-Galois cover $\varphi_C:T_C\to {\mathbb P}^1_k$ defined over $k$ which is branched at exactly two points $0, \infty$, satisfying the following conditions. \begin{itemize} \item[(a)] The Galois group $G_C$ of $\varphi_C$ is a subgroup of $S_p$ and contains a subgroup $H$ of index $p$ such that $\bar{T}_C:=T_C/H$ has genus $0$. \item[(b)] The induced map $\bar{\varphi}_C:\bar{T}_C\to {\mathbb P}^1$ is tamely branched at $x=0$, with conjugacy class $C$, and wildly branched at $x=\infty$. \end{itemize} If $\varphi$ is a tail cover, we let $h=h(\varphi)$ be the conductor and $pm=pm(\varphi)$ the ramification index of a wild ramification point. We say that two primitive tail covers $\varphi_i:T_i\to {\mathbb P}^1_k$ are {\em isomorphic} if there exists a $G$-equivariant isomorphism $\iota:T_1\to T_2$. Note that we do not require an isomorphism to send $\bar{T}_1$ to $\bar{T}_2$. \end{defn} Note that an isomorphism $\iota$ of primitive tail covers may be completed into a commuting square \[ \begin{CD} {T}_1 @>{\iota}>> {T}_2\\ @V{{\varphi}_1}VV @VV{{\varphi}_2}V\\ {\mathbb P}^1 @>>>{\mathbb P}^1. \end{CD} \] Note also that the number of primitive tail covers of fixed ramification type is finite. Since $p$ strictly divides the order of the Galois group $G_C$, we conclude that $m$ is prime to $p$. The invariants $h, m$ describe the wild ramification of the tail cover $\varphi_C$. The integers $h$ and $m$ only depend on the conjugacy class $C$. In Section \ref{sec:tail}, we will show this if $C$ is the class of a single cycle or the product of $2$ disjoint cycles, but this holds more generally. In the more general set-up of \cite{we1}, Definition 2.9 it is required that $\sigma:=h/m<1$ as part of the definition of primitive tail cover. We will see that in our situation this follows from (a). Moreover, we will show that $\gcd(h, m)=1$ (Lemma \ref{lem:tail1}). 
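These constraints on $(h,m)$ can be spot-checked numerically against the explicit formulas established in Lemma \ref{lem:tail1} below, namely $m=(p-1)/\gcd(p-1,e-1)$ and $h=(p-e)/\gcd(p-1,e-1)$ in the single-cycle case, and the analogous expressions with $e-1$, $p-e$ replaced by $e_1+e_2-2$, $p+1-e_1-e_2$ in the two-cycle case (an illustrative check, not a proof):

```python
from math import gcd

def invariants_single(p, e):
    # formulas of Lemma lem:tail1 (a)/(b): 2 <= e <= p-1
    g = gcd(p - 1, e - 1)
    return (p - e) // g, (p - 1) // g   # (h, m)

def invariants_double(p, e1, e2):
    # formulas of Lemma lem:tail1 (c): 2 <= e1 <= e2, e1 + e2 <= p
    g = gcd(p - 1, e1 + e2 - 2)
    return (p + 1 - e1 - e2) // g, (p - 1) // g   # (h, m)

def check(h, m, p):
    assert (p - 1) % m == 0        # m divides p - 1
    assert 1 <= h < m              # hence sigma = h/m < 1
    assert gcd(h, m) == 1

for p in [5, 7, 11, 13, 17, 19, 23]:
    for e in range(2, p):                       # e = p itself is excluded
        check(*invariants_single(p, e), p)
    for e1 in range(2, p):
        for e2 in range(e1, p - e1 + 1):        # e1 <= e2, e1 + e2 <= p
            check(*invariants_double(p, e1, e2), p)
print("all (h, m) satisfy m | p-1, 1 <= h < m, gcd(h, m) = 1")
```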
Summarizing, we find that $(h,m)$ satisfy: \begin{equation}\label{eq:hm} m\mid (p-1), \qquad 1\leq h < m, \qquad \gcd(h, m)=1. \end{equation} In the more general set-up of \cite{we1} there also exist so-called new tails, which satisfy $\sigma>1$. The following lemma implies that these do not occur in our situation. \begin{lem}\label{lem:stablered} The curve $\bar{X}$ consists of at most $r+1$ irreducible components: the original component $\bar{X}_0$ and primitive tails $\bar{X}_i$ for all $i \in {\mathbb B}$. \end{lem} \begin{proof} In the case that $r=3$ this is proved in \cite{we1}, Section 4.4, using that the cover is the Galois closure of a genus-$0$ cover of degree $p$. The general case is a straightforward generalization. \end{proof} It remains to discuss the restriction of $\bar{g}$ to the original component $\bar{X}_0$. As mentioned above, this restriction is inseparable, and it is described by a so-called deformation datum (\cite{we1}, Section 1.3). In order to describe deformation data, we set some notation. Let $\bar{Q}_i$ be the limit on $\bar{X}_0$ of the $Q_i$ for $i \not \in {\mathbb B}$, and set $\bar{Q}_i=\xi_i$ for $i \in {\mathbb B}$. \begin{defn}\label{def:dd} Let $k$ be an algebraically closed field of characteristic $p$. A {\em deformation datum} is a pair $(Z, \omega)$, where $Z$ is a smooth projective curve over $k$ together with a finite Galois cover $g:Z\to X={\mathbb P}^1_k$, and $\omega$ is a meromorphic differential form on $Z$ such that the following conditions hold. \begin{itemize} \item[(a)] Let $H$ be the Galois group of $Z\to X$. Then \[\beta^\ast \omega=\chi(\beta)\cdot \omega, \qquad \mbox{for all } \beta\in H. \] Here $\chi:H\to {\mathbb F}_p^\times$ is an injective character. \item[(b)] The differential form $\omega$ is either logarithmic, i.e.\ of the form $\omega={\rm d} f/f$, or exact, i.e.\ of the form ${\rm d} f$, for some meromorphic function $f$ on $Z$.
\end{itemize} \end{defn} Note that the cover $Z \to X$ is necessarily cyclic. To a $G$-Galois cover $g:Y\to {\mathbb P}^1$ with bad reduction, we may associate a deformation datum, as follows. Choose an irreducible component $\bar{Y}_0$ of $\bar{Y}$ above the original component $\bar{X}_0$. Since the restriction $\bar{g}_0:\bar{Y}_0\to\bar{X}_0$ is inseparable and $G\subset S_p$, it follows that the inertia group $I$ of $\bar{Y}_0$ is cyclic of order $p$, i.e.\ a Sylow $p$-subgroup of $G$. Since the inertia group is normal in the decomposition group, the decomposition group $G_0$ of $\bar{Y}_0$ is a subgroup of $N_{S_p}(I)\simeq {\mathbb Z}/p{\mathbb Z}\rtimes_\chi{\mathbb Z}/p{\mathbb Z}^\ast$, where $\chi:{\mathbb Z}/p{\mathbb Z}^\ast\to {\mathbb Z}/p{\mathbb Z}^\ast$ is an injective character. It follows that the map $\bar{g}_0$ factors as $\bar{g}_0:\bar{Y}_0\to \bar{Z}_0\to \bar{X}_0$, where $\bar{Y}_0\to \bar{Z}_0$ is inseparable of degree $p$ and $\bar{Z}_0\to\bar{X}_0$ is separable. We conclude that the Galois group $H_0:=\operatorname{Gal}(\bar{Z}_0, \bar{X}_0)$ is a subgroup of ${\mathbb Z}/p{\mathbb Z}^\ast\simeq {\mathbb Z}/(p-1){\mathbb Z}$. In particular, it follows that \begin{equation}\label{eq:G0} G_0\simeq I\rtimes_\chi H_0. \end{equation} The inseparable map $\bar{Y}_0\to \bar{Z}_0$ is characterized by a differential form $\omega$ on $\bar{Z}_0$ satisfying the properties of Definition \ref{def:dd}, see \cite{we1}, Section 1.3.2. The differential form $\omega$ is logarithmic if $\bar{Y}_0\to \bar{Z}_0$ is a ${\boldsymbol \mu}_p$-torsor and exact if this map is an ${\boldsymbol \alpha}_p$-torsor. A differential form is logarithmic if and only if it is fixed by the Cartier operator ${\mathcal C}$ and exact if and only if it is killed by ${\mathcal C}$. (See for example \cite{g-s2}, exercise 9.6, for the definition of the Cartier operator and an outline of these properties.) 
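For intuition, on polynomial differential forms $f(x)\,{\rm d}x$ over ${\mathbb F}_p$ the Cartier operator acts explicitly by ${\mathcal C}\bigl(\sum_i a_i x^i\,{\rm d}x\bigr)=\sum_{p\mid i+1} a_i^{1/p}x^{(i+1)/p-1}\,{\rm d}x$, with the $p$-th root being the identity on the prime field. The following sketch (illustrative only) verifies the properties just quoted: exact forms are killed, the form $x^{p-1}\,{\rm d}x=x^p\,\frac{{\rm d}x}{x}$ is sent to ${\rm d}x$, and ${\mathcal C}$ is $p$-semilinear, ${\mathcal C}(g^p\omega)=g\,{\mathcal C}(\omega)$.

```python
p = 7  # any odd prime works here

def cartier(coeffs):
    """Cartier operator on f(x) dx over F_p, f given as a coefficient list.
    C(sum a_i x^i dx) = sum_{p | i+1} a_i^{1/p} x^{(i+1)/p - 1} dx;
    over the prime field F_p the p-th root is the identity map."""
    out = {}
    for i, a in enumerate(coeffs):
        if (i + 1) % p == 0:
            out[(i + 1) // p - 1] = a % p
    deg = max(out, default=-1)
    return [out.get(j, 0) for j in range(deg + 1)]

def derivative(coeffs):
    return [(i * a) % p for i, a in enumerate(coeffs)][1:]

def multiply(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

# exact forms d(f) are killed by C:
f = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]           # arbitrary f(x)
assert all(c == 0 for c in cartier(derivative(f)))

# x^{p-1} dx (= x^p dx/x, with dx/x logarithmic) maps to dx:
assert cartier([0] * (p - 1) + [1]) == [1]

# p-semilinearity: C(g^p * omega) = g * C(omega):
g = [2, 0, 1]                                    # g(x) = x^2 + 2
gp = [0] * (2 * p + 1)
for i, a in enumerate(g):
    gp[p * i] = a                                # g(x)^p = g(x^p) over F_p
omega = [0] * (p - 1) + [1]                      # omega = x^{p-1} dx
assert cartier(multiply(gp, omega)) == multiply(g, cartier(omega))
```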
Wewers (\cite{we3}, Lemma 2.8) shows that in the case of covers branched at $r=3$ points the differential form is always logarithmic. The deformation datum $(Z, \omega)$ associated to $g$ satisfies the following compatibilities with the tail covers. We refer to \cite{we1}, Proposition 1.8 and (2) for proofs of these statements. For $i\in {\mathbb B}$, we let $h_i$ (resp.\ $pm_i$) be the conductor (resp. ramification index) of a wild ramification point of the corresponding tail cover of type $C_i$, as defined above. We put $\sigma_i=h_i/m_i$. We also use the convention $\sigma_i=0$ for $i \not \in {\mathbb B}$. \begin{itemize} \item[(a)] If $C_i$ is the conjugacy class of a $p$-cycle then $\bar{Q}_i$ is unbranched in $\bar{Z}_0\to \bar{X}_0$ and $\omega$ has a simple pole at all points of $\bar{Z}_0$ above $\bar{Q}_i$. \item[(b)] Otherwise, $\bar{Z}_0\to \bar{X}_0$ is branched of order $m_i$ at $\bar{Q}_i$, and $\omega$ has a zero of order $h_i-1$ at the points of $\bar{Z}_0$ above $\bar{Q}_i$. \item[(c)] The map $\bar{Z}_0\to \bar{X}_0$ is unbranched outside $\{\bar{Q}_i\}$. All poles and zeros of $\omega$ are above the $\bar{Q}_i$. \item[(d)] The invariants $\sigma_i$ satisfy $\sum_{i\in {\mathbb B}} \sigma_i=r-2$. \end{itemize} The set $(\sigma_i)$ is called the {\em signature} of the deformation datum $(Z, \omega)$. \begin{prop}\label{prop:dd} Suppose that $r=3, 4$. We fix rational numbers $(\sigma_1, \ldots, \sigma_r)$ with $\sigma_i\in \frac{1}{p-1}{\mathbb Z}$ and $0\leq\sigma_i< 1$, and $\sum_{i=1}^r \sigma_i=r-2$. We fix points $\bar{Q}_1=0, \bar{Q}_2=1, \ldots, \bar{Q}_r=\infty$ on $\bar{X}_0\simeq {\mathbb P}^1_k$. Then there exists a deformation datum of signature $(\sigma_i)$, unique up to scaling. If further the $\bar{Q}_i$ are general, the deformation datum is logarithmic and unique up to isomorphism. \end{prop} \begin{proof} In the case that $r=3$ this is proved in \cite{we1}. 
(The proof in this case is similar to the proof in the case that $r=4$, which we give below.) Suppose that $r=4$. Let ${\mathbb B}=\{1\leq i\leq r\mid \sigma_i\neq 0\}$. We write $\bar{Q}_3=\lambda\in {\mathbb P}^1_k\setminus\{0,1,\infty\}$ and $\sigma_i=a_i/(p-1)$. (If $\omega$ is the deformation datum associated with $\bar{g}$, then $a_i=h_i (p-1)/m_i.$) It is shown in \cite{bo4}, Chapter 3, that a deformation datum of signature $(\sigma_i)$ consists of a differential form $\omega$ on the cover $\bar{Z}_0$ of $\bar{X}_0$ defined as a connected component of the (normalization of the) projective curve with Kummer equation \begin{equation}\label{eq:Kummer} z^{p-1}=x^{a_1}(x-1)^{a_2}(x-\lambda)^{a_3}. \end{equation} The degree of $\bar{Z}_0\to \bar{X}_0$ is \[ m:=\frac{p-1}{\gcd(p-1, a_1, a_2, a_3, a_4)}. \] The differential form $\omega$ may be written as \begin{equation}\label{eq:omega} \omega=\epsilon\frac{z\,{\rm d} x}{x(x-1)(x-\lambda)}= \epsilon \frac{x^{p-a_1}(x-1)^{p-1-a_2}(x-\lambda)^{p-1-a_3}z^p} {x^p(x-1)^p(x-\lambda)^p}\frac{{\rm d}x}{x}, \end{equation} where $\epsilon\in k^\times$ is a unit. To show the existence of the deformation datum, it suffices to show that one may choose $\epsilon$ such that $\omega$ is logarithmic or exact, or, equivalently, such that $\omega$ is fixed or killed by the Cartier operator ${\mathcal C}$. It follows from standard properties of the Cartier operator, (\ref{eq:omega}), and the assumption that $a_1+a_2+a_3+a_4=2(p-1)$ that ${\mathcal C}\omega =c^{1/p}\epsilon^{(1-p)/p}\omega$, where \begin{equation}\label{eq:Hasseinv} c=\sum_{j=\max(0, p-1-a_2-a_4)}^{\min(a_4, p-1-a_3)} {p-1-a_2\choose a_4-j}{p-1-a_3\choose j}\lambda^j. \end{equation} Note that $c$ is the coefficient of $x^p$ in $x^{p-a_1}(x-1)^{p-1-a_2}(x-\lambda)^{p-1-a_3}$. One easily checks that $c$ is nonzero as a polynomial in $\lambda$. It follows that $\omega$ defines an exact differential form if and only if $\lambda$ is a zero of the polynomial $c$.
This does not happen if $\{0,1,\lambda, \infty\}$ is general. We assume that $c(\lambda)\neq 0$. Since $k$ is algebraically closed, we may choose $\epsilon\in k^\times$ such that $\epsilon^{p-1}=c$. Then ${\mathcal C}\omega=\omega$, and $\omega$ defines a logarithmic deformation datum. It is easy to see that $\omega$ is unique, up to multiplication by an element of ${\mathbb F}_p^\times$. \end{proof} \section{The tail covers}\label{sec:tail} In Section \ref{sec:stable}, we have seen that associated with a Galois cover with bad reduction is a set of (primitive) tail covers. In this section, we analyze the possible tail covers for conjugacy classes $e\neq p$ and $e_1$-$e_2$ of $S_p$. Recall from Section \ref{sec:char0} that these are conjugacy classes which occur in the $3$-point covers obtained as degeneration of the pure-cycle $4$-point covers. The following lemma shows the existence of primitive tail covers for the conjugacy classes occurring in the degeneration of single-cycle $4$-point covers (Theorem \ref{degenerationlem}). \begin{lem}\label{lem:tail1} \begin{itemize} \item[(a)] Let $2\leq e< p-1$ be an integer. There exists a primitive tail cover $\varphi_e:T_e\to {\mathbb P}^1_k$ of ramification type $e$. Its Galois group is $A_p$ if $e$ is odd and $S_p$ if $e$ is even. The wild branch point of $\varphi_e$ has inertia group of order $p(p-1)/\gcd(p-1,e-1) =:pm_{e}$. The conductor is $h_e:=(p-e)/\gcd(p-1, e-1).$ \item[(b)] In the case that $e=p-1$, there exists a primitive tail cover $\varphi_e$ of ramification type $e$, with Galois group ${\mathbb F}_p\rtimes {\mathbb F}_p^\ast$. The cover is totally branched above the wild branch point and has conductor $h_{p-1}=1$. \item[(c)] Let $2\leq e_1\leq e_2\leq p-1$ be integers with $e_1+e_2\leq p$. There is a primitive tail cover $\varphi_{e_1,e_2}:T_{e_1,e_2}\to {\mathbb P}^1_k$ of ramification type $e_1$-$e_2$. 
The wild branch point of $\varphi_{e_1,e_2}$ has inertia group of order $p(p-1)/\gcd(p-1,e_1+e_2-2)=:pm_{e_1, e_2} $. The conductor is $h_{e_1,e_2}:=(p+1-e_1-e_2)/\gcd(p-1, e_1+e_2-2).$ \end{itemize} In all three cases, the tail cover is unique with the given ramification when considered as a mere cover. \end{lem} \begin{proof} Let $2\leq e\leq p-1$ be an integer. We define the primitive tail cover $\varphi_e$ as the Galois closure of the degree-$p$ cover $\bar{\varphi}_e:\bar{T}_e:={\mathbb P}^1\to {\mathbb P}^1$ given by \begin{equation}\label{eq:tail1} y^p+y^e=x, \qquad (x, y)\mapsto x. \end{equation} One easily checks that this is the unique degree-$p$ cover between projective lines with one wild branch point and the required tame ramification. The decomposition group $G_e$ of $T_e$ is contained in $S_p$. We note that the normalizer in $S_p$ of a Sylow $p$-subgroup has trivial center. Therefore the inertia group $I$ of a wild ramification point of $\varphi_e$ is contained in ${\mathbb F}_p\rtimes_\chi {\mathbb F}_p^\ast$, where $\chi:{\mathbb F}_p^\ast\to {\mathbb F}_p^\ast$ is an injective character. Therefore it follows from \cite{b-w5}, Proposition 2.2.(i) that $\gcd(h_e, m_e)=1$. The statement on the wild ramification follows now directly from the Riemann--Hurwitz formula (as in \cite{we1}, Lemma 4.10). Suppose that $e$ is odd. Then $m_e=(p-1)/\gcd(p-1, e-1)$ divides $(p-1)/2$. Therefore in this case both the tame and the wild ramification groups are contained in $A_p$. This implies that the Galois group $G_e$ of $\varphi_e$ is a subgroup of $A_p$. To prove (a), we suppose that $e\neq p-1$. We show that the Galois group $G_e$ of $\varphi_e$ is $A_p$ or $S_p$. Suppose that this is not the case. Proposition \ref{prop:group} implies that $e=p-2=2^r-1$. Moreover, $G_e$ is a subgroup of $\operatorname{P\Gamma L}_2(2^r)\simeq \operatorname{PSL}_2(2^r)\rtimes {\mathbb Z}/r{\mathbb Z}$. 
The normalizer in $\operatorname{P\Gamma L}_2(2^r)$ of a Sylow $p$-subgroup $I$ is ${\mathbb Z}/p{\mathbb Z}\rtimes {\mathbb Z}/2r{\mathbb Z}$. The computation of the wild ramification shows that the inertia group $I(\eta)$ of the wild ramification point $\eta$ is isomorphic to ${\mathbb Z}/p{\mathbb Z}\rtimes {\mathbb Z}/\frac{p-1}{2}{\mathbb Z}$. Therefore $\operatorname{P\Gamma L}_2(2^r)$ contains a subgroup isomorphic to $I(\eta)$ if and only if $p=17=2^4+1$, in which case $I(\eta)=N_{\operatorname{P\Gamma L}_2(2^r)}(I)$. We conclude that if $G_e\not\simeq S_p, A_p$ then $e=15$ and $p=17$. However, in this last case one may check using Magma that a suitable specialization of (\ref{eq:tail1}) has Galois group $A_{17}$. As before, we conclude that $G_e\simeq A_{17}$. Now suppose that $e=p-1$. It is easy to see that the Galois closure of $\bar{\varphi}_{p-1}$ is in this case the cover $\varphi_{p-1}:{\mathbb P}^1\to {\mathbb P}^1$ obtained by dividing out ${\mathbb F}_p\rtimes {\mathbb F}_p^\ast\subset \operatorname{PGL}_2(p) = \operatorname{Aut}({\mathbb P}^1)$. This proves (b). Let $e_1, e_2$ be as in the statement of (c). As in the proof of (a), we define $\varphi_{e_1, e_2}$ as the Galois closure of a non-Galois cover $\bar{\varphi}_{e_1, e_2}:\bar{T}_{e_1, e_2}\to {\mathbb P}^1$ of degree $p$. The cover $\bar{\varphi}_{e_1, e_2}$, if it exists, is given by an equation \begin{equation}\label{eq:tail2} F(y):=y^{e_1}(y-1)^{e_2}\tilde{F}(y)=x,\qquad (x, y)\mapsto x, \end{equation} where $\tilde{F}(y)=\sum_{i=0}^{p-e_1-e_2} c_i y^i$ has degree $p-e_1-e_2$. We may assume that $c_{p-e_1-e_2}=1$. The condition that $\bar{\varphi}_{e_1, e_2}$ has exactly three ramification points $y=0,1,\infty$ yields the condition $F'(y)=\gamma y^{e_1-1}(y-1)^{e_2-1}$ for some constant $\gamma$. Therefore the coefficients of $\tilde{F}$ satisfy the recursion \begin{equation}\label{eq:rec} c_i=c_{i-1}\frac{e_1+e_2+i-1}{e_1+i}, \qquad i=1, \ldots, p-e_1-e_2.
\end{equation} This implies that the $c_i$ are uniquely determined by $c_{p-e_1-e_2}=1$. Conversely, it follows that the polynomial $F$ defined by these $c_i$ has the required tame ramification. The statement on the wild ramification follows from the Riemann--Hurwitz formula, as in the proof of (a). \end{proof} It remains to analyze the number of tail covers, and their automorphism groups. Due to the nature of our argument, we will only need to carry out this analysis for the tails of ramification type $e$. From Lemma \ref{lem:tail1}, it already follows that the map $\varphi_C:T_C\to {\mathbb P}^1$ is unique. However, part of the datum of a tail cover is an isomorphism $\alpha:~\operatorname{Gal}(T_C, {\mathbb P}^1)\stackrel{\sim}{\to} G_C$. For every $\tau\in \operatorname{Aut}(G_C)$, the tuple $(\varphi_C, \tau\circ \alpha)$ also defines a tail cover, which is potentially non-equivalent. Modification by $\tau$ leaves the cover unchanged as a $G_C$-Galois cover if and only if $\tau \in \operatorname{Inn}(G_C)$. However, the weaker notion of equivalence for tail covers implies that $\tau$ leaves the cover unchanged as a tail cover if and only if $\tau$ can be described as conjugation by an element of $N_{\operatorname{Aut}(T_C)}(G_C)$. Thus, the number of distinct tail covers corresponding to a given mere cover is the order of the cokernel of the map $$ N_{\operatorname{Aut}(T_C)}(G_C) \to \operatorname{Aut}(G_C) $$ given by conjugation. Denote by $\operatorname{Aut}_{G_C}(\varphi_C)$ the kernel of this map, or equivalently the group of $G_C$-equivariant automorphisms of $T_C$. It follows finally that the number of tail covers corresponding to $\varphi_C$ is \begin{equation}\label{eq:number-tails} \frac{|\operatorname{Aut}(G_C)||\operatorname{Aut}_{G_C}(\varphi_C)|}{|N_{\operatorname{Aut}(T_C)}(G_C)|}.
\end{equation} Finally, denote by $\operatorname{Aut}_{G_C}^0(\varphi_C)\subset\operatorname{Aut}_{G_C}(\varphi_C)$ the subgroup of automorphisms which fix the chosen ramification point $\eta$. We now simultaneously compute these automorphism groups and show that in the single-cycle case, we have a unique tail cover. \begin{lem}\label{lem:tail2} Let $2\leq e\leq p-1$ be an integer. \begin{itemize} \item[(a)] The group $\operatorname{Aut}_{G_e}(\varphi_e)$ (resp.\ $\operatorname{Aut}_{G_e}^0(\varphi_e)$) is cyclic of order $(p-e)/2$ (resp.\ $h_e$) if $e$ is odd, and cyclic of order $p-e$ (resp.\ $h_e$) if $e$ is even. \item[(b)] There is a unique primitive tail cover of type $e$. \end{itemize} \end{lem} \begin{proof} First note that the definition of $\operatorname{Aut}_{G_e}(\varphi_e)$ implies that any element induces an automorphism of any intermediate cover of $\varphi_e$, and in particular induces automorphisms of $\bar{T}_e$ and ${\mathbb P}^1$. Choose a primitive $(p-e)$th root of unity $\zeta\in \bar{{\mathbb F}}_p$. Then $\mu(x, y)=(\zeta^ex, \zeta y)$ is an automorphism of $\bar{T}_e$. One easily checks that $\mu$ generates the group of automorphisms of $\bar{T}_e$ which induce automorphisms of ${\mathbb P}^1$ under $\varphi_e$, and that furthermore $T_e$ is Galois over ${\mathbb P}^1/\langle\mu\rangle$, so in particular every element of $\langle\mu\rangle$ lifts to an automorphism of $T_e$. Taking the quotient by the action of $\mu$, we obtain a diagram \begin{equation}\label{tail2eq} \begin{CD} \bar{T}_e @>>> \bar{T}'_e=\bar{T}_e/\langle\mu\rangle\\ @V{\bar{\varphi}_e}VV @VV{\bar{\psi}_e}V\\ {\mathbb P}^1 @>>>{\mathbb P}^1/\langle\mu\rangle. \end{CD} \end{equation} Since we know the ramification of the other three maps, one easily computes that the tame ramification of $\bar{\psi}_e$ is $e$-$(p-e)$. Let $\psi_e: T_e'\to {\mathbb P}^1$ be the Galois closure of $\bar{\psi}_e$. We now specialize to the case that $e$ is odd.
Since $G_e=A_p$ does not contain an element of cycle type $e$-$(p-e)$, it follows that the Galois group $G'$ of $\psi_e$ is $S_p$. Therefore it follows by degree considerations that the cover $T_e \to T_e'$ is cyclic of degree $(p-e)/2$. Denote by $Q$ the Galois group of the cover $T_e\to {\mathbb P}^1/\langle\mu\rangle$. This is a group of order $p!(p-e)/2$, which contains normal subgroups isomorphic to $A_p$ and ${\mathbb Z}/\frac{p-e}{2}{\mathbb Z}$, respectively. It follows that $Q = {\mathbb Z}/\frac{p-e}{2}{\mathbb Z} \rtimes S_p$. Note that $\operatorname{Aut}_{G_e}(\varphi_e)$ is necessarily a subgroup of $Q$. In fact, it is precisely the subgroup of $Q$ which commutes with every element of $A_p \subseteq Q$. One easily checks that the semidirect product cannot be split, and that $\operatorname{Aut}_{G_e}(\varphi_e)$ is precisely the normal subgroup ${\mathbb Z}/\frac{p-e}{2}{\mathbb Z}$, that is the Galois group of $T_e$ over $T'_e$. To compute $\operatorname{Aut}^0_{G_e}(\varphi_e)$ we need to compute the order of the inertia group of a wild ramification point of $T_e$ in the map $T_e\to T'_e$. Since a wild ramification point of $T_e'$ has inertia group of order $p(p-1)=pm_e\gcd(p-1, e-1)$, we know the orders of the inertia groups of three of the four maps, and conclude that $\operatorname{Aut}^0_{G_e}(\varphi_e)$ has order $h_e=(p-e)/\gcd(p-1, e-1)$. This proves (a) in the case $e$ is odd. For (b), we simply observe that since $Q \subset N_{\operatorname{Aut}(T_e)}(G_e)$, we have $$\frac{|\operatorname{Aut}(G_e)||\operatorname{Aut}_{G_e}(\varphi_e)|}{|N_{\operatorname{Aut}(T_e)}(G_e)|} \leq \frac{p!\frac{p-e}{2}}{|Q|}=1,$$ so the tail cover is unique, as desired. We now treat the case that $e$ is even. For (a), if $e<p-1$, the Galois group of $\bar{\psi}_e$ is equal to the Galois group of $\bar{\varphi}_e$, which is isomorphic to $S_p$. 
We conclude that the degree of $T_e\to T'_e$ is $p-e$ in this case, and the group $Q$ defined as above is a direct product ${\mathbb Z}/(p-e){\mathbb Z} \times S_p$. Similarly to the case that $e$ is odd, we conclude that $\operatorname{Aut}_{G_e}(\varphi_e)$ (resp.\ $\operatorname{Aut}_{G_e}^0(\varphi_e)$) is cyclic of order $p-e$ (resp.\ $h_e$) in this case, as desired. On the other hand, if $e=p-1$, we have that $p-e=1$, hence $\mu$ is trivial, and we again conclude that (a) holds. Finally, (b) is trivial: if $e<p-1$, the Galois group of $\varphi_e$ is $S_p$ and $\operatorname{Aut}(S_p)=S_p$. Therefore there is a unique tail cover in this case. The same conclusion holds in the case that $e=p-1$, since $G_{p-1}\simeq {\mathbb F}_p\rtimes_\chi{\mathbb F}_p^\ast$ and $\operatorname{Aut}(G_{p-1})=G_{p-1}$. The statement of the lemma follows. \end{proof} \begin{rem} In the case of $e_1$-$e_2$ tail covers, there may in fact be more than one structure on a given mere cover. However, we will not need to know this number for our argument. \end{rem} \section{Reduction of $3$-point covers}\label{sec:3pt} In this section, we (almost) compute the number of $3$-point covers with bad reduction for ramification types $(p;e_1\text{-}e_2,e_3,e_4)$. More precisely, we compute this number in the case that not both $e_3$ and $e_1+e_2$ are even. In the remaining case, we only compute this number up to a factor $2$, which is good enough for our purposes. Although we restrict to types of the above form, our strategy applies somewhat more generally. The results of this section rely on the results of Wewers \cite{we1}, who gives a precise formula for the number of lifts of a given special $G$-map (Section \ref{sec:stable}) in the $3$-point case. We fix a type $\tau=(p;e_1\text{-}e_2,e_3,e_4)$ satisfying the genus-$0$ condition $\sum_i e_i=2p+2$. We allow $e_3$ or $e_4$ to be $p$, although this is not the case that ultimately interests us; see below for an explanation. 
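In passing, the recursion (\ref{eq:rec}) of Lemma \ref{lem:tail1} is easy to check by machine: the polynomial $F(y)=y^{e_1}(y-1)^{e_2}\tilde{F}(y)$ built from the $c_i$ should satisfy $F'(y)=\gamma y^{e_1-1}(y-1)^{e_2-1}$ over ${\mathbb F}_p$. A minimal sketch (the helper names and the sample parameters are ours, chosen only to satisfy $2\leq e_1,e_2$ and $e_1+e_2\leq p$):

```python
# Sanity check of the recursion (eq:rec): build Ftilde from the c_i and
# verify that F(y) = y^{e1} (y-1)^{e2} Ftilde(y) has derivative
# gamma * y^{e1-1} (y-1)^{e2-1} over F_p.  All names here are ours.
from math import comb

def poly_mul(a, b, p):
    """Multiply two coefficient lists (lowest degree first) mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def binom_poly(n, p):
    """Coefficients of (y-1)^n mod p, lowest degree first."""
    return [comb(n, k) * (-1) ** (n - k) % p for k in range(n + 1)]

def check_tail_polynomial(p, e1, e2):
    n = p - e1 - e2  # degree of Ftilde
    # c_i = c_{i-1} (e1+e2+i-1)/(e1+i), then rescale so that c_n = 1.
    c = [1]
    for i in range(1, n + 1):
        inv = pow(e1 + i, p - 2, p)  # inverse mod p by Fermat
        c.append(c[-1] * (e1 + e2 + i - 1) * inv % p)
    top_inv = pow(c[-1], p - 2, p)
    c = [ci * top_inv % p for ci in c]
    # F = y^{e1} (y-1)^{e2} Ftilde and its derivative mod p.
    F = poly_mul(poly_mul([0] * e1 + [1], binom_poly(e2, p), p), c, p)
    dF = [k * F[k] % p for k in range(1, len(F))]
    while dF and dF[-1] == 0:
        dF.pop()
    # Compare with gamma * y^{e1-1} (y-1)^{e2-1} for the right gamma.
    target = poly_mul([0] * (e1 - 1) + [1], binom_poly(e2 - 1, p), p)
    gamma = dF[-1] * pow(target[-1], p - 2, p) % p
    return dF == [gamma * t % p for t in target]
```

For instance, `check_tail_polynomial(7, 2, 2)` returns `True`; by hand one finds $\tilde{F}(y)=y^3+2y^2+3y+4$ in this case.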
We do however assume throughout that we are not in the exceptional case $\tau=(5;2\text{-}2,4,4)$. According to Lemma \ref{lem:tail1}, we may fix a set of primitive tail covers $\bar{g}_i$ of type $C_i$, for $i$ such that $C_i\neq p$. Moreover, by Proposition \ref{prop:dd} we have a (unique) deformation datum, so we know there exists at least one special $G$-map $\bar{g}$ of type $\tau$. Lemma \ref{lem:tail2} implies moreover that the number of special $G$-maps is equal to the number of $e_1$-$e_2$ tail covers. Wewers (\cite{we1}, Theorem 3) shows that there exists a $G$-Galois cover $g:Y\to {\mathbb P}^1$ in characteristic zero with bad reduction to characteristic $p$, and more specifically with stable reduction equal to the given special $G$-map $\bar{g}$. Moreover, Wewers gives a formula for the number $\tilde{L}(\bar{g})$ of lifts of the given special $G$-map $\bar{g}$. In order to state his formula, we need to introduce one more invariant. Let $\operatorname{Aut}_G^0(\bar{g})$ be the group of $G$-equivariant automorphisms of $\bar{Y}$ which induce the identity on the original component $\bar{X}_0$. Choose $\gamma\in \operatorname{Aut}_G^0(\bar{g})$, and consider the restriction of $\gamma$ to the original component $\bar{X}_0$. Let $\bar{Y}_0$ be an irreducible component of $\bar{Y}$ above $\bar{X}_0$ whose inertia group is the fixed Sylow $p$-subgroup $I$ of $G$. As in (\ref{eq:G0}), we write $G_0=I\rtimes_\chi H_0\subset {\mathbb F}_p\rtimes_\chi {\mathbb F}_p^\ast$ for the decomposition group of $\bar{Y}_0$. Wewers (\cite{we1}, proof of Lemma 2.17) shows that $\gamma_0:=\gamma|_{\bar{Y}_0}$ centralizes $H_0$ and normalizes $I$, i.e.\ $\gamma_0\in C_{N_G(I)}(H_0)$. Since $\bar{Y}|_{\bar{X}_0}=\operatorname{Ind}_{G_0}^G\bar{Y}_0$ and $\gamma$ is $G$-equivariant, it follows that the restriction of $\gamma$ to $\bar{X}_0$ is uniquely determined by $\gamma_0$.
We denote by $n'(\tau)$ the order of the subgroup consisting of those $\gamma_0\in C_{N_G(I)}(H_0)$ such that there exists a $\gamma\in \operatorname{Aut}_G^0(\bar{g})$ with $\gamma|_{\bar{Y}_0}=\gamma_0$. Our notation is justified by Corollary \ref{cor:nprime-defnd} below. Wewers (\cite{we1}, Corollary 4.8) shows that \begin{equation}\label{eq:pd} |\tilde{L}(\bar{g})|=\frac{p-1}{n'(\tau)}\prod_{i\in {\mathbb B}}\frac{h_{C_i}}{|\operatorname{Aut}_{G_{C_i}}^0(\bar{g}_{C_i})|}. \end{equation} The numbers $h_{C_i}$ are as defined in Section \ref{sec:stable}. (Note that the group $\operatorname{Aut}_{G_{C_i}}^0(\bar{g}_{C_i})$ is defined differently from the group $\operatorname{Aut}_G^0(\bar{g})$.) To compute the number of curves with bad reduction, we need to compute the number $n'(\tau)$ defined above. As explained by Wewers (\cite[Lemma 2.17]{we1}), one may express the number $n'(\tau)$ in terms of certain groups of automorphisms of the tail covers. However, there is a mistake in the concrete description he gives of $\operatorname{Aut}_G^0(\bar{g})$ in terms of the tails; we therefore do not use Wewers' description. For a corrected version, we refer to the manuscript \cite{se6}. The difficulty we face in using Wewers' formula directly is that we do not know the Galois group $G_{e_1\text{-}e_2}$ of the $e_1$-$e_2$ tail. This prevents us from directly computing both the number of $e_1$-$e_2$ tails and the term $n'(\tau)$. We avoid this problem by using the following trick. We first consider covers of type $\tau^\ast=(p; e_1\text{-}e_2, \varepsilon, p)$, with $\varepsilon=p+2-e_1-e_2$, which all have bad reduction. This observation lets us compute $n'(\tau^\ast)$ from Wewers' formula. We then show that for covers of type $\tau=(p; e_1\text{-}e_2,e_3,e_4)$, the number $n'(\tau)$ essentially only depends on $e_1$ and $e_2$, allowing us to compute $n'(\tau)$ from $n'(\tau^\ast)$.
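Note that $\varepsilon$ is chosen precisely so that the entries of $\tau^\ast$ again sum to $2p+2$, with the wild point counted as $p$:
$$
(e_1+e_2)+\varepsilon+p=(e_1+e_2)+(p+2-e_1-e_2)+p=2p+2.
$$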
A problem with this method is that in the case that the Galois groups of covers with type $\tau$ and $\tau^\ast$ are not equal, the numbers $n'(\tau)$ and $n'(\tau^\ast)$ may differ by a factor $2$. Therefore in this case, we are able to determine the number of covers of type $\tau$ with bad reduction only up to a factor $2$. In Lemma \ref{lem:badtype}, we have counted non-Galois covers, but in this section, we deal with Galois covers. Let $G(\tau)$ be the Galois group of a cover of type $\tau$. This group is well-defined and either $A_p$ or $S_p$, by Corollary \ref{3pt-monodromy}. We write $\gamma(\tau)$ for the quotient of the number of Galois covers of type $\tau$ by the Hurwitz number $h(\tau)$. By Lemma \ref{Gallem}, it follows that $\gamma(\tau)$ is $2$ if $G$ is $A_p$ and $1$ if it is $S_p$. The number $\gamma(\tau)$ will drop out from the formulas as soon as we pass back to the non-Galois situation in Section \ref{sec:adm}. We first compute the number $n'(\tau^\ast)$. We note that by Corollary \ref{3pt-monodromy}, the Galois group $G(\tau^\ast)$ of a cover of type $\tau^\ast$ is $A_p$ if $e_1+e_2$ is even and $S_p$ otherwise. In particular, we see that $G(\tau)=G(\tau^\ast)$ unless $e_1+e_2$ and $e_3$ are both even. In this case we have that $G(\tau)=S_p$ and $G(\tau^\ast)=A_p$. Recall from Lemma \ref{lem:tail2} that there is a unique tail cover for the single-cycle tails. We denote by $N_{e_1\text{-}e_2}$ the number of $e_1\text{-}e_2$ tails, and by $\operatorname{Aut}^0_{e_1\text{-}e_2}$ the group $\operatorname{Aut}^0_{G_{e_1\text{-}e_2}}(\bar{g}_{e_1\text{-}e_2})$ for any tail cover $\bar{g}_{e_1\text{-}e_2}$ as in Lemma \ref{lem:tail1}. Note that since $\bar{g}_{e_1\text{-}e_2}$ is unique as a mere cover, and the definition of $\operatorname{Aut}^0_{G_{e_1\text{-}e_2}}(\bar{g}_{e_1\text{-}e_2})$ is independent of the $G$-structure, this notation is well-defined. 
We similarly have from \eqref{eq:pd} that $|\tilde{L}(\bar{g})|$ depends only on $\tau$, so we write $\tilde{L}(\tau):=|\tilde{L}(\bar{g})|$ for any special $G$-map $\bar{g}$ of type $\tau$. \begin{lem}\label{lem:tauast} Let $\tau^\ast$ be as above. Then \[ n'(\tau^\ast)= \frac{(1+\delta_{e_1,e_2})N_{e_1\text{-}e_2}(p-1)} {\gcd(p-1, e_1+e_2-2)\gamma(\tau^\ast)|\operatorname{Aut}^0_{e_1\text{-}e_2}|}. \] Here $\delta_{e_1,e_2}$ is the Kronecker $\delta$. \end{lem} \begin{proof} Lemma \ref{lem:badtype} implies that the Hurwitz number $h(\tau^\ast)$ equals $(p+1-e_1-e_2)/2$ if $e_1=e_2$ and $(p+1-e_1-e_2)$ otherwise. Since all covers of type $\tau^\ast$ have bad reduction, $h(\tau^\ast)\gamma(\tau^\ast)$ is equal to $N_{e_1\text{-}e_2} \cdot \tilde{L}(\tau^\ast)$. The statement of the lemma follows by applying Lemmas \ref{lem:tail1}.(c), \ref{lem:tail2}, and (\ref{eq:pd}). \end{proof} We now analyze $n'$ in earnest. For convenience, for $i\in {\mathbb B}$ we also introduce the notation $\widetilde{\operatorname{Aut}}_{G_i}(\bar{g}_i)$ for the group of $G$-equivariant automorphisms of the induced tail cover $\operatorname{Ind}_{G_i}^G(\bar{g}_i)$. Recall also that $\xi_i$ is the node connecting $\bar{X}_0$ to $\bar{X}_i$. We note that $n'$ may be analyzed tail by tail, in the sense that given $\gamma_0 \in C_{N_G(I)}(H_0)$, we have that $\gamma_0$ lifts to $\operatorname{Aut}_G^0(\bar{g})$ if and only if for each $i \in {\mathbb B}$, there is some $\gamma_i \in \widetilde{\operatorname{Aut}}_{G_i}(\bar{g}_i)$ whose action on $\bar{g}_i^{-1}(\xi_i)$ is compatible with $\gamma_0$. The basic proposition underlying the behavior of $n'$ is then the following: \begin{prop}\label{prop:nprime-desc} Suppose $G=S_p$ or $A_p$, and we have a special $G$-map $\bar{g}:\bar{Y}\to\bar{X}$. Then: \begin{itemize} \item[(a)] For $i\in {\mathbb B}$, the $G$-equivariant automorphisms of $\bar{g}^{-1}(\xi_i)$ form a cyclic group.
\item[(b)] Given an element $\gamma_0 \in C_{N_G(I)}(H_0)$ and $i \in {\mathbb B}$, there exists $\gamma_i\in \widetilde{\operatorname{Aut}}_{G_i}(\bar{g}_i)$ agreeing with the action of $\gamma_0$ on $\bar{g}^{-1}(\xi_i)$ if and only if there exists $\gamma_i'\in \widetilde{\operatorname{Aut}}_{G_i}(\bar{g}_i)$ having the same orbit length on $\bar{g}^{-1}(\xi_i)$ as $\gamma_0$. \end{itemize} \end{prop} \begin{proof} For (a), if $\tilde{\xi}_i$ is a point above $\xi_i$ lying on the chosen component $\bar{Y}_0$, one easily checks that a $G$-equivariant automorphism $\gamma$ of $\bar{g}^{-1}(\xi_i)$ is determined by where it sends $\tilde{\xi}_i$, which can in turn be represented by an element $g \in G$ chosen so that $g(\tilde{\xi}_i)=\gamma(\tilde{\xi}_i)$. Note that $\gamma$ is not simply given by the action of $g$; in fact, if $\gamma,\gamma'$ are determined by $g,g'$, the composition law is that $\gamma \circ \gamma'$ corresponds to $g' g$. Such a $g$ yields a choice of $\gamma$ if and only if we have the equality of stabilizers $G_{\tilde{\xi}_i}=G_{g(\tilde{\xi}_i)}$. Now, any $h \in G_{\tilde{\xi}_i}$ is necessarily in $G_0$, and using that $I \subseteq G_{\tilde{\xi}_i}$, we find that we must have $g I g^{-1} \subseteq G_0$. But $I$ contains the only $p$-cycles in $G_0$, so we conclude $g\in N_G(I)$. However, since $I$ fixes $\tilde{\xi}_i$, the choices of $g$ may be taken modulo $I$, so we conclude that they lie in $N_G(I)/I$. Finally, since $G=S_p$ or $A_p$, we have that $N_G(I)/I$ is cyclic, isomorphic to ${\mathbb Z}/(p-1){\mathbb Z}$ if $G=S_p$ and to ${\mathbb Z}/(\frac{p-1}{2}){\mathbb Z}$ if $G=A_p$. (b) then follows immediately, since the actions of both $\gamma_0$ and $\gamma_i'$ on $\bar{g}^{-1}(\xi_i)$ lie in the same cyclic group; we can take $\gamma_i$ to be an appropriate power of $\gamma_i'$. \end{proof} \begin{cor}\label{cor:nprime-defnd} For $\tau$ as above, $n'(\tau)$ is well defined.
\end{cor} \begin{proof} We know that $G=S_p$ or $A_p$, and we also know by Proposition \ref{prop:dd} and Lemma \ref{lem:tail1} that the deformation datum is uniquely determined, and so are the tail covers, at least as mere covers. But the description of $n'(\tau)$ given by Proposition \ref{prop:nprime-desc} is visibly independent of the $G$-structure on the tail covers, so we obtain the desired statement. \end{proof} We can now obtain the desired comparison of $n'(\tau)$ with $n'(\tau^\ast)$. \begin{prop}\label{prop:n'} Let $\tau=(p; e_1$-$e_2, e_3, e_4)$ be a type satisfying the genus-$0$ condition, and let $\tau^\ast$ be the corresponding modified type. Then if $G(\tau)=G(\tau^\ast)$ we have $n'(\tau)=n'(\tau^\ast)$. Otherwise, $n'(\tau)\in \{2n'(\tau^\ast), n'(\tau^\ast)\}$. \end{prop} \begin{proof} Let $\gamma_0$ be a generator of $C_{N_G(I)}(H_0)$. We ask which powers of $\gamma_0$ extend to an element of $\operatorname{Aut}_G^0(\bar{g})$, and we analyze this question tail by tail. Fix a tail $\bar{X}_i$, and suppose that it is a single-cycle tail of length $e:=e_i$. The crucial assertion is that $\gamma_0$ itself (and hence all its powers) always extends to $\bar{X}_i$. First suppose that $e<p-1$ is even. Thus $G=G_i=S_p$, and $\widetilde{\operatorname{Aut}}_G(\bar{g}_i)=\operatorname{Aut}_{G_i}(\bar{g}_i)$. Now, $\gamma_0$ acts on the fiber of $\xi_i$ with orbit length $(p-1)/m_e=\gcd(p-1,p-e)$. On the other hand, by Lemma \ref{lem:tail1} we have that $h_e=(p-e)/\gcd(p-1,e-1)$. Lemma \ref{lem:tail2} implies that if $\gamma_i \in \operatorname{Aut}_{G_i}(\bar{g}_i)$ is a generator, then the order of $\gamma_i$ is $p-e$, and also that $\operatorname{Aut}^0_{G_i}(\bar{g}_i)$ has order $h_e$. We conclude that an orbit of $\gamma_i$ has length $\gcd(p-1,e-1)=\gcd(p-1,p-e)$, and thus by Proposition \ref{prop:nprime-desc} that $\gamma_0$ extends to $\bar{X}_i$, as claimed. The next case is that $e$ is odd, and $G=A_p$. 
This proceeds exactly as before, except that both orbits in question have length $\gcd(p-1,p-e)/2$. Now, suppose $e$ is odd, but $G=S_p$. Then the orbit length of $\gamma_0$ is $\gcd(p-1,p-e)$. We have $\widetilde{\operatorname{Aut}}_{G}(\bar{g}_i)$ equal to the $G$-equivariant automorphisms of $\operatorname{Ind}_{A_p}^{S_p}(\bar{g}_i)$. These contain induced copies of the $A_p$-equivariant automorphisms of $\bar{g}_i$, so in particular we know we have elements of orbit length $\gcd(p-1,e-1)/2$. However, in fact one also has a $G$-equivariant automorphism exchanging the two copies of $\bar{g}_i$, whose square is the generator of the $A_p$-equivariant automorphisms of $\bar{g}_i$. One may think of this as coming from the automorphism constructed in Lemma \ref{lem:tail2} inducing the isomorphism between the two different $A_p$-structures on the tail cover. We thus have an element of $\widetilde{\operatorname{Aut}}_{G}(\bar{g}_i)$ of orbit length $\gcd(p-1,e-1)$, and $\gamma_0$ extends to the tail in this case as well. Finally, if $e=p-1$ then $m_i=p-1$ and thus $\gamma_0$ acts as the identity on the fiber of $\xi_i$. The claim is trivially satisfied in this case. It follows that extending $\gamma_0$ to the $e$-tails imposes no condition when $e<p$, and of course we do not have tails in the case that $e=p$. Therefore the only non-trivial condition imposed in extending $\gamma_0$ is the extension to the $e_1$-$e_2$-tail. In the case that $G(\tau)=G(\tau^\ast)$ we conclude the desired statement from Proposition \ref{prop:nprime-desc}, since the orbit lengths in question are clearly the same in both cases. Suppose that $G(\tau)\neq G(\tau^\ast)$. This happens if and only if both $e_1+e_2$ and $e_3$ are even. In this case we have that $G(\tau)=S_p$ and $G(\tau^\ast)=A_p$. Here, we necessarily have that $e_1+e_2,e_3,e_4$ are all even, so the only conditions imposed on either $n'(\tau)$ or $n'(\tau^\ast)$ come from the $e_1$-$e_2$ tail.
Since the orbit of $\gamma_0$ is twice as long in the case of $\tau$, the answers can differ by at most a factor of $2$ in this case, as desired. \end{proof} Let $2\leq e_1\leq e_2\leq e_3\leq e_4<p$ be integers with $\sum_i e_i=2p+2$ and $e_1+e_2\leq p$. The following corollary translates Proposition \ref{prop:n'} into an estimate for the number of Galois covers of type $\tau=(p; e_1$-$e_2, e_3, e_4)$ with bad reduction. Theorem \ref{thm:3-hurwitz} is a special case. \begin{cor}\label{cor:2cyclebad} Let $\tau=(p; e_1\text{-}e_2, e_3, e_4)$ with $\tau \neq (5;2\text{-}2,4,4)$. The number of mere covers of type $\tau$ with bad reduction to characteristic $p$ is equal to \[ \begin{cases} \delta(\tau)(p+1-e_1-e_2)&\text{ if }e_1\neq e_2,\\ \delta(\tau)(p+1-e_1-e_2)/2&\text{ if } e_1=e_2, \end{cases} \] where $\delta(\tau)\in \{1,2\}$, and $\delta(\tau)=1$ unless $e_1+e_2$ and $e_3$ are both even. \end{cor} \begin{proof} We recall that the number of Galois covers of type $\tau$ with bad reduction is equal to $N_{e_1\text{-}e_2} \cdot \tilde{L}(\tau)$. It follows from Lemma \ref{lem:tail1}.(c) and (\ref{eq:pd}) that this number is \[ \begin{cases} \displaystyle{ \frac{\gamma(\tau^\ast)n'(\tau^\ast)}{n'(\tau)}(p+1-e_1-e_2)}& \text{ if }e_1\neq e_2,\\ \displaystyle{\frac{\gamma(\tau^\ast)n'(\tau^\ast)}{n'(\tau)}(p+1-e_1-e_2)/2}& \text{ if }e_1= e_2. \end{cases} \] The definition of the Galois factor $\gamma(\tau)$ implies that the number of mere covers of type $\tau$ with bad reduction is \[ \begin{cases} \displaystyle{\frac{\gamma(\tau^\ast)}{\gamma(\tau)} \frac{n'(\tau^\ast)}{n'(\tau)}(p+1-e_1-e_2)}& \text{ if }e_1\neq e_2,\\ \displaystyle{\frac{\gamma(\tau^\ast)}{\gamma(\tau)} \frac{n'(\tau^\ast)}{n'(\tau)}(p+1-e_1-e_2)/2}& \text{ if }e_1= e_2. \end{cases} \] Proposition \ref{prop:n'} implies that $n'(\tau)/n'(\tau^\ast)\in \{1,2\}$, and is equal to $1$ unless $e_1+e_2,e_3,e_4$ are all even.
Moreover, if $n'(\tau)\neq n'(\tau^\ast)$ then $\gamma(\tau^\ast)/\gamma(\tau)=2$. The statement of the corollary follows from this. \end{proof} \begin{rem} Similar to the proof of Corollary \ref{cor:2cyclebad}, one may show that every genus-$0$ three-point cover of type $(p; e_1, e_2, e_3)$ has bad reduction. We do not include this proof here, as a proof of this result using linear series already occurs in \cite{os7}, Theorem 4.2. \end{rem} \section{Reduction of admissible covers}\label{sec:adm} In this section, we return to the case of non-Galois covers, and use the results of Section \ref{sec:3pt} to compute the number of ``admissible covers with good reduction''. We start by defining what we mean by this. As always, we fix a type $(p; e_1, e_2, e_3, e_4)$ with $1<e_1\leq e_2\leq e_3\leq e_4<p$ satisfying the genus-$0$ condition $\sum_i e_i=2p+2$. As in Section \ref{sec:char0}, we consider admissible degenerations of type $(p;e_1, e_2,\ast,e_3, e_4)$, which means that $Q_3=\lambda\equiv Q_4=\infty\pmod{p}$. Recall from Section \ref{sec:char0} that in positive characteristic not every smooth cover degenerates to an admissible cover, as a degeneration might become inseparable. The number of admissible covers (even counted with multiplicity) is still bounded by the number of smooth covers, but equality need not hold. \begin{defn}\label{def:phadm} We define $h^{\rm\scriptstyle adm}_p(p; e_1, e_2,\ast,e_3, e_4)$ as the number of admissible covers of type $(p; e_1,e_2, \ast, e_3,e_4)$, counted with multiplicity, over an algebraically closed field of characteristic $p$. \end{defn} The following proposition is the main result of this section. \begin{prop}\label{prop:admbad} The assumptions on the type $\tau=(p;e_1, e_2, e_3, e_4)$ are as above. Then \[ h^{\rm\scriptstyle adm}_p(p; e_1, e_2,\ast, e_3, e_4)>h(p; e_1, e_2, e_3, e_4)-2p, \] and \[ h^{\rm\scriptstyle adm}_p(p; e_1, e_2,\ast, e_3, e_4)=h(p; e_1, e_2, e_3, e_4)-p \] unless $e_1+e_2$ and $e_3$ are both even. 
\end{prop} \begin{proof} We begin by noting that in the case $\tau=(5;2,2,4,4)$ corresponding to the exceptional case of Corollary \ref{3pt-monodromy}, the assertion of the proposition is automatic since $h(5;2,2,4,4)=8<10$. We may therefore assume that $\tau \neq (5;2,2,4,4)$. We use the description of the admissible covers in characteristic zero (Theorem \ref{degenerationlem}) and the results of Section \ref{sec:3pt} to estimate the number of admissible covers with good reduction to characteristic $p$, i.e.\ that remain separable. We first consider the pure-cycle case, i.e.\ the case of Theorem \ref{degenerationlem}.(a). Let $m$ be an integer satisfying the conditions of {\it loc.\ cit.} We write ${f}_0:{V}_0\to {X}_0$ for the corresponding admissible cover. Recall from Section \ref{sec:char0} that ${X}_0$ consists of two projective lines ${X}^1_0, {X}^2_0$ intersecting in one point. Choose an irreducible component ${V}^i_0$ of ${V}_0$ above ${X}^i_0$, and write ${f}^i_0:{V}^i_0\to {X}^i_0$ for the restriction. These are covers of type $(d_1; e_1, e_2, m)$ and $(d_2; m, e_3, e_4)$ with $d_i\leq p$, respectively. The admissible cover ${f}_0$ has good reduction to characteristic $p$ if and only if both three-point covers ${f}^i_0$ have good reduction. It is shown in \cite{os7}, Theorem 4.2, that a genus-$0$ three-point cover of type $(d;a,b,c)$ with $a,b,c<p$ has good reduction to characteristic $p$ if and only if its degree $d$ is strictly less than $p$. Since the degree $d_2$ of the cover ${f}^2_0$ is always at least as large as the other degree $d_1$, it is enough to calculate when $d_2<p$. The Riemann--Hurwitz formula implies that $d_2=(m+e_3+e_4-1)/2$. Therefore the condition $d_2<p$ is equivalent to the inequality $$e_3+e_4+m \leq 2p-1.$$ Since we assumed the existence of an admissible cover with $\rho$ an $m$-cycle, it follows from Theorem \ref{degenerationlem}.(a) that $m \leq 2d+1-e_3-e_4=2p+1-e_3-e_4$. We find that $d_2<p$ unless $m=2p+1-e_3-e_4$.
We also note that the lower bound for $m$ is always less than or equal to the upper bound, which is $2p+1-e_3-e_4$. We thus conclude that there are $2p+1-e_3-e_4$ admissible covers with bad reduction. We now consider the case of an admissible cover with $\rho$ an $e_1$-$e_2$-cycle (Theorem \ref{degenerationlem}.(b)). Let $f_0:V_0 \to X_0$ be such an admissible cover in characteristic $0$, as above. In particular, the restriction $f_0^1$ (resp.\ $f_0^2$) has type $(d_1; e_1, e_2, e_1\text{-}e_2)$ (resp.\ $(d_2; e_1\text{-}e_2, e_3, e_4)$). We write $g_0$ for the Galois closure of $f_0$, and $g_0^i$ for the corresponding restrictions. Let $G^i$ be the Galois group of $g_0^i$. The assumptions on the $e_i$ imply that $p$ does not divide the order of the Galois group of $g_0^1$; therefore $g_0^1$ has good reduction to characteristic $p$. Moreover, the cover $g_0^1$ is uniquely determined by the triple $(\rho^{-1}, g_3, g_4)$. If $e_1\neq e_2$, the gluing is likewise uniquely determined, while if $e_1=e_2$ there are exactly $2$ possibilities for the tuple $(g_1, g_2, g_3, g_4)$ for a given triple $(\rho^{-1}, g_3, g_4)$. Therefore to count the number of admissible covers with bad reduction in this case, it suffices to consider the reduction behavior of the cover $g_0^2:Y_0^2\to X_0^2$. Corollary \ref{cor:2cyclebad} implies that whether or not $e_1$ equals $e_2$, the number of admissible covers with bad reduction in the $2$-cycle case is equal to $(p+1-e_1-e_2)$ unless $e_1+e_2$ and $e_3$ are both even, and is always bounded above by $2(p+1-e_1-e_2)$. We conclude using Theorem \ref{degenerationlem} that the total number of admissible covers with bad reduction counted with multiplicity is less than or equal to \[ (2p+1-e_3-e_4)+2(p+1-e_1-e_2)=p+(p+1-e_1-e_2)<2p, \] and equal to \[ (2p+1-e_3-e_4)+(p+1-e_1-e_2)=p \] unless $e_1+e_2$ and $e_3$ are both even. The proposition follows. \end{proof} \begin{rem} Theorem 4.2 of \cite{os7} does not need the assumption $d=p$.
Therefore the proof of Proposition \ref{prop:admbad} in the single-cycle case shows the following stronger result. Let $(d; e_1, e_2, e_3, e_4)$ be a genus-$0$ type with $1<e_1\leq e_2\leq e_3\leq e_4 < p$. Then the number of admissible covers with a single ramified point over the node and bad reduction to characteristic $p$ is $$(d-p+1)(d+p+1-e_3-e_4)$$ when either $d+1 \geq e_2+e_3$ or $d+1-e_1 < p$. Otherwise, all admissible covers have bad reduction. \end{rem} \section{Proof of the main result}\label{sec:4pt} In this section, we count the number of mere covers with ramification type $(p;e_1, e_2, e_3, e_4)$ and bad reduction in the case that the branch points are generic. Equivalently, we compute the $p$-Hurwitz number $h_p(p;e_1, e_2, e_3, e_4)$. Suppose that $r=4$ and fix a genus-$0$ type $\tau=(p; e_1, e_2, e_3, e_4)$ with $2\leq e_1\leq e_2\leq e_3\leq e_4<p$. We let $g:Y\to X= {\mathbb P}^1_K$ be a Galois cover of type $\tau$ defined over a local field $K$ as in Section \ref{sec:stable}, such that $(X; Q_i)$ is the generic $r$-marked curve of genus $0$. It is no restriction to suppose that $Q_1=0, Q_2=1, Q_3=\lambda, Q_4=\infty$, where $\lambda$ is transcendental over ${\mathbb Q}_p$. We suppose that $g$ has bad reduction to characteristic $p$, and denote by $\bar{g}:\bar{Y}\to \bar{X}$ the stable reduction. We have seen in Section \ref{sec:stable} that we may associate with $\bar{g}$ a set of primitive tail covers $( \bar{g}_i)$ and a deformation datum $(\bar{Z}_0, \omega)$. The primitive tail covers $\bar{g}_i$ for $i\in{\mathbb B}=\{1,2,3,4\}$ are uniquely determined by the $e_i$ (Lemma \ref{lem:tail1}). The following proposition shows that the number of covers with bad reduction is divisible by $p$ in the case that the branch points are generic. \begin{prop} \label{prop:baddeg} Suppose that $(X={\mathbb P}^1_K; Q_i)$ is the generic $4$-marked curve of genus zero.
Then the number of mere covers of $X$ of ramification type $(p;e_1, e_2, e_3, e_4)$ with bad reduction is nonzero and divisible by $p$. \end{prop} \begin{proof} Since the number of Galois covers and the number of mere covers differ by a prime-to-$p$ factor, it suffices to prove the proposition for Galois covers. The existence portion of the proposition is proved in \cite{bo4}, Proposition 2.4.1, and the divisibility by $p$ in Lemma 3.4.1 of {\it loc.\ cit.}\ (in a more general setting). We briefly sketch the proof, which is easier in our case due to the simple structure of the stable reduction (Lemma \ref{lem:stablered}). The idea of the proof is inspired by a result of \cite{we3}, Section~3. We begin by observing that away from the wild branch point $\xi_i$, the primitive tail cover $\bar{g}_i$ is tamely ramified. Therefore we can lift this cover of affine curves to characteristic zero. Let ${\mathcal X}_{0}={\mathbb P}^1_R$ be equipped with $4$ sections $Q_1=0, Q_2=1, Q_3=\lambda, Q_4=\infty$, where $\lambda\in R$ is transcendental over ${\mathbb Z}_p$. Then (\ref{eq:Kummer}) defines an $m$-cyclic cover ${\mathcal Z}_0\to{\mathcal X}_0$. We write $Z\to X$ for its generic fiber. Proposition \ref{prop:dd} implies the existence of a deformation datum $(\bar{Z}_0, \omega)$. Associated with the deformation datum is a character $\chi:{\mathbb Z}/m{\mathbb Z}\to {\mathbb F}_p^\times$ defined by $\chi(\beta)=\beta^\ast z/z\pmod{z}$. The differential form $\omega$ corresponds to a $p$-torsion point $P_0\in J(\bar{Z}_0)[p]_\chi$ on the Jacobian of $\bar{Z}_0$. See for example \cite{se7}. (Here we use that the conjugacy classes $C_i$ are conjugacy classes of prime-to-$p$ elements; this implies that the differential form $\omega$ is holomorphic.)
Since $\sum_{i=1}^4 h_{i}=2m$ and the branch points are generic, we have that $J(\bar{Z}_0)[p]_\chi\simeq {\mathbb Z}/p{\mathbb Z}\times {\boldsymbol \mu}_p$ (\cite{bo6}, Proposition 2.9). After enlarging the discretely valued field $K$, if necessary, we may choose a $p$-torsion point $P\in J({\mathcal Z}_0\otimes_R K)[p]_\chi$ lifting $P_0$. It corresponds to an \'etale $p$-cyclic cover $W\to Z$. The cover $\psi:W\to X$ is Galois, with Galois group $N:={\mathbb Z}/p{\mathbb Z}\rtimes_\chi{\mathbb Z}/m{\mathbb Z}$. It is easy to see that $\psi$ has bad reduction, and that its deformation datum is $(\bar{Z}_0, \omega)$. By using formal patching (\cite{ra3} or \cite{we1}), one now checks that there exists a map $g_R:{\mathcal Y}\to {\mathcal X}$ of stable curves over $\operatorname{Spec}(R)$ whose generic fiber is a $G$-Galois cover of smooth curves, and whose special fiber defines the given tail covers and the deformation datum. Over a neighborhood of the original component $g_R$ is the induced cover $\operatorname{Ind}_N^G {\mathcal Z}_0\to {\mathcal X}_0$. Over the tails, the cover $g_R$ is induced by the lift of the tail covers. The fact that we can patch the tail covers with the cover over ${\mathcal X}_0$ follows from the observation that $h_{i}<m_{i}$ (Lemma \ref{lem:tail1}), since locally there is a unique cover with this ramification (\cite{we1}, Lemma 2.12). This proves the existence statement. The divisibility by $p$ now follows from the observation that the set of lifts $P$ of the $p$-torsion point $P_0\in J(\bar{Z}_0)[p]_\chi$ corresponding to the deformation datum is a ${\boldsymbol \mu}_p$-torsor. \end{proof} We are now ready to prove our Theorem \ref{thm:main}, as well as a slightly sharper version of Theorem \ref{thm:good-degen}. \begin{thm}\label{thm:main2} Let $p$ be an odd prime and $k$ an algebraically closed field of characteristic $p$. Suppose we are given integers $2\leq e_1\leq e_2\leq e_3\leq e_4<p$. 
There exists a dense open subset $U\subset {\mathbb P}^1_k$ such that for $\lambda\in U$ the number of degree-$p$ covers with ramification type $(e_1, e_2, e_3, e_4)$ over the branch points $(0,1,\lambda,\infty)$ is given by the formula $$h_p(e_1,\dots,e_4)=\min_i(e_i(p+1-e_i))-p.$$ Furthermore, unless both $e_1+e_2$ and $e_3$ are even, every such cover has good degeneration under a degeneration of the base sending $\lambda$ to $\infty$. \end{thm} \begin{proof} Proposition \ref{prop:baddeg} implies that the number of covers with ramification type $(p;e_1, e_2, e_3, e_4)$ and bad reduction is at least $p$. This implies that the generic Hurwitz number $h_p(e_1,\ldots, e_4)$ is at most $\min_i(e_i(p+1-e_i))-p$. Proposition \ref{prop:admbad} implies that the number of admissible covers in characteristic $p$ is strictly larger than $\min_i(e_i(p+1-e_i))-2p$. Since the number of separable covers can only decrease under specialization, we conclude that the generic Hurwitz number equals $\min_i(e_i(p+1-e_i))-p$. This proves the first statement, and the second follows immediately from Proposition \ref{prop:admbad} in the situation that $e_1+e_2$ and $e_3$ are not both even. \end{proof} \begin{rem} By using the results of \cite{bo4} one can prove a stronger result than Theorem \ref{thm:main2}. We say that a $\lambda\in {\mathbb P}^1_k\setminus\{0,1,\infty\}$ is {\em supersingular} if it is a zero of the polynomial (\ref{eq:Hasseinv}) and {\em ordinary} otherwise. Then the number of covers in characteristic $p$ of type $(p; e_1, e_2, e_3, e_4)$ branched at $(0,1,\lambda, \infty)$ is $h_p(p; e_1, e_2, e_3, e_4)$ if $\lambda$ is ordinary and $h_p(p; e_1, e_2, e_3, e_4)-1$ if $\lambda$ is supersingular. To prove this result, one needs to study the stable reduction of the cover $\pi:\bar{{\mathcal H}}\to {\mathbb P}^1_\lambda$ of the Hurwitz curve to the configuration space. We do not prove this result here, as it would require too many technical details. 
\end{rem} \bibliographystyle{hamsplain}
\section{Introduction and main results} In this note we are interested in the existence versus non-existence of stable sub- and super-solutions of equations of the form \begin{equation} \label{eq1} -div( \omega_1(x) \nabla u ) = \omega_2(x) f(u) \qquad \mbox{in $ {\mathbb{R}}^N$,} \end{equation} where $f(u)$ is one of the following non-linearities: $e^u$, $ u^p$ where $ p>1$ and $ -u^{-p}$ where $ p>0$. We assume that $ \omega_1(x)$ and $ \omega_2(x)$, which we call \emph{weights}, are smooth positive functions (we allow $ \omega_2$ to be zero at say a point) and which satisfy various growth conditions at $ \infty$. Recall that we say that a solution $ u $ of $ -\Delta u = f(u)$ in $ {\mathbb{R}}^N$ is stable provided \[ \int f'(u) \psi^2 \le \int | \nabla \psi|^2, \qquad \forall \psi \in C_c^2,\] where $ C_c^2$ is the set of $ C^2$ functions defined on $ {\mathbb{R}}^N$ with compact support. Note that the stability of $u$ is just saying that the second variation at $u$ of the energy associated with the equation is non-negative. In our setting this becomes: We say a $C^2$ sub/super-solution $u$ of (\ref{eq1}) is \emph{stable} provided \begin{equation} \label{stable} \int \omega_2 f'(u) \psi^2 \le \int \omega_1 | \nabla \psi|^2 \qquad \forall \psi \in C_c^2. \end{equation} One should note that (\ref{eq1}) can be re-written as \begin{equation*} - \Delta u + \nabla \gamma(x) \cdot \nabla u ={ \omega_2}/{\omega_1}\ f(u) \qquad \text{ in $ \mathbb{R}^N$}, \end{equation*} where $\gamma = - \log( \omega_1)$ and on occasion we shall take this point of view. \begin{remark} \label{triv} Note that if $ \omega_1$ has enough integrability then it is immediate that if $u$ is a stable solution of (\ref{eq1}) we have $ \int \omega_2 f'(u) =0 $ (provided $f$ is increasing). 
To see this let $ 0 \le \psi \le 1$ be supported in a ball of radius $2R$ centered at the origin ($B_{2R}$) with $ \psi =1$ on $ B_R$ and such that $ | \nabla \psi | \le \frac{C}{R}$ where $ C>0$ is independent of $ R$. Putting this $ \psi$ into $ (\ref{stable})$ one obtains \[ \int_{B_R} \omega_2 f'(u) \le \frac{C}{R^2} \int_{R < |x| <2R} \omega_1,\] and so if the right hand side goes to zero as $ R \rightarrow \infty$ we have the desired result. \end{remark} The existence versus non-existence of stable solutions of $ -\Delta u = f(u)$ in $ {\mathbb{R}}^N$ or $ -\Delta u = g(x) f(u)$ in $ {\mathbb{R}}^N$ is now quite well understood, see \cite{dancer1, farina1, egg, zz, f2, f3, wei, ces, e1, e2}. We remark that some of these results examine the case where $ \Delta $ is replaced with $ \Delta_p$ (the $p$-Laplacian), and in many cases the authors are interested in finite Morse index solutions or solutions which are stable outside a compact set. Much of the interest in these Liouville type theorems stems from the fact that the non-existence of a stable solution is related to the existence of a priori estimates for stable solutions of a related equation on a bounded domain. In \cite{Ni} equations similar to $ -\Delta u = |x|^\alpha u^p$ were examined on the unit ball in $ {\mathbb{R}}^N$ with zero Dirichlet boundary conditions. There it was shown that for $ \alpha >0$ one can obtain positive solutions for $ p $ supercritical with respect to Sobolev embedding, so one can view the term $ |x|^\alpha$ as restoring some compactness. A similar feature happens for equations of the form \[ -\Delta u = |x|^\alpha f(u) \qquad \mbox{in $ {\mathbb{R}}^N$};\] the value of $ \alpha$ can vastly alter the existence versus non-existence of a stable solution, see \cite{e1, ces, e2, zz, egg}. 
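As a concrete illustration of Remark \ref{triv} (our example, anticipating the power weights considered in Theorem \ref{alpha_beta}), take $ \omega_1(x)=(1+|x|^2)^\frac{\alpha}{2}$. For $ R \ge 1$ one has \[ \frac{C}{R^2} \int_{R<|x|<2R} \omega_1 \le \frac{C'}{R^2}\, R^{N+\alpha} = C' R^{N+\alpha-2},\] which tends to zero as $ R \rightarrow \infty$ whenever $ N+\alpha-2<0$; this is exactly the trivial case appearing in Theorem \ref{alpha_beta} (1).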
We now come to our main results and for this we need to define a few quantities: \begin{eqnarray*} I_G&:=& R^{-4t-2} \int_{ R < |x|<2R} \frac{ \omega_1^{2t+1}}{\omega_2^{2t}}dx , \\ J_G&:=& R^{-2t-1} \int_{R < |x| <2R} \frac{| \nabla \omega_1|^{2t+1} }{\omega_2^{2t}} dx ,\\I_L&:=& R^\frac{-2(2t+p-1)}{p-1} \int_{R<|x|<2R }{ \left( \frac{\omega_1^{p+2t-1}}{\omega_2^{2t}} \right)^{\frac{1}{p-1} } } dx,\\ J_L&:= &R^{-\frac{p+2t-1}{p-1} } \int_{R<|x|<2R }{ \left( \frac{|\nabla \omega_1|^{p+2t-1}}{\omega_2^{2t}} \right)^{\frac{1}{p-1} } } dx,\\ I_M &:=& R^{-2\frac{p+2t+1}{p+1} } \int_{R<|x|<2R }{ \left( \frac{\omega_1^{p+2t+1}}{\omega_2^{2t}} \right)^{\frac{1}{p+1} } } \ dx, \\ J_M &:= & R^{-\frac{p+2t+1}{p+1} } \int_{R<|x|<2R }{ \left( \frac{|\nabla \omega_1|^{p+2t+1}}{\omega_2^{2t}} \right)^{\frac{1}{p+1} } } dx. \end{eqnarray*} The three equations we examine are \[ -div( \omega_1 \nabla u ) = \omega_2 e^u \qquad \mbox{ in $ {\mathbb{R}}^N$ } \quad (G), \] \[ -div( \omega_1 \nabla u ) = \omega_2 u^p \qquad \mbox{ in $ {\mathbb{R}}^N$ } \quad (L), \] \[ -div( \omega_1 \nabla u ) = - \omega_2 u^{-p} \qquad \mbox{ in $ {\mathbb{R}}^N$ } \quad (M),\] where we restrict $(L)$ to the case $ p>1$ and $(M)$ to $ p>0$. By solution we always mean a $C^2$ solution. We now come to our main results in terms of abstract $ \omega_1 $ and $ \omega_2$. We remark that our approach to non-existence of stable solutions is the approach due to Farina, see \cite{f2,f3,farina1}. \begin{thm} \label{main_non_exist} \begin{enumerate} \item There is no stable sub-solution of $(G)$ if $ I_G, J_G \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<2$. \item There is no positive stable sub-solution (super-solution) of $(L)$ if $ I_L,J_L \rightarrow 0$ as $ R \rightarrow \infty$ for some $p- \sqrt{p(p-1)} < t<p+\sqrt{p(p-1)} $ ($0<t<\frac{1}{2}$). \item There is no positive stable super-solution of $(M)$ if $ I_M,J_M \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<p+\sqrt{p(p+1)}$. 
\end{enumerate} \end{thm} If we assume that $ \omega_1$ has some monotonicity we can do better. We will assume that the monotonicity condition is satisfied for big $x$, but really all one needs is for it to be satisfied on a suitable sequence of annuli. \begin{thm} \label{mono} \begin{enumerate} \item There is no stable sub-solution of $(G)$ with $ \nabla \omega_1(x) \cdot x \le 0$ for big $x$ if $ I_G \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<2$. \item There is no positive stable sub-solution of $(L)$ provided $ I_L \rightarrow 0$ as $ R \rightarrow \infty$ for either: \begin{itemize} \item some $ 1 \le t < p + \sqrt{p(p-1)}$ and $ \nabla \omega_1(x) \cdot x \le 0$ for big $x$, or \\ \item some $ p - \sqrt{p(p-1)} < t \le 1$ and $ \nabla \omega_1(x) \cdot x \ge 0$ for big $ x$. \end{itemize} There is no positive stable super-solution of $(L)$ provided $ I_L \rightarrow 0$ as $ R \rightarrow \infty$ for some $ 0 < t < \frac{1}{2}$ and $ \nabla \omega_1(x) \cdot x \le 0$ for big $x$. \item There is no positive stable super-solution of $(M)$ provided $ I_M \rightarrow 0$ as $ R \rightarrow \infty$ for some $0<t<p+\sqrt{p(p+1)}$. \end{enumerate} \end{thm} \begin{cor} \label{thing} Suppose $ \omega_1 \le C \omega_2$ for big $ x$, $ \omega_2 \in L^\infty$, and $ \nabla \omega_1(x) \cdot x \le 0$ for big $ x$. \begin{enumerate} \item There is no stable sub-solution of $(G)$ if $ N \le 9$. \item There is no positive stable sub-solution of $(L)$ if $$N<2+\frac{4}{p-1} \left( p+\sqrt{p(p-1)} \right).$$ \item There is no positive stable super-solution of $(M)$ if $$N<2+\frac{4}{p+1} \left( p+\sqrt{p(p+1)} \right).$$ \end{enumerate} \end{cor} If one takes $ \omega_1=\omega_2=1$ in the above corollary, the results obtained for $(G)$ and $(L)$, and for some values of $p$ in $(M)$, are optimal, see \cite{f2,f3,zz}. We now drop all monotonicity conditions on $ \omega_1$. 
\begin{cor} \label{po} Suppose $ \omega_1 \le C \omega_2$ for big $x$, $ \omega_2 \in L^\infty$, and $ | \nabla \omega_1| \le C \omega_2$ for big $x$. \begin{enumerate} \item There is no stable sub-solution of $(G)$ if $ N \le 4$. \item There is no positive stable sub-solution of $(L)$ if $$N<1+\frac{2}{p-1} \left( p+\sqrt{p(p-1)} \right).$$ \item There is no positive stable super-solution of $(M)$ if $$N<1+\frac{2}{p+1} \left( p+\sqrt{p(p+1)} \right).$$ \end{enumerate} \end{cor} Some of the conditions on $ \omega_i$ in Corollary \ref{po} seem somewhat artificial. If we shift over to the advection equation (taking $ \omega_1=\omega_2$ for simplicity) \[ -\Delta u + \nabla \gamma \cdot \nabla u = f(u), \] the conditions on $ \gamma$ become: $ \gamma$ is bounded from below and has a bounded gradient. In what follows we examine the case where $ \omega_1(x) = (|x|^2 +1)^\frac{\alpha}{2}$ and $ \omega_2(x)= g(x) (|x|^2 +1)^\frac{\beta}{2}$, where $ g(x) $ is smooth, positive except possibly at a single point, and satisfies $ \lim_{|x| \rightarrow \infty} g(x) = C \in (0,\infty)$. For this class of weights we can essentially obtain optimal results. \begin{thm} \label{alpha_beta} Take $ \omega_1 $ and $ \omega_2$ as above. \begin{enumerate} \item If $ N+ \alpha - 2 <0$ then there is no stable sub-solution of $(G)$ or $(L)$ (positive in the case of $(L)$), and in the case of $(M)$ there is no positive stable super-solution. This is the trivial case, see Remark \ref{triv}. \\ \textbf{Assumption:} For the remaining cases we assume that $ N + \alpha -2 > 0$. \item If $N+\alpha-2<4(\beta-\alpha+2)$ then there is no stable sub-solution of $ (G)$. \item If $N+\alpha-2<\frac{ 2(\beta-\alpha+2) }{p-1} \left( p+\sqrt{p(p-1)} \right)$ then there is no positive stable sub-solution of $(L)$. \item If $N+\alpha-2<\frac{2(\beta-\alpha+2) }{p+1} \left( p+\sqrt{p(p+1)} \right)$ then there is no positive stable super-solution of $(M)$. 
\item Furthermore, items 2, 3 and 4 are optimal in the following sense: if $ N + \alpha -2 > 0$ and the relevant inequality fails strictly (that is, it is not satisfied and equality does not hold), then we can find a suitable function $ g(x)$ which satisfies the above properties, together with a stable sub/super-solution $u$ of the appropriate equation. \end{enumerate} \end{thm} \begin{remark} Many of the above results can be extended to the case of equality, either in $ N + \alpha - 2 \ge 0$ or in the other inequality, which depends on the equation under examination. We omit the details because one cannot prove the results in a unified way. \end{remark} In showing that an explicit solution is stable we will need the weighted Hardy inequality given in \cite{craig}. \begin{lemma} \label{Har} Suppose $ E>0$ is a smooth function. Then one has \[ (\tau-\frac{1}{2})^2 \int E^{2\tau-2} | \nabla E|^2 \phi^2 + (\frac{1}{2}-\tau) \int (-\Delta E) E^{2\tau-1} \phi^2 \le \int E^{2\tau} | \nabla \phi|^2,\] for all $ \phi \in C_c^\infty({\mathbb{R}}^N)$ and $ \tau \in {\mathbb{R}}$. \end{lemma} By picking an appropriate function $E$ this gives the following. \begin{cor} \label{Hardy} For all $ \phi \in C_c^\infty$ and $ t , \alpha \in {\mathbb{R}}$ we have \begin{eqnarray*} \int (1+|x|^2)^\frac{\alpha}{2} |\nabla\phi|^2 &\ge& (t+\frac{\alpha}{2})^2 \int |x|^2 (1+|x|^2)^{-2+\frac{\alpha}{2}}\phi^2\\ &&+(t+\frac{\alpha}{2})\int (N-2(t+1) \frac{|x|^2}{1+|x|^2}) (1+|x|^2)^{-1+\frac{\alpha} {2}} \phi^2. \end{eqnarray*} \end{cor} \section{Proof of main results} \textbf{ Proof of Theorem \ref{main_non_exist}.} (1). Suppose $ u$ is a stable sub-solution of $(G)$ with $ I_G,J_G \rightarrow 0$ as $ R \rightarrow \infty$ and let $ 0 \le \phi \le 1$ denote a smooth compactly supported function. 
Put $ \psi:= e^{tu} \phi$ into (\ref{stable}), where $ 0 <t<2$, to arrive at \begin{eqnarray*} \int \omega_2 e^{(2t+1)u} \phi^2 &\le & t^2 \int \omega_1 e^{2tu} | \nabla u|^2 \phi^2 \\ && +\int \omega_1 e^{2tu}|\nabla \phi|^2 + 2 t \int \omega_1 e^{2tu} \phi \nabla u \cdot \nabla \phi. \end{eqnarray*} Now multiply $(G)$ by $ e^{2tu} \phi^2$ and integrate by parts to arrive at \[ 2t \int \omega_1 e^{2tu} | \nabla u|^2 \phi^2 \le \int \omega_2 e^{(2t+1) u} \phi^2 - 2 \int \omega_1 e^{2tu} \phi \nabla u \cdot \nabla \phi,\] and equating like terms one arrives at \begin{eqnarray} \label{start} \frac{(2-t)}{2} \int \omega_2 e^{(2t+1) u} \phi^2 & \le & \int \omega_1 e^{2tu} \left( | \nabla \phi |^2 - \frac{ \Delta \phi}{2} \right) dx \nonumber \\ && - \frac{1}{2} \int e^{2tu} \phi \nabla \omega_1 \cdot \nabla \phi. \end{eqnarray} Now substitute $ \phi^m$ for $ \phi$ in this inequality, where $ m $ is a large integer, to obtain \begin{eqnarray} \label{start_1} \frac{(2-t)}{2} \int \omega_2 e^{(2t+1) u} \phi^{2m} & \le & C_m \int \omega_1 e^{2tu} \phi^{2m-2} \left( | \nabla \phi |^2 + \phi |\Delta \phi| \right) dx \nonumber \\ && - D_m \int e^{2tu} \phi^{2m-1} \nabla \omega_1 \cdot \nabla \phi \end{eqnarray} where $ C_m$ and $ D_m$ are positive constants depending only on $m$. We now estimate the terms on the right; we mention that when one assumes the appropriate monotonicity on $ \omega_1$ it is the last integral on the right which one is able to drop. \begin{eqnarray*} \int \omega_1 e^{2tu} \phi^{2m-2} | \nabla \phi|^2 & = & \int \omega_2^\frac{2t}{2t+1} e^{2tu} \phi^{2m-2} \frac{ \omega_1 }{\omega_2^\frac{2t}{2t+1}} | \nabla \phi|^2 \\ & \le & \left( \int \omega_2 e^{(2t+1) u} \phi^{(2m-2) \frac{(2t+1)}{2t}} dx \right)^\frac{2t}{2t+1}\\ &&\left( \int \frac{ \omega_1 ^{2t+1}}{\omega_2^{2t}} | \nabla \phi |^{2(2t+1)} \right)^\frac{1}{2t+1}. 
\end{eqnarray*} Now, for fixed $ 0 <t<2$ we can take $ m $ big enough so that $ (2m-2) \frac{(2t+1)}{2t} \ge 2m $, and since $ 0 \le \phi \le 1$ this allows us to replace the power on $ \phi$ in the first term on the right with $2m$; hence we obtain \begin{equation} \label{three} \int \omega_1 e^{2tu} \phi^{2m-2} | \nabla \phi|^2 \le \left( \int \omega_2 e^{(2t+1) u} \phi^{2m} dx \right)^\frac{2t}{2t+1} \left( \int \frac{ \omega_1 ^{2t+1}}{\omega_2^{2t}} | \nabla \phi |^{2(2t+1)} \right)^\frac{1}{2t+1}. \end{equation} We now take the test functions $ \phi$ to be such that $ 0 \le \phi \le 1$ with $ \phi $ supported in the ball $ B_{2R}$, $ \phi = 1 $ on $ B_R$, and $ | \nabla \phi | \le \frac{C}{R}$ where $ C>0$ is independent of $ R$. With this choice of $ \phi$ we obtain \begin{equation} \label{four} \int \omega_1 e^{2tu} \phi^{2m-2} | \nabla \phi |^2 \le \left( \int \omega_2 e^{(2t+1)u} \phi^{2m} \right)^\frac{2t}{2t+1} I_G^\frac{1}{2t+1}. \end{equation} One similarly shows that \[ \int \omega_1 e^{2tu} \phi^{2m-1} | \Delta \phi| \le \left( \int \omega_2 e^{(2t+1)u} \phi^{2m} \right)^\frac{2t}{2t+1} I_G^\frac{1}{2t+1}.\] So, combining the results we obtain \begin{eqnarray} \label{combine} \nonumber \frac{(2-t)}{2} \int \omega_2 e^{(2t+1) u} \phi^{2m} &\le& C_m \left( \int \omega_2 e^{(2t+1) u} \phi^{2m} dx \right)^\frac{2t}{2t+1} I_G^\frac{1}{2t+1}\\ &&- D_m \int e^{2tu} \phi^{2m-1} \nabla \omega_1 \cdot \nabla \phi. \end{eqnarray} We now estimate this last term. A similar argument using H\"{o}lder's inequality shows that \[ \int e^{2tu} \phi^{2m-1} | \nabla \omega_1| | \nabla \phi| \le \left( \int \omega_2 \phi^{2m} e^{(2t+1) u} dx \right)^\frac{2t}{2t+1} J_G^\frac{1}{2t+1}. 
\] Combining the results gives \begin{equation} \label{last} (2-t) \left( \int \omega_2 e^{(2t+1) u} \phi^{2m} dx \right)^\frac{1}{2t+1} \le I_G^\frac{1}{2t+1} + J_G^\frac{1}{2t+1}, \end{equation} and now we send $ R \rightarrow \infty$ and use the fact that $ I_G, J_G \rightarrow 0$ as $ R \rightarrow \infty$ to see that \[ \int \omega_2 e^{(2t+1) u} =0, \] which is clearly a contradiction. Hence there is no stable sub-solution of $(G)$. (2). Suppose that $u >0$ is a stable sub-solution (super-solution) of $(L)$. Then a calculation similar to that in (1) shows that for $ p - \sqrt{p(p-1)} <t < p + \sqrt{p(p-1)}$ ($ 0 <t<\frac{1}{2}$) one has \begin{eqnarray} \label{shit} (p -\frac{t^2}{2t-1} )\int \omega_2 u^{2t+p-1} \phi^{2m} & \le & D_m \int \omega_1 u^{2t} \phi^{2(m-1)} (|\nabla\phi|^2 +\phi |\Delta \phi |) \nonumber \\ && +C_m \frac{(1-t)}{2(2t-1)} \int u^{2t} \phi^{2m-1}\nabla \omega_1 \cdot \nabla \phi. \end{eqnarray} One now applies H\"{o}lder's inequality as in (1), and the terms $ I_L$ and $J_L$ appear on the right hand side of the resulting inequality. The shift from a sub-solution to a super-solution according to whether $ t >\frac{1}{2}$ or $ t < \frac{1}{2}$ results from the sign change of $ 2t-1$ at $ t = \frac{1}{2}$. We leave the details to the reader. (3). This case is similar to (1) and (2). \hfill $ \Box$ \textbf{Proof of Theorem \ref{mono}.} (1). Again we suppose there is a stable sub-solution $u$ of $(G)$. Our starting point is (\ref{start_1}) and we wish to be able to drop the term \[ - D_m \int e^{2tu} \phi^{2m-1} \nabla \omega_1 \cdot \nabla \phi \] from (\ref{start_1}). We can choose $ \phi$ as in the proof of Theorem \ref{main_non_exist}, but also such that $ \nabla \phi(x) = - C(x) x$ where $ C(x) \ge 0$. So if we assume that $ \nabla \omega_1 \cdot x \le 0$ for big $x$ then we see that this last term is non-positive and hence we can drop it. 
The proof is then as before, but now we only require that $ \lim_{R \rightarrow \infty} I_G=0$. (2). Suppose that $ u >0$ is a stable sub-solution of $(L)$, so (\ref{shit}) holds for all $ p - \sqrt{p(p-1)} <t< p + \sqrt{p(p-1)}$. We wish to use monotonicity to drop the term in (\ref{shit}) involving $ \nabla \omega_1 \cdot \nabla \phi$. Here $ \phi$ is chosen as in (1), but one notes that the coefficient of this term changes sign at $ t=1$, and hence by restricting $t$ to the appropriate side of $1$ (along with the above conditions on $t$ and $\omega_1$) we can drop the last term, depending on which monotonicity we have; to obtain a contradiction we then only require that $ \lim_{R \rightarrow \infty} I_L =0$. The result for the non-existence of a stable super-solution is similar, but here one restricts $ 0 < t < \frac{1}{2}$. (3). The proof here is similar to (1) and (2) and we omit the details. \hfill $\Box$ \textbf{Proof of Corollary \ref{thing}.} We suppose that $ \omega_1 \le C \omega_2$ for big $ x$, $ \omega_2 \in L^\infty$, and $ \nabla \omega_1(x) \cdot x \le 0$ for big $ x$. \\ (1). Since $ \nabla \omega_1 \cdot x \le 0$ for big $x$ we can apply Theorem \ref{mono} to show the non-existence of a stable sub-solution of $(G)$. Note that with the above assumptions on $ \omega_i$ we have \[ I_G \le \frac{C R^N}{R^{4t+2}}.\] For $ N \le 9$ we can take $ 0 <t<2$ close enough to $2$ that the right hand side goes to zero as $ R \rightarrow \infty$. Both (2) and (3) also follow directly from applying Theorem \ref{mono}. Note that one can say more about (2) by considering the multiple cases listed in Theorem \ref{mono}, but we choose to leave this to the reader. \hfill $ \Box$ \textbf{Proof of Corollary \ref{po}.} Since we now have no monotonicity conditions, we need both $I$ and $J$ to go to zero to show the non-existence of a stable solution. 
Again the results are obtained immediately by applying Theorem \ref{main_non_exist}, and we prefer to omit the details. \hfill $\Box$ \textbf{Proof of Theorem \ref{alpha_beta}.} (1). If $ N + \alpha -2 <0$ then using Remark \ref{triv} one easily sees that there is no stable sub-solution of $(G)$ or $(L)$ (positive for $(L)$), nor a positive stable super-solution of $(M)$. So we now assume that $ N + \alpha -2 > 0$. Note that the monotonicity of $ \omega_1$ changes when $ \alpha $ changes sign, and hence one might think that we need to consider separate cases in order to utilize the monotonicity results. But a computation shows that in fact $ I$ and $J$ are just multiples of each other in all three cases, so it suffices to show, say, that $ \lim_{R \rightarrow \infty} I =0$. \\ (2). Note that for $ R >1$ one has \begin{eqnarray*} I_G & \le & \frac{C}{R^{4t+2}} \int_{R <|x| < 2R} |x|^{ \alpha (2t+1) - 2t \beta} \\ & \le & \frac{C}{R^{4t+2}} R^{N + \alpha (2t+1) - 2t \beta}, \end{eqnarray*} and so to show the non-existence we want to find some $ 0 <t<2$ such that $ 4t+2 > N + \alpha(2t+1) - 2 t \beta$, which is equivalent to $ 2t ( \beta - \alpha +2) > N + \alpha -2$. Now recall that we are assuming that $ 0 < N + \alpha -2 < 4 ( \beta - \alpha +2) $, and hence we have the desired result by taking $ t <2$ sufficiently close to $2$. The proofs of the non-existence results in (3) and (4) are similar and we omit the details. \\ (5). We now assume that $N+\alpha-2>0$. In showing the existence of stable sub/super-solutions we need to consider $ \beta - \alpha + 2 <0$ and $ \beta - \alpha +2 >0$ separately. \begin{itemize} \item $(\beta - \alpha + 2 <0)$ Here we take $ u(x)=0$ in the case of $(G)$ and $ u=1$ in the case of $(L)$ and $(M)$. In addition we take $ g(x)=\E$. It is clear that in all cases $u$ is the appropriate sub- or super-solution. The only thing one needs to check is the stability. 
In all cases this reduces to showing that \[ \sigma \int (1+|x|^2)^{\frac{\alpha}{2} -1} \phi^2 \le \int (1+|x|^2)^{\frac{\alpha}{2}} | \nabla\phi |^2,\] for all $ \phi \in C_c^\infty$, where $ \sigma $ is some small positive constant; it is either $ \E$ or $ p \E$ depending on which equation we are examining. To show this we use the result of Corollary \ref{Hardy}: dropping a few positive terms, we arrive at \begin{equation*} \int (1+|x|^2)^\frac{\alpha}{2} |\nabla\phi|^2\ge (t+\frac{\alpha}{2})\int \left (N-2(t+1) \frac{|x|^2}{1+|x|^2}\right) (1+|x|^2)^{-1+\frac{\alpha} {2}} \phi^2, \end{equation*} which holds for all $ \phi \in C_c^\infty$ and $ t,\alpha \in {\mathbb{R}}$. Now, since $N+\alpha-2>0$, we can choose $t$ such that $-\frac{\alpha}{2}<t<\frac{N-2}{2}$. Then the integrand on the right hand side is positive, and since for small enough $\sigma$ we have \begin{equation*} \sigma \le (t+\frac{\alpha}{2})(N-2(t+1) \frac{|x|^2}{1+|x|^2}) \ \ \ \text {for all} \ \ x\in \mathbb{R}^N, \end{equation*} we get stability. \item ($\beta-\alpha+2>0$) In the case of $(G)$ we take $u(x)=-\frac{\beta-\alpha+2}{2} \ln(1+|x|^2)$ and $g(x):= (\beta-\alpha+2)(N+(\alpha-2)\frac{|x|^2}{1+|x|^2})$. A computation shows that $u$ is a sub-solution of $(G)$, and hence we need only show the stability, which amounts to showing that \begin{equation*} \int \frac{g(x)\psi^2}{(1+|x|^{2 })^{-\frac{\alpha}{2}+1}}\le \int\frac{|\nabla\psi|^2}{ (1+|x|^2)^{-\frac{\alpha}{2}} }, \end{equation*} for all $ \psi \in C_c^\infty$. To show this we use Corollary \ref{Hardy}. So we need to choose an appropriate $t$ with $-\frac{\alpha}{2}\le t\le\frac{N-2}{2}$ such that for all $x\in {\mathbb{R}}^N$ we have \begin{eqnarray*} (\beta-\alpha+2)\left( N+ (\alpha-2)\frac{|x|^2}{1+|x|^2}\right) &\le& (t+\frac{\alpha}{2})^2 \frac{ |x|^2 }{1+|x|^2}\\ &&+(t+\frac{\alpha}{2}) \left(N-2(t+1) \frac{|x|^2}{1+|x|^2}\right). 
\end{eqnarray*} A simple calculation shows that it suffices to have \begin{eqnarray*} (\beta-\alpha+2)&\le& (t+\frac{\alpha}{2}), \\ (\beta-\alpha+2) \left( N+ \alpha-2\right) & \le& (t+\frac{\alpha}{2}) \left(N-t-2+\frac{\alpha}{2}\right). \end{eqnarray*} If one takes $ t= \frac{N-2}{2}$ when $ N \neq 2$, and $ t $ close to zero when $ N=2$, one easily sees that both of the above inequalities hold, after taking into account all the constraints on $ \alpha,\beta$ and $N$. We now consider the case of $(L)$. Here one takes $g(x):=\frac {\beta-\alpha+2}{p-1}( N+ (\alpha-2-\frac{\beta-\alpha+2}{p-1}) \frac{|x|^2}{1+|x|^2})$ and $ u(x)=(1+|x|^2)^{ -\frac {\beta-\alpha+2}{2(p-1)} }$. Using essentially the same approach as for $(G)$ one shows that $u$ is a stable sub-solution of $(L)$ with this choice of $g$. \\ For the case of $(M)$ we take $u(x)=(1+|x|^2)^{ \frac {\beta-\alpha+2}{2(p+1)} }$ and $g(x):=\frac {\beta-\alpha+2}{p+1}( N+ (\alpha-2+\frac{\beta-\alpha+2}{p+1}) \frac{|x|^2}{1+|x|^2})$. \end{itemize} \hfill $ \Box$
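For completeness we record a choice of $E$ that produces Corollary \ref{Hardy} from Lemma \ref{Har}; this is our reconstruction, since the choice is not made explicit above. Taking $E=(1+|x|^2)^{-t}$ with $t\neq 0$ and $\tau=-\frac{\alpha}{4t}$, so that $E^{2\tau}=(1+|x|^2)^\frac{\alpha}{2}$, a direct computation gives \[ |\nabla E|^2 = 4t^2 |x|^2 (1+|x|^2)^{-2t-2}, \qquad -\Delta E = 2t (1+|x|^2)^{-t-1}\left( N - 2(t+1)\frac{|x|^2}{1+|x|^2}\right).\] Using $2t\tau=-\frac{\alpha}{2}$ one then finds \[ (\tau-\tfrac{1}{2})^2 E^{2\tau-2}|\nabla E|^2 = (t+\tfrac{\alpha}{2})^2 |x|^2 (1+|x|^2)^{-2+\frac{\alpha}{2}} \] and \[ (\tfrac{1}{2}-\tau)(-\Delta E)E^{2\tau-1} = (t+\tfrac{\alpha}{2})\left(N-2(t+1)\frac{|x|^2}{1+|x|^2}\right)(1+|x|^2)^{-1+\frac{\alpha}{2}},\] which are precisely the two terms appearing in Corollary \ref{Hardy}.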
\section{Introduction} \label{sec:intro} \begin{figure} \centering \begin{tabular}{c c c} \begin{minipage}[c]{0.3\linewidth} \includegraphics[width=\linewidth, height=1.4\linewidth]{res/img/example/2.jpg} \end{minipage} & \begin{minipage}[c]{0.3\linewidth} \includegraphics[width=\linewidth, height=1.4\linewidth]{res/img/example/3.jpg} \end{minipage} & \begin{minipage}[c]{0.3\linewidth} \includegraphics[width=\linewidth, height=1.4\linewidth]{res/img/example/5.jpg} \end{minipage} \\ (a) & (b) & (c) \\ \multicolumn{3}{c}{\includegraphics[width=\linewidth]{res/img/imbalance.pdf}} \\ \multicolumn{3}{c}{(d)}\\ \end{tabular} \caption{(a): $\textless$vase-sitting on-table$\textgreater$; (b): $\textless$man-sitting on-chair$\textgreater$; (c): $\textless$dog-sitting on-chair$\textgreater$. (a)(b)(c) have completely different visual appearances but are considered as the same relation class. (d): The long-tailed distribution of independent relation classes.} \label{fig:motivation} \end{figure} \begin{figure*} \centering \begin{tabular}{c c c} \begin{minipage}[c]{0.36\linewidth} \includegraphics[width=\linewidth]{res/img/intro_2.pdf} \end{minipage} & \begin{minipage}[c]{0.23\linewidth} \includegraphics[width=\linewidth]{res/img/intro_0.pdf} \end{minipage} & \begin{minipage}[c]{0.36\linewidth} \includegraphics[width=\linewidth]{res/img/intro_1.pdf} \end{minipage} \\ (a) & (b) & (c) \\ \end{tabular} \caption{(a): The global knowledge graph of VG; (b): Unstructured output space in which the relation classifier is shared among all subject-object pairs; (c): Structured output space of HOSE-Net in which the relation classifier is context-specific.} \label{fig:intro} \end{figure*} In recent years, visual recognition tasks for scene understanding have made remarkable progress, particularly in object detection and instance segmentation. 
While accurate identification of objects is a critical part of visual recognition, higher-level scene understanding requires higher-level information about objects. Scene graph generation aims to provide more comprehensive visual clues than individual object detectors by understanding object interactions. Such scene graphs serve as structural representations of images by describing objects as nodes (``subjects/objects'') and their interactions as edges (``relations''), which benefits many high-level vision tasks such as image captioning\cite{li2019know,yang2019auto,guo2019aligning}, visual question answering\cite{teney2017graph,peng2019cra} and image generation\cite{johnson2018image}. In scene graph generation, we actually obtain a set of visual phrases $\textless$subject-relation-object$\textgreater$ and the locations of objects in the image. The triples of each scene graph form a local knowledge graph of the image, and the triples of the whole training set form a global knowledge graph of relationships, as shown in Figure~\ref{fig:intro} (a). Scene graph generation remains a challenging task because deep neural networks, being continuous by nature, cannot directly predict structured data. It is therefore common practice to divide the scene graphs into classifiable graph elements. \cite{sadeghi2011recognition} divides them into visual phrase classes. However, this is infeasible because the number of such classes grows super-linearly with the number of objects and relations. A widely-adopted strategy is to divide them into independent object classes and relation classes\cite{lu2016visual}. Most methods classify the objects separately and then apply local graph structures to learn contextual object representations for relation classification\cite{xu2017scene,qi2019attentive,chen2019knowledge}. However, they ignore the fact that the output space should also be contextual and structure-aware, and adopt an unstructured one as shown in Figure~\ref{fig:intro} (b). Hence these methods suffer from drastic intra-class variations. 
For example, given the relation ``sit on'', the visual contents vary from ``vase-sit on-table'' to ``dog-sit on-chair'' as shown in Figure~\ref{fig:motivation} (a)(b)(c). On the other hand, the distribution of these independent relation classes is seriously unbalanced, as shown in Figure~\ref{fig:motivation} (d). To mitigate the issues mentioned above, we propose a novel higher order structure embedded network (HOSE-Net), which consists of a visual module, a structure-aware embedding-to-classifier (SEC) module and a hierarchical semantic aggregation (HSA) module. The SEC module is designed to construct a contextual and structured output space. First, since objects serve as contexts in relationships, SEC learns context embeddings which encode the objects' behavior patterns as subjects or objects, and transfers this knowledge among the classifiers it connects to based on the overall class structure. It adopts a graph convolution network\cite{kipf2016semi} to propagate messages on the local graphs with the guidance of object co-occurrence statistics\cite{mensink2014costa}. Second, SEC learns a mapping function to project the context embeddings to the related relation classifiers. This mapping function is shared among all contexts, which implicitly encodes the statistical correlations among objects and relations and organizes a global knowledge graph based output space, shown in Figure~\ref{fig:intro} (c). Since the unbalanced relation data are distributed into different subspaces, SEC can alleviate the long-tailed distribution and the intra-class variations. However, even if the context-specific classifiers can share statistical strengths via the context embeddings, distributing the training samples into a large set of subspaces can still cause sparsity issues. To address this problem, we are guided by the observation that object-based contexts can be redundant or noisy, since relations are often defined in more abstract contexts. 
For example, ``ride'' in ``man-ride-horse'' and ``woman-ride-elephant'' can be summarized as ``people-ride-animal''. Accordingly, we propose a hierarchical semantic aggregation (HSA) module to mine the latent higher order structures in the global knowledge graph. HSA hierarchically clusters the graph nodes following the principle that, if two objects have similar behavior patterns in the relationships they are involved in, the contexts they create can be embedded together; it is designed to find a good strategy to redistribute the samples into a smaller set of subspaces. An object semantic hierarchy emerges in the process even though HSA only uses the graph structure. This is unsurprising, because a semantic hierarchy is based on the properties of objects, which are also highly relevant to their behavior patterns in relationships. In summary, the proposed Higher Order Structure Embedded Network (HOSE-Net) uses embedding methods to construct a structured output space. By modeling the inter-dependencies among object labels and relation labels, the serious intra-class variations and the long-tailed distribution can be alleviated. Moreover, clustering methods are used to make the structured output space more scalable and generalizable. \begin{figure*} \centering \includegraphics[width=\linewidth]{res/img/overview.pdf} \caption{The framework of our HOSE-Net.
It consists of three modules: (1) a visual module which outputs detection results and prepares subject-object pairs for relation representation learning; (2) a SEC module which embeds the object labels into context embeddings by message passing and maps them to classifiers; (3) an HSA module which distills higher order structural information for context embedding learning.} \label{fig:overview} \end{figure*} Our contributions are as follows: \begin{enumerate}[(1)] \item We propose to map the object-based contexts in relationships into a high-dimensional space to learn contextual and structure-aware relation classifiers via a novel structure-aware embedding-to-classifier module. This module can be integrated with other works focusing on visual feature learning. \item We design a hierarchical semantic aggregation module to distill higher order structural information for learning a higher order structured output space. \item We extensively evaluate our proposed HOSE-Net, which achieves new state-of-the-art performance on the challenging Visual Genome \cite{krishna2017visual} and VRD~\cite{lu2016visual} benchmarks. \end{enumerate} \section{Related Work} \textbf{Scene Graph Generation.} Recently, the task of scene graph generation has been proposed to understand the interactions between objects. \cite{sadeghi2011recognition} decomposes the scene graphs into a set of visual phrase classes and designs a detection model to directly detect them from the image. Considering each visual phrase as a distinct class would fail, since the number of visual phrase triples can be very large even with a moderate number of objects and relations. An alternative strategy is to decompose the scene graphs into object classes and relation classes, in which case the graph structure of the output data is completely collapsed. Most of these methods focus on modeling the inter-dependencies of objects and relations in the visual representation learning.
\cite{dai2017detecting} embeds the statistical inference procedure into the deep neural network via structural learning. \cite{xu2017scene} constructs bipartite sub-graphs of scene graphs and uses RNNs to refine the visual features by iterative message passing. \cite{chen2019knowledge,qi2019attentive,cui2018context} use graph neural networks to learn contextual visual representations. However, these methods still suffer from highly diverse appearances within each relation class because they all adopt flat and independent relation classifiers. In this paper, we argue that the structural information, including the local and the global graph structures of the output data, is vital for regularizing a semantic space. \\ \textbf{Learning Correlated Classifiers With Knowledge Graph}. Zero-shot learning (ZSL) models need to transfer the knowledge learned from the training classes to the unknown test classes. A main research direction is to represent each class with learned vector representations. In this way, the correlations between known classes and unknown classes help transfer the knowledge learned from the training classes to the unknown test classes by mapping the embeddings to visual classifiers. Knowledge Graphs (KGs) effectively capture explicit relational knowledge about individual entities, hence many methods\cite{wang2018zero,kampffmeyer2019rethinking,hascoet2019semantic,gao2019know,zhang2019tgg} use KGs to learn the class correlations. In scene graph generation, the relation classes are correlated by object classes as in the knowledge graph, and this structural information is vital for a well-defined output space. We indirectly learn vector representations of the objects' roles in relationships, which are mapped to the visual relation classifiers via the knowledge graph structure. \section{Approach} \subsection{Overview} We formally define a scene graph as $G = \left \{ B, O, R \right \}$.
$O = \left \{ o_1, o_2, \dots, o_n \right \}$ is the object set and $o_i$ denotes the i-th object in the image. $B = \left \{ b_1, b_2, \dots, b_n \right \}$ is the bounding box set and $b_i$ denotes the bounding box of $o_i$. $R = \left \{ r_{o_1 \rightarrow o_2}, r_{o_2 \rightarrow o_3}, \dots, r_{o_{(n-1)} \rightarrow o_n} \right \}$ is the edge set and $r_{o_i \rightarrow o_j}$ denotes the relation between subject $o_i$ and object $o_j$. The probability distribution of the scene graph $\Pr(G | I)$ is formulated as: \begin{equation} \Pr(G | I) = \Pr(B|I)\Pr(O|B,I)\Pr(R|B,O,I) \end{equation} We follow the widely-adopted two-stage pipeline\cite{zellers2018neural} to generate scene graphs. The first stage is object detection, including object localization ($\Pr(B|I)$) and object recognition ($\Pr(O|B,I)$). The second stage is relation classification ($\Pr(R|B,O,I)$). Our proposed HOSE-Net consists of a visual module, a SEC module and an HSA module. Section~\ref{sec:visrep} introduces the visual module. The major component is an object detector, which outputs $B$, $O$ and the region features $F=\left \{ f_1, f_2, \dots, f_n \right \}$. Then a set of object pairs $\left \{ (f_s, f_t), (o_s, o_t), (b_s,b_t) \right \}$ is produced, where $s \neq t$; $s, t = 1, \dots, n$. The union box feature $f_u$ for each pair is extracted by a relation branch. The spatial feature $f_{spt}$ for each pair is learned from $(b_s,b_t)$ by a spatial module. Section~\ref{sec:secm} introduces the structure-aware embedding-to-classifier (SEC) module. First, we construct local graphs to transfer statistical information between context embeddings of $O$. Then the context embeddings are mapped to a set of primitive classifiers. The classifier for each relation representation is adaptively generated by concatenating the primitive classifiers according to the pair label $(o_s, o_t)$. Section~\ref{sec:hsa} introduces the hierarchical semantic aggregation (HSA) module.
Based on the resulting semantic hierarchy, HSA creates a context dictionary $\mathcal{D}$ to map $o_i$ to a one-hot encoding $c_i \in \mathbb{R}^{K}$, where $K \in \left [ 1,N \right ] $ is the number of context embeddings and $N$ is the number of object classes. The overall pipeline is shown in Figure~\ref{fig:overview}. \subsection{Visual Representation}\label{sec:visrep} \textbf{Object Detection}. In the first stage, object detection is implemented by a Faster RCNN\cite{ren2015faster}. From the detection results, a set of subject-object region feature pairs $(f_s, f_t)$ with label information $(o_s, o_t)$ and coordinates of the subject box $(x_s,y_s,w_s,h_s)$, object box $(x_t,y_t,w_t,h_t)$ and union box $(x_u,y_u,w_u,h_u)$ is produced. Then a separate relation branch uses three bottlenecks to refine the image feature and extracts the union box feature $f_u$ of each subject-object pair by RoI pooling. While the Faster RCNN branch focuses on learning discriminative features for objects, the relation branch focuses on learning interactive parts of two objects.\\ \\ \textbf{Relation Representation}. Most existing methods explore visual representation learning for relations. To establish the connections between objects, they usually build graphs to associate the detected regions and use message passing frameworks to learn contextualized visual representations. Then the fusion features of the subjects and objects are projected to a set of independent relation labels by a softmax function. Whether the relation classifiers are structured and contextualized has been little explored. To verify the effectiveness of adopting a structured output space, we don't use a graph structure for learning the visual representations.
Given the triple region features from the detection module $(f_s, f_t, f_u)$, the visual representation of the relation is: \begin{equation} r_{st} = \Psi_{st}([f_u; f_s; f_t]) \end{equation} where $[;]$ is the concatenation operation and $\Psi_{st}$ is a linear transformation. $[f_u; f_s; f_t] \in \mathbb{R}^{3d_f}$. \\ \\ \textbf{Spatial Representation}. The relative positions of the subject boxes and the object boxes are also valuable spatial cues for recognizing the relations. The normalized box coordinates $\widehat{b_i}$ are computed as $[\frac{x}{w_{img}},\frac{y}{h_{img}},\frac{x+w}{w_{img}},\frac{y+h}{h_{img}},\frac{wh}{w_{img} h_{img}}]$ where $w_{img}$ and $h_{img}$ are the width and height of the image. The relative spatial feature $b_{st}$ is encoded as $[\frac{x_s-x_t}{w_t},\frac{y_s-y_t}{h_t},\log\frac{w_s}{w_t},\log\frac{h_s}{h_t}]$. The final spatial representation is the concatenation of the normalized features and the relative features of the subject and object boxes: \begin{equation} f_{spt} = \Psi_{spt}([\widehat{b_s},\widehat{b_t},b_{st}]) \end{equation} where $\Psi_{spt}$ is a linear transformation and $[\widehat{b_s},\widehat{b_t},b_{st}] \in \mathbb{R}^{14}$. \subsection{Structure-Aware Embedding-to-Classifier}\label{sec:secm} Given the object label information $O = \left \{ o_1, o_2, \dots, o_n \right \}$, our proposed SEC module generates dynamic classifiers for relation representations according to the pair label $(o_s,o_t)$. First, we embed the object labels into higher level context embeddings. The one-hot context encodings of objects $\mathcal{C} = \left \{ c_1, c_2, \dots, c_n \right \},c_i \in \mathbb{R}^{K}$ are obtained through the context dictionary $\mathcal{D}$, which will be discussed in \ref{sec:hsa}.
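Before moving on, the relation and spatial encodings of Section~\ref{sec:visrep} can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the helper names are assumptions.

```python
import numpy as np

def relation_input(f_u, f_s, f_t):
    # [f_u; f_s; f_t]: the 3*d_f input to the linear map Psi_st
    return np.concatenate([f_u, f_s, f_t])

def normalized_box(x, y, w, h, w_img, h_img):
    # 5-d normalized coordinates of a single box
    return [x / w_img, y / h_img, (x + w) / w_img,
            (y + h) / h_img, (w * h) / (w_img * h_img)]

def relative_box(sub, obj):
    # 4-d relative encoding b_st of the subject box w.r.t. the object box
    xs, ys, ws, hs = sub
    xt, yt, wt, ht = obj
    return [(xs - xt) / wt, (ys - yt) / ht,
            np.log(ws / wt), np.log(hs / ht)]

def spatial_input(sub, obj, w_img, h_img):
    # 14-d input (5 + 5 + 4) to the linear map Psi_spt
    return np.array(normalized_box(*sub, w_img, h_img)
                    + normalized_box(*obj, w_img, h_img)
                    + relative_box(sub, obj))
```

The linear maps $\Psi_{st}$ and $\Psi_{spt}$ would then be applied to these vectors inside the network.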
The context embeddings $\mathcal{E} = \left \{ e_1, e_2, \dots, e_n \right \}, e_i \in \mathbb{R}^{d_e}$ are generated as follows: \begin{equation} e_i = W_{e}c_i \end{equation} where $W_{e} \in \mathbb{R}^{d_e \times K}$ is a context embedding matrix to be learned. Then the context embeddings $\mathcal{E}$ are fed into a graph convolution network to learn local contextual information based on object co-occurrences. We model the co-occurrence pattern in the form of conditional probability, i.e., $\mathcal{P}_{ij}$ denotes the probability of occurrence of the j-th object class when the i-th object class appears. We count the co-occurrences of object pairs in the training set and obtain the matrix $\mathcal{T}\in \mathbb{R}^{N \times N}$, where $N$ is the number of object classes and $\mathcal{T}_{ij}$ denotes the number of co-occurrences of a label pair. The conditional probability matrix $\mathcal{P}$ is computed by: \begin{equation} \mathcal{P}_{ij} = \frac{\mathcal{T}_{ij}}{\sum_{j}^{N}\mathcal{T}_{ij}} \end{equation} where $\sum_{j}^{N}\mathcal{T}_{ij}$ denotes the total number of occurrences of the i-th object class in the training set. Then the adjacency matrix $A \in \mathbb{R}^{n \times n}$ of the local contextual graph is produced by: \begin{equation} A_{ij} = \mathcal{P}_{o_i, o_j} \end{equation} The update rule of each GCN layer is: \begin{equation} \mathcal{E} ^{l + 1} = f(\mathcal{E} ^{l}, A) \end{equation} where $f$ is the graph convolution operation of \cite{kipf2016semi}. The node outputs of the final GCN layer are the primitive classifiers $W_{prim} = \left \{ w_1, w_2, \dots, w_n \right \}, w_i \in \mathbb{R}^{\frac{d_{cls}}{2}}$, formulated as: \begin{equation} w_i = {e_i}^{l + 1} = \sigma(\sum_{j\in \mathbb{N}_{i}}A_{ij}U{e_j}^{l}) \end{equation} where $U \in \mathbb{R}^{d_e \times \frac{d_{cls}}{2}}$ is the transformation matrix to be learned, $\mathbb{N}_{i}$ is the neighbor node set of $e_i$, and $\sigma$ is a nonlinear activation function.
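The co-occurrence statistics and the propagation step above can be sketched as a simplified NumPy version. The single dense layer and the tanh activation are assumptions for illustration; the paper uses the graph convolution of Kipf and Welling.

```python
import numpy as np

def cond_prob(T):
    # P[i, j] = T[i, j] / sum_j T[i, j]: probability of class j
    # occurring given that class i appears in an image
    return T / T.sum(axis=1, keepdims=True)

def local_adjacency(P, obj_labels):
    # A[i, j] = P[o_i, o_j] for the n objects detected in one image
    return P[np.ix_(obj_labels, obj_labels)]

def gcn_layer(E, A, U, act=np.tanh):
    # one propagation step over the local contextual graph; the node
    # outputs of the final layer serve as the primitive classifiers w_i
    return act(A @ E @ U)
```

With $d_e = 8$ and $d_{cls}/2 = 16$, feeding $n$ context embeddings through this layer yields $n$ primitive classifier vectors.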
For each subject-object label pair $(o_s,o_t)$, the visual classifier $W_{st}$ is a composition of two primitive classifiers according to its context: \begin{equation} W_{st} = [w_{o_s};w_{o_t}] \in \mathbb{R}^{d_{cls}} \end{equation} where $[;]$ is the concatenation operation. Applying the learned classifier to the relation representation gives the predicted scores: \begin{equation} \hat{y} = W_{st}[r_{st};f_{spt}] \end{equation} \subsection{Hierarchical Semantic Aggregation}\label{sec:hsa} \begin{figure} \centering \begin{minipage}[c]{\linewidth} \includegraphics[width=\linewidth]{res/img/graph_1.pdf} \end{minipage} (a) \begin{minipage}[c]{\linewidth} \includegraphics[width=\linewidth]{res/img/graph_2.pdf} \end{minipage} (b) \caption{The connectivity subgraph of (street, sidewalk) is illustrated in two parts: (street, sidewalk) are objects (a) or subjects (b) in $\textless$subject-relation-object$\textgreater$. The blue edges are the connected path and the yellow edges are the unconnected path.} \label{fig:scene_graph_example} \end{figure} Even though the global knowledge graph exhibits rich lower-order connectivity patterns captured at the level of objects and relations, new problems emerge if we create context embeddings for all object classes. When the number of classes $N$ increases, many context-specific classifiers cannot obtain sufficient training samples due to data sparsity, which harms scalability. The motivation of HSA is that, although relations exist among concrete objects, the objects actually share many similar higher level behavior patterns in the overall contexts, and there exist higher order connectivity patterns in the class structure which are essential for understanding object behaviors in relationships. We design a clustering algorithm for mining the higher order structural information based on behavior patterns of objects. The connectivity pattern with respect to two nodes $q_s, q_t$ of the knowledge graph $KG$ is represented in a subgraph $SG$.
$SG$ includes two sets of connection nodes $L_s,L_o$ between $q_s, q_t$: \begin{eqnarray} q_i\in L_s \Leftrightarrow r_{q_i \rightarrow q_s} = r_{q_i \rightarrow q_t}\\ q_i\in L_o \Leftrightarrow r_{q_i \leftarrow q_s} = r_{q_i \leftarrow q_t} \end{eqnarray} where $r_{q_i \rightarrow q_j}$ denotes the relation between $q_i$ and $q_j$. Figure~\ref{fig:scene_graph_example} illustrates an example, which visualizes the common patterns between street and sidewalk: both are made of tile, can be covered in snow, can be walked on by a person, and have buildings nearby. If the behavior patterns of $q_s$ and $q_t$ have a large overlap, they can be clustered into a higher-level node. The similarity score between $q_s, q_t$ is defined as: \begin{equation} \begin{split} f_{sim}(q_s, q_t) & = \frac{\left | L_s \right |}{d_{in}(q_s) + d_{in}(q_t) - \left | L_s \right |} \\ & + \frac{\left | L_o \right |}{d_{out}(q_s) + d_{out}(q_t) - \left | L_o \right |}, \end{split} \end{equation} where $d_{in}(q_i)/d_{out}(q_i)$ denotes the number of incoming/outgoing edges of node $q_i$, which represents the number of occurrences of $q_i$ in all relationships as object/subject respectively. $\left | L_s \right|/\left | L_o \right |$ denotes the number of nodes in $L_s/L_o$, which represents the number of common behavior patterns of $q_s, q_t$ as object/subject. This measure is fully based on the graph structure, not on distributed representations of nodes from external knowledge graphs. \\ \begin{algorithm}[tb] \caption{Hierarchical semantic aggregation} \label{alg:optim} \begin{algorithmic}[1] \Require{$KG = \left \{ (q_1^0,q_2^0,...,q_N^0),(r_{q_1^0 \rightarrow q_2^0},...,r_{q_i^0 \rightarrow q_j^0},...)
\right \}$, similarity measure function $f_{sim}$, cluster number $K$} \For{\texttt{$i = 1,2,...,N$}} \State{$\lambda_{i} = 1 $} \For{\texttt{$j = 1,2,...,N$}} \State{$Sim(i,j) = f_{sim}(q_i^0,q_j^0)$} \State{$Sim(j,i) = Sim(i,j)$} \EndFor \EndFor \State {Set current cluster number $num = N$} \While{$num > K$}\Comment{Find the two most similar node clusters} \State{ $q_{i}^{l_i}, q_{j}^{l_j} \gets SELECTMAX(Sim(i,j) / (\lambda_{i} + \lambda_{j}))$} \State $q_{i}^{l_{ij}} \gets MERGE(q_{i}^{l_i},q_{j}^{l_j})$ \State $KG,Sim \gets UPDATE(KG,Sim)$ \State $\lambda_{i} \gets \lambda_{i} + \lambda_{j} + 1$ \State $REINDEX(\lambda)$ \State $num \gets num - 1$ \EndWhile \State{$\mathcal{D} \gets GETDICT((q_1^0,q_2^0,...,q_N^0),(q_1^l,q_2^l,...,q_K^l))$} \Ensure{$\mathcal{D}$} \end{algorithmic} \end{algorithm} \noindent\textbf{Algorithm.} We use hierarchical agglomerative clustering to find the node clusters, as shown in Algorithm~\ref{alg:optim}. At each iteration, we merge the two clusters with the most similar behavior patterns and update the knowledge graph by replacing the two clustered nodes with a higher level node. Since the given triples are incomplete and unbalanced, we introduce a penalty term $\lambda$ to prevent objects that occur frequently in annotated relationships from dominating the clustering. When the number of clusters reaches the given $K$, the algorithm stops iterating. We encode the clustering results as a dictionary $\mathcal{D}$ that maps the $N$ objects to one-hot encodings of dimension $K$, so the objects within the same cluster share the same context embedding. In this way, the output space is reorganized into a smaller one. Even if the clusters are not reasonable for all relations, experiments show that the context embeddings can still learn useful shared structure in the high-dimensional space.
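The similarity measure and the agglomerative procedure of Algorithm~\ref{alg:optim} can be sketched as below. This is a set-based simplification under assumed data structures (triples as tuples, average-linkage between clusters); the actual algorithm operates on the weighted knowledge graph and updates it in place.

```python
def behavior_similarity(triples, qs, qt):
    # Jaccard-style overlap of the incoming and outgoing edge sets of
    # two object classes, mirroring f_sim: a shared incoming edge
    # (subject, relation) corresponds to the L_s term, a shared
    # outgoing edge (relation, object) to the L_o term.
    inc = lambda q: {(s, r) for s, r, o in triples if o == q}
    out = lambda q: {(r, o) for s, r, o in triples if s == q}
    def overlap(a, b):
        union = len(a) + len(b) - len(a & b)
        return len(a & b) / union if union else 0.0
    return overlap(inc(qs), inc(qt)) + overlap(out(qs), out(qt))

def aggregate(objects, triples, K):
    # Agglomerative clustering with the penalty term lambda; returns
    # the context dictionary D: object class -> cluster index.
    clusters = [[o] for o in objects]
    lam = [1] * len(clusters)
    def cluster_sim(ci, cj):
        # average pairwise similarity between two clusters (assumption)
        return sum(behavior_similarity(triples, a, b)
                   for a in ci for b in cj) / (len(ci) * len(cj))
    while len(clusters) > K:
        i, j = max(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda p: cluster_sim(clusters[p[0]], clusters[p[1]])
                   / (lam[p[0]] + lam[p[1]]))
        merged, mlam = clusters[i] + clusters[j], lam[i] + lam[j] + 1
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        lam = [l for k, l in enumerate(lam) if k not in (i, j)]
        clusters.append(merged)
        lam.append(mlam)
    return {o: k for k, c in enumerate(clusters) for o in c}
```

Objects with strongly overlapping relationship contexts (e.g. street and sidewalk) end up sharing one context embedding dimension.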
\\ \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}[c]{0.48\linewidth} \includegraphics[height=\linewidth]{res/img/tree.pdf} \end{minipage} & \begin{minipage}[c]{0.52\linewidth} \includegraphics[height=\linewidth]{res/img/tree1.pdf} \end{minipage} \\ (a) & (b) \\ \end{tabular} \caption{(a) The semantic hierarchy of VG. (b) The semantic hierarchy of VRD.} \label{fig:optim_result} \end{figure*} \section{Experiments} \begin{table*} \centering \begin{tabular}{l l | l | l l | l l | l l} \hline SEC & HSA & & \multicolumn{2}{c}{SGDET} \vline & \multicolumn{2}{c}{SGCLS} \vline & \multicolumn{2}{c}{PRDCLS}\\ & & Recall at & 50 & 100 & 50 & 100 & 50 & 100 \\ \hline \xmark & \xmark & Baseline $(K=1)$ & 28.1 & 32.5 & 34.8 & 36.4 & 64.6 & 67.3\\ \cmark & \xmark & HOSE-Net $(K=150)$ & 28.6 & 33.1 & 36.2 & 37.3 & 66.5 & 69.0\\ \cmark & \cmark & HOSE-Net $(K=40)$ & \textbf{28.9} & \textbf{33.3} & \textbf{36.3} & \textbf{37.4} & \textbf{66.8} & \textbf{69.2}\\ \hline \end{tabular} \caption{Ablation study on the SEC module and HSA module.} \label{tab:compair_m} \end{table*} \begin{figure} \centering \includegraphics[width=\linewidth, height=0.45\linewidth]{res/img/ablation.pdf} \caption{Ablation study on the clustering number. 
We draw the performance curves of SGCLS and PRDCLS on K = 1, 10, 20, 30, 40, 50, 70, 100, 130, 150.} \label{fig:compair_m} \end{figure} \begin{table*} \centering \begin{tabular}{l | l l | l l | l l} \hline & \multicolumn{2}{c}{SGDET} \vline & \multicolumn{2}{c}{SGCLS} \vline & \multicolumn{2}{c}{PRDCLS}\\ Recall at & 50 & 100 & 50 & 100 & 50 & 100 \\ \hline SEC ($K=150$) & 28.6 & 33.1 & 36.2 & 37.3 & 66.5 & 69.0\\ SEC + K-means with word2vec embeddings ($K=40$) & 28.7 & 33.1 & 36.2 & 37.3 & 66.4 & 68.9\\ SEC + HSA ($K=40$) & \textbf{28.9} & \textbf{33.3} & \textbf{36.3} & \textbf{37.4} & \textbf{66.8} & \textbf{69.2}\\ \hline \end{tabular} \caption{Comparison between the clustering in the HSA module and K-means clustering with word2vec embeddings.} \label{tab:kmeans} \end{table*} \begin{table*} \centering \begin{tabular}{l | l l l l l l | l l l l l l} \hline & \multicolumn{6}{c}{Graph Constraint} \vline & \multicolumn{6}{c}{No Graph Constraint}\\ & \multicolumn{2}{c}{SGDET} & \multicolumn{2}{c}{SGCLS} & \multicolumn{2}{c}{PRDCLS} \vline & \multicolumn{2}{c}{SGDET} & \multicolumn{2}{c}{SGCLS} & \multicolumn{2}{c}{PRDCLS}\\ Recall at & 50 & 100 & 50 & 100 & 50 & 100 & 50 & 100 & 50 & 100 & 50 & 100 \\ \hline VRD~\cite{lu2016visual} & 0.3 & 0.5 & 11.8 & 14.1 & 27.9 & 35.0 & - & - & - & - & - & -\\ ISGG~\cite{xu2017scene} & 3.4 & 4.2 & 21.7 & 24.4 & 44.8 & 53.1 & - & - & - & - & - & -\\ MSDN~\cite{li2017scene} & 7.0 & 9.1 & 27.6 & 29.9 & 53.2 & 57.9 & - & - & - & - & - & - \\ AsscEmbed~\cite{newell2017pixels} & 8.1 & 8.2 & 21.8 & 22.6 & 54.1 & 55.4 & 9.7 & 11.3 & 26.5 & 30.0 & 68.0 & 75.2\\ Message Passing+~\cite{zellers2018neural} & 20.7 & 24.5 & 34.6 & 35.4 & 59.3 & 61.3 & 22.0 & 27.4 & 43.4 & 47.2 & 75.2 & 83.6\\ Frequency~\cite{zellers2018neural} & 23.5 & 27.6 & 32.4 & 34.0 & 59.9 & 64.1 & 25.3 & 30.9 & 40.5 & 43.7 & 71.3 & 81.2\\ Frequency+Overlap~\cite{zellers2018neural} & 26.2 & 30.1 & 32.3 & 32.9 & 60.6 & 62.2 & 28.6 & 34.4 & 39.0 & 43.4 & 75.7 & 82.9\\
MotifNet-LeftRight~\cite{zellers2018neural} & 27.2 & 30.3 & 35.8 & 36.5 & 65.2 & 67.1 & 30.5 & 35.8 & \textbf{44.5} & 47.7 & 81.1 & 88.3\\ GraphRCNN~\cite{yang2018graph} & 11.4 & 13.7 & 29.6 & 31.6 & 54.2 & 59.1 & - & - & - & - & - & - \\ KERN~\cite{chen2019knowledge} & 27.1 & 29.8 & \textbf{36.7} & 37.4 & 65.8 & 67.6 & - & - & - & - & - & -\\ VCTREE~\cite{tang2019learning} & 27.7 & 31.1 & 37.9 & \textbf{38.6} & 66.2 & 67.9 & - & - & - & - & - & -\\ \hline HOSE-Net ($K=40$) & \textbf{28.9} & \textbf{33.3} & 36.3 & 37.4 & \textbf{66.7} & \textbf{69.2} & \textbf{30.5} & \textbf{36.3} & 44.2 & \textbf{48.1} & \textbf{81.1} & \textbf{89.2}\\ \hline \hline RelDN$^\ast$~\cite{zhang2019graphical} & 28.3 & 32.7 & 36.8 & 36.8 & 68.4 & 68.4 & 30.4 & \textbf{36.7} & 48.9 & 50.8 & 93.8 & 97.8\\ HOSE-Net$^\ast$ ($K=40$) & \textbf{28.9} & \textbf{33.3} & \textbf{37.3} & \textbf{37.3} & \textbf{70.1} & \textbf{70.1} & \textbf{30.5} & 36.3 & \textbf{49.7} & \textbf{51.2} & \textbf{94.6} & \textbf{98.2}\\ \hline \end{tabular} \caption{Comparison with state-of-the-art methods on Visual Genome. HOSE-Net$^\ast$ uses the evaluation metric in~\cite{zhang2019graphical}.} \label{tab:sota_vg} \end{table*} \subsection{Datasets} \textbf{Visual Genome\cite{krishna2017visual}.} It is a large-scale dataset with 75729 object classes and 40480 relation classes. There are several modified versions for scene graph generation. In this paper, we follow the widely-used train/val splits in which the most frequent 150 objects and 50 relations are chosen. We measure our method on VG in three tasks: \begin{enumerate} \item Predicate classification (PRDCLS): Given the ground truth annotations of the object classes and bounding boxes, predict the relation type of each object pair. \item Scene graph classification (SGCLS): Given the ground truth annotations of object bounding boxes, predict the object classes and the relation type of each object pair.
\item Scene graph detection (SGDET): Predict the bounding boxes, the object classes and the relation type of each object pair. \end{enumerate} We use Recall@50 and Recall@100 as our evaluation metrics. Recall@x computes the fraction of ground-truth relationships hit in the top x most confident relationship predictions. Precision and average precision (AP) are not proper metrics for this task because only a fraction of relationships are annotated, so they would penalize correct detections that are not in the ground truth. We report the Graph Constraint Recall@x following \cite{lu2016visual}, which only involves the highest-scoring relation prediction of each subject-object pair in the recall ranking. We also report the No Graph Constraint Recall@x following \cite{newell2017pixels}, which involves all 50 relation scores of each subject-object pair in the recall ranking and thus allows multiple relations to exist between two objects. \noindent\textbf{VRD\cite{lu2016visual}} contains 4000 training and 1000 test images including 100 object classes and 70 relations. We follow \cite{yu2017visual} to measure our method on VRD in two tasks: \begin{enumerate} \item Phrase detection: Predict the visual phrase triplets $\textless$subject-relation-object$\textgreater$ and localize the union bounding box of each object pair. \item Relationship detection: Predict the visual phrase triplets $\textless$subject-relation-object$\textgreater$ and localize the bounding boxes of subjects and objects. \end{enumerate} We report Recall@50 and Recall@100 when involving 1, 10 and 70 relation predictions per object pair in the recall ranking as the evaluation metrics.
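The Recall@x metric described above can be sketched as follows; the triple encoding and the per-image aggregation are assumptions for illustration.

```python
def recall_at_x(predictions, ground_truth, x):
    # predictions: list of (confidence, (subject, relation, object));
    # ground_truth: set of annotated triples for one image.
    # Recall@x = fraction of ground-truth triples recovered among the
    # top-x most confident predictions.
    top = sorted(predictions, key=lambda p: -p[0])[:x]
    hits = {t for _, t in top} & ground_truth
    return len(hits) / len(ground_truth)
```

In practice the metric is averaged over all test images, and the graph-constraint variant keeps only the highest-scoring relation per subject-object pair before ranking.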
\begin{table*} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l | l l l l l l | l l l l l l} \hline & \multicolumn{6}{c}{Relationship Detection} \vline & \multicolumn{6}{c}{Phrase Detection}\\ & \multicolumn{2}{c}{rel=1} & \multicolumn{2}{c}{rel=10} & \multicolumn{2}{c}{rel=70} \vline& \multicolumn{2}{c}{rel=1} & \multicolumn{2}{c}{rel=10} & \multicolumn{2}{c}{rel=70}\\ Recall at & 50 & 100 & 50 & 100 & 50 & 100 & 50 & 100 & 50 & 100 & 50 & 100\\ \hline VTransE~\cite{zhang2017visual} & 19.4 & 22.4 & - & - & - & - & 14.1 & 15.2 & - & - & - & -\\ ViP-CNN~\cite{li2017vip} & 17.32 & 21.01 & - & - & - & - & 22.78 & 27.91 & - & - & - & -\\ VRL~\cite{liang2017deep} & 18.19 & 20.79 & - & - & - & - & 21.37 & 22.60 & - & - & - & -\\ KL distilation~\cite{yu2017visual} & 19.17 & 21.34 & \textbf{22.56} & \textbf{29.89} & \textbf{22.68} & \textbf{31.89} & 23.14 & 24.03 & 26.47 & 29.76 & 26.32 & 29.43\\ MF-URLN~\cite{zhan2019exploring} & 23.9 & 26.8 & - & - & - & - & 31.5 & 36.1 & - & - & - & -\\ Zoom-Net~\cite{yin2018zoom} & 18.92 & 21.41 & - & - & 21.37 & 27.30 & 24.82 & 28.09 & - & - & 29.05 & 37.34\\ CAI + SCA-M~\cite{yin2018zoom} & 19.54 & 22.39 & - & - & 22.34 & 28.52 & 25.21 & 28.89 & - & - & \textbf{29.64} & \textbf{38.39}\\ RelDN (ImageNet)~\cite{zhang2019graphical} & 19.82 & 22.96 & 21.52 & 26.38 & 21.52 & 26.38 & 26.37 & 31.42 & 28.24 & 35.44 & 28.24 & 35.44\\ \hline HOSE-Net ($K=18$) & \textbf{20.46} & \textbf{23.57} & 22.13 & 27.36 & 22.13 & 27.36 & \textbf{27.04} & \textbf{31.71} & \textbf{28.89} & \textbf{36.16} & 28.89 & 36.16\\ \hline \end{tabular} } \caption{Comparison with state-of-the-art methods on VRD.} \label{tab:sota_vrd} \end{table*} \subsection{Implementation Details} HOSE-Net adopts a two-stage pipeline. The object detector is Faster RCNN with a VGG backbone initialized by COCO pre-trained weights for Visual Genome and ImageNet pre-trained weights for VRD and then finetuned on the visual relationship datasets. The backbone weights are fixed. 
For stable training, we add an unstructured relation classifier as a separate branch for joint training. Considering the dataset scale and quality, we adopt different training mechanisms for Visual Genome and VRD. In the Visual Genome experiments, we set $lr = 0.001$ for the structured classifier and $lr = 0.01$ for the unstructured one. During testing, we evaluate the structured classifier. In VRD, the loss weight of the unstructured classifier is 0.7, and that of the structured one is 0.3. During testing, the result is the weighted sum of the two classifiers. Since the statistical bias is a widely-adopted strategy in the two-stage pipeline, we train a bias vector and fuse the bias results with the visual module results during testing, following \cite{zellers2018neural}. The proposed framework is implemented in PyTorch. All experiments are conducted on servers with 8 NVIDIA Titan X GPUs with 12 GB memory each. The batch size in the training phase is set to 8. $d_f$ is set to 4096 and $d_e$ is set to 512. \subsection{Ablation Study} We perform ablation studies to better examine the effectiveness of our framework. \\ \textbf{Structured Output Space with Cluster Number K.} We perform an ablation study to validate the effectiveness of the SEC module, which learns a structured output space, and the HSA module, which incorporates higher order structure into the output space, with respect to the cluster number $K$. $K = 1$ is our baseline model, which uses the conventional unstructured relation classifiers. $K = 150$ employs only the SEC module to learn a low order structured output space. In the HSA module, $K$ is a hyperparameter that trades off performance against model complexity. All clustering algorithms suffer from the lack of an automatic choice of the optimal number of clusters. While trying all possible values is prohibitively expensive, we obtained a comprehensive set of results for comparison.
The performance curves for $K=1,10,20,30,40,50,60,70,100,130,150$ are shown in Figure~\ref{fig:compair_m}. We find $K=40$ works best. Table~\ref{tab:compair_m} presents results for $K=1,150,40$. \\ The comparison shows that: \begin{enumerate}[1)] \item Adopting a structured output space ($K=40,150$) is superior to an unstructured one ($K=1$), which verifies the effectiveness of the SEC module. \item Adopting a higher order structured output space ($K=40$) outperforms a lower order one ($K=150$), which verifies the effectiveness of the HSA module. \end{enumerate} We also show the resulting semantic hierarchy of objects from the HSA module on VG and VRD in Figure~\ref{fig:optim_result}. Although the HSA module is not designed to sort out the objects, the unsupervised process of searching for higher order connectivity patterns in the knowledge graph can contribute to an object taxonomy. At the lower levels, the object classes are grouped according to more specific properties, e.g., roof with railing, street with sidewalk, train with car. At the higher levels, the clusters have more abstract semantics and are grouped according to more general properties, e.g., glass-bottle-cup with basket-box-bag, toilet-sink-drawer with shelf-cabinet-counter. \noindent\textbf{Clustering in HSA}. Our behavior pattern based hierarchical clustering relies purely on the knowledge graph structure of the ground truth. To verify the effectiveness of our clustering, we also conduct K-means clustering on word2vec embeddings of objects to obtain clusters based on external knowledge.
Table~\ref{tab:kmeans} presents the results of adopting the HSA clustering results ($K=40$), adopting K-means clustering on word2vec embeddings ($K=40$) and not adopting context clustering ($K=150$).\\ The comparison shows that: \begin{enumerate}[1)] \item HOSE-Net with SEC ($K=150$) shows results comparable to HOSE-Net with SEC and K-means on word2vec embeddings ($K=40$), which means the word2vec-based clustering cannot improve the performance.\\ \item HOSE-Net with SEC and HSA ($K=40$) outperforms HOSE-Net with SEC and K-means on word2vec embeddings ($K=40$), which shows that our structure-based clustering with internal relation knowledge produces helpful clusters that boost this task. \end{enumerate} \subsection{Comparison to State of the Art} \textbf{Visual Genome:} Table~\ref{tab:sota_vg} shows that our model outperforms the state-of-the-art methods. Our object detector is adopted from \cite{zhang2019graphical} with $mAP = 25.5$ at $IoU = 0.5$. The number of clusters for comparison is 40. These methods all adopt flat relation classifiers. VRD\cite{lu2016visual}, AsscEmbed\cite{newell2017pixels} and Frequency\cite{zellers2018neural} predict the objects and the relations without joint inference. The other works are engaged in modeling the inter-dependencies among objects and relations. MotifNet-LeftRight\cite{zellers2018neural} encodes the dependencies through bidirectional LSTMs. MSDN\cite{li2017scene}, ISGG\cite{xu2017scene} and KERN\cite{chen2019knowledge} rely on message passing mechanisms. SGP\cite{herzig2018mapping} employs structured learning. In comparison, our framework doesn't refine the visual representations but still achieves new state-of-the-art results on SGDET, SGCLS and PRDCLS, with and without graph constraint. RelDN\cite{zhang2019graphical} proposes contrastive losses and reports Top@K Accuracy (A@K) on PredCls and SGCls, in which the ground-truth subject-object pair information is also given.
We also compare with RelDN at A@K, as shown in Table~\ref{tab:sota_vg}. \noindent\textbf{VRD:} Table~\ref{tab:sota_vrd} presents results on VRD compared with state-of-the-art methods. The number of clusters for this comparison is 18. The implementation details of most methods on VRD are not fully specified. As shown in \cite{zhang2019graphical}, pre-training on COCO provides stronger localization features than pre-training on ImageNet; for a fair comparison, we use the ImageNet pre-trained model. We achieve new state-of-the-art results on Relationship Detection and Phrase Detection. \section{Conclusions} In this work, we propose the Higher Order Structure Embedded Network (HOSE-Net) to address the problems caused by ignoring the structural nature of scene graphs in existing methods. First, we propose a Structure-Aware Embedding-to-Classifier module that redistributes the training samples into different classification subspaces according to the object labels and connects the subspaces with a set of context embeddings following the global knowledge-graph structure. Then we propose a Hierarchical Semantic Aggregation module that mines higher-order structures of the global knowledge graph, making the model more scalable and trainable. \section{ACKNOWLEDGMENTS} This work was supported by NSFC project Grant No.\ U1833101, SZSTI under Grant No.\ JCYJ20190809172201639, and the Joint Research Center of Tencent and Tsinghua. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Dynamic contrast enhanced MRI (DCE-MRI) of the liver is widely used for detecting hepatic lesions and for distinguishing malignant from benign lesions. However, such images often suffer from motion artifacts due to unpredictable respiration, dyspnea, or mismatches in k-space caused by rapid injection of the contrast agent \cite{motosugi2015investigation}\cite{davenport2013comparison}. In DCE-MRI, a series of T1-weighted MR images is obtained after the intravenous injection of a gadolinium-based MR contrast agent, such as gadoxetic acid. However, acquiring appropriate data sets for DCE arterial-phase MR images is difficult due to the limited scan time available in the first pass of the contrast agent. Furthermore, it has been reported that transient dyspnea can be caused by gadoxetic acid at a non-negligible frequency \cite{motosugi2015investigation}\cite{davenport2013comparison}, which results in degraded image quality due to respiratory motion-related artifacts such as blurring and ghosting \cite{stadler2007artifacts}. In particular, coherent ghosting originating from the anterior abdominal wall decreases the diagnostic performance of the images \cite{chavhan2013abdominal}. Recently, many strategies have been proposed to avoid motion artifacts in DCE-MRI. Of these, fast acquisition strategies using compressed sensing may provide the simplest way to avoid motion artifacts in the liver \cite{vasanawala2010improved}\cite{zhang2014clinical}\cite{jaimes2016strategies}. Compressed sensing is an acquisition and reconstruction technique based on the sparsity of the signal, in which k-space undersampling results in a shorter scan time. Zhang et al. demonstrated that DCE-MRI with a high acceleration factor of 7.2 using compressed sensing provides significantly better image quality than conventional parallel imaging \cite{zhang2014clinical}. 
Other approaches acquire data without breath holding (free-breathing methods) using respiratory triggering; respiratory-triggered DCE-MRI is an effective technique for reducing motion artifacts in patients who are unable to suspend their respiration \cite{vasanawala2010navigated}\cite{chavhan2013abdominal}. In these approaches, sequence acquisitions are triggered based on respiratory tracings or navigator echoes, which typically provide a one-dimensional projection of the abdominal anatomy. Vasanawala et al. found that image quality in acquisitions with navigator echoes under free-breathing conditions is significantly improved. Although triggering-based approaches successfully reduce motion artifacts, it is not possible to appropriately time the arterial-phase image acquisition because of the long scan times required to acquire an entire dataset. In addition, mis-triggering often occurs in patients with unstable respiration, which causes artifacts and blurring in the images. Recently, a radial trajectory acquisition method with compressed sensing was proposed \cite{feng2014golden}\cite{feng2016xd}, which enables high-temporal-resolution imaging without breath holding in DCE-MRI. However, the image quality of radial acquisition without breath holding is worse than that with breath holding, even though the clinical usefulness of radial trajectory acquisition has been demonstrated in many papers \cite{chandarana2011free}\cite{chandarana2013free}\cite{chandarana2014free}. Post-processing artifact reduction techniques using deep learning have also been proposed. Deep learning, which is used in complex non-linear processing applications, is a machine learning technique that relies on a neural network with a large number of hidden layers. Han et al. 
proposed a denoising algorithm using a multi-resolution convolutional network called ``U-Net'' to remove the streak artifacts induced in images obtained via radial acquisition \cite{han2018deep}. In addition, aliasing artifact reduction has been demonstrated in several papers as an alternative to compressed sensing reconstruction \cite{lee2017deep}\cite{yang2018dagan}\cite{hyun2018deep}. The results of several feasibility studies of motion artifact reduction in the brain \cite{Karsten2018ismrm}\cite{Patricia2018ismrm}\cite{Kamlesh2018ismrm}, abdomen \cite{Daiki2018ismrm}, and cervical spine \cite{Hongpyo2018ismrm} have also been reported. Although these post-processing techniques have been studied extensively, no study has yet demonstrated practical artifact reduction in DCE-MRI of the liver. In this study, a motion artifact reduction method based on a convolutional network (MARC) was developed for DCE-MRI of the liver; it removes motion artifacts from input MR images. Both simulations and experiments were conducted to demonstrate the validity of the proposed algorithm. \section{Methods} \subsection{Network architecture} In this paper, a patch-wise motion artifact reduction method employing a convolutional neural network with multi-channel images (MARC) is proposed, as shown in Fig.~\ref{fig_network}; it is based on the network originally proposed by Zhang et al. for Gaussian denoising, JPEG deblocking, and super-resolution of natural images \cite{zhang2017beyond}. Patch-wise training has advantages in extracting large training datasets from a limited number of images and in efficient memory usage on host PCs and GPUs. A residual learning approach was adopted to achieve effective training of the network \cite{he2016deep}. The network relies on two-dimensional convolutions, batch normalizations, and rectified linear units (ReLU) to extract artifact components from images with artifacts. 
To utilize the structural similarity of the multi-contrast images, a seven-channel patch with varying contrast was used as input to the network. Sixty-four filters with a kernel size of 3$\times$3$\times$7, followed by ReLU, were adopted to facilitate non-linear operation. The number of convolution layers $N_{conv}$ was determined as described in the Analysis subsection. In the last layer, seven filters with a kernel size of 3$\times$3$\times$64 were used. Finally, a 7-channel image was predicted as the output of the network. The total number of parameters was 268,423. Artifact-reduced images could then be generated by subtracting the predicted image from the input. \begin{figure}[bt] \centering \includegraphics[width=12cm]{Network} \caption{Network architecture for the proposed convolutional neural network, consisting of two-dimensional convolutions, batch normalizations, and ReLU. The network predicts the artifact component from an input dataset. The number of convolution layers in the network was determined by a simulation-based method.} \label{fig_network} \end{figure} \subsection{Imaging} Following Institutional Review Board approval, patient studies were conducted. This study retrospectively included 31 patients (M/F, mean age 59, range 34--79 y.o.) who underwent DCE-MRI of the liver in our institution. MR images were acquired using a 3T MR750 system (GE Healthcare, Waukesha, WI); a whole-body coil and 32-channel torso array were used for RF transmission and receiving, and self-calibrated parallel imaging (ARC) was used with an acceleration factor of 2 $\times$ 2. A three-dimensional (3D) T1-weighted spoiled gradient echo sequence with a dual-echo bipolar readout and variable-density Cartesian undersampling (DISCO: differential subsampling with Cartesian ordering) was used for the acquisition \cite{saranathan2012differential}, along with an elliptical-centric trajectory with pseudo-randomized sorting in ky-kz. 
A DIXON-based reconstruction method was used to suppress fat signals \cite{reeder2004multicoil}. A total of seven temporal phase images, including a pre-contrast and six arterial phases, were obtained using gadolinium contrast with end-expiration breath-holds of 10 and 21 s. The standard dose (0.025 mmol/kg) of contrast agent (EOB Primovist, Bayer Healthcare, Osaka, Japan) was injected at a rate of 1 ml/s followed by a 20-mL saline flush using a power injector. The arterial-phase scan was started 30 s after the start of the injection. The acquired k-space datasets were reconstructed using a view-sharing approach between the phases and a two-point Dixon method to separate the water and fat components. The following imaging parameters were used: flip angle = 12$^\circ$, receiver bandwidth = $\pm$167 kHz, TR = 3.9 ms, TE = 1.1/2.2 ms, acquisition matrix size = 320 $\times$ 192, FOV = 340 $\times$ 340 mm$^2$, number of slices = 56, slice thickness = 3.6 mm. The acquired images were cropped to a matrix size of 320 $\times$ 280 after zero-filling to 320 $\times$ 320. \subsection{Respiration-induced noise simulation}\label{sec_resp_sim} A respiration-induced artifact was simulated by adding simulated errors to k-space datasets generated from magnitude-only images. Generally, a breath-holding failure causes phase errors in k-space, which result in artifacts along the phase-encoding direction. In this study, for simplicity, rigid motion along the anterior-posterior direction was assumed, as shown in Fig.~\ref{fig_motion}. In this case, the phase error was induced in the phase-encoding direction and was proportional to the motion shift. Motion during readout can be neglected because the readout is completed within milliseconds. 
Then, the in-phase and out-of-phase MR signals with phase error $\phi$ can be expressed as follows: \begin{eqnarray} S'_I(k_x, k_y) &=& S_I (k_x, k_y) e^{-j\phi(k_y)}\\ S'_O(k_x, k_y) &=& S_O (k_x, k_y) e^{-j\phi(k_y)}, \end{eqnarray} where $S_I$ and $S_O$ are the in-phase and out-of-phase signals, respectively, without the phase error; $S'_I$ and $S'_O$ are the corresponding signals with the phase error; and $k_x$, $k_y$ represent the k-space coordinates ($-\pi < k_x < \pi$, $-\pi < k_y < \pi$) in the readout and phase-encoding directions, respectively. Finally, the k-space of the water signal ($S_W$) with the phase error can be expressed as follows: \begin{eqnarray} S_W &=& \frac{S'_I+S'_O}{2}\\ &=& \frac{S_I+S_O}{2}e^{-j\phi(k_y)}\\ &=& \mathcal{F}[I_W]e^{-j\phi(k_y)}, \end{eqnarray} where $\mathcal{F}$ is the Fourier operator and $I_W$ denotes the water image. It is clear from the above equations that the artifact simulation can be implemented by simply adding phase error components to the k-space of the water image. In this study, the k-space datasets were generated from magnitude-only water images. To simulate the background $B_0$ inhomogeneity, the magnitude images were multiplied by $B_0$ distributions derived from polynomial functions up to the third order. The coefficients of the functions were determined randomly so that the peak-to-peak value of the distribution was within $\pm5$ ppm ($\pm4.4$ radians). To generate motion artifacts in the MR images, we used two kinds of phase error patterns: periodic and random. Generally, severe coherent ghosting artifacts are observed along the phase-encoding direction. Although several factors generate artifacts in the images acquired during DCE-MRI, including respiration, voluntary motion, pulsatile arterial flow, view-sharing failure, and unfolding failure \cite{stadler2007artifacts}\cite{arena1995mr}, the artifact from the abdominal wall in the phase-encoding direction is the most recognizable. 
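The k-space manipulation above can be sketched in NumPy: the FFT of a magnitude-only image is multiplied by $e^{-j\phi(k_y)}$ along the phase-encoding axis and inverse-transformed. The sinusoidal $\phi$ and all parameter values below are illustrative assumptions, not the study's exact code:

```python
import numpy as np

def add_motion_artifact(img, delta=8.0, alpha=0.5, beta=0.0, ky0_frac=0.1):
    """Apply a sinusoidal phase error along the phase-encoding (row) axis."""
    ny, nx = img.shape                      # (phase-encode, readout)
    k = np.fft.fftshift(np.fft.fft2(img))   # centered k-space, F[I_W]
    ky = np.linspace(-np.pi, np.pi, ny, endpoint=False)
    # Sinusoidal phase error, zeroed near the k-space center (|ky| < ky0).
    phi = 2 * np.pi * ky * delta * np.sin(alpha * ky + beta) / ny
    phi[np.abs(ky) < ky0_frac * np.pi] = 0.0
    k_err = k * np.exp(-1j * phi)[:, None]  # same error for every kx
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_err)))

# Toy "abdomen": a bright rectangle; the error produces ghosting along rows.
clean = np.zeros((64, 64)); clean[20:44, 24:40] = 1.0
corrupted = add_motion_artifact(clean)
print(corrupted.shape, float(np.abs(corrupted - clean).max()) > 0)
```

With `delta=0` the phase error vanishes and the image is recovered unchanged, which is a convenient self-check of the implementation.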
In the case of centric-order acquisitions, phase mismatching in the k-space results in high-frequency, coherent ghosting. An error pattern using a simple sine wave with random frequency, phase, and duration was used to simulate the ghosting artifact. It was assumed that motion oscillations caused by breath-hold failures occurred after a delay as the scan proceeded. The phase error can be expressed as follows: \[ \phi(k_y) = \left\{ \begin{array}{ll} 0 & (|k_y| < k_{y0} ) \\ 2\pi \frac{k_y \Delta \sin(\alpha k_y + \beta)}{N} & (\mathrm{otherwise}), \end{array} \right. \] where $\Delta$ denotes the magnitude of the motion, $\alpha$ is the period of the sine wave, which determines the frequency, $\beta$ is the phase of the sine wave, and $k_{y0}$ $(0 < k_{y0} < \pi)$ is the delay time for the phase error. In this study, the values of $\Delta$ (from 0 to 20 pixels, which equals 2.4--2.6 cm depending on FOV), $\alpha$ (from 0.1 to 5 Hz), $\beta$ (from 0 to $\pi /4$), and $k_{y0}$ (from $\pi/10$ to $\pi/2$) were selected randomly. The period $\alpha$ was chosen to cover the normal respiratory frequency for adults and elderly adults, which is generally within 0.2--0.7 Hz \cite{rodriguez2013normal}. In addition to the periodic noise, a random phase error pattern was also used to simulate non-periodic irregular motion, as follows. First, the number of phase-encoding lines having the phase error was randomly determined as 10--50\% of all phase-encoding lines, excluding the center region of the k-space ($-\pi /10 < k_{y} < \pi /10$). Then, the magnitude of the error was determined randomly line-by-line in the same manner as for the periodic noise. \begin{figure}[bt] \centering \includegraphics[width=11cm]{Motion_Noise} \caption{(left) Example of a simulation of the respiratory motion artifact by adding phase errors along the phase-encoding direction in k-space. 
(Right) The k-space and image datasets before and after adding simulated phase errors.} \label{fig_motion} \end{figure} \subsection{Network Training} The processing was implemented in MATLAB 2018b on a workstation running Ubuntu 16.04 LTS with an Intel Xeon CPU E5-2630, 128 GB DDR3 RAM, and an NVIDIA Quadro P5000 graphics card. The data processing sequence used in this study is summarized in Fig.~\ref{fig_dataset}. Training datasets containing artifact and residual patches were generated using multi-phase magnitude-only reference images (RO $\times$ PE $\times$ SL $\times$ Phase: 320 $\times$ 280 $\times$ 56 $\times$ 7) acquired from six patients selected by a radiologist from among the 26 patients in the study. The radiologist confirmed that all reference images were acquired without motion artifacts. From the multi-phase slices (320 $\times$ 280 $\times$ 7) of these images, 125,273 patches of size 48 $\times$ 48 $\times$ 7 were randomly cropped. Patches that contained only background signal were removed from the training datasets. Images with motion artifacts (artifact images) were generated from the reference images, as explained in the previous subsection. Artifact patches, which were used as inputs to the MARC, were cropped from the artifact images in the same way as the reference patches. Finally, residual patches, which were used as the output of the network, were generated by subtracting the reference patches from the artifact patches. All patches were normalized by dividing them by the maximum value of the artifact images. Network training was performed using Keras with a TensorFlow backend (Google, Mountain View, CA), and the network was optimized using the Adam algorithm with a learning rate of 0.001. The optimization was conducted with a mini-batch of 64 patches. 
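The training-data pipeline described above (random patch cropping, residual targets, normalization by the maximum of the artifact images) can be sketched in NumPy. The shapes and patch count here are illustrative, not the study's actual dataset sizes:

```python
import numpy as np

def make_patches(reference, artifact, n_patches=32, size=48, seed=0):
    """Crop matched patches; target is the residual (artifact - reference)."""
    rng = np.random.default_rng(seed)
    ny, nx, nphase = reference.shape        # one multi-phase slice (PE, RO, 7)
    scale = artifact.max()                  # normalize by max of artifact data
    xs, ys = [], []
    for _ in range(n_patches):
        i = rng.integers(0, ny - size)
        j = rng.integers(0, nx - size)
        ref = reference[i:i+size, j:j+size, :] / scale
        art = artifact[i:i+size, j:j+size, :] / scale
        xs.append(art)                      # network input: artifact patch
        ys.append(art - ref)                # regression target: residual patch
    return np.stack(xs), np.stack(ys)

# Synthetic stand-ins for one reference slice and its simulated artifact copy.
ref = np.random.default_rng(1).random((280, 320, 7))
art = ref + 0.05 * np.random.default_rng(2).random((280, 320, 7))
X, Y = make_patches(ref, art)
print(X.shape, Y.shape)
```

At inference time the artifact-reduced image is then `input - predicted_residual`, matching the residual-learning formulation of the network.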
A maximum of 100 epochs with an early-stopping patience of 10 epochs were run for convergence, and the L1 loss function was used because the residual components between the artifact patches and outputs were assumed to be sparse: \begin{equation} Loss(I_{art}, I_{out}) = \frac{1}{N}\sum_i^N \| I_{art} - I_{out} \|_1, \end{equation} where $I_{art}$ represents the artifact patches, $I_{out}$ represents the outputs predicted by the MARC, and $N$ is the number of data points. Validation of the L1 loss was performed using K-fold cross validation (K = 5). The $N_{conv}$ used in the network was determined by maximizing the structural similarity (SSIM) index between the reference and artifact-reduced patches of the validation datasets. Here, the SSIM index is a quality metric used for measuring the similarity between two images, defined as follows: \begin{equation} SSIM(I_{ref}, I_{den}) = \frac{(2\mu_{ref} \mu_{den}+c_1)(2\sigma_{ref,den}+c_2)}{(\mu_{ref}^2 + \mu_{den}^2 + c_1)(\sigma_{ref}^2 + \sigma_{den}^2+c_2)}, \end{equation} where $I_{ref}$ and $I_{den}$ are the input and artifact-reduced patches, $\mu$ is the mean intensity, $\sigma$ denotes the standard deviation, and $c_1$ and $c_2$ are constants. In this study, the values of $c_1$ and $c_2$ were as described in \cite{wang2004image}. The number of patients used for training versus the L1 loss after 100-epoch training was plotted to investigate the relationship between the size of the training datasets and training performance. The average sample size per patient was 7,916 training patches and 3,417 validation patches (11,333 patches in total). A validation dataset of 37,582 patches used for these trainings was generated from 11 patients. \begin{figure}[bt] \centering \includegraphics[width=12cm]{Dataset} \caption{Data processing for the training. The artifact images were simulated from the reference images. Residual images were calculated by subtracting the reference patches from the artifact patches. 
A total of 125,273 patches were generated by randomly cropping small images from the original images.} \label{fig_dataset} \end{figure} \subsection{Analysis}\label{sec_anal} To demonstrate the ability of the MARC to reduce artifacts in DCE-MR images acquired during unsuccessful breath holding, the following experiments were conducted using the data from the 20 remaining patients in the study. To identify biases in the intensities and liver-to-aorta contrast between the reference and artifact-reduced images, a Bland--Altman analysis, which plots the differences between two measurements against their average, was used; the intensities were obtained from the central slice in each phase. The Bland--Altman analysis of the intensities was conducted in subgroups of high (mean intensity $\geq$ 0.46) and low (mean intensity < 0.46) intensities; for convenience, half of the maximum mean intensity (0.46) was used as the threshold. The mean signal intensities of the liver and aorta were measured by manually placing regions-of-interest (ROIs) on the MR images, and the ROI of the liver was carefully located in the right lobe to exclude vessels. The same ROIs were applied to all other phases of the images. The quality of the images before and after applying the MARC was visually evaluated by a radiologist (M.K.) with three years of experience in abdominal radiology, who was not told whether each image was obtained before or after applying the MARC. The radiologist evaluated the images using a 5-point scale based on the significance of the artifacts (1 = no artifact; 2 = mild artifacts; 3 = moderate artifacts; 4 = severe artifacts; 5 = non-diagnostic). The scores were analyzed statistically using the Wilcoxon signed-rank test. 
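The Bland--Altman computation described above (per-case difference against average, with bias and 95\% limits of agreement) can be sketched in a few lines of NumPy; the sample values are hypothetical, not measurements from the study:

```python
import numpy as np

def bland_altman(a, b):
    """Return per-case means, differences, bias, and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    mean = (a + b) / 2.0
    bias = diff.mean()
    sd = diff.std(ddof=1)
    # 95% limits of agreement: bias +/- 1.96 * SD of the differences.
    return mean, diff, bias, (bias - 1.96 * sd, bias + 1.96 * sd)

ref  = [0.42, 0.55, 0.48, 0.60, 0.51]   # e.g. reference intensities
marc = [0.40, 0.50, 0.47, 0.55, 0.50]   # e.g. artifact-reduced intensities
_, _, bias, (lo, hi) = bland_altman(ref, marc)
print(round(bias, 3), lo < bias < hi)
```

Plotting `diff` against `mean` with horizontal lines at `bias`, `lo`, and `hi` gives the figures referenced in the Results section.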
To confirm the validity of the anatomical structure after applying the MARC, the artifact-reduced images in the arterial phase were compared with images without motion artifacts obtained from separate MR examinations performed 71 days apart in the same patients, using the same sequence and imaging parameters. \section{Results} The mean and standard deviation ($\sigma$) of the SSIM index between the reference and artifact-reduced images are plotted against $N_{conv}$ in Fig.~\ref{fig_SSIM} (a); networks with $N_{conv}$ greater than four exhibited a better SSIM index, while networks with $N_{conv}$ of four or below had a poor SSIM index. In this study, an $N_{conv}$ of seven was adopted in the experiments, as this value maximized the SSIM index (mean: 0.87, $\sigma$: 0.05). The training was successfully terminated by early stopping at 70 epochs, as shown in Fig.~\ref{fig_SSIM} (b). Figure~\ref{fig_SSIM} (c) shows the number of patients used for training versus the training and validation losses and the sample size. The results imply that stable convergence was achieved with a sample size of 3 or more patients, whereas training with fewer patients converged poorly. The features extracted by the trained network at the 1st, 4th, and 8th intermediate layers for a specific input and output are shown in Fig.~\ref{fig_Features}; higher-frequency, ghosting-like patterns were extracted from the input in the 8th layer. Figures~\ref{fig_BAplot} (a) and (b) show the Bland--Altman plots of the intensities and liver-to-aorta contrast ratios between the reference and artifact-reduced images. The differences in the intensities between the two images (mean difference = 0.01 (95\% CI, -0.05 to 0.04) for mean intensity < 0.46 and mean difference = -0.05 (95\% CI, -0.19 to 0.01) for mean intensity $\geq$ 0.46) were heterogeneously distributed, depending on the mean intensity. 
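The SSIM index used for this model selection (defined in the Network Training subsection) can be sketched as a single-window NumPy implementation; taking $c_1=(0.01L)^2$, $c_2=(0.03L)^2$ with dynamic range $L=1$ follows the usual Wang et al. defaults and is an assumption here:

```python
import numpy as np

def ssim(x, y, L=1.0):
    """Global (single-window) SSIM between two equally sized images."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.random.default_rng(0).random((48, 48))
print(round(ssim(a, a), 6))   # identical images give an SSIM of 1.0
```

In practice SSIM is usually computed over local sliding windows and averaged; the global version above suffices to show the formula's structure.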
The intensities of the artifact-reduced images were lower than those of the references by approximately 15\% on average, which can be seen in the high-signal-intensity areas shown in Fig.~\ref{fig_BAplot} (a). A Bland--Altman plot of the liver-to-aorta contrast ratio (Fig.~\ref{fig_BAplot} (b)) showed no systematic error in contrast between the two images. The image quality of the artifact-reduced images (mean (SD) score = 3.2 (0.63)) was significantly better (P < 0.05) than that of the reference images (mean (SD) score = 2.7 (0.77)), and the respiratory motion-related artifacts (Fig.~\ref{fig_results}, top row) were reduced by applying the MARC (Fig.~\ref{fig_results}, bottom row). The middle row in Fig.~\ref{fig_results} shows the extracted residual components for the input images. Images with and without breath-hold failure are shown in Fig.~\ref{fig_Comparison} (a, b). The motion artifact in Fig.~\ref{fig_Comparison} (b) was partially reduced by using the MARC, as shown in Fig.~\ref{fig_Comparison} (c). This result indicates that there was no loss of critical anatomical detail and no additional blurring, although a moderate artifact on the right lobe remained. \begin{figure}[bt] \centering \includegraphics[width=7cm]{SSIM} \caption{(a) SSIM changes depending on the number of layers ($N_{conv}$). The highest SSIM (0.89) was obtained with an $N_{conv}$ of 7. (b) The L1 loss decreased in both the training and validation datasets as the number of epochs increased. No further decrease was visually observed beyond 70 epochs. The training was terminated by early stopping at 80 epochs. Error bars on the validation loss represent the standard deviation for K-fold cross validation. (c) Validation loss, training loss, and sample size plotted against the number of patients. 
Smaller loss was observed as the sample size and number of patients increased.} \label{fig_SSIM} \end{figure} \begin{figure}[bt] \centering \includegraphics[width=11cm]{Features} \caption{Features extracted from the 1st, 4th, and 8th layers of the developed network for a specific input and output. Low- and high-frequency components were observed in the lower layers. On the other hand, an artifact-like pattern was extracted from the higher layer.} \label{fig_Features} \end{figure} \begin{figure}[bt] \centering \includegraphics[width=11cm]{BA_plot} \caption{Bland--Altman plots for (a) the intensities and (b) the liver-to-aorta contrast ratio between the reference and artifact-reduced images in the validation dataset. The mean difference in the intensities was 0.01 (95\% CI, -0.05 to 0.04) in areas with a mean intensity of < 0.46 and -0.05 (95\% CI, -0.19 to 0.01) in areas with a mean intensity of $\geq$ 0.46. The mean difference in the contrast ratio was 0.00 (95\% CI, -0.02 to 0.02). These results indicate that there were no systematic errors in the contrast ratios, whereas the intensities of the artifact-reduced images were lower than those of the reference images due to the effect of artifact reduction, especially in areas with high signal intensities.} \label{fig_BAplot} \end{figure} \begin{figure}[bt] \centering \includegraphics[width=12cm]{Filter_Results} \caption{Examples of artifact reduction with the MARC for a patient from the validation dataset. The motion artifacts in the images (upper row) were reduced (lower row) by using the MARC. The residual components are shown in the middle row.} \label{fig_results} \end{figure} \begin{figure}[bt] \centering \includegraphics[width=12cm]{Comparison} \caption{(a, b) MR images in the arterial phase with and without breath-hold failure. (c) The artifact-reduced image of (b). 
The images were acquired in different studies with the same imaging parameters.} \label{fig_Comparison} \end{figure} \section{Discussion} In this paper, an algorithm to reduce motion-related artifacts after data acquisition was developed using a deep convolutional network and applied to extract artifacts from local multi-channel patch images. The network was trained using reference MR images acquired with appropriate breath-holding, with noisy images generated by adding phase errors to the reference images. The number of convolution layers in the network was semi-optimized in the simulation. Once trained, the network was applied to MR images of patients who failed to hold their breath during data acquisition. The results of the experimental studies demonstrate that the MARC successfully extracted the residual components of the images and reduced motion artifacts and blurring. No previous study has demonstrated blind artifact reduction in abdominal imaging, although many motion correction algorithms based on navigator echoes or respiratory signals have been proposed \cite{vasanawala2010navigated}\cite{brau2006generalized}\cite{cheng2012nonrigid}. Those approaches require additional RF pulses and/or longer scan times to fill k-space, whereas the MARC enables motion artifact reduction without sequence modification or additional scan time. The processing time for one slice was 4 ms, resulting in about 650 ms for all slices of one patient; this computational cost is acceptable for practical clinical use. In MRI of the liver, DCE-MRI is mandatory for identifying hypervascular lesions, including hepatocellular carcinoma \cite{tang2017evidence}\cite{chen2016added}, and for distinguishing malignant from benign lesions. At present, almost all DCE-MR images of the liver are acquired with a 3D gradient echo sequence due to its high spatial resolution and fast acquisition time within a single breath hold. 
Despite recent advances in imaging techniques that improve image quality \cite{yang2016sparse}\cite{ogasawara2017image}, it remains difficult to acquire uniformly high-quality DCE-MRI images free of respiratory motion-related artifacts. In terms of reducing motion artifacts, the unpredictability of patients is the biggest challenge to overcome, as the patients who will fail to hold their breath are not known in advance. One advantage of the proposed MARC algorithm is that it can reduce the magnitude of artifacts in images that have already been acquired, which will have a significant impact on the efficacy of clinical MR. In the current study, an optimal $N_{conv}$ of seven was selected based on the SSIM indexes of the reference image and the artifact-reduced image after applying the MARC. The low SSIM index observed for small values of $N_{conv}$ was thought to be due to the difficulty of modeling the features of the input datasets with only a small number of layers. On the other hand, a slight decrease in the SSIM index was observed for $N_{conv}$ > 12, which implies that the network overfits when too many layers are used. To overcome this problem, larger training datasets and/or regularization and optimization of a more complicated network would be required. Several other network architectures have been proposed for denoising MRI images. For example, U-Net \cite{ronneberger2015u}, which consists of downsampling and upsampling layers with skip connections, is a widely used fully convolutional network for the segmentation \cite{dalmics2017using}, reconstruction, and denoising \cite{yu2017deep} of medical images. This architecture, originally designed for biomedical image segmentation, uses multi-resolution features instead of a simple max-pooling approach to implement segmentation with high localization accuracy. 
Most of the artifacts observed in MR images, such as motion, aliasing, or streak artifacts, are distributed globally in the image domain because the noise and errors contaminate the k-space domain. Because U-Net has a large receptive field, such artifacts can be effectively removed using global structural information. Generative adversarial networks (GANs) \cite{goodfellow2014generative}, which comprise two networks called the generator and the discriminator, are another promising approach for denoising MR images. Yang et al. proposed a network to remove aliasing artifacts in compressed sensing MRI using a GAN-based network with a U-Net generator \cite{yang2018dagan}. We used patched images instead of full-size images because it was difficult to implement appropriate training with a limited number of datasets, and because of computational limitations. However, we believe this approach is reasonable because the pattern of the artifact due to respiratory motion looks similar in every patch, even though the respiratory artifact is distributed globally. Although this should be studied further, our results suggest that the MARC trained on patched images can be generalized to full-size images. Recently, the AUtomated TransfOrm by Manifold APproximation (AUTOMAP) method, which uses fully connected and convolution layers, has been proposed for MRI reconstruction \cite{zhu2018image}. AUTOMAP directly transforms the domain from k-space to image space, and thus enables highly flexible reconstruction for arbitrary k-space trajectories. Three-dimensional CNNs, which are network architectures for 3D images \cite{kamnitsas2017efficient}\cite{chen2018efficient}, are also promising. However, these networks require a large number of parameters, large amounts of memory on GPUs and host computers, and long computational times for training and hyperparameter tuning. 
Therefore, it is still challenging to apply these approaches in practical applications. These network architectures may be combined to achieve higher spatial and temporal resolution, and further studies on the use of deep learning strategies in MRI are anticipated. The limitations of the current study were as follows. First, clinical significance was not fully assessed. While image quality appeared to improve in almost all cases, it will be necessary to confirm that no anatomical or pathological details are removed by the MARC before this approach can be applied clinically. Second, simple centric acquisition ordering was assumed when generating the training datasets, which means that the MARC can be applied only to a limited set of sequences; additional training will be necessary before the MARC can be generalized to more pulse sequences. In addition, more realistic simulation could further improve our algorithm, because the noise simulation in this study was based on the assumption that ghosting originates from simple rigid motion. Moreover, the artifacts were simulated in k-space data generated from images intended for clinical use; simulation using the original k-space data may yield different results, and further research is needed to determine which approach is appropriate for artifact simulation. Research on diagnostic performance using deep learning-based filters has not been performed sufficiently, in spite of the considerable effort spent on developing algorithms. Our approach can add structures and texture to the input images using information learned from the training datasets. Therefore, although image quality based on visual assessment was improved, no essential information was added by the MARC. However, even non-essential improvement may help non-expert or inexperienced readers to find lesions in the images. Further research on diagnostic performance will be required to demonstrate its clinical usefulness. 
\section{Conclusion} In this study, a deep learning-based network was developed to remove motion artifacts from DCE-MRI images. The experimental results showed that the proposed network effectively removed motion artifacts from the images. These results indicate that deep learning-based networks also have the potential to remove unpredictable motion artifacts from images.
\section{Introduction} \label{sec:intro} Let $G$ be a simply-connected, simple compact Lie group. Principal $G$-bundles over $S^{4}$ are classified by the value of the second Chern class, which can take any integer value. Let \(\namedright{P_{k}}{}{S^{4}}\) represent the equivalence class of principal $G$-bundles whose second Chern class is $k$. Let \ensuremath{\mathcal{G}_{k}}\ be the \emph{gauge group} of this principal $G$-bundle, which is the group of $G$-equivariant automorphisms of $P_{k}$ that fix $S^{4}$. Crabb and Sutherland~\cite{CS} showed that, while there are countably many inequivalent principal $G$-bundles, the gauge groups $\{\ensuremath{\mathcal{G}_{k}}\}_{k\in\mathbb{Z}}$ have only finitely many distinct homotopy types. There has been a great deal of interest recently in determining the precise number of possible homotopy types. The following classifications are known. For two integers $a,b$, let $(a,b)$ be their greatest common divisor. If $G=SU(2)$ then $\ensuremath{\mathcal{G}_{k}}\simeq\ensuremath{\mathcal{G}_{k^{\prime}}}$ if and only if $(12,k)=(12,k^{\prime})$~\cite{K}; if $G=SU(3)$ then $\ensuremath{\mathcal{G}_{k}}\simeq\ensuremath{\mathcal{G}_{k^{\prime}}}$ if and only if $(24,k)=(24,k^{\prime})$~\cite{HK}; if $G=SU(5)$ then $\ensuremath{\mathcal{G}_{k}}\simeq\ensuremath{\mathcal{G}_{k^{\prime}}}$ when localized at any prime $p$ or rationally if and only if $(120,k)=(120,k^{\prime})$~\cite{Th2}; and if $G=Sp(2)$ then $\ensuremath{\mathcal{G}_{k}}\simeq\ensuremath{\mathcal{G}_{k^{\prime}}}$ when localized at any prime $p$ or rationally if and only if $(40,k)=(40,k^{\prime})$~\cite{Th1}. Partial classifications that are potentially off by a factor of $2$ have been worked out for $G_{2}$~\cite{KTT} and $Sp(3)$~\cite{Cu}. The $SU(4)$ case is noticeably absent. 
The $SU(5)$ case was easier since elementary bounds on the number of homotopy types matched at the prime $2$ but did not at the prime $3$, and it is typically easier to work out $3$-primary problems in low dimension than $2$-primary problems. In the $SU(4)$ case the elementary bounds do not match at $2$, and the purpose of this paper is to resolve the difference, at least after looping. \begin{theorem} \label{types} For $G=SU(4)$, there is a homotopy equivalence $\Omega\ensuremath{\mathcal{G}_{k}}\simeq\Omega\ensuremath{\mathcal{G}_{k^{\prime}}}$ when localized at any prime $p$ or rationally if and only if $(60,k)=(60,k^{\prime})$. \end{theorem} Two novel features arise in the methods used, as compared to the other known classifications. One is the use of Miller's stable splittings of Stiefel manifolds in order to gain some control over unstable splittings, and the other is showing that a certain ambiguity which prevents a clear classification statement for $\ensuremath{\mathcal{G}_{k}}$ vanishes after looping. It would be interesting to know if these ideas give access to classifications for $SU(n)$-gauge groups for $n\geq 6$. One motivation for studying $SU(4)$-gauge groups is their connection to physics, in particular, to $SU(n)$-extensions of the standard model. For instance, the group $SU(4)$ is gauged in the Pati-Salam model~\cite{PS} and the flavour symmetry it represents there plays a role in several other grand unified theories~\cite{BH}. The progression of results from $SU(2)$ to $SU(5)$ and possibly beyond would be of interest to physicists studying the $SU(n)$-gauge groups in 't~Hooft's large $n$ expansion~\cite{tH}. \section{Determining homotopy types of gauge groups} \label{sec:prelim} We begin by describing a context in which homotopy theory can be applied to study gauge groups. This works for any simply-connected, simple compact Lie group $G$ and so is stated that way. 
Let $BG$ and $B\ensuremath{\mathcal{G}_{k}}$ be the classifying spaces of $G$ and $\ensuremath{\mathcal{G}_{k}}$ respectively. Let $\ensuremath{\mbox{Map}}(S^{4},BG)$ and $\mapstar(S^{4},BG)$ respectively be the spaces of free continuous and pointed continuous maps between~$S^{4}$ and $BG$. The components of each space are in one-to-one correspondence with the integers, where the integer is determined by the degree of a map \(\namedright{S^{4}}{}{BG}\). By~\cite{AB,G}, there is a homotopy equivalence $B\ensuremath{\mathcal{G}_{k}}\simeq\ensuremath{\mbox{Map}}_{k}(S^{4},BG)$ between $B\ensuremath{\mathcal{G}_{k}}$ and the component of $\mbox{Map}(S^{4},BG)$ consisting of maps of degree~$k$. Evaluating a map at the basepoint of $S^{4}$, we obtain a map \(ev\colon\namedright{B\ensuremath{\mathcal{G}_{k}}}{}{BG}\) whose fibre is homotopy equivalent to $\mapstar_{k}(S^{4},BG)$. It is well known that each component of $\mapstar(S^{4},BG)$ is homotopy equivalent to $\Omega^{3}_{0} G$, the component of $\Omega^{3} G$ containing the basepoint. Putting all this together, for each $k\in\mathbb{Z}$, there is a homotopy fibration sequence \begin{equation} \label{evfib} \namedddright{G}{\partial_{k}}{\Omega^{3}_{0} G}{}{B\ensuremath{\mathcal{G}_{k}}}{ev}{BG} \end{equation} where $\partial_{k}$ is the fibration connecting map. The order of $\partial_{k}$ plays a crucial role. By~\cite{L}, the triple adjoint \(\namedright{S^{3}\wedge G}{}{G}\) of $\partial_{k}$ is homotopic to the Samelson product $\langle k\cdot i,1\rangle$, where $i$ is the inclusion of $S^{3}$ into $G$ and $1$ is the identity map on~$G$. This implies two things. First, the order of $\partial_{k}$ is finite. For, rationally, $G$ is homotopy equivalent to a product of Eilenberg-MacLane spaces and the homotopy equivalence can be chosen to be an $H$-map. Since Eilenberg-MacLane spaces are homotopy commutative, any Samelson product into such a space is null homotopic. 
Thus, rationally, the adjoint of $\partial_{k}$ is null homotopic, implying that the same is true for $\partial_{k}$ and therefore the order of $\partial_{k}$ is finite. Second, the linearity of the Samelson product implies that $\langle k\cdot i,1\rangle\simeq k\circ\langle i,1\rangle$, so taking adjoints we obtain $\partial_{k}\simeq k\circ\partial_{1}$. Thus the order of~$\partial_{k}$ is determined by the order of $\partial_{1}$. When $G=SU(n)$, Hamanaka and Kono~\cite{HK} gave the following lower bound on the order of $\partial_{1}$ and the number of homotopy types of \ensuremath{\mathcal{G}_{k}}. \begin{lemma} \label{HKlemma} Let $G=SU(n)$. Then the following hold: \begin{letterlist} \item the order of $\partial_{1}$ is divisible by $n(n^{2}-1)$; \item if $\ensuremath{\mathcal{G}_{k}}\simeq\ensuremath{\mathcal{G}_{k^{\prime}}}$ then $(n(n^{2}-1),k)=(n(n^{2}-1),k^{\prime})$. \end{letterlist} \end{lemma} \vspace{-0.8cm}~$\hfill\Box$\medskip The calculation in~\cite{HK} that established Lemma~\ref{HKlemma} was that a composite \(\nameddright{\Sigma^{2n-5}\mathbb{C}P^{2}}{}{SU(n)}{\partial_{1}}{\Omega_{0}^{3} SU(n)}\) has order~$n(n^{2}-1)$. The adjoint of this map has the same order, so we obtain the following alternative formulation. \begin{lemma} \label{loopHKlemma} Let $G=SU(n)$. Then the following hold: \begin{letterlist} \item the order of $\Omega\partial_{1}$ is divisible by $n(n^{2}-1)$; \item if $\Omega\ensuremath{\mathcal{G}_{k}}\simeq\Omega\ensuremath{\mathcal{G}_{k^{\prime}}}$ then $(n(n^{2}-1),k)=(n(n^{2}-1),k^{\prime})$. \end{letterlist} \end{lemma} \vspace{-0.8cm}~$\hfill\Box$\medskip In particular, if $G=SU(4)$ then $60$ divides the order of $\Omega\partial_{1}$ and a homotopy equivalence $\Omega\ensuremath{\mathcal{G}_{k}}\simeq\Omega\ensuremath{\mathcal{G}_{k^{\prime}}}$ implies that $(60,k)=(60,k^{\prime})$. In Section~\ref{sec:proof} we will find an upper bound on the order of $\Omega\partial_{1}$ that matches the lower bound. 
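As a concrete arithmetic aside (an illustration, not part of the argument), the possible values of $(60,k)$ as $k$ ranges over the integers are exactly the divisors of $60$, so the number of distinct $p$-local homotopy types of $\Omega\ensuremath{\mathcal{G}_{k}}$ permitted by this gcd condition is the number of divisors of $60$:
\[
60 = 2^{2}\cdot 3\cdot 5 \quad\Longrightarrow\quad \#\{(60,k) : k\in\mathbb{Z}\} = (2+1)(1+1)(1+1) = 12.
\]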
\begin{theorem} \label{looppartialorder} The map \(\namedright{\Omega SU(4)}{\Omega\partial_{1}}{\Omega^{4}_{0} SU(4)}\) has order~$60$. \end{theorem} Granting Theorem~\ref{looppartialorder} for now, we can prove Theorem~\ref{types} by using the following general result from~\cite{Th1}. If $Y$ is an $H$-space, let \(k\colon\namedright{Y}{}{Y}\) be the $k^{th}$-power map. \begin{lemma} \label{ptypecount} Let $X$ be a space and $Y$ be an $H$-space with a homotopy inverse. Suppose there is a map \(\namedright{X}{f}{Y}\) of order $m$, where $m$ is finite. Let $F_{k}$ be the homotopy fibre of $k\circ f$. If $(m,k)=(m,k^{\prime})$ then $F_{k}$ and $F_{k^{\prime}}$ are homotopy equivalent when localized rationally or at any prime.~$\hfill\Box$ \end{lemma} \noindent \begin{proof}[Proof of Theorem~\ref{types}] By Theorem~\ref{looppartialorder}, the map \(\namedright{\Omega SU(4)}{\Omega\partial_{1}}{\Omega^{4}_{0} SU(4)}\) has order~$60$. So Lemma~\ref{ptypecount} implies that if $(60,k)=(60,k^{\prime})$, then $\Omega\ensuremath{\mathcal{G}_{k}}\simeq\Omega\ensuremath{\mathcal{G}_{k^{\prime}}}$ when localized at any prime $p$ or rationally. On the other hand, by Lemma~\ref{loopHKlemma}, if $\Omega\ensuremath{\mathcal{G}_{k}}\simeq\Omega\ensuremath{\mathcal{G}_{k^{\prime}}}$ then $(60,k)=(60,k^{\prime})$. Thus there is a homotopy equivalence $\Omega\ensuremath{\mathcal{G}_{k}}\simeq\Omega\ensuremath{\mathcal{G}_{k^{\prime}}}$ at each prime $p$ and rationally if and only if $(60,k)=(60,k^{\prime})$. \end{proof} It remains to prove Theorem~\ref{looppartialorder}. In fact, the odd primary components of the order of $\partial_{1}$ (and hence $\Omega\partial_{1}$ by Lemma~\ref{loopHKlemma}) are obtained as special cases of a more general result in~\cite{Th3}. 
\begin{lemma} \label{oddbounds} Localized at $p=3$, $\partial_{1}$ has order $3$; localized at $p=5$, $\partial_{1}$ has order $5$; and localized at~$p>5$, $\partial_{1}$ has order~$1$.~$\hfill\Box$ \end{lemma} Thus to prove Theorem~\ref{looppartialorder} we are reduced to proving the following. \begin{theorem} \label{looppartialorder2} Localized at $2$, the map \(\namedright{\Omega SU(4)}{\Omega\partial_{1}}{\Omega^{4}_{0} SU(4)}\) has order~$4$. \end{theorem} \section{An initial upper bound on the $2$-primary order of $\partial_{1}$} \label{sec:initialbound} For the remainder of the paper all spaces and maps will be localized at $2$ and homology will be taken with mod-$2$ coefficients. Note that some statements that follow are valid without localizing, such as Theorems~\ref{Miller} and~\ref{Millernat}, but rather than dancing back and forth between local and non-local statements we simply localize at $2$ throughout. In~\cite{B}, or by different means in~\cite{KKT}, it was shown that there is a homotopy commutative square \[\diagram SU(n)\rto^-{\partial_{1}}\dto^{\pi} & \Omega^{3}_{0} SU(n)\ddouble \\ SU(n)/SU(n-2)\rto^-{f} & \Omega^{3}_{0} SU(n) \enddiagram\] for some map $f$, where $\pi$ is the standard quotient map. In our case it is well known that there is a homotopy equivalence $SU(4)/SU(2)\simeq S^{5}\times S^{7}$. Thus there is a homotopy commutative square \begin{equation} \label{su4su2} \diagram SU(4)\rto^-{\partial_{1}}\dto^{\pi} & \Omega^{3}_{0} SU(4)\ddouble \\ S^{5}\times S^{7}\rto^-{f} & \Omega^{3}_{0} SU(4). \enddiagram \end{equation} Taking the triple adjoint of $f$, we obtain a map \[f'\colon\nameddright{S^{8}\vee S^{10}\vee S^{15}}{\simeq} {\Sigma^{3}(S^{5}\times S^{7})}{}{SU(4)}.\] Mimura and Toda~\cite{MT} calculated the homotopy groups of $SU(4)$ through a range. 
The $2$-primary components of $\pi_{8}(SU(4))$, $\pi_{10}(SU(4))$ and $\pi_{15}(SU(4))$ are $\mathbb{Z}/8\mathbb{Z}$, $\mathbb{Z}/8\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$ and $\mathbb{Z}/8\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$, respectively. Consequently, the order of $f'$ is bounded above by $8$. The order of~$f$ is therefore also bounded above by $8$. The homotopy commutativity of~(\ref{su4su2}) then implies the following. \begin{lemma} \label{initialbound} The order of the map \(\namedright{SU(4)}{\partial_{1}}{\Omega^{3}_{0} SU(4)}\) is bounded above by $8$.~$\hfill\Box$ \end{lemma} Ideally it should be possible to reduce this upper bound by a factor of two. The remainder of the paper aims to show that this can be done after looping. \section{A cofibration} \label{sec:Cprops} The homotopy groups of spheres will play an important role. In all cases except one we follow Toda's notation~\cite{To}. Specifically, (i) for $n\geq 3$, $\eta_{n}=\Sigma^{n-3}\eta_{3}$ represents the generator of $\pi_{n+1}(S^{n})\cong\mathbb{Z}/2\mathbb{Z}$; (ii) for $n\geq 5$, $\nu_{n}=\Sigma^{n-3}\nu_{5}$ represents the generator of $\pi_{n+3}(S^{n})\cong\mathbb{Z}/8\mathbb{Z}$; and (iii) differing from Toda's notation, for $n\geq 3$, $\nu'_{n}=\Sigma^{n-3}\nu'_{3}$ represents the $(n-3)$-fold suspension of the generator~$\nu'_{3}$ of $\pi_{6}(S^{3})\cong\mathbb{Z}/4\mathbb{Z}$. Note that for $n\geq 5$, $\nu'_{n}=2\nu_{n}$. \medskip By~(\ref{su4su2}), the map \(\namedright{SU(4)}{\partial_{1}}{\Omega^{3}_{0} SU(4)}\) factors as a composite \(\nameddright{SU(4)}{\pi}{S^{5}\times S^{7}}{f}{\Omega^{3}_{0} SU(4)}\). In this section we determine properties of the map $\pi$ and its homotopy cofibre. To prepare, first recall some properties of $SU(4)$. There is an algebra isomorphism $\cohlgy{SU(4)}\cong\Lambda(x,y,z)$, where the degrees of $x,y,z$ are $3,5,7$ respectively. 
A $\mathbb{Z}/2\mathbb{Z}$-module basis for $\rcohlgy{SU(4)}$ is therefore $\{x,y,z,xy,xz,yz,xyz\}$ in degrees $\{3,5,7,8,10,12,15\}$ respectively, so $SU(4)$ may be given a $CW$-structure with one cell in each of those dimensions. There is a canonical map \(\namedright{\Sigma\ensuremath{\mathbb{C}P^{3}}}{}{SU(4)}\) which induces a projection onto the generating set in cohomology. Notice that $\Sigma\ensuremath{\mathbb{C}P^{3}}$ is homotopy equivalent to the $7$-skeleton of $SU(4)$, and there is a homotopy cofibration \begin{equation} \label{CP3cofib} \llnameddright{S^{4}\vee S^{6}}{\eta_{3}\vee\nu'_{3}}{S^{3}}{}{\Sigma\ensuremath{\mathbb{C}P^{3}}}. \end{equation} Miller~\cite{M} gave a stable decomposition of Stiefel manifolds which includes the following as a special case. \begin{theorem} \label{Miller} There is a stable homotopy equivalence \[SU(4)\simeq_{S}\Sigma\ensuremath{\mathbb{C}P^{3}}\vee M\vee S^{15}\] where $M$ is given by the homotopy cofibration \[\llnameddright{S^{11}}{\nu'_{8}+\eta_{10}}{S^{8}\vee S^{10}}{}{M}.\] \end{theorem} \vspace{-1cm}~$\hfill\Box$\medskip Crabb~\cite{C} and Kitchloo~\cite{Kitch} refined the stable decomposition of Stiefel manifolds and proved that it was natural for maps \(\namedright{SU(n)}{}{SU(n)/SU(m)}\). In our case, this gives the following. \begin{theorem} \label{Millernat} Stably, there is a homotopy commutative diagram \[\diagram SU(4)\rto^-{\simeq_{S}}\dto^{\pi} & \Sigma\ensuremath{\mathbb{C}P^{3}}\vee M\vee S^{15}\dto^{\overline{\pi}} \\ S^{5}\times S^{7}\rto^-{\simeq_{S}} & S^{5}\vee S^{7}\vee S^{12}. 
\enddiagram\] where $\overline{\pi}$ is the wedge sum of: (i) the map \(\namedright{\Sigma\ensuremath{\mathbb{C}P^{3}}}{}{S^{5}\vee S^{7}}\) that collapses the bottom cell, (ii) the pinch map \(\lnamedright{M}{}{S^{12}}\) to the top cell, and (iii) the trivial map \(\namedright{S^{15}}{}{\ast}\).~$\hfill\Box$ \end{theorem} Now define the space $C$ and maps $j$ and $\delta$ by the homotopy cofibration \begin{equation} \label{Ccofib} \namedddright{SU(4)}{\pi}{S^{5}\times S^{7}}{j}{C}{\delta}{\Sigma SU(4)}. \end{equation} Since $\pi^{\ast}$ is an inclusion onto the subalgebra $\Lambda(y,z)$ of $\Lambda(x,y,z)\cong\cohlgy{SU(4)}$, the long exact sequence in cohomology induced by the cofibration sequence~(\ref{Ccofib}) implies that a $\mathbb{Z}/2\mathbb{Z}$-module basis for $\rcohlgy{C}$ is given by $\{\sigma x,\sigma xy, \sigma xz, \sigma xyz\}$ in degrees $\{4,9,11,16\}$ respectively, where the elements of $\rcohlgy{C}$ have been identified with the image of $\delta^{\ast}$. So as a $CW$-complex, $C$ has one cell in each of the dimensions $\{4,9,11,16\}$. The stable homotopy type of $C$ and the stable class of the map $j$ follow immediately from Theorem~\ref{Millernat}. \begin{proposition} \label{Cstable} Stably, there is a homotopy commutative diagram \[\diagram S^{5}\times S^{7}\rto^-{\simeq_{S}}\dto^{j} & S^{5}\vee S^{7}\vee S^{12}\dto^{\overline{j}} \\ C\rto^-(0.6){\simeq_{S}} & S^{4}\vee S^{9}\vee S^{11}\vee S^{16} \enddiagram\] where $\overline{j}$ is the wedge sum of: (i) \(\llnamedright{S^{5}\vee S^{7}}{\eta_{4}+\nu'_{4}}{S^{4}}\), and (ii) \(\llnamedright{S^{12}}{\nu'_{9}+\eta_{11}}{S^{9}\vee S^{11}}\). \end{proposition} \vspace{-1cm}~$\hfill\Box$\medskip The stable decomposition of $C$ will be useful but we will ultimately need to work with unstable information in the form of the homotopy type of $\Sigma^{3} C$ and the homotopy class of $\Sigma^{3} j$. We start with the homotopy type of $\Sigma^{3} C$. 
In general, for a $CW$-complex $X$ and positive integer~$m$, let~$X_{m}$ be the $m$-skeleton of $X$. In our case, the $CW$-structure for $C$ implies that there are homotopy cofibrations \begin{align} \label{Ccofib1} & \nameddright{S^{8}}{g_{1}}{S^{4}}{}{C_{9}} \\ \label{Ccofib2} & \nameddright{S^{10}}{g_{2}}{C_{9}}{}{C_{11}} \\ \label{Ccofib3} & \nameddright{S^{15}}{g_{3}}{C_{11}}{}{C} \end{align} \begin{lemma} \label{C9decomp} There is a homotopy equivalence $\Sigma^{2} (C_{9})\simeq S^{6}\vee S^{11}$. \end{lemma} \begin{proof} By~\cite{To}, $\pi_{10}(S^{6})=0$, so the map $\Sigma^{2} g_{1}$ in~(\ref{Ccofib1}) is null homotopic. The asserted homotopy equivalence for $\Sigma^{2} (C_{9})$ follows immediately. \end{proof} \begin{lemma} \label{C11decomp} There is a homotopy equivalence $\Sigma^{2} (C_{11})\simeq S^{6}\vee S^{11}\vee S^{13}$. \end{lemma} \begin{proof} Substituting the homotopy equivalence in Lemma~\ref{C9decomp} into the double suspension of~(\ref{Ccofib2}) gives a homotopy cofibration \(\nameddright{S^{12}}{\Sigma^{2} g_{2}}{S^{6}\vee S^{11}}{}{\Sigma^{2} (C_{11})}\). By the Hilton-Milnor Theorem, $\Sigma^{2} g_{2}\simeq a+b$ where $a$ and $b$ are obtained by composing $\Sigma^{2} g_{2}$ with the pinch maps to $S^{6}$ and $S^{11}$ respectively. We claim that each of $a$ and $b$ is null homotopic, implying that $\Sigma^{2} g_{2}$ is null homotopic, from which the asserted homotopy equivalence for $\Sigma^{2} (C_{11})$ follows immediately. By Proposition~\ref{Cstable}, $C$ is stably homotopy equivalent to a wedge of spheres, and therefore $C_{11}$ is too. Thus $g_{2}$ is stably trivial, implying for dimensional reasons that $a$ and $b$ are as well. On the other hand, $a$ and $b$ are represented by classes in $\pi_{12}(S^{6})\cong\mathbb{Z}/2\mathbb{Z}$ and $\pi_{12}(S^{11})\cong\mathbb{Z}/2\mathbb{Z}$ respectively. By~\cite{To}, these groups are generated by $\nu_{6}^{2}$ and $\eta_{11}$, both of which are stable. 
Thus the only way that~$a$ and $b$ can be stably trivial is if both are already trivial. Hence $\Sigma^{2} g_{2}$ is null homotopic. \end{proof} \begin{lemma} \label{Cdecomp} There is a homotopy equivalence $\Sigma^{3} C\simeq E\vee S^{12}\vee S^{14}$ where $E$ is given by a homotopy cofibration \(\llnameddright{S^{18}}{s\cdot\bar{\nu}_{7}\nu_{15}}{S^{7}}{}{E}\) for some $s\in\mathbb{Z}/2\mathbb{Z}$. \end{lemma} \begin{proof} Substituting the homotopy equivalence in Lemma~\ref{C11decomp} into the double suspension of~(\ref{Ccofib3}) gives a homotopy cofibration \(\nameddright{S^{17}}{\Sigma^{2} g_{3}}{S^{6}\vee S^{11}\vee S^{13}}{}{\Sigma^{2} C}\). By the Hilton-Milnor Theorem, $\Sigma^{2} g_{3}\simeq a+b+c+d$ where $a$, $b$ and $c$ are obtained by composing $\Sigma^{2} g_{3}$ with the pinch maps to $S^{6}$, $S^{11}$ and~$S^{13}$ respectively, and $d$ is a composite \(\nameddright{S^{17}}{}{S^{16}}{w}{S^{6}\vee S^{11}\vee S^{13}}\). Here, $w$ is the Whitehead product of the identity maps on $S^{6}$ and $S^{11}$. As $\Sigma w$ is null homotopic, we instead consider \[\nameddright{S^{18}}{\Sigma^{3} g_{3}}{S^{7}\vee S^{12}\vee S^{14}} {}{\Sigma^{3} C}\] where $\Sigma^{3} g_{3}\simeq\Sigma a+\Sigma b+\Sigma c$. By Proposition~\ref{Cstable}, $C$ is stably homotopy equivalent to a wedge of spheres, so $\Sigma^{3} g_{3}\simeq\Sigma a+\Sigma b+\Sigma c$ is stably trivial. Thus, for dimensional reasons, each of $\Sigma a$, $\Sigma b$ and $\Sigma c$ is stably trivial. Observe that both $\Sigma b$ and $\Sigma c$ are in the stable range, implying that they are null homotopic. On the other hand, $\Sigma a$ represents a class in $\pi_{18}(S^{7})$. By~\cite{To}, $\pi_{18}(S^{7})\cong\mathbb{Z}/8\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$ where the order $8$ generator is the stable class $\zeta_{7}$ and the order~$2$ generator is the unstable class $\bar{\nu}_{7}\nu_{15}$. 
Note too that the stable order of $\zeta_{7}$ is $8$, so the only nontrivial unstable class in $\pi_{18}(S^{7})$ is $\bar{\nu}_{7}\nu_{15}$. As $\Sigma a$ is stably trivial, we obtain $\Sigma a=s\cdot\bar{\nu}_{7}\nu_{15}$ for some $s\in\mathbb{Z}/2\mathbb{Z}$. Hence $\Sigma^{3} g_{3}$ factors as the composite \(\llnamedright{S^{18}}{s\cdot\bar{\nu}_{7}\nu_{15}}{S^{7}}\hookrightarrow S^{7}\vee S^{12}\vee S^{14}\), from which the asserted homotopy decomposition of $\Sigma^{3} C$ follows. \end{proof} Next, we identify $\Sigma^{3} j$. Let \[\iota\colon\namedright{S^{7}}{}{E}\] be the inclusion of the bottom cell. \begin{lemma} \label{jclass} There is a homotopy commutative diagram \[\diagram S^{8}\vee S^{10}\vee S^{15}\rto^-{\simeq}\dto^{a+b+c} & \Sigma^{3}(S^{5}\times S^{7})\dto^{\Sigma^{3} j} \\ E\vee S^{12}\vee S^{14}\rto^-{\simeq} & \Sigma^{3} C \enddiagram\] where $a$, $b$ and $c$ respectively are the composites \begin{align*} & a\colon\nameddright{S^{8}}{\eta_{7}}{S^{7}}{\iota}{E}\hookrightarrow E\vee S^{12}\vee S^{14} \\ & b\colon\nameddright{S^{10}}{\nu'_{7}}{S^{7}}{\iota}{E}\hookrightarrow E\vee S^{12}\vee S^{14} \\ & c\colon\lllnameddright{S^{15}}{\psi+\nu'_{12}+\eta_{14}}{S^{7}\vee S^{12}\vee S^{14}} {\iota\vee 1\vee 1}{E\vee S^{12}\vee S^{14}} \end{align*} and $\psi=t\cdot\sigma'\eta_{14}$ for some $t\in\mathbb{Z}/2\mathbb{Z}$. \end{lemma} \begin{proof} By Proposition~\ref{Cstable}, the diagram in the statement of the lemma stably homotopy commutes if~$c$ is replaced by the composite \(c'\colon\lllnameddright{S^{15}}{\ast+\nu'_{12}+\eta_{14}}{S^{7}\vee S^{12}\vee S^{14}} {\iota\vee 1\vee 1}{E\vee S^{12}\vee S^{14}}\). Since $a$ and $b$ are in the stable range, the diagram in the statement of the lemma therefore does homotopy commute when restricted to $S^{8}\vee S^{10}$. However, $c'$ is not in the stable range. It fails to be so only by a map \(\psi''\colon\namedright{S^{15}}{}{S^{7}}\). 
Thus if $c''$ is the composite \(c''\colon\lllnameddright{S^{15}}{\psi''+\nu'_{12}+\eta_{14}}{S^{7}\vee S^{12}\vee S^{14}} {\iota\vee 1\vee 1}{E\vee S^{12}\vee S^{14}}\) then the diagram in the statement of the lemma homotopy commutes with $c$ replaced by $c''$. More can be said. By~\cite{To} (stated later also in~(\ref{SU4groups})), $\pi_{15}(S^{7})\cong\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ with generators $\sigma'\eta_{14}$, $\bar{\nu}_{7}$ and $\epsilon_{7}$. Thus $\psi''=t\cdot\sigma'\eta_{14}+u\cdot\bar{\nu}_{7} + v\cdot\epsilon_{7}$ for some $t,u,v\in\mathbb{Z}/2\mathbb{Z}$. The generators $\bar{\nu}_{7}$ and $\epsilon_{7}$ are stable while $\sigma'\eta_{14}$ is stably trivial. So as $c''$ stabilizes to $c'$, we must have $\psi''$ stabilizing to the trivial map. Thus $u$ and $v$ must be zero. Hence $\psi''=t\cdot\sigma'\eta_{14}$. Now $c''$ is exactly the map $c$ described in the statement of the lemma. \end{proof} \section{Preliminary information on the homotopy groups of $SU(4)$} \label{sec:htpygroups} This section records some information on the homotopy groups of $SU(4)$ which will be needed subsequently. There is a homotopy fibration \[\nameddright{S^{3}}{i}{SU(4)}{q}{S^{5}\times S^{7}}.\] This induces a long exact sequence of homotopy groups \[\cdots\longrightarrow\namedddright{\pi_{n+1}(S^{5}\times S^{7})}{} {\pi_{n}(S^{3})}{i_{\ast}}{\pi_{n}(SU(4))}{q_{\ast}}{\pi_{n}(S^{5}\times S^{7})} \longrightarrow\cdots\] Following~\cite{MT}, the notation $[\alpha\oplus\beta]\in\pi_{n}(SU(4))$ means that $[\alpha\oplus\beta]$ is a generator of $\pi_{n}(SU(4))$ with the property that $q_{\ast}([\alpha\oplus\beta])=\alpha\oplus\beta$ for $\alpha\in\pi_{n}(S^{5})$ and $\beta\in\pi_{n}(S^{7})$. The homotopy groups of $SU(4)$ in low dimensions were determined by Mimura and Toda~\cite{MT}. 
The information presented will be split into two parts, the first corresponding to subsequent calculations involving $\pi_{8}(SU(4))$ and $\pi_{10}(SU(4))$, and the second corresponding to calculations involving $\pi_{15}(SU(4))$. First, for $r\geq 1$, let \(\underline{2}^{r}\colon\namedright{S^{7}}{}{S^{7}}\) be the map of degree $2^{r}$. In general, the degree two map on $S^{2n+1}$ need not induce multiplication by $2$ in homotopy groups. However, as $S^{7}$ is an $H$-space, the degree $2$ map on $S^{7}$ is homotopic to the $2^{nd}$-power map, implying that it does in fact induce multiplication by $2$ in homotopy groups. We record this for later use. \begin{lemma} \label{S72} The map \(\namedright{S^{7}}{\underline{2}}{S^{7}}\) induces multiplication by $2$ in homotopy groups.~$\hfill\Box$ \end{lemma} \subsection{Dimensions $8$ and $10$} The relevant table of homotopy groups from~\cite{MT} is: \begin{equation} \label{SU4groups1} \begin{tabular}{|c|c|c|c|}\hline & $\pi_{7}(SU(4))$ & $\pi_{8}(SU(4))$ & $\pi_{10}(SU(4))$ \\ \hline $2$-component & $\mathbb{Z}$ & $\mathbb{Z}/8\mathbb{Z}$ & $\mathbb{Z}/8\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$ \\ \hline generators & $[\eta_{5}^{2}\oplus\underline{2}]$ & $[\nu_{5}\oplus\eta_{7}]$ & $[\nu_{7}]$, $[\nu_{5}\eta_{8}^{2}]$ \\ \hline \end{tabular} \end{equation} In addition, Mimura and Toda~\cite[Lemma 6.2(i)]{MT} proved that \(\namedright{\pi_{n+1}(S^{5}\times S^{7})}{}{\pi_{n}(S^{3})}\) is an epimorphism for $n\in\{8,10\}$, implying the following. \begin{lemma} \label{pi810inj} The map \(\namedright{\pi_{n}(SU(4))}{q_{\ast}}{\pi_{n}(S^{5}\times S^{7})}\) is an injection for $n\in\{8,10\}$.~$\hfill\Box$ \end{lemma} Toda~\cite{To} proved the following relations in the homotopy groups of spheres. \begin{lemma} \label{Todarelns1} The following hold: \begin{letterlist} \item $2\nu'_{3}\simeq\eta_{3}^{3}$; \item $4\nu_{5}\simeq\eta_{5}^{3}$; \item $\eta_{3}\nu'_{3}\simeq\ast$. 
\end{letterlist} \end{lemma} \vspace{-1cm}~$\hfill\Box$\medskip For convenience, let \[d\colon\namedright{S^{7}}{}{SU(4)}\] represent the generator $[\eta_{5}^{2}\oplus\underline{2}]$ of $\pi_{7}(SU(4))$. \begin{lemma} \label{S810dgrms} There are homotopy commutative diagrams \[\diagram S^{8}\rto^-{[\nu_{5}\oplus\eta_{7}]}\dto^{\eta_{7}} & SU(4)\dto^{4} & & S^{10}\rto^-{[\nu_{7}]}\dto^{\nu'_{7}} & SU(4)\dto^{4} \\ S^{7}\rto^-{d} & SU(4) & & S^{7}\rto^-{d} & SU(4). \enddiagram\] \end{lemma} \begin{proof} By Lemma~\ref{pi810inj}, \(\namedright{\pi_{n}(SU(4))}{q_{\ast}}{\pi_{n}(S^{5}\times S^{7})}\) is an injection for $n\in\{8,10\}$. So in both cases it suffices to show that the asserted homotopies hold after composition with \(\namedright{SU(4)}{q}{S^{5}\times S^{7}}\). Since the composite \(\nameddright{S^{7}}{d}{SU(4)}{q}{S^{5}\times S^{7}}\) is $\eta_{5}^{2}\times\underline{2}$, the two assertions will follow if we prove: (i) $(\eta_{5}^{2}\times\underline{2})\circ\eta_{7}\simeq q\circ 4\circ[\nu_{5}\oplus\eta_{7}]$; (ii) $(\eta_{5}^{2}\times\underline{2})\circ\nu'_{7}\simeq q\circ 4\circ[\nu_{7}]$. By Lemma~\ref{S72}, $\underline{2}\circ\eta_{7}\simeq 2\eta_{7}$ and $\underline{2}\circ\nu'_{7}\simeq 2\nu'_{7}$. Since $\eta_{7}$ has order~$2$ we obtain $\underline{2}\circ\eta_{7}\simeq\ast$. By Lemma~\ref{Todarelns1}~(a), $2\nu'_{7}\simeq\eta_{7}^{3}$. Thus (i) and (ii) reduce to proving: (i$^{\prime}$) $\eta_{5}^{3}\simeq q\circ 4\circ[\nu_{5}\oplus\eta_{7}]$; (ii$^{\prime}$) $\eta_{7}^{3}\simeq q\circ 4\circ[\nu_{7}]$. Consider the diagram \[\diagram S^{8}\rto^-{[\nu_{5}\oplus\eta_{7}]}\dto^{\underline{4}} & SU(4)\dto^{4} \\ S^{8}\rto^-{[\nu_{5}\oplus\eta_{7}]}\drto_{\nu_{5}\times\eta_{7}} & SU(4)\dto^{q} \\ & S^{5}\times S^{7}. \enddiagram\] The top square homotopy commutes since the multiplications in $[S^{8},SU(4)]$ induced by the $H$-structure on $SU(4)$ and the co-$H$-structure on $S^{8}$ coincide. 
The bottom square homotopy commutes by definition of $[\nu_{5}\oplus\eta_{7}]$. Since $\eta_{7}$ has order $2$ and, by Lemma~\ref{Todarelns1}~(b), $4\nu_{5}\simeq\eta_{5}^{3}$, we obtain $(\nu_{5}\times\eta_{7})\circ\underline{4}\simeq\eta_{5}^{3}$. Therefore $q\circ 4\circ [\nu_{5}\oplus\eta_{7}]\simeq\eta_{5}^{3}$, and so (i$^{\prime}$) holds. Next, consider the diagram \[\diagram S^{10}\rto^-{[\nu_{7}]}\dto^{\underline{4}} & SU(4)\dto^{4} \\ S^{10}\rto^-{[\nu_{7}]}\drto_{\ast\times\nu_{7}} & SU(4)\dto^{q} \\ & S^{5}\times S^{7}. \enddiagram\] The two squares homotopy commute as in the previous case. By Lemma~\ref{Todarelns1}~(b), $4\nu_{7}\simeq\eta_{7}^{3}$. Therefore $q\circ 4\circ[\nu_{7}]\simeq\eta_{7}^{3}$, and so (ii$^{\prime}$) holds. \end{proof} \subsection{Dimension $15$} The relevant table of homotopy groups from~\cite{MT} is: \begin{equation} \label{SU4groups} \begin{tabular}{|c|c|c|c|}\hline & $\pi_{12}(SU(4))$ & $\pi_{14}(SU(4))$ & $\pi_{15}(SU(4))$ \\ \hline $2$-component & $\mathbb{Z}/4\mathbb{Z}$ & $\mathbb{Z}/16\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$ & $\mathbb{Z}/8\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$ \\ \hline generators & $[\sigma^{'''}]$ & $[\eta_{5}\epsilon_{6}\oplus\sigma']$, $[\nu_{5}^{2}]\circ\nu_{11}$ & $[\nu_{5}\oplus\eta_{7}]\circ\sigma_{8}$, $[\sigma'\eta_{14}]$ \\ \hline \end{tabular} \end{equation} In addition, Mimura and Toda~\cite[Lemma 6.2(i)]{MT} proved that \(\namedright{\pi_{16}(S^{5}\times S^{7})}{}{\pi_{15}(S^{3})}\) is an epimorphism, implying the following. 
\begin{lemma} \label{pi15inj} The map \(\namedright{\pi_{15}(SU(4))}{q_{\ast}}{\pi_{15}(S^{5}\times S^{7})}\) is an injection.~$\hfill\Box$ \end{lemma} The next table gives some information on the $2$-primary components of selected homotopy groups of spheres that were determined by Toda~\cite{To}: \begin{equation} \label{S15groups} \begin{tabular}{|c|c|c|c|}\hline & $\pi_{15}(S^{7})$ & $\pi_{15}(S^{12})$ & $\pi_{15}(S^{14})$ \\ \hline $2$-component & $\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z} \oplus\mathbb{Z}/2\mathbb{Z}$ & $\mathbb{Z}/8\mathbb{Z}$ & $\mathbb{Z}/2\mathbb{Z}$ \\ \hline generators & $\sigma'\eta_{14}$, $\bar{\nu}_{7}$, $\epsilon_{7}$ & $\nu_{12}$ & $\eta_{14}$ \\ \hline \end{tabular} \end{equation} In addition, Toda~\cite{To} proved the following relations (the proofs are scattered through Toda's book but a summary list can be found in~\cite[Equations 1.1 and 2.1]{O}). \begin{lemma} \label{Todarelns} The following hold: \begin{letterlist} \item $\eta_{5}\bar{\nu}_{6}=\nu_{5}^{3}$; \item $\eta_{3}\nu_{4}=\nu'_{3}\eta_{6}$; \item $\eta_{6}\sigma'=4\bar{\nu}_{6}$; \item $\eta_{6}\nu_{7}=\nu_{6}\eta_{9}=0$; \item $\sigma^{'''}\nu_{12}=4(\nu_{5}\sigma_{8})$. \end{letterlist} \end{lemma} \vspace{-1cm}~$\hfill\Box$\medskip Lemma~\ref{Todarelns} is used to obtain two more relations. \begin{lemma} \label{sphererelns} The following hold: \begin{letterlist} \item $\eta_{5}^{2}\bar{\nu}_{7}=0$; \item $\eta_{5}^{2}\sigma'=0$. \end{letterlist} \end{lemma} \begin{proof} In what follows, we freely use the fact that the relations in Lemma~\ref{Todarelns} imply analogous relations for their suspensions; for example, $\eta_{5}\bar{\nu}_{6}=\nu_{5}^{3}$ implies that $\eta_{6}\bar{\nu}_{7}=\nu_{6}^{3}$. For part~(a), the relations in Lemma~\ref{Todarelns}~(a), (b) and (d) respectively imply the following string of equalities: $\eta_{5}^{2}\bar{\nu}_{7}=\eta_{5}\nu_{6}^{3}=\nu'_{5}\eta_{8}\nu_{9}^{2}=0$. 
For part~(b), Lemma~\ref{Todarelns}~(c) and the fact that $\eta_{5}$ has order~$2$ imply that there are equalities $\eta_{5}^{2}\sigma'=\eta_{5}(4\bar{\nu}_{6})=0$. \end{proof} We now determine the homotopy classes of two maps into $SU(4)$. \begin{lemma} \label{SU4relns} The following hold: \begin{letterlist} \item the composite \(\nameddright{S^{15}}{\bar{\nu}_{7}}{S^{7}}{d}{SU(4)}\) is null homotopic; \item the composite \(\lnameddright{S^{15}}{\sigma'\eta_{14}}{S^{7}}{d}{SU(4)}\) is null homotopic. \end{letterlist} \end{lemma} \begin{proof} By Lemma~\ref{pi15inj}, \(\namedright{\pi_{15}(SU(4))}{q_{\ast}}{\pi_{15}(S^{5}\times S^{7})}\) is an injection. So in both cases it suffices to show that the assertions hold after composition with \(\namedright{SU(4)}{q}{S^{5}\times S^{7}}\). Since the composite \(\nameddright{S^{7}}{d}{SU(4)}{q}{S^{5}\times S^{7}}\) is $\eta_{5}^{2}\times\underline{2}$, the two assertions will follow if we prove: (a$^{\prime}$) $(\eta_{5}^{2}\times\underline{2})\circ\bar{\nu}_{7}\simeq\ast$; (b$^{\prime}$) $(\eta_{5}^{2}\times\underline{2})\circ\sigma'\eta_{14}\simeq\ast$. \noindent By Lemma~\ref{S72}, the degree two map on $S^{7}$ induces multiplication by $2$ on homotopy groups, so as both $\bar{\nu}_{7}$ and $\sigma'\eta_{14}$ have order~$2$, it suffices to prove: (a$^{\prime\prime}$) $\eta_{5}^{2}\bar{\nu}_{7}\simeq\ast$; (b$^{\prime\prime}$) $\eta_{5}^{2}\sigma'\eta_{14}\simeq\ast$. \noindent Part~(a$^{\prime\prime}$) is the statement of Lemma~\ref{sphererelns}~(a) and part~(b$^{\prime\prime}$) is immediate from Lemma~\ref{sphererelns}~(b). \end{proof} One consequence of Lemma~\ref{SU4relns} is the existence of an extension involving the space $E$ appearing in the homotopy decomposition of $\Sigma^{3} C$ in Lemma~\ref{Cdecomp}. \begin{lemma} \label{Eext} There is an extension \[\diagram S^{7}\rto^-{d}\dto^{\iota} & SU(4) \\ E\urto_-{e} & \enddiagram\] for some map $e$. 
\end{lemma} \begin{proof} By Lemma~\ref{Cdecomp}, there is a homotopy cofibration \(\llnameddright{S^{18}}{t\cdot\bar{\nu}_{7}\nu_{15}}{S^{7}}{}{E}\) for some $t\in\mathbb{Z}/2\mathbb{Z}$. By Lemma~\ref{SU4relns}~(a), $d\circ\bar{\nu}_{7}$ is null homotopic. Therefore $d\circ (t\cdot\bar{\nu}_{7}\nu_{15})$ is null homotopic, implying that the asserted extension exists. \end{proof} \section{The proof of Theorem~\ref{looppartialorder2}} \label{sec:proof} Recall from~(\ref{su4su2}) that \(\namedright{SU(4)}{\partial_{1}}{\Omega^{3}_{0} SU(4)}\) factors as the composite \(\nameddright{SU(4)}{\pi}{S^{5}\times S^{7}}{f}{\Omega^{3}_{0} SU(4)}\). Let \[f'\colon\namedright{\Sigma^{3}(S^{5}\times S^{7})}{}{SU(4)}\] be the triple adjoint of $f$. Let $f'_{1}$, $f'_{2}$ and $f'_{3}$ be the restrictions of the composite \[\nameddright{S^{8}\vee S^{10}\vee S^{15}}{\simeq}{\Sigma^{3}(S^{5}\times S^{7})} {f'}{SU(4)}\] to $S^{8}$, $S^{10}$ and $S^{15}$ respectively. We wish to identify $f'_{1}$, $f'_{2}$ and $f'_{3}$ more explicitly. Let \(t_{1}\colon\namedright{S^{5}}{}{SU(4)}\) and \(t_{2}\colon\namedright{S^{7}}{}{SU(4)}\) represent generators of $\pi_{5}(SU(4))\cong\mathbb{Z}$ and $\pi_{7}(SU(4))\cong\mathbb{Z}$ respectively. By~\cite{MT} these generators can be chosen so that $\pi\circ t_{1}$ is homotopic to $\underline{2}\oplus\ast$ and $\pi\circ t_{2}$ is homotopic to $\eta^{2}_{5}\oplus\underline{2}$. So there are homotopy commutative diagrams \begin{equation} \label{2Bottdgrms} \diagram S^{5}\rto^-{t_{1}}\drto_{\underline{2}\oplus\ast} & SU(4)\rto^-{\partial_{1}}\dto^{\pi} & \Omega^{3}_{0} SU(4)\ddouble & & S^{7}\rto^-{t_{2}}\drto_{\eta^{2}_{5}\oplus\underline{2}} & SU(4)\rto^-{\partial_{1}}\dto^{\pi} & \Omega^{3}_{0} SU(4)\ddouble \\ & S^{5}\times S^{7}\rto^-{f} & \Omega^{3}_{0} SU(4) & & & S^{5}\times S^{7}\rto^-{f} & \Omega^{3}_{0} SU(4). 
\enddiagram \end{equation} On the other hand, since the triple adjoint of $\partial_{1}$ is the Samelson product $\langle i,1\rangle$, the triple adjoint of $\partial_{1}\circ t_{j}$ is $\langle t_{j},1\rangle$ for $j=1,2$. Bott~\cite{B} calculated that both of these maps have order $4$. Thus the left diagram in~(\ref{2Bottdgrms}) implies that the restriction of $f$ to $S^{5}$ has order~$8$, and the right diagram in~(\ref{2Bottdgrms}) implies that the restriction of $f$ to $S^{7}$ has order~$8$. Thus, taking triple adjoints, $f'_{1}$ and $f'_{2}$ both have order~$8$. The order of $f'_{3}$ is not as clear. By~(\ref{SU4groups}), $\pi_{15}(SU(4))\cong\mathbb{Z}/8\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$, so $f'_{3}$ may have order~$8$. This ambiguity will be reflected in the alternative possibilities worked out below. Recall from Lemma~\ref{jclass} that there is a homotopy commutative diagram \[\diagram S^{8}\vee S^{10}\vee S^{15}\rto^-{\simeq}\dto^{a+b+c} & \Sigma^{3}(S^{5}\times S^{7})\dto^{\Sigma^{3} j} \\ E\vee S^{12}\vee S^{14}\rto^-{\simeq} & \Sigma^{3} C \enddiagram\] where $a$, $b$ and $c$ respectively are the composites \begin{align*} & a\colon\nameddright{S^{8}}{\eta_{7}}{S^{7}}{\iota}{E}\hookrightarrow E\vee S^{12}\vee S^{14} \\ & b\colon\nameddright{S^{10}}{\nu'_{7}}{S^{7}}{\iota}{E}\hookrightarrow E\vee S^{12}\vee S^{14} \\ & c\colon\lllnameddright{S^{15}}{\psi+\nu'_{12}+\eta_{14}}{S^{7}\vee S^{12}\vee S^{14}} {\iota\vee 1\vee 1}{E\vee S^{12}\vee S^{14}} \end{align*} and $\psi=t\cdot\sigma'\eta_{14}$ for some $t\in\mathbb{Z}/2\mathbb{Z}$. Let $c'$ be the composite \[c'\colon\lllnameddright{S^{15}}{\psi'+\nu'_{12}+\eta_{14}}{S^{7}\vee S^{12}\vee S^{14}} {\iota\vee 1\vee 1}{E\vee S^{12}\vee S^{14}}\] where $\psi'=t\cdot\sigma'\eta_{14}+\eta_{7}\sigma_{8}$. Let $\xi$ be the composite \[\xi\colon\nameddright{E\vee S^{12}\vee S^{14}}{}{E}{e}{SU(4)}\] where the left map is the pinch onto the first wedge summand and $e$ is the map from Lemma~\ref{Eext}. 
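Spelled out, the order computation for the restriction of $f$ to $S^{5}$ is the following routine argument. Precomposing a map out of a sphere with a degree~$k$ self-map multiplies its homotopy class by $k$, so the left diagram in~(\ref{2Bottdgrms}) gives, writing $f_{1}$ for the restriction of $f$ to $S^{5}$,
\[
2\cdot f_{1}\simeq f_{1}\circ\underline{2}\simeq f\circ(\underline{2}\oplus\ast)\simeq f\circ\pi\circ t_{1}\simeq\partial_{1}\circ t_{1}.
\]
Since $\partial_{1}\circ t_{1}$ has order $4$ by Bott's calculation, $f_{1}$ has order $8$; the right diagram gives the analogous statement for the restriction of $f$ to $S^{7}$.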
\begin{lemma} \label{alternative} There is a homotopy commutative diagram \[\diagram S^{8}\vee S^{10}\vee S^{15}\rrto^-{f'_{1}+f'_{2}+f'_{3}}\dto^{a+b+\gamma} & & SU(4)\dto^{4} \\ E\vee S^{12}\vee S^{14}\rrto^-{\xi} & & SU(4) \enddiagram\] where $\gamma$ may be chosen to be $c$ if the order of $f'_{3}$ is at most $4$ and $\gamma$ may be chosen to be $c'$ if the order of $f'_{3}$ is $8$. Further, in the latter case, the composite \(\namedddright{S^{15}}{\eta_{7}\sigma_{8}}{S^{7}}{\iota}{E}{e}{SU(4)}\) represents $4[\nu_{5}\oplus\eta_{7}]\circ\sigma_{8}$. \end{lemma} \begin{proof} First, consider the diagram \begin{equation} \label{dgrm1} \diagram S^{8}\vee S^{10}\rto^-{f'_{1}+f'_{2}}\dto^{\eta_{7}+\nu'_{7}} & SU(4)\dto^{4} \\ S^{7}\rto^-{d}\dto^{\iota} & SU(4)\ddouble \\ E\rto^-{e} & SU(4). \enddiagram \end{equation} Since $\pi_{8}(SU(4))\cong\mathbb{Z}/8\mathbb{Z}$ is generated by $[\nu_{5}\oplus\eta_{7}]$ and $f'_{1}$ has order~$8$, we must have $f'_{1}=u\cdot[\nu_{5}\oplus\eta_{7}]$ for some unit $u\in\mathbb{Z}/8\mathbb{Z}$. Thus $4f'_{1}\simeq 4[\nu_{5}\oplus\eta_{7}]$, so the restriction of the upper square in~(\ref{dgrm1}) to~$S^{8}$ homotopy commutes by Lemma~\ref{S810dgrms}. Similarly, since $\pi_{10}(SU(4))\cong\mathbb{Z}/8\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$ with $[\nu_{7}]$ being the generator of order~$8$, and $f'_{2}$ has order $8$, we must have $4f'_{2}\simeq 4[\nu_{7}]$, so the restriction of the upper square in~(\ref{dgrm1}) to $S^{10}$ homotopy commutes by Lemma~\ref{S810dgrms}. The lower square in~(\ref{dgrm1}) homotopy commutes by Lemma~\ref{Eext}. Now observe that the lower direction around~(\ref{dgrm1}) is the definition of $a+b$. Thus~(\ref{dgrm1}) implies that the diagram in the statement of the lemma homotopy commutes when restricted to $S^{8}\vee S^{10}$. 
Second, consider the diagram \begin{equation} \label{dgrm2} \diagram S^{15}\dto^{(t\cdot\sigma'\eta_{14}+\theta)+\nu'_{12}+\eta_{14}}\rrto^-{f'_{3}} & & SU(4)\ddto^{4} \\ S^{7}\vee S^{12}\vee S^{14}\dto^{\iota\vee 1\vee 1} & & \\ E\vee S^{12}\vee S^{14}\rrto^-{\xi} & & SU(4) \enddiagram \end{equation} where two possibilities for $\theta$ will be considered. In the lower direction around the diagram, by definition, $\xi$ is the composite \(\nameddright{E\vee S^{12}\vee S^{14}}{}{E}{e}{SU(4)}\) where the left map is the pinch onto the first wedge summand. By Lemma~\ref{Eext}, $e\circ\iota\simeq d$. Thus the lower direction around the diagram is homotopic to the composite \(\lllnameddright{S^{15}}{t\cdot\sigma'\eta_{14}+\theta}{S^{7}}{d}{SU(4)}\). By Lemma~\ref{SU4relns}~(b), $d\circ (t\cdot\sigma'\eta_{14})$ is null homotopic. Thus the lower direction around the diagram is in fact homotopic to the composite \(\nameddright{S^{15}}{\theta}{S^{7}}{d}{SU(4)}\). If $f'_{3}$ has order at most $4$ then $4f'_{3}$ is null homotopic. Taking $\theta$ to be the constant map shows that~(\ref{dgrm2}) homotopy commutes. Observe also that with this choice of $\theta$ the left column in~(\ref{dgrm2}) is the definition of $c$, so we obtain the diagram in the statement of the lemma when restricted to $S^{15}$. Now combining~(\ref{dgrm1}) and~(\ref{dgrm2}) we obtain the diagram asserted by the lemma. Suppose that $f'_{3}$ has order~$8$. Since $\pi_{15}(SU(4))\cong\mathbb{Z}/8\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$ with the order~$8$ generator being $[\nu_{5}\oplus\eta_{7}]\circ\sigma_{8}$, we obtain $4f'_{3}\simeq 4[\nu_{5}\oplus\eta_{7}]\circ\sigma_{8}$. Take $\theta=\eta_{7}\sigma_{8}$. We claim that $d\circ\theta\simeq 4[\nu_{5}\oplus\eta_{7}]\circ\sigma_{8}$. If so then~(\ref{dgrm2}) homotopy commutes with this choice of $\theta$ and, as the left column of~(\ref{dgrm2}) is the definition of $c'$, we obtain the diagram in the statement of the lemma when restricted to $S^{15}$. 
Therefore combining~(\ref{dgrm1}) and~(\ref{dgrm2}) we obtain the diagram asserted by the lemma. It remains to show that $d\circ\eta_{7}\sigma_{8}\simeq 4[\nu_{5}\oplus\eta_{7}]\circ\sigma_{8}$. By Lemma~\ref{pi15inj} it suffices to compose with \(\namedright{SU(4)}{q}{S^{5}\times S^{7}}\) and check there. On the one hand, $q\circ d\circ\eta_{7}\sigma_{8}\simeq (\eta_{5}^{2}\times\underline{2})\circ\eta_{7}\sigma_{8}\simeq \eta_{5}^{3}\sigma_{8}$, where the left homotopy holds by definition of $d$ and the right homotopy is due to the fact that $\eta_{7}$ has order~$2$ and, by Lemma~\ref{S72}, $\underline{2}$ induces multiplication by $2$ on homotopy groups. On the other hand, $q\circ 4[\nu_{5}\oplus\eta_{7}]\circ\sigma_{8}\simeq 4(\nu_{5}\times\eta_{7})\circ\sigma_{8}\simeq 4\nu_{5}\sigma_{8}\simeq \eta_{5}^{3}\sigma_{8}$. Here, from left to right, the first homotopy holds by definition of $[\nu_{5}\oplus\eta_{7}]$, the second holds since $\eta_{7}$ has order~$2$, and the third holds by~\cite{To}. Thus $d\circ\eta_{7}\sigma_{8}\simeq 4[\nu_{5}\oplus\eta_{7}]\circ\sigma_{8}$, as claimed. \end{proof} Now return to the map \(\namedright{SU(4)}{\partial_{1}}{\Omega_{0}^{3} SU(4)}\). \begin{proposition} \label{order4options} The following hold: \begin{letterlist} \item if $f'_{3}$ has order at most $4$ then $4\circ\partial_{1}$ is null homotopic; \item if $f'_{3}$ has order $8$ then $4\circ\partial_{1}$ is homotopic to the composite \(\namedddright{SU(4)}{\pi}{S^{5}\times S^{7}}{}{S^{12}}{4\chi} {\Omega^{3}_{0} SU(4)}\), where the middle map is the pinch map to the top cell and $\chi$ is the triple adjoint of the order~$8$ generator $[\nu_{5}\oplus\eta_{7}]\circ\sigma_{8}$ in $\pi_{15}(SU(4))$. \end{letterlist} \end{proposition} \begin{proof} If the order of $f'_{3}$ is at most~$4$, then in Lemma~\ref{alternative} we may take $\gamma=c$. 
Doing so, observe that by using the inverse equivalences in Lemma~\ref{jclass} we obtain a homotopy commutative diagram \begin{equation} \label{Cjdgrm} \diagram \Sigma^{3}(S^{5}\times S^{7})\rto^-{f'}\dto^{\Sigma^{3} j} & SU(4)\dto^{4} \\ \Sigma^{3} C\rto^-{\xi'} & SU(4) \enddiagram \end{equation} where $\xi'$ is the composite \(\nameddright{\Sigma^{3} C}{\simeq}{E\vee S^{12}\vee S^{14}}{\xi}{SU(4)}\). Now consider the diagram \[\diagram SU(4)\rto^-{\partial_{1}}\dto^{\pi} & \Omega^{3}_{0} SU(4)\ddouble \\ S^{5}\times S^{7}\rto^-{f}\dto^{j} & \Omega^{3}_{0} SU(4)\dto^{4} \\ C\rto & \Omega^{3}_{0} SU(4) \enddiagram\] The top square homotopy commutes by~(\ref{su4su2}) while the bottom square is the triple adjoint of~(\ref{Cjdgrm}). Since the left column consists of two consecutive maps in a homotopy cofibration sequence, its composite is null homotopic. The homotopy commutativity of the diagram therefore implies that~$4\circ\partial_{1}$ is null homotopic. If the order of $f'_{3}$ is $8$, then in Lemma~\ref{alternative} we may take $\gamma=c'$. Doing so, since $c'=c+\eta_{7}\sigma_{8}$, instead of~(\ref{Cjdgrm}) we obtain a homotopy commutative diagram \begin{equation} \label{Cj2dgrm} \diagram \Sigma^{3}(S^{5}\times S^{7})\rto^-{f'}\dto^{\Sigma^{3} j+\ell} & SU(4)\dto^{4} \\ \Sigma^{3} C\rto^-{\xi'} & SU(4) \enddiagram \end{equation} where $\ell$ is the composite \(\nameddright{\Sigma^{3}(S^{5}\times S^{7})}{}{S^{15}}{\eta_{7}\sigma_{8}} {S^{7}}\hookrightarrow\Sigma^{3} C\). Now consider the diagram \[\diagram \Sigma^{3} SU(4)\rto^-{\partial'_{1}}\dto^{\Sigma^{3}\pi} & SU(4)\ddouble \\ \Sigma^{3}(S^{5}\times S^{7})\rto^-{f'}\dto^{\Sigma^{3} j+\ell} & SU(4)\dto^{4} \\ \Sigma^{3} C\rto^-{\xi'} & SU(4) \enddiagram\] where $\partial'_{1}$ is the triple adjoint of $\partial_{1}$. The top square homotopy commutes by~(\ref{su4su2}) while the bottom square homotopy commutes by~(\ref{Cj2dgrm}). 
Since $\Sigma^{3}\pi$ and $\Sigma^{3} j$ are consecutive maps in a homotopy cofibration sequence, the composite $\Sigma^{3} j\circ\Sigma^{3}\pi$ is null homotopic. Thus this diagram implies that $4\circ\partial'_{1}$ is homotopic to the composite \(\namedddright{\Sigma^{3} SU(4)}{\Sigma^{3}\pi}{\Sigma^{3}(S^{5}\times S^{7})} {}{S^{15}}{\eta_{7}\sigma_{8}}{S^{7}}\hookrightarrow\namedright{\Sigma^{3} C} {\xi'}{SU(4)}\). Notice that the pinch map to the top cell \(\namedright{\Sigma^{3}(S^{5}\times S^{7})}{}{S^{15}}\) is a triple suspension, while by Lemma~\ref{alternative} the composite \(\namedright{S^{15}}{\eta_{7}\sigma_{8}}{S^{7}}\hookrightarrow\namedright{\Sigma^{3} C} {\xi'}{SU(4)}\) represents $4[\nu_{5}\oplus\eta_{7}]\circ\sigma_{8}$. Thus, taking triple adjoints, $4\circ\partial_{1}$ is homotopic to the composite \(\namedddright{SU(4)}{\pi}{S^{5}\times S^{7}}{}{S^{12}}{4\chi}{\Omega^{3}_{0} SU(4)}\), as asserted. \end{proof} \begin{remark} It can be checked that if $f'_{3}$ has order~$8$ then there is no different choice of the map~$\xi$ which makes $\xi\circ(a+b+c)\simeq 4f'$ in Lemma~\ref{alternative}. The argument is to check all possible cases; it is not included as it is not needed. However, it does imply that $4\circ\partial_{1}$ is nontrivial; for if it were trivial then $4\circ\partial_{1}\simeq 4\circ f\circ\pi$ would have to factor through the cofibre $C$ of $\pi$, implying that there has to be a choice of $\xi$ such that $\xi\circ(a+b+c)\simeq 4f'$. \end{remark} \begin{theorem} \label{partialorder4} The following hold: \begin{letterlist} \item if $f'_{3}$ has order at most $4$ then $\partial_{1}$ has order $4$; \item if $f'_{3}$ has order $8$ then $\Omega\partial_{1}$ has order $4$. \end{letterlist} \end{theorem} \begin{proof} By Proposition~\ref{order4options}, if $f'_{3}$ has order at most $4$ then $4\circ\partial_{1}$ is null homotopic, implying that $\partial_{1}$ has order at most $4$. On the other hand, by Lemma~\ref{HKlemma}, the order of $\partial_{1}$ is divisible by $4$. Thus $\partial_{1}$ has order $4$. 
Next, in general, the quotient map \(\namedright{X\times Y}{Q}{X\wedge Y}\) is null homotopic after looping. For if \(i\colon\namedright{X\vee Y}{}{X\times Y}\) is the inclusion of the wedge into the product, then $Q\circ i$ is null homotopic, but by the Hilton-Milnor Theorem $\Omega i$ has a right homotopy inverse. In our case, if $f'_{3}$ has order $8$ then Proposition~\ref{order4options} states that $4\circ\partial_{1}$ factors through the quotient map \(\namedright{S^{5}\times S^{7}}{Q}{S^{5}\wedge S^{7}\simeq S^{12}}\). Thus $4\Omega\partial_{1}$ is null homotopic. Consequently, $\Omega\partial_{1}$ has order at most $4$. By Lemma~\ref{loopHKlemma}, the order of~$\Omega\partial_{1}$ is divisible by $4$. Thus $\Omega\partial_{1}$ has order $4$. \end{proof} \begin{proof}[Proof of Theorem~\ref{looppartialorder2}] Theorem~\ref{partialorder4} implies that in either case the $2$-primary component of the order of $\Omega\partial_{1}$ is $4$. \end{proof} \bibliographystyle{amsplain}
\section{Global Meshing by Labatut~et~al.} \label{sec:base_method} Our base method is a global meshing approach by Labatut~et~al.~\cite{labatut07delaunay}, which requires a point cloud with visibility information as input (i.e. which point was reconstructed using which cameras/images). With this data, they first compute the Delaunay tetrahedralization of the point cloud. This leads to a set of tetrahedra which are connected to their neighbors through their facets. It has been shown that, if the sampling is dense enough, this tetrahedralization contains a good approximation of the real surface~\cite{amenta98}. The main idea of~\cite{labatut07delaunay} is to construct a dual graph representation of the Delaunay tetrahedralization and perform graph cut optimization on this dual graph to extract a surface (a visual representation of the dual graph is plotted in Fig.~\ref{fig:graph_init}). They formulated the problem such that, after the optimization, each tetrahedron is labeled as either \emph{inside} or \emph{outside}. This results in a watertight surface, which is the minimum cut of the graph cut optimization and represents the transition between tetrahedra labeled as \emph{inside} and \emph{outside}. The following optimization problem is solved by the graph cut optimization to find the surface $\mathcal{S}$: \begin{equation} \argmin_{\mathcal{S}} E_{vis}(\mathcal{S}) + \alpha \cdot E_{smooth}(\mathcal{S}) \end{equation} where $E_{vis}(\mathcal{S})$ is the data term and represents the penalties for the visibility constraint violations (i.e. ray conflicts, see Fig.~\ref{fig:graph_init}.c). $E_{smooth}(\mathcal{S})$ is the regularization term and is the sum of all smoothness penalties across the surface. $\alpha$ is a factor that balances the data and the regularization term and thus controls the degree of smoothness. 
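To make the energy concrete, the following toy sketch minimises $E_{vis} + \alpha \cdot E_{smooth}$ over all inside/outside labelings by exhaustive enumeration. It is our own illustration, not the implementation of~\cite{labatut07delaunay}: the unary visibility penalties are made up, and a real system replaces the enumeration by an s-t min-cut on the dual graph of the tetrahedralization.

```python
from itertools import product

def surface_energy(labels, unary, facets, alpha):
    """E_vis + alpha * E_smooth for one inside/outside labeling.

    labels : tuple of 0/1 per tetrahedron (0 = outside, 1 = inside)
    unary  : unary[t][l] = visibility penalty for giving tetrahedron t label l
    facets : list of (t1, t2) pairs of tetrahedra sharing a facet
    """
    e_vis = sum(unary[t][l] for t, l in enumerate(labels))
    # every facet whose two tetrahedra disagree lies on the extracted surface;
    # here each such facet contributes a constant smoothness penalty
    e_smooth = sum(1 for t1, t2 in facets if labels[t1] != labels[t2])
    return e_vis + alpha * e_smooth

def extract_surface(unary, facets, alpha=1e-4):
    """Exhaustively minimise the energy; a stand-in for the graph cut."""
    n = len(unary)
    best = min(product((0, 1), repeat=n),
               key=lambda ls: surface_energy(ls, unary, facets, alpha))
    # the surface is the set of facets between inside and outside tetrahedra
    return best, [f for f in facets if best[f[0]] != best[f[1]]]

# four tetrahedra in a row; visibility strongly favors 0,1 outside and 2,3 inside
unary = [[0, 3], [0, 2], [2, 0], [3, 0]]   # [cost if outside, cost if inside]
facets = [(0, 1), (1, 2), (2, 3)]
labels, surface = extract_surface(unary, facets, alpha=1e-4)
print(labels, surface)   # (0, 0, 1, 1) with surface [(1, 2)]
```

The small $\alpha$ mirrors the constant per-facet regularization cost used in the paper: it breaks ties toward smoother cuts without overriding the visibility data term.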
Many ways have been proposed to set these energy terms~\cite{hiep09mvs,hoppe13incmeshing,jancosek11,labatut07delaunay,labatut09surface_range_data,vu12dense_mvs}. In an evaluation~\cite{mostegel12} on the Strecha dataset~\cite{strecha08dataset}, we found that a constant visibility cost (Fig.~\ref{fig:graph_init}.c) and a small constant regularization cost (per edge/facet) lead to very accurate results. Thus we used this energy formulation with $\alpha = 10^{-4}$ in all our experiments. Note that this base energy formulation is not crucial for our approach and can be replaced by other methods. \section{Conclusion} In this paper we presented a hybrid approach between volumetric and Delaunay-based surface reconstruction approaches. This formulation gives our approach the unique ability to handle multi-scale point clouds of any size with a constant memory usage. The number of points per voxel is the only relevant parameter of our approach, which directly represents the trade-off between completeness and memory consumption. In our experiments, we were thus able to reconstruct a consistent surface mesh on a dataset with 2 billion points and a scale variation of more than 4 orders of magnitude, requiring less than 9GB of RAM per process. Our other experiments demonstrated that, despite the low memory usage, our approach is still extremely resilient to outlier fragments and vast scale changes, and is highly competitive in accuracy and completeness with the state-of-the-art in multi-scale surface reconstruction. \section{Experiments} We split our experiments into three parts. First, we present qualitative results on a publicly available multi-scale dataset~\cite{fuhrmann2014mve} and a new cultural heritage dataset with an extreme density diversity (from 1m to 50$\mu$m). Second, we evaluate our approach quantitatively on the Middlebury dataset~\cite{seitz2006comparison} and the DTU dataset~\cite{aanaes2016large}. 
Third, we assess the breakdown behavior of our approach in a synthetic experiment, where we iteratively increase the point density ratio between neighboring voxels up to a factor of 4096. For all our experiments, we use the same set of parameters. The most interesting parameter is the maximum number of points per voxel (further referred to as "leaf size"), which represents the trade-off between completeness and memory usage. We set this parameter to 128k points, which keeps the memory consumption per process below 9GB. Only in our first experiment (Citywall) do we vary this parameter to assess its sensitivity (which turns out to be very low). As detailed in our technical report~\cite{mostegel12}, the base method per se is not able to handle Gaussian noise without a loss of accuracy. Thus, we apply simple pre- and post-processing steps for the reduction of Gaussian noise. As a pre-processing step, we apply scale-sensitive point fusion. Points are iteratively and randomly drawn from the set of all points within a voxel. For each drawn point, we fuse the k-nearest neighbors within a radius of 3 times the point scale (points cannot be fused twice). This step can be seen as the non-volumetric equivalent to the fusion of points on a fixed voxel grid. The k-nn criterion merely prevents uncertain points from deleting too many more accurate points. We select k such that all points of similar scale within the radius are fused if no significantly finer scale is present (leading to $k=20$). As post-processing, we apply two iterations of HC-Laplacian smoothing~\cite{vollmer1999improved}. Both pre- and post-processing are computationally negligible compared to the meshing itself (less than 1\% of the run-time). All experiments reporting timings were run on a server with 210GB accessible RAM and 2 Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, which totals 40 virtual cores. For merging the local solutions back together, we process the local solutions on a voxel basis. 
If patch candidates extend into other voxels, these voxels are locked to avoid race conditions. To minimize resource conflicts, we randomly choose voxels which are delegated to worker processes. The number of worker processes is adjusted to fit the memory of the host machine. \subsection{Qualitative Evaluation} For multi-scale 3D reconstruction no benchmark with ground truth currently exists. Thus, qualitative results are the most important indicator for comparing 3D multi-scale meshing approaches. On two datasets, we compare our approach to two state-of-the-art multi-scale meshing approaches. The first approach (FSSR~\cite{fuhrmann14}) is a completely local approach, whereas the second approach (GDMR~\cite{ummenhofer15}) contains a global optimization. For our approach, we only use a single ray per 3D point (from the camera of the depthmap). \vspace{-15pt} \paragraph{Citywall dataset.} The Citywall dataset~\cite{fuhrmann2014mve} is publicly available and consists of 564 images, which were taken in a hand-held manner and contain a large variation of camera-to-scene distance. As input we use a point cloud computed with the MVE~\cite{fuhrmann2014mve} pipeline on scale level 1, which resulted in 295M points. For this experiment, we used the same parameters as used in~\cite{ummenhofer15} for FSSR and GDMR. For FSSR, the "meshclean" routine was used as suggested in the MVE users guide with "-t10". Aside from the visual comparisons (Fig.~\ref{fig:citywall_comparison}), we use this dataset also to evaluate the impact of choosing different leaf sizes (maximum numbers of points per voxel) on the quality and completeness of the reconstruction (Fig.~\ref{fig:citywall_comparison}) and the memory consumption (Tab.~\ref{tab:citywall_mem_time}). \begin{table} \centering \begin{tabular}[b]{|c||c|c|c|c|} \hline leaf size & 512k & 128k & 32k & 8k \\\hline\hline Peak Mem [GB] & 25.3 & 8.9 & 3.1 & 2.2 \\\hline \end{tabular} \caption{Influence of octree leaf size. 
For a changing number of points in the octree leaves, we show the peak memory usage of a single process. More details in the supplementary~\cite{mostegel17}. } \label{tab:citywall_mem_time} \vspace{-15pt} \end{table} In terms of completeness, we can see in Fig.~\ref{fig:citywall_comparison} that our approach lies in between FSSR (a local approach) and GDMR (a global approach). The degree of completeness can be adjusted with the leaf size. A large leaf size leads to a very complete result, but the memory consumption is also significantly higher (see Tab.~\ref{tab:citywall_mem_time}). However, even with very small leaf sizes (8k points), the mesh is completely closed in densely sampled parts of the scene. If we compare the quality of the resulting mesh to FSSR and GDMR, we see that our approach preserves many more fine details and has significantly higher resilience against mutually supporting outliers (red circles). The degree of resilience declines gracefully when the leaf size becomes lower, and even for 8k points the output is, in this respect, at least as good as FSSR and GDMR. The drawback of our method is that the Gaussian noise level is somewhat higher compared to the other approaches, which could be reduced with more smoothing iterations. \vspace{-15pt} \paragraph{Valley dataset.} The Valley dataset is a cultural heritage dataset, where the images were taken on significantly different scale levels. The coarsest scale was recorded with a manned and motorized hang glider, the second scale level with a fixed-wing UAV (unmanned aerial vehicle), the third with an autonomous octocopter UAV~\cite{mostegel16b} and the finest scale with a terrestrial stereo setup~\cite{hoell15}. Each scale was reconstructed individually and then geo-referenced using offline differential GPS measurements of ground control points (GCPs), total station measurements of further GCPs and a prism on the stereo setup~\cite{alexander15}. 
The relative alignment was then fine-tuned with ICP (iterative closest point). On each scale level we densified the point cloud using SURE~\cite{rothermel12}, which was mainly developed for aerial reconstruction and is therefore ideally suited for this data. We compute the point scale for SURE analogously to MVE as the average 3D distance from a depthmap value to its neighbors (4-neighborhood). The resulting point clouds have the following size and ground sampling distance: Stereo setup (1127M points @ 43-47$\mu$m), octocopter UAV (46M points @ 3.5-15mm), fixed-wing UAV (162M points @ 3-5cm) and hang glider (572M points @ 10-100cm), which sums up to 1.9 billion points in total. This dataset is available~\cite{mostegel17}. On this dataset, FSSR and GDMR were executed with the standard parameters, which also obtained the "best" results for SURE input on the DTU dataset (see Sec.~\ref{ssec:quant_eval}). However, both approaches ran out of memory with these parameters on the evaluation machine with 210 GB RAM. To obtain any results for comparison we increased the scale parameter (in multiples of two) until the approaches could be successfully executed, which resulted in a scaling factor of 4 for FSSR and GDMR. The second problem of the reference implementations is that they use a maximum octree depth of 21 levels for efficient voxel indexing, but this dataset requires a greater depth. Thus both implementations ignore the finest scale level. To still evaluate the transition capabilities between octocopter and stereo scale, we also executed both approaches on only these two scale levels (marked as "only subset"). The overall runtimes were 1.5 days for GDMR, 0.5 days for FSSR and 9 days for our approach. One has to keep in mind that FSSR and GDMR had two octree levels fewer (a data reduction between 16 and 64), in addition to throwing away the lowest scale (half of the points). 
Furthermore, our approach only required 119GB of memory with 16 processes, whereas GDMR required 150GB and FSSR 170GB, despite the large data reduction. Per process our approach once again required less than 9GB. In Fig.~\ref{fig:valley_comparison} we show the results of this experiment. Note that even without the 2 coarser scale levels, both reference approaches are unable to consistently connect the lowest scale level. In contrast, our approach produces a single mesh that consistently connects all scale levels from 6 $km^2$ down to sub-millimeter density (see video~\cite{mostegel17}). \begin{figure*}[t] \centering \includegraphics[width=0.99\textwidth]{citywall_comparison.jpg} \caption{Visual comparison of the citywall dataset~\cite{fuhrmann2014mve}. From left to right, we first show the output of our approach with different values for the maximum number of points per octree node (ranging from 512k to only 8k). Then we show the results of the state-of-the-art methods GDMR~\cite{ummenhofer15} and FSSR~\cite{fuhrmann14}. The first four rows show similar view points as used in~\cite{ummenhofer15} for a fair comparison. The regions encircled in red highlight one of the benefits of our method, i.e. preserving small details while being highly resilient against mutually supporting outliers. Concerning the maximum leaf size, larger leaf sizes lead to more complete results with our approach (blue circles). However, our method is able to handle even very small leaf sizes (8k points) gracefully, with only a slight increase of holes and outliers. } \label{fig:citywall_comparison} \vspace{-8pt} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.99\textwidth]{valley_comparison.jpg} \caption{Valley dataset. From top to bottom, we traverse the vast scale changes of the reconstruction (from 6 $km^2$ down to 50 $\mu m$ sampling distance). From left to right, we show our results, GDMR~\cite{ummenhofer15} and FSSR~\cite{fuhrmann14}; with and without color. 
As GDMR and FSSR are both unable to handle the vast scale difference, we also show the results computed only with the point clouds of the octocopter UAV and the stereo setup as input (red boxes). In the last row, we show all meshes as wire frames to highlight the individual triangles (yellow boxes show the visualized region). Note that our approach consistently connects all scales. } \label{fig:valley_comparison} \vspace{-15pt} \end{figure*} \subsection{Quantitative Evaluation} \label{ssec:quant_eval} For the quantitative evaluation, we use the Middlebury~\cite{seitz2006comparison} and the DTU dataset~\cite{aanaes2016large}. Both datasets are single scale and have relatively small data sizes (Middlebury 96Mpix, DTU 94Mpix). However, they provide ground truth and allow us to show that our approach is highly competitive in terms of accuracy and completeness. \begin{table} \centering \begin{tabular}[b]{|c||c|c|c|c|c|} \hline Thr. & PSR~\cite{kazhdan06} & SSD~\cite{calakli11}& FSSR & GDMR & OURS \\\hline\hline 90\% & 0.36 & 0.38 & 0.40 & 0.42 & \bf{0.35} \\\hline 97\% & 0.56 & 0.56 & 0.63 & 0.61 & \bf{0.54} \\\hline 99\% & 0.84 & 0.75 & 0.84 & 0.78 & \bf{0.71} \\\hline \end{tabular} \caption{Accuracy on the Middlebury \emph{Temple Full} dataset. Results of other approaches were taken from~\cite{ummenhofer15}. Lower values are better.} \label{tab:temple} \vspace{-15pt} \end{table} \vspace{-15pt} \paragraph{Middlebury dataset.} Following in the footsteps of \cite{fuhrmann14,muecke11,ummenhofer15}, we evaluate our approach on the Middlebury \emph{Temple Full} dataset~\cite{seitz2006comparison}. This established benchmark consists of 312 images and a non-public ground truth. For fairness, we use the same evaluation approach as~\cite{ummenhofer15} and report our results for a point cloud computed with MVE~\cite{fuhrmann2014mve} in Tab.~\ref{tab:temple}. 
In this setup, our approach reaches the best accuracy on all accuracy thresholds with a very high completeness (for 1.25mm: OURS: 99.7\%, FSSR: 99.4\%, GDMR: 99.3\%). A visual comparison can be found in the supplementary~\cite{mostegel17}. Among all evaluated MVS approaches we are ranked second~\cite{seitz2006comparison} (as of March 10, 2017). Only~\cite{wei14} obtained a better accuracy in the evaluation, and they actually focus on generating better depthmaps and not surface reconstruction. \vspace{-15pt} \paragraph{DTU dataset.} The DTU Dataset~\cite{aanaes2016large} consists of 124 miniature scenes with 49/64 RGB images and structured-light ground truth for each scene. However, the ground truth contains a significant number of outliers, which in our opinion requires manual cleaning for delivering meaningful results. Thus, we hand-picked one of the scenes (No. 25) and manually removed obvious outliers (see supplementary~\cite{mostegel17}). We chose this scene as it contains many challenging structures (fences, umbrellas, tables, an arc and a detached sign-plate) in addition to a quite realistic facade model. On this data, we evaluate three different meshing approaches (FSSR~\cite{fuhrmann14}, GDMR~\cite{ummenhofer15} and OURS) on the point clouds of three state-of-the-art MVS algorithms (MVE~\cite{fuhrmann2014mve}, SURE~\cite{rothermel12} and PMVS~\cite{furukawa10pmvs}). For our approach, we used a maximum leaf size of 128k points, which results in a peak memory usage of below 9GB per process. For FSSR we swept the scale multiplication factor and for GDMR $\lambda_1$ and $\lambda_2$ in multiples of two. In Tab.~\ref{tab:dtu}, we compare our approach to the "best" values of FSSR and GDMR. With "best" we mean that the sum of the median accuracy and completeness is minimal over all evaluated parameters. A table with all evaluated parameters can be found in the supplementary~\cite{mostegel17}. 
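The accuracy and completeness figures reported in these benchmarks are distance statistics between point sets: accuracy measures distances from the reconstruction to the ground truth, completeness the reverse direction. The following minimal sketch is our own simplification (plain Euclidean nearest-neighbour distances on toy data; the official protocols additionally restrict the evaluation to observable regions):

```python
from statistics import mean, median

def nn_dists(src, dst):
    """For each point in src, the distance to its nearest neighbour in dst."""
    return [min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in dst)
            for p in src]

def eval_mesh(reconstructed, ground_truth):
    acc = nn_dists(reconstructed, ground_truth)   # accuracy: recon -> GT
    com = nn_dists(ground_truth, reconstructed)   # completeness: GT -> recon
    return {"MeanAcc": mean(acc), "MedAcc": median(acc),
            "MeanCom": mean(com), "MedCom": median(com)}

gt    = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
recon = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.0)]   # slightly off, misses one GT point
print(eval_mesh(recon, gt))
```

The missed ground-truth point inflates the completeness numbers but not the accuracy numbers, which is exactly the trade-off discussed for FSSR versus GDMR above.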
The results show that the relative performance of each approach is strongly influenced by the input point cloud. For PMVS input, our approach is ranked second in all factors, while FSSR obtains a higher accuracy at the cost of lower completeness and GDMR a higher completeness at the cost of lower accuracy. On SURE input, our approach performs worse than the other two. Note that in this scene, SURE produces a large number of mutually consistent outliers through extrapolation in texture-less regions. These outliers cannot be resolved with the visibility term, as all cameras observe the scene from the same side. For MVE input, our approach achieves the best rank in nearly all evaluated factors. \begin{table} \centering \begin{tabular}[b]{|c||c|c|c|c|} \hline \bf{MVE} & MeanAcc & MedAcc & MeanCom & MedCom \\\hline\hline FSSR & 0.673 (2) & 0.396 (3) & 0.430 (3)& 0.239 (1)\\\hline GDMR & 1.013 (3) & 0.275 (2) & 0.423 (2)& 0.284 (3)\\\hline OURS & 0.671 (1) & 0.262 (1) & 0.423 (1)& 0.279 (2)\\\hline\hline \bf{SURE} & MeanAcc & MedAcc & MeanCom & MedCom \\\hline\hline FSSR & 1.044 (1) & 0.490 (3)& 0.431 (1)& 0.257 (1)\\\hline GDMR & 1.099 (2) & 0.301 (1)& 0.519 (3)& 0.357 (2)\\\hline OURS & 1.247 (3) & 0.365 (2)& 0.509 (2)& 0.368 (3)\\\hline \hline \bf{PMVS} & MeanAcc & MedAcc & MeanCom & MedCom \\\hline\hline FSSR & 0.491 (1)& 0.318 (1)& 0.624 (3)& 0.395 (3)\\\hline GDMR & 0.996 (3)& 0.355 (3)& 0.537 (1)& 0.389 (1)\\\hline OURS & 0.626 (2)& 0.341 (2)& 0.567 (2)& 0.390 (2)\\\hline \end{tabular} \caption{Accuracy and completeness on scene 25 of the DTU dataset~\cite{aanaes2016large}. We evaluate three different meshing approaches (FSSR~\cite{fuhrmann14}, GDMR~\cite{ummenhofer15} and OURS) on the point clouds of three different MVS approaches (MVE~\cite{fuhrmann2014mve}, SURE~\cite{rothermel12} and PMVS~\cite{furukawa10pmvs}). For all evaluated factors (mean/median accuracy and mean/median completeness) lower values are better.
In brackets we show the relative rank.} \label{tab:dtu} \vspace{-15pt} \end{table} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{synthetic.png} \caption{Synthetic breakdown experiment. From left to right, we reduce the number of points in the center of the square. The number in the top row shows the density ratio between the outer parts of the square and the inner part. From top to bottom, we show the different steps of our approach ((1) collecting consistencies, (2) closing holes with patches and (3) graph cut-based hole filling). We colored the image background red to highlight the holes in the reconstruction. Note that after the first step, many holes exist exactly on the border of the octree nodes, which our further steps close or at least reduce. } \label{fig:synthetic} \vspace{-15pt} \end{figure} \subsection{Breakdown Analysis} In this experiment, we evaluate the limits of our approach with respect to point density jumps. Thus we construct an artificial worst-case scenario, i.e., a scenario where the density change happens exactly at the voxel border. Our starting point is a square plane on which we sample 2.4 million points and add Gaussian noise along the z-axis. The points are connected to 4 virtual cameras (visibility links), which are positioned fronto-parallel to the plane. We then successively reduce the number of points in the center of the plane by a factor of 2 until we detect the first holes in the reconstruction (which happened at a reduction factor of 64). Then we reduce the point density by a factor of 4 until a density ratio of 4096. In Fig.~\ref{fig:synthetic} we show the most relevant parts of the experiment. Up to a density ratio of 32, our approach is able to produce a hole-free mesh as output. If we compare this to a balanced octree (where the relative size of adjacent voxels is limited to a factor of two), we can perfectly cope with 8 times higher point densities.
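The reduction schedule above (halving until the first holes appear at a ratio of 64, then reducing by a factor of 4 up to 4096) can be written out explicitly; this is a sketch of the experimental protocol, not the authors' code:

```python
# Density-ratio schedule of the breakdown experiment: reduce the center
# density by a factor of 2 until the first holes appear (ratio 64), then
# by a factor of 4 until a ratio of 4096 is reached.
def density_ratios(first_holes_at=64, stop_at=4096):
    ratios, r = [], 1
    while r < first_holes_at:
        r *= 2
        ratios.append(r)
    while r < stop_at:
        r *= 4
        ratios.append(r)
    return ratios

print(density_ratios())  # [2, 4, 8, 16, 32, 64, 256, 1024, 4096]
```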
When the ratio becomes even higher, the number of holes at the transition rises gradually. In Fig.~\ref{fig:synthetic}, we can see that graph cut optimization is able to reduce the size of the remaining holes significantly, even for a density ratio of 4k. This means that even for extreme density ratios of over three orders of magnitude we can still provide a result, albeit one that contains a few holes at the transition. \section{Introduction} \begin{figure} \centering \vspace{-13pt} \includegraphics[width=0.95\columnwidth]{valley_multi_scale.pdf} \caption{From kilometer to sub-millimeter. Our approach is capable of computing a consistently connected mesh even in the presence of vast point density changes, while at the same time keeping a definable constant peak memory usage. } \vspace{-20pt} \label{fig:teaser} \end{figure} In this work we focus on surface reconstruction from multi-scale multi-view stereo (MVS) point clouds. These point clouds receive increasing attention as their computation only requires simple 2D images as input. Thus the same reconstruction techniques can be used for all kinds of 2D images independent of the acquisition platform, including satellites, airplanes, unmanned aerial vehicles (UAVs) and terrestrial mounts. These platforms allow capturing a scene in a large variety of resolutions (aka scale levels or levels of detail). A multicopter UAV alone can vary the level of detail by roughly two orders of magnitude. If the point clouds from different acquisition platforms are combined, there is no limit to the possible variety of point density and 3D uncertainty. Further, the size of these point clouds can be immense. State-of-the-art MVS approaches~\cite{furukawa10pmvs,galliani15,goesele07,rothermel12} compute 3D points in the order of the total number of acquired pixels. This means that they generate 3D points in the order of $10^7$ per image taken with a modern camera.
In a few hours, it is thus possible to acquire images that result in several billions of points. Extracting a consistent surface mesh from this immense amount of data is a non-trivial task; however, if such a mesh could be extracted, it would be a great benefit for virtual 3D tourism. Instead of only being able to experience a city from far away, it would then be possible to fully immerse oneself in the scene and experience cultural heritage in full detail. However, current research in multi-scale surface reconstruction focuses on one of two distinct goals (aside from accuracy). One group (e.g.~\cite{fuhrmann14,kuhn16}) focuses on scalability through local formulations. The drawback of these approaches is that the completeness often suffers; i.e., many holes can be seen in the reconstruction due to occlusions in the scene, which reduces the usefulness for virtual reality. The second group (e.g.~\cite{ummenhofer15,vu12dense_mvs}) thus focuses on obtaining a closed mesh by applying global methods. To obtain the global solution, these methods require all data at once, which unfortunately precludes scalability. Achieving both goals at once, scalability and a closed solution, might be impossible for arbitrary jumps in point density. The reason for this is that any symmetric neighborhood on the density jump boundary can have more points in the denser part than fit into memory, while having only a few or even no points in the sparse part. However, scalability requires independent sub-problems of limited size, while a closed solution requires sufficient overlap for joining them back together. To mitigate this problem, we formulate our approach as a hybrid between global and local methods. First, we separate the input data with a coarse octree, where a leaf node typically contains thousands of points. The exact number of points is an adjustable parameter that represents the trade-off between completeness and memory usage.
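A minimal sketch of such a point-count-driven octree subdivision (not the authors' implementation; the class and parameter names are our own, and no balancing constraint is imposed between neighboring leaves):

```python
import numpy as np

class OctreeNode:
    """Coarse octree that splits a node into 8 children whenever it holds
    more than max_points points; transitions between neighboring leaves are
    deliberately left unrestricted (no balancing)."""

    def __init__(self, center, half, points, max_points):
        self.center, self.half, self.children = center, half, []
        if len(points) > max_points:
            self.points = None
            for dx in (-0.5, 0.5):
                for dy in (-0.5, 0.5):
                    for dz in (-0.5, 0.5):
                        c = center + np.array([dx, dy, dz]) * half
                        # half-open child cell [c - half/2, c + half/2)
                        m = np.all((points >= c - half / 2) &
                                   (points < c + half / 2), axis=1)
                        self.children.append(
                            OctreeNode(c, half / 2, points[m], max_points))
        else:
            self.points = points  # leaf: at most max_points points

    def leaves(self):
        if not self.children:
            return [self]
        return [l for ch in self.children for l in ch.leaves()]

pts = np.random.RandomState(0).rand(20000, 3)  # synthetic unit-cube cloud
root = OctreeNode(np.array([0.5, 0.5, 0.5]), 0.5, pts, max_points=4096)
assert all(len(l.points) <= 4096 for l in root.leaves())
```

The half-open cells guarantee that every point lands in exactly one leaf, so no point is lost or duplicated during subdivision.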
Within neighboring leaf nodes, we perform a local Delaunay tetrahedralization and max-flow min-cut optimization to extract local surface hypotheses. This leads to many surface hypotheses that partially share the same base tetrahedralization, but also intersect each other in many places. To resolve these conflicts between the hypotheses in a non-volumetric manner, we propose a novel graph cut formulation based on the individual surface hypotheses. This formulation allows us to optimally fill holes which result from local ambiguities and thus maximize the completeness of the final surface. This allows us to handle point clouds of any size with a constant memory footprint, where the capability to close holes can be traded off with the memory usage. Thus we were able to generate a consistent mesh from a point cloud with 2 billion points with a ground sampling variation from 1m to 50$\mu$m using less than 9GB of RAM per process (see Fig.~\ref{fig:teaser} and video~\cite{mostegel17}). \section{Supplementary Material} This section contains further details about the experiments conducted in the main paper. \paragraph{Citywall dataset~\cite{fuhrmann2014mve}.} In Tab.~\ref{tab:citywall_mem_time_supp}, we provide a much more detailed version of Tab.~\ref{tab:citywall_mem_time} in the main paper. Here, the table shows which step of our approach was run with how many processes. We tried to select the number of processes such that we were sure not to exceed the memory of the server (210GB). Note that we did not have exact knowledge of the memory consumption of each step with respect to the leaf size prior to the experiment. As the number of processes varies per step, we normalized the run-times to 40 virtual processes (the number of virtual cores of the server). The normalized time was computed as \emph{real run-time} $\cdot$ \emph{num processes} / \emph{40}.
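The normalization can be reproduced directly; a small sketch (the function name is our own):

```python
# Run-time normalization used for the table: scale the measured wall-clock
# time by the fraction of the server's 40 virtual cores actually in use.
def normalized_runtime(real_hours, num_processes, total_cores=40):
    return real_hours * num_processes / total_cores

print(round(normalized_runtime(38.9, 32), 1))  # 31.1, as reported for 512k leaves
```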
\paragraph{Middlebury dataset~\cite{seitz2006comparison}.} For the reader's convenience, we downloaded and grouped the visual results from the official evaluation homepage~\cite{seitz2006comparison}. We visualize these results in Fig.~\ref{fig:middlebury_details}. As detailed in the main paper (Tab.~\ref{tab:temple}), we achieve better accuracy scores than all other approaches that were executed on MVE~\cite{fuhrmann2014mve} input. We assume that this is the case because our approach preserves more detail than the other approaches (see bottom row). \paragraph{DTU dataset~\cite{aanaes2016large}.} To obtain a fair comparison with the reference approaches, we performed a sweep of the most important parameters for FSSR~\cite{fuhrmann14} and GDMR~\cite{ummenhofer15}. For FSSR we changed the scale multiplication factor and for GDMR jointly $\lambda_1$ and $\lambda_2$ in multiples of two. In the main paper, we only report the scores of the ``best'' parameters. With ``best'' we mean that the sum of the median accuracy and completeness is minimal over all evaluated parameters. The complete set of results for MVE~\cite{fuhrmann2014mve}, SURE~\cite{rothermel12} and PMVS~\cite{furukawa10pmvs} is shown in Tab.~\ref{tab:DTU_mve}, \ref{tab:DTU_sure} and \ref{tab:DTU_pmvs}, where we marked the values reported in the main paper in bold font. Note that the standard parameters won in all cases except for PMVS (Tab.~\ref{tab:DTU_pmvs}). We assume that the reason for this is that PMVS generates fewer points with standard parameters; i.e., $level = 1$ and $csize = 2$ increase the theoretical point radius roughly by a factor of 4. As the DTU dataset contains a significant amount of outliers in the ground truth, we cleaned the ground truth of scene 25 manually (see Fig.~\ref{fig:DTU_details}). We additionally provide the cleaned ground truth online~\cite{mostegel17}. In Fig.~\ref{fig:DTU_details} we also show the error-colored point clouds generated by the evaluation system of~\cite{aanaes2016large}.
The results show that GDMR and our approach are more robust to outliers than FSSR (see median accuracy with MVE and SURE input). If nearly no outliers are present (PMVS), FSSR reaches the best accuracy, at the cost of leaving many holes in the facade. On dense parts of the scene (mostly the facade), GDMR and our approach perform very similarly; however, we see a significant difference in parts where no input points constrain the algorithms. The formulation of GDMR prefers smooth normal transitions, which can lead to unwanted bubbles (see umbrellas). Our approach instead prefers to close holes with planes. In this example, this strategy leads to a better mean accuracy. In the presence of many outliers (SURE), our approach can successfully remove outliers if they cause a ray conflict (right side of the terrace), while other outliers remain (left side of the arc). \begin{table*} \centering \begin{tabular}[b]{|c||c|c|c|c|} \hline leaf size & \bf{512k} & \bf{128k} & \bf{32k} & \bf{8k} \\\hline num leaves & 194 & 695 & 2675 & 10123 \\\hline\hline \multicolumn{5}{|c|}{running base approach~\cite{labatut07delaunay}} \\\hline num processes & 32 & 32 & 32 & 32 \\\hline real run-time [h] & 38.9 & 30.0 & 18.5 & 12.0 \\\hline run-time/40 proc [h] & 31.1 & 24.0 & 14.8 & 9.6 \\\hline\hline \multicolumn{5}{|c|}{extracting candidate patches} \\\hline num processes & 16 & 32 & 32 & 32 \\\hline real run-time [h] & 6.2 & 3.3 & 3.6 & 4.0 \\\hline run-time/40 proc [h] & 2.5 & 2.6 & 2.9 & 3.2 \\\hline peak mem/proc [GB] & 13.0 & 4.1 & 1.9 & 2.2 \\\hline \hline \multicolumn{5}{|c|}{closing holes with full patches} \\\hline num processes & 4 & 4 & 6 & 6 \\\hline real run-time [h] & 12.1 & 7.0 & 4.3 & 5.1 \\\hline run-time/40 proc [h] & 1.2 & 0.7 & 0.6 & 0.8 \\\hline peak mem/proc [GB] & 14.8 & 6.5 & 2.4 & 1.8 \\\hline \hline \multicolumn{5}{|c|}{hole filling with graph cut} \\\hline num processes & 4 & 4 & 6 & 6 \\\hline real run-time [h] & 31 & 18 & 6 & 5
\\\hline run-time/40 proc [h] & 0.8 & 0.5 & 0.2 & 0.1 \\\hline peak mem/proc [GB] & 25.3 & 8.9 & 3.1 & 1.8 \\\hline \hline \multicolumn{5}{|c|}{overall} \\\hline run-time/40 proc [h] & 35.6 & 27.8 & 18.5 & 13.7\\\hline peak mem/proc [GB] & 25.3 & 8.9 & 3.1 & 2.2 \\\hline \end{tabular} \caption{Influence of octree leaf size. In this experiment, we vary the maximum number of points per voxel (noted as ``leaf size''). The other values in the table result from the choice of this parameter. We recorded each value for each step of our approach. ``num leaves'' denotes the number of voxels in the octree. ``num processes'' denotes the number of processes that were running concurrently on the server. ``real run-time'' denotes the recorded run-time on the server with the selected number of processes. ``run-time/40 proc'' denotes the recorded run-time normalized to the 40 virtual cores of the server. ``peak mem/proc'' denotes the recorded peak RAM usage of a single process. For ``running base approach'', an error occurred for the memory recording, which is why this value is missing. However, similar evaluations in the other experiments indicate that this value is at least 10\% smaller than the overall peak memory usage per process. } \label{tab:citywall_mem_time_supp} \end{table*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{combined_temple.jpg} \caption{Details of multi-scale approaches on the Middlebury \emph{Temple Full} dataset~\cite{seitz2006comparison}. From left to right, we show the ground truth, FSSR~\cite{fuhrmann14}, GDMR~\cite{ummenhofer15} and our approach. From top to bottom, we show ``view 1'', ``view 2'' and a magnified detail of ``view 2''. Note that our approach preserves more fine details and edges, which in the end led to a better accuracy. } \label{fig:middlebury_details} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{DTU_details.jpg} \caption{Scene 25 of the DTU dataset~\cite{aanaes2016large}.
On the left side we show the ground truth provided by~\cite{aanaes2016large} (original) and the ground truth after removing obvious outliers (cleaned). The other three columns show the point clouds generated by the evaluation system of~\cite{aanaes2016large} (the meshes are regularly sampled to point clouds by the system). Points included in the evaluation are colored from white (no error) to red ($\geq 10$mm error). The points from blue to green are excluded from the evaluation. The evaluated approaches are noted in the top left corner of each sub-image and in brackets we note the MVS algorithm used to obtain the input point cloud. } \label{fig:DTU_details} \end{figure*} \begin{table*}[htbp] \centering \begin{tabular}{|l|c|r|r|r|r|r|r|} \hline \multirow{2}{*}{Approach} & \multicolumn{1}{c|}{Param.} & \multicolumn{3}{c|}{Accuracy} & \multicolumn{3}{c|}{Completeness} \\ \cline{3-8} & \multicolumn{1}{c|}{Factor} & \multicolumn{1}{c|}{Mean} & \multicolumn{1}{c|}{Median} & \multicolumn{1}{c|}{Variance} & \multicolumn{1}{c|}{Mean} & \multicolumn{1}{c|}{Median} & \multicolumn{1}{c|}{Variance} \\ \hline \multirow{3}{*}{FSSR} & 1 & \textbf{0.673} & \textbf{0.396} & \textbf{0.685} & \textbf{0.430} & \textbf{0.239} & \textbf{1.638} \\ & 2 & 0.833 & 0.403 & 1.213 & 0.474 & 0.285 & 1.630 \\ & 4 & 0.878 & 0.338 & 1.698 & 0.490 & 0.289 & 1.625 \\ \hline \multirow{5}{*}{GDMR} & 0.5 & 1.048 & 0.269 & 4.846 & 0.460 & 0.290 & 0.716 \\ & 1 & \textbf{1.013} & \textbf{0.275} & \textbf{4.262} & \textbf{0.423} & \textbf{0.284} & \textbf{0.587} \\ & 2 & 1.021 & 0.287 & 4.015 & 0.410 & 0.278 & 0.496 \\ & 4 & 1.008 & 0.304 & 3.539 & 0.407 & 0.270 & 0.524 \\ & 8 & 0.971 & 0.332 & 2.772 & 0.399 & 0.260 & 0.564 \\ \hline OURS & \multicolumn{1}{c|}{\textbf{-}} & \textbf{0.671} & \textbf{0.262} & \textbf{1.330} & \textbf{0.423} & \textbf{0.279} & \textbf{0.575} \\ \hline \end{tabular} \vspace{5pt} \caption{Detailed evaluation results for scene 25 of the DTU dataset~\cite{aanaes2016large} with 
MVE~\cite{fuhrmann2014mve} as algorithm for computing the input point cloud. We report the results of FSSR~\cite{fuhrmann14} and GDMR~\cite{ummenhofer15} for different multiplication factors of the standard parameters.} \label{tab:DTU_mve} \end{table*} \begin{table*}[htbp] \centering \begin{tabular}{|l|c|r|r|r|r|r|r|} \hline \multirow{2}{*}{Approach} & \multicolumn{1}{c|}{Param.} & \multicolumn{3}{c|}{Accuracy} & \multicolumn{3}{c|}{Completeness} \\ \cline{3-8} & \multicolumn{1}{c|}{Factor} & \multicolumn{1}{c|}{Mean} & \multicolumn{1}{c|}{Median} & \multicolumn{1}{c|}{Variance} & \multicolumn{1}{c|}{Mean} & \multicolumn{1}{c|}{Median} & \multicolumn{1}{c|}{Variance} \\ \hline \multirow{4}{*}{FSSR} & \textbf{1} & \textbf{1.044} & \textbf{0.490} & \textbf{2.487} & \textbf{0.431} & \textbf{0.257} & \textbf{1.218} \\ & 2 & 1.594 & 0.523 & 5.935 & 0.501 & 0.353 & 0.985 \\ & 4 & 1.975 & 0.496 & 9.503 & 0.485 & 0.370 & 0.450 \\ & 8 & 2.395 & 0.520 & 14.259 & 0.520 & 0.387 & 0.462 \\ \hline \multirow{4}{*}{GDMR} & 0.5 & 1.101 & 0.295 & 4.931 & 0.565 & 0.373 & 0.917 \\ & \textbf{1} & \textbf{1.099} & \textbf{0.301} & \textbf{4.693} & \textbf{0.519} & \textbf{0.357} & \textbf{0.744} \\ & 2 & 1.163 & 0.322 & 4.813 & 0.494 & 0.339 & 0.753 \\ & 4 & 1.358 & 0.373 & 5.687 & 0.465 & 0.317 & 0.703 \\ \hline OURS & \multicolumn{1}{c|}{\textbf{-}} & \textbf{1.247} & \textbf{0.365} & \textbf{5.013} & \textbf{0.509} & \textbf{0.368} & \textbf{0.512} \\ \hline \end{tabular} \vspace{5pt} \caption{Detailed evaluation results for scene 25 of the DTU dataset~\cite{aanaes2016large} with SURE~\cite{rothermel12} as algorithm for computing the input point cloud. 
We report the results of FSSR~\cite{fuhrmann14} and GDMR~\cite{ummenhofer15} for different multiplication factors of the standard parameters.} \label{tab:DTU_sure} \end{table*} \begin{table*}[htbp] \centering \begin{tabular}{|l|c|r|r|r|r|r|r|} \hline \multirow{2}{*}{Approach} & \multicolumn{1}{c|}{Param.} & \multicolumn{3}{c|}{Accuracy} & \multicolumn{3}{c|}{Completeness} \\ \cline{3-8} & \multicolumn{1}{c|}{Factor} & \multicolumn{1}{c|}{Mean} & \multicolumn{1}{c|}{Median} & \multicolumn{1}{c|}{Variance} & \multicolumn{1}{c|}{Mean} & \multicolumn{1}{c|}{Median} & \multicolumn{1}{c|}{Variance} \\ \hline \multirow{3}{*}{FSSR} & 2 & 0.332 & 0.245 & 0.079 & 1.110 & 0.476 & 4.867 \\ & \textbf{4} & \textbf{0.491} & \textbf{0.318} & \textbf{0.273} & \textbf{0.624} & \textbf{0.395} & \textbf{1.772} \\ & 8 & 0.593 & 0.327 & 0.551 & 0.764 & 0.413 & 3.126 \\ \hline \multirow{6}{*}{GDMR} & 1 & 1.651 & 0.456 & 9.159 & 1.280 & 0.621 & 3.785 \\ & 2 & 1.372 & 0.373 & 6.855 & 0.789 & 0.468 & 1.510 \\ & 4 & 1.149 & 0.343 & 4.991 & 0.581 & 0.404 & 0.735 \\ & \textbf{8} & \textbf{0.996} & \textbf{0.355} & \textbf{3.024} & \textbf{0.537} & \textbf{0.389} & \textbf{0.613} \\ & 16 & 0.988 & 0.377 & 2.697 & 0.529 & 0.381 & 0.615 \\ & 32 & 1.003 & 0.402 & 2.586 & 0.518 & 0.368 & 0.633 \\ \hline OURS & \multicolumn{1}{c|}{\textbf{-}} & \textbf{0.626} & \textbf{0.341} & \textbf{0.755} & \textbf{0.567} & \textbf{0.390} & \textbf{0.743} \\ \hline \end{tabular} \vspace{5pt} \caption{Detailed evaluation results for scene 25 of the DTU dataset~\cite{aanaes2016large} with PMVS~\cite{furukawa10pmvs} as algorithm for computing the input point cloud. We report the results of FSSR~\cite{fuhrmann14} and GDMR~\cite{ummenhofer15} for different multiplication factors of the standard parameters.} \label{tab:DTU_pmvs} \end{table*} \end{document} \section{Related Work} Surface reconstruction from point clouds is an extensively studied topic and a general review can be found in~\cite{berger14}. 
In the following, we focus on the most relevant works with respect to multi-scale point clouds and scalability. Many surface reconstruction approaches rely on an octree structure for data handling. While it has been shown by Kazhdan~et~al.~\cite{kazhdan07} that consistent isosurfaces can be extracted from arbitrary octree structures, the vast scale differences imposed by multi-view stereo lead to new challenges for octree-based approaches. Consequently, fixed-depth approaches (e.g.~\cite{hornung06,kazhdan06,bolitho07}) are not well-suited for this kind of input data. Thus, Muecke~et~al.~\cite{muecke11} handle scale transitions by computing meshes on multiple octree levels within a crust of voxels around the data points and stitching the partial solutions back together. However, this approach is not scalable due to its global formulation. Fuhrmann and Goesele~\cite{fuhrmann14} therefore propose a completely local surface reconstruction approach, where they construct an implicit function as the sum of basis functions. While this approach is scalable from a theoretical standpoint, the interpolation capabilities are very limited due to a very small support region. Furthermore, the purely local nature of the approach is unable to cope with mutually supporting outliers (e.g. if one depthmap is misaligned with respect to the other depthmaps), which occur quite often in practice (see experiments). Kuhn~et~al.~\cite{kuhn16} reduce this problem by checking for visibility conflicts in close proximity (10 voxels) of a measurement. Nevertheless, this approach still has very limited interpolation capabilities compared to global approaches. Recently, Ummenhofer and Brox~\cite{ummenhofer15} proposed a global variational approach for surface reconstruction of large multi-scale point clouds. While they report that they can process a billion points, the required memory footprint for this problem size is already considerable (152 GB).
Aside from not being scalable due to the global formulation, this approach also needs to balance the octree. As our experiments demonstrate, this leads to severe problems if the scale difference is too large. Aside from octree-based approaches, there is also a considerable amount of work that is based on the Delaunay tetrahedralization of the 3D points~\cite{hiep09mvs,hoppe13incmeshing,jancosek11,labatut07delaunay,labatut09surface_range_data,vu12dense_mvs}. As opposed to octree-based approaches, the Delaunay tetrahedralization splits the space into uneven tetrahedra and thus grants these approaches the unique capability to close holes of arbitrary size for any point density. The key property of these approaches is that they build a directed graph based on the neighborhood of adjacent tetrahedra in the Delaunay tetrahedralization. The energy terms within the graph are then set according to rays between the cameras and their corresponding 3D measurements. These visibility terms make this type of approach very accurate and robust to outliers. The main differences between the approaches mentioned above are how the smoothness terms are set and what kind of post-processing is applied. One property that all of these approaches share is that they are all based on global graph cut optimization, which precludes scalability. However, the complete resilience to changes in point density makes these approaches ideal for multi-view stereo surface reconstruction, which motivated us to scale up this type of approach. \section{Making It Scale} To scale up the base method, it is necessary to first divide the data into manageable pieces, which we achieve with an unrestricted octree. On overlapping data subsets, we then solve the surface extraction problem optimally and obtain overlapping hypotheses. This brings us to the main contribution of our work, the fusion of these hypotheses.
The main problem is that the property which gives the base approach its unique interpolation properties (i.e., the irregular space division via Delaunay tetrahedralization) also makes the fusion of the surface hypotheses a non-trivial problem. We solve this problem by first collecting consistencies between the mesh hypotheses and then filling the remaining holes via a second graph cut optimization on surface candidates. In the following, we explain all important steps. \vspace{-10pt} \paragraph{Dividing and conquering the data.} For dividing the data, we use an octree, similar to other works in this field~\cite{fuhrmann14,kazhdan07,kuhn16,muecke11}. In contrast to these works, we treat leaf nodes (aka voxels) of the tree differently. Instead of treating a voxel as the smallest unit, we only use it to reduce the number of points to a manageable size. We achieve this by subdividing the octree nodes until the number of points within each node is below a fixed threshold. As we want to handle density jumps of arbitrary size, we do not restrict the transition between neighboring voxels. This means that the traditional local neighborhood is not well suited for combining the local solutions, as this neighborhood can be very large at the transition between scale levels. Instead, we collect all unique voxel subsets, where each voxel in the set touches the same voxel corner point (corner, edge and face connections are respected). This limits the maximum subset size to 8 voxels. For each voxel subset, we then compute a local Delaunay tetrahedralization and execute the base method (Sec.~\ref{sec:base_method}) to extract a surface hypothesis. The resulting hypotheses strongly overlap each other in most parts, but inconsistencies arise at the voxel boundaries. In these regions, the tetrahedra topology strongly differs, which results in a significant amount of artifacts and ambiguity.
For this reason, standard mesh repair approaches such as~\cite{jacobson13,attene14} are not applicable. \vspace{-10pt} \paragraph{Building up a consistent mesh.} In a first step, we collect all triangles (within each voxel) which are shared among all local solutions and add them to the \emph{combined solution}. In the following, ``\emph{combined solution}'' will always refer to the current state of the combined surface hypothesis. Note that the initial \emph{combined solution} is already a valid surface hypothesis with many holes. Triangles which are part of the \emph{combined solution} are not revised by any subsequent steps. Then we look for all triangles that span between two voxels and are in the local solution of all voxel subsets that contain these two voxels. If these triangles separate two \emph{final} tetrahedra, we add them to the \emph{combined solution}. In our case, a \emph{final} tetrahedron is a tetrahedron where the circumscribing sphere does not reach outside the voxel subset. After this step, the combined solution typically contains a large number of holes at the voxel borders. In the next step, we want to find edge-connected sets of triangles (we will further refer to these sets as ``patches'') with which we can close the holes in the \emph{combined solution}. To create patch candidates, we search through the local solutions. First, we remove triangles that would violate the two-manifoldness of the \emph{combined solution} (i.e., connecting a facet to an edge that already has two facets) or would intersect the \emph{combined solution}. Then we cluster all remaining triangles in linear time into patches via their edge connections. On a voxel basis, we now end up with many patch candidates. While many candidates might be used to close a hole, some of them are more suitable than others. As the base approach produces a closed surface for each voxel subset, this also means that it closes the surface behind the scene.
To avoid such a patch being used instead of one in the foreground, we rank the quality of a patch by its \emph{centricity} in the voxel subset. In other words, we prefer patches which are far away from the outer border of the voxel subset, as the Delaunay tetrahedralization is more stable in these regions. We compute the \emph{centricity} of a patch $p$ as: \begin{equation} \text{\emph{centricity}}(p) = 1 - \min_{i \in I_p} \frac{\|c_p - i\|}{r_p}, \end{equation} where $c_p$ is the centroid of the patch $p$, $I_p$ is the set of inner points (Fig.~\ref{fig:inner_points}) of the voxel subset of $p$. $r_{p}$ is the distance from the inner point to the farthest corner of the voxel in which $c_{p}$ lies, which normalizes the \emph{centricity} to $[0,1]$. For each voxel, we now try to fit the candidate patches in descending order of \emph{centricity}, while ensuring that the outer boundary completely connects to the \emph{combined solution} without violating the two-manifoldness or intersecting the \emph{combined solution}. If such a patch is found, it is added to the \emph{combined solution}. Thus this step closes holes which can be completely patched with a single local solution. \begin{figure}[t] \centering \subfigure[] { \includegraphics[width=0.18\columnwidth]{inner_single.eps} }\quad \subfigure[] { \includegraphics[width=0.18\columnwidth]{inner_pair.eps} }\quad \subfigure[] { \includegraphics[width=0.18\columnwidth]{inner_quad.eps} } \quad \subfigure[] { \includegraphics[width=0.18\columnwidth]{inner_oct.eps} } \caption{For computing the \emph{centricity}, we consider 4 types of inner points: (a) within a voxel, (b) on the plane between 2 voxels, (c) on the edge between 4 voxels, (d) on the point between 8 voxels.} \label{fig:inner_points} \end{figure} \vspace{-10pt} \paragraph{Hole filling via graph cut.} To deal with parts of the scene where the local Delaunay tetrahedralizations are very inconsistent, we propose a graph cut formulation on the triangles of a patch candidate.
For efficiency, this graph cut operates only on surface patches for which the visibility terms have already been evaluated by the first graph cut. The idea behind the formulation is to minimize the total length of the outer mesh boundary. First, we rank all candidate patches by \emph{centricity}. For the best patch candidate, we extract all triangles in the \emph{combined solution} which share an edge connection with the patch. The edges connecting the \emph{combined solution} with the patch define the ``hole'' which we aim to close or minimize (we refer to this set of edges as $E_h$ and the corresponding set of triangles as $T_h$). Within the set of patch triangles ($T_p$), we now want to extract the optimal subset of triangles ($T_*$) such that the overall outer edge length is minimized: \begin{equation} T_* = \argmin_{T_i \subseteq T_p} \sum_{e \in E_i} \| e \|, \end{equation} where $E_i$ is the set of outer edges (i.e., edges shared by only one triangle) defined through the triangle subset $T_i$ and $E_h$. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{hole_filling.pdf} \caption{Hole filling via graph cut. We transform the mesh into nodes (from 3D triangles) and weighted directed edges (from 3D edges). In yellow we show triangles of the \emph{combined solution} which are relevant for our optimization, whereas the blue triangles are not relevant. The orange triangles represent the patch candidate for hole filling ($T_p$). The capacity of the graph edges corresponds to the 3D edge length (colors show the capacity). Only the edges from the source (black edges) have infinite capacity. The dashed red line shows the minimum cut of this example.} \label{fig:weak_patching_init} \vspace{-15pt} \end{figure} We achieve this minimization with the following graph formulation (see also Fig.~\ref{fig:weak_patching_init}). For each triangle in the hole set $T_h$ and the patch $T_p$ we insert a node in the graph.
Then we insert edges with infinite capacity from the source to all triangles/nodes in $T_h$ (to force this set of triangles to be part of the solution). All triangles in $T_h$ are then connected to their neighbors in $T_p$ with directed edges, where the capacity of a graph edge corresponds to the corresponding edge length in 3D. Similarly, we insert two graph edges for each pair of neighboring triangles in $T_p$, where the capacity is also equal to the edge length. Finally, we insert a graph edge for each outer triangle (i.e., each triangle with fewer than three neighbors). These edges are connected to the sink and their capacity is the sum of the lengths of all outer edges of the triangle. Through this formulation, the graph cut optimization minimizes the total length of the remaining boundary. After the optimization, all triangles which are needed for the optimal boundary reduction are contained in the source set of the graph. These triangles are added to the \emph{combined solution} and the process is repeated with the next patch candidate.
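As a toy sketch of this construction (the triangle labels `h`, `p1`, `p2` and all edge lengths are hypothetical, and a compact Edmonds-Karp max-flow stands in for a dedicated graph-cut solver):

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap[u][v] holds residual capacities."""
    flow = 0.0
    while True:
        # BFS for an augmenting path from s to t.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Bottleneck capacity along the path, then update residuals.
        b, v = float('inf'), t
        while parent[v] is not None:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= b
            cap[v][u] += b
            v = u
        flow += b

cap = defaultdict(lambda: defaultdict(float))
INF = float('inf')
cap['s']['h'] = INF    # source -> hole triangle: forced into the solution
cap['h']['p1'] = 5.0   # hole <-> patch edge, capacity = 3D edge length
cap['p1']['p2'] = 1.0  # patch <-> patch edges (one per direction)
cap['p2']['p1'] = 1.0
cap['p1']['t'] = 0.5   # outer triangles -> sink: sum of their outer edge lengths
cap['p2']['t'] = 4.0

f = max_flow(cap, 's', 't')

# Triangles still reachable from the source in the residual graph form the
# source set, i.e. the triangles kept for the optimal boundary reduction.
reach, q = {'s'}, deque(['s'])
while q:
    u = q.popleft()
    for v, c in cap[u].items():
        if c > 1e-12 and v not in reach:
            reach.add(v)
            q.append(v)
print(f, sorted(reach - {'s'}))  # p1 is kept, p2 is cut away
```

Cutting the `p1 -> t` and `p1 -> p2` edges (total capacity 1.5) is cheaper than cutting `h -> p1` (5.0), so triangle `p1` joins the source set and shortens the boundary, while `p2`, whose outer boundary is long, is left out.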
\section{Introduction} Coronal lines are collisionally excited forbidden transitions within low-lying levels of highly ionized species (IP $>$ 100 eV). As such, these lines form in extremely energetic environments and are thus unique tracers of AGN activity; they are not seen in starburst galaxies. Coronal lines appear from the X-rays to the IR and are common in Seyfert galaxies regardless of their type (Penston et al. 1984; Marconi et al. 1994; Prieto \& Viegas 2000). The strongest ones are seen in the IR; in the near-IR they can even dominate the line spectrum (Reunanen et al. 2003). Owing to the high ionization potential, these lines are expected to be confined to a few tens to a hundred parsecs around the active nucleus. On the basis of spectroscopic observations, Rodriguez-Ardila et al. (2004, 2005) unambiguously established the size of the coronal line region (CLR) in NGC~1068 and the Circinus Galaxy, using the coronal lines [SiVII] 2.48~$\rm \mu m$, [SiVI] 1.96~$\rm \mu m$, [Fe\,{\sc vii}] 6087~\AA, [Fe\,{\sc x}] 6374~\AA\ and [Fe\,{\sc xi}] 7892~\AA. They find these lines extending up to 20 to 80 pc from the nucleus, depending on ionization potential. Given those sizes, we started an adaptive-optics-assisted imaging program with the ESO/VLT aimed at revealing the detailed morphology of the CLR in some of the nearest Seyfert galaxies. We use as a tracer the isolated IR line [Si VII] 2.48~$\rm \mu m$\ (IP = 205.08 eV). This letter presents the resulting narrow-band images of the [Si VII] emission line, which reveal for the first time the detailed morphology of the CLR, with a resolution suitable for comparison with radio and optical lower-ionization-gas images. The morphology of the CLR is sampled with a spatial resolution almost a factor of 5 better than any previously obtained, corresponding to scales $\sim <$10 pc. The galaxies presented are all Seyfert type 2: Circinus, NGC~1068, ESO~428-G014 and NGC~3081.
Ideally, we would have liked to image type 1 objects but, in the Southern Hemisphere, there are as yet no known, suitable type~1 sources at sufficiently low redshift to guarantee the inclusion of [Si VII] 2.48~$\rm \mu m$\ entirely in the filter pass-band. \section{Observations, image registration and astrometry} Observations were done with the adaptive-optics-assisted IR camera NACO at the ESO/VLT. Two narrow-band filters were used, one centered on the coronal [SiVII] 2.48~$\mu m$ line and an adjacent band centered on the 2.42~$\mu m$ line-free continuum. The image scale was 0.027 arcsec pixel$^{-1}$ in all cases except NGC\,1068, where it was 0.013 arcsec pixel$^{-1}$. Integration times were chosen to keep the counts within the linearity range: $\sim 20$ minutes per filter and source. For each filter, the photometry was calibrated against standard stars observed after each science target. These stars were further used as PSFs when needed and for deriving a correction factor that normalizes both narrow-band filters to provide an equal number of counts for a given flux. In deriving this factor it is assumed that the continuum level of the stars is the same in both filters and that no emission lines are present. The wavefront sensor of the adaptive optics system followed the optical nucleus of the galaxies to determine seeing corrections. The achieved spatial resolution was estimated from stars available in the field of the galaxies when possible; this was not possible in NGC 3081 and NGC 1068 (cf. Table 1). The resolutions were comparable in both filters within the errors reported in Table 1. Continuum-free [SiVII]2.48~$\rm \mu m$\ line images are shown in Figs. 1 and 2 for each galaxy. These were produced after applying the normalization factor derived from the standard stars. The total integrated coronal line emission derived from these images is listed in Table 2. For comparison, [SiVII] 2.48~$\rm \mu m$\ fluxes derived from long-slit spectroscopy are also provided.
Also shown in these figures are images of the standard stars -- also used as PSF controls -- taken with the 2.48~$\rm \mu m$\ filter. These images provide a rough assessment of the image quality/resolution achieved in the science frames. For the cases of Circinus and ESO 428-G014, a more accurate evaluation is possible from the images of a field star; one of these field stars is shown in both filters in Figs. 1e and 2b, respectively. To assess the image quality at the lowest signal levels, the images of the field stars in particular are normalized to the galaxy peak in the corresponding filter. These stars are much fainter than the galaxy nucleus: the star peak is a mere $\sim$5\% of the galaxy peak. \begin{table*} \centering \caption{Galaxy scales and achieved NACO angular resolution. $*$: in NGC 1068, the size of the nucleus is given, as K-band interferometry sets an upper limit of 5 mas for the core (Weigelt et al. 2004); in NGC 3081, the size of a PSF star taken after the science frames is given} \begin{tabular}{cccccccc} \hline AGN & Seyfert & 1 arcsec & Stars & FWHM & FWHM & Size of nucleus \\ &type &in pc & in field & arcsec & pc & FWHM arcsec \\ \hline Circinus & 2 & 19 & 2 & 0.19$\pm$0.02 & 3.6 & 0.27 \\ NGC 1068 & 2 & 70 & 0 & 0.097$^*$ & 6.8 & $<$0.097 \\ ESO\,428-G014 & 2 & 158 & 3 & 0.084$\pm$0.006 & 13 & 0.15$\pm$0.01 \\ NGC\,3081 & 2 & 157 & 0 & 0.095$^*$ & 14 & $<$0.32 \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Size and photometry of the 2.48 $\mu m$ coronal line region. $*$: from a 1'' $\times$ 1.4'' aperture \label{flxTb}} \begin{tabular}{ccccc} \hline {AGN} & {Radius from nucleus} & {Flux NACO} & {Flux long-slit} & {Reference long-slit} \\ &pc &\multicolumn{2}{c}{in units of $10^{-14}$~erg~s$^{-1}$ cm$^{-2}$} \\ \hline Circinus & 30 & 20 & 16 & Oliva et al. 1994 \\ NGC 1068 & 70 & 72 & 47$^*$ & Reunanen et al. 2003 \\ ESO\,428-G014 & 120 - 160 & 2.0 & 0.7$^*$ & Reunanen et al.
2003 \\ NGC\,3081 & 120 & 0.8 & 0.8$^*$ & Reunanen et al. 2003 \\ \hline \end{tabular} \end{table*} Radio and HST images were used, where available, to establish an astrometric reference frame for the CLR in each of the galaxies. For NGC~1068 (Fig 1a, b \& c), the registration of radio, optical HST and adaptive-optics IR images by Marco et al. (1997; accuracy $\sim$ 0.05'') was adopted. The comparison of the [SiVII] 2.48~$\rm \mu m$\ line image with the HST [OIII] 5007 \AA\ image followed from assuming that the peak emission in Marco et al.'s K-band image coincides with that in the NACO 2.42~$\rm \mu m$\ continuum image. The comparison with the MERLIN 5~GHz image of Gallimore et al. (2004) was done assuming that the nuclear radio source 'S1' and the peak emission in the NACO 2.42~$\rm \mu m$\ image coincide. In Circinus (Fig. 1d, e \& f), the registration of the NACO and HST/$H\alpha$ images was done on the basis of 3--4 stars or unresolved star clusters available in all fields. That provides a registration accuracy better than 1 pixel (see Prieto et al. 2004). No radio image of comparable resolution is available for this galaxy. For ESO\,428-G014 (Fig. 2a, b \& c), NACO images were registered on the basis of 3 stars available in the field. Further registration with a VLA 2~cm image (beam 0.2''; Falcke, Wilson \& Simpson 1998) was made on the assumption that the continuum peak at 2.42~$\mu m$ coincides with that of the VLA core. We adopted the astrometry provided by Falcke et al. (uncertainty $\sim$ 0.3"), who performed the registration of the 2~cm and the HST/H$\alpha$ images, and plotted the HST/H$\alpha$ image atop the NACO coronal line image following that astrometry. NGC~3081 (Fig. 2d, e \& f) has no stars in the field.
In this case the NACO 2.42~$\rm \mu m$\ and 2.44~$\rm \mu m$\ images, and an additional NACO deep Ks-band image, were registered using the fact that the NACO adaptive optics system always centers the images at the same position on the detector to within 1 pixel (0.027''). The registration with an HST/WFPC2 image at 7910~\AA\ (F791W) employed as a reference the outer isophotes of the Ks-band image, which show a morphology very similar to that seen in the HST 7910~\AA\ image. Further comparison with an HST PC2 $H\alpha$ image relied on the astrometry by Ferruit et al. (2000). The registration with an HST/FOC UV image at 2100~\AA\ (F210M) was based on the assumption that the UV nucleus and the continuum peak emission at 2.42~$\rm \mu m$\ coincide. The radio images available for this galaxy have a beam resolution $>0.5''$ (Nagar et al. 1999), which includes all the detected extended coronal emission, and are therefore not used in this work. \section{The size and morphology of the coronal line region} In the four galaxies, the CLR resolves into a bright nucleus and extended emission along a preferred position angle, which usually coincides with that of the extended lower-ionization gas. The size of the CLR is a factor of 3 to 10 smaller than that of the extended narrow line region (NLR). The maximum radius (Table 2) varies from 30 pc in Circinus to 70 pc in NGC 1068, to $\sim >$ 120 pc in NGC 3081 and ESO~428-G014. The emission in all cases is diffuse or filamentary, and it is difficult to determine whether it further breaks down into compact knots or blobs such as those found in H$\alpha$, [OIII] 5007~\AA\ or radio images, even though the resolutions are comparable. In Circinus, [SiVII]2.48~$\rm \mu m$\ emission extends across the nucleus and aligns with the orientation of its one-sided ionization cone seen in H$\alpha$ or [OIII] 5007 \AA. In these lines, the counter-cone is not seen (Wilson et al.
2002), but in [SiVII], presumably owing to the reduced extinction, extended diffuse emission is detected at the counter-cone position (Fig. 1f; Prieto et al. 2004). This has been further confirmed with VLT/ISAAC spectroscopy, which shows both [SiVII]2.48~$\rm \mu m$\ and [SiVI] 1.96~$\rm \mu m$\ extending up to a 30 pc radius from the nucleus (Rodriguez-Ardila et al. 2004). In the coronal line image, the North-West emission defines a cone opening angle larger than that in $H\alpha$. The morphology of [SiVII] in this region suggests that the coronal emission traces the walls of the ionization cone (see fig. 1f). In ESO~428-G014, the coronal emission is remarkably aligned with the radio jet (Fig. 2c). The 2~cm emission is stronger in the northwest direction, and [SiVII] is stronger in that direction too. H$\alpha$ emission is also collimated along the radio structure, but it spreads farther from the projected collimation axis and extends out to a much larger radius from the nucleus than the coronal or radio emission (Fig. 2b). Both the H$\alpha$ and the 2 cm emission resolve into several blobs, but the coronal emission is more diffuse. In NGC 3081, the coronal emission resolves into a compact nuclear region and a detached faint blob $\sim$120 pc north of it. The HST [OIII] 5007 and H$\alpha$ images show a rather collimated structure extending across the nucleus along the north-south direction over a $\sim$ 300 pc radius (Ferruit et al. 2000). Besides the nucleus, the second brightest region in those lines coincides with the detached [Si VII] emission blob (Fig. 2d). At this same position, we also find UV emission in an HST/FOC image at 2100 \AA. NGC~1068 shows the strongest [Si VII] 2.48~$\rm \mu m$\ emission among the four galaxies, a factor of three larger than in Circinus, and is the only case where the nuclear emission shows detailed structure. At a $\sim 7$~pc radius from the radio core S1, the [Si VII] emission divides into three bright blobs.
The position of S1 falls in between the blobs. The southern blob looks like a concentric shell. The northern blob coincides with the central [OIII] peak emission at the vertex of the ionization cone; the other two blobs are not associated with a particular enhancement in [OIII] or radio emission (Fig. 1b \& c). The [Si VII] depression at the position of S1 may indicate a very high ionization level at already a 7 pc radius (our resolution) from the center; the interior region might instead be filled with gas of a much higher ionization level, e.g. [FeX], [Si IX] and higher. This central structure, $\sim$14~pc in radius in total, is surrounded in all directions by much lower surface brightness gas, extending up to at least a 70 pc radius. The presence of this diffuse region is confirmed by VLT/ISAAC spectra along the north-south direction, which reveal [SiVI]1.96~$\rm \mu m$\ and [Si VII]2.48~$\rm \mu m$\ extending on both sides of the nucleus up to comparable radii (Rodriguez-Ardila et al. 2004, 2005). This diffuse emission shows a slight enhancement on both sides of the 5 GHz jet, but otherwise there appears to be no direct correspondence between the CLR and radio morphology. \section{Discussion} ESO~428-G014 and NGC~3081 show the largest and best collimated [SiVII] emission, up to a 150 pc radius from the nucleus. To reach those distances by nuclear photoionization alone would require rather low electron densities or a very strong (collimated) radiation field. Density measurements in the CLR are scarce: Moorwood et al. (1996) estimate a density $n_e \sim 5000~cm^{-3}$ in Circinus on the basis of [NeV]~14.3~$\rm \mu m$\,/24.3~$\rm \mu m$; Erkens et al. (1997) derive $n_e < 10^{7}~cm^{-3}$ in several Seyfert 1 galaxies on the basis of several optical [FeVII] ratios. This result may be uncertain because the optical [Fe VII] lines are weak and heavily blended.
Taking $n_e \sim 10^4~cm^{-3}$ as a reference value results in an ionization parameter U $<\sim 10^{-3}$ at 150~pc from the nucleus, which is far too low to produce strong [SiVII] emission (see e.g. Ferguson et al. 1997; Rodriguez-Ardila et al. 2005). We argue that, in addition to photoionization, shocks must contribute to the coronal emission. This proposal is primarily motivated by a parallel spectroscopic study of the kinematics of the CLR gas in several Seyfert galaxies (Rodriguez-Ardila et al. 2005), which reveals coronal line profiles with velocities 500 $\rm km\, s^{-1}$ $< v <$ 2000 $\rm km\, s^{-1}$. Here we assess the proposal in a qualitative manner, by looking for evidence of shocks in the morphology of the gas emission. In ESO~428-G014, the remarkable alignment between the [Si VII] and the radio emission is a strong indication of the interaction of the radio jet with the ISM. There is spectroscopic evidence of a highly turbulent ISM in this object: asymmetric emission line profiles on each side of the nucleus indicate gas velocities of up to 1400 $\rm km\, s^{-1}$ (Wilson \& Baldwin 1989). Shocks with those velocities heat the gas to temperatures of $>\sim 10^7$ K, which will locally produce a bremsstrahlung continuum in the UV -- soft X-rays (Contini et al. 2004) necessary to produce coronal lines. [Si VII] 2.48~$\rm \mu m$\ with IP = 205.08 eV will certainly be enhanced in this process. The concentric shell-like structure seen in NGC 3081 in [OIII] 5007 \AA\ and H$\alpha$ (Ferruit et al. 2000) is even more suggestive of propagating shock fronts. From the [OIII]/H$\alpha$ map by Ferruit et al., the excitation level at the position of the [Si VII] northern blob is similar to that of the nucleus, which points to a similar ionization parameter despite the increasing distance from the nucleus.
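For orientation, the ionization parameter scales as
\begin{equation*}
U = \frac{Q_{\rm ion}}{4 \pi r^{2} n_e c} \; \propto \; \frac{Q_{\rm ion}}{r^{2} n_e};
\end{equation*}
with an illustrative (assumed, not quoted here) nuclear ionizing photon rate $Q_{\rm ion} \sim 10^{54}~{\rm s^{-1}}$, $r = 150$~pc $\approx 4.6\times 10^{20}$~cm and $n_e \sim 10^{4}~{\rm cm^{-3}}$ give $U \approx 10^{-3}$, and keeping $U$ constant at larger $r$ requires $n_e \propto r^{-2}$.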
The cloud density might then decrease with distance to balance the ionization parameter, but this would demand a strong radiation field to keep line emission efficient. Alternatively, a local source of excitation is needed. The presence of cospatial UV continuum, possibly locally generated bremsstrahlung, and [Si VII] line emission circumstantially supports the shock-excitation proposal. In the case of Circinus and NGC~1068, the direct evidence for shocks from the [Si VII] images is less obvious. In NGC~1068, the orientation of the three-blob nuclear structure does not show an obvious correspondence with the radio jet; it may still be that our narrow-band filter missed the high-velocity coronal gas component measured in NGC 1068. In Circinus, there are no radio maps of sufficient resolution for a meaningful comparison. However, both galaxies present high-velocity nuclear outflows, which are inferred from the asymmetric and blueshifted profiles measured in the [OIII] 5007 gas in the case of Circinus (Veilleux \& Bland-Hawthorn 1997), and in the Fe and Si coronal lines in both. In the latter, velocities of $\sim$500 $\rm km\, s^{-1}$ in Circinus and $\sim$ 2000 $\rm km\, s^{-1}$ in NGC 1068 are inferred from the coronal profiles (Rodriguez-Ardila et al. 2004, 2005). An immediate prediction for the presence of shocks is the production of free-free emission from the shock-heated gas, with a maximum in the UV -- X-rays. We make here a first-order assessment of this contribution using results from the composite photoionization--shock models by Contini et al. (2004), and compare it with the observed soft X-rays. For each galaxy, we derive the 1 keV free-free emission from models computed for a nuclear ionizing flux $F_h = 10^{13} photons~cm^{-2} s^{-1} eV^{-1}$, pre-shock density $n_o=300 cm^{-3}$ and a shock velocity close to the gas velocities measured in these galaxies (we use figure A3 in Contini et al.).
The selection of this high ionizing-flux value has a relatively low impact on the 1 keV emission estimate, as the bremsstrahlung emission from this flux drops sharply shortward of the Lyman limit; the results depend more on the strength of the post-shock bremsstrahlung component, which is mainly determined by the shock velocity and peaks in the soft X-rays (see fig. A3 in Contini et al. for illustrative examples). Regarding the selection of densities, pre-shock densities of a few hundred $cm^{-3}$ actually imply densities downstream (from where the shock-excited lines are emitted) a factor of 10 -- 100 higher, the factor being larger for higher velocities, and thus within the range of those estimated from coronal line measurements (see above). Having selected the model parameters, we further assume that the estimated 1 keV emission comes from a region with a size equal to that of the observed [Si VII] emission. Under those premises, the results are as follows. For NGC 1068, assuming the free-free emission extends uniformly over a $\pi \times (70~pc)^2$ region (cf. Table 1), and models for shock velocities of 900 $\rm km\, s^{-1}$, the inferred X-ray flux is larger by a factor of 20 than the nuclear 1 keV Chandra flux derived by Young et al. (2001). One could in principle account for this difference by assuming a volume filling factor of 5-10\%, which in turn would account for the fact that free-free emission should mostly be produced locally at the shock fronts. In the case of Circinus, following the same procedure, we assume a free-free emission size of $\pi \times (30~pc)^2$ (cf. Table 1), and models with shock velocities of 500 $\rm km\, s^{-1}$ (see above). In this case, the inferred X-ray flux is lower than the 1 keV BeppoSAX flux, as estimated in Prieto et al. (2004), by an order of magnitude. For the remaining two galaxies, we assume respective free-free emission areas (cf.
Table 1) of 300~pc $\times$ 50~pc for ESO 428-G014 -- the width of [Si VII] is $\sim 50$ pc in the direction perpendicular to the jet -- and $2 \times \pi \times (14~pc)^2$ for NGC 3081 -- in this case, the free-free emission is assumed to come from the nucleus and the detached [Si VII] region North of it only. Taking the models for shock velocities of 900 $\rm km\, s^{-1}$, the inferred X-ray fluxes, when compared with 1 keV fluxes estimated from the BeppoSAX data analysis by Maiolino et al. (1998), are of the same order for ESO 428-G014 and about an order of magnitude lower for NGC 3081. The above results are clearly dominated by the assumed size of the free-free emission region, which is unknown. The only purpose of this exercise is to show that, under reasonable assumptions for the shock velocities as derived from the line profiles, the free-free emission generated by these shocks in the X-rays can be accommodated within the observed soft X-ray fluxes. We thank Heino Falcke, who provided us with the 2 cm radio image of ESO 428-G014, and Marcella Contini for a thorough review of the manuscript.
\section{Introduction} \label{sec1} \IEEEPARstart{W}{ith} the development of the internet of vehicles (IoV) and cloud computing, caching technology facilitates various real-time vehicular applications for vehicular users (VUs), such as automatic navigation, pattern recognition and multimedia entertainment \cite{Liuchen2021} \cite{QWu2022}. In the standard caching technology, the cloud caches various contents like data, video and web pages. In this scheme, vehicles transmit requests for the required contents to a macro base station (MBS) connected to a cloud server, and then fetch the contents from the MBS, which causes a high content transmission delay from the MBS to vehicles due to the communication congestion caused by frequently requested contents \cite{Dai2019}. The content transmission delay can be effectively reduced by the emerging vehicular edge computing (VEC), which caches contents in the road side unit (RSU) deployed at the edge of vehicular networks (VNs) \cite{Javed2021}. Thus, vehicles can fetch contents directly from the local RSU, which reduces the content transmission delay. In VEC, since the caching capacity of the local RSU is limited, if some vehicles cannot fetch their required contents, a neighboring RSU that has the required contents can forward them to the local RSU. The worst case is that vehicles need to fetch contents from the MBS because neither the local nor the neighboring RSU has cached the requested contents. In VEC, it is critical to design a caching scheme to cache the popular contents. Traditional caching schemes cache contents based on the previously requested contents \cite{Narayanan2018}. However, owing to the high-mobility characteristics of vehicles in VEC, the previously requested contents may become outdated quickly, and thus traditional caching schemes may not satisfy all the VUs' requirements.
Therefore, it is necessary to predict the most popular contents in VEC and cache them in suitable RSUs in advance. Machine learning (ML), as a new tool, can extract hidden features by training on user data to efficiently predict popular contents \cite{Yan2019}. However, user data usually contains privacy-sensitive information and users are reluctant to share their data directly with others, which makes it difficult to collect and train on users' data. Federated learning (FL) can protect the privacy of users by sharing their local models instead of their data \cite{Chen2021}. In traditional FL, the global model is periodically updated by aggregating all vehicles' local models \cite{Wang2020} -\cite{Cheng2021}. However, vehicles may frequently drive out of the coverage area of the VEC before they update their local models, and thus the local models cannot be uploaded in the same area, which would reduce the accuracy of the global model as well as the probability of obtaining the predicted popular contents. Hence, this motivates us to consider the mobility of vehicles and propose an asynchronous FL scheme to accurately predict the popular contents in VEC. Generally, the predicted popular contents should be cached in the local RSUs of the vehicles to guarantee a low content transmission delay. However, the caching capacity of each local RSU is limited and the popular contents may be diverse, so the size of the predicted popular contents usually exceeds the cache capacity of the local RSU. Hence, the VEC has to determine where the predicted popular contents are cached and updated. The content transmission delay is an important metric for providing real-time vehicular applications. The different popular contents cached in the local and neighboring RSUs affect the way vehicles fetch contents, and thus affect the content transmission delay. In addition, the content transmission delay of each vehicle is impacted by its channel condition, which is affected by vehicle mobility.
Therefore, it is necessary to consider the mobility of vehicles when designing a cooperative caching scheme, in which the predicted popular contents can be cached among RSUs to optimize the content transmission delay. In contrast to conventional decision algorithms, deep reinforcement learning (DRL) is a favorable tool to construct the decision-making framework and optimize cooperative content caching in complex vehicular environments \cite{Zhu2021}. Therefore, we employ DRL to determine the optimal cooperative caching to reduce the content transmission delay of vehicles. In this paper, we consider the vehicle mobility and propose a cooperative Caching scheme in VEC based on Asynchronous Federated and deep Reinforcement learning (CAFR). The main contributions of this paper are summarized as follows. \begin{itemize} \item[1)] By considering the mobility characteristics of vehicles, including their positions and velocities, we propose an asynchronous FL algorithm to improve the accuracy of the global model. \item[2)] We propose an algorithm to predict the popular contents based on the global model, where each vehicle adopts an autoencoder (AE) to predict its interested contents based on the global model, while the local RSU collects the interested contents of all vehicles within its coverage area to determine the popular contents. \item[3)] We elaborately design a DRL framework (dueling deep Q-network (DQN)) to formulate the cooperative caching problem, where the state, action and reward function are defined. The local RSU can then determine the optimal cooperative caching to minimize the content transmission delay based on the dueling DQN algorithm. \end{itemize} The rest of the paper is organized as follows. Section \ref{sec2} reviews the related works on content caching in VNs. Section \ref{sec3} briefly describes the system model.
Section \ref{sec5} proposes a mobility-aware cooperative caching scheme in VEC based on the asynchronous federated and deep reinforcement learning method. We present some simulation results in Section \ref{sec6}, and conclude the paper in Section \ref{sec7}. \section{Related Work} \label{sec2} In this section, we first review the existing works related to content caching in vehicular networks (VNs), and then survey the current state of the art of cooperative content caching schemes in VEC. In \cite{YDai2020}, Dai \textit{et al.} proposed a blockchain-empowered distributed content caching framework to achieve security and protect privacy, and considered the mobility of vehicles to design an intelligent content caching scheme based on the DRL framework. In \cite{Yu2021}, Yu \textit{et al.} proposed a mobility-aware proactive edge caching scheme in VNs that allows multiple vehicles with private data to collaboratively train a global model for predicting content popularity, in order to meet the requirements of computationally intensive and latency-sensitive vehicular applications. In \cite{JZhao2021}, Zhao \textit{et al.} optimized the edge caching and computation management for service caching, and adopted Lyapunov optimization to deal with the dynamic and unpredictable challenges in VNs. In \cite{SJiang2020}, Jiang \textit{et al.} constructed a two-tier secure access control structure for providing content caching in VNs with the assistance of edge devices, and proposed a group-signature-based scheme for the purpose of anonymous authentication. In \cite{CTang2021}, Tang \textit{et al.} proposed a new optimization method to reduce the average response time of caching in VNs, and then adopted Lyapunov optimization technology to constrain the long-term energy consumption and guarantee the stability of the response time.
In \cite{YDai2022}, Dai \textit{et al.} proposed a VN with a digital twin to cache contents for adaptive network management and policy arrangement, and designed an offloading scheme based on the DRL framework to minimize the total offloading delay. However, the above content caching schemes in VNs did not take into account cooperative caching in the VEC environment. There are some works considering cooperative content caching schemes in VEC. In \cite{GQiao2020}, Qiao \textit{et al.} proposed a cooperative edge caching scheme in VEC and constructed a double time-scale Markov decision process to minimize the content access cost, employing the deep deterministic policy gradient (DDPG) method to solve the long-term mixed-integer linear programming problems. In \cite{JChen2020}, Chen \textit{et al.} proposed a cooperative edge caching scheme in VEC that considers both location-based and popular contents, and designed an optimal scheme for cooperative content placement based on an ant colony algorithm to minimize the total transmission delay and cost. In \cite{LYao2022}, Yao \textit{et al.} designed a cooperative edge caching scheme with consistent hashing and mobility prediction in VEC to predict the path of each vehicle, and also proposed a cache replacement policy based on content popularity to decide the priorities of collaborative contents. In \cite{RWang2021}, Wang \textit{et al.} proposed a cooperative edge caching scheme in VEC based on long short-term memory (LSTM) networks, which caches the predicted contents in RSUs or other vehicles and thus reduces the content transmission delay. In \cite{DGupta2020}, Gupta \textit{et al.} proposed a cooperative caching scheme that jointly considers cache location, content popularity and the predicted rating of contents to make caching decisions based on non-negative matrix factorization, and employs legitimate user authorization to ensure the secure delivery of cached contents.
In \cite{LYao2019}, Yao \textit{et al.} proposed a cooperative caching scheme based on mobility prediction and drivers' social similarities in VEC, where the regularity of vehicles' movement behaviors is predicted based on a hidden Markov model to improve the caching performance. In \cite{RWu2022}, Wu \textit{et al.} proposed a hybrid service provisioning framework and cooperative caching scheme in VEC to solve the profit allocation problem among content providers (CPs), and proposed an optimization model to improve the caching performance in managing the caching resources. In \cite{LYao2017}, Yao \textit{et al.} proposed a cooperative caching scheme based on mobility prediction, where the popular contents may be cached in the mobile vehicles within the coverage area of a hot spot. They also designed a cache replacement scheme according to content popularity to solve the limited caching capacity problem of each edge cache device. In \cite{KZhang2018}, Zhang \textit{et al.} proposed a cooperative edge caching architecture that focuses on mobility-aware caching, where the vehicles cache contents collaboratively with base stations. They also introduced a vehicle-aided edge caching scheme to improve the capability of edge caching. In \cite{KLiu2016}, Liu \textit{et al.} designed a cooperative caching scheme that allows vehicles to search for unrequested contents. This scheme facilitates content sharing among vehicles and improves the service performance. In \cite{SWang2017}, Wang \textit{et al.} proposed a VEC caching scheme to reduce the total transmission delay. This scheme extends the capability of the data center from the core network to the edge nodes by cooperatively caching popular contents of different CPs. It minimizes the VUs' average delay according to an iterative ascending price method.
In \cite{MLiu2021}, Liu \textit{et al.} proposed a real-time caching scheme in which edge devices cooperate to improve the caching resource utilization. In addition, they adopted the DRL framework to optimize the problem of searching requests and utility models to guarantee the search efficiency. In \cite{BKo2019}, Ko \textit{et al.} proposed an adaptive scheduling scheme consisting of the centralized scheduling mechanism, ad hoc scheduling mechanism and cluster management mechanism to exploit the ad hoc data sharing among different RSUs. In \cite{JCui2020}, Cui \textit{et al.} proposed a privacy-preserving data downloading method in VEC, where the RSUs can find popular contents by analyzing encrypted requests of nearby vehicles to improve the downloading efficiency of the network. In \cite{QLuo2020}, Luo \textit{et al.} designed a communication, computation and cooperative caching framework, where computing-enabled RSUs provide computation and bandwidth resource to the VUs to minimize the data processing cost in VEC. As mentioned above, no other work has considered the vehicle mobility and privacy of VUs simultaneously to design cooperative caching schemes in VEC, which motivates us to propose a mobility-aware cooperative caching scheme in VEC based on the asynchronous FL and DRL. \begin{figure} \center \includegraphics[scale=0.7]{1-eps-converted-to.pdf} \caption{VEC scenario} \label{fig1} \end{figure} \section{System Model} \label{sec3} \subsection{System Scenario} As shown in Fig. \ref{fig1}, we consider a three-tier VEC in an urban scenario that consists of a local RSU, a neighboring RSU, an MBS attached to a cloud and some vehicles moving in the coverage area of the local RSU. The top tier is the MBS deployed at the center of the VEC, while the middle tier consists of the RSUs deployed in the coverage area of the MBS. They are placed on one side of the road. The bottom tier is the vehicles driving within the coverage area of the RSUs.
Each vehicle stores a large amount of VUs' historical data, i.e., local data. Each data entry is a vector reflecting different information of a VU, including the VU's personal information such as identity (ID) number, gender, age and postcode, the contents that the VU may request, as well as the VU's ratings for the contents, where a larger rating for a content indicates that the VU is more interested in the content. Particularly, the rating for a content may be $0$, which means that it is not popular or has not been requested by the VU. Each vehicle randomly chooses a part of the local data to form a training set while the rest is used as a testing set. The time duration of vehicles within the coverage area of the MBS is divided into rounds. For each round, each vehicle randomly selects contents from its testing set as the requested contents, and sends the request information to the local RSU to fetch the contents at the beginning of each round. We assume that the MBS has abundant storage capacity and caches all available contents, while the limited storage capacity of each RSU can only accommodate part of the contents. Therefore, the vehicle fetches each of the requested contents from the local RSU, neighboring RSU or MBS in different conditions. Specifically, \subsubsection{Local RSU}If a requested content is cached in the local RSU, the local RSU sends back the requested content to the vehicle. In this case the vehicle fetches the content from the local RSU. \subsubsection{Neighboring RSU}If a requested content is not cached in the local RSU, the local RSU transfers the request to the neighboring RSU, and the neighboring RSU sends the content to the local RSU if it caches the requested content. Afterward, the local RSU sends back the content to the vehicle. In this case the vehicle fetches the content from the neighboring RSU.
\subsubsection{MBS}If a content is neither cached in the local RSU nor the neighboring RSU, the vehicle sends the request to the MBS that directly sends back the requested content to the vehicle. In this case, the VU fetches the content from the MBS. \subsection{Mobility Model of Vehicles} We assume that all vehicles drive in the same direction and arrive at the local RSU according to a Poisson process with arrival rate $\lambda_{v}$. Once a vehicle enters the coverage of the local RSU, it sends request information to the local RSU. Each vehicle keeps the same mobility characteristics including position and velocity within a round and may change its mobility characteristics at the beginning of each round. The velocities of different vehicles are independent and identically distributed. The velocity of each vehicle is generated by a truncated Gaussian distribution, which is flexible and consistent with the real dynamic vehicular environment. For round $r$, the number of vehicles driving in the coverage area of the local RSU is $N^{r}$. The set of $N^{r}$ vehicles is denoted as $\mathbb{V}^{r}=\left\{V_{1}^{r}, V_{2}^{r},\ldots, V_{i}^{r}, \ldots, V_{N^{r}}^{r}\right\}$, where $V_{i}^{r}$ is vehicle $i$ driving in the local RSU $(1 \leq i \leq N^{r})$. Let $\left\{U_{1}^{r}, U_{2}^{r}, \ldots, U_{i}^{r}, \ldots, U_{N^{r}}^{r}\right\}$ be the velocities of all vehicles driving in the local RSU, where $U_{i}^{r}$ is the velocity of $V_{i}^{r}$. According to \cite{AlNagar2019}, the probability density function of $U_{i}^{r}$ is expressed as
\begin{equation}
f(U_{i}^r) = \begin{cases}
\dfrac{e^{-\frac{(U_{i}^r - \mu)^2}{2\sigma^2}}}{\sqrt{2\pi\sigma^2}\left(\operatorname{erf}\left(\frac{U_{\max}-\mu}{\sigma\sqrt{2}}\right)-\operatorname{erf}\left(\frac{U_{\min}-\mu}{\sigma\sqrt{2}}\right)\right)}, & U_{\min} \le U_{i}^r \le U_{\max},\\[6pt]
0, & \text{otherwise},
\end{cases}
\label{eq1}
\end{equation}
where $U_{\max}$ and $U_{\min}$ are the maximum and minimum velocity thresholds of each vehicle, respectively, and $\operatorname{erf}\left(\frac{U_{i}^{r}-\mu}{\sigma \sqrt{2}}\right)$ is the Gauss error function of $U_{i}^{r}$ under the mean $\mu$ and variance $\sigma^{2}$. \subsection{Communication Model} The communications between the local RSU and neighboring RSU adopt the wired link. Each vehicle keeps the same communication model during a round and changes its communication model for different rounds. When the round is $r$, the channel gain of $V_{i}^{r}$ is modeled as \cite{3gpp}
\begin{equation}
h_{i}^{r}(dis(x,V_{i}^{r}))=\alpha_{i}^{r}(dis(x,V_{i}^{r})) g_{i}^{r}(dis(x,V_{i}^{r})), \quad x=S,M,
\label{eq2}
\end{equation}
where $x=S$ denotes the local RSU and $x=M$ denotes the MBS, $dis(x,V_{i}^{r})$ is the distance between the local RSU$/$MBS and $V_{i}^{r}$, $\alpha_{i}^{r}(dis(x,V_{i}^{r}))$ is the path loss between the local RSU$/$MBS and $V_{i}^{r}$, and $g_{i}^{r}(dis(x,V_{i}^{r}))$ is the shadowing channel fading between the local RSU$/$MBS and $V_{i}^{r}$, which follows a log-normal distribution. Each RSU communicates with the vehicles in its coverage area through vehicle-to-RSU (V2R) links, while the MBS communicates with vehicles through vehicle-to-base-station (V2B) links. Since the distances between the local RSU$/$MBS and $V_{i}^{r}$ are different in different rounds, the V2R$/$V2B links suffer from different channel impairments, and thus transmit at different transmission rates in different rounds. The transmission rates under the V2R and V2B links are calculated as follows.
According to the Shannon theorem, the transmission rate between the local RSU and $V_{i}^{r}$ is calculated as \cite{Chenwu2020} \begin{equation} R_{R, i}^{r}=B\log _{2}\left(1+\frac{p_B h_{i}^{r}(dis(S,V_{i}^{r}))}{\sigma_{c}^{2}}\right), \label{eq3} \end{equation}where $B$ is the available bandwidth, $p_B$ is the transmit power level used by the local RSU and $\sigma_{c}^{2}$ is the noise power. Similarly, the transmission rate between the MBS and $V_{i}^{r}$ is calculated as \begin{equation} R_{B, i}^{r}=B\log _{2}\left(1+\frac{p_{M} h_{i}^{r}(dis(M,V_{i}^{r}))}{\sigma_{c}^{2}}\right), \label{eq4} \end{equation}where $p_{M}$ is the transmit power level used by the MBS. \begin{figure} \center \includegraphics[scale=0.75]{2-eps-converted-to.pdf} \caption{Asynchronous FL} \label{fig2} \end{figure} \section{Cooperative Caching Scheme} \label{sec5} In this section, we propose a cooperative caching scheme to optimize the content transmission delay in each round $r$. We first propose an asynchronous FL algorithm to protect VUs' information and obtain an accurate model. Then we propose an algorithm to predict the popular contents based on the obtained model. Finally, we present a DRL-based algorithm to determine the optimal cooperative caching according to the predicted popular contents. Next, we will introduce the asynchronous FL algorithm, the popular content prediction algorithm and the DRL-based algorithm, respectively. \subsection{Asynchronous Federated Learning} As shown in Fig. \ref{fig2}, the asynchronous FL algorithm consists of 5 steps as follows. \subsubsection{Select Vehicles} \ \newline \indent The main goal of this step is to select the vehicles whose staying time in the local RSU is long enough to ensure they can participate in the asynchronous FL and complete the training process.
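This selection rule reduces to a threshold test on the staying time of Eq. \eqref{eq5}; the sketch below uses hypothetical coverage and timing values, not the simulation parameters of this paper.

```python
def staying_time(coverage, traversed, velocity):
    """Eq. (5): remaining time of a vehicle inside the local RSU's coverage."""
    return (coverage - traversed) / velocity

def select_vehicles(vehicles, t_training, t_inference, coverage):
    """Keep only vehicles that can finish training + inference before leaving."""
    return [v for v in vehicles
            if staying_time(coverage, v["traversed"], v["velocity"])
            > t_training + t_inference]

# Hypothetical fleet: 500 m coverage, 20 s training time, 5 s inference time.
fleet = [{"id": 1, "traversed": 100.0, "velocity": 10.0},   # 40 s remaining
         {"id": 2, "traversed": 450.0, "velocity": 10.0}]   # 5 s remaining
selected = select_vehicles(fleet, t_training=20.0, t_inference=5.0, coverage=500.0)
```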
Each vehicle first sends its mobility characteristics, including its velocity and position (i.e., the distance to the local RSU and the distance it has traversed within the coverage of the local RSU), to the local RSU; then the local RSU selects vehicles according to the staying time that is calculated based on each vehicle's mobility characteristics. The staying time of $V_{i}^{r}$ in the local RSU is calculated as \begin{equation} T_{r,i}^{staying}=\left(L_{s}-P_{i}^{r}\right) / U_{i}^{r}, \label{eq5} \end{equation} where $L_s$ is the coverage range of the local RSU and $P_{i}^{r}$ is the distance that $V_{i}^{r}$ has traversed within the coverage of the local RSU. The staying time of $V_{i}^{r}$ should be larger than the sum of the average training time $T_{training}$ and inference time $T_{inference}$ to guarantee that $V_{i}^{r}$ can complete the training process. Therefore, if $T_{r,i}^{staying}>T_{training}+T_{inference}$, the local RSU selects $V_{i}^{r}$ to participate in asynchronous FL training. Otherwise, $V_{i}^{r}$ is ignored. \subsubsection{Download Model} \ \newline \indent In this step, the local RSU will generate the global model $\omega^{r}$. For the first round, the local RSU initializes a global model based on the AE, which can extract the hidden features used for popular content prediction. In each round, the local RSU updates the global model and finally transmits the global model $\omega^{r}$ to all the selected vehicles. \subsubsection{Local Training} \ \newline \indent In this step, each vehicle in the local RSU sets the downloaded global model $\omega^{r}$ as the initial local model and updates the local model iteratively through training. Afterward, the updated local model is fed back to the local RSU. For each iteration $k$, $V_{i}^{r}$ randomly samples some training data $n_{i,k}^{r}$ from the training set. Then, it uses $n_{i,k}^{r}$ to train the local model based on the AE that consists of an encoder and a decoder.
Let $W_{i,k}^{r,e}$ and $b_{i,k}^{r,e}$ be the weight matrix and bias vector of the encoder for iteration $k$, respectively, and let $W_{i,k}^{r,d}$ and $b_{i,k}^{r,d}$ be the weight matrix and bias vector of the decoder for iteration $k$, respectively. Thus the local model of $V_{i}^{r}$ for iteration $k$ is expressed as $\omega_{i,k}^r=\{W_{i,k}^{r,e}, b_{i,k}^{r,e}, W_{i,k}^{r,d}, b_{i,k}^{r,d}\}$. For each training data $x$ in $n_{i,k}^{r}$, the encoder first maps the original training data $x$ to a hidden layer to obtain the hidden feature of $x$, i.e., $z(x)=f\left(W_{i,k}^{r,e}x+b_{i,k}^{r,e}\right)$. Then the decoder calculates the reconstructed input $\hat{x}$, i.e., $\hat{x}=g\left(W_{i,k}^{r,d}z(x)+b_{i,k}^{r,d}\right)$, where $f{(\cdot)}$ and $g{(\cdot)}$ are nonlinear (logistic) activation functions \cite{Ng2011}. Afterward, the loss function of data $x$ under the local model $\omega_{i,k}^r$ is calculated as \begin{equation} l\left(\omega_{i,k}^r;x\right)=(x-\hat{x})^{2}, \label{eq6} \end{equation}where $\omega^{r}_{i,1}=\omega^{r}$. After the loss functions of all the data in $n_{i,k}^{r}$ are calculated, the local loss function for iteration $k$ is calculated as \begin{equation} f(\omega_{i,k}^r)=\frac{1}{\left| n_{i,k}^r\right|}\sum_{x\in n_{i,k}^r} l\left(\omega_{i,k}^r;x\right), \label{eq7} \end{equation} where $\left| n_{i,k}^r\right|$ is the number of data in $n_{i,k}^r$. Then the regularized local loss function is calculated to reduce the deviation between the local model $\omega_{i,k}^r$ and the global model $\omega^{r}$ and thus improve the algorithm convergence, i.e., \begin{equation} g\left(\omega_{i,k}^r\right)=f\left(\omega_{i,k}^r\right)+\frac{\rho}{2}\left\|\omega^{r}-\omega_{i,k}^r\right\|^{2}, \label{eq8} \end{equation} where $\rho$ is the regularization parameter. Let $\nabla g(\omega_{i,k}^{r})$ be the gradient of $g\left(\omega_{i,k}^r\right)$, which is referred to as the local gradient.
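A minimal NumPy sketch of Eqs. \eqref{eq6}--\eqref{eq8}, with the logistic sigmoid standing in for $f(\cdot)$ and $g(\cdot)$ and random toy weights in place of a trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ae_forward(x, W_e, b_e, W_d, b_d):
    """z = f(W_e x + b_e), x_hat = g(W_d z + b_d), with f = g = sigmoid here."""
    z = sigmoid(W_e @ x + b_e)
    return sigmoid(W_d @ z + b_d)

def regularized_loss(x, x_hat, local_params, global_params, rho):
    """Eqs. (6)-(8): mean squared reconstruction error plus the proximal term."""
    recon = np.mean((x - x_hat) ** 2)
    prox = 0.5 * rho * sum(float(np.sum((g - l) ** 2))
                           for g, l in zip(global_params, local_params))
    return recon + prox

rng = np.random.default_rng(1)
W_e, b_e = rng.normal(size=(2, 4)), np.zeros(2)    # 4-dim rating -> 2-dim feature
W_d, b_d = rng.normal(size=(4, 2)), np.zeros(4)
x = rng.random(4)
x_hat = ae_forward(x, W_e, b_e, W_d, b_d)
params = [W_e, b_e, W_d, b_d]
loss = regularized_loss(x, x_hat, params, params, rho=0.1)  # prox term vanishes here
```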
In the previous round, some vehicles may upload the updated local model unsuccessfully due to the delayed training time, which adversely affects the convergence of the global model \cite{Chen2020,Xie2019,-S2021}. Here, these vehicles are called stragglers and the local gradient of a straggler in the previous round is referred to as the delayed local gradient. To solve this problem, the delayed local gradient is aggregated into the local gradient of the current round $r$. Thus, the aggregated local gradient can be calculated as \begin{equation} \nabla \zeta_{i,k}^{r}=\nabla g(\omega_{i,k}^{r})+\beta \nabla g_{i}^{d}, \label{eq9} \end{equation} where $\beta$ is the decay coefficient and $\nabla g_{i}^{d}$ is the delayed local gradient. Note that $\nabla g_{i}^{d}=0$ if $V_{i}^{r}$ uploaded successfully in the previous round. Then the local model for the next iteration is updated as \begin{equation} \omega^{r}_{i,k+1}=\omega^{r}-\eta_{l}^{r}\nabla \zeta_{i,k}^{r}, \label{eq10} \end{equation}where $\eta_{l}^{r}$ is the local learning rate in round $r$, which is calculated as \begin{equation} \eta_{l}^{r}=\eta_{l} \max \{1, \log (r)\}, \label{eq11} \end{equation} where $\eta_{l}$ is the initial value of the local learning rate. Then iteration $k$ is finished and $V_{i}^{r}$ randomly samples some training data again to start the next iteration. When the number of iterations reaches the threshold $e$, $V_{i}^{r}$ completes the local training and uploads the updated local model $\omega_{i}^{r}$ to the local RSU. \subsubsection{Upload Model} \ \newline \indent Each vehicle uploads its updated local model to the local RSU after it completes local training. \subsubsection{Asynchronous aggregation} \ \newline \indent If the local model of $V_{i}^{r}$, i.e., $\omega^{r}_{i}$, is the first model received by the local RSU, the upload is successful and the local RSU updates the global model.
Otherwise, the local RSU drops $\omega^{r}_{i}$ and thus the upload is not successful. When the upload is successful, the local RSU updates the global model $\omega^{r}$ by weighted averaging as follows: \begin{algorithm} \caption{The Asynchronous Federated Learning Algorithm} \label{al1} Initialize the global model $\omega^{1}$;\\ \For{each round $r$ from $1$ to $R^{max}$} { \For{each vehicle $ V^{r}_{i} \in \mathbb{V}^{r}$ \textbf{in parallel}} { $T_{r,i}^{staying}=\left(L_{s}-P_{i}^{r}\right) / U_{i}^{r}$;\\ \If{ $T_{r,i}^{staying}>T_{training}+T_{inference}$} { $V^{r}_i$ is selected to participate in asynchronous FL training; } } \For{each selected vehicle $ V^{r}_{i}$} { $\omega^{r}_{i} \leftarrow \textbf{Vehicle Updates}(\omega^r,i)$;\\ Upload the local model $\omega^{r}_{i}$ to the local RSU;\\ } Receive the updated model $\omega^{r}_{i}$;\\ Calculate the weight of the asynchronous aggregation $\chi_{i}$ based on Eq. \eqref{eq14};\\ Update the global model based on Eq. \eqref{eq12};\\ \Return $\omega^{r+1}$ } \textbf{Vehicle Updates}($\omega^{r},i$):\\ \textbf{Input:} $\omega^r$ \\ Calculate the local learning rate $\eta_{l}^{r}$ based on Eq. \eqref{eq11};\\ \For{each local epoch $k$ from $1$ to $e$} { Randomly sample some data $n_{i,k}^r$ from the training set;\\ \For{each data $x \in n_{i,k}^r$ } { Calculate the loss function of data $x$ based on Eq. \eqref{eq6};\\ } Calculate the local loss function for iteration $k$ based on Eq. \eqref{eq7};\\ Calculate the regularized local loss function $g\left(\omega_{i,k}^r\right)$ based on Eq. \eqref{eq8};\\ Aggregate the local gradient $\nabla \zeta_{i,k}^{r}$ based on Eq. \eqref{eq9};\\ Update the local model $\omega^{r}_{i,k}$ based on Eq.
\eqref{eq10};\\ } Set $\omega^{r}_{i}=\omega^{r}_{i,e}$;\\ \Return$\omega^{r}_{i}$ \end{algorithm} \begin{equation} \omega^{r}=\omega^{r-1}+\frac{d_{i}^r}{d^r} \chi_{i} \omega^{r}_{i}, \label{eq12} \end{equation}where $d_{i}^r$ is the size of the local data in $V_i^r$, $d^r$ is the total local data size of the selected vehicles and $\chi_{i}$ is the weight of the asynchronous aggregation for $V_{i}^{r}$. The weight of the asynchronous aggregation $\chi_{i}$ is calculated by considering the remaining distance of $V_{i}^{r}$ in the coverage area of the local RSU and the content transmission delay from the local RSU to $V_{i}^{r}$, in order to improve the accuracy of the global model and reduce the content transmission delay. Specifically, if the remaining distance $(L_{s}-P_{i}^{r})$ of $V_{i}^{r}$ is large, it may have a long available time to participate in the training, so its local model should occupy a large weight in the aggregation to improve the accuracy of the global model. In addition, the content transmission delay from the local RSU to $V_{i}^{r}$ is important because $V_{i}^{r}$ finally downloads the content from the local RSU when the content is cached in either the local or the neighboring RSU. Thus, if the content transmission delay from the local RSU to $V_{i}^{r}$ is small, its local model should also occupy a large weight in the aggregation to reduce the content transmission delay. The weight of the asynchronous aggregation $\chi_{i}$ is calculated as \begin{equation} \chi_{i}=\mu_{1} {(L_{s}-P_{i}^{r})}+\mu_{2} \frac{s}{R_{R, i}^{r}}, \label{eq13} \end{equation}where $\mu_{1}$ and $\mu_{2}$ are the coefficients of the position weight and transmission weight, respectively, with $\mu_{1}+\mu_{2}=1$, and $s$ is the size of each content. The content transmission delay from the local RSU to $V_{i}^{r}$ is thus captured through the transmission rate between the local RSU and $V_{i}^{r}$, i.e., $R_{R, i}^{r}$.
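Eqs. \eqref{eq12} and \eqref{eq13} can be sketched as follows, with a small NumPy vector standing in for the model parameters; all numbers are toy values, not parameters from this paper.

```python
import numpy as np

def aggregation_weight(mu1, mu2, L_s, P_i, s, rate_i):
    """Eq. (13): position term plus transmission-delay term (mu1 + mu2 = 1)."""
    return mu1 * (L_s - P_i) + mu2 * s / rate_i

def aggregate_global(omega_prev, omega_i, d_i, d_total, chi_i):
    """Eq. (12): weighted asynchronous update of the global model."""
    return omega_prev + (d_i / d_total) * chi_i * omega_i

# Toy example: 500 m coverage, 100 m traversed, 2 Mb content, 10 Mb/s rate.
chi = aggregation_weight(mu1=0.5, mu2=0.5, L_s=500.0, P_i=100.0, s=2.0, rate_i=10.0)
omega = aggregate_global(np.array([1.0, 2.0]), np.array([0.5, 0.5]),
                         d_i=100, d_total=400, chi_i=1.0)
```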
We can further calculate $\chi_{i}$ based on the normalized $L_{s}-P_{i}^{r}$ and $R_{R, i}^{r}$, i.e., \begin{equation} \chi_{i}=\mu_{1} \frac{(L_{s}-P_{i}^{r})}{L_{s}}+\mu_{2} \frac{R_{R, i}^{r}}{\max _{k \in N^{r}}\left(R_{R, k}^{r}\right)}. \label{eq14} \end{equation} Since the local RSU knows $dis(S,V_{i}^{r})$ and $P_{i}^{r}$ for each vehicle $i$ at the beginning of the asynchronous FL, the local RSU can calculate $R_{R, i}^{r}$ according to Eqs. \eqref{eq2} and \eqref{eq3}, and further calculate $\chi_{i}$ according to Eq. \eqref{eq14}. Up to now, the asynchronous FL in round $r$ is finished and the updated global model $\omega^{r}$ is obtained. The process of the asynchronous FL algorithm is shown in Algorithm \ref{al1} for ease of understanding, where $R^{max}$ is the maximum number of rounds and $e$ is the maximum number of local epochs. Then, the local RSU sends the obtained model to each vehicle to predict popular contents. \subsection{Popular Content Prediction} \begin{figure*} \center \includegraphics[scale=0.6]{3-eps-converted-to.pdf} \caption{Popular content prediction process} \label{fig3} \end{figure*} In this subsection, we propose an algorithm to predict the popular contents. As shown in Fig. \ref{fig3}, the popular content prediction algorithm consists of four steps as follows. \subsubsection{Data Preprocessing} \ \newline \indent The VU's rating for a content is $0$ when the VU is uninterested in the content or has not requested it. Thus, it is difficult to determine whether a content with rating $0$ is interesting for the VU. Marking all contents with rating $0$ as uninterested contents would bias the prediction. Therefore, we adopt the obtained model to reconstruct the rating for each content in the first step, which is described as follows. Each vehicle abstracts a rating matrix from the data in the testing set, where the first dimension of the matrix is the VUs' IDs and the second dimension is the VUs' ratings for all contents.
Denote the rating matrix of $V_{i}^r$ as $\boldsymbol{R}_{i}^r$. Then, the AE with the obtained model is adopted to reconstruct $\boldsymbol{R}_{i}^r$. The rating matrix $\boldsymbol{R}_{i}^r$ is used as the input data for the AE, which outputs the reconstructed rating matrix $\hat{\boldsymbol{R}}_{i}^r$. Since $\hat{\boldsymbol{R}}_{i}^r$ is reconstructed based on the obtained model, which reflects the hidden features of the data, $\hat{\boldsymbol{R}}_{i}^r$ can be used to approximate the rating matrix $\boldsymbol{R}_{i}^r$. Then, similar to the rating matrix, each vehicle also abstracts a personal information matrix from the data of the testing set, where the first dimension of the matrix is the VUs' IDs and the second dimension is the VUs' personal information. \subsubsection{Cosine Similarity} \ \newline \indent $V_{i}^r$ counts the number of nonzero ratings for each VU in $\boldsymbol{R}_{i}^r$ and marks the $1/m$ fraction of VUs with the largest counts as active VUs. Then, each vehicle combines $\hat{\boldsymbol{R}}_{i}^r$ and the personal information matrix (the combined matrix is denoted as $\boldsymbol{H}_{i}^r$) to calculate the similarity between each active VU and the other VUs. The similarity between an active VU $a$ and another VU $b$ is calculated according to the cosine similarity \cite{yuet2018} \begin{equation} \begin{aligned} \operatorname{sim}_{a,b}^{r,i}=\cos \left(\boldsymbol{H}_{i}^r(a,:), \boldsymbol{H}_{i}^r(b,:)\right)\\ =\frac{\boldsymbol{H}_{i}^r(a,:) \cdot \boldsymbol{H}_{i}^r(b,:)^T}{\left\|\boldsymbol{H}_{i}^r(a,:)\right\|_{2} \times\left\|\boldsymbol{H}_{i}^r(b,:)\right\|_{2}}, \label{eq15} \end{aligned} \end{equation}where $\boldsymbol{H}_{i}^r(a,:)$ and $\boldsymbol{H}_{i}^r(b,:)$ are the vectors corresponding to VU $a$ and VU $b$ in the combined matrix, respectively, and $\left\|\boldsymbol{H}_{i}^r(a,:)\right\|_{2}$ and $\left\|\boldsymbol{H}_{i}^r(b,:)\right\|_{2}$ are the 2-norms of $\boldsymbol{H}_{i}^r(a,:)$ and $\boldsymbol{H}_{i}^r(b,:)$, respectively.
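Eq. \eqref{eq15} is the standard cosine similarity between two rows of the combined matrix:

```python
import numpy as np

def cosine_similarity(a, b):
    """Eq. (15): cosine similarity between the combined feature vectors of two VUs."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```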
Then, for each active VU $a$, $V_{i}^r$ selects the VUs with the $K$ largest similarities as the $K$ neighboring VUs of VU $a$. The ratings of the $K$ neighboring VUs also reflect the preferences of VU $a$ to a certain extent. \subsubsection{Interested Contents} \ \newline \indent After determining the neighboring VUs of the active VUs, the vectors of the neighboring VUs for each active VU are abstracted from $\boldsymbol{R}_{i}^r$ to construct a matrix $\boldsymbol{H}_K$, where the first dimension of $\boldsymbol{H}_K$ is the IDs of the neighboring VUs for the active VUs, while the second dimension of $\boldsymbol{H}_K$ is the ratings of the contents from the neighboring VUs. In $\boldsymbol{H}_K$, a content with a VU's nonzero rating is regarded as the VU's interested content. Then, for each content, the number of VUs interested in it is counted; this count is referred to as the content popularity of the content. $V_{i}^r$ selects the contents with the $F_c$ largest content popularity as the predicted interested contents. \subsubsection{Popular Contents} \ \newline \indent After the vehicles in the local RSU upload their predicted interested contents, the local RSU collects and compares the predicted interested contents uploaded from all vehicles to select the contents with the $F_{c}$ largest content popularity as the popular contents. The proposed popular content prediction algorithm is illustrated in Algorithm \ref{al2}, where $\mathbb{C}^{r}$ is the set of the popular contents and $\mathbb{C}_{i}^r$ is the set of interested contents of $V^{r}_i$.
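The per-content counting in the two steps above can be sketched as follows; the content IDs and ratings are toy data, not from the paper's dataset.

```python
from collections import Counter

def predict_interested(neighbor_ratings, f_c):
    """Count, for each content, how many neighboring VUs gave it a nonzero
    rating (its content popularity) and return the f_c most popular IDs."""
    popularity = Counter()
    for ratings in neighbor_ratings:          # one {content_id: rating} per VU
        popularity.update(cid for cid, r in ratings.items() if r != 0)
    return [cid for cid, _ in popularity.most_common(f_c)]

# Toy data: three neighboring VUs rating four contents.
neighbors = [{"c1": 5, "c2": 0, "c3": 3},
             {"c1": 4, "c3": 1, "c4": 0},
             {"c1": 2, "c2": 1}]
top = predict_interested(neighbors, f_c=2)   # c1 counted 3 times, c3 twice
```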
\begin{algorithm} \caption{The Popular Content Prediction Algorithm} \label{al2} \textbf{Input: $\omega^{r}$}\\ \For{each vehicle $ V^{r}_{i} \in \mathbb{V}^{r}$} { Construct the rating matrix $\boldsymbol{R}_{i}^r$ and the personal information matrix;\\ $\hat{\boldsymbol{ R}}_{i}^r \leftarrow AE(\omega^{r},\boldsymbol{R}_{i}^r)$;\\ Combine $\hat{\boldsymbol{ R}}_{i}^r$ and the personal information matrix as $\boldsymbol{H}_{i}^r$;\\ $\mathbb{C}_{i}^r \leftarrow \textbf{Vehicle Predicts}(\boldsymbol{H}_{i}^r,i)$;\\ Upload $\mathbb{C}_{i}^r$ to the local RSU;\\ } \textbf{Compare} the received contents and select the $F_c$ most interested contents into $\mathbb{C}^{r}$.\\ \Return $\mathbb{C}^{r}$\\ \textbf{Vehicle Predicts}$(\boldsymbol{H}_{i}^r, i)$:\\ \textbf{Input: $\boldsymbol{H}_{i}^r, i\in \{1,2,\ldots,N^r\}$}\\ Calculate the similarity between $V_{i}^r$ and the other vehicles based on Eq. \eqref{eq15};\\ Select the first $K$ vehicles with the largest similarity as the neighboring vehicles of $V_{i}^r$;\\ Construct the reconstructed rating matrices of the $K$ neighboring vehicles as $\boldsymbol{H}_K$;\\ Select the $F_c$ most interested contents as $\mathbb{C}_{i}^r$;\\ \Return $\mathbb{C}_{i}^r$ \end{algorithm} The cache capacity $c$ of each RSU, i.e., the largest number of contents that each RSU can accommodate, is usually smaller than $F_{c}$. Next, we will propose a cooperative caching scheme to determine where the predicted popular contents can be cached. \subsection{Cooperative Caching Based on DRL} We assume that the computation capability of each RSU is powerful enough that the cooperative caching decision can be determined within a short time. The main goal is to find an optimal cooperative caching policy based on DRL to minimize the content transmission delay. Next, we will formulate the DRL framework and then introduce the DRL algorithm. \subsubsection{DRL Framework} \ \newline \indent The DRL framework includes state, action and reward. The training process is divided into slots.
For the current slot $t$, the local RSU observes the current state $s(t)$ and decides the current action $a(t)$ based on $s(t)$ according to a policy $\pi$, which is used to generate the action based on the state at each slot. Then the local RSU obtains the current reward $r(t)$ and observes the next state $s(t+1)$, which is transited from the current state $s(t)$. We will design $s(t)$, $a(t)$ and $r(t)$, respectively, for this DRL framework. \paragraph{State} \ \newline \indent We consider the contents cached by the local RSU as the current state $s(t)$. In order to focus on the contents with high popularity, the contents of the state space $s(t)$ are sorted in descending order based on the predicted content popularity of the $F_c$ popular contents; thus the current state can be expressed as $s(t)=\left(s_{1}, s_{2}, \ldots, s_{c}\right)$, where $s_{i}$ is the $i$th most popular content. \paragraph{Action} \ \newline \indent Action $a(t)$ represents whether the contents cached in the local RSU need to be relocated or not. Among the $F_c$ predicted popular contents, the contents that are not cached in the local RSU form a set $\mathbb{N}$. If $a(t)=1$, the local RSU randomly selects $n$ $(n<c)$ contents from $\mathbb{N}$ and exchanges them with the $n$ least popular contents cached in the local RSU, and then sorts the contents in descending order based on their content popularity to get $s(t+1)$. The neighboring RSU then randomly samples $c$ contents from the $F_c$ popular contents that do not belong to $s(t+1)$ as its cached contents within the next slot $t+1$. We denote the contents cached by the neighboring RSU as $s_n(t+1)$. If $a(t)=0$, the contents cached in the local RSU will not be relocated and the neighboring RSU also determines its cached contents, similar to the case when $a(t)=1$.
\paragraph{Reward} \ \newline \indent The reward function $r(t)$ is designed to minimize the total content transmission delay to fetch the contents requested by vehicles. Note that the local RSU has recorded all the contents requested by the vehicles. The content transmission delays to fetch a requested content $f$ are different when the content is cached in different places. If content $f$ is cached in the local RSU, i.e., $f\in s(t)$, the local RSU transmits content $f$ to $V_{i}^{r}$, thus the content transmission delay is calculated as \begin{equation} d_{R, i, f}^{r}=\frac{s}{R_{R, i}^{r}}, \label{eq16} \end{equation}where $R_{R, i}^{r}$ is the transmission rate between the local RSU and $V_{i}^{r}$, which has been calculated by Eq. \eqref{eq3}. If content $f$ is cached in the neighboring RSU, i.e., $f\in s_n(t)$, the neighboring RSU sends the content to the local RSU that forwards the content to $V_{i}^{r}$, thus the transmission delay is calculated as \begin{equation} \bar{d}_{R, i, f}^{r}=\frac{s}{R_{R, i}^{r}}+\frac{s}{R_{R-R}}, \label{eq17} \end{equation}where $R_{R-R}$ is the transmission rate between the local RSU and neighboring RSU, which is a constant transmission rate in the wired link. If content $f$ is neither cached in the local RSU nor in the neighboring RSU, i.e., $f \notin s(t) \text{ and } f \notin s_n(t)$, the MBS transmits content $f$ to $V_{i}^{r}$, thus the content transmission delay is expressed as \begin{equation} d_{B, i,f}^{r}=\frac{s}{R_{B, i}^{r}}, \label{eq18} \end{equation}where $R_{B, i}^{r}$ is the transmission rate between the MBS and $V_{i}^{r}$, which is calculated according to Eq. \eqref{eq4}. 
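The three delays of Eqs. \eqref{eq16}--\eqref{eq18} follow directly from the rates of Eqs. \eqref{eq3} and \eqref{eq4}; a sketch with toy link parameters (the numeric values are illustrative assumptions):

```python
import math

def shannon_rate(bandwidth_hz, tx_power, gain, noise_power):
    """Eqs. (3)-(4): achievable rate B * log2(1 + p * h / sigma_c^2)."""
    return bandwidth_hz * math.log2(1.0 + tx_power * gain / noise_power)

def delay_local(s, rate_r):
    """Eq. (16): content of size s fetched from the local RSU."""
    return s / rate_r

def delay_neighbor(s, rate_r, rate_rr):
    """Eq. (17): neighboring RSU -> local RSU (wired) -> vehicle."""
    return s / rate_r + s / rate_rr

def delay_mbs(s, rate_b):
    """Eq. (18): content fetched directly from the MBS."""
    return s / rate_b
```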
In order to clearly distinguish the content transmission delays under different conditions, we set the reward that $V_{i}^r$ fetches content $f$ at slot $t$ as \begin{equation} r_{i,f}^r(t)=\begin{cases} e^{-\lambda_{1} d_{R,i,f}^{r}}& f\in s(t)\\ e^{-\left(\lambda_{1} d_{R, i, f}^{r}+\lambda_{2} \bar d_{R, i, f}^{r}\right)}&f \in s_n(t) \\ e^{-\lambda_{3} d_{B, i, f}^{r}}&f \notin s(t) \text{ and } f \notin s_n(t) \end{cases}, \label{eq19} \end{equation} where $\lambda_{1}+\lambda_{2}+\lambda_{3}=1$ and $\lambda_{1}<\lambda_{2}\ll \lambda_{3}$. Thus the reward function $r(t)$ is calculated as \begin{equation} r(t)=\sum_{i=1}^{N^r}\sum_{f=1}^{F_{i}^r} r_{i,f}^r(t), \label{eq20} \end{equation}where $F_{i}^r$ is the number of requested contents from $V_{i}^r$. \subsubsection{DRL Algorithm} \ \newline \indent As mentioned above, the next state will change only when the action is $1$. The dueling DQN algorithm is particularly suitable for cases where some actions have no relevant effect on subsequent states \cite{Wangarxiv2016}. Specifically, the dueling DQN decomposes the Q-value into two functions $V$ and $A$. Function $V$ is the state value function that is unrelated to the action, while $A$ is the action advantage function that is related to the action. Therefore, we adopt the dueling DQN algorithm to solve this problem.
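The per-content reward of Eq. \eqref{eq19} can be sketched as below; the $\lambda$ values are illustrative choices satisfying $\lambda_1<\lambda_2\ll\lambda_3$, not values from this paper.

```python
import math

def content_reward(d_local, d_nbr, d_mbs, where, lams):
    """Eq. (19): exponentially scaled per-content reward; `where` tells
    whether content f is cached at the local RSU, neighboring RSU or MBS."""
    l1, l2, l3 = lams
    if where == "local":
        return math.exp(-l1 * d_local)
    if where == "neighbor":
        return math.exp(-(l1 * d_local + l2 * d_nbr))
    return math.exp(-l3 * d_mbs)

lams = (0.05, 0.15, 0.80)   # illustrative: l1 < l2 << l3, summing to 1
r_local = content_reward(1.0, 1.5, 2.0, "local", lams)
r_nbr = content_reward(1.0, 1.5, 2.0, "neighbor", lams)
r_mbs = content_reward(1.0, 1.5, 2.0, "mbs", lams)
```

Summing these rewards over all vehicles and requested contents gives $r(t)$ of Eq. \eqref{eq20}.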
\begin{algorithm} \caption{Cooperative Caching Based on Dueling DQN Algorithm} \label{al3} Initialize replay buffer $\mathcal{D}$, the parameters of the prediction network $\theta$, the parameters of the target network $\theta'$;\\ \textbf{Input:} requested contents from all vehicles in the local RSU for round $r$\\ \For{episode from $1$ to $T_s$} { Local RSU randomly caches $c$ contents from $F_c$ popular contents;\\ Neighboring RSU randomly caches $c$ contents from $F_c$ popular contents that are not cached in the local RSU;\\ \For{slot from $1$ to $N_s$} { Observe the state $s(t);$\\ Calculate the Q-value of prediction network $Q(s(t), a; \theta)$ based on Eq. \eqref{eq21};\\ Calculate the action $a(t)$ based on Eq. \eqref{eq22};\\ Obtain state $s(t+1)$ after executing action $a(t)$;\\ Obtain reward $r(t)$ based on Eqs. \eqref{eq16} - \eqref{eq20};\\ Store tuple $(s(t),a(t),r(t),s(t+1))$ in $\mathcal{D}$;\\ \If{number of tuples in $\mathcal{D}$ is larger than $I$} { Randomly sample a minibatch of $I$ tuples from $\mathcal{D}$;\\ \For{tuple $i$ from $1$ to $I$} { Calculate the Q-value function of target network $Q'(s^i, a; \theta')$ based on Eq. \eqref{eq23};\\ Calculate the target Q-value of the target network $y^i$ based on Eq. \eqref{eq24};\\ Calculate the loss function $L(\theta)$ based on Eq. \eqref{eq25};\\ } Calculate the gradient of loss function $\nabla_{\theta} L(\theta)$ based on Eq. \eqref{eq26};\\ Update parameters of the prediction network $\theta$ based on Eq. \eqref{eq27};\\ } \If{number of slots is $M$} {$\theta'=\theta$.\\} } } \end{algorithm} The dueling DQN includes a prediction network, a target network and a replay buffer. The prediction network evaluates the current state-action value (Q-value) function, while the target network generates the optimal Q-value function. Each of them consists of three layers, i.e., the feature layer, the state-value layer, and the advantage layer. 
The replay buffer $\mathcal{D}$ is adopted to cache the transitions of each slot. The dueling DQN algorithm is illustrated in Algorithm \ref{al3} and is described in detail as follows. \begin{figure*} \center \includegraphics[scale=0.27]{4-eps-converted-to.pdf} \caption{The flow diagram of the dueling DQN} \label{fig4} \end{figure*} Firstly, the parameters of the prediction network $\theta$ and the parameters of the target network $\theta'$ are initialized randomly, and the requested contents from all vehicles in the local RSU for round $r$ are taken as input (lines 1-2). Then the algorithm is executed for $T_s$ episodes. At the beginning of each episode, the local RSU randomly selects $c$ contents from the $F_c$ popular contents, and the neighboring RSU randomly selects $c$ contents from the $F_c$ popular contents that are not cached in the local RSU. Then the algorithm is executed iteratively from slot $1$ to $N_s$. In each slot $t$, the local RSU first observes state $s(t)$ and then inputs $s(t)$ to the prediction network, where it passes through the feature layer, the state-value layer, and the advantage layer. The prediction network outputs the state value function $V(s(t) ; \theta)$ and the action advantage function $A(s(t), a ; \theta)$ under each action ${a \in\{0,1\}}$. The Q-value function of the prediction network under each action $a$ is then calculated as \begin{equation} \begin{aligned} Q(s(t), a; \theta)=V(s(t) ; \theta)+\{ A(s(t), a ; \theta) \\ -\mathbb{E}[A(s(t), a ; \theta)] \} \\ \end{aligned}. \label{eq21} \end{equation} In Eq. \eqref{eq21}, subtracting the average of the action advantage functions over all actions, i.e., $\mathbb{E}[A(s(t), a ; \theta)]$, from $A(s(t), a ; \theta)$ narrows the range of Q-values and removes a redundant degree of freedom, thus improving the stability of the algorithm.
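A minimal sketch of the dueling aggregation in Eq. \eqref{eq21} (using NumPy; `dueling_q` is an illustrative name):

```python
import numpy as np

def dueling_q(v, advantages):
    """Combine the state value V(s) and the advantages A(s, a) into
    Q-values as in Eq. (21): Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)).
    Subtracting the mean advantage removes the redundant degree of
    freedom between V and A, which stabilizes training."""
    a = np.asarray(advantages, dtype=float)
    return v + (a - a.mean())
```

Note that adding a constant to every advantage leaves the Q-values unchanged, which is exactly the redundancy that the mean subtraction removes.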
Then action $a(t)$ is chosen by the $\varepsilon \text {-greedy}$ method: with probability $1-\varepsilon$, the action that maximizes the Q-value is selected, i.e., \begin{equation} a(t)=\underset{a \in\{0,1\}}{\operatorname{argmax}}(Q(s(t), a;\theta)) \label{eq22}, \end{equation} and with probability $\varepsilon$, a random action is selected. Particularly, action $a(1)$ is initialized as $1$ at slot $1$. The local RSU calculates the reward $r(t)$ according to Eqs. \eqref{eq16} - \eqref{eq20}, state $s(t)$ transitions to the next state $s(t+1)$, and the local RSU observes $s(t+1)$. Next, the neighboring RSU randomly samples $c$ popular contents that are not cached in $s(t+1)$ as its cached contents, which is denoted as $s_n(t+1)$. The transition from $s(t)$ to $s(t+1)$ is stored as the tuple $(s(t),a(t),r(t),s(t+1))$ in the replay buffer $\mathcal{D}$. When the number of stored tuples in the replay buffer $\mathcal{D}$ is larger than $I$, the local RSU randomly samples $I$ tuples from $\mathcal{D}$ to form a minibatch. Let $(s^i,a^i,r^i,s'^i)$, $i=1,2,\ldots,I$, be the $i$-th tuple in the minibatch. Each tuple is then input into the prediction network and the target network (lines 3-12). Next, we describe how the parameters of the prediction network $\theta$ are updated. For tuple $i$, the local RSU inputs $s^i$ into the target network, where it goes through the feature layer and outputs its feature. The feature is then input to the state-value layer and the advantage layer, which output the state value function $V'(s^i ; \theta')$ and the action advantage function $A'(s^i, a; \theta')$ under each action $a \in \{0,1\}$, respectively. Thus, the Q-value function of the target network of tuple $i$ under each action $a$ is calculated as \begin{equation} \begin{aligned} &Q'(s^i, a; \theta')=V'(s^i ; \theta')\\ &+\{ A'(s^i, a ; \theta') -\left.\mathbb{E}\left[A'\left(s^i, a ; \theta'\right)\right]\right\}\\ \end{aligned}.
\label{eq23} \end{equation} Then the target Q-value of the target network of tuple $i$ is calculated as \begin{equation} y^i=r^i+\gamma_{D} \max _{a\in\{0,1\} } Q'(s^i, a; \theta'), \label{eq24} \end{equation}where $\gamma_{D}$ is the discount factor. The loss function is calculated as \begin{equation} L(\theta)=\frac{1}{I} \sum_{i=1}^{I}\left[(y^i-Q(s^i, a^i, \theta))^{2}\right]. \label{eq25} \end{equation} The gradient of the loss function $\nabla_{\theta} L(\theta)$ over all sampled tuples is calculated as \begin{equation} \nabla_{\theta} L(\theta)=\frac{1}{I} \sum_{i=1}^{I} [\left(y^i-Q(s^i, a^i, \theta)\right) \nabla_{\theta} Q(s^i, a^i, \theta)]. \label{eq26} \end{equation} At the end of slot $t$, the parameters of the prediction network $\theta$ are updated as \begin{equation} \theta \leftarrow \theta-\eta_{\theta} \nabla_{\theta} L(\theta), \label{eq27} \end{equation}where $\eta_{\theta}$ is the learning rate of the prediction network. This completes the iteration for slot $t$, which is repeated in subsequent slots. During the iterations, the parameters of the target network $\theta'$ are updated every $M$ slots by copying the parameters of the prediction network $\theta$. When the number of slots reaches $N_s$, the episode is finished, and the local RSU randomly caches $c$ contents from the $F_c$ popular contents to start the next episode. When the number of episodes reaches $T_s$, the algorithm terminates (lines 13-22). The flow diagram of the dueling DQN algorithm is shown in Fig. \ref{fig4}. Finally, the local RSU and the neighboring RSU cache popular contents according to the optimal cooperative caching policy, and each vehicle fetches contents from the VEC. The round is finished after every vehicle has fetched its contents, and then the next round is started.
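The per-minibatch update of Eqs. \eqref{eq24}-\eqref{eq27} can be sketched as follows; `q_fn` and `grad_q_fn` are illustrative stand-ins for the dueling network and its gradient, and the target Q-value is evaluated at the next state, as in standard DQN:

```python
import numpy as np

def dqn_update(theta, minibatch, q_fn, grad_q_fn, gamma=0.99, lr=0.01):
    """One gradient step on the prediction-network parameters.
    minibatch holds tuples (s, a, r, s_next); q_fn(s, a, theta) returns
    a Q-value and grad_q_fn(s, a, theta) its gradient w.r.t. theta."""
    grad = np.zeros_like(theta)
    for s, a, r, s_next in minibatch:
        # Target Q-value (cf. Eq. (24)); here theta stands in for the
        # target parameters theta', which are a periodic copy of theta.
        y = r + gamma * max(q_fn(s_next, b, theta) for b in (0, 1))
        # Gradient of the squared TD error for this tuple (Eqs. (25)-(26)).
        grad += (y - q_fn(s, a, theta)) * grad_q_fn(s, a, theta)
    grad /= len(minibatch)
    # Parameter update (Eq. (27)); the sign follows from minimizing L.
    return theta + lr * grad
```

For a linear Q-function, one step moves the predicted Q-value toward the bootstrapped target, which is the behavior the loss in Eq. \eqref{eq25} prescribes.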
\section{Simulation and Analytical Results} \label{sec6} \begin{table} \caption{Values of the parameters in the experiments.} \label{tab2} \footnotesize \centering \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Parameters of System Model}\\ \hline \textbf{Parameter} &\textbf{Value} &\textbf{Parameter} &\textbf{Value}\\ \hline $B$ & $540$ kHz & $K$ &$10$\\ \hline $m$ &$3$ & $p_B$ & $30$ dBm\\ \hline $p_M$ & $43$ dBm & $R_{R,R}$ & $15$ Mbps \\ \hline $s$ &$100$ bytes & $T_{training}$ & $2$s\\ \hline $T_{inference}$ & $0.5$s & $U_{\max}$ &$60$ km/h\\ \hline $U_{\min }$ &$50$ km/h & $\mu$ &$55$ km/h\\ \hline $\sigma$ &$2.5$km/h & $\sigma_{c}^{2}$ & $-114$ dBm\\ \hline \multicolumn{4}{|c|}{Parameters of Asynchronous FL}\\ \hline \textbf{Parameter} &\textbf{Value} &\textbf{Parameter} &\textbf{Value}\\ \hline $L_s$ &$1000$m & $\beta$ & $0.001$\\ \hline $\eta_{l}$ &$0.01$ & $\mu_{1}$ &$0.5$ \\ \hline $\mu_{2}$ &$0.5$ & $\rho$ &$0.0001$\\ \hline \multicolumn{4}{|c|}{Parameters of DRL}\\ \hline \textbf{Parameter} &\textbf{Value} &\textbf{Parameter} &\textbf{Value}\\ \hline $I$ &$32$ & $\gamma_{D}$ & $0.99$\\ \hline $\eta_{\theta}$ &$0.01$ & $\lambda_{1}$ & $0.0001$\\ \hline $\lambda_{2}$ & $0.4$ & $\lambda_{3}$ & $0.5999$\\ \hline \end{tabular} \end{table} In this section, we evaluate the performance of the proposed CAFR scheme. \subsection{Settings and Dataset} We simulate a VEC environment on an urban road as shown in Fig. \ref{fig1}, using Python $3.8$ as the simulation tool. The communications between vehicles and the RSU/MBS employ the 3rd Generation Partnership Project (3GPP) cellular V2X (C-V2X) architecture, where the parameters are set according to the 3GPP standard \cite{3gpp}. The simulation parameters are listed in Table \ref{tab2}. A real-world dataset from the MovieLens website, i.e., MovieLens 1M, is used in the experiments.
MovieLens 1M contains $1,000,209$ rating values for $3,883$ movies from $6,040$ anonymous VUs with movie ratings ranging from $0$ to $1$, where each VU rates at least $20$ movies \cite{Harper2016}. MovieLens 1M also provides personal information about VUs including ID number, gender, age and postcode. We randomly partition the MovieLens 1M dataset among the vehicles as their local data. Each vehicle randomly chooses $99.8\%$ of its local data as its training set and the remaining $0.2\%$ as its testing set. For each round, each vehicle randomly samples a part of the movies from its testing set as its requested contents. \subsection{Performance Evaluation} We use the cache hit ratio and the content transmission delay as performance metrics to evaluate the CAFR scheme. The cache hit ratio is defined as the probability of fetching requested contents from the local RSU \cite{Muller2017}. If a requested content is cached in the local RSU, it can be fetched directly from the local RSU, which is referred to as a cache hit; otherwise, it is referred to as a cache miss. Thus, the cache hit ratio is calculated as \begin{equation} \text {cache hit ratio}=\frac{\text {cache hits }}{\text {cache hits }+\text {cache misses }}\times 100\%. \label{eq28} \end{equation} The content transmission delay indicates the average delay for all vehicles to fetch contents, which is calculated as \begin{equation} \text {content transmission delay}=\frac{D^{\text {total}}}{\text {the number of vehicles }}, \label{eq29} \end{equation} where $D^{\text {total}}$ is the delay for all vehicles to fetch contents, calculated by aggregating the content transmission delay of every vehicle. We compare the CAFR scheme with the following baseline schemes: \begin{itemize} \item Random: Randomly selecting $c$ contents from all contents to cache in the local and neighboring RSUs.
\item c-$\epsilon$-greedy: Selecting the $c$ most requested contents with probability $1-\epsilon$ and selecting $c$ contents randomly with probability $\epsilon$ to cache in the local RSU. In our simulation, $\epsilon= 0.1$. \item Thompson sampling: For each round, the contents cached in the local RSU are updated based on the numbers of cache hits and cache misses in the previous round \cite{Cui2020}, and the $c$ contents with the highest values are selected to cache in the local RSU. \item FedAVG: Federated averaging (FedAVG) is a typical synchronous FL scheme where the local RSU needs to wait for all local model updates to update its global model according to the weighted average method: \begin{equation} \omega^{r}=\sum_{i=1}^{N^r} \frac {d^r_i}{d^r} \omega^{r}_{i}. \label{eq30} \end{equation} \item CAFR without DRL: Compared with the CAFR scheme, this scheme does not adopt the DRL algorithm to optimize the caching. Specifically, after predicting the popular contents, $c$ contents are randomly selected from the predicted popular contents to cache in the local RSU and the neighboring RSU, respectively. \end{itemize} \begin{figure} \center \includegraphics[scale=0.5]{method_ce_vs_cs-eps-converted-to.pdf} \caption{Cache hit ratio under different cache capacities} \label{fig5} \end{figure} We now evaluate the performance of the CAFR scheme through simulation experiments. In the following performance evaluation, each result is the average value of five experiments. Fig. \ref{fig5} shows the cache hit ratio of different schemes under different cache capacities of each RSU, where the result of CAFR is obtained when the vehicle density is $15$ vehicles/km (i.e., the number of vehicles is 15 per kilometer), and the results of the other schemes are independent of the vehicle density. It can be seen that the cache hit ratio of all schemes increases with a larger capacity.
This is because the local RSU caches more contents with a larger capacity, and thus the requested contents of vehicles are more likely to be fetched from the local RSU. Moreover, the random scheme provides the worst cache hit ratio, because it simply selects contents randomly without considering the content popularity. In addition, CAFR and c-$\epsilon$-greedy outperform the random and Thompson sampling schemes. This is because the random and Thompson sampling schemes do not predict the caching contents through learning, whereas CAFR and c-$\epsilon$-greedy decide the caching contents by observing the historically requested contents. Furthermore, CAFR outperforms c-$\epsilon$-greedy. This is because CAFR captures useful hidden features from the data to accurately predict the popular contents. \begin{figure} \center \includegraphics[scale=0.5]{method_rd_vs_cs-eps-converted-to.pdf} \caption{Content transmission delay under different cache capacities} \label{fig6} \end{figure} Fig. \ref{fig6} shows the content transmission delay of different schemes under different cache capacities of each RSU, where the vehicle density is $15$ vehicles/km. It is seen that the content transmission delays of all schemes decrease as the cache capacity increases. This is because each RSU caches more contents as the cache capacity increases, and each vehicle fetches contents from the local RSU and the neighboring RSU with a higher probability, thus reducing the content transmission delay. Moreover, the content transmission delay of CAFR is smaller than those of the other schemes. This is because the cache hit ratio of CAFR is better than those of the other schemes, and more vehicles can fetch contents from the local RSU directly, thus reducing the content transmission delay. \begin{figure} \center \includegraphics[scale=0.5]{vs_vd-eps-converted-to.pdf} \caption{Cache hit ratio and content transmission delay under different vehicle densities} \label{fig7} \end{figure} Fig.
\ref{fig7} shows the cache hit ratio and the content transmission delay of the CAFR scheme under different vehicle densities when the cache capacity of each RSU is $100$. As shown in this figure, the cache hit ratio increases as the vehicle density increases. This is because when more vehicles enter the coverage area of the RSU, the global model of the local RSU is trained on more data and thus predicts more accurately. In addition, the content transmission delay decreases as the vehicle density increases. This is because the cache hit ratio increases with the vehicle density, which enables more vehicles to fetch contents directly from the local RSU. \begin{figure} \center \includegraphics[scale=0.5]{asy_syn_ce-eps-converted-to.pdf} \caption{Cache hit ratio of CAFR and FedAVG} \label{fig8} \end{figure} Fig. \ref{fig8} compares the cache hit ratio of the CAFR scheme and the FedAVG scheme under different rounds when the vehicle density is $15$ vehicles/km and the cache capacity of each RSU is $100$ contents. It can be seen that the cache hit ratio of CAFR fluctuates between $22.5\%$ and $24\%$ within $30$ rounds, while the cache hit ratio of the FedAVG scheme fluctuates between $22\%$ and $23.5\%$ within $30$ rounds. This indicates that the CAFR scheme is slightly better than the FedAVG scheme. This is because the CAFR scheme considers the vehicles' mobility characteristics, including positions and velocities, to select vehicles and aggregate the local models, thus improving the accuracy of the global model. \begin{figure} \center \includegraphics[scale=0.5]{asy_syn_tt-eps-converted-to.pdf} \caption{Training time of CAFR and FedAVG} \label{fig9} \end{figure} Fig. \ref{fig9} shows the training time of the CAFR and FedAVG schemes for each round when the vehicle density is $15$ vehicles/km and the cache capacity of each RSU is $100$ contents.
It can be seen that the training time of the CAFR scheme for each round is between $1$s and $2$s, while the training time of the FedAVG scheme for each round is between $22$s and $24$s. This indicates that the CAFR scheme has a much smaller training time than the FedAVG scheme. This is because the FedAVG scheme needs to aggregate all vehicles' local models for the global model updating in each round, while the CAFR scheme aggregates as soon as a single vehicle's local model is received in each round. \begin{figure} \center \includegraphics[scale=0.5]{ce_rd_episode-eps-converted-to.pdf} \caption{Cache hit ratio and content transmission delay of each episode in the DRL} \label{fig10} \end{figure} Fig. \ref{fig10} shows the cache hit ratio and content transmission delay of each episode in the DRL of the CAFR scheme when the vehicle density is $15$ vehicles/km and the cache capacity of each RSU is $100$. As the episode index increases, the cache hit ratio gradually increases and the content transmission delay gradually decreases in the first ten episodes. This is because the local RSU and the neighboring RSU gradually cache appropriate popular contents in the first ten episodes. In addition, the cache hit ratio and content transmission delay converge at around episode $10$. This is because the local RSU is able to learn the optimal cooperative caching policy in around $10$ episodes. \begin{figure} \center \includegraphics[scale=0.5]{rl_vs_ce_cs-eps-converted-to.pdf} \caption{Cache hit ratio of CAFR and CAFR without DRL under different cache capacities} \label{fig11} \end{figure} Fig. \ref{fig11} compares the cache hit ratio of the CAFR scheme with that of the CAFR scheme without DRL under different cache capacities of each RSU when the vehicle density is $15$ vehicles/km. As shown in Fig. \ref{fig11}, the cache hit ratio of CAFR outperforms that of CAFR without DRL.
This is because DRL can determine the optimal cooperative caching according to the predicted popular contents, and thus more suitable popular contents can be cached in the local RSU. \begin{figure} \center \includegraphics[scale=0.5]{rl_vs_rd_cs-eps-converted-to.pdf} \caption{Content transmission delay of CAFR and CAFR without DRL under different cache capacities} \label{fig12} \end{figure} Fig. \ref{fig12} compares the content transmission delay of the CAFR scheme with that of the CAFR scheme without DRL under different cache capacities of each RSU when the vehicle density is $15$ vehicles/km. As shown in Fig. \ref{fig12}, the content transmission delay of CAFR is less than that of CAFR without DRL. This is because the cache hit ratio of CAFR outperforms that of CAFR without DRL, and more vehicles can fetch contents from the local RSU directly. \section{Conclusions} \label{sec7} In this paper, we considered vehicle mobility and proposed a cooperative caching scheme, CAFR, to reduce the content transmission delay and improve the cache hit ratio. We first proposed an asynchronous FL algorithm to obtain an accurate global model, and then proposed an algorithm to predict the popular contents based on the global model. Afterwards, we proposed a cooperative caching scheme to minimize the content transmission delay based on the dueling DQN algorithm. Simulation results have demonstrated that the CAFR scheme outperforms other baseline caching schemes. According to the theoretical analysis and simulation results, the conclusions can be summarized as follows: \begin{itemize} \item The CAFR scheme can learn from the local data of vehicles to capture useful hidden features and accurately predict the popular contents. \item CAFR greatly reduces the training time of each round by aggregating the local model of a single vehicle in each round.
In addition, CAFR considers vehicles' mobility characteristics including the positions and velocities to select vehicles and aggregate the local model, which can improve the accuracy of the training model. \item The DRL in the CAFR scheme determines the optimal cooperative caching policy according to the predicted popular contents, and thus more suitable popular contents are cached in the local RSU and neighboring RSU to reduce the content transmission delay. \end{itemize} \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction} \subsubsection{Derandomized LWE.} The learning with errors (LWE) problem~\cite{Reg[05]} is at the basis of multiple cryptographic constructions~\cite{Peikert[16],Hamid[19]}. Informally, LWE requires solving a system of `approximate' linear modular equations. Given positive integers $w$ and $q \geq 2$, an LWE sample is defined as: $(\textbf{a}, b = \langle \textbf{a}, \textbf{s} \rangle + e \bmod q)$, where $\textbf{s} \in \mathbb{Z}^w_q$ and $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}^w_q$. The error term $e$ is sampled randomly, typically from a normal distribution with standard deviation $\alpha q$ where $\alpha = 1/\poly(w)$, after which it is rounded to the nearest integer and reduced modulo $q$. Banerjee et al.~\cite{Ban[12]} introduced a derandomized variant of LWE, called learning with rounding (LWR), wherein instead of adding a random small error, a \textit{deterministically} rounded version of the sample is announced. Specifically, for some positive integer $p < q$, the elements of $\mathbb{Z}_q$ are divided into $p$ contiguous intervals containing (roughly) $q/p$ elements each. The rounding function, defined as: $\lfloor \cdot \rceil_p: \mathbb{Z}_q \rightarrow \mathbb{Z}_p$, maps a given input $x \in \mathbb{Z}_q$ to the index of the interval that $x$ belongs to. An LWR instance is generated as: $(\textbf{a}, \lfloor \langle \textbf{a}, \textbf{s} \rangle \rceil_p)$ for vectors $\textbf{s} \in \mathbb{Z}_q^w$ and $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}_q^w$. For certain ranges of parameters, Banerjee et al. proved the hardness of LWR under the LWE assumption. In this work, we propose a new derandomized variant of LWE, called learning with linear regression (LWLR). We reduce the hardness of LWLR to that of LWE for certain choices of parameters. \subsubsection{Physical Layer Communications and Shared Secret Extraction.} In the OSI model, the physical layer consists of the fundamental hardware transmission technologies.
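The LWR rounding step described above can be illustrated with a toy sketch (the parameters here are tiny and purely illustrative; real instantiations use cryptographically sized $q$, $p$, and $w$):

```python
import random

def round_to_p(x, q, p):
    """The rounding function from Z_q to Z_p: maps x to the index of the
    contiguous interval of (roughly) q/p elements that x falls in."""
    return (x * p) // q

def lwr_sample(s, q, p):
    """One LWR sample (a, round(<a, s>)_p) for a secret s in Z_q^w,
    with a drawn uniformly at random from Z_q^w."""
    a = [random.randrange(q) for _ in s]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    return a, round_to_p(inner, q, p)
```

For example, with $q=16$ and $p=4$, the inputs $0,\ldots,3$ map to interval $0$, the inputs $4,\ldots,7$ map to interval $1$, and so on; no random error is added, which is precisely the derandomization.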
It provides the electrical, mechanical, and procedural interface to the transmission medium. Physical layer communication between parties has certain inherent characteristics that make it an attractive source of renewable and shared secrecy. Multiple methods to extract secret bits from channel measurements have been explored~\cite{Prem[13],Xiao[08],Zhang[08],Zeng[15],Kepe[15],Jiang[13],Ye[10],Ye[06],Ye[07]}; the papers~\cite{Sheh[15],Poor[17]} survey the notable results in the area. It follows from \textit{channel reciprocity} that the two nodes of a physical layer channel obtain identical channel state information. Secrecy of this information follows directly from the \textit{spatial decorrelation} property. Specifically, the channel reciprocity property implies that the signal distortion (attenuation, delay, phase shift, and fading) is identical in both directions of a link, while the spatial decorrelation property means that in rich scattering environments, receivers located a few wavelengths apart experience uncorrelated channels; this ensures that an eavesdropper separated by at least half a wavelength from the communicating nodes experiences a different channel, and hence makes inaccurate measurements. Both of these properties have been demonstrated to hold in practice \cite{MarPao[14],ZenZimm[15]}. In this work, we use these two properties to securely generate sufficiently independent yet deterministic errors to derandomize LWE. Specifically, we use the Gaussian errors occurring in physical layer communications to generate special linear regression models that are later used to derandomize LWE. \subsubsection{Rounded Gaussians.} Using discrete Gaussian elements to hide secrets is a common approach in lattice-based cryptography. The majority of digital methods for generating Gaussian random variables are based on transformations of uniform random variables~\cite{Knuth[97]}.
Popular methods include Ziggurat~\cite{Zigg[00]}, inversion~\cite{Invert[03]}, Wallace~\cite{Wallace[96]}, and Box-Muller~\cite{Box[58]}. Discrete Gaussians can also be sampled by drawing from a continuous Gaussian distribution and rounding the coordinates to nearby integers~\cite{Pie[10],Box[58],Hul[17]}. Using such rounded Gaussians can lead to better efficiency and, in some cases, better security guarantees for lattice-based cryptographic protocols~\cite{Hul[17]}. In our work, we use rounded Gaussian errors that are derived from continuous Gaussians. \subsubsection{Key-homomorphic PRFs.} In a pseudorandom function (PRF) family~\cite{Gold[86]}, each function is specified by a key such that it can be evaluated deterministically given the key but behaves like a random function without the key. For a PRF $F_k$, the index $k$ is called its key or seed. A PRF family $F$ is called key-homomorphic if the set of keys has a group structure and if there is an efficient algorithm that, given $F_{k_1}(x)$ and $F_{k_2}(x)$, outputs $F_{k_1 \oplus k_2}(x)$, where $\oplus$ is the group operation~\cite{Naor[99]}. Multiple key-homomorphic PRF families have been constructed via varying approaches~\cite{Naor[99],Boneh[13],Ban[14],Parra[16],SamK[20],Navid[20]}. In this work, we introduce and construct an extended version of key-homomorphic PRFs, called star-specific key-homomorphic (SSKH) PRFs, which are defined for settings wherein the parties constructing the PRFs form an interconnection network that can be (re)arranged as a graph comprising only (undirected) star graphs with restricted vertex intersections. An undirected star graph $S_k$ can be defined as a tree with one internal node and $k$ leaves. \Cref{FigStar} depicts an example star graph, $S_7$, with seven leaves. \begin{figure}[h!] \centering \stargraph{7}{2} \caption{An Example Star Graph, $S_7$}\label{FigStar} \end{figure} Henceforth, we use the terms star and star graph interchangeably.
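The key-homomorphic property can be illustrated with a toy linear construction over $\mathbb{Z}_q$ (a sketch only, not a secure PRF; the hash $H$ stands in for a public random function, and the modulus is illustrative):

```python
import hashlib

Q = 65537  # toy modulus, illustrative only

def H(x, w=4):
    """Public hash of the input x into Z_Q^w (stands in for a random
    function; SHA-256 is used here only for determinism)."""
    return [int.from_bytes(hashlib.sha256(f"{x}|{i}".encode()).digest(),
                           "big") % Q for i in range(w)]

def F(k, x):
    """F_k(x) = <H(x), k> mod Q.  Linearity in k gives the homomorphism
    F_{k1 + k2}(x) = F_{k1}(x) + F_{k2}(x) (mod Q)."""
    return sum(hi * ki for hi, ki in zip(H(x), k)) % Q
```

Evaluating $F$ at a combined key thus requires only the two individual evaluations, which is the defining property of a key-homomorphic family.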
\subsubsection{Cover-free Families with Restricted Intersections.} Cover-free families were first defined by Kautz and Singleton~\cite{Kautz[64]} in 1964 --- as superimposed binary codes. They were motivated by investigating binary codes wherein the disjunction of any at most $r~(\geq 2)$ codewords is distinct. In the early 1980s, cover-free families were studied in the context of group testing \cite{BushFed[84]} and information theory \cite{Ryk[82]}. Erd\H{o}s et al. called the corresponding set systems $r$-cover-free and studied their cardinality for $r=2$~\cite{PaulFrankl[82]} and $r < n$~\cite{PaulFrankl[85]}. \begin{definition}[$r$-cover-free Families~\cite{PaulFrankl[82],PaulFrankl[85]}] \emph{We say that a family of sets $\mathcal{H} = \{H_i\}_{i=1}^\alpha$ is $r$-cover-free for some integer $r < \alpha$ if there exists no $H_i \in \mathcal{H}$ such that: \[H_i \subseteq \bigcup_{H_j \in \mathcal{H}^{(r)}} H_j, \] where $\mathcal{H}^{(r)} \subset \mathcal{H}$ is some subset of $\mathcal{H}$ with cardinality $r$.} \end{definition} Cover-free families have found many applications in cryptography and communications, including blacklisting~\cite{RaviSri[99]}, broadcast encryption~\cite{CanGara[99],Garay[00],DougR[97],DougRTran[98]}, anti-jamming~\cite{YvoSafavi[99]}, source authentication in networks~\cite{Safavi[99]}, group key predistribution~\cite{ChrisFred[88],Dyer[95],DougRTran[98],DougRTranWei[00]}, compression schemes \cite{Thasis[19]}, fault-tolerant signatures \cite{Gunnar[16],Bardini[21]}, frameproof/traceability codes~\cite{Staddon[01],WeiDoug[98]}, traitor tracing \cite{DonTon[06]}, batch signature verification \cite{Zave[09]}, and one-time and multiple-times digital signature schemes \cite{Josef[03],GM[11]}. In this work, we initiate the study of new variants of $r$-cover-free families. The motivation behind exploring this direction is to compute the maximum number of SSKH PRFs that can be constructed by overlapping sets of parties.
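For small set systems, the $r$-cover-free condition can be checked directly by exhausting over $r$-subsets (a brute-force sketch; `is_r_cover_free` is an illustrative name):

```python
from itertools import combinations

def is_r_cover_free(family, r):
    """Return True iff no set in the family is covered by the union of
    any r of the other sets."""
    sets = [frozenset(h) for h in family]
    for i, h in enumerate(sets):
        others = sets[:i] + sets[i + 1:]
        for group in combinations(others, r):
            if h <= frozenset().union(*group):
                return False
    return True
```

For instance, three pairwise-disjoint pairs form a $2$-cover-free family, whereas the three $2$-subsets of $\{1,2,3\}$ do not, since each is covered by the union of the other two.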
We prove various bounds on these novel variants of $r$-cover-free families and later use them to establish the maximum number of SSKH PRFs that can be constructed by overlapping sets of parties in the presence of active/passive and internal/external adversaries. \subsection{Our Contributions} \subsubsection{Cryptographic Contributions.} We know that physical layer communications over Gaussian channels introduce independent Gaussian errors. Therefore, it is natural to ask whether we can use some processed form of those Gaussian errors to generate deterministic --- yet sufficiently independent --- errors to derandomize LWE. Such an ability would have direct applications to use-cases wherein LWR is used to realize derandomized LWE. Our algorithm to derandomize LWE uses physical layer communications as the training data for linear regression analysis, whose (optimal) hypothesis is used to generate deterministic errors belonging to a truncated Gaussian distribution. We round the resulting errors to the nearest integer, hence moving to a rounded Gaussian distribution, which is reduced modulo the LWE modulus to generate the desired errors. It is worth mentioning that many hardness proofs for LWE, including Regev's initial proof~\cite{Reg[05]}, used an analogous approach --- but without the linear regression component --- to sample random ``LWE errors''~\cite{Reg[05],Albre[13],Gold[10],Duc[15],Hul[17]}. We call our derandomized variant of LWE learning with linear regression (LWLR). Under certain parameter choices, we prove that LWLR is as hard as LWE. After establishing the validity of our idea theoretically, we verify its practicality via experiments. We introduce a new class of PRFs, called star-specific key-homomorphic (SSKH) PRFs, which are key-homomorphic PRFs that are defined by the sets of parties that construct them.
In our construction, the sets of parties are arranged as star graphs wherein the leaves represent the parties and the edges denote communication channels between them. For instance, an SSKH PRF $F^{(\partial_i)}_k$ is unique to the set/star of parties, $\partial_i$, that constructs it, i.e., $\forall i \neq j: F^{(\partial_i)}_k \neq F^{(\partial_j)}_k$. As an example application of LWLR, we replace LWR with LWLR in the LWR-based key-homomorphic PRF construction from \cite{Ban[14]} to construct the first SSKH PRF family. Due to their conflicting goals, statistical inference and cryptography are almost duals of each other: given some data, statistical inference aims to identify the distribution that the data belong to, whereas in cryptography, the central aim is to design a distribution that is hard to predict. Interestingly, in our work, we use statistical inference to construct a novel cryptographic tool. In addition to all known applications of key-homomorphic PRFs --- as given in \cite{Boneh[13],Miranda[21]} --- our SSKH PRF family also allows collaborating parties to securely generate a pseudorandom nonce/seed without relying on any pre-provisioned secrets. \subsubsection{Mutual Information between Linear Regression Models.} To quantify the relation between different SSKH PRFs, we examine the mutual information between linear regression hypotheses that are generated via (training) datasets with overlapping data points. Higher mutual information translates into a stronger relation between the corresponding SSKH PRFs that are generated by using those linear regression hypotheses. The following text summarizes the main result that we prove in this context. Suppose, for $i=1,2,\ldots,\ell$, we have: $$y_i\sim \mathcal{N}(\alpha+\beta x_i,\sigma^2)\quad\text{and}\quad z_i\sim \mathcal{N}(\alpha+\beta w_i,\sigma^2),$$ with $x_i=w_i$ for $i=1,\ldots,a$.
Let $h_1(x)=\hat{\alpha}_1 x+\hat{\beta}_1$ and $h_2(w)=\hat{\alpha}_2 w+\hat{\beta}_2$ be the linear regression hypotheses obtained from the samples $(x_i,y_i)$ and $(w_i,z_i)$, respectively. \begin{theorem}\label{MutualThm} The mutual information between $(\hat{\alpha_1},\hat{\beta_1})$ and $(\hat{\alpha_2},\hat{\beta_2})$ is: \begin{align*} &\ -\frac{1}{2}\log\left(1-\frac{\left(\ell C_2-2C_1X_1+aX_2\right)\left(\ell C_2-2C_1W_1+aW_2\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right. \\ &\qquad\left.+\frac{\left((a-1)C_2-C_3\right)\left((a-1)C_2-C_3+\ell(X_2+W_2)-2X_1W_1\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right), \end{align*} where $X_1=\sum_{i=1}^{\ell} x_i$, $X_2=\sum_{i=1}^{\ell} x_i^2$, $W_1=\sum_{i=1}^{\ell} w_i$, $W_2=\sum_{i=1}^{\ell} w_i^2$, $C_1=\sum_{i=1}^a x_i=\sum_{i=1}^a w_i$, $C_2=\sum_{i=1}^a x_i^2=\sum_{i=1}^a w_i^2$ and $C_3=\sum_{i=1}^{\ell}\sum_{j=1,j\neq i}^{\ell}x_ix_j$. \end{theorem} \subsubsection{New Bounds on $t$-intersection Maximally Cover Free Families.} Since we use physical layer communications to generate deterministic LWE instances, a large enough overlap among different sets of devices can lead to reduced collective and conditional entropy for the different SSKH PRFs constructed by those sets. We say that a set system $\mathcal{H}$ is (i) $k$-uniform if: $\forall A \in \mathcal{H}: |A| = k$, (ii) at most $t$-intersecting if: $\forall A, B \in \mathcal{H}, B \neq A: |A \cap B| \leq t$. \begin{definition}[Maximally Cover-free Families] \emph{A family of sets $\mathcal{H}$ is \textit{maximally} cover-free if it holds that: \[\forall A \in \mathcal{H}: A \not\subseteq \bigcup\limits_{\substack{B \in \mathcal{H} \\ B \neq A}} B.\]} \end{definition} It follows trivially that if the sets of parties --- each of which is arranged as a star --- belong to a maximally cover-free family, then no SSKH PRF can have zero conditional entropy since each set of parties must have at least one member/device that is exclusive to it. 
We know from \Cref{MutualThm} that based on the overlap in training data, we can compute the mutual information between different linear regression hypotheses. Since the training dataset for a set of parties performing linear regression analysis is simply their mutual communications, it follows that the mutual information between any two SSKH PRFs increases with the overlap between the sets of parties that construct them. Hence, given a maximum mutual information threshold, \Cref{MutualThm} can be used to compute the maximum acceptable overlap between different sets of parties. To establish the maximum number of SSKH PRFs that can be constructed by such overlapping sets, we revisited cover-free families. Based on our requirements, we focused on the following two cases: \begin{itemize} \item $\mathcal{H}$ is at most $t$-intersecting and $k$-uniform, \item $\mathcal{H}$ is maximally cover-free, at most $t$-intersecting and $k$-uniform. \end{itemize} We derive multiple bounds on the size of $\mathcal{H}$ for both these cases and later use them to establish the maximum number of SSKH PRFs that can be constructed securely against active/passive and internal/external adversaries. We mention our central results here. \begin{theorem}\label{MainThm1} Let $k,t\in\mathbb{Z}^+$, and $C<1$ be any positive real number. \begin{enumerate}[label = {(\roman*)}] \item Suppose $t<k-1$. Then, for all sufficiently large $N$, the maximum size $\nu(N,k,t)$ of a maximally cover-free, at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[N]}$ satisfies $$CN\leq\nu(N,k,t)<N.$$ \item Suppose $t<k$. Then, for all sufficiently large $n$, the maximum size $\varpi(n,k,t)$ of an at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[n]}$ satisfies $$\frac{Cn^{t+1}}{k(k-1)\cdots(k-t)}\leq\varpi(n,k,t)<\frac{n^{t+1}}{k(k-1)\cdots(k-t)}.$$ \end{enumerate} In particular, $\nu(N,k,t)\sim N$ and $\varpi(n,k,t)\sim\frac{n^{t+1}}{k(k-1)\cdots(k-t)}$. 
\end{theorem} We also provide an explicit construction for at most $t$-intersecting and $k$-uniform set systems. \subsubsection{Maximum Number of SSKH PRFs.} We use the results from \Cref{MainThm1,MutualThm} to derive the maximum number, $\zeta$, of SSKH PRFs that can be constructed against various adversaries (modeled as probabilistic polynomial-time Turing machines). In outline, we prove the following: \begin{itemize} \item For an external/eavesdropping adversary with oracle access to the SSKH PRF family, we get: \[\zeta \sim \dfrac{n^k}{k!}.\] \item For non-colluding semi-honest parties, we get: \[\zeta \geq Cn,\] where $C<1$ is a positive real number. \end{itemize} We also establish the ineffectiveness of the man-in-the-middle attack against our SSKH PRF construction. \subsection{Organization} The rest of the paper is organized as follows: Section~\ref{Sec2} recalls the concepts and constructs that are relevant to our solutions and constructions. Section~\ref{Sec3} reviews the related work. Section~\ref{Sec4} gives a formal definition of SSKH PRFs. We prove various bounds on (maximally cover-free) at most $t$-intersecting $k$-uniform families in \Cref{Extremal}. In Section~\ref{Sec6}, we present our protocol for generating rounded Gaussian errors from physical layer communications. The section also discusses the implementation, simulation, test results, error analysis, and complexity for our protocol. In \Cref{Mutual}, we analyze the mutual information between different linear regression hypotheses that are generated over overlapping training datasets. In \Cref{LWLRsec}, we define LWLR and generate LWLR instances. In the same section, we reduce the hardness of LWLR to that of LWE. In Section~\ref{Sec7}, we use LWLR to adapt the key-homomorphic PRF construction from~\cite{Ban[14]} to construct the first star-specific key-homomorphic PRF family, and prove its security under the hardness of LWLR (and hence LWE).
In the same section, we use our results from \Cref{Extremal,Mutual} to establish the maximum number of SSKH PRFs that can be constructed by a given set of parties in the presence of active/passive and external/internal adversaries. Section~\ref{Sec8} gives the conclusion. \section{Preliminaries}\label{Sec2} For a positive integer $n$, let $[n] = \{1, \dots, n\}$. As mentioned earlier, we use the terms star and star graph interchangeably. For a vector $\mathbf{v}= (v_1, v_2, \ldots, v_w) \in \mathbb{R}^w$, the Euclidean and infinity norms are defined as: $||\mathbf{v}|| = \sqrt{\sum_{i=1}^w v_i^2}$ and $||\mathbf{v}||_\infty = \max(|v_1|, |v_2|, \ldots, |v_w|),$ respectively. In this text, vectors and matrices are denoted by bold lower case letters and bold upper case letters, respectively. \subsection{Entropy} The concept of entropy was originally introduced as a thermodynamic construct by Rankine in 1850 \cite{True[80]}. It was later adapted to information theory by Shannon \cite{Shannon[48]}, where it denotes a measure of the uncertainty associated with a random variable; i.e., (information) entropy is a measure of the average information content that is missing when the value of a random variable is not known. \begin{definition} \emph{For a finite set $S = \{s_1, s_2, \ldots, s_n\}$ with probabilities $p_1, p_2, \ldots, p_n$, the entropy of the probability distribution over $S$ is defined as: \[H(S) = \sum\limits_{i=1}^n p_i \log \dfrac{1}{p_i}. \]} \end{definition} \subsection{Lattices} A lattice $\mathrm{\Lambda}$ of $\mathbb{R}^w$ is defined as a discrete subgroup of $\mathbb{R}^w$. In cryptography, we are interested in integer lattices, i.e., $\mathrm{\Lambda} \subseteq \mathbb{Z}^w$. Given $w$ linearly independent vectors $\textbf{b}_1,\dots,\textbf{b}_w \in \mathbb{R}^w$, a basis of the lattice generated by them can be represented as the matrix $\mathbf{B} = (\textbf{b}_1,\dots,\textbf{b}_w) \in \mathbb{R}^{w \times w}$.
The lattice generated by $\mathbf{B}$ is the following set of vectors: \[\mathrm{\Lambda}(\textbf{B}) = \left\{ \sum\limits_{i=1}^w c_i \textbf{b}_i: c_i \in \mathbb{Z} \right\}.\] Historically, lattices received attention from illustrious mathematicians, including Lagrange, Gauss, Dirichlet, Hermite, Korkine-Zolotareff, and Minkowski (see \cite{Laga[73],Gauss[81],Herm[50],Kork[73],Mink[10],JacSte[98]}). Problems in lattices have been of interest to cryptographers since 1997, when Ajtai and Dwork~\cite{Ajtai[97]} proposed a lattice-based public key cryptosystem following Ajtai's~\cite{Ajtai[96]} seminal worst-case to average-case reductions for lattice problems. In lattice-based cryptography, \textit{q-ary} lattices are of particular interest; they satisfy the following condition: $$q \mathbb{Z}^w \subseteq \mathrm{\Lambda} \subseteq \mathbb{Z}^w,$$ for some (possibly prime) integer $q$. In other words, the membership of a vector $\textbf{x}$ in $\mathrm{\Lambda}$ is determined by $\textbf{x}\bmod q$. Given a matrix $\textbf{A} \in \mathbb{Z}^{w \times n}_q$ for some integers $q, w, n,$ we can define the following two $n$-dimensional \textit{q-ary} lattices, \[\mathrm{\Lambda}_q(\textbf{A}) = \{\textbf{y} \in \mathbb{Z}^n: \textbf{y} = \textbf{A}^T\textbf{s} \bmod q \text{ for some } \textbf{s} \in \mathbb{Z}^w \}, \] \[\hspace{-28mm} \mathrm{\Lambda}_q^{\perp}(\textbf{A}) = \{\textbf{y} \in \mathbb{Z}^n: \textbf{Ay} = \textbf{0} \bmod q \}.\] The first \textit{q-ary} lattice is generated by the rows of $\textbf{A}$; the second contains all vectors that are orthogonal (modulo $q$) to the rows of $\textbf{A}$. Hence, the first \textit{q-ary} lattice, $\mathrm{\Lambda}_q(\textbf{A})$, corresponds to the code generated by the rows of $\textbf{A}$ whereas the second, $\mathrm{\Lambda}_q^{\perp}(\textbf{A})$, corresponds to the code whose parity check matrix is $\textbf{A}$. 
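As a toy illustration of the two $q$-ary lattices (parameters chosen arbitrarily by us; the brute-force search over $\textbf{s}$ is only feasible in such tiny dimensions), membership reduces to modular linear algebra:

```python
from itertools import product

q = 7
A = [[1, 2, 3],
     [2, 0, 5]]  # A in Z_q^{w x n} with w = 2, n = 3

def in_lambda_q(y, A, q):
    # y is in Lambda_q(A) iff y = A^T s (mod q) for some s in Z_q^w
    w, n = len(A), len(A[0])
    return any(
        all(sum(A[i][j] * s[i] for i in range(w)) % q == y[j] % q for j in range(n))
        for s in product(range(q), repeat=w)
    )

def in_lambda_q_perp(y, A, q):
    # y is in Lambda_q^perp(A) iff A y = 0 (mod q)
    return all(sum(row[j] * y[j] for j in range(len(y))) % q == 0 for row in A)

print(in_lambda_q([3, 2, 1], A, q))       # True: take s = (1, 1)
print(in_lambda_q_perp([1, 5, 1], A, q))  # True: both rows vanish mod 7
```

Note that both lattices contain $q\mathbb{Z}^n$, matching the defining condition $q\mathbb{Z}^w \subseteq \mathrm{\Lambda} \subseteq \mathbb{Z}^w$.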
For a complete introduction to lattices, we refer the interested reader to the monographs by Gr\"{a}tzer~\cite{Gratzer[03],Gratzer[09]}. \subsection{Gaussian Distributions} Gaussian sampling is an extremely useful tool in lattice-based cryptography. Introduced by Gentry et al. \cite{Gentry[08]}, Gaussian sampling takes a short basis $\textbf{B}$ of a lattice $\mathrm{\Lambda}$ and an arbitrary point $\textbf{v}$ as inputs, and outputs a point from a Gaussian distribution discretized on the lattice points and centered at $\textbf{v}$. Gaussian sampling does not leak any information about the lattice $\mathrm{\Lambda}$. It has been used directly to construct multiple cryptographic schemes, including hierarchical identity-based encryption \cite{AgDan[10],CashDenn[10]}, standard model signatures \cite{AgDan[10],XavBoy[10]}, and attribute-based encryption \cite{DanSer[14]}. The Gaussian sampling/distribution also plays an important role in other hard lattice problems, such as learning single periodic neurons \cite{SongZa[21]}, and has direct connections to standard lattice problems \cite{DivDan[15],NoDan[15],SteDavid[15]}. \begin{definition} \emph{A continuous Gaussian distribution, $\mathcal{N}^w(\textbf{v},\sigma^2)$, over $\mathbb{R}^w$, centered at some $\mathbf{v} \in \mathbb{R}^w$ with standard deviation $\sigma$ is defined for $\textbf{x} \in \mathbb{R}^w$ as the following density function: \[\mathcal{N}_\textbf{x}^w(\textbf{v},\sigma^2) = \left( \dfrac{1}{\sqrt{2 \pi \sigma^2}} \right)^w \exp \left(\frac{-||\textbf{x} - \textbf{v}||^2}{2 \sigma^2}\right). \]} \end{definition} A rounded Gaussian distribution can be obtained by simply rounding the samples from a continuous Gaussian distribution to their nearest integers. Rounded Gaussians have been used to establish the hardness of LWE \cite{Reg[05],Albre[13],Gold[10],Duc[15]} --- albeit not as frequently as discrete Gaussians.
\begin{definition}[Adapted from \cite{Hul[17]}]\label{roundGauss} \emph{A rounded Gaussian distribution, $\mathrm{\Psi}^w(\textbf{v},\hat{\sigma}^2)$, over $\mathbb{Z}^w$, centered at some $\textbf{v} \in \mathbb{Z}^w$ with parameter $\sigma$ is defined for $\textbf{x} \in \mathbb{Z}^w$ as: \[\mathrm{\Psi}^w_\textbf{x}(\textbf{v},\hat{\sigma}^2) = \int_{A_\textbf{x}} \mathcal{N}_\textbf{s}^w(\textbf{v},\sigma^2)\,d\textbf{s} = \int_{A_\textbf{x}} \left( \dfrac{1}{\sqrt{2 \pi \sigma^2}} \right)^w \exp\left( \dfrac{-||\textbf{s} - \textbf{v}||^2}{2 \sigma^2} \right)\,d\textbf{s}, \] where $A_\textbf{x}$ denotes the region $\prod_{i=1}^{w} [x_i - \frac{1}{2}, x_i + \frac{1}{2})$; $\hat{\sigma}$ and $\sigma$ are the standard deviations of the rounded Gaussian and its underlying continuous Gaussian, respectively, such that: $\hat{\sigma} = \sqrt{\sigma^2 + 1/12}$.} \end{definition} \begin{definition}[Gaussian channel]\label{Gauss} \emph{A Gaussian channel is a discrete-time channel with input $x_i$ and output $y_i = x_i + \varepsilon_i$, where $\varepsilon_i$ is drawn i.i.d. 
from a Gaussian distribution $\mathcal{N}(0, \sigma^2)$, with mean 0 and standard deviation $\sigma$, which is assumed to be independent of the signal $x_i$.} \end{definition} \begin{definition}[Discrete Gaussian over Lattices] \emph{Given a lattice $\mathrm{\Lambda} \subseteq \mathbb{Z}^w$, the discrete Gaussian distribution over $\mathrm{\Lambda}$ with standard deviation $\sigma \in \mathbb{R}$ and center $\textbf{v} \in \mathbb{R}^w$ is defined as: \[D(\mathrm{\Lambda},\textbf{v},\sigma^2)_\textbf{x} = \dfrac{\rho_\textbf{x}(\textbf{v},\sigma^2)}{\rho_\mathrm{\Lambda}(\textbf{v},\sigma^2)};\ \forall \textbf{x} \in \mathrm{\Lambda},\] where $\rho_\mathrm{\Lambda}(\textbf{v}, \sigma^2) = \sum_{\textbf{x}_i \in \mathrm{\Lambda}} \rho_{\textbf{x}_i}(\textbf{v},\sigma^2)$ and $$\rho_\textbf{x}(\textbf{v},\sigma^2) = \exp\left( -\pi \dfrac{||\textbf{x} - \textbf{v}||_{GS}^2}{\sigma^2}\right)$$ and $||\cdot||_{GS}$ denotes the Gram-Schmidt norm. } \end{definition} The smoothing parameter is defined as a measure of the ``difference'' between discrete and standard Gaussians that are defined over identical parameters. Informally, it is the smallest $\sigma$ required by a discrete Gaussian distribution, over a lattice $\mathrm{\Lambda}$, to behave like a continuous Gaussian --- up to some acceptable statistical error. For more details, see \cite{MicReg[04],DO[07],ChungD[13]}. \begin{theorem}[Drowning/Smudging \cite{MartDeo[17],Gold[10],Dodis[10]}] Let $\sigma > 0$ and $y \in \mathbb{Z}$. The statistical distance between $\mathrm{\Psi}(v,\sigma^2)$ and $\mathrm{\Psi}(v,\sigma^2) + y$ is at most $|y|/\sigma$. \end{theorem} Let $X_1, X_2, \ldots, X_n$ be i.i.d. random variables from the same distribution, i.e., all $X_i$'s have the same mean $\mu$ and standard deviation $\sigma$. Let the random variable $\overline{X}_n$ be the average of $X_1, \ldots, X_n$. Then, the following theorem holds.
\begin{theorem}[Strong Law of Large Numbers] $\overline{X}_n$ converges almost surely to $\mu$ as $n\rightarrow\infty$. \end{theorem} \subsection{Learning with Errors}\label{LWE} The learning with errors (LWE) problem~\cite{Reg[05]} is at the center of the majority of lattice-based cryptographic constructions~\cite{Peikert[16]}. LWE is known to be hard based on the worst-case hardness of standard lattice problems such as GapSVP (decision version of the Shortest Vector Problem) and SIVP (Shortest Independent Vectors Problem)~\cite{Reg[05],Pei[09]}. Multiple variants of LWE such as ring LWE~\cite{Reg[10]}, module LWE~\cite{Ade[15]}, cyclic LWE~\cite{Charles[20]}, continuous LWE~\cite{Bruna[20]}, \textsf{PRIM LWE}~\cite{SehrawatVipin[21]}, middle-product LWE~\cite{Miruna[17]}, group LWE~\cite{NicMal[16]}, entropic LWE \cite{ZviVin[16]}, universal LWE \cite{YanHua[22]}, and polynomial-ring LWE~\cite{Damien[09]} have been developed since 2010. Many cryptosystems rely on the hardness of LWE, including (identity-based, leakage-resilient, fully homomorphic, functional, public-key/key-encapsulation, updatable, attribute-based, inner product, predicate) encryption~\cite{AnaFan[19],KimSam[19],WangFan[19],Reg[05],Gen[08],Adi[09],Reg[10],Shweta[11],Vinod[11],Gold[13],Jan[18],Hayo[19],Bos[18],Bos[16],WBos[15],Brak[14],Fan[12],Joppe[13],Adriana[12],Lu[18],AndMig[22],MiaSik[22],Boneh[13],Vipin[19],LiLi[22],RaviHow[22],ShuiTak[20],SerWe[15]}, oblivious transfer~\cite{Pei[08],Dott[18],Quach[20]}, (blind) signatures~\cite{Gen[08],Vad[09],Markus[10],Vad[12],Tesla[20],Dili[17],FALCON[20]}, PRFs with special algebraic properties~\cite{Ban[12],Boneh[13],Ban[14],Ban[15],Zvika[15],Vipin[19],KimDan[17],RotBra[17],RanChen[17],KimWu[17],KimWu[19],Qua[18],KevAna[14],SinShi[20],BanDam[18],PeiShi[18]}, verifiable/homomorphic/function secret sharing~\cite{SehrawatVipin[21],GHL[21],Boy[17],DodHal[16],GilLin[17],LisPet[19]}, hash functions~\cite{Katz[09],Pei[06]}, secure matrix multiplication 
computation~\cite{Dung[16],Wang[17]}, verifiable quantum computations~\cite{Urmila[18],Bra[21],OrNam[21],ZhenAlex[21]}, noninteractive zero-knowledge proof system for (any) NP language~\cite{Sina[19]}, classically verifiable quantum computation~\cite{Urmila[18]}, certifiable randomness generation \cite{Bra[21]}, obfuscation~\cite{Huijia[16],Gentry[15],Hal[17],ZviVin[16],AnanJai[16],CousinDi[18]}, multilinear maps \cite{Grg[13],Gentry[15],Gu[17]}, lossy-trapdoor functions \cite{BellKil[12],PeiW[08],HoWee[12]}, quantum homomorphic encryption \cite{Mahadev[18]}, key exchange \cite{Cost[15],Alkim[16],StebMos[16]}, and many more~\cite{Peikert[16],JiaZhen[20],KatzVadim[21]}. \begin{definition}[Decision-LWE \cite{Reg[05]}]\label{defLWE} \emph{For positive integers $w$ and $q \geq 2$, and an error (probability) distribution $\chi$ over $\mathbb{Z}$, the decision-LWE${}_{w, q, \chi}$ problem is to distinguish between the following pairs of distributions: \[((\textbf{a}_i, \langle \textbf{a}_i, \textbf{s} \rangle + e_i \bmod q))_i \quad \text{and} \quad ((\textbf{a}_i, u_i))_i,\] where $i \in [\poly(w)], \textbf{a}_i \xleftarrow{\; \$ \;} \mathbb{Z}^{w}_q, \textbf{s} \in \mathbb{Z}^w_q, e_i \xleftarrow{\; \$ \;} \chi,$ and $u_i \xleftarrow{\; \$ \;} \mathbb{Z}_q$.} \end{definition} Regev~\cite{Reg[05]} showed that for certain noise distributions and a sufficiently large $q$, the LWE problem is as hard as the worst-case SIVP and GapSVP under a quantum reduction (see~\cite{Pei[09],Zvika[13],ChrisOded[17]} for other reductions). Standard instantiations of LWE assume $\chi$ to be a rounded or discrete Gaussian distribution. Regev's proof requires $\alpha q \geq 2 \sqrt{w}$ for ``noise rate'' $\alpha \in (0,1)$. These results were extended by Applebaum et al.~\cite{Benny[09]} to show that the fixed secret $\textbf{s}$ can be sampled from a low norm distribution. 
Specifically, they showed that sampling $\textbf{s}$ from the noise distribution $\chi$ does not weaken the hardness of LWE. Later, Micciancio and Peikert discovered that a simple low-norm distribution also works as $\chi$~\cite{Micci[13]}. \subsection{Pseudorandom Functions} In a pseudorandom function (PRF) family~\cite{Gold[86]}, each function is specified by a key such that, with the key, it can be evaluated deterministically but behaves like a random function without it. Here, we recall the formal definition of a PRF family. Recall that an ensemble of probability distributions is a sequence $\{X_n\}_{n \in \mathbb{N}}$ of probability distributions. \begin{definition}[Negligible Function]\label{Neg} \emph{For security parameter $\L$, a function $\eta(\L)$ is called \textit{negligible} if for all $c > 0$, there exists a $\L_0$ such that $\eta(\L) < 1/\L^c$ for all $\L > \L_0$.} \end{definition} \begin{definition}[Computational Indistinguishability~\cite{Gold[82]}] \emph{Let $X = \{X_\lambda\}_{\lambda \in \mathbb{N}}$ and $Y = \{Y_\lambda\}_{\lambda \in \mathbb{N}}$ be ensembles, where $X_\lambda$'s and $Y_\lambda$'s are probability distributions over $\{0,1\}^{\kappa(\lambda)}$ for $\lambda \in \mathbb{N}$ and some polynomial $\kappa(\lambda)$. 
We say that $\{X_\lambda\}_{\lambda \in \mathbb{N}}$ and $\{Y_\lambda\}_{\lambda \in \mathbb{N}}$ are polynomially/computationally indistinguishable if the following holds for every (probabilistic) polynomial-time algorithm $\mathcal{D}$ and all $\lambda \in \mathbb{N}$: \[\Big| \Pr[t \leftarrow X_\lambda: \mathcal{D}(t) = 1] - \Pr[t \leftarrow Y_\lambda: \mathcal{D}(t) = 1] \Big| \leq \eta(\lambda),\] where $\eta$ is a negligible function.} \end{definition} \begin{remark}[Perfect Indistinguishability] We say that $\{X_\lambda\}_{\lambda \in \mathbb{N}}$ and $\{Y_\lambda\}_{\lambda \in \mathbb{N}}$ are perfectly indistinguishable if the following holds for all $t$: \[\Pr[t \leftarrow X_\lambda] = \Pr[t \leftarrow Y_\lambda].\] \end{remark} We consider adversaries interacting as part of probabilistic experiments called games. For an adversary $\mathcal{A}$ and two games $\mathfrak G_1, \mathfrak G_2$ with which it can interact, $\mathcal{A}$'s distinguishing advantage is: \[Adv_{\mathcal{A}}(\mathfrak{G}_1, \mathfrak{G}_2) := \Big|\Pr[\mathcal{A} \text{ accepts in } \mathfrak G_1] - \Pr[\mathcal{A} \text{ accepts in } \mathfrak G_2]\Big|.\] For the security parameter $\L$, the two games are said to be computationally indistinguishable if it holds that: $$Adv_{\mathcal{A}}(\mathfrak{G}_1, \mathfrak{G}_2) \leq \eta(\L),$$ where $\eta$ is a negligible function. \begin{definition}[PRF] \emph{Let $A$ and $B$ be finite sets, and let $\mathcal{F} = \{ F_k: A \rightarrow B \}$ be a function family, endowed with an efficiently sampleable distribution (more precisely, $\mathcal{F}, A$ and $B$ are all indexed by the security parameter $\L$). We say that $\mathcal{F}$ is a PRF family if the following two games are computationally indistinguishable: \begin{enumerate}[label=(\roman*)] \item Choose a function $F_k \in \mathcal{F}$ and give the adversary adaptive oracle access to $F_k$.
\item Choose a uniformly random function $U: A \rightarrow B$ and give the adversary adaptive oracle access to $U$. \end{enumerate}} \end{definition} Hence, PRF families are efficient distributions of functions that cannot be efficiently distinguished from the uniform distribution. For a PRF $F_k \in \mathcal{F}$, the index $k$ is called its key/seed. PRFs have a wide range of applications, most notably in cryptography, but also in computational complexity and computational learning theory. For a detailed introduction to PRFs and a review of the noteworthy results, we refer the interested reader to the survey by Bogdanov and Rosen \cite{AndAlo[17]}. In 2020, Liu and Pass \cite{LiuPass[20]} made a remarkable breakthrough and tied the existence of pseudorandom functions --- and one-way functions, in general --- to the average-case hardness of $K^{\poly}$-complexity, which denotes polynomial-time-bounded Kolmogorov complexity (see \cite{Solomon[64],Chai[69],Kko[86]} for an introduction to Kolmogorov complexity). \subsection{Linear Regression}\label{Sec5} Linear regression is a linear approach to model the relationship between a dependent variable and explanatory/independent variable(s). As is the case with most statistical analysis, the goal of regression is to make sense of the observed data in a useful manner. It analyzes the training data and attempts to model the relationship between the dependent and explanatory/independent variable(s) by fitting a linear equation to the observed data. The resulting predictions (often) have errors, which cannot be predicted accurately~\cite{Trevor[09],Montgo[12]}.
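The fitting step can be sketched in a few lines. The snippet below (plain Python; the helper name `least_squares_fit` is our own) computes the ordinary least squares estimates of the intercept and slope in closed form:

```python
def least_squares_fit(xs, ys):
    # closed-form ordinary least squares for y = b0 + b1 * x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)            # sum of squared deviations
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx          # estimated slope
    b0 = my - b1 * mx       # estimated intercept
    return b0, b1

# exactly linear data recovers the coefficients with zero error
b0, b1 = least_squares_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(b0, b1)  # 1.0 2.0
```

On exactly linear data the residuals vanish; on noisy data the same formulas return the least-squares estimates of $\beta_0$ and $\beta_1$ discussed below.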
For linear regression, the mean and variance functions are defined as: \[\E(Y |X = x) = \beta_0 + \beta_1 x \quad \text{and} \quad \text{var}(Y |X = x) = \sigma^2,\] respectively, where $\E(\cdot)$ and $\sigma$ denote the expected value and standard deviation, respectively; $\beta_0$ represents the intercept, which is the value of $\E(Y |X = x)$ when $x$ equals zero; $\beta_1$ denotes the slope, i.e., the rate of change in $\E(Y |X = x)$ for a unit change in $X$. The parameters $\beta_0$ and $\beta_1$ are also known as \textit{regression coefficients}. For any regression model, the observed value $y_i$ might not always equal its expected value $\E(Y |X = x_i)$. To account for this difference between the observed data and the expected value, the statistical error, defined as: $$\epsilon_i = y_i - \E(Y |X = x_i),$$ was introduced. The value of $\epsilon_i$ depends on \emph{unknown} parameters in the mean function. For linear regression, errors are random variables that correspond to the vertical distance between the point $y_i$ and the mean function $\E(Y |X = x_i)$. Depending on the type and size of the training data, different algorithms, such as gradient descent or least squares, may be used to compute the values of $\beta_0$ and $\beta_1$. In this paper, we employ least squares linear regression to estimate $\beta_0$ and $\beta_1$, and generate the optimal hypothesis for the target function. Due to the inherent error in all regression models, it holds that: $$h(x) = f(x) + \varepsilon_x,$$ where $h(x)$ is the (optimal) hypothesis of the linear regression model, $f(x)$ is the target function and $\varepsilon_x$ is the total (reducible + irreducible) error at point $x$. \subsection{Interconnection Network} In an interconnection network, each device is independent and connects with other devices via point-to-point links, which are two-way communication lines.
Therefore, an interconnection network can be modeled as an undirected graph $G = (V, E)$, where each device is a vertex in $V$ and edges in $E$ represent links between the devices. Next, we recall some basic definitions/notations for undirected graphs. \begin{definition} \emph{The degree $\deg(v)$ of a vertex $v \in V$ is the number of adjacent vertices it has in a graph $G$. The degree of a graph $G$ is defined as: $\deg(G) = \max\limits_{v \in V}(\deg(v))$.} \end{definition} If $\deg(v_i) = \deg(v_j)$ for all $v_i, v_j \in V$, then $G$ is called a regular graph. Since it is easy to construct star graphs that are hierarchical, vertex- and edge-symmetric, maximally fault-tolerant, and strongly resilient, along with having other desirable properties such as small(er) degree, diameter, genus and fault diameter \cite{Akera[89],Sheldon[94]}, networks of star graphs are well-suited to model interconnection networks. For a detailed introduction to interconnection networks, we refer the interested reader to the comprehensive book by Duato et al. \cite{Sudha[02]}. \section{Related Work}\label{Sec3} \subsection{Learning with Rounding}\label{LWR} Naor and Reingold \cite{Naor[9]} introduced synthesizers to construct PRFs via a hard-to-learn deterministic function. The obstacle in using LWE as the hard learning problem in their synthesizers is that LWE's hardness depends on random errors. In fact, without the error, LWE becomes a trivial problem that can be solved via Gaussian elimination. Therefore, in order to use these synthesizers to construct LWE-based PRFs, there was a need to replace the random errors with deterministic --- yet sufficiently independent --- errors such that the hardness of LWE is not weakened. Banerjee et al.~\cite{Ban[12]} addressed this problem by introducing the learning with rounding (LWR) problem, wherein instead of adding a small random error, as done in LWE, a deterministically rounded version of the sample is generated.
For $q \geq p \geq 2$, the rounding function, $\lfloor \cdot \rceil_p: \mathbb{Z}_q \rightarrow \mathbb{Z}_p$, is defined as: \[\lfloor x \rceil_p = \left\lfloor \dfrac{p}{q} \cdot x \right\rceil,\] i.e., if $\lfloor x \rceil_p = y$, then $y \cdot \lfloor q/p \rceil$ is the integer multiple of $\lfloor q/p \rceil$ that is nearest to $x$. The error in LWR comes from deterministically rounding $x$ to a (relatively) nearby value in $\mathbb{Z}_p$. \begin{definition}[LWR Distribution~\cite{Ban[12]}] \emph{Let $q \geq p \geq 2$ be positive integers, then: for a vector $\textbf{s} \in \mathbb{Z}^w_q$, LWR distribution $L_\textbf{s}$ is defined to be a distribution over $\mathbb{Z}^w_q \times \mathbb{Z}_p$ that is obtained by choosing a vector $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}^w_q$, and outputting: $(\textbf{a},b = \lfloor \langle \textbf{a},\textbf{s} \rangle \rceil_p).$} \end{definition} For a given distribution over $\textbf{s} \in \mathbb{Z}^w_q$ (e.g., the uniform distribution), the decision-LWR${}_{w,q,p}$ problem is to distinguish (with advantage non-negligible in $w)$ between some fixed number of independent samples $(\textbf{a}_i,b_i) \leftarrow L_\textbf{s}$, and the same number of samples drawn uniformly from $\mathbb{Z}^w_q \times \mathbb{Z}_p$. Banerjee et al. proved decision-LWR to be as hard as decision-LWE for a setting of parameters where the modulus and modulus-to-error ratio are superpolynomial in the security parameter \cite{Ban[12]}. Alwen et al.~\cite{Alwen[13]}, Bogdanov et al.~\cite{Andrej[16]}, and Bai et al.~\cite{ShiBai[18]} made further improvements on the range of parameters and hardness proofs for LWR. LWR has been used to construct pseudorandom generators/functions~\cite{Ban[12],Boneh[13],Ban[14],Vipin[19],VipinThesis[19],BenoSan[17]}, and probabilistic~\cite{Jan[18],Hayo[19]} and deterministic~\cite{Xie[12]} encryption schemes. 
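The rounding map and the shape of an LWR sample can be sketched as follows (toy parameters $q=16$, $p=4$, round-half-up; plain Python with helper names of our own choosing):

```python
import random

def round_p(x, q, p):
    # floor((p/q) * x + 1/2) mod p, i.e., the map Z_q -> Z_p from the text,
    # computed in exact integer arithmetic as floor((2*p*x + q) / (2*q))
    return ((2 * p * x + q) // (2 * q)) % p

def lwr_sample(s, q, p, rng):
    # one LWR sample (a, b = round_p(<a, s>)); the only "noise" in b
    # is the information discarded deterministically by the rounding
    a = [rng.randrange(q) for _ in s]
    b = round_p(sum(ai * si for ai, si in zip(a, s)) % q, q, p)
    return a, b

q, p = 16, 4
print(round_p(3, q, p), round_p(6, q, p), round_p(15, q, p))  # 1 2 0
```

Note that, unlike an LWE sample, no explicit error term is drawn: two parties computing $b$ from the same $\textbf{a}$ and $\textbf{s}$ always agree, which is exactly the derandomization property used later.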
As mentioned earlier, hardness reductions of LWR hold for superpolynomial approximation factors over worst-case lattices. Montgomery~\cite{Hart[18]} partially addressed this issue by introducing a new variant of LWR, called the Nearby Learning with Lattice Rounding problem, which supports an unbounded number of samples and a polynomial (in the security parameter) modulus. \subsection{LWR/LWE-based Key-homomorphic PRF\lowercase{s}}\label{foll} Since LWR allows generating derandomized LWE instances, it can be used as the hard-to-learn deterministic function in Naor and Reingold's synthesizers, and can therefore be used to construct LWE-based PRF families for specific parameters. Due to the indispensable small error, LWE-based key-homomorphic PRFs only achieve what is called `almost homomorphism'~\cite{Boneh[13]}. \begin{definition}[Key-homomorphic PRF~\cite{Boneh[13]}] \emph{Let $F: \mathcal{K} \times \mathcal{X} \rightarrow \mathbb{Z}^w_q$ be an efficiently computable function such that $(\mathcal{K}, \oplus)$ is a group. We say that the tuple $(F, \oplus)$ is a $\gamma$-almost key-homomorphic PRF if the following two properties hold: \begin{enumerate}[label=(\roman*)] \item $F$ is a secure PRF, \item for all $k_1, k_2 \in \mathcal{K}$ and $x \in \mathcal{X}$, there exists $\textbf{e} \in [0, \gamma]^w$ such that: $$F_{k_1}(x) + F_{k_2}(x) = F_{k_1 \oplus k_2}(x) + \textbf{e} \bmod q.$$ \end{enumerate}} \end{definition} Multiple key-homomorphic PRF families have been constructed via varying approaches~\cite{Naor[99],Boneh[13],Ban[14],Parra[16],SamK[20],Navid[20]}.
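The origin of the additive error vector $\textbf{e}$ in such rounding-based constructions can be seen already at the level of the rounding map: rounding two inner products separately and rounding their sum differ by at most $1$ per coordinate. The brute-force sketch below (our own illustrative code, toy parameters; actual constructions fix representatives so the error lands in $[0,\gamma]$ rather than $\{-1,0,1\}$) verifies this exhaustively for one coordinate:

```python
q, p = 64, 8

def rnd(x):
    # floor((p/q) * x + 1/2) mod p, an LWR-style rounding map
    return ((2 * p * x + q) // (2 * q)) % p

def centered(e):
    # representative of e mod p in [-p/2, p/2)
    e %= p
    return e - p if e >= p // 2 else e

# homomorphism error rnd(x) + rnd(y) - rnd(x + y) over all pairs in Z_q
errs = {centered(rnd(x) + rnd(y) - rnd((x + y) % q))
        for x in range(q) for y in range(q)}
print(sorted(errs))  # [-1, 0, 1]
```

So the sum of two rounded values equals the rounded sum up to a bounded error, which is precisely the $\gamma$-almost key-homomorphism (here with $\gamma = 1$, up to the choice of representatives).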
In addition to key-homomorphism, PRF families have been defined/constructed with various other properties and features, e.g., bi-homomorphic PRFs~\cite{Vipin[19]}, (private) constrained PRFs~\cite{Zvika[15],KimDan[17],RotBra[17],RanChen[17],DanDav[15],SinShi[20],PeiShi[18]}, Legendre PRFs \cite{IVD[88]}, power residue PRFs \cite{Ward[20]}, traceable PRFs \cite{GoyalWu[21],MaiWu[22]}, quantum PRFs \cite{MaZha[12],NicPu[20]}, oblivious PRFs \cite{SilJul[22],BonKog[20],FreePin[05]}, domain-preserving PRFs \cite{WaGu[19]}, structure-preserving PRFs \cite{MasaRafa[19]}, related-key attack (RKA) secure PRFs \cite{MihiDav[10],MihiDav[03],DavLis[10],Lucks[04],KrzT[08],EyaWid[14],Fab[19],KevAna[14],Boneh[13],Vipin[19],Ban[14]}, threshold/distributed PRFs \cite{CacKla[05],StanHug[18],BanDam[18],Boneh[13],Vipin[19],Ban[14]}, privately programmable PRFs \cite{DanDav[15],PeiShi[18]}, (zero-knowledge) provable PRFs \cite{BenoSan[17],CarMit[19]}, and watermarkable PRFs~\cite{KimWu[17],KimWu[19],Qua[18],ManHo[20],Yukun[21],SinShi[20]}. Note that key-homomorphic PRFs directly lead to RKA-secure PRFs and (one-round) distributed PRFs \cite{Boneh[13]}. \section{Star-specific Key-homomorphic PRF: Definition}\label{Sec4} In this section, we formally define star-specific key-homomorphic (SSKH) PRFs. Let $G = (V,E)$ be a graph, representing an interconnection network, containing multiple star graphs wherein the leaves of each star graph, $\partial$, represent unique parties and the root represents a central hub that broadcasts messages to all leaves/parties in $\partial$. Different star graphs may have an arbitrary number of shared leaves. Henceforth, we call such a graph an \textit{interconnection graph}. \Cref{DefFig} depicts a simple interconnection graph with two star graphs, each containing one central hub, along with eight parties/leaves, where one leaf is shared by both star graphs.
Note that an interconnection graph is simply a bipartite graph with a given partition of its vertices into two disjoint and independent subsets $V_1$ and $V_2$. (The vertices in $V_1$ are the central hubs and the vertices in $V_2$ are the parties.) \begin{figure} \centering \includegraphics[scale=.5]{PRF2.png} \caption{Example Interconnection Graph}\label{DefFig} \end{figure} \begin{definition}\label{MainDef} \emph{Let graph $G = (V, E)$ be an interconnection graph with a set of vertices $V$ and a set of edges $E$. Let there be $\rho$ star graphs $\partial_1, \ldots, \partial_\rho$ in $G$. Let $F=\left(F^{(\partial_i)}\right)_{i=1,\ldots,\rho}$ be a family of PRFs, where, for each $i$, $F^{(\partial_i)}: \mathcal{K} \times \mathcal{X} \rightarrow \mathbb{Z}^w_q$ with $(\mathcal{K}, \oplus)$ a group. Then, we say that the tuple $(F, \oplus)$ is a star-specific $(\delta,\gamma,p)$-almost key-homomorphic PRF if the following two conditions hold: \begin{enumerate}[label=(\roman*)] \item for all $\partial_i \neq \partial_j~(i,j \in [\rho]), k \in \mathcal{K}$ and $x \in \mathcal{X}$, it holds that: \[\Pr[F^{(\partial_i)}_k(x) = F^{(\partial_j)}_k(x)] \leq \delta^w+\eta(\L),\] where $F^{(\partial)}_k(x)$ denotes the PRF computed by parties in star graph $\partial \subseteq V(G)$ on input $x \in \mathcal{X}$ and key $k \in \mathcal{K}$, and $\eta(\L)$ is a negligible function in the security parameter $\L$, \item for all $k_1, k_2 \in \mathcal{K}$ and $x \in \mathcal{X}$, there exists a vector $\textbf{e}=(e_1,\ldots,e_w)$ satisfying: $$F_{k_1}^{(\partial)}(x) + F^{(\partial)}_{k_2}(x) = F^{(\partial)}_{k_1 \oplus k_2}(x) + \textbf{e} \bmod q,$$ such that for all $i\in [w]$, it holds that: $\Pr[-\gamma\leq e_i\leq\gamma]\geq p$. 
\end{enumerate}} \end{definition} \section{Maximally Cover-free At Most $t$-intersecting $k$-uniform Families}\label{Extremal} Extremal combinatorics deals with the problem of determining or estimating the maximum or minimum cardinality of a collection of finite objects that satisfies some specific set of requirements. It is also concerned with the investigation of inequalities between combinatorial invariants, and questions dealing with relations among them. For an introduction to the topic, we refer the interested reader to the books by Jukna \cite{Stas[11]} and Bollob\'{a}s \cite{BollB[78]}, and the surveys by Alon \cite{Alon1[03],Alon1[08],Alon1[16],Alon1[20]}. Extremal combinatorics can be further divided into the following distinct fields: \begin{itemize} \item Extremal graph theory, which began with the work of Mantel in 1907 \cite{Aign[95]} and was first investigated in earnest by Tur\'{a}n in 1941 \cite{Tura[41]}. For a survey of the important results in the field, see \cite{Niki[11]}. \item Ramsey theory, which was popularised by Erd\H{o}s and Szekeres \cite{Erd[47],ErdGeo[35]} by extending a result of Ramsey from 1929 (published in 1930 \cite{FrankRam[30]}). For a survey of the important results in the field, see \cite{JacFox[15]}. \item Extremal problems in arithmetic combinatorics, which grew from the work of van der Waerden in 1927 \cite{BL[27]} and the Erd\H{o}s-Tur\'{a}n conjecture of 1936 \cite{PLPL[36]}. For a survey of the important results in the field, see \cite{Choon[12]}. \item Extremal (finite) set theory, which was first investigated by Sperner~\cite{Sperner[28]} in 1928 by establishing the maximum size of an antichain, i.e., a set-system where no member is a superset of another. However, it was Erd\H{o}s et al. \cite{Erdos[61]} who started systematic research in extremal set theory. \end{itemize} Extremal set theory deals with determining the size of set-systems that satisfy certain restrictions.
It is one of the most rapidly developing areas in combinatorics, with applications in various other branches of mathematics and theoretical computer science, including functional analysis, probability theory, circuit complexity, cryptography, coding theory, probabilistic methods, discrete geometry, linear algebra, spectral graph theory, ergodic theory, and harmonic analysis \cite{Beimel[15],Beimel[12],Zeev[15],Zeev[11],Klim[09],Sergey[08],Liu[17],SehrawatVipin[21],VipinYvo[20],Polak[13],Blackburn[03],WangThesis[20],Sudak[10],GarVac[94],Gro[00],IWF[20]}. For more details on extremal set theory, we refer the reader to the book by Gerbner and Patkos \cite{GerbBala[18]}; for probabilistic arguments/proofs, see the books by Bollob\'{a}s \cite{Boll[86]} and Spencer \cite{JSpen[87]}. Our work in this paper concerns a subfield of extremal set theory, called \textit{intersection theorems}, wherein set-systems under specific intersection restrictions are constructed, and bounds on their sizes are derived. A wide range of methods have been employed to establish a large number of intersection theorems over various mathematical structures, including vector subspaces, graphs, subsets of finite groups with given group actions, and uniform hypergraphs with stronger or weaker intersection conditions. The methods used to derive these theorems have included purely combinatorial methods such as shifting/compressions, algebraic methods (including linear-algebraic, Fourier analytic and representation-theoretic), analytic, probabilistic and regularity-type methods. We shall not give a full account of the known intersection theorems, but only touch upon the results that are particularly relevant to our set-system and its construction. For a broader account, we refer the interested reader to the comprehensive surveys by Ellis \cite{Ell[21]}, and Frankl and Tokushige~\cite{Frankl[16]}. 
For an introduction to intersecting and cross-intersecting families related to hypergraphs, see~\cite{AMDD[2020],Kleit[79]}. \begin{note} Set-system and hypergraph are very closely related terms, and commonly used interchangeably. Philosophically, in a hypergraph, the focus is more on vertices, vertex subsets being in ``relation'', and subset(s) of vertices satisfying a specific configuration of relations; whereas in a set-system, the focus is more on set-theoretic properties of the sets. \end{note} In this section, we derive multiple intersection theorems for: \begin{enumerate} \item at most $t$-intersecting $k$-uniform families of sets, \item maximally cover-free at most $t$-intersecting $k$-uniform families of sets. \end{enumerate} We also provide an explicit construction for at most $t$-intersecting $k$-uniform families of sets. Later, we use the results from this section to establish the maximum number of SSKH PRFs that can be constructed securely by a set of parties against various active/passive and internal/external adversaries. For $a,b\in\mathbb{Z}$ with $a\leq b$, let $[a,b]:=\{a,a+1,\ldots,b-1,b\}$. \begin{definition} \emph{$\mathcal{H}\subseteq 2^{[n]}$ is $k$-uniform if $|A|=k$ for all $A\in\mathcal{H}$.} \end{definition} \begin{definition} \emph{$\mathcal{H}\subseteq 2^{[n]}$ is maximally cover-free if $$A\not\subseteq\bigcup_{B\in\mathcal{H}, B\neq A}B$$ for all $A\in\mathcal{H}$.} \end{definition} It is clear that $\mathcal{H}\subseteq 2^{[n]}$ is maximally cover-free if and only if every $A\in\mathcal{H}$ has some element $x_A$ such that $x_A\not\in B$ for all $B\in\mathcal{H}$ with $B\neq A$. Furthermore, the maximum size of a $k$-uniform family $\mathcal{H}\subseteq 2^{[n]}$ that is maximally cover-free is $n-k+1$, and it is given by the set system $$\mathcal{H}=\left\{[k-1]\cup\{x\}: x\in[k, n]\right\}$$ (and this is unique up to permutations of $[n]$). \begin{definition} \label{intersecting_definitions} \emph{Let $t$ be a non-negative integer. 
We say the set system $\mathcal{H}$ is \begin{enumerate}[label = {(\roman*)}] \item at most $t$-intersecting if $|A\cap B|\leq t$, \item exactly $t$-intersecting if $|A\cap B|=t$, \item at least $t$-intersecting if $|A\cap B|\geq t$, \end{enumerate} for all $A, B\in\mathcal{H}$ with $A\neq B$.} \end{definition} Property (iii) in \Cref{intersecting_definitions} is often simply called ``$t$-intersecting'' \cite{Borg[11]}, but we shall use the term ``at least $t$-intersecting'' for clarity. \begin{definition} \emph{Let $\mathcal{F},\mathcal{G}\subseteq 2^{[n]}$. We say that $\mathcal{F}$ and $\mathcal{G}$ are equivalent (denoted $\mathcal{F}\sim\mathcal{G}$) if there exists a permutation $\pi$ of $[n]$ such that $\pi^\ast(\mathcal{F})=\mathcal{G}$, where $$\pi^\ast(\mathcal{F})=\left\{\{\pi(a) :a\in A\}: A\in\mathcal{F}\right\}.$$} \end{definition} For $n,k,t,m\in\mathbb{Z}^+$ with $t\leq k\leq n$, let $N(n,k,t,m)$ denote the collection of all set systems $\mathcal{H}\subseteq 2^{[n]}$ of size $m$ that are at most $t$-intersecting and $k$-uniform, and $M(n,k,t,m)$ denote the collection of set systems $\mathcal{H}\in N(n,k,t,m)$ that are also maximally cover-free. The following proposition establishes a bijection between equivalence classes of these two collections of set systems (for different parameters): \begin{proposition} \label{maximally_cover_free} Suppose $n,k,t,m\in\mathbb{Z}^+$ satisfy $t\leq k\leq n$ and $m<n$. Then there exists a bijection $$M(n,k,t,m)\ /\sim\ \leftrightarrow N(n-m,k-1,t,m)\ /\sim.$$ \end{proposition} \begin{proof} We will define functions \begin{align*} \bar{f}:&& M(n,k,t,m)\ /\sim\ &\to N(n-m,k-1,t,m)\ /\sim \\ \bar{g}:&& N(n-m,k-1,t,m)\ /\sim\ &\to M(n,k,t,m)\ /\sim \end{align*} that are inverses of each other. Let $\mathcal{H}\in M(n,k,t,m)$. Since $\mathcal{H}$ is maximally cover-free, for every $A\in\mathcal{H}$, there exists $x_A\in A$ such that $x_A\not\in B$ for all $B\in\mathcal{H}$ such that $B\neq A$.
Consider the set system $\{A\setminus\{x_A\}: A\in\mathcal{H}\}$. First, note that although this set system depends on the choice of $x_A\in A$ for each $A\in\mathcal{H}$, the equivalence class of $\{A\setminus\{x_A\}: A\in\mathcal{H}\}$ is independent of this choice, hence this gives us a map \begin{align*} f:M(n,k,t,m)&\to N(n-m,k-1,t,m)\ /\sim \\ \mathcal{H}&\mapsto [\{A\setminus\{x_A\}: A\in\mathcal{H}\}]. \end{align*} Furthermore, it is clear that if $\mathcal{H}\sim\mathcal{H}'$, then $f(\mathcal{H})\sim f(\mathcal{H}')$, so $f$ induces a well-defined map $$\bar{f}: M(n,k,t,m)\ /\sim\ \to N(n-m,k-1,t,m)\ /\sim.$$ Next, for a set system $\mathcal{G}=\{G_1,\ldots, G_m\}\in N(n-m,k-1,t,m)$, define \begin{align*} g: N(n-m,k-1,t,m) &\to M(n,k,t,m)\ /\sim \\ \mathcal{G} &\mapsto [\{G_i\cup\{n-m+i\}:i\in[m]\}]. \end{align*} Again, this induces a well-defined map $$\bar{g}: N(n-m,k-1,t,m)\ /\sim\ \to M(n,k,t,m)\ /\sim$$ since $g(\mathcal{G})\sim g(\mathcal{G}')$ for any $\mathcal{G}$, $\mathcal{G}'$ such that $\mathcal{G}\sim\mathcal{G}'$. We can check that $\bar{f}\circ \bar{g}=id_{N(n-m,k-1,t,m)}$ and that $\bar{g}\circ \bar{f}=id_{M(n,k,t,m)}$, and thus $\bar{f}$ and $\bar{g}$ are bijections, as desired. \qed \end{proof} \begin{corollary} \label{maximally_cover_free_corollary} Let $n,k,t,m\in\mathbb{Z}^+$ be such that $t\leq k\leq n$ and $m<n$. Then there exists a maximally cover-free, at most $t$-intersecting, $k$-uniform set system $\mathcal{H}\subseteq 2^{[n]}$ of size $m$ if and only if there exists an at most $t$-intersecting, $(k-1)$-uniform set system $\mathcal{G}\subseteq 2^{[n-m]}$ of size $m$. \end{corollary} \begin{remark} Both Proposition \ref{maximally_cover_free} and Corollary \ref{maximally_cover_free_corollary} remain true if, instead of at most $t$-intersecting families, we consider exactly $t$-intersecting or at least $t$-intersecting families.
\end{remark} At least $t$-intersecting families have been completely characterized by Ahlswede and Khachatrian \cite{Ahlswede[97]}, but the characterization of exactly $t$-intersecting and at most $t$-intersecting families remains open. Let $\varpi(n,k,t)=\max\left\{|\mathcal{H}|: \mathcal{H}\subseteq 2^{[n]}\text{ is at most }t\text{-intersecting and }k\text{-uniform}\right\}.$ \begin{proposition} \label{simple_bound} Suppose $n,k,t\in\mathbb{Z}^+$ are such that $t\leq k\leq n$. Then $$\varpi(n,k,t)\leq\frac{\binom{n}{t+1}}{\binom{k}{t+1}}.$$ \end{proposition} \begin{proof} Let $\mathcal{H}\subseteq 2^{[n]}$ be an at most $t$-intersecting and $k$-uniform family. The number of pairs $(X, A)$, where $A\in\mathcal{H}$ and $X\subseteq A$ has size $t+1$, is equal to $|\mathcal{H}|\cdot\binom{k}{t+1}$. Since $\mathcal{H}$ is at most $t$-intersecting, any $(t+1)$-element subset of $[n]$ lies in at most one set in $\mathcal{H}$. Thus, $$|\mathcal{H}|\cdot\binom{k}{t+1}\leq\binom{n}{t+1}\implies |\mathcal{H}|\leq\frac{\binom{n}{t+1}}{\binom{k}{t+1}}.$$ \qed \end{proof} Using Proposition \ref{maximally_cover_free}, we immediately obtain the following as a corollary: \begin{corollary} \label{simple_bound_corollary} Suppose $\mathcal{H}\subseteq 2^{[n]}$ is maximally cover-free, at most $t$-intersecting and $k$-uniform. Then $$|\mathcal{H}|\leq\frac{\binom{n-|\mathcal{H}|}{t+1}}{\binom{k-1}{t+1}}.$$ \end{corollary} Similarly, by applying Proposition \ref{maximally_cover_free}, other results on at most $t$-intersecting and $k$-uniform set systems can also be translated into results on set systems that, in addition to having these two properties, are maximally cover-free. Often, we will not explicitly state these results, but it will be understood that such results can be easily derived. \subsection{Bounds for Small $n$} In this section, we give several bounds on $\varpi(n,k,t)$ for small values of $n$.
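For very small parameters, $\varpi(n,k,t)$ can also be computed by exhaustive search, which provides a quick sanity check of the bounds derived in this subsection. A sketch (brute force only, feasible for tiny $n$; the closed-form expression matches Proposition \ref{bound_for_small_n}(a), which applies in the regime $n<\frac{1}{2}k(\frac{k}{t}+1)$):

```python
from itertools import combinations
from math import floor, sqrt

def varpi_bruteforce(n, k, t):
    # maximum size of an at most t-intersecting, k-uniform family on [n],
    # found by exhaustive branching (feasible only for tiny parameters)
    sets = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    best = 0
    def extend(chosen, start):
        nonlocal best
        best = max(best, len(chosen))
        for i in range(start, len(sets)):
            A = sets[i]
            if all(len(A & B) <= t for B in chosen):
                extend(chosen + [A], i + 1)
    extend([], 0)
    return best

def varpi_closed_form(n, k, t):
    # closed form from Proposition bound_for_small_n(a) in the text
    return floor(1/2 + k/t - sqrt((1/2 + k/t)**2 - 2*n/t))

for n, k, t in [(4, 3, 1), (5, 3, 1), (5, 4, 2)]:
    assert n < k * (k / t + 1) / 2     # regime where the closed form applies
    assert varpi_bruteforce(n, k, t) == varpi_closed_form(n, k, t)
```

For instance, $\varpi(5,3,1)=2$: the family $\{\{1,2,3\},\{1,4,5\}\}$ is optimal, and the closed form gives $\lfloor 3.5-\sqrt{2.25}\rfloor=2$.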
\begin{lemma} \label{bound_for_small_n_lemma} Let $n,k,t\in\mathbb{Z}^+$ be such that $t\leq k\leq n<\frac{1}{2}k\left(\frac{k}{t}+1\right)$. Let $m'$ be the least positive integer such that $n<m'k-\frac{1}{2}m'(m'-1)t$. Then $$\varpi(n,k,t)=m'-1.$$ \end{lemma} \begin{proof} We will first show that there exists $m^\star\in\mathbb{Z}^+$ such that $n<m^\star k-\frac{1}{2}m^\star(m^\star-1)t$. Consider the quadratic polynomial $p(x)=xk-\frac{1}{2}x(x-1)t$. Note that $p(x)$ achieves its maximum value at $x=\frac{k}{t}+\frac{1}{2}$. If we let $m^\star$ be the unique positive integer such that $\frac{k}{t}\leq m^\star<\frac{k}{t}+1$, then $$p(m^\star)\geq p\left(\frac{k}{t}\right)=\frac{1}{2}k\left(\frac{k}{t}+1\right)>n,$$ as required. Next, suppose $\mathcal{H}$ is an at most $t$-intersecting, $k$-uniform set family with $|\mathcal{H}|\geq m'$. Let $A_1,\ldots, A_{m'}\in\mathcal{H}$ be distinct. Then $$n\geq\left|\bigcup_{i=1}^{m'} A_i\right|=\sum_{i=1}^{m'} \left|A_i\setminus\bigcup_{j=1}^{i-1}A_j\right|\geq\sum_{i=0}^{m'-1} (k-it)=m'k-\frac{1}{2}m'(m'-1)t,$$ which is a contradiction. This proves that $|\mathcal{H}|\leq m'-1$. It remains to construct an at most $t$-intersecting, $k$-uniform set family $\mathcal{H}\subseteq 2^{[n]}$ with $|\mathcal{H}|=m'-1$. Let $m=m'-1$. The statement is trivial if $m=0$, so we may assume that $m\in\mathbb{Z}^+$. By the minimality of $m'$, we must have $n\geq mk-\frac{1}{2}m(m-1)t$. Let $k=\alpha t+\beta$ with $\alpha,\beta\in\mathbb{Z}$ and $0\leq\beta\leq t-1$. Now, define a set system $\mathcal{H}=\{A_1,\ldots,A_m\}$ as follows: $$A_i=\left\{(l,\{i,j\}):l\in[t], j\in[\alpha+1]\setminus\{i\}\right\}\cup\left\{(i,j):j\in[\beta]\right\}.$$ It is clear, by construction, that $\mathcal{H}$ is at most $t$-intersecting and $k$-uniform. 
Furthermore, since $\alpha=\lfloor k/t\rfloor\geq m$, the number of elements in the universe of $\mathcal{H}$ is \begin{align*} &t\cdot\left|\left\{\{i,j\}:1\leq i<j\leq\alpha+1,\,i\leq m\right\}\right|+m\beta \\ =\ &t\left(\binom{\alpha+1}{2}-\binom{\alpha+1-m}{2}\right)+m\beta \\ =\ &t\left(m\alpha-\frac{1}{2}m(m-1)\right)+m\beta \\ =\ &mk-\frac{1}{2}m(m-1)t. \end{align*} \qed \end{proof} \begin{proposition} \label{bound_for_small_n} Let $n,k,t\in\mathbb{Z}^+$ be such that $t\leq k\leq n$. \begin{enumerate}[label = {(\alph*)}] \item If $n<\frac{1}{2}k\left(\frac{k}{t}+1\right)$, then $$\varpi(n,k,t)=\left\lfloor\frac{1}{2}+\frac{k}{t}-\sqrt{\left(\frac{1}{2}+\frac{k}{t}\right)^2-\frac{2n}{t}}\right\rfloor.$$ \item If $t\mid k$ and $n=\frac{1}{2}k\left(\frac{k}{t}+1\right)$, then $$\varpi(n,k,t)=\frac{k}{t}+1.$$ \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[label = {(\alph*)}] \item Note that $m=\left\lfloor\frac{1}{2}+\frac{k}{t}-\sqrt{\left(\frac{1}{2}+\frac{k}{t}\right)^2-\frac{2n}{t}}\right\rfloor$ satisfies $n\geq mk-\frac{1}{2}m(m-1)t$ and $m'=m+1$ satisfies $n< m'k-\frac{1}{2}m'(m'-1)t$, hence the result follows immediately from Lemma \ref{bound_for_small_n_lemma}. \item Let $\mathcal{H}\subseteq 2^{[n]}$ be an at most $t$-intersecting, $k$-uniform set family. We may assume that $|\mathcal{H}|\geq\frac{k}{t}$. We will first show that any three distinct sets in $\mathcal{H}$ have empty intersection. Let $A_1$, $A_2$ and $A_3$ be any three distinct sets in $\mathcal{H}$, and let $A_4,\,\ldots,\,A_{\frac{k}{t}}\in\mathcal{H}$ be such that the $A_i$'s are all distinct. Then $$\left|\bigcup_{i=1}^{\frac{k}{t}} A_i\right|=\sum_{i=1}^{\frac{k}{t}} \left|A_i\setminus\bigcup_{j=1}^{i-1}A_j\right|\geq\sum_{i=0}^{\frac{k}{t}-1} (k-it)=\frac{1}{2}k\left(\frac{k}{t}+1\right)=n,$$ and thus we must have equality everywhere.
In particular, we obtain $|A_3\setminus (A_1\cup A_2)|=k-2t$, which, together with the fact that $\mathcal{H}$ is at most $t$-intersecting, implies that $A_1\cap A_2\cap A_3=\varnothing$, as claimed. Therefore, every $x\in[n]$ lies in at most $2$ sets in $\mathcal{H}$. Now, $$|\mathcal{H}|\cdot k=|\{(A,x):A\in\mathcal{H},\ x\in A\}|\leq 2n\implies |\mathcal{H}|\leq\frac{2n}{k}=\frac{k}{t}+1,$$ proving the first statement. Next, we shall exhibit an at most $t$-intersecting, $k$-uniform set family $\mathcal{H}\subseteq 2^{[n]}$, where $n=\frac{1}{2}k\left(\frac{k}{t}+1\right)$, with $|\mathcal{H}|=\frac{k}{t}+1$. Let $\mathcal{H}=\{A_1,\ldots,A_{\frac{k}{t}+1}\}$ with $$A_i=\left\{(l,\{i,j\}):l\in[t], j\in\left[\frac{k}{t}+1\right]\setminus\{i\}\right\}.$$ It is clear that $\mathcal{H}$ is exactly $t$-intersecting and $k$-uniform, and that it is defined over a universe of $t\cdot\dbinom{k/t+1}{2}=n$ elements. \end{enumerate} \qed \end{proof} \begin{remark} The condition $n<\frac{1}{2}k\left(\frac{k}{t}+1\right)$ in Proposition \ref{bound_for_small_n}(a) is necessary. Indeed, if $n=\dfrac{1}{2}k\left(\dfrac{k}{t}+1\right)$, then $$\left\lfloor\frac{1}{2}+\frac{k}{t}-\sqrt{\left(\frac{1}{2}+\frac{k}{t}\right)^2-\frac{2n}{t}}\right\rfloor=\frac{k}{t}<\frac{k}{t}+1.$$ \end{remark} We will now look at the case where $n=\frac{1}{2}k\left(\frac{k}{t}+1\right)+1$. Unlike above, we do not have exact bounds for this case. But what is perhaps surprising is that, for certain $k$ and $t$, the addition of a single element to the universe set can increase the maximum size of the set family by $3$ or more. \begin{proposition} \label{bound_beyond_small_n} Let $n,k,t\in\mathbb{Z}^+$ be such that $t\leq k\leq n$ and $t\mid k$.
If $n=\frac{1}{2}k\left(\frac{k}{t}+1\right)+1$, then $$\varpi(n,k,t)\leq\frac{\frac{k}{t}+1}{1-\frac{k}{n}}=\left(\frac{k^2+kt+2t}{k^2-kt+2t}\right)\left(\frac{k}{t}+1\right).$$ \end{proposition} \begin{proof} Let $\mathcal{H}\subseteq 2^{[n]}$ be an at most $t$-intersecting and $k$-uniform family. There exists some element $x\in[n]$ such that $x$ is contained in at most $\lfloor\frac{k|\mathcal{H}|}{n}\rfloor$ sets in $\mathcal{H}$. We construct a set family $\mathcal{H}'\subseteq 2^{[n]\setminus\{x\}}$ by taking those sets in $\mathcal{H}$ that do not contain $x$. Since $\mathcal{H}'$ is defined over a universe of $\frac{1}{2}k\left(\frac{k}{t}+1\right)$ elements, applying Proposition \ref{bound_for_small_n}, we obtain \begin{align*} |\mathcal{H}|-\left\lfloor\frac{k|\mathcal{H}|}{n}\right\rfloor\leq|\mathcal{H}'|\leq\frac{k}{t}+1&\implies\left\lceil|\mathcal{H}|-\frac{k|\mathcal{H}|}{n}\right\rceil\leq\frac{k}{t}+1 \\ &\implies|\mathcal{H}|-\frac{k|\mathcal{H}|}{n}\leq\frac{k}{t}+1 \\ &\implies|\mathcal{H}|\leq\frac{\frac{k}{t}+1}{1-\frac{k}{n}}. \end{align*} \qed \end{proof} \begin{remark} \begin{enumerate}[label = {(\alph*)}] \item If $k=3$, $t=1$ and $n=\frac{1}{2}k\left(\frac{k}{t}+1\right)+1=7$, then the bound in Proposition \ref{bound_beyond_small_n} states that $\varpi(n,k,t)\leq\left(\frac{k^2+kt+2t}{k^2-kt+2t}\right)\left(\frac{k}{t}+1\right)=7$. The Fano plane is an example of a $3$-uniform family of size $7$ defined over a universe of $7$ elements that is exactly $1$-intersecting. Thus, the bound in \Cref{bound_beyond_small_n} can be achieved, at least for certain choices of $k$ and $t$.
\begin{figure}[H] \centering \includegraphics[scale=.5]{Fano.png} \caption{The Fano plane} \label{fano_plane} \end{figure} \item Note that $$\left(\frac{k^2+kt+2t}{k^2-kt+2t}\right)\left(\frac{k}{t}+1\right)-\left(\frac{k}{t}+1\right)=\frac{2k^2+2kt}{k^2-kt+2t}.$$ We can show that the above expression is (strictly) bounded above by $6$ (for $k\neq t$), with slightly better bounds for $t=1,\,2,\,3,\,4$. It follows that $$\varpi(n,k,t)\leq\begin{cases} \frac{k}{t}+4 & \text{if }t=1, \\ \frac{k}{t}+5 & \text{if }t=2,\,3,\,4, \\ \frac{k}{t}+6 & \text{if }t\geq 5. \end{cases} $$ Furthermore, $\lim_{k\rightarrow\infty}\frac{2k^2+2kt}{k^2-kt+2t}=2$, thus, for fixed $t$, we have $\varpi(n,k,t)\leq\frac{k}{t}+3$ for large enough $k$. \end{enumerate} \end{remark} Next, we give a necessary condition for the existence of at most $t$-intersecting and $k$-uniform families $\mathcal{H}\subseteq 2^{[n]}$, which implicitly gives a bound on $\varpi(n,k,t)$. \begin{proposition} \label{larger_n} Let $n,k,t\in\mathbb{Z}^+$ satisfy $t\leq k\leq n$, and $\mathcal{H}\subseteq 2^{[n]}$ be an at most $t$-intersecting and $k$-uniform family with $|\mathcal{H}|=m$. Then $$(n-r)\left\lfloor\frac{km}{n}\right\rfloor^2+r\left\lceil\frac{km}{n}\right\rceil^2\leq (k-t)m+tm^2$$ where $r=km-n\left\lfloor\frac{km}{n}\right\rfloor$. \end{proposition} \begin{proof} Let $\alpha_j$ be the number of elements that are contained in exactly $j$ sets in $\mathcal{H}$. We claim that the following hold: \begin{align} \sum_{j=0}^{m}\alpha_j&=n, \label{eq_1} \\ \sum_{j=0}^{m}j\alpha_j&=km, \label{eq_2} \\ \sum_{j=0}^{m}j(j-1)\alpha_j&\leq tm(m-1) \label{eq_3}. \end{align} (\ref{eq_1}) is immediate, (\ref{eq_2}) follows from double counting the set $\{(A,x):A\in\mathcal{H},\ x\in A\}$, and (\ref{eq_3}) follows from considering $\{(A,B,x):A,B\in\mathcal{H},\ A\neq B,\ x\in A\cap B\}$ and using the fact that $\mathcal{H}$ is at most $t$-intersecting. This proves the claim.
Next, let us find non-negative integer values of $\alpha_0,\ldots,\alpha_m$ satisfying both (\ref{eq_1}) and (\ref{eq_2}) that minimize the expression $\sum_{j=0}^{m}j(j-1)\alpha_j$. Note that $$\sum_{j=0}^{m}j(j-1)\alpha_j=\sum_{j=0}^{m}(j^2\alpha_j-j\alpha_j)=\sum_{j=0}^{m}j^2\alpha_j-km,$$ so we want to minimize $\sum_{j=0}^{m}j^2\alpha_j$ subject to the restrictions (\ref{eq_1}) and (\ref{eq_2}). If $n\nmid km$, this is achieved by letting $\alpha_{\lfloor\frac{km}{n}\rfloor}=n-r$ and $\alpha_{\lceil\frac{km}{n}\rceil}=r$, with all other $\alpha_j$'s equal to $0$. If $n\mid km$, we let $\alpha_{\frac{km}{n}}=n$ with all other $\alpha_j$'s equal to $0$. Indeed, it is easy to see that the above choice of $\alpha_0,\ldots,\alpha_m$ satisfies both (\ref{eq_1}) and (\ref{eq_2}). Now, let $\alpha_0,\ldots,\alpha_m$ be some other choice of the $\alpha_j$'s that also satisfy both (\ref{eq_1}) and (\ref{eq_2}). We will show that the function $f(\alpha_0,\ldots,\alpha_m)=\sum_{j=0}^{m}j^2\alpha_j$ can be decreased with a different choice of $\alpha_0,\ldots,\alpha_m$. Suppose $\alpha_i\neq 0$ for some $i\neq\lfloor\frac{km}{n}\rfloor,\lceil\frac{km}{n}\rceil$, and assume that $i<\lfloor\frac{km}{n}\rfloor$ (the other case where $i>\lceil\frac{km}{n}\rceil$ is similar). Since the $\alpha_j$'s satisfy both (\ref{eq_1}) and (\ref{eq_2}), there must be some $i_1$ with $i_1\geq\lceil\frac{km}{n}\rceil$ (the inequality is strict if $n\mid km$) such that $\alpha_{i_1}\neq 0$. Then if we decrease $\alpha_i$ and $\alpha_{i_1}$ each by one, and increase $\alpha_{i+1}$ and $\alpha_{i_1-1}$ each by one, constraints (\ref{eq_1}) and (\ref{eq_2}) continue to be satisfied.
Furthermore, \begin{align*} f(&\alpha_1,\ldots,\alpha_i,\alpha_{i+1},\ldots,\alpha_{i_1-1},\alpha_{i_1},\ldots,\alpha_m)\\ &-f(\alpha_1,\ldots,\alpha_i-1,\alpha_{i+1}+1,\ldots,\alpha_{i_1-1}+1,\alpha_{i_1}-1,\ldots,\alpha_m) \\ &=\ i^2-(i+1)^2-(i_1-1)^2+i_1^2 = 2i_1-2i-2>0 \end{align*} since $i_1\geq\lfloor\frac{km}{n}\rfloor+1>i+1$. This proves the claim that the choice of $\alpha_{\lfloor\frac{km}{n}\rfloor}=n-r$ and $\alpha_{\lceil\frac{km}{n}\rceil}=r$ minimizes $f$. Therefore, we can find non-negative integers $\alpha_0,\ldots,\alpha_m$ satisfying all three conditions above if and only if $$(n-r)\left\lfloor\frac{km}{n}\right\rfloor^2+r\left\lceil\frac{km}{n}\right\rceil^2-km\leq tm(m-1),$$ as desired. \qed \end{proof} \begin{remark} For fixed $k$ and $t$, if $n$ is sufficiently large, then the inequality in Proposition \ref{larger_n} will be true for all $m$. Thus, the above proposition is only interesting for values of $n$ that are not too large. \end{remark} \subsection{Asymptotic Bounds} The study of Steiner systems has a long history, dating back to the mid-19th century work on triple block designs by Pl\"{u}cker \cite{Plucker[35]}, Kirkman \cite{Kirkman[47]}, Steiner \cite{Steiner[53]}, and Reiss \cite{Reiss[59]}. The term Steiner (triple) systems was coined in 1938 by Witt \cite{Witt[38]}. A Steiner triple system is an arrangement of a set of elements in triples such that each pair of elements is contained in exactly one triple. Steiner systems have strong connections to a wide range of topics, including statistics, linear coding, finite group theory, finite geometry, combinatorial design, experimental design, storage systems design, wireless communication, low-density parity-check code design, distributed storage, batch codes, and low-redundancy private information retrieval. For a broader introduction to the topic, we refer the interested reader to~\cite{Wilson[03]} (also see \cite{Tri[99],Charles[06]}).
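Since the Fano plane (\Cref{fano_plane}) is the Steiner triple system $S(2,3,7)$, the defining property, that every pair of points lies in exactly one triple, is easy to verify mechanically:

```python
from itertools import combinations

# The Fano plane as a Steiner triple system S(2,3,7):
# seven 3-element blocks over the point set {1,...,7}.
fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
        {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# every 2-element subset of {1,...,7} lies in exactly one block
for pair in combinations(range(1, 8), 2):
    assert sum(set(pair) <= block for block in fano) == 1

# the number of blocks matches binom(7,2)/binom(3,2) = 21/3 = 7
assert len(fano) == 7
```

The final assertion is the block-count formula $\binom{n}{t}/\binom{k}{t}$ from the divisibility proposition below, instantiated at $t=2$, $k=3$, $n=7$.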
In this section, we will see how at most $t$-intersecting families are related to Steiner systems. Using a remarkable recent result from Keevash \cite{Keevash[19]} about the existence of Steiner systems with certain parameters, we obtain an asymptotic bound for the maximum size of at most $t$-intersecting families. \begin{definition} \emph{A Steiner system $S(t,k,n)$, where $t\leq k\leq n$, is a family $\mathcal{S}$ of subsets of $[n]$ such that \begin{enumerate} \item $|A|=k$ for all $A\in\mathcal{S}$, \item any $t$-element subset of $[n]$ is contained in exactly one set in $\mathcal{S}$. \end{enumerate} The elements of $\mathcal{S}$ are known as blocks.} \end{definition} From the above definition, it is clear that there exists a family $\mathcal{H}$ that achieves equality in Proposition \ref{simple_bound} if and only if $S(t+1,k,n)$ exists. It is easy to derive the following well-known necessary condition for the existence of a Steiner system with given parameters: \begin{proposition} If $S(t,k,n)$ exists, then $\binom{n-i}{t-i}$ is divisible by $\binom{k-i}{t-i}$ for all $0\leq i\leq t$, and the number of blocks in $S(t,k,n)$ is equal to $\binom{n}{t}/\binom{k}{t}$. \end{proposition} In 2019, Keevash \cite{Keevash[19]} proved the following result, providing a partial converse to the above, and answering in the affirmative a longstanding open problem in the theory of designs. \begin{theorem}[\cite{Keevash[19]}] \label{keevash} For any $k,t\in\mathbb{Z}^+$ with $t\leq k$, there exists $n_0(k,t)$ such that for all $n\geq n_0(k,t)$, a Steiner system $S(t,k,n)$ exists if and only if $$\binom{k-i}{t-i}\text{ divides }\binom{n-i}{t-i}\quad\text{for all }i=0,1,\ldots,t-1.$$ \end{theorem} Using this result, we will derive asymptotic bounds for the maximum size of an at most $t$-intersecting and $k$-uniform family. \begin{proposition} \label{asymptotic_bound} Let $k,t\in\mathbb{Z}^+$ with $t<k$, and $C<1$ be any positive real number.
\begin{enumerate}[label = {(\roman*)}] \item There exists $n_1(k,t,C)$ such that for all integers $n\geq n_1(k,t,C)$, there is an at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[n]}$ with $$|\mathcal{H}|\geq\frac{Cn^{t+1}}{k(k-1)\cdots(k-t)}.$$ \item For all sufficiently large $n$, $$\frac{Cn^{t+1}}{k(k-1)\cdots(k-t)}\leq\varpi(n,k,t)<\frac{n^{t+1}}{k(k-1)\cdots(k-t)}.$$ In particular, $$\varpi(n,k,t)\sim\frac{n^{t+1}}{k(k-1)\cdots(k-t)}.$$ \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[label = {(\roman*)}] \item Let $t'=t+1$. By Theorem \ref{keevash}, there exists $n_0(k,t')$ such that for all $N\geq n_0(k,t')$, a Steiner system $S(t',k,N)$ exists if \begin{equation} \tag{$\ast$} \label{keevash_condition} \binom{k-i}{t'-i}\text{ divides }\binom{N-i}{t'-i}\quad\text{for all }i=0,1,\ldots,t'-1. \end{equation} Suppose $n$ is sufficiently large. Let $n'\leq n$ be the largest integer such that (\ref{keevash_condition}) is satisfied with $N=n'$. Since \begin{align*} &\binom{k-i}{t'-i}\text{ divides }\binom{N-i}{t'-i} \\ \iff\ &(k-i)\cdots(k-t'+1)\mid(N-i)\cdots(N-t'+1), \end{align*} all $N$ of the form $\lambda k(k-1)\cdots(k-t'+1)+t'-1$ with $\lambda\in\mathbb{Z}$ will satisfy (\ref{keevash_condition}). Hence, $n-n'\leq k(k-1)\cdots(k-t'+1)$. By our choice of $n'$, there exists a Steiner system $S(t',k,n')$. This Steiner system is an at most $t$-intersecting and $k$-uniform set family, defined on the universe $[n']\subseteq [n]$, such that \begin{align*} |S(t',k,n')|&=\frac{\binom{n'}{t'}}{\binom{k}{t'}}=\frac{n'(n'-1)\cdots(n'-t'+1)}{k(k-1)\cdots(k-t'+1)} \\ &\geq\frac{(n-\alpha)(n-\alpha-1)\cdots(n-\alpha-t'+1)}{k(k-1)\cdots(k-t'+1)}, \end{align*} where $\alpha=\alpha(k,t')=k(k-1)\cdots(k-t'+1)$ is independent of $n$.
Since $C<1$, there exists $n_2(k,t',C)$ such that for all $n\geq n_2(k,t',C)$, $$\frac{(n-\alpha)(n-\alpha-1)\cdots(n-\alpha-t'+1)}{n^{t'}}\geq C,$$ from which it follows that $$|S(t',k,n')|\geq\frac{Cn^{t'}}{k(k-1)\cdots(k-t'+1)}=\frac{Cn^{t+1}}{k(k-1)\cdots(k-t)}$$ for all sufficiently large $n$. From the above argument, we see that we can pick $$n_1(k,t,C)=\max\left(n_0(k,t')+\alpha(k,t'), n_2(k,t',C)\right). $$ \item By Proposition \ref{simple_bound}, $$\varpi(n,k,t)\leq\frac{\binom{n}{t+1}}{\binom{k}{t+1}}=\frac{n(n-1)\cdots(n-t)}{k(k-1)\cdots(k-t)}<\frac{n^{t+1}}{k(k-1)\cdots(k-t)}.$$ The other half of the inequality follows immediately from (i). \end{enumerate} \qed \end{proof} \begin{proposition} \label{asymptotic_bound_for_maximally_cover_free} Let $k,t\in\mathbb{Z}^+$ with $t<k-1$, and $C<1$ be any positive real number. Then for all sufficiently large $N$, \begin{enumerate}[label = {(\roman*)}] \item there exists a maximally cover-free, at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[N]}$ with $|\mathcal{H}|\geq CN$, \item the maximum size $\nu(N,k,t)$ of a maximally cover-free, at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[N]}$ satisfies $$CN\leq\nu(N,k,t)<N.$$ \end{enumerate} \end{proposition} \begin{proof} We note that (ii) follows almost immediately from (i). Let us prove (i). Fix $C_0$ such that $C<C_0<1$. By Propositions \ref{maximally_cover_free} and \ref{asymptotic_bound}, for all integers $n\geq n_1(k-1,t,C_0)$, there exists a maximally cover-free, at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{\left[n+\frac{n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}\right]}$ with $$|\mathcal{H}|\geq\frac{C_0n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}.$$ Since $C<C_0$, there exist $\delta>1$ and $\varepsilon>0$ such that $C_0>\delta(1+\varepsilon)C$.
Given $N$, let $n\in\mathbb{Z}^+$ be maximum such that $$n+\frac{n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}\leq N.$$ Assume that $N$ is sufficiently large so that $n\geq n_1(k-1,t,C_0)$. Then, by the above, there is a maximally cover-free, at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[N]}$ so that $$|\mathcal{H}|\geq\frac{C_0n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}.$$ Since $n$ is maximal, we have $$N<(n+1)+\frac{(n+1)^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}.$$ If $N$ (and thus $n$) is sufficiently large such that $$(n+1)<\frac{\varepsilon(n+1)^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}\quad\text{and}\quad\left(1+\frac{1}{n}\right)^{t+1}<\delta,$$ then $$N<\frac{(1+\varepsilon)(n+1)^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}<\frac{\delta(1+\varepsilon)n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}$$ and it follows that $$|\mathcal{H}|\geq\frac{C_0n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}>\frac{C_0N}{\delta(1+\varepsilon)}>CN.$$ \qed \end{proof} \subsection{An Explicit Construction} While Proposition \ref{asymptotic_bound} provides an answer for the maximum size of an at most $t$-intersecting and $k$-uniform family for large enough $n$, we cannot explicitly construct such set families since Theorem \ref{keevash} (and hence Proposition \ref{asymptotic_bound}) is nonconstructive. In this section, we will look at a method that explicitly constructs set families with larger parameters from set families with smaller parameters. Fix a positive integer $t$. For an at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[n]}$, define $$s(\mathcal{H})=\frac{k|\mathcal{H}|}{n}.$$ $s(\mathcal{H})$ is a measure of the ``relative size'' of $\mathcal{H}$ that takes into account the parameters $k$ and $n$. Note that the maximum possible value of $|\mathcal{H}|$ should increase with larger $n$ and decrease with larger $k$, hence $s(\mathcal{H})$ is a reasonable measure of the ``relative size'' of $\mathcal{H}$. 
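To make the doubling construction used in the proof of the next proposition concrete, the following sketch starts from the Fano plane ($k=3$, $t=1$, $n=7$, $|\mathcal{H}|=7$, so $s(\mathcal{H})=3$) and performs one doubling step, checking uniformity, the intersection bound, and preservation of the relative size:

```python
from itertools import combinations

def relabel(H, tag):
    # place a copy of the family H on a fresh, disjoint universe
    return [frozenset((tag, x) for x in A) for A in H]

def double(H):
    # one step of the construction: A_{h,i} = B_h^(i) disjoint-union C_i^(h),
    # where G^(1..m) and H^(1..m) are pairwise-disjoint copies of H
    m = len(H)
    G = [relabel(H, ("G", l)) for l in range(m)]
    C = [relabel(H, ("H", l)) for l in range(m)]
    return [G[i][h] | C[h][i] for h in range(m) for i in range(m)]

def max_pairwise_intersection(H):
    return max(len(A & B) for A, B in combinations(H, 2))

# Fano plane: 3-uniform, exactly 1-intersecting, 7 sets over 7 points, s = 3
fano = [frozenset(s) for s in
        ((1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6),
         (2, 5, 7), (3, 4, 7), (3, 5, 6))]
H2 = double(fano)
assert len(H2) == 49 and all(len(A) == 6 for A in H2)  # m^2 sets, 2k-uniform
assert max_pairwise_intersection(H2) == 1              # still at most 1-intersecting
n2 = len(set().union(*H2))                             # universe: 2*7*7 = 98
assert 6 * len(H2) / n2 == 3.0                         # s(H_2) = s(H) = 3
```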
The following result shows that it is possible to construct a sequence of at most $t$-intersecting and $k_j$-uniform families $\mathcal{H}\subseteq 2^{[n_j]}$, where $k_j\rightarrow\infty$, such that all set families in the sequence have the same relative size. \begin{proposition} Let $\mathcal{H}\subseteq 2^{[n]}$ be an at most $t$-intersecting and $k$-uniform family. Then there exists a sequence of set families $\mathcal{H}_j$ such that \begin{enumerate}[label = {(\alph*)}] \item $\mathcal{H}_j$ is an at most $t$-intersecting and $k_j$-uniform set family, \item $s(\mathcal{H}_j)=s(\mathcal{H})$ for all $j$, \item $\lim_{j\rightarrow\infty}k_j=\infty$. \end{enumerate} \end{proposition} \begin{proof} We will define the set families $\mathcal{H}_j$ inductively. First, let $\mathcal{H}_1=\mathcal{H}$. Suppose we have defined $\mathcal{H}_j\subseteq 2^{[n_j]}$, an at most $t$-intersecting and $k_j$-uniform family for some $j\in\mathbb{Z}^+$. Let $m=|\mathcal{H}_j|$. Consider set families $\mathcal{G}^{(1)},\ldots,\mathcal{G}^{(m)},\mathcal{H}^{(1)},\ldots,\mathcal{H}^{(m)}$ defined on disjoint universes such that each $\mathcal{G}^{(\ell)}$ (and similarly, each $\mathcal{H}^{(\ell)}$) is isomorphic to $\mathcal{H}_j$. Write $$\mathcal{G}^{(\ell)}=\{B_1^{(\ell)},\ldots,B_m^{(\ell)}\},\quad \mathcal{H}^{(\ell)}=\{C_1^{(\ell)},\ldots,C_m^{(\ell)}\}.$$ Now, for $1\leq h,i\leq m$, define the sets $A_{h,i}=B_h^{(i)}\sqcup C_i^{(h)}$, and let $$\mathcal{H}_{j+1}=\{A_{h,i}: 1\leq h,i\leq m\}.$$ It is clear that $\mathcal{H}_{j+1}$ is a $2k_j$-uniform family defined over a universe of $2mn_j$ elements, and that $|\mathcal{H}_{j+1}|=m^2$. We claim that $\mathcal{H}_{j+1}$ is at most $t$-intersecting. 
Indeed, if $(h_1,i_1)\neq (h_2,i_2)$, then \begin{align*} |A_{h_1,i_1}\cap A_{h_2,i_2}|&=|(B_{h_1}^{(i_1)}\sqcup C_{i_1}^{(h_1)})\cap (B_{h_2}^{(i_2)}\sqcup C_{i_2}^{(h_2)})| \\ &=|B_{h_1}^{(i_1)}\cap B_{h_2}^{(i_2)}|+|C_{i_1}^{(h_1)}\cap C_{i_2}^{(h_2)}| \\ &= \begin{cases} |C_{i_1}^{(h_1)}\cap C_{i_2}^{(h_2)}|\leq t & \text{if }h_1=h_2\text{ and }i_1\neq i_2, \\ |B_{h_1}^{(i_1)}\cap B_{h_2}^{(i_2)}|\leq t & \text{if }h_1\neq h_2\text{ and }i_1=i_2, \\ 0 & \text{if }h_1\neq h_2\text{ and }i_1\neq i_2. \end{cases} \end{align*} Finally, $$s(\mathcal{H}_{j+1})=\frac{k_{j+1}|\mathcal{H}_{j+1}|}{n_{j+1}}=\frac{2k_jm^2}{2mn_j}=\frac{k_j|\mathcal{H}_j|}{n_j}=s(\mathcal{H}_j).$$ \qed \end{proof} \begin{remark} In the above proposition, $n_j$, $k_j$ and $|\mathcal{H}_j|$ grow with $j$. Clearly, given a family $\mathcal{H}$ as in the above proposition, it is also possible to construct a sequence of set families $\mathcal{H}_j$ such that $s(\mathcal{H}_j)=s(\mathcal{H})$ for all $j$, where $n_j$ and $|\mathcal{H}_j|$ grow with $j$, while $k_j$ stays constant. It is natural to ask, therefore, if it is possible to construct a sequence of set families satisfying $s(\mathcal{H}_j)=s(\mathcal{H})$, where $n_j$ and $k_j$ grow with $j$, but $|\mathcal{H}_j|$ stays constant. In fact, this is not always possible. Indeed, let $\mathcal{H}$ be the Fano plane, then $t=1$, $n=7$, $k=3$ and $|\mathcal{H}|=7$. Note that $\mathcal{H}$ satisfies Proposition \ref{larger_n} with equality, i.e. 
$$\frac{(k|\mathcal{H}|)^2}{n}=k|\mathcal{H}|+t(|\mathcal{H}|^2-|\mathcal{H}|).$$ If we let $n'=\lambda n$ and $k'=\lambda k$ for some $\lambda>1$, then $$\frac{(k'|\mathcal{H}|)^2}{n'}=\lambda\frac{(k|\mathcal{H}|)^2}{n}=\lambda\left(k|\mathcal{H}|+t(|\mathcal{H}|^2-|\mathcal{H}|)\right)>k'|\mathcal{H}|+t(|\mathcal{H}|^2-|\mathcal{H}|).$$ Hence, by Proposition \ref{larger_n}, there is no $k'$-uniform and at most $t$-intersecting family $\mathcal{H}'\subseteq 2^{[n']}$ such that $|\mathcal{H}'|=|\mathcal{H}|=7$. \end{remark} \section{Generating Rounded Gaussians from Physical Communications}\label{Sec6} In this section, we describe our procedure, called rounded Gaussians from physical (layer) communications (RGPC), to generate deterministic errors from a rounded Gaussian distribution --- which we later prove to be sufficiently independent in specific settings. RGPC consists of the following two subprocedures: \begin{itemize} \item Hypothesis generation: a protocol to generate a linear regression hypothesis from the training data, which, in our case, consists of the physical layer communications between participating parties. \item Rounded Gaussian error generation: this procedure allows us to use the linear regression hypothesis --- generated by using physical layer communications as training data --- to derive deterministic rounded Gaussian errors. That is, this process samples from a rounded Gaussian distribution in a manner that is (pseudo)random to a polynomial external/internal adversary but is deterministic to the authorized parties. \end{itemize} \subsection{Setting and Central Idea}\label{broad} For the sake of intelligibility, we begin by giving a brief overview of our central idea. Let there be a set of $n \geq 2$ parties, $\mathcal{P} = \{P_i\}_{i=1}^n$. All parties agree upon a function $f(x) = \beta_0 + \beta_1 x,$ with intercept $\beta_0 \leftarrow \mathbb{Z}$ and slope $\beta_1 \leftarrow \mathbb{Z}$.
Let $\mathcal{H} \subseteq 2^{\mathcal{P}}$ be a family of sets such that each set $H_i \in \mathcal{H}$ forms a star graph $\partial_i$ wherein each party is connected to a central hub $C_i \notin H_i$ (for all $i \in [|\mathcal{H}|]$) via two channels: one Gaussian and another error corrected. If $\mathcal{H}$ is $k$-uniform and at most $t$-intersecting, then each star in the interconnection graph formed by the sets $H_i \in \mathcal{H}$ contains exactly $k$ members and $2k$ channels such that $|\partial_i \cap \partial_j| \leq t$. During the protocol, each party $P_j$ sends out message pairs of the form $(x_j, f(x_j))$, where $x_j \leftarrow \mathbb{Z}$ and $f$ is a randomly selected function of a specific type (more on this later), to the central hubs of all stars that it is a member of, such that: \begin{itemize} \item $f(x)$ is sent over the Gaussian channel, \item $x$ is sent over the error corrected channel. \end{itemize} For the sake of simplicity, in this section, we only consider a single star. Due to the guaranteed errors occurring in the Gaussian channel, the messages recorded at each central hub $C_i$ are of the form: $y = f(x) + \varepsilon_x,$ where $\varepsilon_x$ belongs to some Gaussian distribution $\mathcal{N}(0, \sigma^2)$ with mean zero and standard deviation $\sigma$ (in our experiments, which are discussed in \Cref{Sim}, we use $\sigma \in [10,300]$). $C_i$ forwards $\{x, y\}$ to all parties over the respective error corrected channels in $\partial_i$. In our algorithm, we use least squares linear regression, which minimizes the sum of the squared residuals. We know that the hypothesis generated by linear regression is of the form: $h(x) = \hat{\beta}_0 + \hat{\beta}_1 x$. Thus, the statistical error with respect to our target function is: \begin{equation}\label{hypoEq} \bar{e}_x = |y - h(x)|.
\end{equation} Due to the nature of the physical layer errors and independent channels, we know that the errors $\varepsilon_x$ are random and independent. Thus, it follows that for restricted settings, the error terms $\bar{e}_{x_i}$ and $\bar{e}_{x_j}$ are almost independent for all $x_i \neq x_j$, and are expected to belong to a Gaussian distribution. Next, we round $\bar{e}_x$ to the nearest integer, $e_x = \lfloor \bar{e}_x \rceil$, to obtain the final error, which: \begin{itemize} \item is determined by $x$, \item belongs to a rounded Gaussian distribution. \end{itemize} We know from~\cite{Reg[05],Albre[13],Gold[10],Duc[15],Hul[17]} that --- with appropriate parameters --- rounded Gaussians satisfy the hardness requirements for multiple LWE-based constructions. Next, we discuss our RGPC protocol in detail. \begin{note}\label{ImpNote} With a sufficiently large number of messages, $f(x)$ can be very closely approximated by the linear regression hypothesis $h(x)$. Therefore, with a suitable choice of parameters, it is quite reasonable to expect that the error distribution is Gaussian (which is indeed the case --- see \Cref{Lemma1}, where we use drowning/smudging to argue about the insignificance of the negligible uniform error introduced by linear regression analysis). Considering this, we also examine the more interesting case wherein the computations are performed in $\mathbb{Z}_m$ (for some $m \in \mathbb{Z}^+ \setminus \{1\}$) instead of over $\mathbb{Z}$. However, our proofs and arguments are presented for the case where the computations are performed over $\mathbb{Z}$. We leave adapting the proofs and arguments to the case of computations over $\mathbb{Z}_m$ as an open problem.
\end{note} \subsection{Hypothesis Generation from Physical Layer Communications}\label{Gen} In this section, we describe hypothesis generation from physical layer communications, which allows us to generate an optimal linear regression hypothesis, $h(x)$, for the target function $f(x) + \varepsilon_x$. As mentioned in \Cref{ImpNote}, we consider the case wherein the error computations are performed in $\mathbb{Z}_m$. As described in \Cref{broad}, the linear regression data for each subset of parties $H_i \in \mathcal{H}$ consists of the messages exchanged within the star graph $\partial_i$ formed by the parties in $H_i$. \subsubsection{Assumptions.}\label{assumption} We assume that the following conditions hold: \begin{enumerate} \item The value of the integer modulus $m$: \begin{itemize} \item is either known beforehand, or \item can be derived from the target function. \end{itemize} \item \label{assume2} The size of the dataset, i.e., the total number of recorded physical layer messages, is reasonably large, so that there are enough data points to accurately fit linear regression on any function period. In our experiments, we set it to $2^{16}$ messages. \item \label{assume} For a dataset $\mathcal{D} = \{(x_i,y_i)\}~(i \in [\ell])$ of unique (function input, received message) pairs, it holds for the slope, $\beta_1$, of $f(x)$ that $\ell/\beta_1$ is superpolynomial. For our experiments, we use values of $\beta_1$ with $\ell/\beta_1\geq 100$. \end{enumerate} \subsubsection{Setup.} Recall that the goal of linear regression is to find subset(s) of data points that can be used to generate a hypothesis $h(x)$ to approximate the target function, which in our case is $f(x) + \varepsilon_x$. Then, we extrapolate it to the entire dataset. However, since reduction modulo $m$ is periodic, there is no explicit linear relationship between $x \leftarrow \mathbb{Z}$ and $y = f(x) + \varepsilon_x \bmod m$, even without the error term $\varepsilon_x$.
Thus, we cannot directly apply linear regression to the entire dataset $\mathcal{D} = \{(x_i,y_i)\}~(i \in [\ell])$ and expect meaningful results unless $\beta_0=0$ and $\beta_1 = 1$. We arrange the dataset, $\mathcal{D}$, in ascending order with respect to the $x_i$ values, i.e., for $1 \leq i < j \leq \ell$ and all $x_i, x_j \in \mathcal{D}$, it holds that: $x_i < x_j$. Let $\mathcal{S} = \{x_i\}_{i=1}^\ell$ denote the ordered set with $x_i$ $(\forall i \in [\ell])$ arranged in ascending order. Observe that the slope of $y = f(x) + \varepsilon_x \bmod m$ is directly proportional to the number of periods on any given range, $[x_i, x_j]$. For example, observe the slope in \Cref{Fig1}, which depicts the scatter plot for $y = 3x + \varepsilon_x \bmod 12288$ with three periods. Therefore, in order to find a good linear fit for our target function on a subset of the dataset that lies inside the given range, $[x_i, x_j]$, we want to correctly estimate the length of a single period. Consequently, our aim is to find a range $[x_i, x_j]$ for which the following is minimized: \begin{equation}\label{eq} \Big| \hat{\beta}_1 - \dfrac{m}{x_j - x_i} \Big|, \end{equation} where $\hat{\beta}_1$ denotes the slope for our linear regression hypothesis $h(x) = \hat{\beta}_0 + \hat{\beta}_1 x$ computed on the subset with $x$ values in $[x_i,x_j]$. \begin{figure}[h!] \centering \includegraphics[scale=.084]{1_8x.png} \caption{Scatter plot for $y = 3 x + \varepsilon_x \bmod 12288$ (three periods)}\label{Fig1} \end{figure} \subsubsection{Generating Optimal Hypothesis.} The following procedure describes our algorithm for finding the optimal hypothesis $h(x)$ and the target range $[x_i, x_j]$ that satisfies \Cref{eq} for $\beta_0=0$. When $\beta_0$ is not necessarily $0$, a small modification to the procedure (namely, searching over all intervals $[x_i, x_j]$, instead of searching over only certain intervals as described below) is needed.
Let $\kappa$ denote the total number of periods; then it follows from Assumption~\ref{assume} (given in Section~\ref{assumption}) that $\kappa \leq \lceil \ell/100 \rceil$. Let $\delta_{\kappa,i} = |\hat{\beta}_1(\kappa, i) - \kappa|$, where $\hat{\beta}_1(\kappa, i)$ denotes that $\hat{\beta}_1$ is computed over the range $\left[x_{\left\lfloor(i-1)\ell/\kappa\right\rfloor+1},x_{\left\lfloor i\ell/\kappa\right\rfloor}\right]$. \begin{enumerate} \item Initialize the number of periods with $\kappa = 1$ and calculate $\delta_{1,1} = |\hat{\beta}_1(1,1) - 1|$. \item Compute the $\delta_{\kappa,i}$ values for all $1 < \kappa \leq \lceil \ell/100 \rceil$ and $i \in [\kappa]$. For instance, $\kappa = 2$ denotes that we consider two ranges: $\hat{\beta}_1 (2,1)$ is calculated on $\left[ x_1, x_{\lfloor \ell/\kappa \rfloor}\right]$ and $\hat{\beta}_1 (2,2)$ on $\left[x_{\lfloor \ell/\kappa \rfloor +1}, x_\ell\right]$. Hence, we compute $\delta_{2,i}$ for these two ranges. Similarly, $\kappa =3$ denotes that we consider three ranges $\left[ x_1, x_{\lfloor \ell/\kappa \rfloor}\right]$, $\left[x_{\lfloor \ell/\kappa \rfloor +1}, x_{\lfloor 2\ell/\kappa \rfloor}\right]$ and $\left[x_{\lfloor 2\ell/\kappa \rfloor +1}, x_\ell\right]$, and we compute $\hat{\beta}_1(3,i)$ and $\delta_{3,i}$ over these three ranges. Hence, $\delta_{\kappa,i}$ values are computed for all $(\kappa,i)$ that satisfy $1 \leq i \leq \kappa \leq \lceil \ell/100 \rceil$. \item Identify the optimal value $\delta = \min_{\kappa,i}(\delta_{\kappa,i})$, which is the minimum over all $\kappa \in [\lceil \ell/100 \rceil]$ and $i \in [\kappa]$. \item After finding the minimal $\delta$, output the corresponding (optimal) hypothesis $h(x)$. \end{enumerate} The above algorithm is essentially a grid search over $\kappa$ and $i$, with the $\delta_{\kappa,i}$ value as the performance metric to be minimized.
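The procedure above can be sketched in code. The following is a minimal Python illustration for the $\beta_0 = 0$ case; the `ols` helper and all parameter values ($m$, $\beta_1$, $\sigma$, the number of points, the small cap on $\kappa$, and the evenly spaced inputs) are ours for illustration and do not reproduce the experimental setup of \Cref{Sim}:

```python
import random

def ols(points):
    """Least squares fit y = b0 + b1*x over a list of (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    b1 = sxy / sxx
    return my - b1 * mx, b1

def optimal_hypothesis(data, max_periods):
    """Grid search over (kappa, i): fit on the i-th of kappa equal
    chunks of the x-sorted data and minimize delta = |b1_hat - kappa|."""
    data = sorted(data)
    ell = len(data)
    best = None
    for kappa in range(1, max_periods + 1):
        for i in range(1, kappa + 1):
            lo, hi = (i - 1) * ell // kappa, i * ell // kappa
            b0_hat, b1_hat = ols(data[lo:hi])
            delta = abs(b1_hat - kappa)
            if best is None or delta < best[0]:
                best = (delta, b0_hat, b1_hat)
    return best

# Illustrative parameters: the inputs span [0, m), so the true slope
# beta_1 equals the number of periods kappa of y = beta_1*x mod m.
random.seed(1)
m, beta1, sigma, ell = 1200, 3, 10, 20000
data = [(x, (beta1 * x + random.gauss(0, sigma)) % m)
        for x in (j * m / ell for j in range(ell))]

delta, b0_hat, b1_hat = optimal_hypothesis(data, max_periods=8)
print(b1_hat)  # close to beta1 = 3
```

The minimal $\delta$ is attained at $\kappa = 3$, the chunk boundaries of which line up with the period boundaries of the sawtooth.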
\begin{center} \textbf{Grid search: more details} \end{center} \begin{myenv} Grid search is an approach used for hyperparameter tuning. It methodically builds and evaluates a model for each combination of parameters. Due to its ease of implementation and parallelization, grid search has prevailed as the de facto standard for hyperparameter optimization in machine learning, especially in lower dimensions. For our purpose, we tune two parameters, $\kappa$ and $i$. Specifically, we perform grid search to find hypotheses $h(x)$ for all $\kappa$ and $i$ such that $\kappa \in [\lceil \ell/100 \rceil]$ and $i \in [\kappa]$. The optimal hypothesis is the one with the smallest value of the performance metric $\delta_{\kappa,i}$. \end{myenv} \subsection{Simulation and Testing}\label{Sim} \begin{figure} \centering \includegraphics[scale=.080]{4_1_4x.png} \caption{Distribution plot of $\bar{e}_x$ for $y = 546 x + \varepsilon_x \bmod 12288$. Slope estimate: $\hat{\beta}_1 = 551.7782$.}\label{Fig2} \end{figure} We tested our RGPC algorithm with varying values of $m$ and $\beta_1$ for the following functions: \begin{itemize} \item $f(x) = \beta_0 + \beta_1 x,$ \item $f(x) = \beta_0 + \beta_1 \sqrt{x},$ \item $f(x) = \beta_0 + \beta_1 x^2,$ \item $f(x) = \beta_0 + \beta_1 \sqrt[3]{x},$ \item $f(x) = \beta_0 + \beta_1 \ln (x+1).$ \end{itemize} To generate the training data, we simulated the channel noise, $\varepsilon_x$, as a random Gaussian noise (introduced by the Gaussian channel), which we sampled from various distributions with zero mean and standard deviation ranging from $10$ to $300$; and we used values of the integer modulus $m$ up to $20000$. Channel noise was computed by rounding $\varepsilon_x$ to the nearest integer and reducing the result modulo $m$. For each function, we simulated $2^{16}$ (input, output) pairs, exchanged over Gaussian channels, i.e., for each function, the dataset $\mathcal{D} = \{(x_i,y_i)\}~(i \in [2^{16}])$ contains $2^{16}$ unique pairs $(x_i, y_i)$.
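The data-generation step of these simulations can be sketched as follows (a minimal Python sketch; the parameter values below are illustrative and smaller than those used in our experiments):

```python
import random

random.seed(7)

# Illustrative parameters; the experiments use sigma in [10, 300],
# moduli m up to 20000, and 2^16 recorded (input, output) pairs.
m, beta0, beta1, sigma = 12288, 0, 546, 100
num_samples = 4096

dataset, seen = [], set()
while len(dataset) < num_samples:
    x = random.randrange(m)
    if x in seen:                        # keep the x values unique
        continue
    seen.add(x)
    noise = round(random.gauss(0, sigma)) % m  # rounded channel noise
    y = (beta0 + beta1 * x + noise) % m        # message seen at the hub
    dataset.append((x, y))

print(len(dataset))  # 4096
```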
As expected, given the dataset $\mathcal{D}$ with data points $(x_i, y_i)$, where $y_i = f(x_i) + \varepsilon_i \bmod m$, our algorithm always converged to the optimal range, yielding close approximations for the target function with deterministic errors, $\bar{e}_{x} = |y - h(x)| \bmod m$. \Cref{Fig2} shows a histogram of the errors $\bar{e}_x$ generated by our RGPC protocol --- with our training data --- for the target (linear) function $y = 546 x + \varepsilon_x \bmod 12288$. The errors indeed appear to belong to a truncated Gaussian distribution, bounded by the modulus $12288$ from both tails. \begin{figure} \centering \includegraphics[scale=.115]{2_8x.png} \caption{Distribution plot of $\bar{e}_x$ for $y = 240 \sqrt{x} + \varepsilon_x \bmod 12288$. Slope estimate: $\hat{\beta}_1 = 239.84$.}\label{Fig3} \end{figure} Moving on to the cases wherein the independent variable $x$ and the dependent variable $y$ have a nonlinear relation: the most representative example of such a relation is the power function $f(x)=\beta_1 x^\vartheta$, where $\vartheta \in \mathbb{R}$. We know that nonlinearities between variables can sometimes be linearized by transforming the independent variable. Hence, we applied the following transformation: if we let $x_{\upsilon}=x^\vartheta$, then $f_\upsilon(x_\upsilon) = \beta_1 x_\upsilon = f(x)$ is linear in $x_\upsilon$. This can now be solved by applying our hypothesis generation algorithm for linear functions. \Cref{Fig3,Fig4,Fig6,Fig7} show the histograms of the errors $\bar{e}_x$ generated by our training datasets for the various nonlinear functions from the list given at the beginning of \Cref{Sim}. Again, the errors for these nonlinear functions appear to belong to truncated Gaussian distributions, bounded by their respective moduli from both tails. \begin{figure} \centering \includegraphics[scale=.115]{3_8x.png} \caption{Distribution plot of $\bar{e}_x$ for $y = 125 x^2 + \varepsilon_x \bmod 10218$.
Slope estimate: $\hat{\beta}_1 = 124.51$.}\label{Fig4} \end{figure} \begin{figure} \centering \includegraphics[scale=.115]{5_8x.png} \caption{Distribution plot of $\bar{e}_x$ for $y = 221 \sqrt[3]{x} + \varepsilon_x \bmod 11278$. Slope estimate: $\hat{\beta}_1 = 221.01$.}\label{Fig6} \end{figure} \begin{figure} \centering \includegraphics[scale=.115]{6_8x.png} \caption{Distribution plot of $\bar{e}_x$ for $y = 53 \ln (x+1) + \varepsilon_x \bmod 8857$. Slope estimate: $\hat{\beta}_1 = 54.48$.}\label{Fig7} \end{figure} \subsection{Complexity}\label{Time} Let the size of the dataset collected by recording the physical layer communications be $\ell$. Then, the complexity for least squares linear regression is $\mathrm{\Theta}(\ell)$ additions and multiplications. It follows from Assumption~\ref{assume} (from \Cref{assumption}) that $\ell^2$ is an upper bound on the maximum number of evaluations required for grid search. Therefore, the overall asymptotic complexity of our algorithm to find the optimal hypothesis, and thereafter to generate deterministic rounded Gaussian errors, is $O(\ell^3)$. In comparison, the complexity of performing least squares linear regression on a dataset $\{(x_i, y_i)\}$ that has not been reduced modulo $m$ is simply $\mathrm{\Theta}(\ell)$ additions and multiplications. \subsection{Error Analysis} Before moving ahead, we recommend that the reader revisit \Cref{ImpNote}. Suppose that there are a total of $\ell$ samples, and that the difference of two values modulo $m$ is always represented by an element in $(-m/2,(m+1)/2]$. We make the further assumption that there exists a constant $b\geq 1$ such that the $x_i$ values satisfy: \begin{equation} \label{assumption_on_x} \tag{$\dagger$} \frac{(x_i-\bar{x})^2}{\sum_{j=1}^{\ell} (x_j-\bar{x})^2}\leq\frac{b}{\ell} \end{equation} for all $i=1,\ldots,\ell$, where $\bar{x}=\sum_{j=1}^{\ell}\frac{x_j}{\ell}$.
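Assumption (\ref{assumption_on_x}) is easy to check numerically for uniformly random $x_i$; the following Python sketch (with illustrative parameters of our choosing) estimates the smallest admissible constant $b$:

```python
import random

random.seed(3)

# For x_i drawn uniformly from [0, m-1], the largest leverage ratio
# (x_i - xbar)^2 / sum_j (x_j - xbar)^2 should be at most b/ell for
# a small constant b, as required by assumption (dagger).
m, ell = 10000, 5000
xs = [random.randrange(m) for _ in range(ell)]
xbar = sum(xs) / ell
total = sum((x - xbar) ** 2 for x in xs)

b = ell * max((x - xbar) ** 2 for x in xs) / total
print(b < 4)  # True: b comes out close to 3 here
```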
Observe that, if $\ell-1$ divides $m-1$ and the $x_i$ values are $0,\frac{m-1}{\ell-1},\ldots,\frac{(\ell-1)(m-1)}{\ell-1}$, then $\sum_{j=1}^{\ell} (x_j-\bar{x})^2=\frac{\ell(\ell^2-1)(m-1)^2}{12(\ell-1)^2}$, and the numerator is bounded above by $\frac{(m-1)^2}{4}$, thus the above is satisfied with $b=3$. In general, by the strong law of large numbers, choosing a large enough number of $x_i$'s uniformly at random from $[0, m-1]$ will, with very high probability, yield $\bar{x}$ close to $\frac{m-1}{2}$ and $\frac{1}{\ell}{\sum_{j=1}^{\ell} (x_j-\bar{x})^2}$ close to $\frac{(m^2-1)}{12}$ (since $X\sim U(0, m-1)\implies \E(X)=\frac{m-1}{2}$ and $\var(X)=\frac{m^2-1}{12}$), hence (\ref{assumption_on_x}) will be satisfied with a small constant $b$, say with $b=4$. The dataset is $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^\ell$, where $y_i=f(x_i)+\varepsilon_i=\beta_0+\beta_1 x_i+\varepsilon_i$, with $\varepsilon_i\sim \mathcal{N}(0,\sigma^2)$. Suppose the regression line is given by $y=\hat{\beta_0}+\hat{\beta}_1x$. Then the error $\bar{e}_i$ is given by \begin{align*} \bar{e}_i&=(\hat{\beta_0}+\hat{\beta}_1x_i)-y_i=(\hat{\beta_0}+\hat{\beta}_1x_i)-(\beta_0+\beta_1 x_i+\varepsilon_i) \\ &=(\hat{\beta_0}-\beta_0)+(\hat{\beta_1}-\beta_1)x_i-\varepsilon_i. \end{align*} The joint distribution of the regression coefficients $\hat{\beta_0}$ and $\hat{\beta_1}$ is given by the following well known result: \begin{proposition} \label{regression_hypothesis_distribution} Let $y_1,y_2,\ldots,y_\ell$ be independently distributed random variables such that $y_i\sim \mathcal{N}(\alpha+\beta x_i,\sigma^2)$ for all $i=1,\ldots ,\ell$. 
If $\hat{\alpha}$ and $\hat{\beta}$ are the least square estimates of $\alpha$ and $\beta$ respectively, then: $$\begin{pmatrix} \hat{\alpha} \\ \hat{\beta} \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} \alpha \\ \beta \end{pmatrix} ,\, \sigma^2 \begin{pmatrix} \ell & \sum_{i=1}^{\ell} x_i \\ \sum_{i=1}^{\ell} x_i & \sum_{i=1}^{\ell} x_i^2 \end{pmatrix}^{-1} \right).$$ \end{proposition} Applying Proposition \ref{regression_hypothesis_distribution}, and using the fact that $\mathbf{X}\sim\mathcal{N}(\boldsymbol{\mu},\mathbf{\Sigma})\implies\mathbf{A}\mathbf{X}\sim\mathcal{N}(\mathbf{A}\boldsymbol{\mu},\mathbf{A}\mathbf{\Sigma}\mathbf{A}^T)$, we have \begin{align*} (\hat{\beta_0}-\beta_0)+(\hat{\beta_1}-\beta_1)x_i&\sim \mathcal{N}\left(0,\, \sigma^2\frac{\sum_{j=1}^\ell x_j^2-2x_i\sum_{j=1}^\ell x_j+\ell x_i^2}{\ell\sum_{j=1}^\ell x_j^2-(\sum_{j=1}^\ell x_j)^2} \right) \\ &= \mathcal{N}\left(0,\, \frac{\sigma^2}{\ell}\left(1+\frac{\ell(x_i-\bar{x})^2}{\sum_{j=1}^\ell (x_j-\bar{x})^2}\right) \right). \end{align*} Thus, by assumption (\ref{assumption_on_x}), the variance of $(\hat{\beta_0}-\beta_0)+(\hat{\beta_1}-\beta_1)x_i$ is bounded above by $(1+b)\frac{\sigma^2}{\ell}$. Since $Z\sim \mathcal{N}(0,1)$ satisfies $|Z|\leq 2.807034$ with probability $0.995$, by the union bound, $\bar{e}_i$ is bounded by $$|\bar{e}_i|\leq 2.807034\left(1+\sqrt{\frac{1+b}{\ell}}\right)\sigma$$ with probability at least $0.99$. \begin{note}\label{note1} Our protocol allows fine-tuning the Gaussian errors by tweaking the standard deviation $\sigma$ and mean $\mu$ for the Gaussian channel. Hence, in our proofs and arguments, we only use the term ``target rounded Gaussian''. \end{note} \begin{lemma}\label{Lemma1} Suppose that the number of samples $\ell$ is superpolynomial, and that \emph{(\ref{assumption_on_x})} is satisfied with some constant $b$. Then, the errors $e_i$ belong to the target rounded Gaussian distribution. 
\end{lemma} \begin{proof} Recall that the error $\bar{e}_i$ has two components: the first is the noise introduced by the Gaussian channel and the second is the error due to regression fitting. The first component is naturally Gaussian. The standard deviation for the second component is of order $\sigma/\sqrt{\ell}$. Hence, it follows from drowning/smudging that for a superpolynomial $\ell$, the error distribution for $\bar{e}_i$ is statistically indistinguishable from the Gaussian distribution to which the first component belongs. Therefore, it follows that the final output, $e_i = \lfloor \bar{e}_i \rceil$, of the RGPC protocol belongs to the target rounded Gaussian distribution (see \Cref{note1}). \qed \end{proof} Hence, our RGPC protocol generates errors, $e_{x} = \lfloor |y - h(x)| \rceil$, in a deterministic manner via a mapping defined by the hypothesis $h$; let $M_h: x \mapsto e_x$ be this mapping. \begin{lemma}\label{Obs1} For an external adversary $\mathcal{A}$, it holds that $M_h: x \mapsto e_x$ maps a random element $x \leftarrow \mathbb{Z}$ to a random element $e_x$ in the target rounded Gaussian distribution $\mathrm{\Psi}(0, \hat{\sigma}^2)$. \end{lemma} \begin{proof} It follows from \Cref{Lemma1} that $M_h$ outputs elements of the target rounded Gaussian distribution. Note the following straightforward observations: \begin{itemize} \item Since each coefficient of $f(x)$ is randomly sampled from $\mathbb{Z}_m$, $f(x)$ is a random linear function. \item The inputs $x \leftarrow \mathbb{Z}$ are sampled randomly. \item The Gaussian channel introduces noise $\varepsilon_x$ to $f(x)$, drawn i.i.d. from a Gaussian distribution $\mathcal{N}(0,\sigma^2)$. Hence, the receiving parties get a random element $f(x) + \varepsilon_x$. \item It follows that $\lfloor |f(x) + \varepsilon_x - h(x)| \rceil$ outputs a random element from the target rounded Gaussian $\mathrm{\Psi}(0, \hat{\sigma}^2)$.
\end{itemize} Hence, $M_h: x \mapsto e_x$ is a deterministic mapping from a random element $x \leftarrow \mathbb{Z}$ to a random element $e_x$ in the desired rounded Gaussian $\mathrm{\Psi}(0, \hat{\sigma}^2)$. So, given an input $x$, an external adversary $\mathcal{A}$ has no advantage in guessing $e_x$. \qed \end{proof} \section{Mutual Information Analysis}\label{Mutual} \begin{definition} \emph{Let $f_X$ be the probability density function (p.d.f.) of the continuous random variable $X$. Then, the \emph{differential entropy} of $X$ is $$H(X)=-\int f_X(x)\log f_X(x)\,dx.$$} \end{definition} \begin{definition} \emph{The \emph{mutual information}, $I(X;Y)$, of two continuous random variables $X$ and $Y$ with joint p.d.f.\ $f_{X,Y}$ and marginal p.d.f.s $f_X$ and $f_Y$, respectively, is $$I(X;Y)=\int\int f_{X,Y}(x,y)\log\left(\frac{f_{X,Y}(x,y)}{f_X(x)f_Y(y)}\right)\,dy\,dx.$$} \end{definition} From the above definitions, it is easy to prove that $I(X;Y)$ satisfies the equality: $$I(X;Y)=H(X)+H(Y)-H(X,Y).$$ Let us now describe our aim. Suppose, for $i=1,2,\ldots,\ell$, we have $$y_i\sim \mathcal{N}(\alpha+\beta x_i,\sigma^2)\quad\text{and}\quad z_i\sim \mathcal{N}(\alpha+\beta w_i,\sigma^2),$$ with $x_i=w_i$ for $i=1,\ldots,a$. Let $h_1(x)=\hat{\alpha}_1+\hat{\beta}_1 x$ and $h_2(w)=\hat{\alpha}_2+\hat{\beta}_2 w$ be the linear regression hypotheses obtained from the samples $(x_i,y_i)$ and $(w_i,z_i)$, respectively. We would like to compute an expression for the mutual information $$I((\hat{\alpha_1},\hat{\beta_1});(\hat{\alpha_2},\hat{\beta_2})).$$ First, we recall the following standard fact: \begin{proposition} \label{multivariate_normal_differential_entropy} Let $X\sim \mathcal{N}(\mathbf{v},\Sigma)$, where $\mathbf{v}\in\mathbb{R}^d$ and $\Sigma\in\mathbb{R}^{d\times d}$.
Then: $$H(X)=\frac{1}{2}\log(\det\Sigma)+\frac{d}{2}(1+\log(2\pi)).$$ \end{proposition} Our main result is the following: \begin{proposition}\label{mainProp} Let $\hat{\alpha_1}$, $\hat{\beta_1}$, $\hat{\alpha_2}$, $\hat{\beta_2}$ be as above. Then \begin{gather*} H(\hat{\alpha_1},\hat{\beta_1})=2\log\sigma-\frac{1}{2}\log(\ell X_2-X_1^2)+(1+\log(2\pi)), \\ H(\hat{\alpha_2},\hat{\beta_2})=2\log\sigma-\frac{1}{2}\log(\ell W_2-W_1^2)+(1+\log(2\pi)), \end{gather*} and \begin{align*} &\ H(\hat{\alpha_1},\hat{\beta_1},\hat{\alpha_2},\hat{\beta_2}) \\ =&\ 4\log\sigma-\frac{1}{2}\log\left((\ell X_2-X_1^2)(\ell W_2-W_1^2)\right)+2(1+\log(2\pi)) \\ &\qquad+\frac{1}{2}\log\left(1-\frac{\left(\ell C_2-2C_1X_1+a X_2\right)\left(\ell C_2-2C_1W_1+a W_2\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right. \\ &\qquad\left.+\frac{\left((a -1)C_2-C_3\right)\left((a -1)C_2-C_3+\ell(X_2+W_2)-2X_1W_1\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right), \end{align*} where $X_1=\sum_{i=1}^{\ell} x_i$, $X_2=\sum_{i=1}^{\ell} x_i^2$, $W_1=\sum_{i=1}^{\ell} w_i$, $W_2=\sum_{i=1}^{\ell} w_i^2$, $C_1=\sum_{i=1}^a x_i=\sum_{i=1}^a w_i$, $C_2=\sum_{i=1}^a x_i^2=\sum_{i=1}^a w_i^2$ and $C_3=\sum_{i=1}^{\ell}\sum_{j=1,j\neq i}^{\ell}x_ix_j$. The mutual information between $(\hat{\alpha_1},\hat{\beta_1})$ and $(\hat{\alpha_2},\hat{\beta_2})$ is: \begin{align*} &I((\hat{\alpha_1},\hat{\beta_1});(\hat{\alpha_2},\hat{\beta_2})) \\ =&\ -\frac{1}{2}\log\left(1-\frac{\left(\ell C_2-2C_1X_1+aX_2\right)\left(\ell C_2-2C_1W_1+aW_2\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right. \\ &\qquad\left.+\frac{\left((a-1)C_2-C_3\right)\left((a-1)C_2-C_3+\ell(X_2+W_2)-2X_1W_1\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right). 
\end{align*} \end{proposition} \begin{proof} The expressions for $H(\hat{\alpha_1},\hat{\beta_1})$ and $H(\hat{\alpha_2},\hat{\beta_2})$ follow from Propositions \ref{regression_hypothesis_distribution} and \ref{multivariate_normal_differential_entropy}, and the expression for $I((\hat{\alpha_1},\hat{\beta_1});(\hat{\alpha_2},\hat{\beta_2}))$ follows from $$I((\hat{\alpha_1},\hat{\beta_1});(\hat{\alpha_2},\hat{\beta_2}))=H(\hat{\alpha_1},\hat{\beta_1})+H(\hat{\alpha_2},\hat{\beta_2})-H(\hat{\alpha_1},\hat{\beta_1},\hat{\alpha_2},\hat{\beta_2}).$$ It remains to derive the expression for $H(\hat{\alpha_1},\hat{\beta_1},\hat{\alpha_2},\hat{\beta_2})$. First, define the matrices $$X= \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_\ell \end{pmatrix}, \qquad W= \begin{pmatrix} 1 & w_1 \\ 1 & w_2 \\ \vdots & \vdots \\ 1 & w_\ell \end{pmatrix}. $$ Then \begin{align*} \boldsymbol{\hat{\theta}}:= \begin{pmatrix} \hat{\alpha_1} \\ \hat{\beta_1} \\ \hat{\alpha_2} \\ \hat{\beta_2} \end{pmatrix} &= \begin{pmatrix} \alpha \\ \beta \\ \alpha \\ \beta \end{pmatrix} + \begin{pmatrix} (X^TX)^{-1}X^TU \\ (W^TW)^{-1}W^TV \end{pmatrix} \\ &=\begin{pmatrix} \alpha \\ \beta \\ \alpha \\ \beta \end{pmatrix} + \begin{pmatrix} (X^TX)^{-1}X^T & 0 \\ 0 & (W^TW)^{-1}W^T \end{pmatrix} \begin{pmatrix} U \\ V \end{pmatrix}, \end{align*} where $U$, $V\sim \mathcal{N}(0,\sigma^2 I_\ell)$; so: $$\var(\boldsymbol{\hat{\theta}})= \begin{pmatrix} (X^TX)^{-1}X^T & 0 \\ 0 & (W^TW)^{-1}W^T \end{pmatrix} \var{ \begin{pmatrix} U \\ V \end{pmatrix} } \begin{pmatrix} X(X^TX)^{-1} & 0 \\ 0 & W(W^TW)^{-1} \end{pmatrix} .$$ For any matrix $M=(M_{i,j})$, let $[M]_a$ denote the matrix with the same dimensions as $M$, and with entries $$([M]_a)_{i,j}= \begin{cases} M_{i,j} & \text{if }i,j\leq a, \\ 0 & \text{otherwise}. 
\end{cases} $$ Note that $$\var{ \begin{pmatrix} U \\ V \end{pmatrix} }= \begin{pmatrix} \sigma^2 I_\ell & \sigma^2 [I_\ell]_a \\ \sigma^2 [I_\ell]_a & \sigma^2 I_\ell \end{pmatrix}, $$ hence \begin{align*} \var(\boldsymbol{\hat{\theta}})=\sigma^2 \begin{pmatrix} (X^TX)^{-1} & [(X^TX)^{-1}X^T]_a(W(W^TW)^{-1}) \\ [(W^TW)^{-1}W^T]_a(X(X^TX)^{-1}) & (W^TW)^{-1} \end{pmatrix}, \end{align*} and $$\det(\var(\boldsymbol{\hat{\theta}}))=\sigma^8\det(A-BD^{-1}C)\det(D)$$ where \begin{align*} A&=(X^TX)^{-1}= \begin{pmatrix} \frac{X_2}{\ell X_2-X_1^2} & -\frac{X_1}{\ell X_2-X_1^2} \\ -\frac{X_1}{\ell X_2-X_1^2} & \frac{\ell}{\ell X_2-X_1^2} \end{pmatrix}, \\ B&=[(X^TX)^{-1}X^T]_a(W(W^TW)^{-1}) \\ &= \begin{pmatrix} \frac{\sum_{i=1}^{a} (X_2-x_iX_1)(W_2-w_iW_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} & \frac{\sum_{i=1}^{a} (\ell w_i-W_1)(X_2-x_iX_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} \\ \frac{\sum_{i=1}^{a} (\ell x_i-X_1)(W_2-w_iW_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} & \frac{\sum_{i=1}^{a} (\ell x_i-X_1)(\ell w_i-W_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} \end{pmatrix},\\ C&=[(W^TW)^{-1}W^T]_a(X(X^TX)^{-1}) \\ &= \begin{pmatrix} \frac{\sum_{i=1}^{a} (X_2-x_iX_1)(W_2-w_iW_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} & \frac{\sum_{i=1}^{a} (\ell x_i-X_1)(W_2-w_iW_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} \\ \frac{\sum_{i=1}^{a} (\ell w_i-W_1)(X_2-x_iX_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} & \frac{\sum_{i=1}^{a} (\ell x_i-X_1)(\ell w_i-W_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} \end{pmatrix}, \\ D&=(W^TW)^{-1}= \begin{pmatrix} \ell & W_1 \\ W_1 & W_2 \end{pmatrix}^{-1}. \end{align*} After a lengthy computation, we obtain the following expression for $\det(\var(\boldsymbol{\hat{\theta}}))$: \begin{align*} &\frac{\sigma^8}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\left(1-\frac{\left(\ell C_2-2C_1X_1+aX_2\right)\left(\ell C_2-2C_1W_1+aW_2\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right. 
\\ &\qquad\left.+\frac{\left((a-1)C_2-C_3\right)\left((a-1)C_2-C_3+\ell(X_2+W_2)-2X_1W_1\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right). \end{align*} The expression for $H(\hat{\alpha_1},\hat{\beta_1},\hat{\alpha_2},\hat{\beta_2})$ follows by applying Proposition \ref{multivariate_normal_differential_entropy}. \qed \end{proof} \section{Learning with Linear Regression (LWLR)}\label{LWLRsec} In this section, we define LWLR and reduce its hardness to LWE. As mentioned in Note \ref{note1}, our RGPC protocol allows us the freedom to tweak the target rounded Gaussian distribution $\mathrm{\Psi}(0, \hat{\sigma}^2)$ by simply selecting the desired standard deviation $\sigma$. Therefore, when referring to the desired rounded Gaussian distribution for the hardness proofs, we use $\mathrm{\Psi}(0, \hat{\sigma}^2)$ (or $\mathrm{\Psi}(0, \hat{\sigma}^2) \bmod m$, i.e., $\mathrm{\Psi}_m(0, \hat{\sigma}^2)$) without going into the specific value of $\sigma$. Let $\mathcal{P} = \{P_i\}_{i=1}^n$ be a set of $n$ parties.
\begin{definition} \emph{For modulus $m$ and a uniformly sampled $\textbf{a} \leftarrow \mathbb{Z}^w_m$, the learning with linear regression (LWLR) distribution LWLR${}_{\textbf{s},m,w}$ over $\mathbb{Z}_m^w \times \mathbb{Z}_m$ is defined as: $(\textbf{a}, x + e_{x}),$ where $x = \langle \textbf{a}, \textbf{s} \rangle$ and $e_{x} \in \mathrm{\Psi}(0, \hat{\sigma}^2)$ is a rounded Gaussian error generated by the RGPC protocol, on input $x$.} \end{definition} \begin{theorem}\label{LWLRthm} For modulus $m$, security parameter $\L$, $\ell = g(\L)$ samples (where $g$ is a superpolynomial function), polynomial adversary $\mathcal{A} \notin \mathcal{P}$, some distribution over secret $\textbf{s} \in \mathbb{Z}_m^w$, and a mapping $M_h: \mathbb{Z} \to \mathrm{\Psi}(0, \hat{\sigma}^2)$ generated by the RGPC protocol, where $\mathrm{\Psi}(0, \hat{\sigma}^2)$ is the target rounded Gaussian distribution, solving decision-LWLR${}_{\textbf{s},m,w}$ is at least as hard as solving the decision-LWE${}_{\textbf{s},m,w}$ problem for the same distribution over $\textbf{s}$. \end{theorem} \begin{proof} Recall from \Cref{Lemma1} that, since $\ell$ is superpolynomial, the errors belong to the desired rounded Gaussian distribution $\mathrm{\Psi}(0, \hat{\sigma}^2)$. As given, for a fixed secret $\textbf{s} \in \mathbb{Z}^w_m$, a decision-LWLR${}_{\textbf{s},m,w}$ instance is defined as $(\textbf{a}, x + e_x)$ for $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}^w_m$ and $x = \langle \textbf{a},\textbf{s} \rangle$. Recall that a decision-LWE${}_{\textbf{s},m,w}$ instance is defined as $(\textbf{a}, \langle \textbf{a}, \textbf{s} \rangle + e)$ for $\textbf{a} \leftarrow \mathbb{Z}^w_m$ and $e \leftarrow \chi$ for a rounded (or discrete) Gaussian distribution $\chi$. We know from \Cref{Obs1} that $M_h$ is a deterministic mapping --- generated by the RGPC protocol --- from random inputs $x \leftarrow \mathbb{Z}$ to random elements $e_x \in \mathrm{\Psi}(0, \hat{\sigma}^2)$. 
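As a toy illustration of the shape of an LWLR sample from the definition above, the sketch below pairs a vector $\textbf{a}$ with $\langle \textbf{a}, \textbf{s} \rangle + e_x \bmod m$. The function `toy_error_map` is a hypothetical stand-in for the RGPC-generated mapping $M_h$: like $M_h$ it is deterministic in $x$, but it is not an actual rounded Gaussian sampler, and the parameter values are illustrative only.

```python
import hashlib

def toy_error_map(x: int, bound: int = 9) -> int:
    # Hypothetical stand-in for the RGPC mapping M_h: deterministic in x
    # and bounded, but NOT a real rounded Gaussian sampler.
    h = int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:4], "big")
    return (h % (2 * bound + 1)) - bound  # value in [-bound, bound]

def lwlr_sample(a: list, s: list, m: int) -> tuple:
    # One LWLR sample (a, x + e_x mod m), where x = <a, s> and e_x is a
    # deterministic function of x.
    x = sum(ai * si for ai, si in zip(a, s)) % m
    return a, (x + toy_error_map(x)) % m
```

Because the error is a deterministic function of $x$, querying the same $(\textbf{a}, \textbf{s})$ twice yields the identical sample; this is the property that distinguishes LWLR (like LWR) from plain LWE with fresh random errors.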
Next, we define the following two games. \begin{itemize} \item $\mathfrak{G}_1$: in this game, we begin by fixing a secret $\textbf{s}$. Each query from the attacker is answered with an LWLR${}_{\textbf{s},m,w}$ instance as: $(\textbf{a}, x + e_x)$ for a unique $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}^w_m$, and $x = \langle \textbf{a},\textbf{s} \rangle$. The error $e_x \in \mathrm{\Psi}(0, \hat{\sigma}^2)$ is generated as: $e_x = M_h(x)$. \item $\mathfrak{G}_2$: in this game, we begin by fixing a secret $\textbf{s}$. Each query from the attacker is answered with an LWE${}_{\textbf{s},m,w}$ instance as: $(\textbf{a}, \langle \textbf{a}, \textbf{s} \rangle + e)$ for $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}^w_m$ and $e \leftarrow \mathrm{\Psi}(0, \tilde{\sigma}^2)$, where $\mathrm{\Psi}(0, \tilde{\sigma}^2)$ denotes a rounded Gaussian distribution that is suitable for sampling LWE errors. \end{itemize} Suppose that an adversary $\mathcal{A} \notin \mathcal{P}$ can distinguish LWLR${}_{\textbf{s},m,w}$ from LWE${}_{\textbf{s},m,w}$ with some non-negligible advantage, i.e., $Adv_{\mathcal{A}}(\mathfrak{G}_1, \mathfrak{G}_2) \geq \varphi(w)$ for a non-negligible function $\varphi$. Hence, it follows that $Adv_{\mathcal{A}}(\mathrm{\Psi}(0, \tilde{\sigma}^2), \mathrm{\Psi}(0, \hat{\sigma}^2)) \geq \varphi(w)$. However, we have already established in \Cref{Obs1} that $M_h$ is random to $\mathcal{A} \notin \mathcal{P}$. Furthermore, we know that $\hat{\sigma}$ can be brought arbitrarily close to $\tilde{\sigma}$ (see \Cref{note1}). Therefore, with careful selection of parameters, it must hold that $Adv_{\mathcal{A}}(\mathrm{\Psi}(0, \tilde{\sigma}^2), \mathrm{\Psi}(0, \hat{\sigma}^2)) \leq \eta(w)$ for a negligible function $\eta$, which directly leads to $Adv_{\mathcal{A}}(\mathfrak{G}_1, \mathfrak{G}_2) \leq \eta(w)$.
Hence, for any distribution over a secret $\textbf{s} \in \mathbb{Z}_m^w$, solving decision-LWLR${}_{\textbf{s},m,w}$ is at least as hard as solving the decision-LWE${}_{\textbf{s},m,w}$ problem for the same distribution over $\textbf{s}$. \qed \end{proof} \section{Star-specific Key-homomorphic PRF\lowercase{s}}\label{Sec7} In this section, we use LWLR to construct the first star-specific key-homomorphic (SSKH) PRF family. We adapt the key-homomorphic PRF construction from~\cite{Ban[14]} by replacing the deterministic errors generated via the rounding function from LWR with the deterministic errors generated by our RGPC protocol. \subsection{Background} For the sake of completeness, we begin by recalling the key-homomorphic PRF construction from~\cite{Ban[14]}. Let $T$ be a full binary tree with at least one node, i.e., a tree in which every non-leaf node has two children. Let $T.r$ and $T.l$ denote its right and left subtree, respectively. Let $\lfloor \cdot \rceil_p$ denote the rounding function used in LWR (see \Cref{LWR} for an introduction to LWR). Let $q \geq 2$, $d = \lceil \log q \rceil$ and $x[i]$ denote the $i^{th}$ bit of a bit-string $x$. Define a gadget vector as: \[\textbf{g} = (1,2,4,\dots, 2^{d-1}) \in \mathbb{Z}^d_q,\] where $q$ is the LWE modulus. Define a decomposition function $\textbf{g}^{-1}: \mathbb{Z}_q \rightarrow \{0,1\}^d$ such that $\textbf{g}^{-1}(a)$ is a ``short'' vector and $\forall a \in \mathbb{Z}_q$, it holds that: $\langle \textbf{g}, \textbf{g}^{-1}(a) \rangle = a$, where $\langle \cdot, \cdot \rangle$ denotes the inner product. Function $\textbf{g}^{-1}$ is defined as: \begin{center} $\textbf{g}^{-1}(a) = (x[0], x[1], \dots, x[d-1]) \in \{0,1\}^d,$ \end{center} where $a = \sum\limits^{d-1}_{i=0} x[i] \cdot 2^i$ is the binary representation of $a$.
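The gadget decomposition is simply base-2 expansion, least significant bit first; a minimal sketch, with the reconstruction identity $\langle \textbf{g}, \textbf{g}^{-1}(a) \rangle = a$ checked directly:

```python
def g_inverse(a: int, d: int) -> list:
    # g^{-1}(a) = (x[0], ..., x[d-1]): binary digits of a, least significant first.
    return [(a >> i) & 1 for i in range(d)]

def g_dot(bits: list) -> int:
    # Inner product with the gadget vector g = (1, 2, 4, ..., 2^{d-1}).
    return sum(b << i for i, b in enumerate(bits))
```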
The gadget vector is used to define the gadget matrix $\textbf{G}$ as: \[\textbf{G} = \textbf{I}_w \otimes \textbf{g} = \text{diag}(\textbf{g}, \dots, \textbf{g}) \in \mathbb{Z}^{w \times wd}_q,\] where $\textbf{I}_w$ is the $w \times w$ identity matrix and $\otimes$ denotes the Kronecker product~\cite{Kath[04]}. The binary decomposition function, $\textbf{g}^{-1}$, is applied entry-wise to vectors and matrices over $\mathbb{Z}_q$. Thus, $\textbf{g}^{-1}$ can be extended to get another deterministic decomposition function as: $$\textbf{G}^{-1}: \mathbb{Z}^{w \times u}_q \rightarrow \{0,1\}^{wd \times u}$$ such that $\textbf{G} \cdot \textbf{G}^{-1}(\textbf{A}) = \textbf{A}$. Given uniformly sampled matrices $\textbf{A}_0, \textbf{A}_1 \in \mathbb{Z}^{w \times wd}_q$, define the function $\textbf{A}_T(x): \{0,1\}^{|T|} \rightarrow \mathbb{Z}^{w \times wd}_q$ as: \begin{align*} \textbf{A}_T(x) &= \begin{cases} \textbf{A}_x \qquad \qquad \qquad \qquad \qquad \qquad \; \text{if } |T| = 1, \\ \textbf{A}_{T.l}(x_l) \cdot \textbf{G}^{-1}(\textbf{A}_{T.r}(x_r)) \qquad \; \; \; \text{otherwise}, \end{cases} \numberthis \label{AlignEq} \end{align*} where $|T|$ denotes the total number of leaves in $T$ and $x \in \{0,1\}^{|T|}$ such that $x = x_l || x_r$ for $x_l \in \{0,1\}^{|T.l|}$ and $x_r \in \{0,1\}^{|T.r|}$. The key-homomorphic PRF family is defined as: \[\mathcal{F}_{\textbf{A}_0, \textbf{A}_1, T, p} = \left\lbrace F_\textbf{s}: \{0,1\}^{|T|} \rightarrow \mathbb{Z}^{wd}_p \right\rbrace,\] where $p \leq q$ is the modulus. A member of the function family $\mathcal{F}$ is indexed by the seed $\textbf{s} \in \mathbb{Z}^w_q$ as: \[F_{\textbf{s}}(x) = \lfloor \textbf{s} \cdot \textbf{A}_{T}(x) \rceil_p.\] \subsection{Our Construction} We are now ready to present the construction for the first SSKH PRF family.
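The recursion defining $\textbf{A}_T(x)$ can be sketched with toy parameters ($w = 1$, $q = 8$, $d = 3$ in the test below; far smaller than any real instantiation). The tree is encoded as nested pairs with `1` denoting a leaf, matrices are plain lists over $\mathbb{Z}_q$, and `G_inverse` applies the bit decomposition entry-wise so that $\textbf{G} \cdot \textbf{G}^{-1}(\textbf{A}) = \textbf{A}$; this is a minimal illustration, not the construction's actual implementation.

```python
def mat_mul(A, B, q):
    # Matrix product over Z_q.
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % q
             for j in range(len(B[0]))] for i in range(len(A))]

def G_inverse(A, d):
    # Entry-wise binary decomposition: Z_q^{w x u} -> {0,1}^{wd x u},
    # chosen so that G * G_inverse(A) = A for G = I_w (Kronecker) g.
    w, u = len(A), len(A[0])
    return [[(A[i][c] >> j) & 1 for c in range(u)]
            for i in range(w) for j in range(d)]

def leaves(tree):
    # Leaf count of a full binary tree encoded as nested pairs (1 = leaf).
    return 1 if tree == 1 else leaves(tree[0]) + leaves(tree[1])

def A_T(tree, x, A0, A1, q, d):
    # Recursive evaluation of A_T(x): a leaf selects A_0 or A_1 by its bit;
    # an internal node multiplies the left result by G^{-1} of the right.
    if tree == 1:
        return A0 if x[0] == 0 else A1
    nl = leaves(tree[0])
    left = A_T(tree[0], x[:nl], A0, A1, q, d)
    right = A_T(tree[1], x[nl:], A0, A1, q, d)
    return mat_mul(left, G_inverse(right, d), q)
```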
\subsubsection{Settings.} Let $\mathcal{P} = \{P_i\}_{i=1}^n$ be a set of $n$ honest parties that are arranged as the vertices of an interconnection graph $G = (V, E)$, which is composed of $\rho$ stars $\partial_1, \ldots, \partial_\rho$ of type $S_k$, i.e., each subgraph $\partial_i$ is a star with $k$ leaves. As mentioned in Section~\ref{broad}, we assume that each party in $\partial_i$ is connected to $\partial_i$'s central hub $C_i$ via two channels: one Gaussian channel with the desired parameters and another error-corrected channel. Each party in $\mathcal{P}$ receives parameters $\textbf{A}_0, \textbf{A}_1$, i.e., all parties are provisioned with identical parameters. Hence, physical layer communications and measurements are the exclusive source of variety and secrecy in this protocol. Since we are dealing with vectors, the data points for linear regression analysis, i.e., the messages exchanged among the parties in the stars, are of the form $\{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^\ell$, where $\textbf{x}_i, \textbf{y}_i \in \mathbb{Z}^{wd}.$ Hence, the resulting rounded Gaussian --- generated by the RGPC protocol --- becomes $\mathrm{\Psi}^{wd}(\textbf{0}, \hat{\sigma}^2)$. Let the parties in each star graph exchange messages in accordance with the RGPC protocol such that messages from different central hubs $C_i, C_j~(\forall i,j \in [\rho];\ i \neq j)$ are distinguishable to the parties belonging to multiple star graphs. \subsubsection{Construction.} Without loss of generality, consider a star $\partial_i \subseteq V(G)$. Each party in $\partial_i$ follows the RGPC protocol to generate its linear regression hypothesis $h^{(\partial_i)}$.
Parties in star $\partial_i$ construct a $\partial_i$-specific key-homomorphic PRF family, whose member $F_{\textbf{s}_{i}}^{(\partial_i)}(x)$, indexed by the key/seed $\textbf{s}_{i} \in \mathbb{Z}^w_m$, is defined as: \begin{equation} F_{\textbf{s}_{i}}^{(\partial_i)}(x) = \textbf{s}_{i} \cdot \textbf{A}_{T}(x) + \textbf{e}^{(\partial_i)}_{\textbf{b}} \bmod m, \label{eq1} \end{equation} where $\textbf{A}_T(x)$ is as defined by Equation \eqref{AlignEq}, $\textbf{b} = \textbf{s}_{i} \cdot \textbf{A}_{T}(x)$, and $\textbf{e}^{(\partial_i)}_{\textbf{b}}$ denotes a rounded Gaussian error computed --- on input $\textbf{b}$ --- by our RGPC protocol via its star-specific least squares hypothesis $h^{(\partial_i)}$ for $\partial_i$. The star-specific secret $\textbf{s}_{i}$ can be generated by using a reconfigurable antenna (RA) \cite{MohAli[21],inbook} at the central hub $C_i$ and thereafter reconfiguring it to randomize the error-corrected channel between itself and the parties in $\partial_i$. Specifically, $\textbf{s}_{i}$ can be generated via the following procedure: \begin{enumerate} \item After performing the RGPC protocol, each party $P_j \in \partial_i$ sends a random $r_j \in [\ell]$ to $C_i$ via the error-corrected channel. $C_i$ broadcasts $r_j$ to all parties in $\partial_i$ and randomizes all error-corrected channels by reconfiguring its RA. If two parties' $r_j$ values arrive simultaneously, then $C_i$ randomly discards one of them and notifies the concerned party to resend another random value. This ensures that the channels are re-randomized after receiving each $r_j$ value. By the end of this cycle, each party receives $k$ random values $\{r_j\}_{j=1}^k$. Let $\wp_i$ denote the set of all $r_j$ values received by the parties in $\partial_i$. \item Each party in $\partial_i$ computes $\bigoplus\limits_{r_j \in \wp_i} r_j = s \bmod m$. \item This procedure is repeated to extract the required number of bits to generate the vector $\textbf{s}_{i}$.
\end{enumerate} Since $C_i$ randomizes all its channels by simply reconfiguring its RA, no active or passive adversary can compromise all $r_j \in \wp_i$ values \cite{Aono[05],MehWall[14],YanPan[21],MohAli[21],inbook,Alan[20],PanGer[19],MPDaly[12]}. In honest settings, secrecy of the star-specific secret $\textbf{s}_{i}$, generated by the aforementioned procedure, follows directly from the following three facts: \begin{itemize} \item All parties are honest. \item All data points $\{x_i\}_{i=1}^\ell$ are randomly sampled integers, i.e., $x_i \leftarrow \mathbb{Z}$. \item The coefficients of $f(x)$, and hence $f(x)$ itself, are sampled randomly. \end{itemize} We examine other settings, wherein there are active/passive and internal/external adversaries, in the following section. Note that the protocol does not require the parties to share their identities. Hence, the above protocol is trivially anonymous over anonymous channels (see \cite{GeoCla[08],MattYen[09]} for surveys on anonymous communications). Since anonymity has multiple applications in cryptographic protocols \cite{Stinson[87],Phillips[92],Blundo[96],Kishi[02],Deng[07],SehrawatVipin[21],Sehrawat[17],AmosMatt[07],Daza[07],Gehr[97],Anat[15],Mida[03],OstroKush[06],DijiHua[07]}, it is a useful feature of our construction. \subsection{Maximum number of SSKH PRFs and Defenses Against Various Attacks}\label{Max} In this section, we employ our results from \Cref{Extremal} to derive the maximum number of SSKH PRFs that can be constructed by a set of $n$ parties. Recall that we use the terms star and star graph interchangeably. We know that in order to construct an SSKH PRF family, the parties are arranged in an interconnection graph $G$ wherein the --- possibly overlapping --- subsets of $\mathcal{P}$ form different star graphs, $\partial_1, \ldots, \partial_\rho$, within $G$. We assume that for all $i \in [\rho]$, it holds that: $|\partial_i| = k$. 
Recall from \Cref{Extremal} that we derived various bounds on the size of the following set families $\mathcal{H}$ defined over a set of $n$ elements: \begin{enumerate} \item $\mathcal{H}$ is an at most $t$-intersecting $k$-uniform family, \item $\mathcal{H}$ is a maximally cover-free at most $t$-intersecting $k$-uniform family. \end{enumerate} We set $n$ to be the number of nodes/parties in $G$. Hence, $k$ represents the size of each star with $t$ being equal to (or greater than) $\max\limits_{i\neq j}(|\partial_i \cap \partial_j|)$. In our SSKH PRF construction, no member of a star $\partial$ has any secrets that are hidden from the other members of $\partial$. Also, irrespective of their memberships, all parties are provisioned with an identical set of initial parameters. The secret keys and regression models are generated via physical layer communications and collaboration. Due to these facts, the parties in our SSKH PRF construction must be either honest or semi-honest but non-colluding. We consider these factors while computing the maximum number of SSKH PRFs that can be constructed securely against various types of adversaries. For a star $\partial$, let $\mathcal{O}_{\partial}$ denote an oracle for the SSKH PRF $F^{(\partial)}_{\textbf{s}}$, i.e., on receiving input $x$, $\mathcal{O}_{\partial}$ outputs $F^{(\partial)}_{\textbf{s}}(x)$. Given oracle access to $\mathcal{O}_{\partial}$, it must hold that for a probabilistic polynomial-time adversary $\mathcal{A}$ who is allowed $\poly(\L)$ queries to $\mathcal{O}_{\partial}$, the SSKH PRF $F^{(\partial)}_{\textbf{s}}$ remains indistinguishable from a uniformly random function $U$ --- defined over the same domain and range as $F^{(\partial)}_{\textbf{s}}$. Let $E_{i}$ denote the set of Gaussian and error-corrected channels that are represented by the edges in star $\partial_i$.
\subsubsection{External Adversary with Oracle Access.} In this case, the adversary can only query the oracle for the SSKH PRF, and hence the secrecy follows directly from the hardness of LWLR. Therefore, at most $t$-intersecting $k$-uniform families are sufficient for this case, i.e., we do not need the underlying set family $\mathcal{H}$ to be maximally cover-free. Moreover, $t = k-1$ suffices for this case because maximum overlap between different stars can be tolerated. Hence, it follows from \Cref{asymptotic_bound} (in \Cref{Extremal}) that the maximum number $\zeta$ of SSKH PRFs that can be constructed is: $$\zeta\sim\frac{n^{k}}{k!}.$$ \subsubsection{Eavesdropping Adversary with Oracle Access.} Let $\mathcal{A}$ be an eavesdropping adversary, who is able to observe a subset of Gaussian and/or error-corrected channels between parties and central hubs. We call this subset $E'$ and assume that $E_{i} \not\subseteq E'$. Let us analyze the security w.r.t. this adversary. \begin{enumerate} \item Secrecy of $\textbf{s}_i$: After each party $P_z \in \partial_i$ contributes to the generation of $\textbf{s}_i$ by sending a random value $r_z$ to $C_i$ and $r_z$ is broadcasted to all parties in $\partial_i$, $C_i$ randomizes all error-corrected channels by reconfiguring its RA. This means that an adversary $\mathcal{A}$ is unable to compromise all $r_z$ values. Hence, it follows that no information about $\textbf{s}_i$ is leaked to $\mathcal{A}$. \item Messages exchanged via the channels in $E'$: leakage of enough messages exchanged within star $\partial_i$ would allow $\mathcal{A}$ to closely approximate the RGPC protocol within $\partial_i$. Hence, if $\mathcal{A}$ can eavesdrop on enough channels in $\partial_i$, it can approximate the deterministic errors generated for any given input. We know that without proper errors, LWE reduces to a system of linear equations that is easily solvable via Gaussian elimination. Hence, this leakage can be devastating.
Fortunately, the use of physical layer security technologies can provide information-theoretic protection --- against an eavesdropping adversary --- for the messages exchanged over the physical layer \cite{YonWu[18],ShiJia[21]}. \end{enumerate} Hence, with these physical layer security measures, an eavesdropping adversary with oracle access has no advantage over an adversary with oracle access alone. \subsubsection{Man-in-the-Middle.} Physical-layer-based key generation schemes exploit channel reciprocity for secret key extraction, which can achieve information-theoretic secrecy against eavesdroppers. However, these schemes have been shown to be vulnerable to man-in-the-middle (MITM) attacks. During a typical MITM attack, the adversary creates separate connection(s) with the communicating node(s) and relays altered transmission packets to them. Eberz et al. \cite{EbeMatt[12]} demonstrated a practical MITM attack against RSS-based key generation protocols \cite{JanaSri[09],SuWade[08]}, wherein the MITM adversary $\mathcal{A}$ exploits the same channel characteristics as the target/communicating parties $P_1, P_2$. To summarize, in the attack from \cite{EbeMatt[12]}, $\mathcal{A}$ injects packets that cause a similar channel measurement at both $P_1$ and $P_2$. This attack enables $\mathcal{A}$ to recover up to 47\% of the secret bits generated by $P_1$ and $P_2$. To defend against such attacks, we can apply techniques that allow us to detect an MITM attack over the physical layer \cite{LeAle[16]}, and if one is detected, the antenna of $\partial_i$'s central hub, $C_i$, can be reconfigured to randomize all channels in $\partial_i$ \cite{YanPan[21]}. This only requires a reconfigurable antenna (RA) at each central hub. An RA can swiftly reconfigure its radiation pattern, polarization, and frequency by rearranging its antenna currents \cite{MohAli[21],inbook}.
It has been shown that due to multipath resulting from having an RA, even a small variation by the RA can create large variations in the channel, effectively creating fast varying channels with a random distribution \cite{JunPhD[21]}. One way an RA may randomize the channels is by randomly selecting antenna configurations in the transmitting array at the symbol rate, leading to a random phase and amplitude being multiplied with the transmitted symbol. The resulting randomness is compensated by appropriate element weights so that the intended receiver does not experience any random variations. In this manner, an RA can be used to re-randomize the channel and hence break the temporal correlation of the channels between $\mathcal{A}$ and the attacked parties, while preserving the reciprocity of the other channels. Therefore, even if an adversary $\mathcal{A}$ is able to perform a successful injection in communication round {\ss}, its channels with the attacked parties change randomly when it attempts injection attacks in round {\ss}+1. On the other hand, the channels between the parties in star $\partial_i$ and their central hub $C_i$ remain reciprocal, i.e., they can still make correct (and identical) measurements from the randomized channel. Hence, by reconfiguring its RA, $C_i$ can prevent further injections from $\mathcal{A}$ without affecting the legitimate parties' ability to make correct channel measurements. Further details on this defense technique are beyond the scope of this paper. For a detailed introduction to the topic and its applications in different settings, we refer the interested reader to \cite{Aono[05],MehWall[14],YanPan[21],MohAli[21],inbook,Alan[20],PanGer[19],MPDaly[12]}. In this manner, channel state randomization can be used to effectively reduce an MITM attack to the less harmful jamming attack \cite{MitChor[21]}. See \cite{HossHua[21]} for a thorough introduction to jamming and anti-jamming techniques.
\subsubsection{Non-colluding Semi-honest Parties.} Suppose that some or all parties in $\mathcal{P}$ are semi-honest (also referred to as honest-but-curious), who follow the protocol correctly but try to gain/infer more information than what is allowed by the protocol transcript. Further suppose that the parties do not collude with each other. In such settings, the only way any party $P_j \notin \partial_i$ can gain additional information about the SSKH PRF $F^{(\partial_i)}_{\textbf{s}_{i}}$ is to use the SSKH PRFs of the stars that it is a member of. For instance, if $P_j \in \mathcal{P}_{\partial_d}, \mathcal{P}_{\partial_j}, \mathcal{P}_{\partial_o}$ and $\mathcal{P}_{\partial_i} \subset \mathcal{P}_{\partial_o} \cup \mathcal{P}_{\partial_j} \cup \mathcal{P}_{\partial_d}$, then because the parties send identical messages to all central hubs they are connected to, it follows that $H(F^{(\partial_i)}_{\textbf{s}_{i}} | P_j) = 0$. This follows trivially because $P_j$ can compute $\textbf{s}_i$. Having maximally cover-free families eliminates this vulnerability against non-colluding semi-honest parties. This holds because with maximally cover-free families with member sets denoting the stars, the following can never hold true for any integer $\varrho \in [\rho]$: \[\mathcal{P}_{\partial_i} \subseteq \bigcup_{j \in [\varrho]} \mathcal{P}_{\partial_j}, \text{ where } \forall j \in [\varrho], \text{ it holds that: } \partial_i \neq \partial_j.\] It follows from \Cref{asymptotic_bound_for_maximally_cover_free} that the maximum number of SSKH PRFs that can be constructed with non-colluding semi-honest parties is at least $Cn$ for some positive real number $C < 1$. Hence, in order to construct SSKH PRFs that are secure against all the adversaries and models discussed above, the representative/underlying family of sets must be maximally cover-free, at most $(k-1)$-intersecting and $k$-uniform.
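The maximal cover-freeness condition above is easy to test mechanically; a small sketch over toy party sets (the example families are hypothetical, not drawn from the construction):

```python
def is_maximally_cover_free(family):
    # A family is maximally cover-free when no member set is contained
    # in the union of the remaining member sets.
    for i, s in enumerate(family):
        others = set().union(*[t for j, t in enumerate(family) if j != i])
        if s <= others:
            return False
    return True
```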
\subsection{Runtime and Key Size} We know that the complexity of a single evaluation of the key-homomorphic PRF from~\cite{Ban[14]} is $\mathrm{\Theta}(|T| \cdot w^\omega \log^2 m)$ ring operations in $\mathbb{Z}_m$, where $\omega \in [2, 2.37286]$ is the exponent of matrix multiplication~\cite{Josh[21]}. Using the fast multiplication algorithm in \cite{HarveyHoeven[21]}, this gives a time complexity of $\mathrm{\Theta}(|T| \cdot w^\omega \cdot m\log^3 m)$. The time required to complete the setup in our SSKH PRF construction is equal to the time required by our RGPC algorithm to find the optimal hypothesis, which we know from Section~\ref{Time} to be $\mathrm{\Theta}(\ell)$ additions and multiplications. If $B$ is an upper bound on $x_i$ and $y_i$, then the time complexity is $O\left(\ell B\log B\right)$. Once the optimal hypothesis is known, it takes $\mathrm{\Theta}(wm\log^2 m)$ time to generate a deterministic LWLR error for a single input. Hence, after the initial setup, the time complexity of a single function evaluation of our star-specific key-homomorphic PRF remains $\mathrm{\Theta}(|T| \cdot w^\omega \cdot m\log^3 m)$. Similarly, the key size for our star-specific key-homomorphic PRF family is the same as that of the key-homomorphic PRF family from~\cite{Ban[14]}. Specifically, for security parameter $\L$ and $2^{\L}$ security against known lattice reduction algorithms~\cite{Ajtai[01],Fincke[85],Gama[06],Gama[13],Gama[10],LLL[82],Ngu[10],Vid[08],DanPan[10],DaniPan[10],Poha[81],Schnorr[87],Schnorr[94],Schnorr[95],Nguyen[09],EamFer[21],RicPei[11],MarCar[15],AvrAda[03],GuaJoh[21],SatoMasa[21],TamaStep[20],AleQi[21]}, the key size for our star-specific key-homomorphic PRF family is $\L$. \subsection{Correctness and Security} Recall that LWR employs rounding to hide all but some of the most significant bits of $\lfloor \textbf{s} \cdot \textbf{A} \rceil_p$; therefore, the rounded-off bits become the deterministic error. 
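For contrast with LWLR's regression-based errors, the LWR rounding map $\lfloor \cdot \rceil_p : \mathbb{Z}_q \to \mathbb{Z}_p$ is commonly written as $x \mapsto \lfloor (p/q)\,x \rceil \bmod p$; a one-line integer-arithmetic sketch:

```python
def lwr_round(x: int, q: int, p: int) -> int:
    # Nearest-integer rounding of (p/q) * x, reduced mod p; the discarded
    # low-order bits play the role of the deterministic error in LWR.
    return ((x * p + q // 2) // q) % p
```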
On the other hand, our solution, LWLR, uses a special linear regression hypothesis to generate the desired rounded Gaussian errors, which are derived from the (independent) errors occurring in the physical layer communications over Gaussian channel(s). For the sake of simplicity, the proofs assume honest parties in the absence of any adversary. For the other supported cases, it is easy to adapt the statements of the results according to the bounds/conditions established in \Cref{Max}. Observe that the RGPC protocol ensures that all parties in a star $\partial$ receive an identical dataset $\mathcal{D}$, and therefore arrive at the same linear regression hypothesis $h^{(\partial)}$, and the same errors $\textbf{e}^{(\partial)}_{\textbf{b}}$. \begin{theorem}\label{thm1} The function family defined by \Cref{eq1} is a star-specific key-homomorphic PRF family under the decision-LWE assumption. \end{theorem} \begin{proof} We know from \Cref{LWLRthm} that for $\textbf{s}_i \xleftarrow{\$} \mathbb{Z}_m^w$ and a superpolynomial number of samples $\ell$, the LWLR instances generated in \Cref{eq1} are as hard as LWE --- to solve for $\textbf{s}_i$ (and $\textbf{e}_\textbf{b}^{(\partial_i)}$). The randomness of the function family follows directly from the randomness of $\textbf{s}_i$. The deterministic behavior follows from the above observation and the fact that $\textbf{A}_T(x)$ is a deterministic function. Hence, the family of functions defined in \Cref{eq1} is a PRF family. Define $$G_{\mathbf{s}}^{(\partial_i)}(x) = \textbf{s} \cdot \mathbf{A}_{T}(x) + \lfloor\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}\rceil \bmod m,$$ where $\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}$ is the (raw) Gaussian error corresponding to $\mathbf{b}$ for star $\partial_i$, and define $G_{\mathbf{s}}^{(\partial_j)}$ similarly.
Since the errors $\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}$ and $\boldsymbol{\varepsilon}^{(\partial_j)}_{\textbf{b}}$ are independent Gaussian random variables, each with variance $\sigma^2$, \begin{align*} \Pr[G_{\mathbf{s}}^{(\partial_i)}(x)=G_{\mathbf{s}}^{(\partial_j)}(x)]&=\Pr[\lfloor\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}\rceil=\lfloor\boldsymbol{\varepsilon}^{(\partial_j)}_{\textbf{b}}\rceil] \\ &\leq \Pr[||\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}-\boldsymbol{\varepsilon}^{(\partial_j)}_{\textbf{b}}||_\infty<1] \\ &= \Pr[|Z|<(\sqrt{2}\sigma)^{-1}]^w \end{align*} where $Z$ is a standard Gaussian random variable. Furthermore, since the number of samples is superpolynomial in the security parameter $\L$, by drowning/smudging, the statistical distance between $\boldsymbol{e}^{(\partial_i)}_{\textbf{b}}$ and $\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}$ is negligible (similarly for $\boldsymbol{\varepsilon}^{(\partial_j)}_{\textbf{b}}$ and $\boldsymbol{e}^{(\partial_j)}_{\textbf{b}}$). Hence, \begin{align*} \Pr[F_{\mathbf{s}}^{(\partial_i)}(x)=F_{\mathbf{s}}^{(\partial_j)}(x)]&=\Pr[G_{\mathbf{s}}^{(\partial_i)}(x)=G_{\mathbf{s}}^{(\partial_j)}(x)]+\eta(\L) \\ &\leq\Pr[|Z|<(\sqrt{2}\sigma)^{-1}]^w+\eta(\L), \end{align*} where $\eta(\L)$ is a negligible function in $\L$. By choosing $\delta=\Pr[|Z|<(\sqrt{2}\sigma)^{-1}]$, this function family satisfies \Cref{MainDef}(a). Finally, by Chebyshev's inequality and the union bound, for any $\tau>0$, \[F_{\textbf{s}_1}^{(\partial)}(x) + F_{\textbf{s}_2}^{(\partial)}(x) = F_{\textbf{s}_1 + \textbf{s}_2}^{(\partial)}(x) + \textbf{e}' \bmod m,\] where each entry of $\textbf{e}'$ lies in $[-3\tau\hat{\sigma}, 3\tau\hat{\sigma}]$ with probability at least $1-3/\tau^2$. For example, choosing $\tau=\sqrt{300}$ gives us the bound that the absolute value of each entry is bounded by $\sqrt{2700}\hat{\sigma}$ with probability at least $0.99$. 
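The per-coordinate collision bound in the proof can be evaluated numerically: for a standard Gaussian $Z$, $\Pr[|Z| < c] = \operatorname{erf}(c/\sqrt{2})$, so with $c = (\sqrt{2}\sigma)^{-1}$ the bound is $\operatorname{erf}(1/(2\sigma))^w$. A quick sketch, with illustrative parameter values only:

```python
from math import erf, sqrt

def collision_bound(sigma: float, w: int) -> float:
    # Pr[|Z| < 1/(sqrt(2) * sigma)]^w, the bound on two stars' PRF outputs
    # colliding; uses Pr[|Z| < c] = erf(c / sqrt(2)) for standard Gaussian Z.
    c = 1.0 / (sqrt(2.0) * sigma)
    return erf(c / sqrt(2.0)) ** w
```

As expected, the bound decays with both the dimension $w$ and the noise width $\sigma$.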
Therefore, the function family defined by \Cref{eq1} is a family of star-specific key-homomorphic PRFs --- as defined by \Cref{MainDef} --- under the decision-LWE assumption. \qed \end{proof} \section{Conclusion}\label{Sec8} In this paper, we introduced a novel derandomized variant of the celebrated learning with errors (LWE) problem, called learning with linear regression (LWLR), which derandomizes LWE via deterministic --- yet sufficiently independent --- errors that are generated by using special linear regression models whose training data consists of physical layer communications over Gaussian channels. Prior to our work, learning with rounding and its variant nearby learning with lattice rounding were the only known derandomized variants of the LWE problem, both of which relied on rounding. LWLR relies on the naturally occurring errors in physical layer communications to generate deterministic yet sufficiently independent errors from the desired rounded Gaussian distributions. We also introduced star-specific key-homomorphic (SSKH) pseudorandom functions (PRFs), which are directly defined by the physical layer communications among the respective sets of parties that construct them. We used LWLR to construct the first SSKH PRF family. In order to quantify the maximum number of SSKH PRFs that can be constructed by sets of overlapping parties, we derived: \begin{itemize} \item a formula to compute the mutual information between linear regression models that are generated via overlapping training datasets, \item bounds on the size of at most $t$-intersecting $k$-uniform families of sets; we also gave an explicit construction of such set systems, \item bounds on the size of maximally cover-free at most $t$-intersecting $k$-uniform families of sets. \end{itemize} Using these results, we established the maximum number of SSKH PRFs that can be constructed by a given set of parties in the presence of active/passive and internal/external adversaries.
\bibliographystyle{plainurl}
\section{Detailed investigations of event pairs showing marginal evidence of lensing} \label{sec:appendix} \begin{figure*}[tbh] \includegraphics[width=0.8\linewidth]{Plots/gw170104_gw170814.pdf} \caption{Marginalized 2D and 1D posterior distributions of the parameters that are included in the consistency test, for the event pair GW170104 (blue), GW170814 (red). Here, ${m_1^z, m_2^z}$ are the redshifted component masses, ${a_1, a_2}$ are the dimensionless spin magnitudes, ${\theta_{a1}, \theta_{a2}}$ are the polar angles of the spin orientations (with respect to the orbital angular momentum), ${\alpha, \sin \delta}$ denote the sky location, and $\theta_{J_N}$ is the orientation of the total angular momentum of the binary (with respect to the line of sight). The solid (dashed) contours correspond to the $90\%(50\%)$ confidence levels of the 2D distributions. The inset plot shows the marginalized posterior distributions of the sky localization parameters for these events. Overall, the posteriors have some levels of overlap, thus resulting in a considerable Bayes factor of ${\mc{B}_\textsc{u}^\textsc{l}} \sim {198}$ supporting the lensing hypothesis, purely based on parameter consistency. However, galaxy lenses are unlikely to produce a time delay of 7 months between the images, resulting in a small Bayes factor ${\mc{R}_\textsc{u}^\textsc{l}}\sim 4 \times 10^{-3}$ based on time delay considerations.} \label{fig:corner_170104_170814} \end{figure*} Here we present additional investigations on the event pairs that show marginal evidence of multiply-imaged lensing in the analysis presented in Sec.~\ref{sec:multipleimages}, providing a qualitative explanation of the Bayes factors presented in that section in terms of the overlap of the estimated posteriors from these event pairs. Figure~\ref{fig:corner_170104_170814} presents the 2D and 1D marginalized posterior distributions of the parameters that are included in the consistency test, for the event pair GW170104-GW170814.
Posteriors have appreciable levels of overlap in many parameters, thus resulting in a considerable Bayes factor of ${\mc{B}_\textsc{u}^\textsc{l}} \sim {198}$ supporting the lensing hypothesis, purely based on parameter consistency. However, galaxy lenses are unlikely to produce a time delay of 7 months between the images~\citep{Haris:2018vmn}, resulting in a small Bayes factor ${\mc{R}_\textsc{u}^\textsc{l}}\sim 4 \times 10^{-3}$ based on time delay considerations. Figure~\ref{fig:corner_150914_170809} shows similar plots for the event pair GW150914-GW170809. Although the marginalized 1D posteriors have some level of overlap in many parameters, the 2D posteriors show good separation in several parameters, e.g., in $\mathcal{M}^z - \chi_{\rm eff}$. The resulting Bayes factor supporting the lensing hypothesis, based on parameter consistency, is ${\mc{B}_\textsc{u}^\textsc{l}} \sim {29}$. However, galaxy lenses are unlikely to produce a time delay of 23 months between the images, resulting in a small Bayes factor ${\mc{R}_\textsc{u}^\textsc{l}} \sim 10^{-4}$ based on time delay considerations. Figure~\ref{fig:corner_170809_170814} shows similar plots for the event pair GW170809-GW170814. Here also, the 2D posteriors of several parameters (e.g., $\mathcal{M}^z - \chi_{\rm eff}$) show poor overlap, suggesting that the full multidimensional posteriors do not have significant overlap. The resultant Bayes factor for parameter consistency is ${\mc{B}_\textsc{u}^\textsc{l}} \sim {1.2}$, even though the time delay between these events is consistent with galaxy lenses, producing a Bayes factor ${\mc{R}_\textsc{u}^\textsc{l}} \sim {3.3}$ based on time delay. \begin{figure*}[tbh] \includegraphics[width=0.8\linewidth]{Plots/gw150914_gw170809.pdf} \caption{Same as Fig.~\ref{fig:corner_170104_170814}, except that the figure corresponds to the GW150914 (blue), GW170809 (red) event pair.
The inset plot shows the marginalized posterior distributions of the redshifted chirp mass $\mathcal{M}^z$ and effective spin $\chi_{\rm eff}$ for these events. The marginalized 1D posteriors have some level of overlap in many parameters; however, the 2D posteriors show good separation in several parameters, e.g., in $\mathcal{M}^z - \chi_{\rm eff}$. The resulting Bayes factor supporting the lensing hypothesis, based on parameter consistency, is ${\mc{B}_\textsc{u}^\textsc{l}} \sim {29}$. However, galaxy lenses are unlikely to produce a time delay of 23 months between the images, resulting in a small Bayes factor ${\mc{R}_\textsc{u}^\textsc{l}} \sim 10^{-4}$ based on time delay considerations.} \label{fig:corner_150914_170809} \end{figure*} \begin{figure*}[tbh] \includegraphics[width=0.8\linewidth]{Plots/gw170809_gw170814.pdf} \caption{Same as Fig.~\ref{fig:corner_170104_170814}, except that the figure corresponds to the GW170809 (blue), GW170814 (red) event pair. The marginalized 1D posteriors have some level of overlap in many parameters; however, the 2D posteriors show good separation in several parameters, e.g., in $\mathcal{M}^z - \chi_{\rm eff}$. The resulting Bayes factor supporting the lensing hypothesis, based on parameter consistency, is ${\mc{B}_\textsc{u}^\textsc{l}} \sim {1.2}$.} \label{fig:corner_170809_170814} \end{figure*} \section{Introduction} \label{sec:intro} \input{intro.tex} \section{No evidence of lensing magnification} \label{sec:magnification} \input{lensingmag.tex} \section{No evidence of multiple images} \label{sec:multipleimages} \input{multiimage.tex} \section{No evidence of wave optics effects} \label{sec:waveoptics} \input{waveoptics.tex} \section{Outlook} \label{sec:outlook} \input{outlook.tex} \bigskip \paragraph{Acknowledgments:} We thank the LIGO Scientific Collaboration and Virgo Collaboration for providing the data of binary black hole observations during the first two observation runs of Advanced LIGO and Virgo.
PA's research was supported by the Science and Engineering Research Board, India through a Ramanujan Fellowship, by the Max Planck Society through a Max Planck Partner Group at ICTS-TIFR, and by the Canadian Institute for Advanced Research through the CIFAR Azrieli Global Scholars program. SK acknowledges support from a National Post Doctoral Fellowship (PDF/2016/001294) of the Science and Engineering Research Board, India. OAH is supported by the Hong Kong Ph.D. Fellowship Scheme (HKPFS) issued by the Research Grants Council (RGC) of Hong Kong. The work described in this paper was partially supported by grants from the Research Grants Council of Hong Kong (Project Nos. CUHK 14310816, CUHK 24304317 and CUHK 14306218) and the Direct Grant for Research from the Research Committee of the Chinese University of Hong Kong. KKYN acknowledges the support of the National Science Foundation and the LIGO Laboratory. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation and operates under cooperative agreement PHY-0757058. Computations were performed at the ICTS cluster Alice and the LIGO Hanford cluster. KH, SK, AKM and PA thank Tejaswi Venumadhav, B. Sathyaprakash, Jolien Creighton, Xiaoshu Liu, Ignacio Magana Hernandez and Chad Hanna for useful discussions. OAH, KKYN and TGFL also acknowledge useful input from Peter~T.~H.~Pang and Rico~K.~L.~Lo. \bibliographystyle{apj}
\section{Introduction} The study of high energy scattering near the horizon of a black hole has led to an improved perspective on quantum chaos \cite{LO, Dray:1984ha, SS, KitaevTalks, MSS}. The scrambling of information near the horizon of the black hole is related to the chaotic spread of information in the boundary quantum system. The signature of quantum chaos used in this context is the exponential growth of squared commutators $\langle |[A(0),B(t)]|^2 \rangle_\beta$ evaluated in a thermal state with inverse temperature $\beta$ \cite{KitaevTalks,LO}. The main object needed to compute this double commutator is the real part of the out-of-time-ordered correlator (OTOC) $\langle A^\dagger(0) B^\dagger(t) A(0) B(t) \rangle_\beta$. The connected part of the OTOC grows exponentially with time in a chaotic system, with a rate defined as the Lyapunov exponent $\lambda$. This growth happens long after the dissipation time $t_d$ controlled by the thermalization scale of time-ordered correlators. At the much larger scrambling time $t_{\rm sc}$ the connected part of the OTOC, and also the commutator, saturate. In reference \cite{SS} it was explained in detail how, in holographic theories, the bulk computation of an OTOC is given by a high energy scattering near the black hole horizon \cite{Dray:1984ha}. The result is a convolution between wavefunctions (bulk-boundary propagators), which evolve the particles from the boundary to the near horizon region, and a local high energy S-matrix near the horizon. A high energy gravitational scattering is equivalent to a classical shockwave interaction \cite{Dray:1984ha}. This gives a Lyapunov exponent $\lambda = 2 \pi / \beta$, which was shown in \cite{MSS} to be maximal.
When doing this calculation in string theory, and assuming inelastic effects are subleading, reference \cite{SS}, building upon \cite{Brower:2006ea}, explains how the sum over all stringy modes is equivalent to an effective elastic Pomeron, which produces a perturbative correction $\lambda = \frac{2\pi }{\beta} (1 - \mathcal{O}(\alpha'))$, where $\alpha'$ is related to the string tension; this is analogous to the flat space Regge asymptotics. On the other hand, the saturation of the OTOC at the scrambling time is related to the decay of the bulk-boundary propagators and therefore to the quasinormal modes of the black hole. So far the attention has focused on OTOC that appear in double commutators, since they are more directly related to the definition of chaos of \cite{LO} explained above. In this note we will analyze general OTOC between four arbitrary operators. For holographic theories, the importance of these quantities is more evident in the bulk. When the operators are different, the bulk scattering is completely inelastic and the Pomeron controlling these OTOC does not necessarily have the quantum numbers of the vacuum, for example. In particular, gravity plays no role in the local near horizon bulk interaction, and the growing piece of these correlators probes interactions that are not universal. Therefore it is interesting to study them to get more information about the bulk. In this paper we will extend the chaos bound and constrain arbitrary out-of-time-ordered correlators. The assumptions we will use are the same as in the original chaos bound of \cite{MSS}. In particular, we will focus for simplicity on hermitian operators, although we will relax this assumption later.
For the exponential ansatz of an OTOC we will use the notation \begin{eqnarray}\label{intro:eq:} {\rm Re}~{\rm Tr} [ y A(0) y B(t) y C(0) y D(t) ] \approx F_d - \varepsilon_{ABCD} e^{\lambda_{ABCD} t}, \end{eqnarray} where by $F_d$ we denote the order one factorized approximation (which implicitly also depends on the choice of operators) and $\varepsilon$ is a small correction. Following \cite{MSS} we define $y$ such that $y^4=e^{-\beta H}/Z$. For systems with a large number of degrees of freedom $N$ the amplitude of the growing piece is of order $\varepsilon\sim N^{-2}$, while the factorized piece is generically of order one. Unless the four operators $A$, $B$, $C$ and $D$ are all different, the correlator in \eqref{intro:eq:} is real. Correlators involving combinations $ABAB$, $ABCB$ or $ABAD$ are all real. We will refer to configurations such as $ABAB$ that appear in double commutators as `diagonal' or `elastic' OTOC, while we will refer to the generic OTOC as `off-diagonal' or `inelastic' correlators. In this notation, the chaos bound of \cite{MSS} is a statement about the positivity of the prefactor $\varepsilon_{ABAB}>0$ and a bound on the growth rate $\lambda_{ABAB} \leq 2\pi/\beta$. This is valid for any choice of $A$ and $B$, even though in most examples the Lyapunov exponent is independent of the choice of operators. We will show (see details in section \ref{sec:proof}) that the inelastic OTOC for four different operators is similarly constrained \begin{equation} \lambda_{ABCD} \leq \lambda_{\rm diag} \leq \frac{2\pi}{\beta}, \end{equation} where $\lambda_{\rm diag} \equiv {\rm min} (\lambda_{ABAB},\lambda_{ADAD},\lambda_{CBCB}, \lambda_{CDCD})$. This means a generic off-diagonal OTOC cannot grow faster than diagonal ones. From the gravity side, this puts a bound on the spin of the effective mode controlling this interaction: it cannot be bigger than $2$.
It is reasonable to expect all diagonal and off-diagonal OTOC for arbitrary $A$, $B$, $C$, $D$ to grow with the same rate $\lambda_L$, even if not maximal \footnote{For example in the SYK model \cite{Sachdev:1992fk, KitaevTalks,Maldacena:2016hyu,Kitaev:2017awl,Gu:2018jsv} the exponentially growing piece always comes from the same set of ladder diagrams, regardless of how we glue it to external operators.}. With this assumption, we can also bound the amplitude of the growing piece. In the general case of four different operators the constraint is presented in section \ref{sec:proof}. If we take two of the operators to be the same then we can write a simpler version \begin{equation}\label{ec:introbound} (\varepsilon_{ABCB})^2 \leq \varepsilon_{ABAB} \varepsilon_{CBCB}, \end{equation} and similarly for $\varepsilon_{ABAD}$. The same structure is present for the case of an OTOC between four different operators: $\varepsilon_{ABCD}$ is bounded by the prefactors appearing in diagonal correlators. From the gravity side this puts a bound on how strongly matter can couple to the effective mode controlling this interaction. In the context of holographic theories, the coefficient on the left hand side of the inequality \eqref{ec:introbound} is given by an inelastic scattering between particles in the bulk, which in general does not involve graviton exchange. It is interesting to see that we can bound such a process by the right hand side, which is universally fixed by gravitational interactions and the equivalence principle. Even though the inelastic coupling $\varepsilon_{ABCD}$ does not necessarily have a definite sign, its magnitude cannot be bigger than a mean of the gravitational couplings. In this context, this analysis suggests that gravity is the highest spin interaction, and the strongest with that spin. In section \ref{sec:nonh} we generalize the chaos bound to non-hermitian operators.
Then similar constraints on an OTOC between four non-hermitian operators can be derived. In section \ref{sec:2dCFT} we make some comments regarding the behavior of inelastic OTOC in 2d CFT. In \cite{Roberts:2014ifa} the authors show how the maximal chaos exponent is controlled by the dominance of the identity Virasoro block. For an OTOC between different operators the identity channel does not appear in the OPE expansion. Using semiclassical expressions for non-vacuum blocks at large central charge $c$ we study the behavior of off-diagonal OTOC. In particular, we see that before the scrambling time, Virasoro descendants are not important on the second sheet. This shows how gravity naively plays no role in the physics of these OTOC. After the scrambling time, at which the exponential growth stops, we show how quasinormal modes dictate the decay of the OTOC. We conclude in section \ref{Sec:conc} with open questions and future directions. \section{Constraints on generic OTOC}\label{sec:proof} In this section we will show how to bound general OTOC between arbitrary operators. The argument is simple but requires some notation. In order to set it up, we will begin by stating the chaos bound from \cite{MSS}, which we will refer to as the elastic chaos bound. In \cite{MSS} the authors focus on a particular correlator \begin{equation} F(t) \equiv {\rm Tr} [ y A(0) y B(t) y A(0) y B(t) ], \end{equation} between hermitian operators $A$ and $B$, where divergences are regularized by placing the operators symmetrically along the Euclidean circle of size $\beta$. This is implemented by inserting the operators $y$ defined as $y^4=e^{-\beta H}/Z$. The motivation for considering such correlators comes from their relation to the squared commutator between operators $A(0)$ and $B(t)$. We will consider times that are much larger than the dissipation time $t_d$ but smaller than the scrambling time $t_{sc}$, which we assume to be parametrically larger (as in, for example, large $N$ theories).
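The hierarchy $t_d \ll t_{\rm sc}$ can be made concrete from the exponential ansatz used throughout: the growing piece $\varepsilon\, e^{\lambda t}$ with $\varepsilon \sim N^{-2}$ becomes comparable to the order-one factorized piece at $t_{\rm sc} = \lambda^{-1}\log(F_d/\varepsilon)$. A minimal numerical sketch (the values of $\beta$ and $N$ below are arbitrary):

```python
import math

# Scrambling-time estimate from the ansatz F(t) = F_d - eps * exp(lam * t):
# the growing piece reaches the order-one factorized piece at
# t_sc = log(F_d / eps) / lam.  Numbers are purely illustrative.
beta = 1.0
lam = 2 * math.pi / beta     # maximal Lyapunov exponent, 2*pi/beta
N = 1e4                      # number of degrees of freedom
eps = N ** -2                # amplitude of the growing piece, eps ~ N^-2
F_d = 1.0                    # order-one factorized contribution
t_sc = math.log(F_d / eps) / lam
print(t_sc)                  # equals (beta / 2 pi) * log(N^2)
```

For large $N$ this gives $t_{\rm sc} \sim (\beta/2\pi)\log N^2$, parametrically larger than the dissipation scale $t_d \sim \beta$.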
In this regime the OTOC is almost constant and given to leading order by its factorized contribution $F_d$, where \begin{equation} F_d \approx {\rm Tr} [ y^2 A y^2 A] {\rm Tr} [ y^2 B y^2 B]. \end{equation} For chaotic systems we expect the subleading behavior to be exponential, \begin{equation}\label{eq:ans} F(t) = F_d - \varepsilon ~e^{\lambda t} +\ldots \end{equation} where $\lambda$ is the Lyapunov exponent of the system. The parameter $\varepsilon$ is a small constant which controls the scrambling time at which the OTOC decays. For a large $N$ system it is of order $\varepsilon \sim N^{-2}$. From now on we will denote prefactors of exponentially growing terms by $\varepsilon$ to indicate that they are small compared to the factorized term. The chaos bound from \cite{MSS} states that the quantity $F(t)$ is bounded by the right hand side of equation \eqref{eq:ans} with both \begin{equation} \varepsilon \geq 0 ~~~{\rm and}~~~\lambda \leq \frac{2\pi}{\beta}. \end{equation} We will take this as our starting point for the generalizations below. Therefore we will implicitly use the same assumptions and caveats as in \cite{MSS}. \subsection{An inelastic chaos bound} In this section we will prove the bound stated in the introduction regarding OTOC between four different operators. We will focus first on hermitian operators. The upshot is that the growing piece of a general OTOC cannot grow faster than exponentially with the maximal rate $\lambda = 2\pi/\beta$. We will also see how to put a bound on the magnitude of the growing piece. To simplify the presentation, we will go over the argument in steps. We will first generalize the chaos bound to a correlator ${\rm Tr} [ y A(0) y B(t) y C(0) y B(t) ]$. This OTOC is real for arbitrary operators since \begin{equation}\label{eq:realABCB} {\rm Tr} [ y A y B(t) y C y B(t) ]^\dagger = {\rm Tr} [ B(t) y C y B(t) y A y ]={\rm Tr} [ y A y B(t) y C y B(t) ], \end{equation} where we used that the operators are hermitian.
In the first equality we applied hermitian conjugation inside the trace and in the second we used the cyclic property of the trace\footnote{Similarly, one can show that $ {\rm Tr} [ y A y B(t) y A y D(t) ]$ is real and symmetric under exchange of $B$ and $D$. }. Moreover, the OTOC is also symmetric under the exchange of $A$ and $C$, \begin{equation}\label{eq:symABCB} {\rm Tr} [ y A y B(t) y C y B(t) ]={\rm Tr} [ y C y B(t) y A y B(t) ], \end{equation} which follows from the cyclic property of the trace. To bound ${\rm Tr} [ y A(0) y B(t) y C(0) y B(t) ]$ we will analyze a diagonal correlator of the form \begin{eqnarray}\label{eq:corrvwvw} F(t)= {\rm Tr} [ y V y B(t) y V y B(t) ] ,~~~ V=\alpha_1 A + \alpha_2 C, \end{eqnarray} for arbitrary real coefficients $\alpha_1$ and $\alpha_2$. To simplify the notation, we omit the time argument when the operator is inserted at $t=0$. Expanding each term in the right hand side of equation \eqref{eq:corrvwvw} gives \begin{eqnarray}\label{eq:ABCBexpand} F(t) &=& \alpha_1^2 {\rm Tr} [ y A y B(t) y A y B(t) ]+\alpha_2^2 {\rm Tr} [ y C y B(t) y C y B(t) ] \nn && + 2 \alpha_1 \alpha_2 {\rm Tr} [ y A y B(t) y C y B(t) ]. \end{eqnarray} This contains the correlator we want to bound. Therefore, by using the information we learn from the chaos bound on diagonal OTOC, we can bound off-diagonal OTOC such as ${\rm Tr} [ y A y B(t) y C y B(t) ]$. Before we move on we can write an ansatz for these OTOC similar to equation \eqref{eq:ans}. For concreteness and to set notation we write \begin{eqnarray} {\rm Tr} [ y A y B(t) y A y B(t) ] &=& F^{AA}_d - \varepsilon_{AA}~ e^{\lambda_{AA} t},\\ {\rm Tr} [ y C y B(t) y C y B(t) ] &=& F^{CC}_d-\varepsilon_{CC} ~e^{\lambda_{CC} t},\\ {\rm Tr} [ y A y B(t) y C y B(t) ] &=&F^{AC}_d- \varepsilon_{AC} ~e^{\lambda_{AC} t}, \end{eqnarray} where we indicate the dependence on the operators of the factorized leading contribution $F_d$, the amplitude of the growing piece $\varepsilon$, and the rate $\lambda$.
We leave the dependence on the operator $B$ implicit. From \eqref{eq:realABCB} and \eqref{eq:symABCB} we know that $F_d^{AC}=F_d^{CA}$, $\varepsilon_{AC}=\varepsilon_{CA}$ and $\lambda_{AC}=\lambda_{CA}$ are real. To leading order, the right hand side of equation \eqref{eq:ABCBexpand} above is approximately constant in time, order one, and given by \begin{equation} F_d \approx \alpha_1^2 F_d^{AA} + \alpha_2^2 F_d^{CC} + 2 \alpha_1 \alpha_2 F_d^{AC}. \end{equation} This quantity is positive for any choice of $\alpha_1$ and $\alpha_2$. This can be shown using Cauchy-Schwarz or more directly by starting from expression \eqref{eq:corrvwvw} in terms of $V$. Moreover, we might also diagonalize the $2\times 2$ matrix of two-point functions between $A$ and $C$ such that $F_d^{AC}=0$, making the equation above manifestly positive. Next, we will focus on the subleading piece growing in time. We will consider first the most general case, where all growth exponents are allowed to be different. Using this ansatz we can write the subleading part of the OTOC as \begin{equation}\label{eq:ABCBsublead} F_d - F(t)= \alpha_1^2 \varepsilon_{AA} e^{\lambda_{AA} t} + \alpha_2^2 \varepsilon_{CC} e^{\lambda_{CC} t} + 2 \alpha_1 \alpha_2 \varepsilon_{AC} e^{\lambda_{AC} t}. \end{equation} Since $\alpha_1$ and $\alpha_2$ are arbitrary coefficients, the elastic chaos bound lets us conclude that $\lambda_{AA}, \lambda_{CC}, \lambda_{AC} \leq 2\pi/\beta$. Otherwise we could form a linear combination of $A$ and $C$ such that $F(t)$ would violate the chaos bound. Moreover, we can also argue that $\lambda_{AC} \leq {\rm min}(\lambda_{AA},\lambda_{CC})$. Otherwise the mixed term would eventually dominate and we could choose the signs of the $\alpha$'s to give a negative prefactor, also violating the chaos bound. In other words, $\lambda_{AC} > {\rm min}(\lambda_{AA},\lambda_{CC})$ would imply $\varepsilon_{AC}=0$. Above we considered the most general case.
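The step from positivity of the growing piece for all $\alpha_{1,2}$ to a constraint on $\varepsilon_{AC}$ is just positive semi-definiteness of a $2\times 2$ quadratic form, which is easy to check numerically; a sketch (the sample values of the $\varepsilon$'s are made up for illustration):

```python
import numpy as np

def growing_piece_nonneg(e_aa, e_cc, e_ac, grid=np.linspace(-5, 5, 201)):
    """Scan alpha1^2*e_aa + alpha2^2*e_cc + 2*alpha1*alpha2*e_ac over a grid
    of (alpha1, alpha2) pairs and report whether it stays non-negative."""
    a1, a2 = np.meshgrid(grid, grid)
    form = a1**2 * e_aa + a2**2 * e_cc + 2 * a1 * a2 * e_ac
    return bool(np.all(form >= -1e-12))

# e_ac^2 <= e_aa * e_cc: the form is non-negative for every alpha choice.
print(growing_piece_nonneg(1.0, 4.0, 1.5))   # True (1.5**2 = 2.25 <= 4)
# e_ac^2 > e_aa * e_cc: some linear combination goes negative, which would
# violate the elastic chaos bound.
print(growing_piece_nonneg(1.0, 4.0, 2.5))   # False (2.5**2 = 6.25 > 4)
```

The grid scan is a stand-in for the analytic statement that the quadratic form is non-negative for all real $\alpha_1$, $\alpha_2$ exactly when its discriminant condition $\varepsilon_{AC}^2 \leq \varepsilon_{AA}\varepsilon_{CC}$ holds.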
Now we will assume that all OTOC grow with the same rate $\lambda$. In this case the chaos bound on the sign of the prefactor gives us a bound \begin{equation} \alpha_1^2 \varepsilon_{AA} + \alpha_2^2 \varepsilon_{CC} + 2 \alpha_1 \alpha_2 \varepsilon_{AC} \geq 0, ~~~~\forall ~\alpha_1,\alpha_2, \end{equation} coming from the diagonal chaos bound applied to the right hand side of equation \eqref{eq:ABCBsublead}. This condition is equivalent to the following constraint \begin{equation}\label{eq:boundabac} \varepsilon_{AC}^2 \leq \varepsilon_{AA} \varepsilon_{CC}. \end{equation} From this condition we see that even though we can constrain the growth of $\langle A B CB \rangle$, the chaos bound does not constrain the sign of the correction, which could be positive or negative but with a magnitude bounded by $\sqrt{\varepsilon_{AA} \varepsilon_{CC}}$. This is analogous to the ANEC case studied in \cite{CMT} (see also \cite{Meltzer:2017rtf}). Having done this, the obvious next step is to consider other linear combinations. One option is \begin{equation}\label{eq:ABADlc} F(t)= {\rm Tr} [ y A y W(t) y A y W(t) ] ,~~~ W=\alpha_1 B + \alpha_2 D. \end{equation} The chaos bound applied to this correlator gives bounds analogous to the previous analysis for the (real) correlator $ {\rm Tr} [ y A y B(t) y A y D(t) ]$. Namely, the growing piece cannot grow too fast and its amplitude cannot be bigger than the diagonal ones. Instead, to obtain new bounds, we will consider \begin{equation}\label{eq:ABCDlc} F(t)= {\rm Tr} [ y A y W(t) y C y W(t) ] ,~~~ W=\alpha_1 B + \alpha_2 D, \end{equation} with real coefficients $\alpha_1$ and $\alpha_2$. Then we can use the inelastic chaos bound derived above to constrain ${\rm Tr} [ y A y B(t) y C y D(t) ]$. We again assume an exponential ansatz for each term.
A new feature of the most general case is that the mixed term is no longer real, since \begin{equation} {\rm Tr} [ y A y B(t) y C y D(t) ]^\dagger = {\rm Tr} [ y A y D(t) y C y B(t) ] = {\rm Tr} [ y C y B(t) y A y D(t) ]. \end{equation} This means that the exchanges $A \leftrightarrow C$ or $B \leftrightarrow D$ are related by complex conjugation. Only a simultaneous exchange of $A \leftrightarrow C$ and $B \leftrightarrow D$ is a symmetry. From expanding the right hand side of \eqref{eq:ABCDlc} we see it is only sensitive to the real part of ${\rm Tr} [ y A y B(t) y C y D(t) ]$. To set notation we write the exponential ansatz for the correlators as \begin{eqnarray} {\rm Tr} [ y A y B(t) y A y B(t) ] &=& F_d^{ABAB} - \varepsilon_{ABAB} ~e^{\lambda_{ABAB} t}, \\ {\rm Tr} [ y A y B(t) y C y B(t) ] &=& F_d^{ABCB} - \varepsilon_{ABCB} ~e^{\lambda_{ABCB} t},\\ {\rm Re}~{\rm Tr} [ y A y B(t) y C y D(t) ] &=& F_d^{ABCD} - \varepsilon_{ABCD} ~e^{\lambda_{ABCD} t}. \end{eqnarray} In these expressions all quantities on the right hand side are real. In the third line, after taking the real part, the quantities are symmetric under independently exchanging $A$ and $C$ or $B$ and $D$. Now we can expand the right hand side of \eqref{eq:ABCDlc}. Again, the factorized contributions $F_d$ give a leading constant piece for the correlator in \eqref{eq:ABCDlc} and we will focus on the subleading growing piece. From the constraint on the growth rate in time we obtain the bound quoted in the introduction \begin{equation} \lambda_{ABCD} \leq \lambda_{\rm diag} \leq \frac{2\pi}{\beta}, \end{equation} where $\lambda_{\rm diag} \equiv {\rm min} (\lambda_{ABAB},\lambda_{ADAD},\lambda_{CBCB}, \lambda_{CDCD})$. Namely, if the rates of growth are different for each term, we can say that $\lambda_{ABCD}$ is smaller than the minimum of $\lambda_{ABAB}$, $\lambda_{CBCB}$, etc., which are all smaller than $2\pi/\beta$ by the chaos bound.
Similarly to the previous case, we can assume all OTOC have the same rate of growth, and then we can also bound the amplitude $\varepsilon$ of the growing piece. The bound we obtain from the previous analysis, equation \eqref{eq:boundabac}, is \begin{eqnarray}\label{gencond} &&\hspace{-0.5cm}(\alpha_1^2 \varepsilon_{ABAB} +\alpha_2^2 \varepsilon_{ADAD} + 2 \alpha_1 \alpha_2 \varepsilon_{ABAD} )(\alpha_1^2 \varepsilon_{CBCB} +\alpha_2^2 \varepsilon_{CDCD} + 2 \alpha_1 \alpha_2 \varepsilon_{CBCD} )\nn &&~~~-(\alpha_1^2 \varepsilon_{ABCB} +\alpha_2^2 \varepsilon_{ADCD} + 2 \alpha_1 \alpha_2 \varepsilon_{ABCD} )^2\geq 0, \end{eqnarray} which should be satisfied for any choice of $\alpha_1$, $\alpha_2$. Since this condition is invariant under a rescaling $\alpha_i \to \lambda \alpha_i$, we can fix $\alpha_1=1$. Then condition \eqref{gencond} is equivalent to the positivity of a quartic polynomial in the variable $\alpha_2$ with coefficients depending on the $\varepsilon$'s.\footnote{A general quartic polynomial $P(x)=a x^4 + b x^3 + c x^2 + d x + e $ is positive if $a,e>0$ and it has four complex roots, which requires a positive discriminant $\Delta(P)\geq0$ together with $8 ac-3b^2\geq0$.} When these conditions are written in terms of the amplitudes $\varepsilon$ they look algebraically complicated and not very enlightening. To simplify the discussion we can use the previous bounds $\varepsilon_{ABAD}^2 \leq \varepsilon_{ABAB} \varepsilon_{ADAD}$ and $ \varepsilon_{CBCD}^2 \leq \varepsilon_{CBCB} \varepsilon_{CDCD}$ to complete the square in the first line of equation \eqref{gencond}. Then we can derive a non-optimal bound on the most generic $\varepsilon_{ABCD}$ as \begin{equation} \varepsilon_{ABCD} ^2 \leq 4( \sqrt{ \varepsilon_{ADAD}\varepsilon_{CBCB}} +\sqrt{ \varepsilon_{ABAB}\varepsilon_{CDCD}})^2.
\end{equation} Even though this bound is not optimal, it shows in a more transparent way how the prefactor of the off-diagonal OTOC is bounded by the diagonal ones. This is the main conceptual point: in a holographic setting it shows how a generic interaction is bounded by the gravitational one. As a final comment, we can also consider a correlator of the type \begin{equation} F(t)= {\rm Tr} [ y V y W(t) y V y W(t) ] ,~~~ V=\alpha_1 A + \alpha_2 C,~~{\rm and}~~W=\alpha_3 B + \alpha_4 D. \end{equation} One might wonder whether, by varying all the $\alpha$'s independently, the $\varepsilon>0$ constraint for $F(t)$ yields a bound on $\varepsilon_{ABCD}$ stronger than the one above, which was derived in steps. It is easy to see that this is not the case: considering the most general linear combination does not generate new constraints compared to equation \eqref{gencond}. \subsection{Non-Hermitian Operators}\label{sec:nonh} So far we have discussed OTOC between arbitrary hermitian operators. Some of the steps of the chaos bound argument from \cite{MSS} do not directly work for non-hermitian operators. We will show here that the bound for hermitian operators is enough to prove this generalization. Consider the following OTOC between general non-hermitian operators \begin{equation}\label{eq:vdwdvw} F(t)= {\rm Tr} [ y V^\dagger y W^\dagger(t) y V y W(t) ]. \end{equation} We will show how the bound on ${\rm Re}~F(t)$ derives from the bound on hermitian operators. This quantity is related to the double commutator between non-hermitian $V$ and $W$ that appears in the definition of chaos. We will expand a general non-hermitian operator $\mathcal{O}$ in two hermitian components $\mathcal{O}_R = (\mathcal{O}+\mathcal{O}^\dagger)/2$ and $\mathcal{O}_I = (\mathcal{O}-\mathcal{O}^\dagger)/(2i)$, for $\mathcal{O}=V,W$.
To simplify the expressions below, we will use a shorthand for the OTOC, defining $\langle ABCD\rangle \equiv {\rm Tr} [ y A(0) y B(t) y C(0) y D(t)]$. Starting from the right hand side of \eqref{eq:vdwdvw}, expanding and using the cyclic property of the trace, it is easy to show that \begin{eqnarray}\label{eq:nonhermtotl} {\rm Re} ~F(t)&=&{\rm Re}~{\rm Tr} [ y (V_R-i V_I) y W^\dagger(t) y (V_R+i V_I) y W(t) ] \nn &=&{\rm Re}~[ \langle V_R W^\dagger V_R W\rangle + \langle V_I W^\dagger V_I W\rangle- i (\langle V_I W^\dagger V_R W\rangle-\langle V_R W^\dagger V_I W\rangle)]\nn &=&\langle V_R W^\dagger V_R W\rangle + \langle V_I W^\dagger V_I W\rangle, \end{eqnarray} where we used that $ \langle V_I W^\dagger V_R W\rangle^* = \langle W^\dagger V_R W V_I \rangle = \langle V_I W^\dagger V_R W\rangle$, and similarly for $\langle V_R W^\dagger V_I W\rangle$, implying that they are both real and therefore that the last term on the right hand side of the equation above is purely imaginary. Now we can expand $W$ and use that $\langle A B C B\rangle$ is real for hermitian operators, to show \begin{equation}\label{eqnonh} {\rm Re} ~F(t) = \langle V_R W_R V_R W_R \rangle + \langle V_R W_I V_R W_I\rangle +\langle V_I W_R V_I W_R \rangle + \langle V_I W_I V_I W_I\rangle. \end{equation} Then, if we write ${\rm Re} ~F(t) = F_d - \varepsilon ~e^{\lambda t}$, the chaos bound on the growth rate automatically applies to each term on the right hand side individually, implying $\lambda \leq 2\pi/\beta$. Moreover, since all terms appear with a plus sign, the bound on the sign of the prefactor still applies, implying $\varepsilon\geq0$. Taking this chaos bound for non-hermitian operators as a starting point, we can derive results analogous to those of the previous section for general non-hermitian operators.
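The decomposition in \eqref{eqnonh} is exact algebra at finite $N$, so it can be verified directly with random matrices. A minimal sketch, assuming nothing beyond the definitions above (a finite-dimensional Hilbert space, $y^4 = e^{-\beta H}/Z$, and Heisenberg evolution):

```python
import numpy as np

rng = np.random.default_rng(7)
n, beta, t = 5, 0.7, 0.9

def rand_mat():
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Random Hamiltonian; y is a quarter of the thermal circle, y^4 = exp(-beta H)/Z.
H = rand_mat()
H = (H + H.conj().T) / 2
w, U = np.linalg.eigh(H)
Z = np.exp(-beta * w).sum()
y = U @ np.diag(np.exp(-beta * w / 4)) @ U.conj().T / Z**0.25
Ut = U @ np.diag(np.exp(1j * w * t)) @ U.conj().T          # e^{i H t}

def otoc(A, B, C, D):
    """Tr[y A y B(t) y C y D(t)] with B, D evolved to time t."""
    Bt, Dt = Ut @ B @ Ut.conj().T, Ut @ D @ Ut.conj().T
    return np.trace(y @ A @ y @ Bt @ y @ C @ y @ Dt)

V, W = rand_mat(), rand_mat()                              # non-hermitian
VR, VI = (V + V.conj().T) / 2, (V - V.conj().T) / (2j)
WR, WI = (W + W.conj().T) / 2, (W - W.conj().T) / (2j)

lhs = otoc(V.conj().T, W.conj().T, V, W).real
rhs = sum(otoc(A, B, A, B).real for A in (VR, VI) for B in (WR, WI))
print(abs(lhs - rhs))    # agrees to machine precision
```

The check confirms that ${\rm Re}\,F(t)$ for non-hermitian $V$, $W$ is exactly a sum of four diagonal OTOC of hermitian operators, which is why the hermitian bound carries over.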
\section{An Example: 2d CFTs}\label{sec:2dCFT} In the context of 2d CFTs one can show that at large $c$ a large twist gap is enough to obtain maximal chaos, by using results from semiclassical limits of Virasoro conformal blocks \cite{Roberts:2014ifa} \footnote{Other studies of chaos in 2d CFT from different perspectives can be found in \cite{Jackson:2014nla, Turiaci:2016cvo} (see also \cite{Perlmutter:2016pkf}). }. This is given purely by a product of stress tensors acting on the identity and can therefore be interpreted as coming, in the bulk, from a purely gravitational interaction. In this section we want to study, in a simple setup, what role the inelastic version of the chaos bound plays for large $c$ 2d CFTs with a sparse spectrum. Under some assumptions, we will study the general behavior of off-diagonal OTOC. We propose that a resummation of intermediate channels can be written as a single non-vacuum block corresponding to an operator with an effective dimension and an effective spin. From a bulk perspective this constrains matter interactions (OPE coefficients of arbitrary operators) using the chaos bound. \subsection{Kinematics} In any 2d CFT an arbitrary four point function can be written as \begin{equation}\label{eq:4pftnction} \langle W_1(z_1,\bar{z}_1)W_2 (z_2,\bar{z}_2) V_3 (z_3, \bar{z}_3) V_4 (z_4,\bar{z}_4) \rangle = \frac{1}{z_{12}^{h_1+h_2} z_{34}^{h_3+h_4} }\frac{1}{\bar{z}_{12}^{\bar{h}_1+\bar{h}_2} \bar{z}_{34}^{\bar{h}_3+\bar{h}_4} } G(z,\bar{z}) \end{equation} where $G(z,\bar{z})$ can be expanded in Virasoro conformal blocks and the cross-ratio is defined as $z=\frac{z_{12}z_{34}}{z_{13}z_{24}}$, with a similar anti-holomorphic version. The operators are arbitrary, but we use the letters $V$ and $W$ to indicate which ones will be at time $0$ (the $V$'s) and which at time $t$ (the $W$'s).
Schematically we expand the four-point function as \begin{equation}\label{eq:OPEdef} G(z,\bar{z}) = \sum_{p} C_{12p}C_{34p}~ \mathcal{F} \big[{}^{h_1}_{h_2}{}^{h_3}_{h_4}\big](h_p, z)~ \mathcal{F}\big[{}^{\bar{h}_1}_{\bar{h}_2}{}^{\bar{h}_3}_{\bar{h}_4}\big](\bar{h}_p,\bar{z}) \end{equation} where $\mathcal{F} \big[{}^{h_1}_{h_2}{}^{h_3}_{h_4}\big](h_p, z)$ are the Virasoro blocks corresponding to a Virasoro primary operator $\mathcal{O}_p$ with (anti)holomorphic weights $h_p$ ($\bar{h}_p$), dimension $\Delta = h_p + \bar{h}_p$ and spin $s=|h_p-\bar{h}_p|$. The blocks are normalized such that $\mathcal{F}_h(z) = z^h(1+\ldots)$ in a small $z$ expansion. In going from Euclidean to Lorentzian signature, different orderings of operators are encoded in the monodromy of the blocks \cite{Rehren:1987ur}. Following \cite{Roberts:2014ifa} we choose the kinematics of the correlator to be \begin{eqnarray} &&z_1 = e^{\frac{2\pi}{\beta} ( t+i\epsilon_1)},~~~~\hspace{0.08cm}z_2 = e^{\frac{2\pi}{\beta} ( t+i\epsilon_2)},~~~~\hspace{0.08cm}z_3 = e^{\frac{2\pi}{\beta} ( x+i\epsilon_3)},~~~z_4 = e^{\frac{2\pi}{\beta} ( x+i\epsilon_4)},\nn &&\bar{z}_1 = e^{-\frac{2\pi}{\beta} ( t+i\epsilon_1)},~~~\bar{z}_2 = e^{-\frac{2\pi}{\beta} ( t+i\epsilon_2)},~~~\bar{z}_3 = e^{\frac{2\pi}{\beta} ( x-i\epsilon_3)},~~~\bar{z}_4 = e^{\frac{2\pi}{\beta} ( x-i\epsilon_4)} \end{eqnarray} where $\epsilon_1<\epsilon_3<\epsilon_2<\epsilon_4$ and, as we raise the time from $0$ to $t$, the cross-ratio $z$ goes once around $z=1$ while $\bar{z}$ remains on the first sheet. For times larger than the dissipation time, which is of order $\beta$, we can evaluate the blocks on the second sheet with cross-ratios $z \approx - \epsilon_{12}^\star \epsilon_{34} e^{\frac{2\pi}{\beta}(x-t)}$ and $\bar{z} \approx - \epsilon_{12}^\star \epsilon_{34} e^{-\frac{2\pi}{\beta}(x+t)}$, where $\epsilon_{ij} = i (e^{\frac{2\pi}{\beta} i \epsilon_i} - e^{\frac{2\pi}{\beta} i \epsilon_j})$.
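For the equally spaced configuration ($\epsilon_j$ separated by $\beta/4$ in the order $\epsilon_1<\epsilon_3<\epsilon_2<\epsilon_4$), the combination $\epsilon_{12}^\star\epsilon_{34}$ can be evaluated directly. A quick numeric check (the value of $\beta$ is arbitrary, since only the ratios $\epsilon_j/\beta$ enter the phases):

```python
import cmath, math

beta = 2.0
# Equally spaced insertions on the thermal circle, ordered as
# eps_1 < eps_3 < eps_2 < eps_4.
eps = {1: 0.0, 3: beta / 4, 2: beta / 2, 4: 3 * beta / 4}

def phase(e):
    return cmath.exp(2j * math.pi * e / beta)

def eps_ij(i, j):
    # eps_ij = i (e^{2 pi i eps_i / beta} - e^{2 pi i eps_j / beta})
    return 1j * (phase(eps[i]) - phase(eps[j]))

prod = eps_ij(1, 2).conjugate() * eps_ij(3, 4)
print(prod)          # reproduces eps_12^* eps_34 = 4i
x, t = 0.0, 1.0
z = -prod * cmath.exp(2 * math.pi * (x - t) / beta)
print(z)             # the second-sheet cross-ratio -4i e^{2 pi (x - t)/beta}
```

This reproduces the numerical factor used below for the chaos regime of the cross-ratios.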
For a configuration with operators equally spaced on the thermal circle, $\epsilon_{12}^\star \epsilon_{34}=4i$, so that $z = -4 i e^{\frac{2\pi}{\beta}(x-t)}$ and $\bar{z} = - 4i e^{-\frac{2\pi}{\beta}(x+t)}$. If we normalize operators by their two-point function on the plane, $\langle O (x) O(0) \rangle =x^{-2\Delta_O}$, then each term in the factorized answer for the four-point function gives a factor of ${\rm Tr} [ y^2 O y^2 O] = (\pi/\beta )^{2\Delta_O}$. This factor also arises from the position-dependent prefactor on the right-hand side of equation \eqref{eq:4pftnction} when the operators are all different. \subsection{Elastic case} In this case we take the two operators at $t=0$ and $t$ to be the same, $V_3 = V_4 = V$ and $W_1 = W_2 = W$. Then the identity appears in the intermediate channel of the OPE written above in equation \eqref{eq:OPEdef}. Assuming a large twist gap, we can approximate the full correlator by the vacuum block. Following \cite{Roberts:2014ifa} we take the large-$c$ limit with $h_1/c$ fixed and small, and $h_3\gg 1$. Then we can make use of the heavy-light semiclassical blocks found in \cite{Fitzpatrick:2014vua}. The final answer is given by \begin{eqnarray}\label{eq:vacuumblockapprox} \frac{ {\rm Tr} [ y V y W(t) y V y W(t)]}{ {\rm Tr}[y^2 V y^2 V]{\rm Tr}[y^2 W y^2 W]} &\approx& \big( 1+\frac{6 \pi h_1}{c} e^{\frac{2 \pi}{\beta}(t-x)} \big)^{-2h_3}\nn &\approx&1 - \frac{12 \pi h_1 h_3}{c} e^{\frac{2 \pi}{\beta}(t-x)} + \ldots \end{eqnarray} which saturates the chaos bound. In the first line we wrote the chaos limit of the identity Virasoro block, while in the second line we focus on times $\beta^{-1} \ll t \ll t_{\rm sc}=\frac{\beta}{2\pi} \log c$. This was formally done at infinite gap; corrections from a finite gap, and how the correlator Reggeizes, were recently studied in \cite{Chang:2018nzm}. 
Within this approximation, for times larger than the scrambling time, $t\gg t_{\rm sc} $, this OTOC goes to zero exponentially at a rate set by the quasinormal modes. \subsection{Inelastic case} In the inelastic case we can take four arbitrary operators $W_1$, $W_2$, $V_3$ and $V_4$. In order to have analytic control over the Virasoro conformal blocks we take large $c$ with $h_3,h_4 \gg 1$ and with $h_1/c$, $h_2/c$ fixed and small. We also take $h_{12}=(h_1-h_2)/2$ and $h_{34}=(h_3 -h_4)/2$ to be of order one, such that the results of \cite{Fitzpatrick:2014vua} apply. We can choose a basis of operators such that the identity block does not appear in the intermediate channel (this is automatic if the dimensions are different). Instead, within an approximation similar to the elastic case, we need to consider light intermediate operators of low twist. The semiclassical Virasoro block was also computed in this case, when the channel dimension $h_p$ is of order one \cite{Fitzpatrick:2014vua}. After going to the second sheet and using the chaos kinematics we get \begin{equation}\label{eq:nonvacblock} \mathcal{F} \big[{}^{h_1}_{h_2}{}^{h_3}_{h_4}\big](h, z) = \left(\frac{1}{ 1-\frac{12 \pi i (h_1+h_2)}{cz} }\right)^{h_3+h_4-h} z^h {}_2 F_1 (h - h_{12}, h+ h_{34}, 2h, z)|_{{\rm 2nd~sheet}} , \end{equation} where the hypergeometric function comes from $SL(2)$ descendants and is evaluated on the second sheet. For times between dissipation and scrambling, the first factor on the right-hand side of \eqref{eq:nonvacblock} is approximately constant and the exponential growth comes from the hypergeometric function \begin{equation} {\rm Tr} [ y V_3 y W_1(t) y V_4 y W_2(t)] = N_\beta \sum_{p} C_{12p}C_{34p}~d_p~e^{ \frac{2\pi}{\beta}(s_p-1)t}e^{- \frac{2\pi}{\beta}(\Delta_p-1) x}, \end{equation} where $\Delta_p$ and $s_p$ are the dimension and spin of the intermediate-channel operator $O_p$. 
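The rates in this expression follow from a quick power-counting check: on the second sheet the holomorphic block behaves as $z^{1-h_p}$ while the anti-holomorphic factor contributes $\bar{z}^{\bar{h}_p}$, so with the chaos kinematics $z \propto e^{\frac{2\pi}{\beta}(x-t)}$ and $\bar{z} \propto e^{-\frac{2\pi}{\beta}(x+t)}$ one finds
\begin{equation}
z^{1-h_p}\, \bar{z}^{\,\bar{h}_p} \ \propto\ e^{\frac{2\pi}{\beta}(h_p-\bar{h}_p-1)t}\, e^{-\frac{2\pi}{\beta}(h_p+\bar{h}_p-1)x},
\end{equation}
which reproduces $e^{\frac{2\pi}{\beta}(s_p-1)t}e^{-\frac{2\pi}{\beta}(\Delta_p-1)x}$ for $h_p>\bar{h}_p$.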
The normalization coming from the prefactor of \eqref{eq:4pftnction} is $N_\beta = (\pi/\beta)^{\Delta_1 +\Delta_2 + \Delta_3 + \Delta_4}$, and $d_p$ is the coefficient \begin{equation} d_p = \frac{8\pi }{(i4)^{h_p-\bar{h}_p }(2h_p-1)}\frac{\Gamma(2h_p)}{\Gamma(h_p\pm h_{12})}\frac{\Gamma(2h_p)}{\Gamma(h_p\pm h_{34})}, \end{equation} where $\Gamma(a\pm b) =\Gamma(a+b)\Gamma(a-b)$. This factor depends on the dimension of the intermediate channel and comes from the evaluation of the hypergeometric function in \eqref{eq:nonvacblock} on the second sheet. The growing part of elastic OTOCs in holographic CFTs is dominated by the vacuum block, dual to gravitational interactions in the bulk. The growing part of inelastic OTOCs, by contrast, is not related to gravitational interactions; one manifestation of this is that Virasoro descendants are irrelevant for the calculation of inelastic OTOCs. This is not completely obvious, and we give the details in the appendix. The amplitude of the growing piece of the inelastic OTOC is small because the OPE coefficients are subleading in $N$ and in the gap. Let us first assume that the spins are bounded. Then, looking at this expression and assuming a large twist gap, we conclude that all particles in the low-lying part of the spectrum must have spin $s<2$. If a primary happens to have $s=2$, then its interactions with other particles cannot be stronger than gravity (the bound constrains its OPE coefficients). If there happens to be a light particle with $s>2$, then its contribution should Reggeize among the low-lying spectrum to give an effective spin $s_{\rm eff} <2$. We can then read the statement in the previous paragraph as a statement about inelastic Pomerons in the theory. \begin{figure}[t!] 
\begin{center} \begin{tikzpicture}[scale=1] \node[inner sep=0pt] (russell) at (2.5,1.5) {\includegraphics[width=.25\textwidth]{curve2.pdf}}; \node[inner sep=0pt] (russell) at (2.5,1.5) {\includegraphics[width=.25\textwidth]{curve.pdf}}; \draw[thick, ->] (0,0) -- (5,0); \draw[thick, ->] (0,0) -- (0,3); \draw[thick] (3.5,0-0.1)--(3.5,0+0.1); \draw (3.5,-0.5) node {$t_{\rm sc}$}; \draw (-1,2) node {$F(t)$}; \end{tikzpicture} \end{center} \vspace{-0.85cm} \caption{\small Sketch of the typical behavior of an inelastic OTOC $F(t)$ in a 2d CFT (black curve) as a function of time, assuming it is approximated by a non-vacuum block with effective dimension $\Delta_{\rm eff}$ and spin $s_{\rm eff}$. Initially the OTOC grows exponentially with rate $\lambda = \frac{2\pi}{\beta} (s_{\rm eff}-1)$. At late times the fast decay is controlled by the quasinormal modes. In blue we show a typical elastic OTOC. } \label{fig:temp-check} \end{figure} The resummation required in the previous paragraph to evaluate the inelastic OTOC is complicated, even in the elastic case \cite{Chang:2018nzm}. We conjecture that the result of summing over the infinite tower of spins is equivalent to a single non-vacuum block with effective weights $h_{\rm eff}$, $\bar{h}_{\rm eff}$, effective dimension $\Delta_{\rm eff}=h_{\rm eff} + \bar{h}_{\rm eff}$ and spin $s_{\rm eff}=|h_{\rm eff} - \bar{h}_{\rm eff}|$. With this assumption the inelastic OTOC is given by \begin{equation} N^{-1}_\beta~{\rm Tr} [ y V_3 y W_1(t) y V_4 y W_2(t)] = ~C_{12}C_{34}~\mathcal{F} \big[{}^{h_1}_{h_2}{}^{h_3}_{h_4}\big](h_{\rm eff}, z) ~\bar{z}^{\bar{h}_{\rm eff}}, \end{equation} where the holomorphic block is given by equation \eqref{eq:nonvacblock}, and $C_{12}$ ($C_{34}$) are effective couplings between the operators $W_1$, $W_2$ ($V_3$, $V_4$) and the effective Pomeron state $h_{\rm eff}, \bar{h}_{\rm eff}$. 
Depending on the effective spin, $C_{12}C_{34}$ might be bounded by the dimensions of the external operators, following \eqref{ec:introbound}. With the proposal of the previous paragraph, we can analyze times longer than the scrambling time, $t \gtrsim t_{\rm sc}$. In this case the situation changes and the part of the block coming from the Virasoro descendants dominates. Namely, the first factor on the right-hand side of \eqref{eq:nonvacblock} decays exponentially. Assuming the behavior of the correlator is equivalent to a single block of dimension $\Delta_{\rm eff}$ of order one and spin $s_{\rm eff}<2$, the OTOC decays exponentially as $\sim e^{- \frac{2\pi}{\beta} (h_3+h_4)t} $. Under the assumption of the previous paragraph, we show in figure \ref{fig:temp-check} the behavior of a typical OTOC between different operators. To summarize, inelastic OTOCs grow exponentially in a way controlled by a Pomeron exchange (unrelated to gravity), until the scrambling time $t_{\rm sc}$, at which point the correlator begins to decay according to the quasinormal mode frequencies. This is expected, since this decay is due to the bulk-boundary propagators appearing in the bulk calculation of the OTOC. The picture and the proposal that emerge from this analysis should be worked out in more detail following \cite{Chang:2018nzm} and \cite{Collier:2018exn} (see also \cite{Liu:2018iki} and \cite{Hampapura:2018otw}), but we leave this for future work. \section{Open Questions}\label{Sec:conc} To conclude, we would like to state some open questions. It would be nice to compute the off-diagonal OTOCs introduced here for SYK models \footnote{In particular, the approach of \cite{Berkooz:2018jqr} might be very useful for this.}. Following the notation of \cite{Kitaev:2017awl} and \cite{Gu:2018jsv}, we can write a generic OTOC as a convolution between form factors, describing the coupling of external operators to an effective `scramblon' mode, and the scramblon propagator, which grows exponentially in time. 
In this perspective the bound stated here constrains the behavior of the general form factors appearing in these models. Their rate of growth in time is bounded, through the elastic OTOC, by the growth of the scramblon mode. Moreover, the mode responsible for the Lyapunov behavior of inelastic OTOCs might not be the same as the one in the elastic case (for example, it might not have the quantum numbers of the vacuum). Therefore there must be a mode that, similarly to the Schwarzian mode, generates exponential growth\footnote{From the perspective of \cite{MTV} the problem is analogous to finding a generalization of Liouville theory that allows primary operators with spin.}. A simple proposal would be a similar mode living on ${\rm Diff}(S^1)/U(1)$ instead of ${\rm Diff}(S^1)/SL(2)$, but this requires further study. This question can be extended to higher dimensions. In 2d CFTs the maximal chaos behavior coming from the identity block can be understood as coming from a Goldstone mode of broken reparametrization invariance \cite{Turiaci:2016cvo}. It would be nice to find a description of a similar soft mode producing the exponential growth of off-diagonal OTOCs, related to non-vacuum representations. To analyze this problem, 2d versions of SYK might be useful as explicit examples \cite{Gu:2016oyy, Turiaci:2017zwd, Murugan:2017eto, Berkooz:2017efq}. In the particular case of 2d CFTs it would be interesting to repeat the analysis of \cite{Chang:2018nzm} for a general OTOC. The analysis of this paper can also be extended to higher-order OTOCs with more than four operators. The diagonal version of these correlators was studied in \cite{Haehl:2017pak} (see also \cite{Haehl:2018izb}); results in this direction were derived in \cite{Basu:2018akv}. Finally, after understanding how correlators Reggeize in the chaos limit, it would be nice to recast the bound derived in this paper as a bound on OPE data. 
This might also help sharpen the statement that gravity is the highest-spin, strongest interaction. \bigskip \begin{center} {\bf Acknowledgements} \end{center} \vspace{-2mm} We want to thank G. Horowitz, J. Maldacena, M. Rangamani and D. Stanford for discussions and comments on the draft. This work was supported by a Fundamental Physics Fellowship. We also benefited from the workshop ``Chaos and Order: From strongly correlated systems to black holes" at KITP, supported in part by the National Science Foundation under Grant No. NSF PHY-1748958 to the KITP. \begin{appendix} \section{Semiclassical Virasoro Blocks}\label{app:blocks} \end{appendix} In this paper we used a Virasoro conformal block with two external light operators of weights $h_L\pm \delta_L$ and two external heavy operators of weights $h_H \pm \delta_H$. In the large-$c$ limit with $h_H/c$ fixed, these were obtained in \cite{Fitzpatrick:2014vua} for a light intermediate channel $h$. Within this approximation the block is given by \begin{equation}\label{app:fitzkapblock} \mathcal{F} \big[{}^{H}_{H}{}^{L}_{L}\big](h, z)= (1-w)^{(h_L+\delta_L)(1-\alpha^{-1})}\left( \frac{w}{\alpha ~z} \right)^{h-2h_L} z^h {}_2 F_1 \left( h - \frac{\delta_H}{\alpha} , h+\delta_L , 2 h , w \right) \end{equation} where $\alpha=\sqrt{1-24 h_H/c}$ and $w(z)=1-(1-z)^\alpha$. In the case of pairwise identical operators, $\delta_H=\delta_L=0$, and for the vacuum channel this expression gives \begin{equation} \mathcal{F} \big[{}^{H}_{H}{}^{L}_{L}\big](0, z) = (1-w)^{(h_L+\delta_L)(1-\alpha^{-1})}\left( \frac{w}{\alpha~z} \right)^{-2h_L}. \end{equation} In the limit $h_H/c \ll 1$ (so $\alpha\approx 1-12h_H/c$), for small $z$ on the second sheet the block is approximately $ \mathcal{F} \big[{}^{H}_{H}{}^{L}_{L}\big](0, z) \approx \left(\frac{z}{1-(1-z)^\alpha} \right)^{2h_L}$. This expression reproduces equation \eqref{eq:vacuumblockapprox}, which is the main result of \cite{Roberts:2014ifa}. 
For a generic intermediate channel we can use the general formula, go to the second sheet, and evaluate it in the chaos limit. This gives the same answer as the analytic continuation of the expression \begin{equation}\label{app:eq3} \mathcal{F} \big[{}^{H}_{H}{}^{L}_{L}\big](h, z)= \left( \frac{z}{1-(1-z)^\alpha} \right)^{2h_L-h} z^h {}_2 F_1 \left( h - \frac{\delta_H}{\alpha} , h+\delta_L , 2 h , w \right). \end{equation} The first factor is the vacuum block with shifted dimensions; it is entirely due to Virasoro descendants. Its evaluation on the second sheet is \begin{equation}\label{eq:prefactorapp} \left( \frac{z}{1-(1-z)^\alpha} \right)^{2h_L-h} \Bigg|_{2{\rm nd~sheet}} \approx \left( \frac{1}{1-\frac{24 \pi i h_H}{cz} } \right)^{2h_L -h}. \end{equation} In the chaos limit, for times smaller than the scrambling time, $c z \gg 1$ ($t\ll \frac{\beta}{2\pi} \log c$), this factor is approximately constant and equal to $1$. After the scrambling time, $c z \ll 1$, this factor decays and controls the decay of the correlator. The second factor of \eqref{app:eq3}, evaluated on the second sheet for small $z$, gives \begin{equation} z^h{}_2 F_1 (h - \frac{\delta_{H}}{\alpha}, h+ \delta_{L}, 2h, w)|_{{\rm 2nd~sheet}}= \frac{2\pi i ~e^{i \pi (\delta_{L}-\delta_{H})}}{2h-1}\frac{\Gamma(2h)}{\Gamma(h\pm \frac{\delta_{H}}{\alpha})}\frac{\Gamma(2h)}{\Gamma(h\pm \delta_{L})} \frac{z}{(\alpha z)^h} \end{equation} For small $h_H/c$, so that $\alpha \approx 1$, this agrees with the evaluation of the global block $z^h{}_2 F_1 (h - \delta_{H}, h+ \delta_{L}, 2h, z)$. Therefore we can approximate the Virasoro block by \begin{equation} \mathcal{F} \big[{}^{H}_{H}{}^{L}_{L}\big](h, z)\approx \left( \frac{z}{1-(1-z)^{1-12 h_H/c}} \right)^{2h_L-h} z^h {}_2 F_1 \left( h - \delta_H , h+\delta_L , 2 h , z \right). \end{equation} Evaluating this expression on the second sheet for small $z$ gives the same answer as the original \eqref{app:fitzkapblock}. 
Therefore all the effects of Virasoro descendants in the chaos limit come from the prefactor in the equation above.
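The second-sheet evaluation \eqref{eq:prefactorapp} can be cross-checked numerically by continuing $(1-z)\to e^{-2\pi i}(1-z)$ directly. The following is an illustrative sketch with arbitrarily chosen small parameters (the values of $c$, $h_H$, and $z$ are ours), comparing the exactly continued base of the prefactor with the approximation on the right-hand side:

```python
import cmath
import math

c = 1.0e4                 # central charge (illustrative value)
hH = 1e-3 * c / 12        # heavy weight, chosen so that h_H/c is small
alpha = math.sqrt(1 - 24 * hH / c)
z = -0.004j               # small cross-ratio in the chaos regime

# second sheet: (1 - z) -> e^{-2 pi i} (1 - z)
denom = 1 - cmath.exp(alpha * (cmath.log(1 - z) - 2j * math.pi))
exact = z / denom                                  # exactly continued base
approx = 1 / (1 - 24 * math.pi * 1j * hH / (c * z))  # approximation in the text
print(abs(exact - approx) / abs(approx))  # small relative error
```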
\section{Introduction and statement of results}\label{Sect1} It is a classical Abelian theorem of Frobenius \cite{Frobenius} that for an arithmetic function $f:\mathbb N \to \mathbb C$ (for which we will set $f(0):=0$ in this study), if the limit of the average value $L:=\lim_{N\to \infty}\ \frac{1}{N}\sum_{1\leq k \leq N}f(k)$ exists, then for $q\in\mathbb C, |q|<1$, when $q\to 1$ {radially} we have \begin{equation}\label{Frob} \lim_{q\to 1}\ (1-q)\sum_{n\geq 1}f(n)q^n =L. \end{equation} Equation \eqref{Frob} is the statement of {\it Abel's convergence theorem} if the sequence $\{f(n)\}$ converges; if $\{f(n)\}$ diverges, one may assign the limiting value $L$ to the sequence so long as the average value converges. In fact, the limit \eqref{Frob} holds if $q\to 1$ through any path in a so-called {\it Stolz sector} of the unit disk, viz. a region with vertex at $z=1$ such that $\frac{|1-q|}{1-|q|}\leq M$ for some $M> 0$. See \cite{Divergent, Tauberian} for more about Abelian theorems and the complementary class of Tauberian theorems. Note that if $f(n)$ is the indicator function for a subset $S\subseteq \mathbb N$ with arithmetic density $d_S$, then \eqref{Frob} gives the limiting value $L=d_S$ as $q\to 1$. In \cite{Paper1, Paper2}, inspired by work of Alladi \cite{A}, the author and collaborators exploit this idea to prove partition-theoretic and $q$-series formulas for $d_S$ with $q\to 1$, as well as at other roots of unity $\zeta$. In this paper, intended as a complement to \cite{Paper2}, we employ similar techniques to prove partition and $q$-series analogues of \eqref{Frob}. Let $\mathcal P$ denote the set of integer partitions. 
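Before fixing notation, here is a quick numerical illustration of \eqref{Frob} (a sketch, not part of the argument): take $f$ to be the indicator function of the even numbers, so that $L=1/2$; then $(1-q)\sum_{n\geq1}f(n)q^n=q^2/(1+q)$, which indeed tends to $1/2$ as $q\to1^-$:

```python
def abel_mean(q, N=100000):
    # (1 - q) * sum over even n of q^n, truncated once q^N is negligible
    return (1 - q) * sum(q ** n for n in range(2, N, 2))

for q in (0.9, 0.99, 0.999):
    print(q, abel_mean(q))  # tends to 0.5, the density of the even numbers
```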
For $\lambda \in \mathcal P$, let $|\lambda|$ denote the {size} of $\lambda$ (sum of parts), $\ell(\lambda)$ denote the {length} (number of parts), and let $\operatorname{sm}(\lambda), \operatorname{lg}(\lambda)$ denote the smallest part and largest part of $\lambda$, respectively, noting $|\emptyset|=\ell(\emptyset)= \operatorname{sm}(\emptyset)= \operatorname{lg}(\emptyset):=0$ for $\lambda=\emptyset$ the empty partition. For $z,q\in \mathbb C, |q|<1$, let $(z;q)_n:=\prod_{0\leq k <n}(1-zq^k)$ denote the {$q$-Pochhammer symbol}, with $(z;q)_{\infty}:=\lim_{n\to \infty} (z;q)_{n}$. (For further reading on partition theory, see \cite{And}.) Throughout, we take $q\to 1$ in a Stolz sector of the unit disk, and set $L:=\lim_{N\to \infty}\frac{1}{N}\sum_{1\leq k \leq N}f(k)$. We postpone proofs to Section \ref{Sect2}. \begin{theorem}\label{thm1} For $f(n)$ an arithmetic function, we have \begin{flalign*} \lim_{q\to 1}\ (q;q)_{\infty}\sum_{\lambda \in \mathcal P} f\left(\operatorname{lg}(\lambda)\right)q^{|\lambda|}\ =\ L, \end{flalign*} where the sum is taken over all partitions $\lambda$. \end{theorem} Theorem \ref{thm1} is a partition analogue of Frobenius' formula \eqref{Frob}; if $f(n)\to L$ as $n\to \infty$, then the theorem yields a partition analogue of Abel's convergence theorem. Applying partition generating function methods gives further formulas to compute the limit $L$. We will require the {partition-theoretic M\"{o}bius function} defined in \cite{Schneider_arithmetic}: \begin{equation}\label{mudef} \mu_{\mathcal{P}}(\lambda) := \begin{cases} 0 & \text{if } \lambda \text{ has any part repeated}, \\ (-1)^{\ell(\lambda)} & \text{otherwise}. \end{cases} \end{equation} \begin{corollary}\label{cor1} For $f(n)$ an arithmetic function, we have \begin{flalign*} -\lim_{q\to 1}\ \sum_{\lambda \in \mathcal P}\mu_{\mathcal P}(\lambda)f\left(\operatorname{sm}(\lambda)\right)q^{|\lambda|}\ =\ L. 
\end{flalign*} \end{corollary} \begin{corollary}\label{cor1.5} For $f(n)$ an arithmetic function, we have \begin{flalign*} \lim_{q\to 1}\ (q;q)_{\infty}\sum_{n\geq 1}\frac{f(n)q^n}{(q;q)_{n}}\ =\ -\lim_{q\to 1}\ \sum_{n\geq 1}\sum_{k\geq 1}\frac{(-1)^k f(n) q^{nk+\frac{k(k-1)}{2}}}{(q;q)_{k-1}}\ =\ L. \end{flalign*} \end{corollary} \begin{remark} Setting $f(n)$ equal to the indicator function for $S\subseteq \mathbb N$ in Theorem \ref{thm1} and Corollaries \ref{cor1} and \ref{cor1.5} re-proves the main arithmetic density results of \cite{Paper1, Paper2} with $\zeta=1$. \end{remark} \begin{remark} That the left-hand limit in Corollary \ref{cor1.5} is equal to $L$ confirms the conjecture the author made just below Example E.1.1 in \cite{SchneiderPhD} with $\zeta=1$. \end{remark} \begin{example} Let $f(n)=\frac{\varphi(n)}{n}$ in Corollary \ref{cor1.5} with $\varphi(n)$ the Euler phi function; it is a well-known fact that the average value of $f(n)$ approaches $L=6/\pi^2$ as $n\to \infty$. Then we have $$\lim_{q\to 1}\ (q;q)_{\infty}\sum_{n\geq 1}\frac{\varphi(n)q^n}{n\cdot (q;q)_{n}} \ =\ \frac{6}{\pi^2}. $$ \end{example} \begin{remark} This re-proves Example E.1.1 of \cite{SchneiderPhD} with $\zeta=1$. \end{remark} Somewhat surprisingly, if one replaces ``$\operatorname{lg}$'' with ``$\operatorname{sm}$'' in Theorem \ref{thm1}, the limit still holds. \begin{theorem}\label{thm2} For $f(n)$ an arithmetic function, we have \begin{flalign*} \lim_{q\to 1}\ (q;q)_{\infty}\sum_{\lambda \in \mathcal P} f\left(\operatorname{sm}(\lambda)\right)q^{|\lambda|}\ =\ L. \end{flalign*} \end{theorem} Theorem \ref{thm2} is a second partition analogue of \eqref{Frob}. As with Theorem \ref{thm1}, generating function methods yield further formulas to compute $L$. \begin{corollary}\label{cor2} For $f(n)$ an arithmetic function, we have \begin{flalign*} -\lim_{q\to 1}\ \sum_{\lambda \in \mathcal P}\mu_{\mathcal P}(\lambda)f\left(\operatorname{lg}(\lambda)\right)q^{|\lambda|} \ =\ L. 
\end{flalign*} \end{corollary} \begin{corollary}\label{cor2.5} For $f(n)$ an arithmetic function, we have \begin{flalign*} \lim_{q\to 1}\ \sum_{n\geq 1}f(n)q^n(q;q)_{n-1}\ = \ \lim_{q\to 1}\ \sum_{n\geq 1}\sum_{k\geq 1}\frac{f(n)q^{nk}}{(q;q)_{k-1}}\ =\ L. \end{flalign*} \end{corollary} \begin{remark} Setting $f(n)$ equal to the indicator function for $S\subseteq \mathbb N$, the statement that the second limit in Corollary \ref{cor2.5} is equal to $L=d_{S}$ re-proves Theorem 3.6 of \cite{Paper2}. \end{remark} Now we give an application of these ideas to an operator from statistical physics that has sparked much recent work in the theories of partitions and modular forms. For $a(\lambda)$ a function defined on partitions, and $|q|<1$, recall the {\it $q$-bracket of Bloch and Okounkov} introduced in \cite{B-O}, \begin{equation} \left<a\right>_q:=\frac{\sum_{\lambda \in \mathcal P}a(\lambda)q^{|\lambda|}}{\sum_{\lambda \in \mathcal P}q^{|\lambda|}}\ =\ (q;q)_{\infty}\sum_{\lambda \in \mathcal P}a(\lambda)q^{|\lambda|}. \end{equation} The $q$-bracket can be interpreted as the expected value (i.e., average value) of $a(\lambda)$ over all partitions, given certain hypotheses from statistical physics; it is connected to modular, quasi-modular and $p$-adic modular forms \cite{B-O,BOW, Padic, Zagier}. Let us also recall the statistic ${A}_n(q), n\geq1,$ defined in \cite{Paper2}, which serves to distinguish between different classes of limiting phenomena in that paper: \begin{equation}\label{A_ndef} {A}_n(q):=a\left((n)\right)+\sum_{\operatorname{sm}(\lambda){\geq}^* n}\left[a(\lambda\cdot (n))-a(\lambda)\right] q^{|\lambda|}, \end{equation} with $(n)\in \mathcal P$ the partition of $n$ with one part, $\lambda\cdot(n)\in \mathcal P$ the partition formed by adjoining part $n$ to partition $\lambda$, and with ${\geq}^*$ denoting $>$ if $a(\lambda)=0$ at partitions with any part repeated, and denoting $\geq$ otherwise. 
For $n=0$ we will set $A_0(q):=a(\emptyset)$, which is consistent with \eqref{A_ndef}. Interestingly, the statistic $A_n(q)$ is intertwined with the $q$-bracket $\left<a\right>_q$. \begin{theorem}\label{cor3} For $a(\lambda)$ a function defined on partitions, $|q|<1$, we have that $$\left<a\right>_q\ =\ (q;q)_{\infty}\sum_{n\geq 0}\frac{A_n(q)q^n}{(q;q)_n}.$$ If $A_k:=\lim_{q\to 1}A_k(q)$ exists for every $k\geq 1$, let ${A}:=\lim_{N\to \infty}\frac{1}{N}\sum_{1\leq k \leq N}{A}_k$. Then $$\lim_{q\to 1}\ \left<a\right>_q\ =\ A.$$ \end{theorem} We discuss in passing another, conjectural application. Euler's continued fraction formula \cite{Euler} converts any $q$-hypergeometric series with multiplicative coefficients into a $q$-continued fraction (see \cite{SchneiderPhD}); if $f(n)$ is completely multiplicative, then the $q$-series identities in Corollaries \ref{cor1.5} and \ref{cor2.5} translate to $q$-continued fractions that converge to $L$ as $q\to 1$ in a Stolz sector. Perhaps, then, applying convergence acceleration and continued fraction techniques like those summarized in \cite{Vanderpoorten} to these $q$-series and $q$-continued fractions could yield approaches to proving the irrationality of constants $L$ such as $\zeta(m), m\geq 2$.\footnote{The author discussed this idea with J. Lagarias and L. Rolen after the author's talk at the Vanderbilt University Number Theory Seminar, Oct. 27, 2020.} \begin{remark} One anticipates that similar limiting formulas hold as $q$ approaches other roots of unity. \end{remark} \section{Proofs of our results}\label{Sect2} The proofs in this section begin with a rewriting of \eqref{Frob}, multiplied by an additional factor of $q$ (which does not affect the asymptotic below), as $q\to1$ in a Stolz sector of the unit disk: \begin{equation}\label{qasymp} \sum_{n\geq 1}f(n)q^n \ \sim\ L\cdot \frac{q}{1-q}, \end{equation} with $L:=\lim_{N\to \infty}\ \frac{1}{N}\sum_{1\leq k \leq N}f(k)$ as above. 
This asymptotic equality can be used as a building block for producing further $q$-series formulas to compute the limit $L$. Theorem \ref{thm1} and its corollaries result from a generalization of the proof of Theorem 3.5 in \cite{Paper2}. \begin{proof}[Proof of Theorem \ref{thm1}] Take $q\mapsto q^k$ in \eqref{qasymp}. Multiply through by $(-1)^{k} q^{\frac{k(k-1)}{2}}(q;q)_{k-1}^{-1}$ and sum both sides over $k\geq 1$, swapping order of summation on the left-hand side to give \begin{equation}\label{qleft} \sum_{n \geq 1} \sum_{k\geq 1} \frac{(-1)^{k} f(n) q^{nk+\frac{k(k-1)}{2}}}{(q;q)_{k-1}}. \end{equation} For each $k\geq 1$, the factor $(q;q)_{k-1}^{-1}$ generates partitions with largest part strictly $<k$. The factor $q^{nk}$ adjoins a largest part $k$ with multiplicity $n$ to each partition, for every $n\geq 1$. The $q^{k(k-1)/2}=q^{1+2+3+...+(k-1)}$ factor guarantees at least one part of each size $<k$. Thus \eqref{qleft} is the generating function for partitions $\gamma$ with every natural number $<\operatorname{lg}(\gamma)$ appearing as a part, weighted by $(-1)^{\operatorname{lg}(\gamma)}f\left(m_{\operatorname{lg}}(\gamma) \right)=(-1)^k f(n)$, where $m_{\operatorname{lg}}(\gamma)=n$ denotes the multiplicity of the largest part of each partition $\gamma$. Under conjugation, this set of partitions $\gamma$ maps to partitions $\lambda$ into distinct parts weighted by $\mu_{\mathcal P}(\lambda)f\left(\operatorname{sm}(\lambda)\right)=(-1)^{\ell(\lambda)}f\left(\operatorname{sm}(\lambda)\right)=(-1)^k f(n)$, which is nonzero since $\lambda$ has no repeated part. 
Thus, noting $f(0):=0$, and multiplying through by a factor of $-1$ to produce the desired end result, we have \begin{flalign}\label{nexttolast} -\sum_{n\geq 1}\sum_{k\geq 1}\frac{(-1)^k f(n) q^{nk+\frac{k(k-1)}{2}}}{(q;q)_{k-1}}\ &=\ -\sum_{\lambda \in \mathcal P} \mu_{\mathcal P}(\lambda)f\left(\operatorname{sm}(\lambda)\right)q^{|\lambda|}=\ \sum_{n\geq 1} f(n)q^n(q^{n+1};q)_{\infty}\ \\ \nonumber &= \ (q;q)_{\infty}\sum_{n\geq 1}\frac{f(n)q^n}{(q;q)_n}\ =\ (q;q)_{\infty}\sum_{\lambda \in \mathcal P} f\left(\operatorname{lg}(\lambda)\right)q^{|\lambda|}, \end{flalign} using standard partition generating function methods. Manipulating the right-hand side of \eqref{qasymp} accordingly gives \begin{equation}\label{finalstep} -\sum_{n\geq 1}\sum_{k\geq 1}\frac{(-1)^k f(n) q^{nk+\frac{k(k-1)}{2}}}{(q;q)_{k-1}}\ \sim \ -L\cdot \sum_{k\geq 1}\frac{(-1)^k q^{\frac{k(k+1)}{2}}}{(q;q)_{k}}\ =\ -L\cdot \left((q;q)_{\infty}-1 \right),\end{equation} using an identity of Euler in the final step, which approaches $L$ as $q\to 1$. Comparing the right-hand sides of \eqref{nexttolast} and \eqref{finalstep} in the limit completes the proof. \end{proof} \begin{proof}[Proof of Corollaries \ref{cor1} and \ref{cor1.5}] These two corollaries record alternative expressions for the limit $L$ derived during the proof of Theorem \ref{thm1} above. \end{proof} The proof of Theorem \ref{thm2} generalizes the proof of Theorem 3.6 in \cite{Paper2}. \begin{proof}[Proof of Theorem \ref{thm2}] Take $q\mapsto q^k$ in \eqref{qasymp}. Multiply both sides by $(q;q)_{k-1}^{-1}$, sum over $k\geq 1$, then swap order of summation and make the change of indices $k\mapsto k+1$ on the left to give \begin{flalign}\label{asymp3} \sum_{n\geq 1}\sum_{k\geq 1}\frac{f(n)q^{nk}}{(q;q)_{k-1}}=\sum_{n\geq 1}f(n)q^n\sum_{k\geq 0}\frac{q^{nk}}{(q;q)_{k}}&\ \sim\ L \cdot \sum_{k\geq 1}\frac{q^k}{(q;q)_k}. 
\end{flalign} By the $q$-binomial theorem, the inner sum over $k\geq 0$ in the second double series is equal to $(q^n;q)_{\infty}^{-1}$, and the summation on the right is $(q;q)_{\infty}^{-1}-1$. Multiplying both sides of \eqref{asymp3} by $(q;q)_{\infty}$ gives, from standard generating function arguments, as $q\to 1$: \begin{flalign}\label{asymp4} (q;q)_{\infty}\sum_{\lambda \in \mathcal P} f\left(\operatorname{sm}(\lambda)\right)q^{|\lambda|}=(q;q)_{\infty}\sum_{n\geq 1} \frac{f(n)q^n}{(q^{n};q)_{\infty}}\ &=\ \sum_{n\geq 1}f(n)q^n(q;q)_{n-1}\\ \nonumber &=\ -\sum_{\lambda\in \mathcal P}\mu_{\mathcal P}(\lambda) f\left(\operatorname{lg}(\lambda)\right)q^{|\lambda|}\ \sim \ L.\end{flalign} That the left side is asymptotically equal to $L$ is equivalent to the statement of the theorem. \end{proof} \begin{proof}[Proof of Corollaries \ref{cor2} and \ref{cor2.5}] These corollaries record alternative expressions for the limit $L$ derived during the proof of Theorem \ref{thm2} above. \end{proof} \begin{proof}[Proof of Theorem \ref{cor3}] The first identity of the theorem is the special case $f(n)=1$ of Lemma 4.3 in \cite{Paper2}, after adding $a(\emptyset)$ to both sides of the lemma, in light of \cite{Paper2}, eq. (21). Then under the hypotheses in the statement of the theorem, we have \begin{equation}\left<a\right>_q\ =\ (q;q)_{\infty}\sum_{n\geq 0}\frac{A_n(q)q^n}{(q;q)_n}\ \sim\ (q;q)_{\infty}\sum_{n\geq 1}\frac{A_nq^n}{(q;q)_n}\ \sim\ A \end{equation} as $q\to 1$, noting $a(\emptyset)\cdot (q;q)_{\infty}\to 0$ so the $n=0$ term vanishes in the first asymptotic, and the final asymptotic is the $f(n)=A_n, L=A$ case of Corollary \ref{cor1.5}. \end{proof} The asymptotic equality \eqref{qasymp} can be employed in $q$-series manipulations in diverse ways, much like the formulas involving the $q$-density statistic in \cite{Paper2}, which themselves could be generalized to give new limit formulas as in this paper. 
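The chain of identities \eqref{nexttolast} can also be checked by machine, as an identity of formal power series. The following sketch (illustrative; function names are ours) uses exact rational arithmetic to verify that $-\sum_{n,k\geq1}(-1)^k f(n)q^{nk+k(k-1)/2}/(q;q)_{k-1}$ and $(q;q)_{\infty}\sum_{n\geq1}f(n)q^n/(q;q)_n$ agree through order $q^{14}$ for the test function $f(n)=n$:

```python
from fractions import Fraction

Q = 14  # truncate all power series mod q^(Q+1)

def mul(a, b):
    # product of two truncated series
    c = [Fraction(0)] * (Q + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > Q:
                    break
                c[i + j] += ai * bj
    return c

def inv(a):
    # multiplicative inverse of a truncated series with a[0] = 1
    b = [Fraction(0)] * (Q + 1)
    b[0] = Fraction(1)
    for m in range(1, Q + 1):
        b[m] = -sum(a[i] * b[m - i] for i in range(1, m + 1))
    return b

def poch(m):
    # (q;q)_m = prod_{k=1}^{m} (1 - q^k) as a truncated series
    p = [Fraction(0)] * (Q + 1)
    p[0] = Fraction(1)
    for k in range(1, m + 1):
        fac = [Fraction(0)] * (Q + 1)
        fac[0] = Fraction(1)
        if k <= Q:
            fac[k] = Fraction(-1)
        p = mul(p, fac)
    return p

def f(n):
    return Fraction(n)  # an arbitrary test function

# lhs = -sum_{n,k >= 1} (-1)^k f(n) q^{nk + k(k-1)/2} / (q;q)_{k-1}
lhs = [Fraction(0)] * (Q + 1)
for k in range(1, Q + 1):
    ip = inv(poch(k - 1))
    for n in range(1, Q + 1):
        shift = n * k + k * (k - 1) // 2
        if shift > Q:
            break
        for j in range(Q - shift + 1):
            lhs[shift + j] += -((-1) ** k) * f(n) * ip[j]

# rhs = (q;q)_inf * sum_{n >= 1} f(n) q^n / (q;q)_n ; mod q^(Q+1)
# the infinite product may be truncated to (q;q)_Q
rhs = [Fraction(0)] * (Q + 1)
for n in range(1, Q + 1):
    ip = inv(poch(n))
    for j in range(Q - n + 1):
        rhs[n + j] += f(n) * ip[j]
rhs = mul(poch(Q), rhs)

print(lhs == rhs)  # True
```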
For example, if one takes $q\mapsto q^k$ in \eqref{qasymp} and sums over all $k\geq 1$, swapping order of summation on the left in the absolutely convergent double series, then geometric series and classical Lambert series identities yield as $q\to 1$ that \begin{equation} \lim_{q\to 1}\ \frac{\sum_{n\geq 1}\frac{f(n)q^n}{1-q^n}}{\sum_{n\geq 1}\frac{q^n}{1-q^n}} \ =\ L \end{equation} with limit $L$ as defined previously, a generalization of Example 2 in \cite{Paper2}. Indeed, using techniques from Lambert series, $q$-hypergeometric series, modular forms, et al. (not to mention repeated application of L'Hospital's rule), the creative analyst might produce any number of further ``$q$-Abelian'' theorems. \section*{Acknowledgments} The author is indebted to George Andrews, Matthew R. Just, Ken Ono, Paul Pollack, A. V. Sills and Ian Wagner for discussions that strongly informed this study. %
\section{Introduction} When one is attempting to associate newly discovered meteoroid streams with their parent bodies, there are four critical steps that need to be carried out. The first is the discovery of the stream itself, through searches of databases and past records, ideally performed on meteor data comprised of Keplerian orbital elements. The second step involves the verification of the meteoroid stream using completely independent meteor databases and stream searches, as published online and/or reported in the literature, to help validate the existence of the stream. The third step is the identification of candidate parent bodies, such as comets and asteroids, whose orbits are similar to the space-time aggregated meteoroid Keplerian elements of the found stream. However, close similarity of the orbits of a meteoroid stream and a potential parent body is not necessarily conclusive proof of association or linkage, since the two object types (parent body and meteoroid) can undergo significantly different orbital evolution, as shown by \citet{vaubaillon2006mechanisms}. Thus the most critical fourth step in determining the actual association is to perform dynamical modeling of the orbital evolution of a sample of particles ejected from a candidate parent body. Given a comet's or asteroid's best estimated orbit in the past, and following the ejected stream particles through many hundreds to thousands of years, one looks for eventual encounters with the Earth at the time of meteor observation, and whether those encounters have a geometric similarity to the observed meteoroids of the stream under investigation. The work by \citet{mcnaught1999leonid} demonstrates this point. 
However, the current paper follows the approach of \citet{vaubaillon2005new} in focusing on the results of the dynamical modeling phase and is a culmination of all the steps just outlined and performed on new streams discovered from recent Croatian Meteor Network stream searches. The application of dynamical stream modeling indicates, with a high level of confidence, that seven new streams can be associated with either comets or asteroids, the latter of which are conjectured to be dormant comets. \section{Processing approach} The seven streams and their hypothetical parent body associations were initially discovered using a meteor database search technique as described in \citet{segon2014parent}. In summary, the method compared every meteor orbit to every other meteor orbit using the combined Croatian Meteor Network \citep{segon2012croatian, korlevic2013croatian}\footnote{\label{cmn_orbits}\url{http://cmn.rgn.hr/downloads/downloads.html#orbitcat}.} and SonotaCo \citep{sonotaco2009meteor}\footnote{\label{sonotaco_orbits}\url{http://sonotaco.jp/doc/SNM/index.html}.} video meteor orbit databases, looking for clusters and groupings in the five-parameter Keplerian orbital element space. This was based on the requirement that three D-criteria (\citealt{southworth1963statistics}; \citealt{drummond1981test}; \citealt{jopek2008meteoroid}) were all satisfied within a specified threshold. These groups had their mean orbital elements computed and sorted by number of meteor members. Mean orbital elements were computed by a simple averaging procedure. Working down from the largest group, meteors with similar orbits to the group under evaluation were assigned to the group and eliminated from further aggregation. This captured the known streams quickly, removing them from the meteor pool, and eventually found the newly discovered streams.
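As a minimal sketch (not the actual CMN pipeline), the "largest group first" aggregation described above could look like the following; the single distance metric, the 0.1 threshold, and the minimum group size of three are illustrative stand-ins for the three D-criteria tests:

```python
# Toy sketch of the "largest group first" stream aggregation.
# Orbits are 5-element tuples (q, e, i, node, peri); dist() is a
# stand-in for the three D-criteria tests, and the 0.1 threshold is
# illustrative, not the value used by the CMN search.
def dist(o1, o2):
    # placeholder metric; the real search requires the Southworth-Hawkins,
    # Drummond, and Jopek criteria to all pass their thresholds
    return max(abs(a - b) for a, b in zip(o1, o2))

def aggregate(orbits, threshold=0.1):
    # group each orbit with all of its neighbours within the threshold
    groups = [[o for o in orbits if dist(seed, o) < threshold]
              for seed in orbits]
    streams, used = [], set()
    # work down from the largest group, removing claimed meteors
    # from the pool so they are not aggregated twice
    for g in sorted(groups, key=len, reverse=True):
        members = [o for o in g if o not in used]
        if len(members) >= 3:          # minimal group size (assumed)
            streams.append(members)
            used.update(members)
    return streams
```

Running this on a toy pool containing two tight clusters returns two streams, mimicking how the real search peels off one stream at a time.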
According to International Astronomical Union (IAU) shower nomenclature rules \citep{jenniskens2006iau}, the stream discoveries were published first; the search results can be found in three papers posted to WGN, the Journal of the International Meteor Organization \citep{andreic2014, gural2014results, segon2014results}. Next, the literature was scoured for similar stream searches in other independent data sets, such as the CAMS \citep{rudawska2014new, jenniskens2016cams} and the EDMOND \citep{rudawska2014independent} video databases, to determine the validity of the new streams found. The verified new streams were then compared against known cometary and asteroidal orbits, from which a list of candidate parent bodies was compiled based once again on meeting multiple D-criteria for orbital similarity. Each section below describes in greater detail the unique processes and evidence for each stream's candidate association to a parent body. Besides the seven reported shower cases and their hypothetical parent bodies, the possibility of producing a meteor shower has also been investigated for four possible streams with orbital parameters similar to those of the asteroids 2002 KK3, 2008 UZ94, 2009 CR2, and 2011 YX62, but the results were inconclusive or negative. The remaining possible parent bodies from the search were not investigated, either because their orbital elements are not determined precisely enough or because they are stated to have parabolic orbits. The dynamical analysis for each object was performed as follows. First, the nominal orbit of the body was retrieved from the JPL HORIZONS ephemeris\footnote{\label{horizonsJPL}\url{http://horizons.jpl.nasa.gov}} for the current time period as well as for each perihelion passage for the past few centuries (typically two to five hundred years).
Assuming the object presented cometary-like activity in the past, the meteoroid stream ejection and evolution was simulated and propagated following \citet{vaubaillon2005new}. In detail, the method considers the ejection of meteoroids when the comet is within 3 AU from the Sun. The ejection velocity is computed following \citet{crifo1997dependence}. The ejection velocities typically range from 0 to \textasciitilde100 m/s. Then the evolution of the meteoroids in the solar system is propagated using numerical simulations. The gravitation of all the planets as well as non-gravitational forces (radiation pressure, solar wind, and the Poynting-Robertson effect) are taken into account. More details can be found in \citet{vaubaillon2005new}. When the parent body has a long orbital period, the stream was propagated starting from more distant epochs, up to a few thousand years in the past. The intersection of the stream and the Earth was accumulated over 50 to 100 years, following the method by \citet{jenniskens2008minor}. Such a method provides a general view of the location of the meteoroid stream and gives statistically meaningful results. For each meteoroid that is considered as intersecting the Earth, the radiant was computed following the \citet{neslusan1998computer} method (the software was kindly provided by those authors). Finally, the size distribution of particles intercepting the Earth was not considered in this paper, nor was the size of modeled particles compared to the size of observed particles. The size distribution comparison will be the topic of a future paper. \section{IAU meteor shower \#549 FAN - 49 Andromedids and Comet 2001 W2 Batters } The first case to be presented here is that of meteor shower IAU \#549 49 Andromedids. Following the IAU rules, this shower was first reported as part of a paper in WGN, the Journal of the International Meteor Organization, by \citet{andreic2014}.
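Returning briefly to the ejection step of the processing approach above: it can be made concrete with a toy, Whipple-style velocity law. This is an illustrative stand-in for the \citet{crifo1997dependence} model actually used; the 60 m/s scale, the exponents, and the random spread factor are assumptions of this sketch, not the published model.

```python
import math
import random

SUBLIMATION_LIMIT_AU = 3.0   # no ejection farther than 3 AU from the Sun

def ejection_speed(r_au, grain_radius_m, rng=None):
    """Toy ejection speed (m/s) at heliocentric distance r_au for a
    grain of the given radius. Smaller grains and smaller distances
    give stronger gas drag, hence faster ejection."""
    if r_au > SUBLIMATION_LIMIT_AU:
        return 0.0                        # sublimation too weak: no dust
    rng = rng or random.Random(0)
    v = 60.0 / r_au * math.sqrt(1e-3 / grain_radius_m)
    v *= 0.5 + rng.random()               # spread of the drag process
    return min(v, 100.0)                  # cap at the quoted ~100 m/s
```

The hard cutoff at 3 AU and the 0-100 m/s range mirror the constraints quoted in the text; everything else is schematic.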
Independent meteor shower database searches resulted in confirmation of the existence of this shower, namely \citet{rudawska2015independent} and \citet{jenniskens2016cams}. The radiant position from the Croatian Meteor Network (CMN) search into the SonotaCo and CMN orbit databases was found to be R.A. = 20.9°, Dec. = +46.7°, with a mean geocentric velocity V$_{g}$ = 60.1 km/s near the center of the activity period (solar longitude $\lambda_{0}$ = 114°, 35 orbits). \citet{rudawska2015independent} found the same radiant to be at R.A. = 19.0°, Dec. = +45.3° and V$_{g}$ = 59.8 km/s ($\lambda_{0}$ = 112.5°, 226 orbits), while \citet{jenniskens2016cams} give R.A. = 20.5°, Dec. = +46.6°, and V$_{g}$ = 60.2 km/s ($\lambda_{0}$ = 112°, 76 orbits). This shower was accepted as an established shower during the 2015 IAU Assembly\footnote{\label{IAU2015}\url{https://astronomy2015.org}.} and is now listed in the IAU meteor database. At the time of the initial finding, there were 35 meteors associated with this shower resulting in orbital parameters similar to published values for a known comet, namely 2001 W2 Batters. This Halley-type comet, with an orbital period of 75.9 years, has been well observed and its orbital parameters have been determined with higher precision than many other comets of this type. The mean meteoroid orbital parameters, as found by the above-mentioned procedure, are compared with the orbit of 2001 W2 Batters in Table \ref{tab:table1}. Despite the fact that the orbital distance according to the Southworth-Hawkins D-criterion, D$_{SH}$ = 0.14, seems a bit high to claim an association, the authors pointed out the necessity of using dynamical stream modeling to confirm or deny the association hypothesis because of the nearly identical ascending node values.
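For concreteness, the D$_{SH}$ value quoted above can be recomputed directly from the elements in Table \ref{tab:table1}. The sketch below implements the standard \citet{southworth1963statistics} formula in plain Python and reproduces D$_{SH}$ $\approx$ 0.14 for the mean 49 Andromedids orbit versus 2001 W2 Batters.

```python
import math

def d_sh(o1, o2):
    """Southworth-Hawkins (1963) D-criterion between two orbits given
    as dicts with keys q (AU), e, i, node, peri (angles in degrees)."""
    q1, e1, q2, e2 = o1["q"], o1["e"], o2["q"], o2["e"]
    i1, i2 = math.radians(o1["i"]), math.radians(o2["i"])
    dnode = math.radians(o2["node"] - o1["node"])
    # I21: mutual inclination of the two orbital planes
    cos_i21 = (math.cos(i1) * math.cos(i2)
               + math.sin(i1) * math.sin(i2) * math.cos(dnode))
    i21 = math.acos(max(-1.0, min(1.0, cos_i21)))
    # Pi21: difference of the longitudes of perihelia, measured from
    # the intersection of the two orbital planes
    sign = 1.0 if abs(o2["node"] - o1["node"]) <= 180.0 else -1.0
    arg = (math.cos(0.5 * (i1 + i2)) * math.sin(0.5 * dnode)
           / math.cos(0.5 * i21))
    pi21 = (math.radians(o2["peri"] - o1["peri"])
            + sign * 2.0 * math.asin(max(-1.0, min(1.0, arg))))
    d2 = ((q2 - q1) ** 2 + (e2 - e1) ** 2
          + (2.0 * math.sin(0.5 * i21)) ** 2
          + (0.5 * (e1 + e2)) ** 2 * (2.0 * math.sin(0.5 * pi21)) ** 2)
    return math.sqrt(d2)

# mean 49 Andromedids orbit (Table 1, ref. 1) vs 2001 W2 Batters
andromedids = dict(q=0.918, e=0.925, i=118.2, node=114.0, peri=143.1)
batters = dict(q=1.051, e=0.941, i=115.9, node=113.4, peri=142.1)
```

Evaluating `d_sh(andromedids, batters)` gives approximately 0.14, matching the quoted value; the perihelion-distance term dominates the sum.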
Moreover, changes in 2001 W2 Batters' orbital parameters as far back as 3000 BC, as extracted from HORIZONS, have shown that the comet approached closer to Earth's orbit in 3000 BC than it has during the last few hundred years. Thus stream particles ejected from the comet farther in the past could have produced a meteoroid stream observed at the Earth in the current epoch. \begin{table*}[t] \caption{Orbital parameters for the 49 Andromedids and Comet 2001 W2 Batters with corresponding D$_{SH}$ values. If the value of 112° for the ascending node (from \citet{jenniskens2016cams}) is used instead of the mean value (118°), then the resulting D$_{SH}$ is 0.16. Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2001 W2 Batters.} \label{tab:table1} \centering \begin{tabular}{l c c c c c c} \hline\hline 49 Andromedids & q & e & i & Node & $\omega$ & D$_{SH}$ \\% table heading References & (AU) & & (\degr) & (\degr) & (\degr) & \\ \hline 1 & 0.918 & 0.925 & 118.2 & 114.0 & 143.1 & 0.14\\ 2 & 0.907 & 0.878 & 119.2 & 112.5 & 142.2 & 0.17\\ 3 & 0.898 & 0.922 & 117.9 & 118.0 & 139.8 & 0.19\\ 2001 W2 Batters & 1.051 & 0.941 & 115.9 & 113.4 & 142.1 & 0\\ \hline \end{tabular} \tablebib{(1) \citet{andreic2014}; (2) \citet{rudawska2015independent}; (3) \citet{jenniskens2016cams}. } \end{table*} The dynamical modeling for the hypothetical parent body 2001 W2 Batters was performed following \citet{vaubaillon2005new} and \citet{jenniskens2008minor}. In summary, the dynamical evolution of the parent body is considered over a few hundred to a few thousand years. At a specific chosen time in the past, the creation of a meteoroid stream is simulated and its evolution is followed forward in time until the present day.
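The forward-propagation step can be caricatured in a few lines. The sketch below integrates a single meteoroid under solar gravity reduced by radiation pressure (ratio $\beta$ of radiation force to gravity), using RK4 in heliocentric units; it deliberately omits the planets, the solar wind, and Poynting-Robertson drag that the actual model includes.

```python
import math

GM = 4.0 * math.pi ** 2     # Sun's GM in AU^3 / yr^2 (heliocentric units)

def accel(x, y, beta):
    """Solar gravity reduced by radiation pressure: a grain with
    radiation-to-gravity ratio beta feels (1 - beta) * GM."""
    r3 = math.hypot(x, y) ** 3
    k = -(1.0 - beta) * GM / r3
    return k * x, k * y

def propagate(state, beta, dt, steps):
    """RK4 integration of (x, y, vx, vy) in the ecliptic plane."""
    def deriv(s):
        px, py, pvx, pvy = s
        ax, ay = accel(px, py, beta)
        return (pvx, pvy, ax, ay)
    s = state
    for _ in range(steps):
        k1 = deriv(s)
        k2 = deriv(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)))
        k3 = deriv(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)))
        k4 = deriv(tuple(a + dt * b for a, b in zip(s, k3)))
        s = tuple(a + dt / 6.0 * (b + 2 * c + 2 * d + e)
                  for a, b, c, d, e in zip(s, k1, k2, k3, k4))
    return s

# a grain on a circular 1 AU orbit with beta = 0 returns to its
# starting point after one year (period = 1 yr in these units)
start = (1.0, 0.0, 0.0, 2.0 * math.pi)
end = propagate(start, beta=0.0, dt=0.001, steps=1000)
```

With $\beta > 0$ the same initial conditions yield a more eccentric, slower orbit, which is one reason small grains spread along the stream.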
The intersection of the particles with the Earth is recorded and the radiant of each particle is computed and compared to observations. The first perihelion passages were initially limited to 500 years back in time. No direct hits to the Earth were found from meteoroids ejected during the aforementioned period. However, given such close similarity of the orbits, the authors considered that repeating the dynamical modeling for perihelion passages back to 3000 BC might yield more favorable results. The new run did provide positive results, with direct hits to the Earth predicted at R.A. = 19.1°, Dec. = +46.9°, and V$_{g}$ = 60.2 km/s, at a solar longitude of $\lambda_{0}$ = 113.2°. A summary of the observed and modeled results is given in Table \ref{tab:table2}. \begin{table}[h] \caption{Observed and modeled radiant positions for the 49 Andromedids and comet Batters' meteoroids ejected 3000 years ago.} \label{tab:table2} \centering \begin{tabular}{l c c c c} \hline\hline 49 Andromedids & R.A. & Dec. & V$_{g}$ & $\lambda_{0}$\\ References & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline 1 & 20.9 & 46.7 & 60.1 & 114.0\\ 2 & 19.0 & 45.3 & 59.8 & 112.5\\ 3 & 20.5 & 46.6 & 60.2 & 112.0\\ 2001 W2 Batters \\meteoroids, this work & 19.1 & 46.9 & 60.2 & 113.2\\ \hline \end{tabular} \tablebib{(1) \citet{andreic2014}; (2) \citet{rudawska2015independent}; (3) \citet{jenniskens2016cams}. } \end{table} The maximum difference between the average observed radiant positions and modeled mean positions is less than 2° in both right ascension and declination, while there are also single meteors very close to the predicted positions according to the model. Since the observed radiant position fits very well with the predictions, we may conclude that there is a strong possibility that comet 2001 W2 Batters is indeed the parent body of the 49 Andromedids shower.
The high radiant dispersion seen in the observations can be accounted for by 1) less precise observations in some of the reported results, and 2) the old age of the stream, which produces a more dispersed trail. The next closest possible association was with comet C/1952 H1 Mrkos, but with a D$_{SH}$ of 0.28 it was considered too distant to be connected with the 49 Andromedids stream. Figures \ref{fig:figure1} and \ref{fig:figure2} show the location of the stream with respect to the Earth's path, as well as the theoretical radiant. These results were obtained by concatenating the locations of the particles intersecting the Earth over 50 years in order to clearly show the location of the stream (otherwise there are too few particles to cross the Earth each year). As a consequence, it is expected that the level of activity of this shower would not change much from year to year. \begin{figure}[h] \resizebox{\hsize}{!}{\includegraphics{media/image1.jpeg}} \caption{Location of the nodes of the particles released by 2001 W2 Batters over several centuries, concatenated over the years 2000 to 2050. The Earth crosses the stream.} \label{fig:figure1} \end{figure} \begin{figure}[h] \resizebox{\hsize}{!}{\includegraphics{media/image2.png}} \caption{Theoretical radiant of the particles released by 2001 W2 Batters which were closest to the Earth. The range of solar longitudes for modeled radiants is from 113.0\degr to 113.9\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure2} \end{figure} \section{IAU meteor shower \#533 JXA - July $\xi$ Arietids and comet 1964 N1 Ikeya} The discovery of the possible meteor shower July $\xi$ Arietids was first published in \citet{segon2014new}. The shower had been found as a grouping of 61 meteoroid orbits, active from July 4 to August 12, peaking around July 21.
Three other searches for meteor showers in different meteoroid orbit databases done by \citet{rudawska2015independent}, \citet{jenniskens2016cams}, and \citet{kornovs2014confirmation} found this shower as well, but with slight differences in the period of activity. This shower had been accepted as an established shower during the 2015 IAU Assembly held in Hawaii and is now referred to as shower \#533. Among the possible parent bodies known at the time of this shower's discovery, comet C/1964 N1 Ikeya was found to have orbital parameters similar to those of the July $\xi$ Arietids. Comet C/1964 N1 Ikeya is a long-period comet with an orbital period of 391 years and, contrary to comet 2001 W2 Batters, its orbit is determined with less precision. A summary of the mean orbital parameters of the shower compared with C/1964 N1 Ikeya is shown in Table \ref{tab:table3}, from which it can be seen that the distance estimated from D$_{SH}$ suggests a possible connection between the shower and the comet. \begin{table*}[t] \caption{Orbital parameters for the July $\xi$ Arietids and Comet 1964 N1 Ikeya with corresponding D$_{SH}$ values. 
Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 1964 N1 Ikeya.} \label{tab:table3} \centering \begin{tabular}{l c c c c c c} \hline\hline July $\xi$ Arietids & q & e & i & Node & $\omega$ & D$_{SH}$\\ References & (AU) & & (\degr) & (\degr) & (\degr) & \\ \hline 1 & 0.883 & 0.965 & 171.6 & 299.0 & 318.0 & 0.10\\ 2 & 0.863 & 0.939 & 171.8 & 292.6 & 313.8 & 0.08\\ 3 & 0.836 & 0.919 & 171.5 & 291.1 & 309.8 & 0.09\\ 4 & 0.860 & 0.969 & 170.4 & 292.7 & 312.4 & 0.08\\ C/1964 N1 Ikeya & 0.822 & 0.985 & 171.9 & 269.9 & 290.8 & 0\\ \hline \end{tabular} \tablebib{(1) \citet{segon2014new}; (2) \citet{kornovs2014confirmation}; (3) \citet{rudawska2015independent}; (4) \citet{jenniskens2016cams}. } \end{table*} Similar to the previous case, the dynamical modeling was performed for perihelion passages starting from 5000 BC onwards. Only two direct hits were found from the complete analysis, but those two hits confirm that there is a high possibility that comet C/1964 N1 Ikeya is indeed the parent body of the July $\xi$ Arietids. The mean radiant positions for those two modeled meteoroids as well as the mean radiant positions found by other searches are presented in Table \ref{tab:table4}. As can be seen from Table \ref{tab:table4}, the difference in radiant position between the model and the observations appears at first glance to be very significant. \begin{table}[h] \caption{Observed and modeled radiant positions for July $\xi$ Arietids and comet C/1964 N1 Ikeya. Rows in bold letters show radiant positions of the entries above them at 106.7° of solar longitude. The applied radiant drift was provided in the respective papers.} \label{tab:table4} \centering \begin{tabular}{l c c c c} \hline\hline July $\xi$ Arietids & R.A. & Dec. 
& V$_{g}$ & $\lambda_{0}$\\ References & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline 1 & 40.1 & 10.6 & 69.4 & 119.0\\ & \textbf{32.0} & \textbf{7.5} & \textbf{...} & \textbf{106.7}\\ 2 & 35.0 & 9.2 & 68.9 & 112.6\\ 3 & 33.8 & 8.7 & 68.3 & 111.1\\ 4 & 41.5 & 10.7 & 68.9 & 119.0\\ & \textbf{29.6} & \textbf{7.0} & \textbf{...} & \textbf{106.7}\\ 1964 N1 Ikeya \\meteoroids, this work & 29.0 & 6.5 & 68.7 & 106.7\\ \hline \end{tabular} \tablebib{(1) \citet{segon2014new}; (2) \citet{kornovs2014confirmation}; (3) \citet{rudawska2015independent}; (4) \citet{jenniskens2016cams}. } \end{table} However, the radiant position at the solar longitude found from the dynamical modeling fits very well with that predicted by the radiant's daily motion: assuming $\Delta$R.A. = 0.66° and $\Delta$Dec. = 0.25° from \citet{segon2014new}, the radiant position at $\lambda_{0}$ = 106.7° would be located at R.A. = 32.0°, Dec. = 7.5°, or about three degrees from the modeled radiant. If we use results from \citet{jenniskens2016cams} ($\Delta$R.A. = 0.97° and $\Delta$Dec. = 0.30°), the resulting radiant position fits even better, giving R.A. = 29.6°, Dec. = 7.0°, or about one degree from the modeled radiant. The fact that the model does not fit the observed activity may be explained by various factors, from the lack of precise data on the comet's past position, derived from the relatively short observed orbital arc, to the possibility that this shower has some other parent body (possibly associated to C/1964 N1 Ikeya) as well. The next closest possible association was with comet C/1987 B1 Nishikawa-Takamizawa-Tago, with D$_{SH}$ = 0.21, but due to the large distance between the orbits' nodes we consider it not to be connected to the July $\xi$ Arietids. The simulation of the meteoroid stream was performed for hypothetical comet returns back to 5000 years before the present.
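The daily-motion correction above is simple arithmetic; the sketch below back-extrapolates both published radiants from the reported peak at $\lambda_{0}$ = 119.0\degr to the modeled $\lambda_{0}$ = 106.7\degr, using the drift values quoted in the text.

```python
def drift_back(ra, dec, dra, ddec, lam_from, lam_to):
    """Shift a radiant by the daily drift (deg per deg of solar
    longitude) from lam_from back to lam_to."""
    dlam = lam_from - lam_to          # 12.3 deg of solar longitude
    return ra - dra * dlam, dec - ddec * dlam

# Segon et al. (2014): radiant (40.1, 10.6), drift (0.66, 0.25)
ra1, dec1 = drift_back(40.1, 10.6, 0.66, 0.25, 119.0, 106.7)
# Jenniskens et al. (2016): radiant (41.5, 10.7), drift (0.97, 0.30)
ra2, dec2 = drift_back(41.5, 10.7, 0.97, 0.30, 119.0, 106.7)
```

This reproduces the extrapolated radiants of roughly (32.0\degr, 7.5\degr) and (29.6\degr, 7.0\degr), the second within about one degree of the modeled radiant at (29.0\degr, 6.5\degr).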
According to the known orbit of the comet, it experienced close encounters with Jupiter and Saturn in 1676 and 1673 AD, respectively, making the orbital evolution prior to this date much more uncertain. Nevertheless, the simulation of the stream was performed in order to get a big-picture view of the stream in the present-day solar system as visualized in Figures \ref{fig:figure3} and \ref{fig:figure4}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image3.jpg}} \caption{Location of the particles ejected by comet C/1964 N1 Ikeya over several centuries, concatenated over 50 years in the vicinity of the Earth.} \label{fig:figure3} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image4.png}} \caption{Theoretical radiant of the particles released by C/1964 N1 Ikeya which were closest to the Earth. The match with the July $\xi$ Arietids is not convincing in this case. The range of solar longitudes for modeled radiants is from 99.0\degr to 104.8\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure4} \end{figure} \section{IAU meteor shower \#539 ACP - $\alpha$ Cepheids and comet 255P Levy } The $\alpha$ Cepheids shower had been reported by \citet{segon2014new} as a dispersed grouping of 41 meteors active from mid-December to mid-January at a mean radiant position of R.A. = 318°, Dec. = 64° at $\lambda_{0}$ = 281° (January 2). The authors investigated the possibility that this new shower could be connected to the predicted enhanced meteor activity of IAU shower \#446 DPC December $\phi$ Cassiopeiids. However, the authors pointed out that \#446 DPC and \#539 ACP cannot be the same meteor shower \citep{segon2014new}.
Despite the fact that a predicted meteor outburst was not detected \citep{roggemans2014letter}, there is a strong possibility that the activity from comet 255P/Levy produces a meteor shower which can be observed from the Earth as the $\alpha$ Cepheids shower. Meteor searches conducted by \citet{kornovs2014confirmation} and \citet{jenniskens2016cams} failed to detect this shower, but \citet{rudawska2015independent} found 11 meteors with a mean radiant position at R.A. = 333.5°, Dec. = +66°, V$_{g}$ = 13.4 km/s at $\lambda_{0}$ = 277.7°. The mean geocentric velocity for the $\alpha$ Cepheids meteors has been found to be small, only 15.9 km/s, but it ranges from 12.4 to 19.7 km/s. Such a high dispersion in velocities may be explained by the fact that the D-criterion threshold for the automatic search was set to D$_{SH}$ = 0.15, which allowed a wider range of orbits to be accepted as meteor shower members. According to the dynamical modeling results, the geocentric velocity for meteoroids ejected from 255P/Levy should be about 13 km/s, and observations show that some of the $\alpha$ Cepheids meteors indeed have such velocities at more or less the predicted radiant positions, as can be seen from Figure \ref{fig:figure5}. This leads us to the conclusion that this meteor shower has to be analyzed in greater detail, but at least some of the observations represent meteoroids coming from comet 255P/Levy. \begin{figure}[b] \resizebox{\hsize}{!}{\includegraphics{media/image5.png}} \caption{Radiant positions of observed $\alpha$ Cepheids and predicted meteors from 255P/Levy. The range of solar longitudes for modeled radiants is from 250\degr to 280\degr. 
Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure5} \end{figure} The simulation of the meteoroid stream ejected by comet 255P/Levy includes trails ejected from 1801 through 2017 as visualized in Figures \ref{fig:figure6} and \ref{fig:figure7}. Several past outbursts were forecast by the dynamical modeling, but none was observed, namely during the 2006 and 2007 apparitions (see Table \ref{tab:table5}). As a consequence, the conclusion is that the activity of the $\alpha$ Cepheids is most likely due to the global background of the stream. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image6.jpg}} \caption{Location of the particles ejected by comet 255P/Levy in the vicinity of the Earth in 2006: an outburst should have been detected.} \label{fig:figure6} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image7.jpg}} \caption{Location of all the particles ejected by 255P over 50 years in order to show the location of the whole stream in the solar system. This graph does not imply several outbursts but rather provides a global indication of the stream.} \label{fig:figure7} \end{figure} \begin{table} \caption{Expected outbursts caused by 255P/Levy. No unusual outburst was reported in 2006 and 2007. 
Columns: Year = the year of Earth's collision with the trail, Trail = year of particle ejection from the given trail, $\lambda_{0}$ = solar longitude in degrees, yyyy-mm-ddThh:mm:ss = date and time of the trail's closest approach, ZHR = zenithal hourly rate.} \label{tab:table5} \centering \begin{tabular}{c c c c c} \hline\hline Year & Trail & $\lambda_{0}$ & yyyy-mm-ddThh:mm:ss & ZHR\\ & & (\degr) & &\\ \hline 2001 & 1963 & 279.132 & 2001-12-30T18:37:00 & 1\\ 2001 & 1975 & 279.765 & 2001-12-31T12:01:00 & 3\\ 2001 & 1980 & 279.772 & 2001-12-31T15:00:00 & 2\\ 2001 & 1985 & 279.828 & 2001-12-31T11:24:00 & 11\\ 2001 & 1991 & 279.806 & 2001-12-31T10:44:00 & 13\\ 2002 & 1963 & 278.914 & 2002-10-20T14:56:00 & 1\\ 2002 & 1980 & 279.805 & 2002-12-31T10:23:00 & 2\\ 2002 & 1985 & 279.808 & 2002-12-31T10:40:00 & 15\\ 2002 & 1991 & 279.789 & 2002-12-31T10:24:00 & 6\\ 2006 & 1963 & 279.285 & 2006-12-31T08:01:00 & 1\\ 2007 & 1963 & 279.321 & 2007-12-31T07:04:00 & 1\\ 2012 & 1980 & 279.803 & 2012-12-31T06:25:00 & 6\\ 2013 & 1980 & 279.882 & 2013-12-31T08:16:00 & 2\\ 2014 & 1969 & 264.766 & 2014-12-17T00:07:00 & 1\\ 2017 & 1930 & 342.277 & 2017-09-21T18:39:00 & 1\\ 2017 & 1941 & 279.510 & 2017-12-30T03:41:00 & 1\\ 2018 & 1969 & 278.254 & 2018-12-29T07:29:00 & 1\\ 2033 & 1975 & 275.526 & 2033-12-27T10:12:00 & 1\\ 2033 & 1980 & 275.488 & 2033-12-27T10:06:00 & 1\\ 2033 & 1985 & 275.452 & 2033-12-27T09:55:00 & 1\\ 2033 & 1991 & 275.406 & 2033-12-27T09:54:00 & 1\\ 2033 & 1996 & 275.346 & 2033-12-27T08:58:00 & 1\\ 2034 & 1975 & 262.477 & 2034-12-13T22:22:00 & 1\\ 2034 & 1980 & 261.456 & 2034-06-06T03:40:00 & 1\\ 2034 & 1985 & 261.092 & 2034-04-05T17:02:00 & 1\\ 2034 & 1991 & 260.269 & 2034-03-09T11:52:00 & 1\\ 2035 & 1914 & 276.553 & 2035-01-09T07:59:00 & 1\\ 2035 & 1952 & 271.463 & 2035-12-20T03:11:00 & 1\\ 2039 & 1980 & 272.974 & 2039-12-25T01:51:00 & 1\\ 2039 & 1991 & 272.131 & 2039-12-25T01:05:00 & 1\\ \hline \end{tabular} \end{table} There are several other parent bodies possibly 
connected to the $\alpha$ Cepheids stream: 2007 YU56 (D$_{SH}$ = 0.20), 2005 YT8 (D$_{SH}$ = 0.19), 1999 AF4 (D$_{SH}$ = 0.19), 2011 AL52 (D$_{SH}$ = 0.19), 2013 XN24 (D$_{SH}$ = 0.12), 2008 BC (D$_{SH}$ = 0.17), and 2002 BM (D$_{SH}$ = 0.16). The analysis for those bodies will be done in future work. \section{IAU meteor shower \#541 SSD - 66 Draconids and asteroid 2001 XQ } Meteor shower 66 Draconids had been reported by \citet{segon2014new} as a grouping of 43 meteors having a mean radiant at R.A. = 302°, Dec. = +62°, V$_{g}$ = 18.2 km/s. This shower has been found to be active from solar longitude 242° to 270° (November 23 to December 21), having a peak activity period around 255° (December 7). Searches by \citet{jenniskens2016cams} and \citet{kornovs2014confirmation} failed to detect this shower. However, \citet{rudawska2015independent} again found this shower, consisting of 39 meteors from the EDMOND meteor orbits database, at R.A. = 296°, Dec. = 64°, V$_{g}$ = 19.3 km/s for solar longitude $\lambda_{0}$ = 247°. A search for a possible parent body of this shower resulted in asteroid 2001 XQ, which, having D$_{SH}$ = 0.10, represented the most probable candidate. The summary of mean orbital parameters from the above-mentioned searches compared with 2001 XQ is shown in Table \ref{tab:table6}. \begin{table*}[t] \caption{Orbital parameters for 66 Draconids and 2001 XQ with respective D$_{SH}$ values. 
Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2001 XQ.} \label{tab:table6} \centering \begin{tabular}{l c c c c c c} \hline\hline 66 Draconids & q & e & i & Node & $\omega$ & D$_{SH}$\\ References & (AU) & & (\degr) & (\degr) & (\degr) & \\ \hline 1 & 0.981 & 0.657 & 27.2 & 255.2 & 184.4 & 0.10\\ 2 & 0.980 & 0.667 & 29.0 & 247.2 & 185.2 & 0.13\\ 2001 XQ & 1.035 & 0.716 & 29.0 & 251.4 & 190.1 & 0\\ \hline \end{tabular} \tablebib{(1) \citet{segon2014new}; (2) \citet{rudawska2015independent}. } \end{table*} Asteroid 2001 XQ has Tisserand parameter T$_{j}$ = 2.45, a value common for Jupiter-family comets, which makes us suspect that it may not be an asteroid per se, but rather a dormant comet. To the authors' collective knowledge, no cometary activity has been observed for this body. Nor was there any significant difference in the full-width at half-maximum spread between stars and the asteroid on the imagery provided courtesy of Leonard Kornoš (personal communication) from Modra Observatory. They had observed this asteroid (at that time named 2008 VV4) on its second return to perihelion, during which it reached \nth{18} magnitude. Numerical modeling of the hypothetical meteor shower whose particles originate from asteroid 2001 XQ was performed for perihelion passages from 800 AD up to 2100 AD. The modeling showed multiple direct hits on the Earth in many years, even outside the period covered by the observations. The summary of observed and modeled radiant positions is given in Table \ref{tab:table7}. \begin{table} \caption{Observed 66 Draconid and modeled 2001 XQ meteors' mean radiant positions (prefix C\_ stands for calculated (modeled), while prefix O\_ stands for observed). The number in the parenthesis indicates the number of observed 66 Draconid meteors in the given year. 
$\theta$ is the angular distance between the modeled and the observed mean radiant positions.} \label{tab:table7} \centering \begin{tabular}{l c c c c c} \hline\hline Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\ & (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline C\_2007 & 250.3 & 308.2 & 65.3 & 19.3 & ...\\ O\_2007 (5) & 257.5 & 300.1 & 63.2 & 18.2 & 4.1\\ C\_2008 & 248.2 & 326.8 & 56.9 & 16.1 & ...\\ O\_2008 (8) & 254.0 & 300.5 & 62.6 & 18.0 & 14.3\\ C\_2009 & 251.1 & 309.6 & 64.0 & 18.8 & ...\\ O\_2009 (5) & 253.6 & 310.4 & 61.0 & 17.0 & 3.0\\ C\_2010 & 251.2 & 304.0 & 63.1 & 19.1 & ...\\ O\_2010 (17) & 253.7 & 300.4 & 63.4 & 18.9 & 1.6\\ \hline \end{tabular} \end{table} Despite the fact that the difference in the mean radiant positions may seem significant, radiant plots of individual meteors show that some of the meteors predicted to hit the Earth at the observation epoch were observed at positions almost exactly as predicted. It is thus considered that the results of the simulations statistically represent the stream correctly, but individual trails cannot be identified as responsible for any specific outburst, as visualized in Figures \ref{fig:figure8} and \ref{fig:figure9}. The activity of this shower is therefore expected to be quite regular from year to year. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image8.jpg}} \caption{Location of the nodes of the particles released by 2001 XQ over several centuries, concatenated over 50 years. The Earth crosses the stream.} \label{fig:figure8} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image9.png}} \caption{Theoretical radiants of the particles released by 2001 XQ which were closest to the Earth. The range of solar longitudes for modeled radiants is from 231.1\degr to 262.8\degr. 
Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure9} \end{figure} Two other candidate parent bodies were initially considered, 2004 YY23 and 2015 WB13, both of which had a D$_{SH}$ of 0.26. This was deemed too distant to be associated with the 66 Draconids stream. \section{IAU meteor shower \#751 KCE - $\kappa$ Cepheids and asteroid 2009 SG18 } The meteor shower $\kappa$ Cepheids had been reported by \citet{segon2015four} as a grouping of 17 meteors with very similar orbits, having an average D$_{SH}$ of only 0.06. The activity period was found to last from September 11 to September 23, covering solar longitudes from 168° to 180°. The radiant position was R.A. = 318°, Dec. = 78° with V$_{g}$ = 33.7 km/s, at a mean solar longitude of 174.4°. Since the new shower discovery was reported only recently, the search by \citet{kornovs2014confirmation} can be considered a completely blind test, and it did not find the shower; the search by \citet{jenniskens2016cams} did not detect it in the CAMS database either. Once again, the search by \citet{rudawska2015independent} found the shower, but with a much higher number of meteors than found in the SonotaCo and CMN orbit databases. In total, 88 meteors were extracted as $\kappa$ Cepheid members from the EDMOND database. A summary of the mean orbital parameters from the above-mentioned searches compared with 2009 SG18 is shown in Table \ref{tab:table8}. \begin{table*}[t] \caption{Orbital parameters for $\kappa$ Cepheids and asteroid 2009 SG18 with corresponding D$_{SH}$ values. 
Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2009 SG18.} \label{tab:table8} \centering \begin{tabular}{l c c c c c c} \hline\hline $\kappa$ Cepheids & q & e & i & Node & $\omega$ & D$_{SH}$\\ References & (AU) & & (\degr) & (\degr) & (\degr) & \\ \hline 1 & 0.983 & 0.664 & 57.7 & 174.4 & 198.4 & 0.10\\ 2 & 0.987 & 0.647 & 55.9 & 177.2 & 190.4 & 0.17\\ 2009 SG18 & 0.993 & 0.672 & 58.4 & 177.6 & 204.1 & 0\\ \hline \end{tabular} \tablebib{(1) \citet{segon2014new}; (2) \citet{rudawska2015independent}. } \end{table*} What can be seen at a glance is that the mean orbital parameters for both searches are very consistent (D$_{SH}$ = 0.06), while the mean shower orbits and the asteroid's orbit differ mainly in the argument of perihelion and the perihelion distance. Asteroid 2009 SG18 has a Tisserand parameter for Jupiter of T$_{j}$ = 2.31, meaning that it could be a dormant comet. Numerical modeling of the hypothetical meteor shower originating from asteroid 2009 SG18 for perihelion passages from 1804 AD up to 2020 AD yielded multiple direct hits on the Earth over more years than the period covered by the observations, as seen in Figures \ref{fig:figure10} and \ref{fig:figure11}. The remarkable agreement found between the predicted and observed meteors for the years 2007 and 2010 is summarized in Table \ref{tab:table9}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image10.jpg}} \caption{Location of the nodes of the particles released by 2009 SG18 over several centuries, concatenated over 50 years. The Earth crosses the stream.} \label{fig:figure10} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image11.png}} \caption{Theoretical radiant of the particles released by 2009 SG18 which were closest to the Earth. 
Several features are visible due to the different trails, but care must be taken when interpreting these data. The range of solar longitudes for modeled radiants is from 177.0\degr to 177.7\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure11} \end{figure} \begin{table} \caption{Observed $\kappa$ Cepheids and modeled 2009 SG18 meteors' mean radiant positions (prefix C\_ stands for calculated or modeled, while prefix O\_ stands for observed). The number in parentheses indicates the number of observed meteors in the given year. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.} \label{tab:table9} \centering \begin{tabular}{l c c c c c} \hline\hline Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\ & (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline C\_2007 & 177.4 & 327.5 & 77.0 & 34.0 & ...\\ O\_2007 (3) & 177.1 & 328.3 & 77.9 & 35.3 & 0.9\\ C\_2010 & 177.7 & 327.7 & 77.7 & 34.3 & ...\\ O\_2010 (2) & 177.7 & 326.5 & 80.5 & 34.7 & 2.8\\ \hline \end{tabular} \end{table} Based on the initial analysis given in this paper, a prediction of possible enhanced activity on September 21, 2015 was made by \citet{segon2015croatian}. At the moment there are no video meteor data that confirm the predicted enhanced activity, but a paper on visual observations of the $\kappa$ Cepheids by a highly reputable visual observer confirmed some level of increased activity \citep{rendtel2015minor}. The encounters shown in Table \ref{tab:table10} between the trails ejected by 2009 SG18 and the Earth were found theoretically through the dynamical modeling. Caution is warranted when interpreting these results, since any historical outbursts still need to be confirmed before such predictions can be trusted.
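The dormant-comet argument used for these asteroids rests on the Tisserand parameter with respect to Jupiter. A minimal sketch of its computation, using the 2009 SG18 elements from Table \ref{tab:table8} (the semi-major axis is derived from q and e; Jupiter's semi-major axis is taken as 5.204 AU):

```python
import math

A_JUPITER = 5.204  # semi-major axis of Jupiter, AU

def tisserand_jupiter(a, e, i_deg):
    """Tisserand parameter T_J with respect to Jupiter.

    a in AU, e = eccentricity, i_deg = inclination in degrees.
    T_J < 3 is the usual indicator of a Jupiter-family-comet-like
    (and hence possibly dormant-comet) orbit.
    """
    i = math.radians(i_deg)
    return A_JUPITER / a + 2.0 * math.cos(i) * math.sqrt(
        (a / A_JUPITER) * (1.0 - e**2))

# 2009 SG18 from Table 8: q = 0.993 AU, e = 0.672, i = 58.4 deg
q, e, i = 0.993, 0.672, 58.4
a = q / (1.0 - e)                 # semi-major axis from q and e
print(round(tisserand_jupiter(a, e, i), 2))   # → 2.31, as quoted in the text
```

A low-inclination main-belt-like orbit (e.g., a = 2.2 AU, e = 0.1, i = 5°) gives T$_{j}$ > 3 with the same formula, which is why the sub-3 values quoted here are suggestive of a cometary origin.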
\begin{table*}[t] \caption{Prediction of possible outbursts caused by 2009 SG18. Columns: Year = the year of Earth's collision with the trail, Trail = year of particle ejection from the given trail, rE-rD = the distance between the Earth and the center of the trail, $\lambda_{0}$ = solar longitude in degrees, yyyy-mm-ddThh:mm:ss = date and time of the trail's closest approach, ZHR = zenithal hourly rate} \label{tab:table10} \centering \begin{tabular}{c c c c c c} \hline\hline Year & Trail & rE-rD & $\lambda_{0}$ & yyyy-mm-ddThh:mm:ss & ZHR\\ & & (AU) & (\degr) & &\\ \hline 2005 & 1967 & 0.00066 & 177.554 & 2005-09-20T12:08:00 & 11\\ 2006 & 1804 & 0.00875 & 177.383 & 2006-09-20T11:31:00 & 13\\ 2010 & 1952 & -0.00010 & 177.673 & 2010-09-20T21:38:00 & 12\\ 2015 & 1925 & -0.00143 & 177.630 & 2015-09-21T03:29:00 & 10\\ 2020 & 1862 & -0.00064 & 177.479 & 2020-09-20T06:35:00 & 11\\ 2021 & 1962 & 0.00152 & 177.601 & 2021-09-20T15:39:00 & 11\\ 2031 & 2004 & -0.00126 & 177.267 & 2031-09-20T21:15:00 & 12\\ 2031 & 2009 & -0.00147 & 177.222 & 2031-09-20T19:55:00 & 13\\ 2033 & 1946 & 0.00056 & 177.498 & 2033-09-20T14:57:00 & 10\\ 2036 & 1978 & -0.00042 & 177.308 & 2036-09-20T04:44:00 & 20\\ 2036 & 2015 & -0.00075 & 177.220 & 2036-09-20T02:33:00 & 20\\ 2036 & 2025 & 0.00109 & 177.254 & 2036-09-20T03:19:00 & 13\\ 2037 & 1857 & -0.00031 & 177.060 & 2037-09-20T04:37:00 & 13\\ 2037 & 1946 & 0.00021 & 177.273 & 2037-09-20T09:56:00 & 10\\ 2038 & 1841 & -0.00050 & 177.350 & 2038-09-20T18:02:00 & 10\\ 2038 & 1925 & 0.00174 & 177.416 & 2038-09-20T19:39:00 & 11\\ 2039 & 1815 & -0.00018 & 177.303 & 2039-09-20T23:01:00 & 10\\ \hline \end{tabular} \end{table*} The next closest possible association was 2002 CE26 with D$_{SH}$ of 0.35, which was deemed too distant to be connected to the $\kappa$ Cepheids stream. 
\section{IAU meteor shower \#753 NED - November Draconids and asteroid 2009 WN25 } The November Draconids had been previously reported by \citet{segon2015four}, and consist of 12 meteors on very similar orbits, with a maximal distance from the mean orbit of D$_{SH}$ = 0.08 and, on average, only D$_{SH}$ = 0.06. The activity period was found to be between November 8 and 20, with peak activity at a solar longitude of 232.8°. The radiant position at peak activity was found to be at R.A. = 194°, Dec. = +69°, and V$_{g}$ = 42.0 km/s. There are no results from other searches since the shower has been reported only recently. Other meteor showers have been reported at coordinates similar to \#753 NED, namely \#387 OKD October $\kappa$ Draconids and \#392 NID November i Draconids \citep{brown2010meteoroid}. The D$_{SH}$ for \#387 OKD is far too large (0.35) for it to be considered the same stream. \#392 NID may be closely related to \#753, since the D$_{SH}$ of 0.14 derived from radar observations shows significant similarity; however, mean orbits derived from optical observations by \citet{jenniskens2016cams} differ by a D$_{SH}$ of 0.24, which we consider too large for them to be the same shower. The possibility that asteroid 2009 WN25 is the parent body of this possible meteor shower has been investigated by numerical modeling of the hypothetical meteoroids ejected for the period from 3000 BC up to 1500 AD, visualized in Figures \ref{fig:figure12} and \ref{fig:figure13}. The asteroid 2009 WN25 has a Tisserand parameter for Jupiter of T$_{j}$ = 1.96. Despite the fact that direct encounters with modeled meteoroids were not found for all years in which the meteors were observed, and that the number of hits is relatively small compared to other modeled showers, the averaged predicted positions fit the observations very well (see Table \ref{tab:table11}).
This shows that the theoretical results have a statistically meaningful value and validates the approach of simulating the stream over a long period of time and concatenating the results to provide an overall view of the shower. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image12.jpg}} \caption{Location of the nodes of the particles released by 2009 WN25 over several centuries, concatenated over 100 years. The Earth crosses the stream.} \label{fig:figure12} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image13.png}} \caption{Theoretical radiant of the particles released by 2009 WN25 which were closest to the Earth. The range of solar longitudes for modeled radiants is from 230.3\degr to 234.6\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure13} \end{figure} \begin{table} \caption{Averaged observed and modeled radiant positions for \#753 NED and 2009 WN25. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.} \label{tab:table11} \centering \begin{tabular}{l c c c c c} \hline\hline November Draconids & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\ & (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline Predicted & 232.8 & 194.2 & 68.6 & 42.0 & ...\\ Observed & 232.4 & 196.5 & 67.6 & 41.8 & 1.3\\ \hline \end{tabular} \end{table} Moreover, it appears that the predicted 2014 activity sits exactly at the same equatorial location (R.A. = 199°, Dec. = +67°) seen on Canadian Meteor Orbit Radar (CMOR) plots.\footnote{\label{cmor_plots}\url{http://fireballs.ndc.nasa.gov/} - "radar".} The shower has been noted as NID, but its position fits more closely to the NED. 
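The $\theta$ values quoted in Tables \ref{tab:table9}, \ref{tab:table11}, and \ref{tab:table12} are great-circle distances between the modeled and observed radiants. A minimal sketch of that computation via the spherical law of cosines, with all coordinates in degrees (the clamp guards against rounding just outside [-1, 1]):

```python
import math

def radiant_separation(ra1, dec1, ra2, dec2):
    """Angular distance in degrees between two radiants given in
    equatorial coordinates (degrees), via the spherical law of cosines."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_t = (math.sin(dec1) * math.sin(dec2)
             + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(min(1.0, max(-1.0, cos_t))))

# Predicted vs. observed #753 NED radiant (Table 11):
theta = radiant_separation(194.2, 68.6, 196.5, 67.6)
print(round(theta, 1))   # → 1.3, matching the theta column of Table 11
```

Note that at Dec ≈ +68° a right-ascension difference of 2.3° contributes only about 0.86° on the sky, which is why the tabulated $\theta$ values are smaller than the raw coordinate differences suggest.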
Since orbital data from the CMOR database are not available online, the authors were not able to confirm the hypothesis that the radar is seeing the same meteoroid orbits as the model produces. However, the authors received confirmation from Dr. Peter Brown at the University of Western Ontario (private correspondence) that this stream has shown activity each year in the CMOR data, and likely belongs to the QUA-NID complex. A recently published paper \citep{micheli2016evidence} suggests that asteroid 2009 WN25 may be a parent body of the NID shower as well, so additional analysis with more observations will be needed to reveal the true nature of this shower complex. The next closest possible association was 2012 VF6 with a D$_{SH}$ of 0.49, which was deemed too distant to be connected to the November Draconids stream. \section{IAU meteor shower \#754 POD - $\psi$ Draconids and asteroid 2008 GV } The possible new meteor shower $\psi$ Draconids was reported by \citet{segon2015four}, consisting of 31 tight meteoroid orbits with a maximal distance from the mean orbit of D$_{SH}$ = 0.08 and, on average, only D$_{SH}$ = 0.06. The $\psi$ Draconids were found to be active from March 19 to April 12, with average activity around a solar longitude of 12° and a radiant at R.A. = 262°, Dec. = +73°, and V$_{g}$ = 19.8 km/s. No confirmation from other shower searches exists at the moment, since the shower has been reported only recently. If this shower's existence can be confirmed, the most probable parent body known at this time would be asteroid 2008 GV. This asteroid was found to have a very similar orbit to the average orbit of the $\psi$ Draconids, with a D$_{SH}$ of 0.08. Since the asteroid has a Tisserand parameter of T$_{j}$ = 2.90, it may be a dormant comet as well. Dynamical modeling has been done for hypothetical meteoroids ejected for perihelion passages from 3000 BC to 2100 AD, resulting in direct hits with the Earth for almost every year from 2000 onwards.
For the period covered by the observations used in the CMN search, direct hits were found for the years 2008, 2009, and 2010. A summary of the average radiant positions from the observations and from the predictions is given in Table \ref{tab:table12}. The plots of modeled and observed radiant positions are shown in Figure \ref{fig:figure15}, while locations of the nodes of the modeled particles released by 2008 GV are shown in Figure \ref{fig:figure14}. \begin{table} \caption{Observed $\psi$ Draconids and modeled 2008 GV meteors' mean radiant positions (prefix C\_ stands for calculated (modeled), while prefix O\_ stands for observed). The number in parentheses indicates the number of observed meteors in the given year. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.} \label{tab:table12} \centering \begin{tabular}{l c c c c c} \hline\hline Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\ & (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline C\_2008 & 15.9 & 264.6 & 75.2 & 19.4 & ...\\ O\_2008 (2) & 14.2 & 268.9 & 73.3 & 20.7 & 2.2\\ C\_2009 & 13.9 & 254.0 & 74.3 & 19.3 & ...\\ O\_2009 (11) & 9.5 & 257.4 & 72.0 & 19.9 & 2.5\\ C\_2010 & 12.8 & 244.7 & 73.4 & 19.1 & ...\\ O\_2010 (6) & 15.1 & 261.1 & 73.0 & 19.8 & 4.7\\ \hline \end{tabular} \end{table} As can be seen from Table \ref{tab:table12}, the mean observations fit the positions predicted by dynamical modeling very well, and in two cases single meteors were observed very close to the predicted positions. On the other hand, the predictions for the year 2015 show that a few meteoroids should hit the Earth around solar longitude 14.5° at R.A. = 260°, Dec. = +75°, but no significant activity has been detected in CMN observations. However, small groups of meteors can be seen on CMOR plots for that solar longitude at a position slightly lower in declination, but this should be verified using radar orbital measurements once available. According to Dr.
Peter Brown at the University of Western Ontario (private correspondence), there is no significant activity from this shower in the CMOR orbital data. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image14.jpg}} \caption{Location of the nodes of the particles released by 2008 GV over several centuries, concatenated over 100 years. The Earth crosses the stream.} \label{fig:figure14} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image15.png}} \caption{Theoretical radiant of the particles released by 2008 GV which were closest to the Earth. The range of solar longitudes for modeled radiants is from 355.1\degr to 17.7\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure15} \end{figure} One other potential parent body may be connected to the $\psi$ Draconids stream: 2015 FA118. That potential parent body will be analyzed in future work. \section{IAU meteor shower \#755 MID - May $\iota$ Draconids and asteroid 2006 GY2 } The possible new meteor shower May $\iota$ Draconids was reported by \citet{segon2015four}, consisting of 19 tight meteoroid orbits with a maximal distance from their mean orbit of D$_{SH}$ = 0.08 and, on average, only D$_{SH}$ = 0.06. The May $\iota$ Draconids were found to be active from May 7 to June 6, with peak activity around a solar longitude of 60° at R.A. = 231°, Dec. = +53°, and V$_{g}$ = 16.7 km/s. No confirmation from other searches exists at the moment, since the shower has been reported in the literature only recently. Greaves (from the meteorobs mailing-list archives\footnote{\label{meteorobs}\url{http://lists.meteorobs.org/pipermail/meteorobs/2015-December/018122.html}.}) stated that this shower should be the same as \#273 PBO $\phi$ Bootids.
However, if we look at the details of this shower as presented in \citet{jenniskens2006meteor}, we find that the solar longitude stated in the IAU Meteor Data Center does not correspond to the mean ascending node of the three meteors chosen to represent the $\phi$ Bootid shower. If a weighted orbit average of all references is calculated, the resulting D$_{SH}$ from MID is 0.18, which we consider large enough for it to be a separate shower (if the MID exists at all). Three \#273 PBO orbits from the IAU MDC do indeed match \#755 MID, suggesting that these two possible showers may somehow be connected. Asteroid 2006 GY2 was investigated as a possible parent body using dynamical modeling as in the previous cases. The asteroid 2006 GY2 has a Tisserand parameter for Jupiter of T$_{j}$ = 3.70. Of all the cases discussed in this paper, this one shows the poorest match between the observed and predicted radiant positions. The theoretical stream was modeled with trails ejected from 1800 AD through 2100 AD. According to the dynamical modeling analysis, this parent body should produce meteors for all years covered by the observations, at more or less the same position, R.A. = 248.5°, Dec. = +46.2°, at the same solar longitude of 54.4°, and with V$_{g}$ = 19.3 km/s, as visualized in Figures \ref{fig:figure16} and \ref{fig:figure17}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image16.jpg}} \caption{Location of the nodes of the particles released by 2006 GY2 over several centuries, concatenated over 50 years. The Earth crosses the stream.} \label{fig:figure16} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image17.png}} \caption{Theoretical radiant of the particles released by 2006 GY2 which were closest to the Earth. The range of solar longitudes for modeled radiants is from 54.1\degr to 54.5\degr.
Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure17} \end{figure} However, six meteors belonging to the possible \#755 MID shower found in the solar longitude range from 52.3° to 53.8° (the next meteor found was at 58.6°) show a mean radiant position at R.A. = 225.8°, Dec. = +46.4°, with a mean V$_{g}$ of 16.4 km/s. Given the angular distance of 15.6\degr between the observed and modeled radiants, the difference in geocentric velocity (3 km/s), and the fact that no single model meteor was observed at or near that position, we cannot conclude that this asteroid may be the parent body of the possible meteor shower May $\iota$ Draconids. Another potential parent body was 2002 KK3, having a D$_{SH}$ = 0.18. However, the dynamical modeling for 2002 KK3 showed no crossings with Earth's orbit. There were also three more distant bodies at D$_{SH}$ = 0.20: 2010 JH3, 2013 JY2, and 2014 WC7. Those bodies will be analyzed in future work. \section{IAU meteor shower \#531 GAQ - $\gamma$ Aquilids and comet C/1853 G1 (Schweizer), and other investigated bodies } The possible new meteor shower $\gamma$ Aquilids was reported by \citet{segon2014new}, and found in other stream search papers (\citet{kornovs2014confirmation}, \citet{rudawska2015independent}, and \citet{jenniskens2016cams}). Meteoroids from the suggested parent body, comet C/1853 G1 (Schweizer), were modeled for perihelion passages ranging from 3000 BC up to the present, and were evaluated. Despite the very similar orbits of \#531 GAQ and comet C/1853 G1 (Schweizer), no direct hits to the Earth were found.
Besides C/1853 G1 (Schweizer), negative results were found in the dynamical analyses done for asteroids 2011 YX62 (as a possible parent body of \#604 ACZ $\zeta$1 Cancrids) and 2002 KK3, 2008 UZ94, and 2009 CR2 (no shower association reported). \section{Discussion} The new meteoroid stream discoveries described in this work have been reported previously, and were based on searches with well-defined conditions and constraints that individual meteoroid orbits must meet for association. The simulated particles ejected by the hypothetical parent bodies were treated in the same rigorous manner. Although we consider the similarity of the observed radiant and the dynamically modeled radiant as sufficient evidence for association with the hypothetical parent body when following the approach of \citet{vaubaillon2005new}, there are several points worth discussing. All meteoroid orbits used in the analysis of \citet{segon2014parent} were generated using the UFOorbit software package by \citet{sonotaco2009meteor}. As this software does not estimate the errors of the observations or of the calculated orbital elements, it is not possible to assess the real precision of the individual meteoroid orbits used in the initial search analysis. Furthermore, all UFOorbit-generated orbits are calculated on the basis of the meteor's average geocentric velocity, without taking deceleration into consideration. This simplification introduces errors into the orbital elements of very slow, very long, and/or very bright meteors. The real impact of this simplification is discussed in \citet{segon2014draconids}, where the 2011 Draconid outburst is analyzed. Two average meteoroid orbits generated from average velocities were compared, one with and one without a linear deceleration model. These two orbits differed by as much as 0.06 in D$_{SH}$ (D$_{H}$ = 0.057, D$_{D}$ = 0.039).
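The size of the velocity bias introduced by ignoring deceleration can be illustrated with a toy example (all numbers hypothetical): a meteor decelerating linearly is reduced once with the average-velocity assumption and once with a quadratic fit to the trail length that does model the deceleration.

```python
import numpy as np

# Toy slow meteor (hypothetical values): true pre-atmospheric speed
# 16.5 km/s, constant deceleration 0.8 km/s^2 over a 1.5 s trail.
v_inf, decel, duration = 16.5, 0.8, 1.5
t = np.linspace(0.0, duration, 31)
length = v_inf * t - 0.5 * decel * t**2     # distance flown along the trail

# A no-deceleration reduction uses the average speed over the trail:
v_avg = length[-1] / duration               # = v_inf - decel*duration/2

# Fitting the same trail with a decelerating (quadratic) model
# recovers the true initial speed as the linear coefficient:
coeffs = np.polyfit(t, length, 2)
v_fit = coeffs[1]
# v_avg ≈ 15.9 km/s vs. v_fit ≈ 16.5 km/s: a 0.6 km/s bias for this toy meteor
```

A bias of this size in the entry velocity of a slow meteor propagates directly into the semi-major axis and eccentricity, which is consistent with the 0.06 D$_{SH}$ difference quoted above.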
The deviation between the orbits does not necessarily mean that the clustering would not be determined, but it does mean that those orbits will certainly differ from orbits generated with deceleration taken into account, as well as from the numerically generated orbits of hypothetical parent bodies. Consequently, the radiant locations of slower meteors can be, besides the natural radiant dispersion, additionally dispersed due to the varying influence of the deceleration on the position of the true radiant. This observation is relevant not only for UFOorbit, but for all software which does not properly model the actual deceleration. The CAMS Coincidence software uses an exponential deceleration model \citep{jenniskens2011cams}; however, not all meteors decelerate exponentially, as shown by \citet{borovivcka2007atmospheric}. The real influence of deceleration on radiant dispersion will be a topic of future work. Undoubtedly an important question is whether the dispersion caused by improperly calculated deceleration of slow meteors (e.g., those generated by near-Earth objects) can cause automated stream-searching methods to fail to associate members of a meteoroid stream with one another. Besides the lack of error estimates for meteor observations, parent bodies on relatively unstable orbits are observed over short observation arcs and thus often do not have very precise orbital element solutions. Moreover, unknown past parent-body activity presents a seemingly unsolvable issue: how the parent orbit could have been perturbed on every perihelion pass close to the Sun. Also, if the ejection modeling assumed that particle ejection occurred during a perihelion passage when the parent body was not active, there would be no meteors present when the Earth passes through the point of the falsely predicted filament.
On the other hand, if the Earth encounters meteoroids from a perihelion passage of particularly high activity, an unpredicted outburst can occur. \citet{vaubaillon2015latest} discuss the unknowns regarding parent bodies and the problems regarding meteor shower outburst prediction in greater detail. Another fundamental problem encountered during this analysis was the lack of any rigorous definition of what meteor showers or meteoroid streams actually are, and there is no common consensus to refer to. This issue was briefly discussed in \citet{brown2010meteoroid}, and no real advances towards a clear definition have been made since. We can consider a meteor shower to be a group of meteors which appear annually near the same radiant and with approximately the same entry velocity. To better embrace the higher-dimensional nature of orbital parameters and their time evolution, versus a radiant that is fixed year after year, this should be extended to mean that there exists a meteoroid stream with meteoroids distributed along and across the whole orbit, with constraints dictated by dynamical evolution away from the mean orbit. Using the first definition, however, some meteor showers caused by Jupiter-family comets will not be covered very well, as they are not active annually. The orbits of these kinds of meteor showers are not stable in the long term due to the gravitational and non-gravitational influences on the meteoroid stream.\footnote{\label{vaubaillonIMC2014}\url{http://www.imo.net/imc2014/imc2014-vaubaillon.pdf}.} On the other hand, if we are to consider any group of radiants which exhibit similar features but do not appear annually as a meteor shower, we can expect to have thousands of meteor shower candidates in the near future.
It is the opinion of the authors that with the rising number of observed multi-station video meteors, and consequently the rising number of estimated meteoroid orbits, the number of new potential meteor showers detected will increase as well, regardless of the stream search method used. As a consequence of the vague meteor shower definition, several methods of meteor shower identification have been used in recent papers. \citet{vida2014meteor} discussed a rudimentary method of visual identification combined with D-criterion shower candidate validation. \citet{rudawska2014new} used the Southworth and Hawkins D-criterion as a measure of meteoroid orbit similarity in an automatic single-linkage grouping algorithm, while in the subsequent paper by \citet{rudawska2015independent}, the geocentric parameters were evaluated as well. In \citet{jenniskens2016cams} the results of automatic grouping by orbital parameters were disputed and a manual approach was proposed. Although there are concerns about automated stream identification methods, we believe it would be worthwhile to explore the possibility of using density-based clustering algorithms, such as the DBSCAN or OPTICS algorithms \citep{kriegel2005density}, for the purpose of meteor shower identification. They could possibly discriminate shower meteors from the background, as they have a notion of noise and can handle varying density in the data. We also strongly encourage attempts to define meteor showers in a more rigorous manner, or the introduction of an alternate term which would help to properly describe such a complex phenomenon. The authors believe that a clear definition would be of great help in determining whether a parent body actually produces meteoroids, at least until meteor observations become precise enough to determine the connection of a parent body to a single meteoroid orbit.
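To illustrate why a density-based approach is attractive for this task, here is a minimal DBSCAN sketch operating on a precomputed distance matrix. The two-element (q, e) slice of the orbital elements and the Euclidean metric are simplifications standing in for a full D-criterion distance, and all orbit values are hypothetical:

```python
import math

def dbscan(dist, eps, min_pts):
    """Minimal DBSCAN on a precomputed distance matrix.

    Returns one label per point: 0, 1, ... for clusters, -1 for noise.
    """
    labels = [None] * len(dist)          # None = not yet visited
    cid = -1
    for p in range(len(dist)):
        if labels[p] is not None:
            continue
        neigh = [q for q in range(len(dist)) if dist[p][q] <= eps]
        if len(neigh) < min_pts:
            labels[p] = -1               # noise (may become a border point)
            continue
        cid += 1
        labels[p] = cid
        seeds = list(neigh)
        while seeds:
            q = seeds.pop()
            if labels[q] == -1:
                labels[q] = cid          # border point of this cluster
            if labels[q] is not None:
                continue
            labels[q] = cid
            q_neigh = [r for r in range(len(dist)) if dist[q][r] <= eps]
            if len(q_neigh) >= min_pts:  # q is a core point: keep expanding
                seeds.extend(q_neigh)
    return labels

# Hypothetical orbits reduced to a (q, e) slice: five tightly grouped
# "stream" orbits plus four scattered sporadics.
orbits = [(0.980, 0.660), (0.985, 0.662), (0.990, 0.655),
          (0.975, 0.665), (0.982, 0.658),
          (0.500, 0.200), (1.500, 0.900), (0.700, 0.950), (1.300, 0.100)]
dist = [[math.dist(a, b) for b in orbits] for a in orbits]
labels = dbscan(dist, eps=0.05, min_pts=4)
print(labels)   # → [0, 0, 0, 0, 0, -1, -1, -1, -1]
```

The stream members end up in one cluster while the sporadics are labeled as noise, which is the behavior single-linkage chaining lacks.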
\section{Conclusion} From this work, we can conclude that the following associations between newly discovered meteoroid streams and parent bodies are validated: \begin{itemize} \item \#549 FAN 49 Andromedids and Comet 2001 W2 Batters \item \#533 JXA July $\xi$ Arietids and Comet C/1964 N1 Ikeya \item \#539 ACP $\alpha$ Cepheids and Comet 255P/Levy \item \#541 SSD 66 Draconids and Asteroid 2001 XQ \item \#751 KCE $\kappa$ Cepheids and Asteroid 2009 SG18 \item \#753 NED November Draconids and Asteroid 2009 WN25 \item \#754 POD $\psi$ Draconids and Asteroid 2008 GV \end{itemize} The connection between \#755 MID May $\iota$ Draconids and asteroid 2006 GY2 is not firmly established and still requires additional observational data before any conclusion can be drawn. The asteroidal associations are interesting in that each body has a Tisserand parameter for Jupiter indicating that it was possibly a Jupiter-family comet in the past, and thus each may now be a dormant comet. It may therefore be worth looking for outgassing from asteroids 2001 XQ, 2009 SG18, 2009 WN25, 2008 GV, and even 2006 GY2 during their perihelion passages in the near future using high-resolution imaging. \begin{acknowledgements} JV would like to acknowledge the availability of computing resources on the Occigen supercomputer at CINES (France) to perform the computations required for modeling the theoretical meteoroid streams. Special acknowledgement also goes to all members of the Croatian Meteor Network for their devoted work on this project. \end{acknowledgements} \bibpunct{(}{)}{;}{a}{}{,} \bibliographystyle{aa} \section{Introduction} When one is attempting to associate newly discovered meteoroid streams to their parent bodies, there are four critical steps that need to be carried out. The first is the stream discovery itself, through the search of databases and past records, ideally performed on meteor data composed of Keplerian orbital elements.
The second phase involves verification of the meteoroid stream using completely independent meteor databases and stream searches, as published online and/or reported in the literature; this helps validate the existence of the stream. The third step is the identification of candidate parent bodies, such as comets and asteroids, which show orbits similar to the space-time-aggregated meteoroid Keplerian elements of the found stream. However, close similarity of the orbits of a meteoroid stream and a potential parent body is not necessarily conclusive proof of association, since the two object types (parent body and meteoroid) can undergo significantly different orbital evolution, as shown by \citet{vaubaillon2006mechanisms}. Thus the most critical fourth step in determining the actual association is to perform dynamical modeling of the orbital evolution of a sample of particles ejected from a candidate parent body. Given a comet's or asteroid's best estimated orbit in the past, and following the ejected stream particles through many hundreds to thousands of years, one looks for eventual encounters with the Earth at the time of meteor observation, and for whether those encounters have a geometric similarity to the observed meteoroids of the stream under investigation. The work by \citet{mcnaught1999leonid} demonstrates this point. This paper follows the approach of \citet{vaubaillon2005new} in focusing on the results of the dynamical modeling phase, and is a culmination of all the steps just outlined, performed on new streams discovered from recent Croatian Meteor Network stream searches. The application of dynamical stream modeling indicates, with a high level of confidence, that seven new streams can be associated with either comets or asteroids, the latter of which are conjectured to be dormant comets.
\section{Processing approach} The seven streams and their hypothetical parent body associations were initially discovered using a meteor database search technique described in \citet{segon2014parent}. In summary, the method compared every meteor orbit to every other meteor orbit using the combined Croatian Meteor Network \citep{segon2012croatian, korlevic2013croatian}\footnote{\label{cmn_orbits}\url{http://cmn.rgn.hr/downloads/downloads.html#orbitcat}.} and SonotaCo \citep{sonotaco2009meteor}\footnote{\label{sonotaco_orbits}\url{http://sonotaco.jp/doc/SNM/index.html}.} video meteor orbit databases, looking for clusters and groupings in the five-parameter Keplerian orbital element space. This was based on the requirement that three D-criteria (\citet{southworth1963statistics}, \citet{drummond1981test}, \citet{jopek2008meteoroid}) were all satisfied within a specified threshold. These groups had their mean orbital elements computed and were sorted by the number of meteor members. Mean orbital elements were computed by a simple averaging procedure. Working down from the largest group, meteors with orbits similar to that of the group under evaluation were assigned to the group and eliminated from further aggregation. This captured the known streams quickly, removing them from the meteor pool, and eventually found the newly discovered streams. In accordance with International Astronomical Union (IAU) shower nomenclature rules \citep{jenniskens2006iau}, all stream discoveries were first published; the search results can be found in three papers in WGN, the Journal of the International Meteor Organization \citep{andreic2014, gural2014results, segon2014results}. Next, the literature was scoured for similar stream searches in other independent data sets, such as the CAMS \citep{rudawska2014new, jenniskens2016cams} and EDMOND \citep{rudawska2014independent} video databases, to determine the validity of the new streams found.
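Of the three D-criteria, the Southworth-Hawkins D$_{SH}$ is the one quoted throughout this paper. A minimal sketch of its computation, with each orbit given as (q, e, i, node, peri) in AU and degrees; the sign convention for the perihelion-longitude term is handled for the usual case of node differences below 180°:

```python
import math

def d_sh(o1, o2):
    """Southworth-Hawkins D-criterion between two orbits.

    Each orbit is (q, e, i, node, peri): perihelion distance in AU,
    eccentricity, and inclination / ascending node / argument of
    perihelion in degrees.
    """
    q1, e1, i1, n1, w1 = o1
    q2, e2, i2, n2, w2 = o2
    i1, i2 = math.radians(i1), math.radians(i2)
    dnode = math.radians(n2 - n1)

    # mutual inclination term, 2*sin(I/2)
    t_i = math.sqrt((2.0 * math.sin((i2 - i1) / 2.0))**2
                    + math.sin(i1) * math.sin(i2)
                      * (2.0 * math.sin(dnode / 2.0))**2)

    # difference of the longitudes of perihelion measured from the
    # intersection of the two orbital planes (sign flips for |dnode|>180 deg)
    sign = 1.0 if abs(n2 - n1) <= 180.0 else -1.0
    pi_diff = math.radians(w2 - w1) + sign * 2.0 * math.asin(
        math.cos((i2 + i1) / 2.0) * math.sin(dnode / 2.0)
        / math.cos(math.asin(t_i / 2.0)))

    d2 = ((e2 - e1)**2 + (q2 - q1)**2 + t_i**2
          + (((e1 + e2) / 2.0) * 2.0 * math.sin(pi_diff / 2.0))**2)
    return math.sqrt(d2)

kce  = (0.983, 0.664, 57.7, 174.4, 198.4)   # mean kappa Cepheids orbit (Table 8)
sg18 = (0.993, 0.672, 58.4, 177.6, 204.1)   # asteroid 2009 SG18 (Table 8)
print(round(d_sh(kce, sg18), 2))   # → 0.1, the 0.10 quoted in Table 8
```

Identical orbits give D$_{SH}$ = 0, and the thresholds used in this paper (e.g., 0.06 for tight stream membership, ~0.3 as clearly too distant) can be read directly against this scale.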
The verified new streams were then compared against known cometary and asteroidal orbits, from which a list of candidate parent bodies was compiled based, once again, on meeting multiple D-criteria for orbital similarity. Each section below describes in greater detail the unique processes and evidence for each stream's candidate association with a parent body. Besides the seven reported shower cases and their hypothetical parent bodies, the possibility of producing a meteor shower has also been investigated for four possible streams with orbital parameters similar to those of the asteroids 2002 KK3, 2008 UZ94, 2009 CR2, and 2011 YX62, but the results were inconclusive or negative. The remaining possible parent bodies from the search were not investigated because those comets do not have orbital elements precise enough to be investigated, or are stated to have parabolic orbits. The dynamical analysis for each object was performed as follows. First, the nominal orbit of the body was retrieved from the JPL HORIZONS ephemeris\footnote{\label{horizonsJPL}\url{http://horizons.jpl.nasa.gov}} for the current time period as well as for each perihelion passage during the past few centuries (typically two to five hundred years). Assuming the object presented comet-like activity in the past, the meteoroid stream ejection and evolution were simulated and propagated following \citet{vaubaillon2005new}. In detail, the method considers the ejection of meteoroids when the comet is within 3 AU of the Sun. The ejection velocity is computed following \citet{crifo1997dependence}. The ejection velocities typically range from 0 to \textasciitilde100 m/s. The evolution of the meteoroids in the solar system is then propagated using numerical simulations. The gravitation of all the planets as well as non-gravitational forces (radiation pressure, solar wind, and the Poynting-Robertson effect) are taken into account. More details can be found in \citet{vaubaillon2005new}.
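The ejection gating described above can be sketched as follows. This is a toy illustration of the two constraints stated in the text (ejection only inside 3 AU, speeds in the 0-100 m/s range), not the \citet{crifo1997dependence} model; the sample orbit and the uniform speed sampling are assumptions made for the example:

```python
import math
import random

AU_EJECT_LIMIT = 3.0    # ejection is considered only inside 3 AU (as in the text)

def heliocentric_distance(a, e, true_anomaly):
    """r from the conic equation r = a(1 - e^2) / (1 + e*cos(nu))."""
    return a * (1.0 - e**2) / (1.0 + e * math.cos(true_anomaly))

def sample_ejection_speeds(n, v_max=0.1):
    """Toy stand-in: uniform speeds in km/s between 0 and ~100 m/s."""
    return [random.uniform(0.0, v_max) for _ in range(n)]

# Ejection is attempted only on the arc of the orbit inside 3 AU.
# 2009 SG18-like orbit (Table 8): a derived from q = 0.993 AU, e = 0.672.
a, e = 3.03, 0.67
nus = [math.radians(d) for d in range(0, 360, 5)]
active = [nu for nu in nus if heliocentric_distance(a, e, nu) < AU_EJECT_LIMIT]
# Perihelion (nu = 0, r ≈ 1.0 AU) is inside the limit;
# aphelion (nu = 180 deg, r ≈ 5.1 AU) is outside it.
```

Only particles released on the active arc would then be handed to the numerical propagation step with planetary gravitation and non-gravitational forces.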
For parent bodies with long orbital periods, the stream was propagated starting from a more distant epoch, several thousand years in the past. The intersection of the stream and the Earth was accumulated over 50 to 100 years, following the method of \citet{jenniskens2008minor}. Such a method provides a general view of the location of the meteoroid stream and gives statistically meaningful results. For each meteoroid considered to intersect the Earth, the radiant was computed following the \citet{neslusan1998computer} method (the software was kindly provided by those authors). Finally, the size distribution of particles intercepting the Earth was not considered in this paper, nor was the size of modeled particles compared to the size of observed particles. The size distribution comparison will be the topic of a future paper. \section{IAU meteor shower \#549 FAN - 49 Andromedids and Comet 2001 W2 Batters } The first case to be presented here is that of meteor shower IAU \#549 49 Andromedids. Following the IAU rules, this shower was first reported as part of a paper in WGN, the Journal of the International Meteor Organization, by \citet{andreic2014}. Independent meteor shower database searches, namely \citet{rudawska2015independent} and \citet{jenniskens2016cams}, confirmed the existence of this shower. The radiant position from the Croatian Meteor Network (CMN) search into the SonotaCo and CMN orbit databases was found to be R.A. = 20.9°, Dec. = +46.7°, with a mean geocentric velocity V$_{g}$ = 60.1 km/s near the center of the activity period (solar longitude $\lambda_{0}$ = 114°, 35 orbits). \citet{rudawska2015independent} found the same radiant to be at R.A. = 19.0°, Dec. = +45.3° and V$_{g}$ = 59.8 km/s ($\lambda_{0}$ = 112.5°, 226 orbits), while \citet{jenniskens2016cams} give R.A. = 20.5°, Dec. = +46.6°, and V$_{g}$ = 60.2 km/s ($\lambda_{0}$ = 112°, 76 orbits).
This shower was accepted as an established shower during the 2015 IAU General Assembly\footnote{\label{IAU2015}\url{https://astronomy2015.org}.} and is now listed in the IAU meteor database. At the time of the initial finding, there were 35 meteors associated with this shower, with orbital parameters similar to published values for a known comet, namely 2001 W2 Batters. This Halley-type comet, with an orbital period of 75.9 years, has been well observed, and its orbital parameters have been determined with higher precision than those of many other comets of this type. The mean meteoroid orbital parameters, as found by the above-mentioned procedure, are compared with the orbit of 2001 W2 Batters in Table \ref{tab:table1}. Although the orbital distance according to the Southworth-Hawkins D-criterion, D$_{SH}$ = 0.14, seems a bit high to claim an association, the authors pointed out the necessity of using dynamical stream modeling to confirm or reject the association hypothesis because of the nearly identical ascending node values. Moreover, the changes in 2001 W2 Batters' orbital parameters as far back as 3000 BC, as extracted from HORIZONS, show that the comet approached closer to Earth's orbit in 3000 BC than it has during the last few hundred years. Thus stream particles ejected from the comet farther in the past could produce a meteoroid stream observable at the Earth in the current epoch. \begin{table*}[t] \caption{Orbital parameters for the 49 Andromedids and Comet 2001 W2 Batters with corresponding D$_{SH}$ values. If the value of 112° for the ascending node (from \citet{jenniskens2016cams}) is used instead of the mean value (118°), then the resulting D$_{SH}$ is 0.16.
Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2001 W2 Batters.} \label{tab:table1} \centering \begin{tabular}{l c c c c c c} \hline\hline 49 Andromedids & q & e & i & Node & $\omega$ & D$_{SH}$ \\ References & (AU) & & (\degr) & (\degr) & (\degr) & \\ \hline 1 & 0.918 & 0.925 & 118.2 & 114.0 & 143.1 & 0.14\\ 2 & 0.907 & 0.878 & 119.2 & 112.5 & 142.2 & 0.17\\ 3 & 0.898 & 0.922 & 117.9 & 118.0 & 139.8 & 0.19\\ 2001 W2 Batters & 1.051 & 0.941 & 115.9 & 113.4 & 142.1 & 0\\ \hline \end{tabular} \tablebib{(1) \citet{andreic2014}; (2) \citet{rudawska2015independent}; (3) \citet{jenniskens2016cams}. } \end{table*} The dynamical modeling for the hypothetical parent body 2001 W2 Batters was performed following \citet{vaubaillon2005new} and \citet{jenniskens2008minor}. In summary, the dynamical evolution of the parent body is considered over a few hundred to a few thousand years. At a specific chosen time in the past, the creation of a meteoroid stream is simulated and its evolution is followed forward in time until the present day. The intersection of the particles with the Earth is recorded and the radiant of each particle is computed and compared to observations. The simulated perihelion passages were initially limited to 500 years back in time, and no direct hits to the Earth were found for meteoroids ejected during that period. However, given the close similarity of the orbits, the authors expected more favorable results if the dynamical modeling were repeated for perihelion passages back to 3000 BC. The new run did provide positive results, with direct hits to the Earth predicted at R.A. = 19.1°, Dec. = +46.9°, and V$_{g}$ = 60.2 km/s, at a solar longitude of $\lambda_{0}$ = 113.2°. A summary of the observed and modeled results is given in Table \ref{tab:table2}.
\begin{table}[h] \caption{Observed and modeled radiant positions for the 49 Andromedids and comet Batters' meteoroids ejected 3000 years ago.} \label{tab:table2} \centering \begin{tabular}{l c c c c} \hline\hline 49 Andromedids & R.A. & Dec. & V$_{g}$ & $\lambda_{0}$\\ References & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline 1 & 20.9 & 46.7 & 60.1 & 114.0\\ 2 & 19.0 & 45.3 & 59.8 & 112.5\\ 3 & 20.5 & 46.6 & 60.2 & 112.0\\ 2001 W2 Batters \\meteoroids, this work & 19.1 & 46.9 & 60.2 & 113.2\\ \hline \end{tabular} \tablebib{(1) \citet{andreic2014}; (2) \citet{rudawska2015independent}; (3) \citet{jenniskens2016cams}. } \end{table} The maximum difference between the average observed radiant positions and the modeled mean positions is less than 2° in both right ascension and declination, and there are also single meteors very close to the positions predicted by the model. Since the observed radiant position fits the predictions very well, we may conclude that there is a strong possibility that comet 2001 W2 Batters is indeed the parent body of the 49 Andromedids shower. The high radiant dispersion seen in the observations can be accounted for by 1) less precise observations in some of the reported results, and 2) the age of the stream, whose particles were ejected millennia ago and have had time to disperse. The next closest possible association was with comet 1952 H1 Mrkos, but with a D$_{SH}$ of 0.28 it was considered too distant to be connected with the 49 Andromedids stream. Figures \ref{fig:figure1} and \ref{fig:figure2} show the location of the stream with respect to the Earth's path, as well as the theoretical radiant. These results were obtained by concatenating the locations of the particles intersecting the Earth over 50 years in order to clearly show the location of the stream (otherwise too few particles cross the Earth each year). As a consequence, the level of activity of this shower is not expected to change much from year to year.
\begin{figure}[h] \resizebox{\hsize}{!}{\includegraphics{media/image1.jpeg}} \caption{Location of the nodes of the particles released by 2001 W2 Batters over several centuries, concatenated over the years 2000 to 2050. The Earth crosses the stream.} \label{fig:figure1} \end{figure} \begin{figure}[h] \resizebox{\hsize}{!}{\includegraphics{media/image2.png}} \caption{Theoretical radiant of the particles released by 2001 W2 Batters which were closest to the Earth. The range of solar longitudes for modeled radiants is from 113.0\degr to 113.9\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure2} \end{figure} \section{IAU meteor shower \#533 JXA - July $\xi$ Arietids and comet 1964 N1 Ikeya} The discovery of the possible meteor shower July $\xi$ Arietids was first published in \citet{segon2014new}. The shower had been found as a grouping of 61 meteoroid orbits, active from July 4 to August 12 and peaking around July 21. Three other searches for meteor showers in different meteoroid orbit databases, by \citet{rudawska2015independent}, \citet{jenniskens2016cams}, and \citet{kornovs2014confirmation}, found this shower as well, but with slight differences in the period of activity. This shower was accepted as an established shower during the 2015 IAU General Assembly held in Hawaii and is now referred to as shower \#533. Among the possible parent bodies known at the time of this shower's discovery, comet C/1964 N1 Ikeya was found to have orbital parameters similar to those of the July $\xi$ Arietids. Comet C/1964 N1 Ikeya is a long-period comet with an orbital period of 391 years; in contrast to comet 2001 W2 Batters, its orbit is less precisely determined.
A summary of the mean orbital parameters of the shower compared with those of C/1964 N1 Ikeya is shown in Table \ref{tab:table3}, from which it can be seen that the distance estimated from D$_{SH}$ suggests a possible connection between the shower and the comet. \begin{table*}[t] \caption{Orbital parameters for the July $\xi$ Arietids and Comet 1964 N1 Ikeya with corresponding D$_{SH}$ values. Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 1964 N1 Ikeya.} \label{tab:table3} \centering \begin{tabular}{l c c c c c c} \hline\hline July $\xi$ Arietids & q & e & i & Node & $\omega$ & D$_{SH}$\\ References & (AU) & & (\degr) & (\degr) & (\degr) & \\ \hline 1 & 0.883 & 0.965 & 171.6 & 299.0 & 318.0 & 0.10\\ 2 & 0.863 & 0.939 & 171.8 & 292.6 & 313.8 & 0.08\\ 3 & 0.836 & 0.919 & 171.5 & 291.1 & 309.8 & 0.09\\ 4 & 0.860 & 0.969 & 170.4 & 292.7 & 312.4 & 0.08\\ C/1964 N1 Ikeya & 0.822 & 0.985 & 171.9 & 269.9 & 290.8 & 0\\ \hline \end{tabular} \tablebib{(1) \citet{segon2014new}; (2) \citet{kornovs2014confirmation}; (3) \citet{rudawska2015independent}; (4) \citet{jenniskens2016cams}. } \end{table*} As in the previous case, the dynamical modeling was performed for perihelion passages starting from 5000 BC onwards. Only two direct hits were found in the complete analysis, but those two hits suggest a high possibility that comet C/1964 N1 Ikeya is indeed the parent body of the July $\xi$ Arietids. The mean radiant positions for those two modeled meteoroids, as well as the mean radiant positions found by the other searches, are presented in Table \ref{tab:table4}. As can be seen from Table \ref{tab:table4}, the difference in radiant position between the model and the observations appears to be very significant.
\begin{table}[h] \caption{Observed and modeled radiant positions for the July $\xi$ Arietids and comet C/1964 N1 Ikeya. Rows in bold show the radiant positions of the entries above them extrapolated to 106.7° of solar longitude, using the radiant drift provided in the respective papers.} \label{tab:table4} \centering \begin{tabular}{l c c c c} \hline\hline July $\xi$ Arietids & R.A. & Dec. & V$_{g}$ & $\lambda_{0}$\\ References & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline 1 & 40.1 & 10.6 & 69.4 & 119.0\\ & \textbf{32.0} & \textbf{7.5} & \textbf{...} & \textbf{106.7}\\ 2 & 35.0 & 9.2 & 68.9 & 112.6\\ 3 & 33.8 & 8.7 & 68.3 & 111.1\\ 4 & 41.5 & 10.7 & 68.9 & 119.0\\ & \textbf{29.6} & \textbf{7.0} & \textbf{...} & \textbf{106.7}\\ 1964 N1 Ikeya \\meteoroids, this work & 29.0 & 6.5 & 68.7 & 106.7\\ \hline \end{tabular} \tablebib{(1) \citet{segon2014new}; (2) \citet{kornovs2014confirmation}; (3) \citet{rudawska2015independent}; (4) \citet{jenniskens2016cams}. } \end{table} However, the radiant position found from the dynamical modeling fits very well with that predicted by the radiant's daily motion: assuming $\Delta$R.A. = 0.66° and $\Delta$Dec. = 0.25° per degree of solar longitude from \citet{segon2014new}, the radiant at $\lambda_{0}$ = 106.7° would be located at R.A. = 32.0°, Dec. = 7.5°, about three degrees from the modeled radiant. If we use the results from \citet{jenniskens2016cams} ($\Delta$R.A. = 0.97° and $\Delta$Dec. = 0.30°), the resulting radiant position fits even better, at R.A. = 29.6°, Dec. = 7.0°, about one degree from the modeled radiant. The fact that the model does not fit the observed activity period may be explained by various factors, from the limited precision of the comet's past positions, derived from a relatively short observed orbital arc, to the possibility that this shower has some other parent body (possibly associated with C/1964 N1 Ikeya).
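The drift extrapolation used above is simple linear arithmetic in solar longitude; a minimal sketch, using the \citet{segon2014new} radiant and drift values quoted in the text:

```python
def drift_radiant(ra, dec, lam, dra, ddec, lam_target):
    """Linearly extrapolate a radiant (R.A., Dec. in degrees) from solar
    longitude lam to lam_target, given the drift in degrees of R.A. and
    Dec. per degree of solar longitude.  Illustrative sketch only."""
    dlam = lam_target - lam
    return ra + dra * dlam, dec + ddec * dlam

# Segon et al. (2014): radiant (40.1, 10.6) at solar longitude 119.0 deg,
# drift dRA = 0.66 and dDec = 0.25 deg per degree of solar longitude
ra, dec = drift_radiant(40.1, 10.6, 119.0, 0.66, 0.25, 106.7)
print(f"R.A. = {ra:.1f}, Dec. = {dec:.1f}")  # → R.A. = 32.0, Dec. = 7.5
```

The same call with the \citet{jenniskens2016cams} values (41.5, 10.7, 119.0, 0.97, 0.30, 106.7) gives the second bold row of Table \ref{tab:table4}.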
The next closest possible association was with comet 1987 B1 Nishikawa-Takamizawa-Tago, with a D$_{SH}$ of 0.21, but due to the large nodal distance between the orbits we consider it not to be connected to the July $\xi$ Arietids. The simulation of the meteoroid stream was performed for hypothetical comet returns back to 5000 years before the present. According to the known orbit of the comet, it experienced close encounters with Jupiter and Saturn in 1676 and 1673 AD respectively, making the orbital evolution prior to those dates much more uncertain. Nevertheless, the simulation of the stream was performed in order to get an overall view of the stream in the present-day solar system, as visualized in Figures \ref{fig:figure3} and \ref{fig:figure4}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image3.jpg}} \caption{Location of the particles ejected by comet C/1964 N1 Ikeya over several centuries, concatenated over 50 years in the vicinity of the Earth.} \label{fig:figure3} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image4.png}} \caption{Theoretical radiant of the particles released by C/1964 N1 Ikeya which were closest to the Earth. The match with the July $\xi$ Arietids is not convincing in this case. The range of solar longitudes for modeled radiants is from 99.0\degr to 104.8\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure4} \end{figure} \section{IAU meteor shower \#539 ACP - $\alpha$ Cepheids and comet 255P Levy } The $\alpha$ Cepheids shower had been reported by \citet{segon2014new} as a dispersed grouping of 41 meteors, active from mid-December to mid-January at a mean radiant position of R.A. = 318°, Dec. = 64° at $\lambda_{0}$ = 281° (January 2).
The authors investigated the possibility that this new shower could be connected to the predicted enhanced meteor activity of IAU shower \#446 DPC December $\phi$ Cassiopeiids. However, the authors pointed out that \#446 DPC and \#539 ACP cannot be the same meteor shower \citep{segon2014new}. Despite the fact that the predicted meteor outburst was not detected \citep{roggemans2014letter}, there is a strong possibility that the activity of comet 255P/Levy produces a meteor shower observable from the Earth as the $\alpha$ Cepheids shower. Meteor searches conducted by \citet{kornovs2014confirmation} and \citet{jenniskens2016cams} failed to detect this shower, but \citet{rudawska2015independent} found 11 meteors with a mean radiant position at R.A. = 333.5°, Dec. = +66°, V$_{g}$ = 13.4 km/s at $\lambda_{0}$ = 277.7°. The mean geocentric velocity of the $\alpha$ Cepheids meteors was found to be low, only 15.9 km/s, but it ranges from 12.4 to 19.7 km/s. Such a high dispersion in velocities may be explained by the fact that the D-criterion threshold for the automatic search was set to D$_{SH}$ = 0.15, which allowed a wider range of orbits to be accepted as meteor shower members. According to the dynamical modeling results, the geocentric velocity of meteoroids ejected from 255P/Levy should be about 13 km/s, and the observations show that some of the $\alpha$ Cepheids meteors indeed have such velocities at more or less the predicted radiant positions, as can be seen in Figure \ref{fig:figure5}. This leads us to the conclusion that this meteor shower has to be analyzed in greater detail, but that at least some of the observed meteoroids come from comet 255P/Levy. \begin{figure}[b] \resizebox{\hsize}{!}{\includegraphics{media/image5.png}} \caption{Radiant positions of observed $\alpha$ Cepheids and predicted meteors from 255P/Levy. The range of solar longitudes for modeled radiants is from 250\degr to 280\degr.
Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure5} \end{figure} The simulation of the meteoroid stream ejected by comet 255P/Levy includes trails ejected from 1801 through 2017, as visualized in Figures \ref{fig:figure6} and \ref{fig:figure7}. Several past outbursts were forecast by the dynamical modeling but none were observed, namely during the 2006 and 2007 apparitions (see Table \ref{tab:table5}). As a consequence, we conclude that the activity of the $\alpha$ Cepheids is most likely due to the global background of the stream. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image6.jpg}} \caption{Location of the particles ejected by comet 255P/Levy in the vicinity of the Earth in 2006: an outburst should have been detected.} \label{fig:figure6} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image7.jpg}} \caption{Location of all the particles ejected by 255P over 50 years, showing the location of the whole stream in the solar system. This graph does not imply several outbursts but rather provides a global indication of the stream.} \label{fig:figure7} \end{figure} \begin{table} \caption{Expected outbursts caused by 255P/Levy. No unusual outburst was reported in 2006 and 2007.
Columns: Year = the year of Earth's collision with the trail, Trail = year of particle ejection from the given trail, $\lambda_{0}$ = solar longitude in degrees, yyyy-mm-ddThh:mm:ss = date and time of the trail's closest approach, ZHR = zenithal hourly rate.} \label{tab:table5} \centering \begin{tabular}{c c c c c} \hline\hline Year & Trail & $\lambda_{0}$ & yyyy-mm-ddThh:mm:ss & ZHR\\ & & (\degr) & &\\ \hline 2001 & 1963 & 279.132 & 2001-12-30T18:37:00 & 1\\ 2001 & 1975 & 279.765 & 2001-12-31T12:01:00 & 3\\ 2001 & 1980 & 279.772 & 2001-12-31T15:00:00 & 2\\ 2001 & 1985 & 279.828 & 2001-12-31T11:24:00 & 11\\ 2001 & 1991 & 279.806 & 2001-12-31T10:44:00 & 13\\ 2002 & 1963 & 278.914 & 2002-10-20T14:56:00 & 1\\ 2002 & 1980 & 279.805 & 2002-12-31T10:23:00 & 2\\ 2002 & 1985 & 279.808 & 2002-12-31T10:40:00 & 15\\ 2002 & 1991 & 279.789 & 2002-12-31T10:24:00 & 6\\ 2006 & 1963 & 279.285 & 2006-12-31T08:01:00 & 1\\ 2007 & 1963 & 279.321 & 2007-12-31T07:04:00 & 1\\ 2012 & 1980 & 279.803 & 2012-12-31T06:25:00 & 6\\ 2013 & 1980 & 279.882 & 2013-12-31T08:16:00 & 2\\ 2014 & 1969 & 264.766 & 2014-12-17T00:07:00 & 1\\ 2017 & 1930 & 342.277 & 2017-09-21T18:39:00 & 1\\ 2017 & 1941 & 279.510 & 2017-12-30T03:41:00 & 1\\ 2018 & 1969 & 278.254 & 2018-12-29T07:29:00 & 1\\ 2033 & 1975 & 275.526 & 2033-12-27T10:12:00 & 1\\ 2033 & 1980 & 275.488 & 2033-12-27T10:06:00 & 1\\ 2033 & 1985 & 275.452 & 2033-12-27T09:55:00 & 1\\ 2033 & 1991 & 275.406 & 2033-12-27T09:54:00 & 1\\ 2033 & 1996 & 275.346 & 2033-12-27T08:58:00 & 1\\ 2034 & 1975 & 262.477 & 2034-12-13T22:22:00 & 1\\ 2034 & 1980 & 261.456 & 2034-06-06T03:40:00 & 1\\ 2034 & 1985 & 261.092 & 2034-04-05T17:02:00 & 1\\ 2034 & 1991 & 260.269 & 2034-03-09T11:52:00 & 1\\ 2035 & 1914 & 276.553 & 2035-01-09T07:59:00 & 1\\ 2035 & 1952 & 271.463 & 2035-12-20T03:11:00 & 1\\ 2039 & 1980 & 272.974 & 2039-12-25T01:51:00 & 1\\ 2039 & 1991 & 272.131 & 2039-12-25T01:05:00 & 1\\ \hline \end{tabular} \end{table} There are several other parent bodies possibly 
connected to the $\alpha$ Cepheids stream: 2007 YU56 (D$_{SH}$ = 0.20), 2005 YT8 (D$_{SH}$ = 0.19), 1999 AF4 (D$_{SH}$ = 0.19), 2011 AL52 (D$_{SH}$ = 0.19), 2013 XN24 (D$_{SH}$ = 0.12), 2008 BC (D$_{SH}$ = 0.17), and 2002 BM (D$_{SH}$ = 0.16). These bodies will be analyzed in future work. \section{IAU meteor shower \#541 SSD - 66 Draconids and asteroid 2001 XQ } The meteor shower 66 Draconids had been reported by \citet{segon2014new} as a grouping of 43 meteors with a mean radiant at R.A. = 302°, Dec. = +62°, V$_{g}$ = 18.2 km/s. The shower was found to be active from solar longitude 242° to 270° (November 23 to December 21), with a peak activity period around 255° (December 7). Searches by \citet{jenniskens2016cams} and \citet{kornovs2014confirmation} failed to detect this shower, but \citet{rudawska2015independent} again found it, consisting of 39 meteors in the EDMOND meteor orbit database, at R.A. = 296°, Dec. = 64°, V$_{g}$ = 19.3 km/s for solar longitude $\lambda_{0}$ = 247°. A search for a possible parent body of this shower resulted in asteroid 2001 XQ, which, with D$_{SH}$ = 0.10, represented the most probable candidate. A summary of the mean orbital parameters from the above-mentioned searches compared with those of 2001 XQ is shown in Table \ref{tab:table6}. \begin{table*}[t] \caption{Orbital parameters for the 66 Draconids and 2001 XQ with respective D$_{SH}$ values.
Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2001 XQ.} \label{tab:table6} \centering \begin{tabular}{l c c c c c c} \hline\hline 66 Draconids & q & e & i & Node & $\omega$ & D$_{SH}$\\ References & (AU) & & (\degr) & (\degr) & (\degr) & \\ \hline 1 & 0.981 & 0.657 & 27.2 & 255.2 & 184.4 & 0.10\\ 2 & 0.980 & 0.667 & 29.0 & 247.2 & 185.2 & 0.13\\ 2001 XQ & 1.035 & 0.716 & 29.0 & 251.4 & 190.1 & 0\\ \hline \end{tabular} \tablebib{(1) \citet{segon2014new}; (2) \citet{rudawska2015independent}. } \end{table*} Asteroid 2001 XQ has a Tisserand parameter of T$_{j}$ = 2.45, a value common for Jupiter-family comets, which makes us suspect it may not be an asteroid per se, but rather a dormant comet. To the authors' collective knowledge, no cometary activity has been observed for this body, nor was there any significant difference in the full-width half-maximum spread between stars and the asteroid in the imagery provided courtesy of Leonard Kornoš (personal communication) from Modra Observatory. They had observed this asteroid (at that time designated 2008 VV4) on its second return to perihelion, during which it reached \nth{18} magnitude. Numerical modeling of the hypothetical meteor shower whose particles originate from asteroid 2001 XQ was performed for perihelion passages from 800 AD up to 2100 AD. The modeling showed multiple direct hits into the Earth for many years, even outside the period covered by the observations. A summary of the observed and modeled radiant positions is given in Table \ref{tab:table7}. \begin{table} \caption{Observed 66 Draconid and modeled 2001 XQ meteors' mean radiant positions (prefix C\_ stands for calculated (modeled), while prefix O\_ stands for observed). The number in parentheses indicates the number of observed 66 Draconid meteors in the given year.
$\theta$ is the angular distance between the modeled and the observed mean radiant positions.} \label{tab:table7} \centering \begin{tabular}{l c c c c c} \hline\hline Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\ & (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline C\_2007 & 250.3 & 308.2 & 65.3 & 19.3 & ...\\ O\_2007 (5) & 257.5 & 300.1 & 63.2 & 18.2 & 4.1\\ C\_2008 & 248.2 & 326.8 & 56.9 & 16.1 & ...\\ O\_2008 (8) & 254.0 & 300.5 & 62.6 & 18.0 & 14.3\\ C\_2009 & 251.1 & 309.6 & 64.0 & 18.8 & ...\\ O\_2009 (5) & 253.6 & 310.4 & 61.0 & 17.0 & 3.0\\ C\_2010 & 251.2 & 304.0 & 63.1 & 19.1 & ...\\ O\_2010 (17) & 253.7 & 300.4 & 63.4 & 18.9 & 1.6\\ \hline \end{tabular} \end{table} Despite the fact that the difference in the mean radiant positions may seem significant, radiant plots of individual meteors show that some of the meteors predicted to hit the Earth at the observation epoch were observed at positions almost exactly as predicted. It is thus considered that the results of the simulations statistically represent the stream correctly, but individual trails cannot be identified as responsible for any specific outburst, as visualized in Figures \ref{fig:figure8} and \ref{fig:figure9}. The activity of this shower is therefore expected to be quite regular from year to year. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image8.jpg}} \caption{Location of the nodes of the particles released by 2001 XQ over several centuries, concatenated over 50 years. The Earth crosses the stream.} \label{fig:figure8} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image9.png}} \caption{Theoretical radiants of the particles released by 2001 XQ which were closest to the Earth. The range of solar longitudes for modeled radiants is from 231.1\degr to 262.8\degr. 
Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure9} \end{figure} Two other candidate parent bodies were initially considered, 2004 YY23 and 2015 WB13, but both had a D$_{SH}$ of 0.26, which was deemed too distant to be associated with the 66 Draconids stream. \section{IAU meteor shower \#751 KCE - $\kappa$ Cepheids and asteroid 2009 SG18 } The meteor shower $\kappa$ Cepheids had been reported by \citet{segon2015four} as a grouping of 17 meteors with very similar orbits, having an average D$_{SH}$ of only 0.06. The activity period was found to last from September 11 to September 23, covering solar longitudes from 168° to 180°. The radiant position was R.A. = 318°, Dec. = 78° with V$_{g}$ = 33.7 km/s, at a mean solar longitude of 174.4°. Since the new shower discovery was reported only recently, the search by \citet{kornovs2014confirmation}, which did not find it, can be considered a fully blind test, and the search by \citet{jenniskens2016cams} did not detect it in the CAMS database either. Once again, the search by \citet{rudawska2015independent} found the shower, but in much higher numbers than in the SonotaCo and CMN orbit databases: in total, 88 meteors were extracted as $\kappa$ Cepheids members in the EDMOND database. A summary of the mean orbital parameters from the above-mentioned searches compared with those of 2009 SG18 is shown in Table \ref{tab:table8}. \begin{table*}[t] \caption{Orbital parameters for the $\kappa$ Cepheids and asteroid 2009 SG18 with corresponding D$_{SH}$ values.
Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2009 SG18.} \label{tab:table8} \centering \begin{tabular}{l c c c c c c} \hline\hline $\kappa$ Cepheids & q & e & i & Node & $\omega$ & D$_{SH}$\\ References & (AU) & & (\degr) & (\degr) & (\degr) & \\ \hline 1 & 0.983 & 0.664 & 57.7 & 174.4 & 198.4 & 0.10\\ 2 & 0.987 & 0.647 & 55.9 & 177.2 & 190.4 & 0.17\\ 2009 SG18 & 0.993 & 0.672 & 58.4 & 177.6 & 204.1 & 0\\ \hline \end{tabular} \tablebib{(1) \citet{segon2014new}; (2) \citet{rudawska2015independent}. } \end{table*} What can be seen at a glance is that the mean orbital parameters from both searches are very consistent (D$_{SH}$ = 0.06), while the mean shower orbits and the asteroid's orbit differ mainly in the argument of perihelion and the perihelion distance. Asteroid 2009 SG18 has a Tisserand parameter for Jupiter of T$_{j}$ = 2.31, meaning that it could be a dormant comet. Numerical modeling of the hypothetical meteor shower originating from asteroid 2009 SG18, for perihelion passages from 1804 AD up to 2020 AD, yielded multiple direct hits into the Earth over more years than the period covered by the observations, as seen in Figures \ref{fig:figure10} and \ref{fig:figure11}. The remarkable agreement found between the predicted and observed meteors for the years 2007 and 2010 is summarized in Table \ref{tab:table9}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image10.jpg}} \caption{Location of the nodes of the particles released by 2009 SG18 over several centuries, concatenated over 50 years. The Earth crosses the stream.} \label{fig:figure10} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image11.png}} \caption{Theoretical radiant of the particles released by 2009 SG18 which were closest to the Earth.
Several features are visible due to the different trails, but care must be taken when interpreting these data. The range of solar longitudes for modeled radiants is from 177.0\degr to 177.7\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure11} \end{figure} \begin{table} \caption{Observed $\kappa$ Cepheids and modeled 2009 SG18 meteors' mean radiant positions (prefix C\_ stands for calculated or modeled, while prefix O\_ stands for observed). The number in parentheses indicates the number of observed meteors in the given year. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.} \label{tab:table9} \centering \begin{tabular}{l c c c c c} \hline\hline Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\ & (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline C\_2007 & 177.4 & 327.5 & 77.0 & 34.0 & ...\\ O\_2007 (3) & 177.1 & 328.3 & 77.9 & 35.3 & 0.9\\ C\_2010 & 177.7 & 327.7 & 77.7 & 34.3 & ...\\ O\_2010 (2) & 177.7 & 326.5 & 80.5 & 34.7 & 2.8\\ \hline \end{tabular} \end{table} Based on the initial analysis given in this paper, a prediction of possible enhanced activity on September 21, 2015 was made by \citet{segon2015croatian}. At the moment there are no video meteor data that confirm the predicted enhanced activity, but a paper on visual observations of the $\kappa$ Cepheids by a highly reputable visual observer confirmed some level of increased activity \citep{rendtel2015minor}. The encounters shown in Table \ref{tab:table10} between the trails ejected by 2009 SG18 and the Earth were found theoretically through the dynamical modeling. Caution is warranted when interpreting these results, since any historical outbursts still need to be confirmed before such predictions can be trusted.
\begin{table*}[t] \caption{Prediction of possible outbursts caused by 2009 SG18. Columns: Year = the year of Earth's collision with the trail, Trail = year of particle ejection from the given trail, rE-rD = the distance between the Earth and the center of the trail, $\lambda_{0}$ = solar longitude in degrees, yyyy-mm-ddThh:mm:ss = date and time of the trail's closest approach, ZHR = zenithal hourly rate} \label{tab:table10} \centering \begin{tabular}{c c c c c c} \hline\hline Year & Trail & rE-rD & $\lambda_{0}$ & yyyy-mm-ddThh:mm:ss & ZHR\\ & & (AU) & (\degr) & &\\ \hline 2005 & 1967 & 0.00066 & 177.554 & 2005-09-20T12:08:00 & 11\\ 2006 & 1804 & 0.00875 & 177.383 & 2006-09-20T11:31:00 & 13\\ 2010 & 1952 & -0.00010 & 177.673 & 2010-09-20T21:38:00 & 12\\ 2015 & 1925 & -0.00143 & 177.630 & 2015-09-21T03:29:00 & 10\\ 2020 & 1862 & -0.00064 & 177.479 & 2020-09-20T06:35:00 & 11\\ 2021 & 1962 & 0.00152 & 177.601 & 2021-09-20T15:39:00 & 11\\ 2031 & 2004 & -0.00126 & 177.267 & 2031-09-20T21:15:00 & 12\\ 2031 & 2009 & -0.00147 & 177.222 & 2031-09-20T19:55:00 & 13\\ 2033 & 1946 & 0.00056 & 177.498 & 2033-09-20T14:57:00 & 10\\ 2036 & 1978 & -0.00042 & 177.308 & 2036-09-20T04:44:00 & 20\\ 2036 & 2015 & -0.00075 & 177.220 & 2036-09-20T02:33:00 & 20\\ 2036 & 2025 & 0.00109 & 177.254 & 2036-09-20T03:19:00 & 13\\ 2037 & 1857 & -0.00031 & 177.060 & 2037-09-20T04:37:00 & 13\\ 2037 & 1946 & 0.00021 & 177.273 & 2037-09-20T09:56:00 & 10\\ 2038 & 1841 & -0.00050 & 177.350 & 2038-09-20T18:02:00 & 10\\ 2038 & 1925 & 0.00174 & 177.416 & 2038-09-20T19:39:00 & 11\\ 2039 & 1815 & -0.00018 & 177.303 & 2039-09-20T23:01:00 & 10\\ \hline \end{tabular} \end{table*} The next closest possible association was 2002 CE26 with D$_{SH}$ of 0.35, which was deemed too distant to be connected to the $\kappa$ Cepheids stream. 
\section{IAU meteor shower \#753 NED - November Draconids and asteroid 2009 WN25 } The November Draconids were previously reported by \citet{segon2015four}, and consist of 12 meteors on very similar orbits, with a maximal distance from the mean orbit of D$_{SH}$ = 0.08 and an average of only D$_{SH}$ = 0.06. The activity period was found to be between November 8 and 20, with peak activity at a solar longitude of 232.8°. The radiant position at peak activity was found to be at R.A. = 194°, Dec. = +69°, and V$_{g}$ = 42.0 km/s. There are no results from other searches since the shower has been reported only recently. Other meteor showers have been reported at coordinates similar to \#753 NED, namely \#387 OKD October $\kappa$ Draconids and \#392 NID November i Draconids \citep{brown2010meteoroid}. The D$_{SH}$ for \#387 OKD (0.35) is far too large for it to be considered the same shower stream. \#392 NID may be closely related to \#753, since the D$_{SH}$ of 0.14 derived from radar observations shows significant similarity; however, the mean orbits derived from optical observations by \citet{jenniskens2016cams} differ by a D$_{SH}$ of 0.24, which we consider too large for them to be the same shower. The possibility that asteroid 2009 WN25 is the parent body of this possible meteor shower has been investigated by numerical modeling of hypothetical meteoroids ejected over the period from 3000 BC up to 1500 AD, visualized in Figures \ref{fig:figure12} and \ref{fig:figure13}. The asteroid 2009 WN25 has a Tisserand parameter for Jupiter of T$_{j}$ = 1.96. Despite the fact that direct encounters with modeled meteoroids were not found for all years in which the meteors were observed, and that the number of hits is relatively small compared to other modeled showers, the averaged predicted positions fit the observations very well (see Table \ref{tab:table11}).
This shows that the theoretical results have a statistically meaningful value and validates the approach of simulating the stream over a long period of time and concatenating the results to provide an overall view of the shower. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image12.jpg}} \caption{Location of the nodes of the particles released by 2009 WN25 over several centuries, concatenated over 100 years. The Earth crosses the stream.} \label{fig:figure12} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image13.png}} \caption{Theoretical radiant of the particles released by 2009 WN25 which were closest to the Earth. The range of solar longitudes for modeled radiants is from 230.3\degr to 234.6\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure13} \end{figure} \begin{table} \caption{Averaged observed and modeled radiant positions for \#753 NED and 2009 WN25. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.} \label{tab:table11} \centering \begin{tabular}{l c c c c c} \hline\hline November Draconids & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\ & (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline Predicted & 232.8 & 194.2 & 68.6 & 42.0 & ...\\ Observed & 232.4 & 196.5 & 67.6 & 41.8 & 1.3\\ \hline \end{tabular} \end{table} Moreover, it appears that the predicted 2014 activity sits exactly at the same equatorial location (R.A. = 199°, Dec. = +67°) seen on Canadian Meteor Orbit Radar (CMOR) plots.\footnote{\label{cmor_plots}\url{http://fireballs.ndc.nasa.gov/} - "radar".} The shower has been noted as NID, but its position fits more closely to the NED. 
Since orbital data from the CMOR database are not available online, the authors were not able to confirm the hypothesis that the radar is seeing the same meteoroid orbits as the model produces. However, the authors received a confirmation from Dr. Peter Brown at the University of Western Ontario (private correspondence) that this stream has shown activity each year in the CMOR data, and likely belongs to the QUA-NID complex. A recently published paper \citep{micheli2016evidence} suggests that asteroid 2009 WN25 may be a parent body of the NID shower as well, so additional analysis with more observations will be needed to reveal the true nature of this shower complex. The next closest possible association was 2012 VF6 with a D$_{SH}$ of 0.49, which was deemed too distant to be connected to the November Draconids stream. \section{IAU meteor shower \#754 POD - $\psi$ Draconids and asteroid 2008 GV } The possible new meteor shower $\psi$ Draconids was reported by \citet{segon2015four}, consisting of 31 tight meteoroid orbits with a maximal distance from the mean orbit of D$_{SH}$ = 0.08 and an average of only D$_{SH}$ = 0.06. The $\psi$ Draconids were found to be active from March 19 to April 12, with average activity around a solar longitude of 12° and a radiant at R.A. = 262°, Dec. = +73°, and V$_{g}$ = 19.8 km/s. No confirmation from other shower searches exists at the moment, since the shower has been reported only recently. If this shower's existence could be confirmed, the most probable parent body known at this time would be asteroid 2008 GV. This asteroid was found to have an orbit very similar to the average orbit of the $\psi$ Draconids, with a D$_{SH}$ of 0.08. Since the asteroid has a Tisserand parameter of T$_{j}$ = 2.90, it may be a dormant comet as well. Dynamical modeling has been done for hypothetical meteoroids ejected during perihelion passages from 3000 BC to 2100 AD, resulting in direct hits with the Earth for almost every year from 2000 onwards.
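The Tisserand parameters quoted throughout (e.g., T$_{j}$ = 1.96 and T$_{j}$ = 2.90) follow from the standard formula; a minimal sketch (the value used for Jupiter's semi-major axis is our assumption):

```python
import math

A_JUP = 5.2044  # semi-major axis of Jupiter in AU (assumed value)

def tisserand_j(a, e, i_deg):
    """Tisserand parameter with respect to Jupiter for an orbit with
    semi-major axis a (AU), eccentricity e and inclination i (degrees)."""
    return (A_JUP / a
            + 2.0 * math.cos(math.radians(i_deg))
            * math.sqrt((a / A_JUP) * (1.0 - e * e)))
```

Orbits with T$_{j}$ below about 3 are conventionally read as dynamically comet-like, the 2--3 range being typical of Jupiter-family comets, which is why the values quoted above suggest possible dormant comets.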
For the period covered by observations used in the CMN search, direct hits were found for the years 2008, 2009, and 2010. A summary of the average radiant positions from the observations and from the predictions is given in Table \ref{tab:table12}. The plots of modeled and observed radiant positions are shown in Figure \ref{fig:figure15}, while the locations of the nodes of the modeled particles released by 2008 GV are shown in Figure \ref{fig:figure14}. \begin{table} \caption{Observed $\psi$ Draconids and modeled 2008 GV meteors' mean radiant positions (prefix C\_ stands for calculated (modeled), while prefix O\_ stands for observed). The number in parentheses indicates the number of observed meteors in the given year. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.} \label{tab:table12} \centering \begin{tabular}{l c c c c c} \hline\hline Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\ & (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\ \hline C\_2008 & 15.9 & 264.6 & 75.2 & 19.4 & ...\\ O\_2008 (2) & 14.2 & 268.9 & 73.3 & 20.7 & 2.2\\ C\_2009 & 13.9 & 254.0 & 74.3 & 19.3 & ...\\ O\_2009 (11) & 9.5 & 257.4 & 72.0 & 19.9 & 2.5\\ C\_2010 & 12.8 & 244.7 & 73.4 & 19.1 & ...\\ O\_2010 (6) & 15.1 & 261.1 & 73.0 & 19.8 & 4.7\\ \hline \end{tabular} \end{table} As can be seen from Table \ref{tab:table12}, the mean observations fit the positions predicted by dynamical modeling very well, and in two cases there were single meteors very close to the predicted positions. On the other hand, the predictions for the year 2015 show that a few meteoroids should hit the Earth around solar longitude 14.5° at R.A. = 260°, Dec. = +75°, but no significant activity has been detected in CMN observations. However, small groups of meteors can be seen on CMOR plots for that solar longitude at a position slightly lower in declination, but this should be verified using radar orbital measurements once available. According to Dr.
Peter Brown at the University of Western Ontario (private correspondence), there is no significant activity from this shower in the CMOR orbital data. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image14.jpg}} \caption{Location of the nodes of the particles released by 2008 GV over several centuries, concatenated over 100 years. The Earth crosses the stream.} \label{fig:figure14} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image15.png}} \caption{Theoretical radiant of the particles released by 2008 GV which were closest to the Earth. The range of solar longitudes for modeled radiants is from 355.1\degr to 17.7\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure15} \end{figure} One other potential parent body may be connected to the $\psi$ Draconids stream: 2015 FA118. That alternative parent body will be examined in future work. \section{IAU meteor shower \#755 MID - May $\iota$ Draconids and asteroid 2006 GY2 } The possible new meteor shower May $\iota$ Draconids was reported by \citet{segon2015four}, consisting of 19 tight meteoroid orbits with a maximal distance from their mean orbit of D$_{SH}$ = 0.08 and an average of only D$_{SH}$ = 0.06. The May $\iota$ Draconids were found to be active from May 7 to June 6, with peak activity around a solar longitude of 60° at R.A. = 231°, Dec. = +53°, and V$_{g}$ = 16.7 km/s. No confirmation from other searches exists at the moment, since the shower has been reported in the literature only recently. Greaves (from the meteorobs mailing-list archives\footnote{\label{meteorobs}\url{http://lists.meteorobs.org/pipermail/meteorobs/2015-December/018122.html}.}) stated that this shower should be the same as \#273 PBO $\phi$ Bootids.
However, if we look at the details of this shower as presented in \citet{jenniskens2006meteor}, we find that the solar longitude stated in the IAU Meteor Data Center does not correspond to the mean ascending node for the three meteors chosen to represent the $\phi$ Bootid shower. If a weighted orbit average of all references is calculated, the resulting D$_{SH}$ from MID is 0.18, which we consider a large enough value to treat it as a separate shower (if the MID exists at all). Three \#273 PBO orbits from the IAU MDC do indeed match \#755 MID, suggesting that these two possible showers may somehow be connected. Asteroid 2006 GY2 was investigated as a probable parent body using dynamical modeling as in the previous cases. The asteroid 2006 GY2 has a Tisserand parameter for Jupiter of T$_{j}$ = 3.70. Of all the cases discussed in this paper, this one shows the poorest match between the observed and predicted radiant positions. The theoretical stream was modeled with trails ejected from 1800 AD through 2100 AD. According to the dynamical modeling analysis, this parent body should produce meteors for all years covered by the observations and at more or less the same position, R.A. = 248.5°, Dec. = +46.2°, at the same solar longitude of 54.4°, and with V$_{g}$ = 19.3 km/s, as visualized in Figures \ref{fig:figure16} and \ref{fig:figure17}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image16.jpg}} \caption{Location of the nodes of the particles released by 2006 GY2 over several centuries, concatenated over 50 years. The Earth crosses the stream.} \label{fig:figure16} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{media/image17.png}} \caption{Theoretical radiant of the particles released by 2006 GY2 which were closest to the Earth. The range of solar longitudes for modeled radiants is from 54.1\degr to 54.5\degr.
Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.} \label{fig:figure17} \end{figure} However, six meteors belonging to the possible \#755 MID shower found in the solar longitude range from 52.3° to 53.8° (the next meteor found was at 58.6°) show a mean radiant position at R.A. = 225.8°, Dec. = +46.4°, with a mean V$_{g}$ of 16.4 km/s. Given the angular distance of 15.6\degr between the observed and modeled radiants, the difference of 3 km/s in geocentric velocity, and the fact that no individual modeled meteors were found at or near that position, we cannot conclude that this asteroid may be the parent body of the possible meteor shower May $\iota$ Draconids. Another potential parent body was 2002 KK3, with D$_{SH}$ = 0.18. However, the dynamical modeling for 2002 KK3 showed no crossings with Earth's orbit. There were also three more distant bodies at D$_{SH}$ = 0.20: 2010 JH3, 2013 JY2, and 2014 WC7. Those bodies will be analyzed in future work. \section{IAU meteor shower \#531 GAQ - $\gamma$ Aquilids and comet C/1853G1 (Schweizer), and other investigated bodies } The possible new meteor shower $\gamma$ Aquilids was reported by \citet{segon2014new}, and found in other stream search papers (\citet{kornovs2014confirmation}, \citet{rudawska2015independent} and \citet{jenniskens2016cams}). Meteoroids from the suggested parent body comet C/1853G1 (Schweizer) were modeled for perihelion passages ranging from 3000 BC up to the present, and evaluated. Despite the very similar orbits of \#531 GAQ and comet C/1853G1 (Schweizer), no direct hits to the Earth were found.
Besides C/1853G1 (Schweizer), negative results were found for dynamical analyses done on asteroids 2011 YX62 (as a possible parent body of \#604 ACZ $\zeta$1 Cancrids) and 2002 KK3, 2008 UZ94, and 2009 CR2 (no shower association reported). \section{Discussion} The new meteoroid stream discoveries described in this work have been reported on previously, and were based on searches with well-defined conditions and constraints that individual meteoroid orbits must meet for association. The simulated particles ejected by the hypothetical parent bodies were treated in the same rigorous manner. Although we consider the similarity of the observed radiant and the dynamically modeled radiant as sufficient evidence for association with the hypothetical parent body when following the approach of \citet{vaubaillon2005new}, there are several points worth discussing. All meteoroid orbits used in the analysis of \citet{segon2014parent} were generated using the UFOorbit software package by \citet{sonotaco2009meteor}. As this software does not estimate errors of the observations, or the errors of the calculated orbital elements, it is not possible to assess the real precision of the individual meteoroid orbits used in the initial search analysis. Furthermore, all UFOorbit-generated orbits are calculated on the basis of the meteor's average geocentric velocity, without taking deceleration into consideration. This simplification introduces errors in the orbital elements of very slow, very long, and/or very bright meteors. The real impact of this simplification is discussed in \citet{segon2014draconids}, where the 2011 Draconid outburst is analyzed. Two average meteoroid orbits generated from average velocities were compared, one with and one without the linear deceleration model. These two orbits differed by as much as 0.06 in D$_{SH}$ (D$_{H}$ = 0.057, D$_{D}$ = 0.039).
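For reference, the Southworth--Hawkins D criterion behind all the D$_{SH}$ values in this paper can be sketched as follows (our own illustration of the commonly used form; the cited searches may implement slight variants, and the related criteria D$_{H}$ and D$_{D}$ modify the individual distance terms):

```python
import math

def d_sh(o1, o2):
    """Southworth-Hawkins D criterion between two orbits given as
    (q [AU], e, i, node, peri), with all angles in degrees."""
    q1, e1, i1, n1, w1 = o1
    q2, e2, i2, n2, w2 = o2
    ir1, ir2 = math.radians(i1), math.radians(i2)
    dn = math.radians(n2 - n1)
    # (2 sin(I/2))^2, where I is the mutual inclination of the planes
    I21 = (2.0 * math.sin((ir2 - ir1) / 2.0)) ** 2 \
        + math.sin(ir1) * math.sin(ir2) * (2.0 * math.sin(dn / 2.0)) ** 2
    # difference of the longitudes of perihelion, measured from the
    # intersection of the planes; the sign flips when |dOmega| > 180 deg
    sign = 1.0 if abs(n2 - n1) <= 180.0 else -1.0
    pi21 = math.radians(w2 - w1) + 2.0 * sign * math.asin(
        math.cos((ir2 + ir1) / 2.0) * math.sin(dn / 2.0)
        / math.sqrt(1.0 - I21 / 4.0))
    d2 = ((e2 - e1) ** 2 + (q2 - q1) ** 2 + I21
          + ((e1 + e2) / 2.0) ** 2 * (2.0 * math.sin(pi21 / 2.0)) ** 2)
    return math.sqrt(d2)
```

For two identical orbits the function returns 0; thresholds of roughly 0.1--0.2, as used above, are typical for declaring two orbits similar.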
The deviation between the orbits does not necessarily mean that the clustering would not be determined, but it does mean that those orbits will certainly differ from orbits generated with deceleration taken into account, as well as from the numerically generated orbits of hypothetical parent bodies. Consequently, the radiant locations of slower meteors can be, besides the natural radiant dispersion, additionally dispersed due to the varying influence of the deceleration on the position of the true radiant. This observation is relevant not only for UFOorbit, but for any software which does not properly model the actual deceleration. The CAMS Coincidence software uses an exponential deceleration model \citep{jenniskens2011cams}; however, not all meteors decelerate exponentially, as shown in \citet{borovivcka2007atmospheric}. The real influence of deceleration on radiant dispersion will be a topic of future work. Undoubtedly an important question is whether the dispersion caused by improperly calculated deceleration of slow meteors (e.g., those generated by near-Earth objects) can cause members of a meteoroid stream to appear unassociated with each other to automated stream searching methods. Besides the lack of error estimates for meteor observations, parent bodies on relatively unstable orbits are observed over a short observation arc, and thus often do not have very precise orbital element solutions. Moreover, unknown past parent body activity presents a seemingly unsolvable issue of how the parent orbit could have been perturbed on every perihelion pass close to the Sun. Also, if the ejection modeling assumed that particle ejection occurred during a perihelion passage when the parent body was not active, there would be no meteors present when the Earth passes through the point of the falsely predicted filament.
On the other hand, if the Earth encounters meteoroids from a perihelion passage of particularly high activity, an unpredicted outburst can occur. \citet{vaubaillon2015latest} discuss the unknowns regarding parent bodies and the problems regarding meteor shower outburst prediction in greater detail. Another fundamental problem that was encountered during this analysis was the lack of any rigorous definitions of what meteor showers or meteoroid streams actually are. Nor is there a common consensus to refer to. This issue was briefly discussed in \citet{brown2010meteoroid} and no real advances towards a clear definition have been made since. We can consider a meteor shower as a group of meteors which annually appear near the same radiant and which have approximately the same entry velocity. To better embrace the higher-dimensional nature of orbital parameters and time evolution versus a radiant that is fixed year after year, this should be extended to mean there exists a meteoroid stream with meteoroids distributed along and across the whole orbit with constraints dictated by dynamical evolution away from the mean orbit. By using the first definition however, some meteor showers caused by Jupiter-family comets will not be covered very well, as they are not active annually. The orbits of these kinds of meteor showers are not stable in the long term due to the gravitational and non-gravitational influences on the meteoroid stream.\footnote{\label{vaubaillonIMC2014}\url{http://www.imo.net/imc2014/imc2014-vaubaillon.pdf}.} On the other hand, if we are to consider any group of radiants which exhibit similar features but do not appear annually as a meteor shower, we can expect to have thousands of meteor shower candidates in the near future. 
It is the opinion of the authors that with the rising number of observed multi-station video meteors, and consequently the rising number of estimated meteoroid orbits, the number of new potential meteor showers detected will increase as well, regardless of the stream search method used. As a consequence of the vague meteor shower definition, several methods of meteor shower identification have been used in recent papers. \citet{vida2014meteor} discussed a rudimentary method of visual identification combined with D-criterion shower candidate validation. \citet{rudawska2014new} used the Southworth and Hawkins D-criterion as a measure of meteoroid orbit similarity in an automatic single-linkage grouping algorithm, while in the subsequent paper by \citet{rudawska2015independent}, the geocentric parameters were evaluated as well. In \citet{jenniskens2016cams} the results of the automatic grouping by orbital parameters were disputed and a manual approach was proposed. Although there are concerns about automated stream identification methods, we believe it would be worthwhile to explore the possibility of using density-based clustering algorithms, such as the DBSCAN or OPTICS algorithms by \citet{kriegel2005density}, for the purpose of meteor shower identification. They could possibly discriminate shower meteors from the background, as they have a notion of noise and of varying density in the data. We also strongly encourage attempts to define meteor showers in a more rigorous manner, or the introduction of an alternative term which would help to properly describe such a complex phenomenon. The authors believe that a clear definition would be of great help in determining whether a parent body actually produces meteoroids, at least until meteor observations become precise enough to determine the connection of a parent body to a single meteoroid orbit.
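As a proof of concept for the density-based suggestion above, the sketch below runs scikit-learn's DBSCAN on entirely synthetic orbital elements (all numbers are hypothetical; a real search would supply a precomputed D-criterion distance matrix via metric="precomputed" and handle the 360-degree wrap-around of the angular elements):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Synthetic orbital elements (q, e, i, node, peri): a tight hypothetical
# "stream" of 30 orbits on top of 100 sporadic background orbits.
stream = rng.normal([1.0, 0.7, 60.0, 230.0, 180.0],
                    [0.02, 0.02, 1.0, 3.0, 3.0], size=(30, 5))
sporadic = rng.uniform([0.1, 0.0, 0.0, 0.0, 0.0],
                       [1.2, 1.0, 90.0, 360.0, 360.0], size=(100, 5))
orbits = np.vstack([stream, sporadic])

# Crude per-element normalization so Euclidean distance is meaningful.
scaled = orbits / orbits.std(axis=0)
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(scaled)
# Label -1 marks unclustered "noise" (the sporadic background); the
# dense synthetic stream should come out as a single cluster.
```

The appeal of this family of algorithms is exactly what the text notes: sporadic meteors are returned as noise instead of being forced into a cluster.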
\section{Conclusion} From this work, we can conclude that the following associations between newly discovered meteoroid streams and parent bodies are validated: \begin{itemize} \item \#549 FAN 49 Andromedids and Comet 2001 W2 Batters \item \#533 JXA July $\xi$ Arietids and Comet C/1964 N1 Ikeya \item \#539 ACP $\alpha$ Cepheids and Comet 255P/Levy \item \#541 SSD 66 Draconids and Asteroid 2001 XQ \item \#751 KCE $\kappa$ Cepheids and Asteroid 2009 SG18 \item \#753 NED November Draconids and Asteroid 2009 WN25 \item \#754 POD $\psi$ Draconids and Asteroid 2008 GV \end{itemize} The connection between \#755 MID May $\iota$ Draconids and asteroid 2006 GY2 is not firmly enough established and still requires additional observational data before any conclusion can be drawn. The asteroidal associations are interesting in that each has a Tisserand parameter for Jupiter indicating a possible Jupiter-family comet orbit in the past, and thus each may now be a dormant comet. It may therefore be worth looking for outgassing from asteroids 2001 XQ, 2009 SG18, 2009 WN25, 2008 GV, and even 2006 GY2 during their perihelion passages in the near future using high-resolution imaging. \begin{acknowledgements} JV would like to acknowledge the availability of computing resources on the Occigen supercomputer at CINES (France) to perform the computations required for modeling the theoretical meteoroid streams. Special acknowledgement also goes to all members of the Croatian Meteor Network for their devoted work on this project. \end{acknowledgements} \bibpunct{(}{)}{;}{a}{}{,} \bibliographystyle{aa}
\section{Introduction} Competition between growth and fragmentation is a common phenomenon for a structured population. It arises for instance in a context of cell division (see, among many others, \cite{adimy, basse, Bekkal1, Bekkal2, Doumic, farkas, GW2, MetzDiekmann, PTou}), polymerization (see \cite{biben, destaing}), telecommunication (see \cite{Baccelli}) or neurosciences (see \cite{PPS}). It is also a mechanism which rules the proliferation of prion proteins (see \cite{CL1, Greer, LW}). These proteins are responsible for spongiform encephalopathies and appear in the form of aggregates in infected cells. Such polymers grow by attaching non-infectious monomers and converting them into infectious ones. On the other hand, they increase their number by splitting. To describe such phenomena, we write the following integro-differential equation, \begin{equation}\label{eq:temporel}\left \{ \begin{array}{l} \displaystyle\dfrac{\partial}{\partial t} u(x,t) + \dfrac{\partial}{\partial x} \big(\tau(x) u(x,t)\big) + \beta(x) u(x,t) = 2 \int_{x}^{\infty} \beta(y) \kappa (x,y) \, u(y,t) \, dy, \qquad x \geqslant0, \\ \\ u(x,0)=u_0(x), \\ \\ u(0,t)=0. \end{array}\right.\end{equation} The function $u(x,t)$ represents the quantity of individuals (cells or polymers) of structured variable (size, protein content...)
$x$ at time $t.$ These individuals grow (\emph{i.e.}, polymers aggregate monomers, or cells increase by nutrient uptake for instance) with the rate $\tau(x).$ Equation \eqref{eq:temporel} also takes into account the fragmentation of a polymer (or the division of a cell) of size $y$ into two smaller polymers of sizes $x$ and $y-x.$ This fragmentation occurs with a rate $\beta(y)$ and produces an aggregate of size $x$ with the rate $\kappa(x,y).$ Equation \eqref{eq:temporel} is a particular case of the more general one \begin{equation}\label{eq:general}\dfrac{\p}{\p t} u(x,t) + \dfrac{\p}{\p x} \big(\tau(x) u(x,t)\big) + [\beta(x)+\mu(x)] u(x,t) = n \int_{x}^{\infty} \beta(y) \kappa (x,y) \, u(y,t) \, dy, \qquad x \geqslant x_0,\end{equation} with the boundary condition $u(x_0,t)=0$ (see \cite{Banasiak, CL1, LW}). Here, polymers are broken into an average of $n>1$ smaller ones by the fragmentation process, there is a death term $\mu(x)\geq0$ representing degradation, and a minimal size of polymers $x_0$ which can be positive. This more general model is biologically and mathematically relevant in the case of prion proliferation and is used in \cite{CL2, CL1, Greer, LW} with a coupling to an ODE. Our results remain true for this generalization. A fundamental tool to study the asymptotic behaviour of the population when $t\to\infty$ is the existence of eigenelements $(\lb,\U,\phi)$ solution of the equation \begin{equation} \label{eq:eigenproblem} \left \{ \begin{array}{l} \displaystyle \f{\p}{\p x} (\tau(x) \U(x)) + ( \beta (x) + \lb) \U(x) = 2 \int_x^\infty \beta(y)\kappa(x,y) \U(y) dy, \qquad x \geqslant0, \\ \\ \tau\U(x=0)=0 ,\qquad \U(x)\geq0, \qquad \int_0^\infty \U(x)dx =1, \\ \\ \displaystyle -\tau(x) \f{\p}{\p x} (\phi(x)) + ( \beta (x) + \lb) \phi(x) = 2 \beta(x) \int_0^x \kappa(y,x) \phi(y) dy, \qquad x \geqslant0, \\ \\ \phi(x)\geq0, \qquad \int_0^\infty \phi(x)\U(x)dx =1. \end{array} \right.
\end{equation} For the first equation (the equation on $\U$) we are looking for ${\mathcal D}'$ solutions defined as follows: $\U\in L^1(\mathbb{R}^+)$ is a ${\mathcal D}'$ solution if $\forall\varphi\in{\mathcal C}^\infty_c(\mathbb{R}^+),$ \begin{equation}\label{eq:D'eigenproblem} \displaystyle -\int_0^\infty\tau(x)\U(x)\p_x\varphi(x)\,dx + \lb\int_0^\infty\U(x)\varphi(x)\,dx = \int_0^\infty\beta(x)\U(x)\Bigl(2\int_0^\infty\varphi(y)\kappa(y,x)\,dy-\varphi(x)\Bigr)\,dx. \end{equation} Concerning the dual equation, we are looking for a solution $\phi\in W^{1,\infty}_{loc}(0,\infty)$ such that the equality holds in $L^1_{loc}(0,\infty),\ i.e.$ almost everywhere. When such elements exist, the asymptotic growth rate for a solution to \eqref{eq:temporel} is given by the first eigenvalue $\lb$ and the asymptotic shape is given by the corresponding eigenfunction $\U.$ More precisely, it is proved for a constant fragmentation rate $\beta$ that $u(x,t)e^{-\lb t}$ converges exponentially fast to $\rho\U(x)$ where $\rho=\int u_0(y)dy$ (see \cite{LP,PR}). For more general fragmentation rates, one can use the dual eigenfunction $\phi$ and the so-called ``General Relative Entropy'' method introduced in \cite{MMP1,BP}. It provides similar results but without the exponential convergence, namely that $$\int_0^\infty \bigl|u(y,t)e^{-\lambda t}-\langle u_0,\phi\rangle{\mathcal U}(y)\bigr|\phi(y)\,dy\underset{t\to\infty}{\longrightarrow}0$$ where $\langle u_0,\phi\rangle=\int u_0(y)\phi(y)dy$ (see \cite{MMP1,MMP2}). The eigenvalue problem can also be used in nonlinear cases, such as prion proliferation equations, where there is a quadratic coupling of Equation \eqref{eq:temporel} or \eqref{eq:general} with a differential equation. In \cite{CL2, CL1, Engler, Pruss} for instance, the stability of steady states is investigated. The use of entropy methods in the case of nonlinear problems remains, however, a challenging and widely open field (see \cite{PTum} for a recent review).
\ Existence and uniqueness of eigenelements have already been proved for general fragmentation kernels $\kappa(x,y)$ and fragmentation rates $\beta(x),$ but with very particular polymerization rates $\tau(x),$ namely constant ($\tau\equiv1$ in \cite{BP}), homogeneous ($\tau(x)=x^\mu$ in \cite{M1}) or with a compact support ($Supp\,\tau=[0,x_M]$ in \cite{Doumic}). The aim of this article is to consider more general $\tau,$ as \cite{CL1, Silveira} suggest. Indeed, there is no biological justification for considering specific shapes of $\tau$ in the case when $x$ represents a size (mass or volume) or some structuring variable and not the age of a cell (even in this last case it is not so clear that $\f{dx}{dt}=1,$ since biological clocks may exhibit time distortions). For instance, for the prion proteins, the fact that the small aggregates are hardly infectious (see \cite{Lenuzza,Silveira}) leads us to include the case of rates vanishing at $x=0.$ Considering fully general growth rates is thus indispensable to take into account biological or physical phenomena in their full diversity. The proof of \cite{BP} can be adapted for non-constant rates that are still positive and bounded ($0<m<\tau(x)<M$). The paper \cite{M1} gives results for $\tau(0)=0,$ but for a very restricted class of shapes of $\tau.$ The paper \cite{Doumic} gives results for $\tau$ with general shape in the case where there is also an age variable (integration in age then allows one to recover Problem \eqref{eq:temporel}), but requires a compact support and regular parameters. Here we consider polymerization rates that can vanish at $x=0,$ with general shape and little regularity for all the parameters ($\tau,\ \beta$ and $\kappa$). From a mathematical viewpoint, relaxing as far as possible the assumptions on the rates $\tau,\kappa,\beta,$ as we have done in this article, also leads to a better understanding of the intrinsic mechanisms driving the competition between growth and fragmentation.
\begin{theorem}[Existence and Uniqueness]\label{th:eigenelements} Under assumptions \eqref{as:kappa1}-\eqref{as:betatauinf}, there exists a unique solution $(\lb,\U,\phi)$ (in the sense defined above) to the eigenproblem \eqref{eq:eigenproblem} and we have $$\lb>0,$$ $$x^\al\tau\U\in L^p(\mathbb{R}^+),\quad\forall \al\geq-\gamma,\quad\forall p\in[1,\infty],$$ $$x^\al\tau\U\in W^{1,1}(\mathbb{R}^+),\quad\forall \al\geq0,$$ $$\exists k>0\ s.t.\ \f{\phi}{1+x^k}\in L^\infty(\mathbb{R}^+),$$ $$\tau\f{\p}{\p x}\phi\in L_{loc}^\infty(\mathbb{R}^+).$$ \end{theorem} The rest of this paper is devoted to stating the assumptions precisely and proving this theorem. It is organized as follows: in Section \ref{se:coefficients} we describe the assumptions and give some examples of interesting parameters. In Section \ref{se:proof} we prove Theorem \ref{th:eigenelements} using $a\ priori$ bounds on weighted norms, and then we give some consequences and perspectives in Section \ref{se:csq}. The proofs of the technical lemmas and theorem can be found in the Appendix.
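As a purely illustrative numerical check (our own sketch, not part of the proofs), consider the simplest case $\tau\equiv\beta\equiv1$ with equal mitosis $\kappa(x,y)=\delta_{x=\f y2}$: the gain term becomes $4\,\U(2x),$ and integrating the equation on $\U$ over $\mathbb{R}^+$ gives $\lb=1$ exactly. A crude first-order upwind discretization recovers this value (domain truncation and grid size are arbitrary choices):

```python
import numpy as np

# Eigenproblem for tau = beta = 1 and equal mitosis:
#   U'(x) + (1 + lambda) U(x) = 4 U(2x),  U(0) = 0,
# discretized with first-order upwind differences on x_j = (j + 1) h.
L, N = 10.0, 800               # truncated domain [0, L] and grid size
h = L / N
A = np.zeros((N, N))
for j in range(N):
    A[j, j] -= 1.0 / h + 1.0   # upwind transport + fragmentation loss
    if j > 0:
        A[j, j - 1] += 1.0 / h
    k = 2 * j + 1              # grid index of the point 2 * x_j
    if k < N:                  # gain term 4 U(2x); zero past the cutoff
        A[j, k] += 4.0
# A has nonnegative off-diagonal entries, so its spectral abscissa is a
# real (Perron) eigenvalue: the discrete approximation of lambda.
lam = np.linalg.eigvals(A).real.max()
```

With these parameters the computed value is close to the exact eigenvalue $\lb=1$; the residual error comes from the numerical diffusion of the upwind scheme and the truncation of the gain term.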
\ \section{Coefficients}\label{se:coefficients} \subsection{Assumptions} For all $y\geq0,\ \kappa(.,y)$ is a nonnegative measure with support included in $[0,y].$ We define $\kappa$ on $(\mathbb{R}_+)^2$ as follows: $\kappa(x,y)=0\ \text{for}\ x>y.$ We assume that for all continuous functions $\psi,$ the application $f_\psi:y\mapsto\int\psi(x)\kappa(x,y)\,dx$ is Lebesgue measurable.\\ The natural assumptions on $\kappa$ (see \cite{Greer} for the motivations) are that polymers can split only into two pieces, which is taken into account by \begin{equation}\label{as:kappa1}\int\kappa(x,y) dx = 1.\end{equation} So $\kappa(.,y)$ is a probability measure and $f_\psi\in L^\infty_{loc}(\mathbb{R}^+).$ The conservation of mass imposes \begin{equation}\label{as:kappa2}\int x\kappa(x,y) dx = \frac y 2,\end{equation} a property that is automatically satisfied for a symmetric fragmentation ($i.e.\ \kappa(x,y)=\kappa(y-x,y)$) thanks to \eqref{as:kappa1}. For the more general model \eqref{eq:general}, assumption \eqref{as:kappa2} becomes $\int x\kappa(x,y) dx = \frac y n$ to preserve mass conservation.\\ We also assume that the second moment of $\kappa$ is less than the first one, \begin{equation}\label{as:kappa3}\int\f{x^2}{y^2} \, \kappa(x,y) dx \leq c < 1/2\end{equation} (it becomes $c<1/n$ for model \eqref{eq:general}). We refer to the Examples for an explanation of the physical meaning.
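To make the claim about symmetric fragmentation explicit (a standard change of variables, spelled out here for the reader's convenience): if $\kappa(x,y)=\kappa(y-x,y),$ then substituting $x\mapsto y-x$ and using \eqref{as:kappa1},

```latex
\int_0^y x\,\kappa(x,y)\,dx
  = \int_0^y (y-x)\,\kappa(y-x,y)\,dx
  = \int_0^y (y-x)\,\kappa(x,y)\,dx
  = y - \int_0^y x\,\kappa(x,y)\,dx,
```

hence $\int_0^y x\,\kappa(x,y)\,dx=y/2,$ which is exactly \eqref{as:kappa2}.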
\ For the polymerization and fragmentation rates $\tau$ and $\beta,$ we introduce the set $${\mathcal P}:=\bigl\{f\geq0\,:\,\exists\mu,\nu\geq0,\ \limsup_{x\to\infty}x^{-\mu}f(x)<\infty\ \text{and}\ \liminf_{x\to\infty}x^\nu f(x)>0\bigr\}$$ and the space $$L^1_0:=\bigl\{f,\ \exists a>0,\ f\in L^1(0,a)\bigr\}.$$ We consider \begin{equation}\label{as:betatauspace}\beta\in L^1_{loc}(\mathbb{R}^{+*})\cap{\mathcal P},\qquad \exists\al_0\geq0\ s.t.\ \tau\in L^\infty_{loc}(\mathbb{R}^{+},x^{\al_0}dx)\cap{\mathcal P}\end{equation} satisfying \begin{equation}\label{as:taupositivity}\forall K\ \text{compact of}\ (0,\infty),\ \exists m_K>0\quad s.t.\quad\tau(x)\geq m_K\ \text{for}\ a.e.\ x\in K\end{equation} (if $\tau$ is continuous, assumption \eqref{as:taupositivity} simply says that $\tau(x)>0$ for all $x>0$) and \begin{equation}\label{as:betasupport}\exists b\geq0,\quad Supp\,\beta=[b,\infty).\end{equation} Assumption \eqref{as:betasupport} is necessary to prove existence and uniqueness for the adjoint problem. \ To avoid shattering (the formation of zero-size polymers, see \cite{Banasiak,LW}), we assume \begin{equation}\label{as:kappatau}\exists\, C>0,\gamma\geq0\quad s.t.\qquad\int_0^x\kappa(z,y)\,dz\leq \min\Bigl(1,C\Bigl(\f x y\Bigr)^\gamma\Bigr)\qquad\text{and}\qquad\f{x^\gamma}{\tau(x)}\in L^1_0,\end{equation} which links $\tau$ implicitly to $\kappa,$ and also \begin{equation}\label{as:betatau0}\f{\beta}{\tau}\in L^1_0.\end{equation} On the other hand, to avoid forming infinitely long polymers (gelation phenomenon, see \cite{EscoMischler1,EscoMischler2}), we assume \begin{equation}\label{as:betatauinf}\lim_{x\rightarrow +\infty}\f{x\beta(x)}{\tau(x)}=+\infty.\end{equation} \begin{remark}\label{rk:kappa} When \eqref{as:kappatau} is satisfied for some $\gamma>0,$ assumption \eqref{as:kappa3} is automatically fulfilled (see Lemma \ref{lm:kappa} in the Appendix).
\end{remark} \ \subsection{Examples} First we give some examples of coefficients which do or do not satisfy our previous assumptions.\\ For the fragmentation kernel, we first check assumptions \eqref{as:kappa1} and \eqref{as:kappa2}. They are satisfied by self-similar measures, namely $\kappa(x,y)=\f1y\kappa_0(\f xy),$ with $\kappa_0$ a probability measure on $[0,1]$ symmetric about $1/2.$ We now exhibit some examples of $\kappa_0.$ \ \noindent{\bf General mitosis :} a cell of size $x$ divides into a cell of size $rx$ and one of size $(1-r)x$ (see \cite{M2}) \begin{equation}\label{ex:kappar}\kappa_0^r=\f12(\delta_{r}+\delta_{1-r})\qquad\text{for}\qquad r\in[0,1/2].\end{equation} Assumption \eqref{as:kappatau} is satisfied for any $\gamma>0$ when $r\in(0,1/2],$ so \eqref{as:kappa3} is also fulfilled thanks to Remark \ref{rk:kappa}. The particular value $r=1/2$ leads to equal mitosis ($\kappa(x,y)=\delta_{x=\f y2}$).\\ The case $r=0$ corresponds to the renewal equation ($\kappa(x,y)=\f12(\delta_{x=0}+\delta_{x=y})$). In this case we cannot strictly speak of mitosis, because the sizes of the daughters are $0$ and $x;$ it arises when $x$ represents the age of a cell rather than its size. This particular case is precisely the one that we want to avoid with assumption \eqref{as:kappa3}; it can also be studied separately with different tools (see \cite{PTum} for instance). For such a fragmentation kernel, assumption \eqref{as:kappatau} is satisfied only for $\gamma=0,$ and the moments $\int z^k\kappa_0(z)dz$ are equal to $1/2$ for all $k>0,$ so \eqref{as:kappa3} does not hold true. However, if we consider a convex combination of $\kappa_0^0$ with another kernel such as $\kappa_0^r$ with $r\in(0,1/2],$ then \eqref{as:kappatau} remains false for any $\gamma>0$ but \eqref{as:kappa3} is fulfilled.
Indeed we have for $\rho\in(0,1)$ $$\int z^2(\rho\kappa_0^0(z)+(1-\rho)\kappa_0^r(z))\,dz=\f\rho2+\f{1-\rho}2(r^2+(1-r)^2)=\f12(1-2r(1-r)(1-\rho))<\f12.$$ \noindent{\bf Homogeneous fragmentation :} \begin{equation}\label{ex:kappaal}\kappa_0^\al(z)=\f{\al+1}{2}(z^\al+(1-z)^\al)\qquad\text{for}\qquad\al>-1.\end{equation} It gives another class of fragmentation kernels, namely in $L^1$ (unlike the mitosis case). The parameter $\gamma=1+\al>0$ suits \eqref{as:kappatau}, and so \eqref{as:kappa3} is fulfilled. It shows that our assumptions allow fragmentation at the ends of the polymers (called depolymerization, see \cite{Lenuzza}, when $\al$ is close to $-1$), as long as it is not the extreme case of the renewal equation.\\ Uniform repartition ($\kappa(x,y)=\f1y{\rm 1\mskip-4mu l}_{0\leq x\leq y}$) corresponds to $\al=0$ and is also included. \ This last case of uniform repartition is useful because it provides us with explicit formulas for the eigenelements. For instance, we can consider the two following examples. \ \noindent{\bf First example :} $\,\tau(x)=\tau_0,\ \beta(x)=\beta_0x.$\\ In this case, widely used in \cite{Greer}, the eigenelements exist and we have $$\lb=\sqrt{\beta_0\tau_0},$$ $$\U(x)=2\sqrt{\f{\beta_0}{\tau_0}}\Bigl(X+\f{X^2}2\Bigr)e^{-X-\f{X^2}2},\quad\text{with}\ X=\sqrt{\f{\beta_0}{\tau_0}}x,$$ $$\phi(x)=\f12(1+X).$$ \noindent{\bf Second example :} $\ \tau(x)=\tau_0x.$\\ For any $\beta$ for which eigenelements exist, we have $$\lb=\tau_0\qquad \text{and}\qquad \phi(x)=\f x{\int y\,\U(y)\,dy}.$$ For instance, when $\beta(x)=\beta_0x^n$ with $n\in\mathbb{N}^*,$ the eigenelements exist, $\U$ and $\phi$ can be computed explicitly, and the formulas are given in Table \ref{tab:examples}. In this table we can notice that $\U(0)>0,$ but the boundary condition $\tau\U(0)=0$ is nevertheless fulfilled since $\tau(0)=0.$
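The closed formulas of the first example can be verified numerically. The sketch below (ours, not part of the paper; plain rectangle quadrature and a finite-difference derivative) checks the normalizations $\int\U=1$, $\int\phi\U=1$ and the residual of the stationary equation $\p_x(\tau_0\U)+(\beta_0x+\lb)\U=2\beta_0\int_x^\infty\U(y)\,dy$, which is what the eigenproblem reduces to for the uniform kernel and $\beta(y)=\beta_0 y$:

```python
import numpy as np

# Check of the first example: tau(x)=tau0, beta(x)=beta0*x, uniform kernel
# kappa(x,y) = 1/y on [0,y]; the paper claims lambda = sqrt(beta0*tau0).
beta0, tau0 = 2.0, 0.5
lam = np.sqrt(beta0 * tau0)             # claimed eigenvalue
c = np.sqrt(beta0 / tau0)

x = np.linspace(0.0, 10.0, 100001)
dx = x[1] - x[0]
X = c * x
U = 2.0 * c * (X + X**2 / 2) * np.exp(-X - X**2 / 2)
phi = 0.5 * (1.0 + X)

mass = U.sum() * dx                     # should be close to 1
norm = (phi * U).sum() * dx             # normalization <phi, U>, close to 1

# residual of  d/dx(tau0 U) + (beta0 x + lam) U = 2 beta0 int_x^inf U(y) dy
tail = np.flip(np.cumsum(np.flip(U))) * dx
resid = np.max(np.abs(tau0 * np.gradient(U, dx)
                      + (beta0 * x + lam) * U - 2.0 * beta0 * tail))
```

Both normalizations come out equal to $1$ up to quadrature error, and the residual is of the order of the discretization error, confirming the formulas for any choice of $\beta_0,\tau_0>0$.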
\begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|} \hline $n=1$&$\lb=\tau_0$&$\U(x)=\f{\beta_0}{\tau_0}e^{-\f{\beta_0}{\tau_0}x}$&$\phi(x)=\f{\beta_0}{\tau_0}x$\\ \hline $n=2$&$\lb=\tau_0$&$\U(x)=\sqrt{\f{2\beta_0}{\pi\tau_0}}e^{-\f12\f{\beta_0}{\tau_0}x^2}$&$\phi(x)=\sqrt{\f{\pi\beta_0}{2\tau_0}}x$\\ \hline $n$ & $\lb=\tau_0$&$\U(x)=\Bigl(\f{\beta_0}{n\tau_0}\Bigr)^{\f1n}\f n{\Gamma(\f1n)}e^{-\f1n\f{\beta_0}{\tau_0}x^n}$&$\phi(x)=\Bigl(\f{\beta_0}{n\tau_0}\Bigr)^{\f1n}\f {\Gamma(\f1n)}{\Gamma(\f2n)}x$\\ \hline \end{tabular} \caption{\label{tab:examples}The example $\tau(x)=\tau_0x,\ \beta(x)=\beta_0x^n$ with uniform repartition $\kappa(x,y)=\f1y{\rm 1\mskip-4mu l}_{0\leq x\leq y}.$ The table gives the eigenelements solution to \eqref{eq:eigenproblem}.} \end{center} \end{table} Now we turn to non-existence cases. Let us consider a constant fragmentation rate $\beta(x)=\beta_0$ with an affine polymerization rate $\tau(x)=\tau_0+\tau_1x,$ and any fragmentation kernel $\kappa$ which satisfies assumptions \eqref{as:kappa1}-\eqref{as:kappa2}. We notice that \eqref{as:betatauinf} is not satisfied and look at two instructive cases. \ \noindent{\bf First case :} $\ \tau_0=0.$\\ In this case assumption \eqref{as:betatau0} does not hold true. Assume that there exists $\U\in L^1(\mathbb{R}^+)$ solution of \eqref{eq:eigenproblem} with the estimates of Theorem \ref{th:eigenelements}.
Integrating the equation on $\U$ we obtain $\lb=\beta_0,$ while multiplying the equation by $x$ before integrating yields $\lb=\tau_1.$ We conclude that eigenelements cannot exist if $\tau_1\neq\beta_0.$\\ Moreover, if we take $\kappa(x,y)=\f1y{\rm 1\mskip-4mu l}_{0\leq x\leq y},$ then a formal computation shows that any solution to the first equation of \eqref{eq:eigenproblem} lies in the two-dimensional space $\mathrm{Span}\{x^{-1},x^{-\f{2\beta_0}{\tau_1}}\}.$ So, even if $\beta_0=\tau_1,$ there does not exist an eigenvector in $L^1.$ \ \noindent{\bf Second case :} $\ \tau_0>0.$\\ In this case \eqref{as:betatau0} holds true, but the same integrations as before lead to $$\int x\U(x)\,dx=\f{\tau_0}{\beta_0-\tau_1}.$$ So there cannot exist any eigenvector $\U\in L^1(x\,dx)$ when $\tau_1\geq\beta_0.$ \ \section{Proof of the main theorem}\label{se:proof} The proof of Theorem \ref{th:eigenelements} is divided as follows. We begin with a result concerning the positivity of the {\it a priori} existing eigenvectors (Lemma \ref{lm:positivity}). We then define, in Section \ref{subsec:truncated}, a regularized and truncated problem for which we know that eigenelements exist (see the Appendix \ref{se:KR} for a proof using the Krein-Rutman theorem), and we choose it so that the related eigenvalue is positive (Lemma \ref{lm:lambdapositivity}). In Section~\ref{subsec:estim}, we give a series of estimates that allow us to pass to the limit in the truncated problem and thus prove existence for the original eigenproblem \eqref{eq:eigenproblem}. The positivity of the eigenvalue $\lb$ and the uniqueness of the eigenelements are proved in the last two subsections. \subsection{A preliminary lemma} Before proving Theorem \ref{th:eigenelements}, we give a preliminary lemma, useful for proving the uniqueness of the eigenfunctions.
\begin{lemma}[Positivity]\label{lm:positivity} Consider $\U$ and $\phi$ solutions to the eigenproblem \eqref{eq:eigenproblem}.\\ We define $\displaystyle m:=\inf_{x,y}\bigl\{x\,:\,(x,y)\in Supp\,\beta(y)\kappa(x,y)\bigr\}.$ Then we have, under assumptions \eqref{as:kappa1}, \eqref{as:kappa2}, \eqref{as:taupositivity} and \eqref{as:betasupport}, $$Supp\,\U=[m,\infty)\qquad\text{and}\qquad\tau\U(x)>0\quad\forall x>m,$$ $$\phi(x)>0\quad\forall x>0.$$ If additionally $\f1\tau\in L^1_0,$ then $\phi(0)>0.$ \end{lemma} \begin{remark} In the case $Supp\,\kappa=\{(x,y)/x\leq y\},$ we have $m=0,$ and Lemma \ref{lm:positivity} and Theorem \ref{th:eigenelements} can be proved without the connectedness condition \eqref{as:betasupport} on the support of $\beta.$ \end{remark} \begin{proof} Let $x_0>0$ and define $F:x\mapsto \tau(x)\U(x)e^{\int_{x_0}^x\f{\lb+\beta(s)}{\tau(s)}ds}.$ We have that \begin{equation}\label{Upositivity}F'(x)=2e^{\int_{x_0}^x\f{\lb+\beta(s)}{\tau(s)}ds}\int\beta(y)\kappa(x,y)\U(y)\,dy\geq0.\end{equation} So, once $\tau\U$ becomes positive, it remains positive for larger $x.$ \ We define $a:=\inf\{x\,:\,\tau(x)\U(x)>0\}.$ We first prove that $a\leq \f b2.$ For this we integrate the equation on $[0,a]$ to obtain $$\int_0^a\int_a^\infty\beta(y)\kappa(x,y)\U(y)\,dydx=0,$$ $$\int_a^\infty\beta(y)\U(y)\int_0^a\kappa(x,y)\,dxdy=0.$$ Thus for almost every $y\geq\max(a,b),\ \int_0^a\kappa(x,y)\,dx=0.$ As a consequence we have $$1=\int\kappa(x,y)\,dx=\int_a^y\kappa(x,y)\,dx\leq\f1a\int x\kappa(x,y)\,dx=\f y{2a}$$ thanks to \eqref{as:kappa1} and \eqref{as:kappa2}, and this is possible only if $b\geq2a.$ \ Assume by contradiction that $m<a.$ Integrating \eqref{eq:eigenproblem} multiplied by $\varphi,$ we have, for all $\varphi\in{\mathcal C}_c^\infty$ such that $Supp\,\varphi\subset[0,a],$ \begin{equation}\label{eq:positivity}\int\int\varphi(x)\beta(y)\kappa(x,y)\U(y)\,dydx=0.\end{equation} By definition of $m,$ and using the fact that $m<a,$ there exists
$(p,q)\in(m,a)\times(b,\infty)$ such that $(p,q)\in Supp\,\beta(y)\kappa(x,y).$ But we can choose $\varphi$ positive such that $\varphi(p)\U(q)>0,$ and this contradicts \eqref{eq:positivity}. So we have $m\geq a.$\\ To conclude, we notice that on $[0,m],\ \U$ satisfies $$\p_x(\tau(x)\U(x))+\lb\U(x)=0.$$ So, thanks to the condition $\tau(0)\U(0)=0$ and assumption \eqref{as:taupositivity}, we have $\U\equiv0$ on $[0,m],$ so $m=a$ and the first statement is proved. \ For $\phi,$ we define $G(x):=\phi(x)e^{-\int_{x_0}^x\f{\lb+\beta(s)}{\tau(s)}ds}.$ We have that \begin{equation}\label{eq:phipositivity}G'(x)=-2e^{-\int_{x_0}^x\f{\lb+\beta(s)}{\tau(s)}ds}\beta(x)\int_0^x\kappa(y,x)\phi(y)\,dy\leq0,\end{equation} so, once $\phi$ vanishes, it remains zero. Therefore $\phi$ is positive on an interval $(0,x_1)$ with $x_1\in\mathbb{R}_+^*\cup\{+\infty\}.$ Assuming that $x_1<+\infty$ and using that $x_1>a=m$ (because $\int\phi(x)\U(x)dx=1$), we can find $X\geq x_1$ such that $$\int_{x_1}^XG'(x)\,dx=-2\int_{x_1}^X\int_0^{x_1}e^{-\int_{x_0}^x\f{\lb+\beta(s)}{\tau(s)}ds}\phi(y)\beta(x)\kappa(y,x)\,dy\,dx<0.$$ This contradicts the fact that $\phi(x)=0$ for $x\geq x_1,$ and we have proved that $\phi(x)>0$ for $x>0.$\\ If $\f1\tau\in L^1_0,$ we can take $x_0=0$ in the definition of $G,$ and then either $\phi(0)>0$ or $\phi\equiv0.$ Since $\phi$ is positive, this ends the proof of the lemma. \qed \end{proof} \ \subsection{Truncated problem} \label{subsec:truncated} The proof of the theorem is based on uniform estimates on the solution to a truncated equation. Let $\eta,\ \delta,\ R$ be positive numbers and define $$\tau_\eta(x)=\left\{\begin{array}{ll}\eta&0\leq x\leq\eta\\ \tau(x)&x\geq\eta.
\end{array}\right.$$ Then $\tau_\eta$ is bounded from below on $[0,R]$ thanks to \eqref{as:taupositivity} and we set $\mu=\mu(\eta,R):=\inf_{[0,R]}\tau_\eta.$ The existence of eigenelements $(\lb_\eta^\delta,\U_\eta^\delta,\phi_\eta^\delta)$ for the following truncated problem when $\delta R<\mu$ is standard (see Theorem \ref{th:KreinRutman} in the Appendix). \begin{equation}\label{eq:truncated} \left \{ \begin{array}{l} \displaystyle \f{\p}{\p x} (\tau_\eta(x) \U_\eta^\delta(x)) + ( \beta(x) + \lb_{\eta}^\delta) \U_\eta^\delta(x) = 2 \int_x^R\beta(y)\kappa(x,y) \U_\eta^\delta(y)\,dy,\qquad 0<x<R, \\ \\ \tau_\eta\U_\eta^\delta(x=0)=\delta,\qquad \U_\eta^\delta(x)>0 , \qquad \int \U_\eta^\delta(x)dx =1, \\ \\ \displaystyle -\tau_\eta(x) \f{\p}{\p x} \phi_\eta^\delta(x) + ( \beta(x) + \lb_\eta^\delta) \phi_\eta^\delta(x) - 2 \beta(x) \int_0^x\kappa(y,x) \phi_\eta^\delta(y)\,dy = \delta\phi_\eta^\delta(0),\qquad 0<x<R, \\ \\ \phi_\eta^\delta(R)=0, \qquad \phi_\eta^\delta(x)>0 , \qquad \int \phi_\eta^\delta(x)\U_\eta^\delta(x)dx =1. \end{array} \right. \end{equation} The proof of Theorem \ref{th:eigenelements} requires $\lb_\eta^\delta>0.$ To enforce it, we take $\delta R=\f\mu2$ and we consider $R$ large enough to satisfy the following lemma. \begin{lemma}\label{lm:lambdapositivity} Under assumptions \eqref{as:kappa1}, \eqref{as:betatauspace} and \eqref{as:betatauinf}, there exists $R_0>0$ such that for all $R>R_0,$ if we choose $\delta=\f\mu{2R},$ then we have $\lb_\eta^\delta>0.$ \end{lemma} \begin{proof} Assume by contradiction that $R>0$ and $\lb_\eta^\delta\leq0$ with $\delta=\f\mu{2R}.$ Then, integrating the equation on $\U_\eta^\delta$ between $0$ and $x>0$ (we drop the sub- and superscripts for readability), we obtain \begin{eqnarray*} 0&\geq&\lb\int_0^x\U(y)\,dy\\ &=&\delta-\tau(x)\U(x)-\int_0^x\beta(y)\U(y)\,dy+2\int_0^x\int_z^R\beta(y)\kappa(z,y)\U(y)\,dy\,dz\\ &=&\delta-\tau(x)\U(x)+\int_0^x\beta(y)\U(y)\,dy+2\int_x^R\Bigl(\int_0^x\kappa(z,y)\,dz\Bigr)\beta(y)\U(y)\,dy\\ &\geq&\delta-\tau(x)\U(x)+\int_0^x\beta(y)\U(y)\,dy.
\end{eqnarray*} Consequently $$\tau(x)\U(x)\geq\delta+\int_0^x\f{\beta(y)}{\tau(y)}\tau(y)\U(y)\,dy$$ and, thanks to Gr\"onwall's lemma, $$\tau(x)\U(x)\geq\delta e^{\int_0^x\f{\beta(y)}{\tau(y)}dy}.$$ But assumption \eqref{as:betatauinf} ensures that for all $n\geq0,$ there is $A>0$ such that $$\f{\beta(x)}{\tau(x)}\geq\f nx,\qquad\forall x\geq A$$ and thus we have $$\tau(x)\U(x)\geq\delta x^n,\quad\forall x\geq A.$$ Thanks to \eqref{as:betatauspace} we can choose $n$ and $A$ such that $x^{-n}\tau(x)\leq \f\mu4$ for $x\geq A$ and then we have $$1=\int_0^R\U(x)\,dx\geq\int_A^R\U(x)\,dx\geq\delta\int_A^R\f{x^n}{\tau(x)}\,dx\geq\f2R(R-A),$$ which is a contradiction as soon as $R>2A$; so Lemma \ref{lm:lambdapositivity} holds for $R_0=2A.$ \qed \end{proof} \ \subsection{Limit as $\delta\to0$ for $\U_\eta^\delta$ and $\lb_\eta^\delta$} \label{subsec:estim} Fix $\eta$ and let $\delta\rightarrow0$ (then $R\to\infty$ since $\delta R=\f\mu2$). \paragraph{\it First estimate: $\lb_\eta^\delta$ upper bound.} Integrating equation \eqref{eq:truncated} between $0$ and $R,$ we find $$\lb_\eta^\delta\leq\delta+\int\beta(x)\U_\eta^\delta(x)\,dx;$$ the idea is then to prove a uniform estimate on $\int\beta\U_\eta^\delta.$ For this we begin by bounding the higher moments $\int x^\al\beta\U_\eta^\delta$ for $\al\geq m:=\max(2,\al_0+1).$\\ Let $\al\geq m;$ according to \eqref{as:kappa3} we have $$\int\f{x^\al}{y^\al}\kappa(x,y)\,dx\leq\int\f{x^2}{y^2}\kappa(x,y)\,dx\leq c<\f12.$$ Multiplying the equation on $\U_\eta^\delta$ by $x^\al$ and then integrating on $[0,R],$ we obtain for all $A\geq\eta$ \begin{eqnarray*} \int x^\al\bigl((1-2c)\beta(x)\bigr)\U_\eta^\delta(x)\,dx&\leq&\al\int x^{\al-1}\tau_\eta(x)\U_\eta^\delta(x)\,dx\\ &=&\al\int_{x\leq A}x^{\al-1}\tau_\eta(x)\U_\eta^\delta(x)\,dx+\al\int_{x\geq A}x^{\al-1}\tau(x)\U_\eta^\delta(x)\,dx\\ &\leq&\al A^{\al-1-\al_0}\sup_{x\in(0,A)}{\{x^{\al_0}\tau(x)\}}+\omega_{A,\al}\int x^\al\beta(x)\U_\eta^\delta(x)\,dx, \end{eqnarray*}
where $\omega_{A,\al}$ is a positive number chosen so that $\al\tau(x)\leq\omega_{A,\al}\, x\beta(x)$ for all $x\geq A.$ Thanks to \eqref{as:kappa3} and \eqref{as:betatauinf}, we can choose $A_\al$ large enough to have $\omega_{A_\al,\al}<1-2c.$ Thus we find \begin{equation}\label{eq:L1bound1}\forall\al\geq m,\,\exists A_\al:\ \forall\eta,\delta>0,\quad\int x^\al\beta(x)\U_\eta^\delta(x)\,dx\leq\f{\al {A_\al}^{\al-1-\al_0}\sup_{(0,A_\al)}{\{x^{\al_0}\tau(x)\}}}{1-2c-\omega_{A_\al,\al}}:=B_\al.\end{equation} The next step is to prove the same estimates for $0\leq\al<m,$ and for this we first give a bound on $\tau_\eta\U_\eta^\delta.$ We fix $\rho\in(0,1/2)$ and define $x_\eta>0$ as the unique point such that $\int_0^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}dy=\rho.$ It exists because $\beta$ is nonnegative and locally integrable, and $\tau_\eta$ is positive. Thanks to assumption \eqref{as:betatau0}, we know that $x_\eta\underset{\eta\to0}{\longrightarrow}x_0$ where $x_0>0$ satisfies $\int_0^{x_0}\f{\beta(y)}{\tau(y)}dy=\rho,$ so $x_\eta$ is bounded: $0<\underline x\leq x_\eta\leq\overline x.$ Then, integrating \eqref{eq:truncated} between $0$ and $x\leq x_\eta,$ we find \begin{eqnarray*} \tau_\eta(x)\U_\eta^\delta(x)&\leq&\delta+2\int_0^{x}\int\beta(y)\U_\eta^\delta(y)\kappa(z,y)\,dy\,dz\\ &\leq&\delta+2\int\beta(y)\U_\eta^\delta(y)\,dy\\ &=&\delta+2\int_0^{x_\eta}\beta(y)\U_\eta^\delta(y)\,dy+2\int_{x_\eta}^\infty\beta(y)\U_\eta^\delta(y)\,dy\\ &\leq&\delta+2\sup_{(0,x_\eta)}\{\tau_\eta\U_\eta^\delta\}\int_0^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\,dy+\f2{x_\eta^m}\int_0^\infty y^m\beta(y)\U_\eta^\delta(y)\,dy\\ &\leq&\delta+2\rho\sup_{(0,x_\eta)}\{\tau_\eta\U_\eta^\delta\}+\f2{x_\eta^m}B_m.
\end{eqnarray*} Consequently, taking $\delta\leq1$ for instance, we obtain \begin{equation}\label{eq:Linfbound1}\sup_{x\in(0,\underline x)}\tau_\eta(x)\U_\eta^\delta(x)\leq\f{1+2B_m/\underline x^m}{1-2\rho}:=C\end{equation} so $\tau_\eta\U_\eta^\delta$ is uniformly bounded in a neighborhood of zero.\\ Now we can prove a bound $B_\al$ for $x^\al\beta\U_\eta^\delta$ in the case $0\leq\al<m.$ Thanks to the estimates \eqref{eq:L1bound1} and \eqref{eq:Linfbound1} we have \begin{eqnarray}\label{eq:L1bound2} \int x^\al\beta(x)\U_\eta^\delta(x)\,dx&=&\int_0^{\overline x}x^\al\beta(x)\U_\eta^\delta(x)\,dx+\int_{\overline x}^Rx^\al\beta(x)\U_\eta^\delta(x)\,dx\nonumber\\ &\leq&\overline x^\al\sup_{(0,\overline x)}\{\tau_\eta\U_\eta^\delta\}\int_0^{\overline x}\f{\beta(y)}{\tau_\eta(y)}\,dy+\overline x^{\al-m}\int_{\overline x}^Rx^m\beta(x)\U_\eta^\delta(x)\,dx\nonumber\\ &\leq&C\rho\overline x^\al+B_m\overline x^{\al-m}:=B_\al. \end{eqnarray} Combining \eqref{eq:L1bound1} and \eqref{eq:L1bound2} we obtain \begin{equation}\label{eq:L1bound3}\forall\al\geq0,\,\exists B_\al:\ \forall\eta,\delta>0,\quad\int x^\al\beta(x)\U_\eta^\delta(x)\,dx\leq B_\al,\end{equation} and finally we bound $\lb_\eta^\delta$: \begin{equation}\label{eq:lambdaupperbound}\lb_\eta^\delta\leq\delta +\int\beta\U_\eta^\delta\leq\delta +B_0.\end{equation} So the family $\{\lb_\eta^\delta\}_\delta$ belongs to a compact interval and we can extract a convergent subsequence \mbox{$\lb_\eta^\delta\underset{\delta\to0}{\longrightarrow}\lb_\eta.$} \ \paragraph{\it Second estimate: $W^{1,1}$ bound for $x^\al\tau_\eta\U_\eta^\delta,\ \al\geq0.$} We use the estimate \eqref{eq:L1bound3}.
First we give an $L^\infty$ bound for $\tau_\eta\U_\eta^\delta$ by integrating \eqref{eq:truncated} between $0$ and $x$: \begin{equation}\label{eq:Linfbound2}\tau_\eta(x)\U_\eta^\delta(x)\leq\delta+2\int_0^R\beta(y)\U_\eta^\delta(y)\,dy\leq\delta+2B_0:=D_0.\end{equation} Then we bound $x^\al\tau_\eta\U_\eta^\delta$ in $L^1$ for $\al>-1.$ Assumption \eqref{as:betatauinf} ensures that there exists $X>0$ such that $\tau(x)\leq x\beta(x)$ for all $x\geq X,$ so we have for $R>X$ \begin{eqnarray*} \int x^\al\tau_\eta(x)\U_\eta^\delta(x)\,dx&\leq&\sup_{(0,X)}\{\tau_\eta\U_\eta^\delta\}\int_0^X x^\al\,dx+\int_X^Rx^{\al+1}\beta(x)\U_\eta^\delta(x)\,dx\\ &\leq&\sup_{(0,X)}\{\tau_\eta\U_\eta^\delta\}\f{X^{\al+1}}{\al+1}+B_{\al+1}:=C_\al. \end{eqnarray*} Finally \begin{equation}\label{eq:L1bound4} \forall\al>-1,\,\exists C_\al:\ \forall\eta,\delta>0,\quad\int x^\al\tau_\eta(x)\U_\eta^\delta(x)\,dx\leq C_\al \end{equation} and we also have that $x^\al\U_\eta^\delta$ is bounded in $L^1$ because $\tau\in{\mathcal P}$ (see assumption \eqref{as:betatauspace}).\\ A consequence of \eqref{eq:L1bound3} and \eqref{eq:L1bound4} is that $x^\al\tau_\eta\U_\eta^\delta$ is bounded in $L^\infty$ for all $\al\geq0.$ We already have \eqref{eq:Linfbound2}, and for $\al>0$ we multiply \eqref{eq:truncated} by $x^\al,$ integrate on $[0,x]$ and obtain $$ x^\al\tau_\eta(x)\U_\eta^\delta(x)\leq\al\int_0^R y^{\al-1}\tau_\eta(y)\U_\eta^\delta(y)\,dy+2\int_0^R y^\al\beta(y)\U_\eta^\delta(y)\,dy\leq\al C_{\al-1}+2B_\al:=D_\al, $$ which gives immediately \begin{equation}\label{eq:Linfbound3} \forall\al\geq0,\,\exists D_\al:\ \forall\eta,\delta>0,\quad\sup_{x>0}x^\al\tau_\eta(x)\U_\eta^\delta(x)\leq D_\al.
\end{equation} To conclude, we use the fact that the parameters and $\U_\eta^\delta$ are nonnegative, and we find, by the product rule, for $\al\geq0$ $$\int\bigl|\f\p{\p x}(x^\al\tau_\eta(x)\U_\eta^\delta(x))\bigr|dx\leq\al\int x^{\al-1}\tau_\eta(x)\U_\eta^\delta(x)\,dx+\int x^\al\bigl|\p_x(\tau_\eta(x)\U_\eta^\delta(x))\bigr|\,dx\nonumber$$ \begin{equation}\label{eq:W11bound}\hspace{4cm}\leq\al\int x^{\al-1}\tau_\eta(x)\U_\eta^\delta(x)\,dx+\lb_\eta^\delta\int x^\al\U_\eta^\delta(x)\,dx+3\int x^\al\beta(x)\U_\eta^\delta(x)\,dx \end{equation} and all the terms on the right-hand side are uniformly bounded thanks to the previous estimates.\\ \ Since the family $\{x^\al\tau_\eta\U_\eta^\delta\}_\delta$ is bounded in $W^{1,1}(\mathbb{R}^+)$ for all $\al\geq0,$ and since $\tau_\eta$ is positive and belongs to ${\mathcal P},$ we can extract from $\{\U_\eta^\delta\}_\delta$ a subsequence which converges in $L^1(\mathbb{R}^+)$ as $\delta\to0.$ Passing to the limit in equation \eqref{eq:truncated}, we find that \begin{equation}\label{eq:truncated2}\left\{\begin{array}{l}\displaystyle\f{\p}{\p x} (\tau_\eta(x) \U_\eta(x))+(\beta(x)+\lb_{\eta})\U_\eta(x) = 2 \int_x^\infty\beta(y)\kappa(x,y) \U_\eta(y)\,dy,\\ \\ \U_\eta(0)=0,\quad\U_\eta(x)\geq0,\quad\int\U_\eta=1,\end{array}\right.\end{equation} with $\lb_\eta\geq0.$ \ \subsection{Limit as $\eta\to0$ for $\U_\eta$ and $\lb_\eta$} All the estimates \eqref{eq:L1bound1}-\eqref{eq:W11bound} remain true for $\delta=0.$ So we still know that the family $\{x^\al\tau_\eta\U_\eta\}_\eta$ belongs to a compact set of $L^1,$ but not necessarily $\{\U_\eta\}_\eta,$ because in the limit $\tau$ can vanish at zero.
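The truncation strategy also suggests a simple numerical scheme: discretize the truncated equation on $[0,R]$ and run a normalized power (time-marching) iteration, whose growth rate approximates $\lb_\eta^\delta$. The following sketch is ours and purely illustrative (upwind transport, explicit Euler, $\delta=0$); it assumes $\tau\equiv1$, $\beta(x)=x$ and the uniform kernel, for which the first example of Section \ref{se:coefficients} gives the exact value $\lb=\sqrt{\beta_0\tau_0}=1$:

```python
import numpy as np

# Normalized power iteration on a truncated grid (illustrative discretization):
# tau(x)=1, beta(x)=x, kappa(x,y)=1/y on [0,y], domain truncated to [0,R].
R, N = 12.0, 1200
dx = R / N
x = (np.arange(N) + 0.5) * dx           # cell centers
beta = x                                # beta(x) = x
u = np.exp(-x)
u /= u.sum() * dx                       # normalize the initial guess
dt = 0.4 * dx                           # CFL number for the transport part

lam = 0.0
for _ in range(30000):
    # gain: 2 int_x^R beta(y) kappa(x,y) u(y) dy = 2 int_x^R u(y) dy here
    gain = 2.0 * np.flip(np.cumsum(np.flip(u))) * dx
    # upwind discretization of d/dx(tau u) with inflow value u(0) = 0
    flux = np.concatenate(([0.0], u))   # tau = 1
    u = u + dt * (-(flux[1:] - flux[:-1]) / dx - beta * u + gain)
    m = u.sum() * dx                    # mass growth over one step
    lam = (m - 1.0) / dt                # approximates the eigenvalue
    u /= m                              # renormalize (power iteration)
```

With this resolution the computed growth rate lands within a few percent of the exact $\lb=1$, and the renormalized profile approaches the explicit eigenvector $\U$; refining the grid improves the agreement.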
We need one more estimate to study the limit $\eta\to0.$ \paragraph{\it Third estimate: $L^\infty$ bound for $x^\al\tau_\eta\U_\eta,\ \al\geq-\gamma$.} We already know that $x^\al\tau_\eta\U_\eta$ is bounded for $\al\geq0.$ So, to prove the bound, it only remains to prove that $x^{-\gamma}\tau_\eta\U_\eta$ is bounded in a neighborhood of zero. Define $f_\eta:x\mapsto\sup_{(0,x)}\tau_\eta\U_\eta.$ If we integrate \eqref{eq:truncated2} between $0$ and $x'<x,$ we find $$\tau_\eta(x')\U_\eta(x')\leq2\int_0^{x'}\int\beta(y)\U_\eta(y)\kappa(z,y)\,dy\,dz\leq2\int_0^x\int\beta(y)\U_\eta(y)\kappa(z,y)\,dy\,dz$$ and so for all $x$ $$f_\eta(x)\leq2\int_0^x\int\beta(y)\U_\eta(y)\kappa(z,y)\,dy\,dz.$$ We consider $x_\eta$ and $\underline x$ as defined in the first estimate and, using \eqref{as:kappatau} and \eqref{as:betatau0}, we have for all $x<x_\eta$ \begin{eqnarray*} f_\eta(x)&\leq&2\int_0^x\int\beta(y)\U_\eta(y)\kappa(z,y)\,dy\,dz\\ &=&2\int\beta(y)\U_\eta(y)\int_0^x\kappa(z,y)\,dz\,dy\\ &\leq&2\int_0^\infty\beta(y)\U_\eta(y)\min\Bigl(1,C\Bigl(\f x y\Bigr)^\gamma\Bigr)\,dy\\ &=&2\int_0^x\beta(y)\U_\eta(y)\,dy+2C\int_x^{x_\eta}\beta(y)\U_\eta(y)\Bigl(\f x y\Bigr)^\gamma \,dy+2C\int_{x_\eta}^\infty\beta(y)\U_\eta(y)\Bigl(\f x y\Bigr)^\gamma \,dy\\ &=&2\int_0^x\f{\beta(y)}{\tau_\eta(y)}\tau_\eta(y)\U_\eta(y)\,dy+2Cx^\gamma\int_x^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\f{\tau_\eta(y)\U_\eta(y)}{y^\gamma}\,dy+2C\int_{x_\eta}^\infty\beta(y)\U_\eta(y)\Bigl(\f x y\Bigr)^\gamma \,dy\\ &\leq&2f_\eta(x)\int_0^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\,dy+2Cx^\gamma\int_x^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\f{f_\eta(y)}{y^\gamma}\,dy+2C\|\beta\U_\eta\|_{L^1}\f{x^\gamma}{x_\eta^\gamma}.
\end{eqnarray*} We set ${\mathcal V}_\eta(x)=x^{-\gamma}f_\eta(x)$ and obtain $$(1-2\rho){\mathcal V}_\eta(x)\leq K+2C\int_x^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}{\mathcal V}_\eta(y)\,dy,$$ where $K:=2C\|\beta\U_\eta\|_{L^1}x_\eta^{-\gamma}\leq2CB_0\underline x^{-\gamma}$ is uniformly bounded. Hence, using Gr\"onwall's lemma, we find that $\displaystyle{\mathcal V}_\eta(x)\leq\f {Ke^{\f{2C\rho}{1-2\rho}}}{1-2\rho}$ and consequently \begin{equation}\label{eq:0Linfbound}x^{-\gamma}\tau_\eta(x)\U_\eta(x)\leq\f {Ke^{\f{2C\rho}{1-2\rho}}}{1-2\rho}:=\wt C,\quad\forall x\in[0,\underline x].\end{equation} \ This last estimate allows us to bound $\U_\eta$ by $\f{x^\gamma}{\tau},$ which is in $L^1_0$ by assumption \eqref{as:kappatau}. Thanks to the second estimate, we also have that $x^\al\U_\eta$ is bounded in $L^1,$ and so, thanks to the Dunford-Pettis theorem (see \cite{Brezis} for instance), $\{\U_\eta\}_\eta$ belongs to an $L^1$-weakly compact set. Thus we can extract a subsequence which converges weakly in $L^1$ toward $\U.$ Moreover, for all $\e>0,\ \{x^\al\U_\eta\}_\eta$ is bounded in $W^{1,1}([\e,\infty))$ for all $\al\geq1$ thanks to \eqref{eq:W11bound}, so the convergence is strong on $[\e,\infty).$ Then we write \begin{eqnarray*} \int|\U_\eta-\U|&=&\int_0^\e|\U_\eta-\U|+\int_\e^\infty|\U_\eta-\U|\\ &\leq&2\wt C\int_0^\e\f{x^\gamma}{\tau(x)}+\int_\e^\infty|\U_\eta-\U|. \end{eqnarray*} The first term on the right-hand side is small for $\e$ small because $\f{x^\gamma}{\tau}\in L^1_0,$ and then the second term is small for $\eta$ small because of the strong convergence. Finally $\U_\eta\underset{\eta\to0}{\longrightarrow}\U$ strongly in $L^1(\mathbb{R}^+)$ and $\U$ is a solution of the eigenproblem \eqref{eq:eigenproblem}. \ \subsection{Limit as $\delta,\eta\to0$ for $\phi_\eta^\delta$} We prove uniform estimates on $\phi_\eta^\delta$ which are enough to pass to the limit and prove the result.
\paragraph{\it Fourth estimate: uniform $\phi_\eta^\delta$-bound on $[0,A]$.} Let $A>0;$ our first goal is to prove the existence of a constant $C_0(A)$ such that $$\forall\eta,\delta,\qquad\sup_{(0,A)}{\phi_\eta^\delta}\leq C_0(A).$$ We divide the equation on $\phi_\eta^\delta$ by $\tau_\eta$ and integrate between $x$ and $x_\eta,$ with $0<x<x_\eta,$ where $x_\eta,$ bounded between $\underline x$ and $\overline x,$ is defined in the first estimate. Considering $\delta<\f{\mu(1-2\rho)}{\overline x}$ (fulfilled for $R>\f{\overline x}{2(1-2\rho)}$ since $\delta=\f{\mu}{2R}$), we find \begin{eqnarray*} \phi_\eta^\delta(x)&\leq&\phi_\eta^\delta(x_\eta)+2\int_x^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\int_0^y\kappa(z,y)\phi_\eta^\delta(z)\,dz+x_\eta\f\delta \mu\phi_\eta^\delta(0)\\ &\leq&\phi_\eta^\delta(x_\eta)+\sup_{(0,x_\eta)}\{\phi_\eta^\delta\}\Bigl(2\int_0^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\int_0^y\kappa(z,y)\,dz+x_\eta\f\delta\mu\Bigr) \end{eqnarray*} and we obtain $$\sup_{x\in(0,\underline x)}{\phi_\eta^\delta(x)}\leq\f 1 {1-2\rho-\delta\overline x/\mu}\phi_\eta^\delta(x_\eta).$$ Using the decay of $\phi_\eta^\delta(x)e^{-\int_{\underline x}^x\f{\beta+\lb_\eta^\delta}{\tau_\eta}},$ there exists $C(A)$ such that $$\sup_{x\in(0,A)}{\phi_\eta^\delta(x)}\leq C(A)\phi_\eta^\delta(x_\eta).$$ Noticing that $\int\phi_\eta^\delta(x)\U_\eta^\delta(x)dx =1,$ we conclude $$1\geq\int_0^{x_\eta}\phi_\eta^\delta(x)\U_\eta^\delta(x)dx\geq \phi_\eta^\delta(x_\eta)\int_0^{x_\eta}e^{-\int_x^{x_\eta}\f{\beta+\lb_\eta^\delta}{\tau_\eta}}\U_\eta^\delta(x)\,dx,$$ so, since $x_\eta\to x_0$ and $\int_0^{x_0}\U(x)dx>0$ (thanks to Lemma \ref{lm:positivity} and because $x_0>b\geq a$), we have \begin{equation}\label{eq:phi0bound}\sup_{(0,A)}{\phi_\eta^\delta}\leq C_0(A).\end{equation} \ \paragraph{\it Fifth estimate: uniform $\phi_\eta^\delta$-bound on $[A,\infty)$.} Following an idea introduced in \cite{PR}, we notice that the equation in \eqref{eq:truncated} satisfied by $\phi_\eta^\delta$ is a
transport equation and therefore satisfies the maximum principle (see Lemma \ref{lm:supersolution} in the Appendix). It thus remains to build a supersolution $\overline\phi,$ positive at $x=R,$ to conclude that $\phi_\eta^\delta(x)\leq\overline\phi(x)$ on $[0,R].$ We cannot do this on all of $[0,R],$ but only on a subinterval $[A_0,R].$ So we begin with an auxiliary function $\overline\vp(x)=x^k+\theta,$ with $k$ and $\theta$ positive numbers to be determined. We have to check that on $[A_0,R]$ $$-\tau(x)\f{\p}{\p x}\overline\vp(x)+(\lb_\eta^\delta+\beta(x))\overline\vp(x)\geq 2\beta(x)\int\kappa(y,x)\overline\vp(y)\,dy+\delta\phi_\eta^\delta(0),$$ {\it i.e.} $$-k\tau(x)x^{k-1}+(\lb_\eta^\delta+\beta(x))\overline\vp(x)\geq\Bigl(2\theta+2\int\kappa(y,x)y^k\,dy\Bigr)\beta(x)+\delta\phi_\eta^\delta(0).$$ For $k\geq2,$ we know that $\int\kappa(y,x)\f{y^k}{x^k}\,dy\leq c<1/2,$ so it is sufficient to prove that there exists $A_0>0$ such that we have \begin{equation}\label{eq:sursolution1}-k\tau(x)x^{k-1}+(\lb_\eta^\delta+\beta(x))(x^k+\theta)\geq(2\theta+2cx^k)\beta(x)+\delta C_0(1)\end{equation} for all $x>A_0,$ where $C_0$ is defined in \eqref{eq:phi0bound}. Dividing \eqref{eq:sursolution1} by $x^{k-1}\tau(x),$ we see that if we have \begin{equation}\label{eq:sursolution2}(1-2c)\f{x\beta(x)}{\tau(x)}\geq k+\f{2\theta\beta(x)+\delta C_0(1)}{x^{k-1}\tau(x)},\end{equation} then \eqref{eq:sursolution1} holds true.
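To make the choice of $A_0$ concrete, condition \eqref{eq:sursolution2} can be located numerically in a model case. The sketch below is ours, with hypothetical values $\tau\equiv1$, $\beta(x)=x$, $c=1/3$ (uniform kernel), $k=2$, $\theta=1$ and $\delta C_0(1)=1$ chosen only for illustration:

```python
import numpy as np

# Model case for (sursolution2): tau(x)=1, beta(x)=x, c=1/3 (uniform kernel),
# with the illustrative choices k=2, theta=1 and delta*C0(1)=1.
c, k, theta, d = 1.0 / 3.0, 2, 1.0, 1.0
x = np.arange(0.1, 50.0, 0.1)
lhs = (1.0 - 2.0 * c) * x * x                     # (1-2c) * x*beta(x)/tau(x)
rhs = k + (2.0 * theta * x + d) / x ** (k - 1)    # k + (2*theta*beta + d)/(x^{k-1} tau)
ok = lhs >= rhs
A0 = x[np.argmax(ok)]                             # first grid point where it holds
# lhs - rhs is increasing in x here, so (sursolution2) holds on [A0, infinity)
```

With these values the condition first holds around $x\approx3.6$, and, since the difference is monotone, it remains valid on the whole half-line beyond, as the construction requires.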
Thanks to assumptions \eqref{as:betatauspace} and \eqref{as:betatauinf}, we know that there exists $k>0$ such that for any $\theta>0,$ there exists $A_0>0$ for which \eqref{eq:sursolution2} is true on $[A_0,+\infty).$ Then we conclude by choosing the supersolution $\overline\phi(x)=\f{C_0(A_0)}\theta\overline\vp(x),$ so that $$\overline\phi(x)\geq\phi_\eta^\delta(x)\quad \text{on} \ [0,A_0],$$ and on $[A_0,R]$ we have \begin{equation}\label{eq:supersol}\left\{\begin{array}{l} -\tau(x)\f{\p}{\p x}\overline\phi(x)+(\lb_\eta^\delta+\beta(x))\overline\phi(x) \geq 2\beta(x)\int_0^x\kappa(y,x)\overline\phi(y)\,dy+\delta\phi_\eta^\delta(0), \\ \\ \overline\phi(R)>0, \end{array}\right.\end{equation} so $\overline\phi$ is a supersolution to the equation satisfied by $\phi_\eta^\delta.$ Therefore $\phi_\eta^\delta\leq\overline\phi,$ uniformly in $\eta$ and $\delta,$ and we get \begin{equation}\label{eq:phibound}\exists k,\theta,C\ s.t.\ \forall\eta,\delta,\quad\phi_\eta^\delta(x)\leq Cx^k+\theta.\end{equation} \ Equation \eqref{eq:truncated} and the fact that $\phi_\eta^\delta$ is uniformly bounded in $L^\infty_{loc}(\mathbb{R}^+)$ give immediately that $\p_x\phi_\eta^\delta$ is uniformly bounded in $L^\infty_{loc}(\mathbb{R}^+,\tau(x)dx),$ hence in $L^\infty_{loc}(0,\infty)$ thanks to \eqref{as:taupositivity}. \ Then we can extract a subsequence of $\{\phi_\eta^\delta\}$ which converges in ${\mathcal C}^0_{loc}(0,\infty)$ toward $\phi.$ Now we check that $\phi$ satisfies the adjoint equation of \eqref{eq:eigenproblem}.
We consider the terms of \eqref{eq:truncated} one after another.\\ First, $(\lb_\eta^\delta+\beta(x))\phi_\eta^\delta(x)$ converges to $(\lb+\beta(x))\phi(x)$ in $L^\infty_{loc}.$\\ For $\p_x\phi_\eta^\delta,$ we have an $L^\infty$ bound on each compact subset of $(0,\infty),$ so it converges $L^\infty$-weak$*$ toward $\p_x\phi.$\\ For the last term we write, for all $x>0,$ $$\int_0^x\kappa(y,x)(\phi_\eta^\delta(y)-\phi(y))\,dy\leq\|\phi_\eta^\delta-\phi\|_{L^\infty(0,x)}\underset{\eta,\delta\to0}{\longrightarrow}0.$$ The fact that $\int\phi\U=1$ comes from the $L^\infty$--$L^1$ duality once written as $$1=\int\phi_\eta^\delta(x)\U_\eta^\delta(x)\,dx=\int\f{\phi_\eta^\delta(x)}{1+x^k}(1+x^k)\U_\eta^\delta(x)\,dx\longrightarrow\int\f{\phi(x)}{1+x^k}(1+x^k)\U(x)\,dx=\int\phi\U.$$ \bigskip\bigskip At this stage we have found $(\lb,\U,\phi)\in\mathbb{R}^+\times L^1(\mathbb{R}^+)\times{\mathcal C}(\mathbb{R}^+)$ solution of \eqref{eq:eigenproblem}. The estimates announced in Theorem \ref{th:eigenelements} also follow from these uniform estimates. It remains to prove that $\lb>0$ and the uniqueness. \ \subsection{Proof of $\lb>0$} We prove a little more, namely that \begin{equation}\label{eq:lowerbound}\lb\geq\f12\sup_{x\geq0}\{\tau(x)\U(x)\}. \end{equation} We integrate the first equation of \eqref{eq:eigenproblem} between $0$ and $x$ and find \begin{eqnarray*} 0\leq\lb\int_0^x\U(y)\,dy&=&-\tau(x)\U(x)-\int_0^x\beta(y)\U(y)\,dy+2\int_0^x\int_z^\infty\beta(y)\kappa(z,y)\U(y)\,dy\,dz\\ &\leq&-\tau(x)\U(x)+2\int_0^\infty\int_z^\infty\beta(y)\kappa(z,y)\U(y)\,dy\,dz\\ &=&-\tau(x)\U(x)+2\int_0^\infty\beta(y)\U(y)\,dy\\ &=&-\tau(x)\U(x)+2\lb, \end{eqnarray*} where we used $\int\beta\U=\lb$ (integrate the first equation of \eqref{eq:eigenproblem} over $\mathbb{R}^+$ and use $\int\U=1$). Hence $2\lb\geq \tau(x)\U(x)$ and \eqref{eq:lowerbound} is proved. \ \subsection{Uniqueness} We follow the idea of \cite{M1}. Let $(\lb_1,\U_1,\phi_1)$ and $(\lb_2,\U_2,\phi_2)$ be two solutions to the eigenproblem \eqref{eq:eigenproblem}.
First we have \begin{eqnarray*} \lb_1\int\U_1(x)\phi_2(x)\,dx&=&\int\Bigl(-\p_x(\tau(x)\U_1(x))-\beta(x)\U_1(x)+2\int_x^\infty\beta(y)\kappa(x,y)\U_1(y)\,dy\Bigr)\phi_2(x)\,dx\\ &=&\int\Bigl(\tau(x)\p_x\phi_2(x)-\beta(x)\phi_2(x)+2\beta(x)\int_0^x\kappa(y,x)\phi_2(y)\,dy\Bigr)\U_1(x)\,dx\\ &=&\lb_2\int\U_1(x)\phi_2(x)\,dx \end{eqnarray*} and hence $\lb_1=\lb_2=\lb,$ because $\int\U_1\phi_2>0$ thanks to Lemma \ref{lm:positivity}. \\ For the eigenvectors we use the General Relative Entropy method introduced in \cite{MMP1,MMP2}. For $C>0,$ we test the equation satisfied by $\U_1$ against $\sgn\bigl(\f{\U_1}{\U_2}-C\bigr)\phi_1,$ $$0=\int\Bigl[\p_x(\tau(x)\U_1(x))+(\lb+\beta(x))\U_1(x)-2\int_x^\infty\beta(y)\kappa(x,y)\U_1(y)\,dy\,\Bigr]\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx.$$ Differentiating the product $\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\tau(x)\U_2(x)\phi_1(x),$ we find $$\begin{array}{l}\displaystyle \int\p_x(\tau(x)\U_1(x))\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx=\int\p_x\Bigl(\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\tau(x)\U_2(x)\phi_1(x)\Bigr)\,dx\\ \\ \displaystyle\hspace{1cm}+\int\p_x(\tau(x)\U_2(x))\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx-\int\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\p_x(\tau(x)\U_2(x)\phi_1(x))\,dx \end{array}$$ and then $$\begin{array}{l}\displaystyle \int\p_x(\tau(x)\U_1(x))\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx=\\ \\ \displaystyle\hspace{2cm}2\int\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\Bigl[\int_0^x\beta(x)\kappa(y,x)\U_2(x)\phi_1(y)\,dy-\int_x^\infty\beta(y)\kappa(x,y)\U_2(y)\phi_1(x)\,dy\Bigr]\,dx\\ \\ \displaystyle\hspace{4cm}+2\int\int_x^\infty\beta(y)\kappa(x,y)\U_2(y)\,dy\,\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx\\ \\ \displaystyle\hspace{6cm}-\int(\lb+\beta(x))\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\U_2(x)\phi_1(x)\,dx, \end{array}$$ \ $$\begin{array}{l}\displaystyle \int\p_x(\tau(x)\U_1(x))\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx=\\ \\ 
\displaystyle\hspace{2cm}2\int\int\beta(y)\kappa(x,y)\Bigl[\Bigl|\f{\U_1}{\U_2}(y)-C\Bigr|-\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\Bigr]\U_2(y)\phi_1(x)\,dxdy\\ \\ \displaystyle\hspace{4cm}+2\int\int_x^\infty\beta(y)\kappa(x,y)\U_2(y)\,dy\,\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx\\ \\ \displaystyle\hspace{6cm}-\int(\lb+\beta(x))\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\U_2(x)\phi_1(x)\,dx. \end{array}$$ \ So $$\begin{array}{l}\displaystyle 0=2\int\int\beta(y)\kappa(x,y)\Bigl[\Bigl|\f{\U_1}{\U_2}(y)-C\Bigr|-\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\Bigr]\U_2(y)\phi_1(x)\,dxdy\\ \\ \displaystyle\hspace{4cm}+2\int\int_x^\infty\beta(y)\kappa(x,y)\U_2(y)\,dy\,\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx\\ \\ \displaystyle\hspace{6cm}-2\int\int_x^\infty\beta(y)\kappa(x,y)\U_1(y)\,dy\,\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx \end{array}$$ \ $$0=\int\int\beta(y)\kappa(x,y)\U_2(y)\Bigl|\f{\U_1}{\U_2}(y)-C\Bigr|\Bigl[1-\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\sgn\Bigl(\f{\U_1}{\U_2}(y)-C\Bigr)\Bigr]\phi_1(x)\,dxdy.$$ \ Hence $\Bigl[1-\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\sgn\Bigl(\f{\U_1}{\U_2}(y)-C\Bigr)\Bigr]=0$ on the support of $\kappa(x,y)$ for all $C,$ thus $\f{\U_1}{\U_2}(x)=\f{\U_1}{\U_2}(y)$ on the support of $\kappa(x,y)$ and \begin{equation}\label{eq:uniqueness}\p_x\f{\U_1}{\U_2}(x)=\int\beta(y)\kappa(x,y)\Bigl(\f{\U_1}{\U_2}(y)-\f{\U_1}{\U_2}(x)\Bigr)\f{\U_2(y)}{\U_2(x)}\,dy=0\end{equation} so $\displaystyle\f{\U_1}{\U_2}\equiv \mathrm{cst}=1.$ \ We can prove in the same way that $\phi_1=\phi_2,$ even though we may have $\U\equiv0$ on $[0,m]$ with $m>0.$ Indeed, in this case we know that $\beta\equiv0$ on $[0,m],$ and so $$\phi_i(x)=\phi_i(0)e^{\int_0^x\f{\lb}{\tau(s)}ds}\quad\forall x\in[0,m],\ i\in\{1,2\}.$$ \ \section{Conclusion, Perspectives}\label{se:csq} We have proved the existence and uniqueness of eigenelements for the aggregation-fragmentation equation \eqref{eq:temporel} under assumptions on the parameters that are as general as possible, in 
order to cover the widest variety of biological and physical models. It gives access to the asymptotic behaviour of the solution through the General Relative Entropy principle. \ A natural continuation of this work is to study the dependency of the eigenvalue $\lb$ on the parameters $\tau$ and $\beta$ (see \cite{M2}). For instance, our assumptions allow $\tau$ to vanish at zero, which is a necessary condition to ensure that $\lb$ tends to zero when the fragmentation tends to infinity. Such results give valuable information on the qualitative behaviour of the solution. \ Another possible extension of the present work is to prove the existence of eigenelements in the case of time-periodic parameters, using Floquet theory, and then to compare the new eigenvalue $\lb_F$ with the time-independent one $\lb$ (see \cite{Lepoutre}). Such studies can help to choose the right strategy to optimize, for instance, the total mass $\int x u(t,x) dx$ in the case of prion proliferation (see \cite{CL1}) or, on the contrary, to minimize the total population $\int u(t,x) dx$ in the case of cancer therapy (see \cite{Lepoutre, Clairambault}). \ Finally, this eigenvalue problem could be used to recover some of the equation parameters, such as $\tau$ and $\beta,$ from the knowledge of the asymptotic profile of the solution, as introduced in \cite{DPZ, PZ} in the case of symmetric division ($\tau=1$ and $\kappa=\delta_{x=\frac{y}{2}}$), by the use of inverse problem techniques. The method of \cite{PZ} has to be adapted to our general case, for instance to model prion proliferation or to recover the aggregation rate $\tau$; this is another direction for future research. \vspace{1cm} {\bf Acknowledgments} The authors warmly thank Beno\^it Perthame for his valuable help and corrections. \newpage{\noindent\LARGE\bf Appendix}
\section*{Methods} Samples were prepared based on a 15~nm SiGe/Si/SiGe quantum well grown in an ultrahigh-vacuum chemical-vapor-deposition (UHVCVD) apparatus.\cite{lu09,lu10} The Hall-bar samples, with a width of 50~$\mu$m and a distance of 150~$\mu$m between the potential probes, were patterned using standard photo-lithography. Contacts consisted of AuSb alloy, deposited in a thermal evaporator in vacuum and annealed for 5 minutes in N$_2$ atmosphere at 440$^\circ$C. Next, an approximately 300~nm thick layer of SiO was deposited in a thermal evaporator, and a $>20$~nm thick Al gate was deposited on top of the SiO. No mesa etching was used, and the 2D electron gas was created in a way similar to silicon MOSFETs (for details, see Refs.~\cite{melnikov14,melnikov15}). The maximum electron mobility in our samples reached 240~m$^2$/Vs, which is the highest mobility reported for this electron system.\cite{melnikov15} Measurements were made in an Oxford TLM-400 dilution refrigerator in the temperature range 0.03 -- 1.2~K. The resistance was measured by a standard four-terminal lock-in technique in the frequency range 1 -- 11~Hz. The applied currents varied in the range 0.5 -- 4~nA. We used saturating infrared illumination to improve the quality of the contacts and increase the electron mobility. This did not affect the electron density at a fixed gate voltage.
\section{Introduction} In longitudinal studies, measurements from the same individuals (units) are repeatedly taken over time. However, individuals may be lost to follow-up or may fail to show up at some of the planned measurement occasions, leading to attrition (also referred to as \emph{dropout}) and intermittent missingness, respectively. \citet{rub1976} provides a well-known taxonomy for mechanisms that generate incomplete sequences. If, conditional on the observed covariates, the probability of a missing response depends neither on the observed nor on the missing responses, the data are said to be missing completely at random (MCAR). Data are missing at random (MAR) if, conditional on the observed data (both covariates and responses), the missingness does not depend on the non-observed responses. When the previous assumptions do not hold, that is when, conditional on the observed data, the mechanism leading to missing data still depends on the unobserved responses, data are referred to as missing not at random (MNAR). In the context of likelihood inference, when the parameters in the measurement and in the missingness processes are distinct, processes leading either to MCAR or MAR data may be ignored; when either the parameter spaces are not distinct or the missing data process is MNAR, missing data are non-ignorable (NI). Only when the ignorability property is satisfied can standard (likelihood) methods be used to obtain consistent parameter estimates. Otherwise, some form of joint modeling of the longitudinal measurements and the missingness process is required. See \citet{litrub2002} for a comprehensive review of the topic. For this purpose, in the following, we will focus on the class of Random Coefficient Based Dropout Models \citep[RCBDMs - ][]{Little1995}. 
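The practical consequence of this taxonomy can be illustrated with a minimal simulation (an illustrative sketch, not part of the study analyzed in this paper; the Gaussian response and the logistic missingness rule are made-up assumptions): under MCAR the observed-data mean is unbiased for the complete-data mean, while under an MNAR mechanism that depends on the unobserved value itself, it is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
y = rng.normal(loc=0.0, scale=1.0, size=n)   # complete (partly unobserved) responses

# MCAR: each response missing with probability 0.3, independently of everything
mcar_obs = rng.random(n) > 0.3

# MNAR: larger responses are more likely to be missing
p_miss = 1 / (1 + np.exp(-(y - 0.5)))        # depends on the unobserved value y itself
mnar_obs = rng.random(n) > p_miss

print("complete mean:", y.mean())
print("MCAR observed mean:", y[mcar_obs].mean())   # close to the complete mean
print("MNAR observed mean:", y[mnar_obs].mean())   # biased downward
```

The MNAR observed-data mean is systematically shifted, which is exactly the situation where ignoring the missingness process invalidates standard likelihood inference.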
In this framework, separate (conditional) models are built for the two partially observed processes, and the link between them is due to sharing common or dependent individual- (and possibly outcome-) specific random coefficients. The model structure is completed by assuming that the random coefficients are drawn from a given probability distribution. Obviously, a choice is needed to define such a distribution and, in the past years, the literature has focused on both parametric and nonparametric specifications. Frequently, the random coefficients are assumed to be Gaussian \citep[e.g.][]{ver2002, gao2004}, but this assumption has been questioned by several authors, see e.g. \citet{sch1999}, since the resulting inference can be sensitive to it, especially in the case of short longitudinal sequences. For this reason, \citet{alf2009} proposed to leave the random coefficient distribution unspecified, defining a semi-parametric model where the longitudinal and the dropout processes are linked through dependent (discrete) random coefficients. \cite{tso2009} suggested following a similar approach for handling intermittent, potentially non-ignorable, missing data. A similar approach to deal with longitudinal Gaussian data subject to missingness was proposed by \citet{Beunc2008}, where a finite mixture of mixed-effects regression models for the longitudinal and the dropout processes was discussed. Further generalizations in the shared parameter model framework were proposed by \citet{cre2011}, who discussed an approach based on \emph{partially} shared individual (and outcome) specific random coefficients, and by \citet{bart2015}, who extended standard latent Markov models to handle potentially informative dropout via shared discrete random coefficients. 
In the present paper, the association structure between the measurement and the dropout processes is based on a random coefficient distribution which is left completely unspecified and estimated through a discrete distribution, leading to a (bi-dimensional) finite mixture model. The adopted bi-dimensional structure allows the bivariate distribution for the random coefficients to reduce to the product of the corresponding marginals when the dropout mechanism is ignorable. Therefore, a distinctive feature of the proposed modeling approach, when compared to standard finite mixture models, is that the MNAR specification properly nests the MAR/MCAR ones, and this allows a straightforward (local) sensitivity analysis. We propose to explore the sensitivity of parameter estimates in the longitudinal model to the assumptions on non-ignorability of the dropout process by deriving an appropriate version of the so-called \emph{index of sensitivity to non-ignorability} (ISNI) introduced by \cite{trox2004} and \cite{ma2005}, considering different perturbation scenarios. {The structure of the paper is as follows. In section \ref{sec:2} we introduce the motivating application, the Leiden 85+ study, concerning the dynamics of cognitive functioning in the elderly. Section \ref{sec:3} discusses general random coefficient based dropout models, while our proposal is detailed in section~\ref{sec:4}. Sections \ref{sec:5}-\ref{sec:6} detail the proposed EM algorithm for maximum likelihood estimation of model parameters and the index of local sensitivity we propose. Section \ref{sec:7} provides the application of the proposed model to data from the motivating example, using either MAR or MNAR assumptions, and the results from the sensitivity analysis. The last section contains concluding remarks. 
} \section{Motivating example: Leiden 85+ data} \label{sec:2} The motivating data come from the Leiden 85+ study, a retrospective study involving 705 inhabitants of Leiden (the Netherlands) who reached the age of 85 years between September 1997 and September 1999. The study aimed at identifying demographic and genetic determinants of the dynamics of cognitive functioning in the elderly. Several covariates collected at the beginning of the study were considered: gender {(female is the reference category)}, educational status distinguishing between primary {(reference category)} or higher education, and plasma Apolipoprotein E (APOE) genotype. As regards the educational level, this was determined by the number of years each subject went to school; primary education corresponds to less than 7 years of schooling. As regards the APOE genotype, the three largest groups were considered: $\epsilon2,\epsilon3$, and $\epsilon 4$. The latter allele is known to be linked to an increased risk of dementia, whereas $\epsilon 2$ allele carriers are relatively protected. Only 541 subjects present complete covariate information and will be considered in the following. Study participants were visited yearly until the age of $90$ at their place of residence, and face-to-face interviews were conducted through a questionnaire whose items are designed to assess orientation, attention, language skills and the ability to perform simple actions. The Mini Mental State Examination index, in the following MMSE \citep{fol1975}, is obtained by summing the scores on the items of the questionnaire designed to assess potential cognitive impairment. The observed values are integers ranging between $0$ and $30$ (maximum total score). A number of enrolled subjects dropped out prematurely, because of poor health conditions or death. In Table \ref{tab1}, we report the total number of available measures for each follow-up visit. 
Also, we report the number (and percentage) of participants who left the study between the current and the subsequent occasion, distinguishing between those who dropped out and those who died. As can be seen, less than half of the study participants present complete longitudinal sequences ($49\%$), and this is mainly due to death ($44\%$ of the subjects died during the follow-up). \begin{table}[h] \begin{center} \caption{Available measures per follow-up visit and number (percentage) of subjects leaving the study between subsequent occasions due to poor health conditions or death} \label{tab1} \vspace{1mm} \begin{tabular}{l c c c c c } \hline Follow-up & Total & Complete (\%) & Do not (\%) & Die (\%) \\ age & & & participate & \\ \hline 85-86 & 541 & 484 (89.46) & 9 (1.66) & 48 (8.87) \\ 86-87 & 484 & 422 (87.19) & 3 (0.62) & 59 (12.19) \\ 87-88 & 422 & 373 (88.39) & 2 (0.47) & 47 (11.14)\\ 88-89 & 373 & 318 (85.25) & 6 (1.61) & 49 (13.14) \\ 89-90 & 318 & 266 (83.65) & 15 (4.72) & 37 (11.63) \\ \hline Total & 541 & 266 (49.17) & 35 (6.47) & 240 (44.36) \\ \hline \end{tabular} \end{center} \end{table} With the aim of understanding how the MMSE score evolves over time, we show in Figure \ref{fig:mean} the corresponding overall mean value across follow-up visits. We also represent the evolution of the mean MMSE stratified by participation in the study (completer, dropout/death before next occasion). As is clear, cognitive functioning levels in individuals who die are much lower than those of subjects who drop out for other reasons or participate until the end of the study. The same quantities are also displayed for the transform $\log[1+(30-MMSE)]$, which decreases as the MMSE score increases and will be used in the following, as it avoids the well-known ceiling and floor effects that are usually encountered with this kind of index. 
\begin{figure}[h] \centering \subfloat[]{{\includegraphics[width=6.5cm]{MeanMMSE2} }}% \quad \subfloat[]{{\includegraphics[width=6.5cm]{MeanLogMMSE2} }}% \caption{Mean MMSE (a) and mean $\log[1+(30-MMSE)]$ (b) over time stratified by subjects' participation to the study.} \label{fig:mean} \centering \end{figure} A further empirical finding worth noting is that, while the decline in cognitive functioning (as measured by the MMSE score) through time seems to be (at least approximately) constant across groups defined by patterns of dropout, the differential participation in the study leads to a different slope when the overall mean score is considered. Such a finding highlights a potential dependence between the evolution of the MMSE score over time and the dropout process, which may bias parameter estimates and the corresponding inference. In the next sections, we will introduce a bi-dimensional finite mixture model for the analysis of longitudinal data subject to potentially non-ignorable dropout. \section{Random coefficient-based dropout models} \label{sec:3} Let $Y_{it}$ represent a set of longitudinal measures recorded on $i=1,\ldots,n,$ subjects at time occasions $t=1,\ldots,T$, and let $\mathbf{x}_{it}=(x_{it1},\ldots,x_{itp})^{\prime}$ denote the corresponding $p$-dimensional vector of observed covariates. Let us assume that, conditional on a $q$-dimensional set of individual-specific random coefficients ${\bf b}_{i}$, the observed responses $y_{it}$ are independent realizations of a random variable with density in the Exponential Family. The canonical parameter $\theta_{it}$ that indexes the density is specified according to the following regression model: \begin{equation*} \theta_{it}=\mathbf{x}_{it}^{\prime} \boldsymbol{\beta} + \mathbf{m}_{it}^{\prime}\mathbf{b}_i. 
\label{modlong} \end{equation*} The terms $\mathbf{b}_i$, $i=1,\ldots,n,$ are used to model the effects of unobserved individual-specific, time-invariant heterogeneity common to the lower-level units (measurement occasions) within the same $i$-th upper-level unit (individual). Furthermore, $\boldsymbol{\beta}$ is a $p$-dimensional vector of regression parameters that are assumed to be constant across individuals. The covariates whose effects (are assumed to) vary across individuals are collected in the design vector $\mathbf{m}_{it}=(m_{it1},\ldots,m_{itq})^{\prime}$, which represents a proper/improper subset of $\mathbf{x}_{it}$. For identifiability purposes, standard assumptions on the random coefficient vector are $$ \textrm{E}({\bf b}_{i})={\bf 0}, \quad \textrm{Cov}({\bf b}_{i})={\bf D}, \quad i=1,\dots,n. $$ Experience in longitudinal data modeling suggests that a major potential issue in this kind of study is missing data. That is, some individuals enrolled in the study do not reach its end and, therefore, only partially participate in the planned sequence of measurement occasions. In this framework, let $\mathbf{R}_i$ denote the missing data indicator vector, with generic element $R_{it}=1$ if the $i$-th unit drops out at any point in the window $(t-1,t)$, and $R_{it}=0$ otherwise. As we remarked above, we consider a discrete time structure for the study and the time to dropout; however, the following arguments apply, with a limited number of changes, to continuous-time survival processes as well. We assume that, once a person drops out, he/she is out forever (attrition). Therefore, if the planned completion time is denoted by $T$, we have, for each participant, $T_i\leq T$ available measures. 
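The data structure just described can be made concrete with a small simulation of a shared random coefficient model (a minimal sketch under assumed working models: all numerical values, the Gaussian response, and the logistic discrete-time dropout hazard below are illustrative choices, not quantities from the Leiden 85+ study): a random intercept $b_i$ shifts the longitudinal response, the same $b_i$ enters the dropout hazard, and attrition truncates each sequence at $T_i \leq T$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 500, 5
b = rng.normal(0.0, 1.0, size=n)           # shared random intercept b_i
beta0, beta1 = 25.0, -0.5                  # illustrative fixed effects (intercept, time slope)
gamma0, gamma1 = -2.0, 0.8                 # illustrative dropout parameters

y = np.full((n, T), np.nan)                # NaN marks unobserved occasions
for i in range(n):
    for t in range(T):
        # longitudinal measurement at occasion t, conditional on b_i
        y[i, t] = beta0 + beta1 * t + b[i] + rng.normal(0.0, 1.0)
        # dropout in the window (t, t+1): the logit hazard shares the same b_i
        p_drop = 1 / (1 + np.exp(-(gamma0 + gamma1 * b[i])))
        if rng.random() < p_drop:
            break                          # attrition: no further measurements

T_i = T - np.isnan(y).sum(axis=1)          # observed sequence lengths, T_i <= T
print("mean observed length:", T_i.mean())
```

With $\gamma_1 > 0$, units with a large $b_i$ drop out earlier, so the observed lengths $T_i$ are negatively associated with the random intercept, which is precisely the dependence that makes the dropout non-ignorable.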
To describe the (potential) dependence between the primary (longitudinal) and the secondary (dropout) process, we may introduce an explicit model for the dropout mechanism, conditional on a set of dropout-specific covariates, say $\mathbf{v}_i$, and (a subset of) the random coefficients in the longitudinal response model: \begin{equation} h(\mathbf{r}_i \mid \mathbf{v}_i,\mathbf{y}_i,\mathbf{b}^{\ast}_i) = h(\mathbf{r}_i \mid \mathbf{v}_i,\mathbf{b}^{\ast}_i)= \prod_{t=1}^{\min(T, T_i+1)} h(r_{it} \mid \mathbf{v}_i,\mathbf{b}^{\ast}_i), \qquad i=1,\ldots,n. \label{drop} \end{equation} The distribution is indexed by a canonical parameter defined via the regression model: \[ \phi_{it}=\mathbf{v}_{it}^\prime\boldsymbol{\gamma}+\mathbf{d}_{it}^\prime\mathbf{b}^{\ast}_i \] where ${\bf b}_{i}^{\ast}={\bf C} {\bf b}_{i}$, $i=1,\dots,n$, and $\bf C$ is a binary $q_{1}$-dimensional matrix ($q_1 \leq q$) with at most one entry equal to $1$ in each row. These models are usually referred to as shared (random) coefficient models; see \cite{wu1988}, \cite{wu1989} for early developments in the field. As can be seen from equation \eqref{drop}, the assumption of this class of models is that the longitudinal response and the dropout indicator are independent conditional on the individual-specific random coefficients. According to this (local independence) assumption, the joint density of the observed longitudinal responses and the missingness indicator can be specified as \begin{eqnarray*} f({\bf y}_{i}, {\bf r}_{i} \mid \mathbf{X}_{i},\mathbf{V}_i)&=&\int f({\bf y}_{i}, {\bf r}_{i} \mid \mathbf{X}_{i},\mathbf{V}_i, \mathbf b_i) dG(\mathbf{b}_i) = \nonumber \\ &=& \int \left[ \prod_{t=1}^{T_{i}} f(y_{it} \mid \mathbf{x}_{it},\mathbf{b}_i) \prod_{t=1}^{\min(T, T_i+1)} h(r_{it} \mid \mathbf{v}_{it}, \mathbf{b}_i) \right]dG(\mathbf{b}_i), \label{joint} \end{eqnarray*} where $G(\cdot)$ represents the random coefficient distribution, often referred to as the \emph{mixing} distribution. 
Dependence between the measurement and the missingness processes, if any, is completely accounted for by the latent effects, which are also used to describe unobserved, individual-specific heterogeneity in each of the two (univariate) profiles. As can be easily noticed, this modeling structure leads to perfect correlation between (subsets of) the random coefficients in the two equations, and this may not be a general enough setting. As an alternative, we may consider equation-specific random coefficients. In this context, while the random terms describe univariate heterogeneity and overdispersion, the corresponding joint distribution makes it possible to model the association between the random coefficients in the two equations and, therefore, between the longitudinal and the missing data process (on the link function scale). \cite{ait2003} discussed such an alternative parameterization, referring to it as the \emph{correlated} random effect model. To avoid any confusion with the estimator proposed by \cite{cham1984}, we will refer to it as the \emph{dependent} random coefficient model. When compared to \emph{shared} random coefficient models, this approach avoids unit correlation between the random terms in the two equations and, therefore, represents a more flexible, albeit still not fully general, approach. Let $\mathbf{b}_i=(\mathbf{b}_{i1},\mathbf{b}_{i2})$ denote a set of individual- and outcome-specific random coefficients. Based on the standard local independence assumption, the joint density for the couple $\left({\bf Y}_{i}, {\bf R}_{i}\right)$ can be factorized as follows: \begin{equation} f({\bf y}_{i}, {\bf r}_{i} \mid \mathbf{X}_{i},\mathbf{V}_i)=\int \left[ \prod_{t=1}^{T_{i}} f(y_{it}|\mathbf{x}_{it},\mathbf{b}_{i1})\prod_{t=1}^{\min(T, T_i+1)} h(r_{it}|\mathbf{v}_{it},\mathbf{b}_{i2}) \right] dG(\mathbf{b}_{i1},\mathbf{b}_{i2}). 
\label{joint_corr} \end{equation} A different approach to dependent random coefficient models may be defined according to the general scheme proposed by \cite{cre2011}, where common, partially shared and independent (outcome-specific) random coefficients are considered in the measurement and the dropout process. This approach leads to a particular case of dependent random coefficients where, however, the observed and the missing part of the longitudinal response do not generally come from the same distribution. \subsection{The random coefficient distribution} When dealing with dependent random coefficient models, a common assumption is that outcome-specific random coefficients are iid Gaussian variates. According to \cite{wan2001}, \cite{son2002}, \cite{tsi2004}, \cite{nehu2011a}, \cite{nehu2011b}, the choice of the random effect distribution may not have a great impact on parameter estimates, except in extreme cases, e.g. when the \emph{true} underlying distribution is discrete. In this perspective, a major role is played by the individual sequence length: when all subjects have a relatively large number of repeated measurements, the effects of misspecifying the random effect distribution on model parameter estimates become minimal; see the discussion in \cite{riz2008}, who designed a simulation study to investigate the effects that a misspecification of the random coefficient distribution may have on parameter estimates and the corresponding standard errors when a shared parameter model is considered. The authors showed that, as the number of repeated measurements per individual grows, the effect of misspecifying the random coefficient distribution vanishes for certain parameter estimates. These results are motivated by explicit reference to theoretical results in \citet{car2000}. In several contexts, however, the follow-up times may be short (e.g. in clinical studies) and individual sequences may include only limited information on the random coefficients. 
In these cases, assumptions on such a distribution may play a crucial role. As noticed by \cite{tso2009}, the choice of an \emph{appropriate} distribution is generally difficult for, at least, three reasons; see also \citet{alf2009}. First, there is often little information about unobservables in the data, and any assumption is difficult to justify by looking at the observed data only. Second, when high-dimensional random coefficients are considered, the use of a parametric multivariate distribution imposing the same shape on every dimension can be restrictive. Last, a potential dependence of the random coefficients on omitted covariates induces heterogeneity that can hardly be captured by parametric assumptions. In studies where subjects have few measurements, the choice of the random coefficient distribution may therefore be extremely important. With the aim of proposing a generally applicable approach, \cite{tso2009} considered a semi-parametric approach with shared (random) parameters to analyze continuous longitudinal responses while adjusting for non-monotone missingness. Along the same lines, \cite{alf2009} discussed a model for longitudinal binary responses subject to dropout, where dependence is described via outcome-specific, dependent, random coefficients. 
According to these finite mixture-based approaches and starting from equation \eqref{joint_corr}, we may write the observed data log-likelihood function as follows: \begin{align} \ell(\boldsymbol\Psi, \boldsymbol\Phi, \boldsymbol{\pi}) & = \sum_{i=1}^n \log \left\lbrace \sum_{k=1}^K f(\mathbf{y}_i \mid \mathbf{X}_i,\boldsymbol{\zeta}_{1k}) h(\mathbf{r}_i \mid \mathbf{V}_i,\boldsymbol{\zeta}_{2k})\pi_k \right\rbrace \nonumber \\ & = \sum_{i=1}^n \log \left\lbrace \sum_{k=1}^K \left[\prod_{t = 1}^{T_i} f({y}_{it} \mid \mathbf{X}_i,\boldsymbol{\zeta}_{1k}) \prod_{t=1}^{\min(T, T_i +1)} h({r}_{it} \mid \mathbf{V}_i,\boldsymbol{\zeta}_{2k})\pi_k \right]\right\rbrace, \label{eq:log-likelihood} \end{align} where $\b \Psi=(\boldsymbol{\beta}, \boldsymbol{\zeta}_{11}, \dots,\boldsymbol{\zeta}_{1K})$ and $\b \Phi=(\boldsymbol{\gamma}, \boldsymbol{\zeta}_{21}, \dots,\boldsymbol{\zeta}_{2K})$, with $\boldsymbol{\zeta}_{1k}$ and $\boldsymbol{\zeta}_{2k}$ denoting the vectors of discrete random coefficients in the longitudinal and in the missingness process, respectively. Last, $\boldsymbol{\pi}=(\pi_{1},\dots,\pi_{K})$, with $\pi_k=\Pr({\bf b}_{i}=\boldsymbol{\zeta}_k)=\Pr\left({\bf b}_{i1}=\boldsymbol{\zeta}_{1k},{\bf b}_{i2}=\boldsymbol{\zeta}_{2k}\right),$ identifies the joint probability of the (multivariate) locations $\boldsymbol{\zeta}_k=\left(\boldsymbol{\zeta}_{1k},\boldsymbol{\zeta}_{2k}\right)$, $k=1,\ldots,K$. It is worth noticing that, in the equation above, ${\bf y}_{i}$ refers to the observed individual sequence, say $\mathbf{y}_i^{o}$. Under the model assumptions and due to the local independence between responses coming from the same sample unit, missing data, say $\mathbf{y}_i^{m}$, can be directly integrated out from the joint density of all longitudinal responses, say $f(\mathbf{y}_i^{o}, \mathbf{y}_i^{m} \mid \mathbf{x}_i, \boldsymbol{\zeta}_{k})$, and this leads to the log-likelihood function in equation \eqref{eq:log-likelihood}. 
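For concreteness, an observed-data log-likelihood of this finite-mixture form can be evaluated with a log-sum-exp over components. The sketch below is an illustration under assumed working models (a unit-variance Gaussian measurement model with component-specific means playing the role of $\boldsymbol\zeta_{1k}$, and a Bernoulli dropout model with component-specific probabilities playing the role of $\boldsymbol\zeta_{2k}$), not the implementation used in the paper:

```python
import numpy as np

def log_lik(y, r, zeta1, zeta2, pi):
    """Observed-data log-likelihood of a K-component mixture:
    sum_i log sum_k pi_k * f(y_i | zeta1[k]) * h(r_i | zeta2[k]).
    y: list of observed sequences (length T_i); r: 0/1 dropout indicators;
    zeta1: component means (Gaussian, unit variance); zeta2: dropout probabilities."""
    K = len(pi)
    ll = 0.0
    for yi, ri in zip(y, r):
        yi, ri = np.asarray(yi, float), np.asarray(ri, float)
        log_terms = np.empty(K)
        for k in range(K):
            lf = (-0.5 * np.sum((yi - zeta1[k]) ** 2)
                  - 0.5 * len(yi) * np.log(2 * np.pi))       # Gaussian log-density
            lh = np.sum(ri * np.log(zeta2[k])
                        + (1 - ri) * np.log(1 - zeta2[k]))   # Bernoulli log-density
            log_terms[k] = np.log(pi[k]) + lf + lh
        m = log_terms.max()                                  # log-sum-exp for stability
        ll += m + np.log(np.exp(log_terms - m).sum())
    return ll

# two short sequences with their dropout indicators (made-up values)
y_obs = [[24.0, 23.5], [20.0]]
r_obs = [[0, 0], [0, 1]]
print(log_lik(y_obs, r_obs, zeta1=[24.0, 20.0], zeta2=[0.1, 0.4], pi=[0.6, 0.4]))
```

The log-sum-exp step matters in practice: for long sequences the per-component terms underflow if the inner sum is computed on the probability scale.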
The use of finite mixtures has several significant advantages over parametric approaches. Among others, the EM algorithm for ML estimation is computationally efficient, and the discrete nature of the estimates may help classify subjects into disjoint components that may be interpreted as clusters of individuals characterized by homogeneous values of model parameters. However, as we may notice by looking at the expression in equation (\ref{eq:log-likelihood}), the latent variables used to account for individual (outcome-specific) departures from homogeneity are intrinsically uni-dimensional. That is, while the locations may differ across profiles, the number of locations ($K$) and the prior probabilities ($\pi_k$) are common to all profiles. This clearly reflects the standard \emph{unidimensionality} assumption in latent class models. \section{A bi-dimensional finite mixture approach} \label{sec:4} Although the modeling approach described above is quite general and flexible, a clear drawback is the non-separability of the association structure between the random coefficients in the longitudinal and the missing data profiles. Moreover, even if it can be easily shown that the likelihood in equation (\ref{eq:log-likelihood}) corresponds to a MNAR model in Rubin's taxonomy \citep{litrub2002}, it is also quite clear that it does not reduce to the (corresponding) MAR model, except in very particular cases (e.g. when $K=1$ or when either $\boldsymbol{\zeta}_{1k}=\mathrm{cst}$ or $\boldsymbol{\zeta}_{2k}=\mathrm{cst}$, $\forall k=1,\dots,K$). This makes sensitivity analysis with respect to modeling assumptions difficult to carry out and, therefore, makes the application scope of such a modeling approach somewhat narrow. Based on the considerations above and with the aim of enhancing model flexibility, we suggest to follow an approach similar to that proposed by \cite{alf2012}. 
That is, we consider outcome-specific sets of discrete random coefficients for the longitudinal and the missingness outcome, where each margin is characterized by a (possibly) different number of components. Components in each margin are then joined by a full (bi-dimensional) matrix containing the masses associated to the couples $(g,\ell)$, where the first index refers to the components in the longitudinal response profile, while the second denotes the components in the dropout process. To introduce our proposal, let $\tbf b_i = (\tbf b_{i1}, \tbf b_{i2})$ denote the vector of individual random coefficients associated to the $i$-th subject, $i=1, \dots, n$. Let us assume the vector of individual-specific random coefficients $\tbf b_{i1}$ influences the longitudinal data process and follows a discrete distribution defined on $K_1$ distinct support points $\{\b \zeta_{11}, \dots, \b \zeta_{1K_1}\}$ with masses $\pi_{g\star} = \Pr(\tbf b_{i1} = \b\zeta_{1g})$. Similarly, let us assume the vector of random coefficients $\tbf b_{i2}$ influences the missing data process and follows a discrete distribution with $K_2$ distinct support points $\{\b \zeta_{21}, \dots, \b \zeta_{2K_2}\}$ with masses $\pi_{\star \ell} = \Pr(\tbf b_{i2} = \b\zeta_{2\ell})$. That is, we assume that \[ \tbf b_{i1} \sim \sum_{g = 1}^{K_1} \pi_{g\star} \, \delta(\b \zeta_{1g}), \quad \quad \tbf b_{i2} \sim \sum_{\ell = 1}^{K_2} \pi_{\star \ell} \, \delta(\b \zeta_{2\ell}), \] where $\delta(a)$ denotes a degenerate distribution putting unit mass at $a$. To complete the modeling approach we propose, we introduce a joint distribution for the random coefficients, associating a mass $\pi_{g\ell}=\Pr({\bf b}_{i1}=\boldsymbol{\zeta}_{1g}, {\bf b}_{i2}=\boldsymbol{\zeta}_{2\ell})$ to each couple of locations $(\boldsymbol{\zeta}_{1g}, \boldsymbol{\zeta}_{2\ell})$, entailing the longitudinal response and the dropout process, respectively. 
Obviously, the masses $\pi_{g\star}$ and $\pi_{\star \ell}$ in the univariate profiles are obtained by properly marginalizing $\pi_{g\ell}$: \[ \pi_{g\star} = \sum_{\ell = 1}^{K_2} \pi_{g\ell}, \quad \quad \pi_{\star \ell} = \sum_{g = 1}^{K_1} \pi_{g\ell}. \] Under the proposed model specification, the log-likelihood in equation (\ref{eq:log-likelihood}) can be written as follows: \begin{equation} \ell(\boldsymbol\Phi, \boldsymbol\Psi, \boldsymbol{\pi}) = \sum_{i=1}^n \log \left\lbrace \sum_{g=1}^{K_1} \sum_{\ell=1}^{K_2} \left[f(\mathbf{y}_i \mid \mathbf{x}_i,\boldsymbol{\zeta}_{1g})h(\mathbf{r}_i \mid \mathbf{v}_i,\boldsymbol{\zeta}_{2\ell})\right] \pi_{g\ell} \right\rbrace. \label{eq:log-likelihooddouble} \end{equation} Using this approach, the marginals control for heterogeneity in the univariate profiles, while the matrix of joint probabilities $\pi_{g\ell}$ describes the association between the latent effects in the two sub-models. The proposed specification could be considered as a standard finite mixture with $K = K_{1} \times K_{2}$ components, where each of the $K_{1}$ locations in the first profile pairs with each of the $K_{2}$ locations in the second profile. However, when compared to a standard finite mixture model, the proposed specification provides a more flexible (albeit more complex) representation of the random coefficient distribution. Also, by looking at equation \eqref{eq:log-likelihooddouble}, it is immediately clear that the MNAR model directly reduces to its M(C)AR counterpart when $\pi_{g\ell}=\pi_{g\star} \pi_{\star \ell}$, for $g=1,\dots,K_{1}$ and $\ell=1,\dots,K_{2}$. As we stressed before, this is not true in the case of equation \eqref{eq:log-likelihood}.
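As a quick numerical illustration of the marginalization above and of the M(C)AR factorization, the following numpy sketch builds a joint mass matrix; all values are hypothetical and for illustration only, not estimates from the data analyzed later.

```python
import numpy as np

rng = np.random.default_rng(0)
K1, K2 = 5, 3  # components in the longitudinal and dropout profiles

# Hypothetical joint masses pi[g, l] = Pr(b_i1 = zeta_1g, b_i2 = zeta_2l)
pi = rng.dirichlet(np.ones(K1 * K2)).reshape(K1, K2)

# Marginal masses, obtained by summing over the other profile
pi_g = pi.sum(axis=1)   # pi_{g*}
pi_l = pi.sum(axis=0)   # pi_{*l}

# Under M(C)AR the joint factorizes into the product of the marginals
pi_mar = np.outer(pi_g, pi_l)

print(pi.sum(), pi_mar.sum())  # both are proper joint distributions
```

Replacing `pi` by `pi_mar` in the mixture log-likelihood reproduces the M(C)AR special case discussed above.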
Considering a logit transform for the joint masses $\pi_{g\ell}$, we may write \begin{equation}\label{pi:equation} \xi_{g \ell}=\log \left(\frac{\pi_{g \ell}}{\pi_{K_{1} K_{2}}}\right) = \alpha_{g \star} + \alpha_{\star \ell} + \lambda_{g \ell}, \end{equation} where $\alpha_{g \star} = \log \left({\pi_{g \star}}/{\pi_{K_{1} \star}}\right), \alpha_{\star \ell} = \log \left({\pi_{\star \ell}}/{\pi_{\star K_{2}}}\right),$ and $\lambda_{g \ell}$ provides a measure of departure from the independence model. That is, if $\lambda_{g \ell}=0$, for all $(g, \ell) \in \left\{1,\dots,K_{1}\right\} \times \left\{1,\dots,K_{2}\right\}$, then \[ \log \left(\frac{\pi_{g \ell}}{\pi_{K_{1} K_{2}}}\right) = \alpha_{g \star} + \alpha_{\star \ell}=\log \left(\frac{\pi_{g \star}}{\pi_{K_{1} \star}}\right)+\log \left(\frac{\pi_{\star \ell}}{\pi_{\star K_{2}}}\right). \] This corresponds to independence between the random coefficients in the two equations and, as a by-product, to independence between the longitudinal and the dropout process. Therefore, the vector $\boldsymbol{\lambda} = (\lambda_{11}, \dots, \lambda_{K_1K_2})$ can be formally considered as a \textit{sensitivity} parameter vector, since when $\boldsymbol{\lambda}=\b 0$ the proposed MNAR model reduces to the corresponding M(C)AR model. It is worth noticing that the proposed approach has some connections with the model discussed by \citet{Beunc2008}, where parametric shared random coefficients for the longitudinal response and the dropout indicator are joined by means of a (second-level) finite mixture.
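The interaction terms $\lambda_{g\ell}$ in equation (\ref{pi:equation}) can be recovered numerically from any joint mass matrix; the sketch below (numpy, with hypothetical masses) checks that they vanish exactly under independence.

```python
import numpy as np

rng = np.random.default_rng(1)
K1, K2 = 5, 3

def lamda(pi):
    """Interaction terms lambda_{gl} of the baseline-category logit expansion."""
    pi_g, pi_l = pi.sum(axis=1), pi.sum(axis=0)
    xi = np.log(pi / pi[-1, -1])                 # xi_{gl}
    alpha_g = np.log(pi_g / pi_g[-1])            # alpha_{g*}
    alpha_l = np.log(pi_l / pi_l[-1])            # alpha_{*l}
    return xi - alpha_g[:, None] - alpha_l[None, :]

# An independent (M(C)AR) joint mass matrix gives lambda = 0 ...
pi_ind = np.outer(rng.dirichlet(np.ones(K1)), rng.dirichlet(np.ones(K2)))
print(np.abs(lamda(pi_ind)).max())

# ... while a generic joint mass matrix does not
pi_dep = rng.dirichlet(np.ones(K1 * K2)).reshape(K1, K2)
print(np.abs(lamda(pi_dep)).max())
```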
In fact, according to Theorem 1 in \cite{dun2009}, the elements of any $K_{1} \times K_{2}$ probability matrix $\mbox{\boldmath$\Pi$} \in \mathcal{M}_{K_{1}K_{2}}$ can be decomposed as \begin{equation} \pi_{g\ell}=\sum_{h=1}^{M} \tau_{h} \pi_{g \star \mid h} \pi_{\star \ell \mid h}, \label{eq:pidecomp} \end{equation} for an appropriate choice of $M$ and under the following constraints: \begin{eqnarray*} \sum_{h} \tau_{h}=\sum_{g} \pi_{g\star \mid h}=\sum_{\ell} \pi_{\star \ell \mid h}=\sum_{g} \sum_{\ell} \pi_{g\ell}=1. \end{eqnarray*} If we use the above parameterization for the random coefficient distribution, the association between the locations $\boldsymbol{\zeta}_{1g}$ and $\boldsymbol{\zeta}_{2\ell}$, $g=1,\dots,K_{1}$ and $\ell=1,\dots,K_{2}$, is modeled via the masses $\pi_{g \star \mid h}$ and $\pi_{\star \ell \mid h}$, which vary according to the upper-level (latent) class $h=1,\dots,M$. That is, the random coefficients $\mathbf{b}_{i1}$ and $\mathbf{b}_{i2}$, $i=1,\dots,n$, are assumed to be independent conditional on the $h$-th (upper-level) latent class, $h=1,\dots,M$. Also, in that approach the mean and covariance matrix of the profile-specific random coefficient distribution may vary with the second-level component, while in the approach we propose, the second-level structure is just a particular way to join the two profiles and, therefore, to control for the dependence between the outcome-specific random coefficients. \section{ML parameter estimation} \label{sec:5} Let us start by assuming that the data vector is composed of an observable part $(\mathbf{y}_{i}, \mathbf{r}_i)$ and of unobservables $\mathbf{z}_{i}=(z_{i11},\dots,z_{ig\ell}, \dots, z_{iK_1K_2})$. Let us further assume the random variable $\mathbf{z}_{i}$ has a multinomial distribution, with parameters $\pi_{g\ell}$ denoting the probability of the $g$-th component in the first and the $\ell$-th component in the second profile, for $g=1, \dots, K_1$ and $\ell = 1, \dots, K_2$.
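The multinomial indicator $\mathbf{z}_i$ just introduced can be sketched as follows; the joint masses are hypothetical placeholders, and the empirical cell frequencies approach $\pi_{g\ell}$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K1, K2 = 1000, 5, 3

# Hypothetical joint masses over the K1 x K2 couples of locations
pi = rng.dirichlet(np.ones(K1 * K2)).reshape(K1, K2)

# z_i is a one-of-(K1*K2) multinomial indicator with cell probabilities pi_{gl}
cells = rng.choice(K1 * K2, size=n, p=pi.ravel())
z = np.zeros((n, K1, K2))
z[np.arange(n), cells // K2, cells % K2] = 1.0   # row-major (g, l) cell

# Empirical cell frequencies vs. the generating masses
freq = z.mean(axis=0)
print(np.abs(freq - pi).max())
```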
Let $\boldsymbol{\Upsilon} = \left\{\boldsymbol\Phi, \boldsymbol\Psi, \boldsymbol{\pi}\right\}$ denote the vector of all (free) model parameters, where, as before, $\boldsymbol{\Phi} = (\boldsymbol{\beta}, \boldsymbol{\zeta}_{11}, \dots, \boldsymbol{\zeta}_{1K_1})$ and $\boldsymbol \Psi = (\boldsymbol{\gamma}, \boldsymbol{\zeta}_{21}, \dots, \boldsymbol{\zeta}_{2K_2})$ collect the parameters for the longitudinal and the missing data model, respectively, and $\b \pi= (\pi_{11}, \dots, \pi_{K_1 K_2})$. Based on the modeling assumptions introduced so far, the complete data likelihood function is given by \begin{eqnarray*} L_{c}(\boldsymbol{\Upsilon})& = & \prod_{i=1}^{n}\prod_{g=1}^{K_{1}}\prod_{\ell=1}^{K_{2}}\left\{f(\mathbf{ y}_{i},\mathbf{r}_i\mid z_{ig\ell} = 1) \pi_{g\ell}\right\}^{z_{ig\ell}} \\ & =& \prod_{i=1}^{n}\prod_{g=1}^{K_{1}}\prod_{\ell=1}^{K_{2}}\left\{ \left[ \prod_{t=1}^{T_{i}} f(y_{it}\mid z_{ig\ell}=1 ) \prod_{t=1}^{\min(T, T_{i}+1)} h(r_{it}\mid z_{ig\ell}=1)\right]\pi_{g\ell} \right\}^{z_{ig\ell}}. \end{eqnarray*} To derive parameter estimates, we can exploit an extended EM algorithm which, as usual, alternates two separate steps. In the ($r$-th iteration of the) E-step, we compute the posterior expectation of the complete data log-likelihood, conditional on the observed data $(\mathbf{y}_{i}, \mathbf{r}_i)$ and the current parameter estimates $\hat{\boldsymbol{\Upsilon}}^{(r-1)}$. This translates into the computation of the posterior probability of component membership $w_{ig\ell}$, defined as the posterior expectation of the random variable $z_{ig\ell}$. In the M-step, we maximize the expected complete-data log-likelihood with respect to the model parameters. Clearly, for the finite mixture probabilities $\pi_{g\ell}$, estimation is based upon the constraint $\sum_{g=1}^{K_1}\sum_{\ell=1}^{K_2} \pi_{g\ell} = 1$.
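In the E-step, the weight $w_{ig\ell}$ is proportional to $f(\mathbf{y}_i \mid z_{ig\ell}=1)\, h(\mathbf{r}_i \mid z_{ig\ell}=1)\, \pi_{g\ell}$, normalized within subject. A minimal numpy sketch, with hypothetical log-densities standing in for the actual model components, and computed on the log scale for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(3)
n, K1, K2 = 4, 5, 3

# Hypothetical per-subject log-densities under each component
log_f = rng.normal(size=(n, K1))     # log f(y_i | zeta_1g)
log_h = rng.normal(size=(n, K2))     # log h(r_i | zeta_2l)
pi = rng.dirichlet(np.ones(K1 * K2)).reshape(K1, K2)

# E-step: w_{igl} proportional to f * h * pi_{gl}, normalized per subject
log_num = log_f[:, :, None] + log_h[:, None, :] + np.log(pi)[None, :, :]
log_num -= log_num.max(axis=(1, 2), keepdims=True)   # stabilize before exp
w = np.exp(log_num)
w /= w.sum(axis=(1, 2), keepdims=True)

print(w.sum(axis=(1, 2)))   # each subject's weights sum to one
```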
As a result, the following score functions are obtained: \begin{eqnarray*} \label{eq:score1} S_{c}(\boldsymbol{\Phi})&=& \sum\limits_{i=1}^{n} \frac{\partial }{\partial {\boldsymbol \Phi}} \sum\limits_{g=1}^{K_{1}} \sum\limits_{\ell=1}^{K_{2}} w_{ig\ell}^{(r)} \left[\log(f_{ig\ell}) + \log(\pi_{g\ell}) \right]= \sum\limits_{i=1}^{n} \frac{\partial }{\partial {\boldsymbol \Phi}} \sum\limits_{g=1}^{K_{1}} w_{ig\star}^{(r)}\left[\log(f_{i1g})\right], \\ \label{eq:score2} S_{c}(\boldsymbol{\Psi})&=&\sum\limits_{i=1}^{n}\frac{\partial }{\partial {\boldsymbol \Psi}} \sum\limits_{g=1}^{K_{1}}\sum\limits_{\ell=1}^{K_{2}} w_{ig\ell}^{(r)}\left[\log(f_{ig\ell}) + \log(\pi_{g\ell}) \right]= \sum\limits_{i=1}^{n} \frac{\partial }{\partial {\boldsymbol \Psi}}\sum\limits_{\ell=1}^{K_{2}} w_{i\star \ell}^{(r)} \left[\log(f_{i2\ell})\right], \\ \label{eq:score3} S_{c}({\pi}_{g\ell})&=&\sum\limits_{i=1}^{n} \frac{\partial }{\partial {\pi}_{g\ell}} \sum_{g=1}^{K_{1}} \sum_{\ell=1}^{K_{2}} w_{ig\ell}^{(r)} \log(\pi_{g\ell}) - \kappa \left( \sum_{g=1}^{K_{1}} \sum_{\ell=1}^{K_{2}} \pi_{g\ell} -1 \right). \end{eqnarray*} In the equations above, $f_{ig\ell} = f(\textbf y_{i}, \textbf r_i \mid z_{ig\ell})$, while $w_{ig\star}^{(r)}$, $w_{i\star \ell}^{(r)}$, $f_{i1g}$ and $f_{i2\ell}$ represent the marginals of the posterior probability $w_{ig\ell}$ and of the joint density $f_{ig\ell}$, respectively. As is typical in finite mixture models, equation (\ref{eq:score3}) can be solved analytically to give the updates \[ \hat \pi_{g\ell}^{(r)} = \frac{\sum_{i=1}^n w_{ig\ell}^{(r)}}{n}, \] while the remaining model parameters may be updated by using standard Newton-type algorithms. The E- and the M-step of the algorithm are alternated until convergence, that is, until the (relative) difference between two subsequent likelihood values is smaller than a given quantity $\varepsilon > 0$.
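The closed-form update $\hat\pi^{(r)}_{g\ell}=\sum_i w^{(r)}_{ig\ell}/n$ and the marginal weights entering the two profile-specific score equations can be sketched as follows (the posterior weights here are hypothetical stand-ins for an actual E-step output):

```python
import numpy as np

rng = np.random.default_rng(4)
n, K1, K2 = 100, 5, 3

# Hypothetical posterior membership weights from the E-step
w = rng.dirichlet(np.ones(K1 * K2), size=n).reshape(n, K1, K2)

# Closed-form M-step update for the joint masses
pi_new = w.sum(axis=0) / n

# Marginal weights used in the profile-specific score equations
w_g = w.sum(axis=2)   # w_{ig*}, longitudinal profile
w_l = w.sum(axis=1)   # w_{i*l}, dropout profile

print(pi_new.sum())   # updated masses sum to one by construction
```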
Given that this criterion may indicate lack of progress rather than true convergence (see e.g.\ \cite{karl2001}), and that the log-likelihood may suffer from multiple local maxima, we suggest starting the algorithm from several different starting points. In all the following analyses, we used $B=50$ starting points. Also, as is typically done when dealing with finite mixtures, the numbers of locations $K_1$ and $K_2$ are treated as fixed and known. The algorithm is run for varying $(K_1, K_2)$ combinations and the optimal solution is chosen via standard model selection techniques, such as the AIC \citep{Akaike1973} or the BIC \citep{Schwarz1978}. Standard errors for model parameter estimates are obtained at convergence of the EM algorithm by the standard sandwich formula \citep{White1980, Royall1986}. This leads to the following estimate of the covariance matrix of model parameters: \begin{align*} \label{eq:swd} \widehat{\mbox{Cov}}(\hat{\boldsymbol{\Upsilon}}) = \tbf I_o(\hat{\boldsymbol{\Upsilon}})^{-1} \widehat{\mbox{Cov}(\tbf S)} \tbf I_o(\hat{\boldsymbol{\Upsilon}})^{-1}, \end{align*} where $\tbf I_o(\hat{\boldsymbol{\Upsilon}})$ denotes the observed information matrix, computed via Oakes' formula \citep{oak1999}. Furthermore, $\tbf S$ denotes the score vector evaluated at $\hat{\boldsymbol{\Upsilon}}$ and $\widehat{\mbox{Cov}(\tbf S)} = \sum_{i=1}^n \tbf S_i(\hat{\boldsymbol{\Upsilon}}) \tbf S_i^\prime(\hat{\boldsymbol{\Upsilon}})$ denotes the estimate of the covariance matrix of the score function $\mbox{Cov}(\tbf S)$, with $\tbf S_i$ being the individual contribution to the score vector. \section{Sensitivity analysis: definition of the index} \label{sec:6} The proposed bi-dimensional finite mixture model allows us to account for possible effects of non-ignorable dropouts on the primary outcome of interest. However, as highlighted by \cite{mol2008}, for every MNAR model there is a corresponding MAR counterpart that produces exactly the same fit to the observed data.
This is due to the fact that the MNAR model is fitted by using the observed data only, and it implicitly assumes that the distribution of the missing responses is identical to that of the observed ones. Further, the dependence between the longitudinal response (observed and missing) and the dropout indicator which is modeled via the proposed model specification is just one out of several possible choices. Therefore, rather than relying on a single (possibly misspecified) MNAR model, and in order to evaluate how maximum likelihood estimates for the longitudinal model parameters are influenced by the hypotheses about the dropout mechanism, a sensitivity analysis is always recommended. In this perspective, most of the available proposals focus on Selection or Pattern Mixture Model specifications \citep{Little1995}, while few proposals are available for shared random coefficient models. A notable exception is the proposal by \cite{cre2010}. Here, the authors considered a sensitivity parameter in the model and studied how model parameter estimates vary when the sensitivity parameter is forced to move away from zero. Looking at \emph{local} sensitivity, \cite{trox2004} developed an index of local sensitivity to non-ignorability (ISNI) via a first-order Taylor expansion, with the aim of describing the ``geometry'' of the likelihood surface in a neighborhood of a MAR solution. Such an index was further extended by \cite{ma2005}, \cite{Xie2004}, and \cite{Xie2008} to deal with the general case of $q$-dimensional ($q >1$) non-ignorability parameters by considering an $L_{2}$ norm to summarize the impact of a unit change in its elements. An $L_{1}$ norm was instead considered by \cite{Xie2012}, while \cite{Gao2016} further extended the ISNI definition by considering a higher-order Taylor expansion.
In the context of joint models for longitudinal responses and (continuous) time to event data, \cite{viv2014} proposed a relative index based on the ratio between the ISNI and a measure of its variability under the MAR assumption. Due to the peculiarity of the proposed model specification, to specify an index of sensitivity to non-ignorability we proceed as follows. As before, let $\boldsymbol{\lambda} = (\lambda_{11}, \dots, \lambda_{K_1K_2})$ denote the vector of non-ignorability parameters and let $\boldsymbol{\lambda}={\bf 0}$ correspond to a MAR model. Also, let $\boldsymbol{\xi} = (\xi_{11}, \dots, \xi_{K_1K_2})$ denote the vector of all logit transforms defined in equation \eqref{pi:equation} and let $\boldsymbol{\xi}_0$ correspond to a MAR model. That is, $\boldsymbol{\xi}_0$ has elements \[ \xi_{g\ell} = \alpha_{g\star} + \alpha_{\star \ell}, \quad g=1, \dots, K_1, \quad \ell=1, \dots, K_2. \] Both vectors $\boldsymbol{\lambda}$ and $\boldsymbol{\xi}$ may be interchangeably considered as non-ignorability parameters in the proposed model specification, but to be coherent with the definition of the index, we will use $\boldsymbol{\lambda}$ in the following. Last, let us denote by $\hat{\boldsymbol \Phi}(\boldsymbol{\lambda})$ the maximum likelihood estimate for the model parameters in the longitudinal data model, conditional on a given value of the sensitivity parameters $\boldsymbol{\lambda}$. The \emph{index of sensitivity to non-ignorability} may be derived as \begin{equation} \label{eq:ISNIbase} ISNI_{\boldsymbol\Phi}=\left.\frac{\partial \hat{\boldsymbol\Phi}(\boldsymbol{\lambda})} {\partial \boldsymbol{\lambda}} \right|_{\bl \Phi(\bl 0)} \simeq - \left(\left.\frac{\partial^{2} \ell(\boldsymbol\Phi, \boldsymbol\Psi, \boldsymbol{\pi})}{\partial \boldsymbol\Phi \, \partial \boldsymbol\Phi^{\prime}}\right|_{\bl \Phi(\bl 0)}\right)^{-1} \left.
\frac{\partial^{2} \ell(\boldsymbol\Phi, \boldsymbol\Psi, \boldsymbol{\pi})}{\partial \boldsymbol\Phi \, \partial \boldsymbol{\lambda}^{\prime}}\right|_{\bl \Phi(\bl 0)}. \end{equation} Based on the equation above, it is clear that the \textit{ISNI} measures the displacement of model parameter estimates from their MAR counterpart, in the direction of $\boldsymbol{\lambda}$, when we move away from $\boldsymbol{\lambda} = \b 0$. Following arguments similar to those detailed by \citet{Xie2008}, it can be shown that the following first-order expansion holds: \begin{align*} \label{eq:ISNIapprox} \hat{\boldsymbol\Phi}(\boldsymbol{\lambda})\simeq\hat{\boldsymbol\Phi}({\bf 0})+ISNI_{\boldsymbol\Phi}\,\boldsymbol{\lambda}; \end{align*} that is, the ISNI may also be interpreted as the linear impact that changes in the elements of $\boldsymbol{\lambda}$ have on $\hat{\boldsymbol\Phi}$. It is worth highlighting that $ISNI_{\boldsymbol\Phi}$ denotes a matrix with $D$ rows and $(K_{1}-1)(K_{2}-1)$ columns, representing the effect each element in $\boldsymbol{\lambda}$ has on the $D$ elements in $\boldsymbol\Phi$. That is, the proposed formulation of the index leads to a matrix rather than a scalar or a vector as in the original formulations. In this respect, to derive a global measure of local sensitivity for the parameter estimate $\hat \Phi_d$ when moving away from the MAR assumption, for $ d =1, \dots, D$, a proper summary of the corresponding row of the \textit{ISNI} matrix, say $ISNI_{\Phi_d}$, needs to be computed. \section{Analysis of the Leiden 85+ data} \label{sec:7} In this section, the bi-dimensional finite mixture model is applied to the analysis of the Leiden 85+ study data. We aim at understanding the effects of a number of covariates on the dynamics of cognitive functioning in the elderly, while controlling for potential bias in the parameter estimates due to possible non-ignorable dropouts.
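A numerical sketch of the ISNI computation in equation (\ref{eq:ISNIbase}) and of the linear approximation above; the Hessian blocks below are hypothetical stand-ins for the actual second derivatives of the log-likelihood at the MAR solution.

```python
import numpy as np

rng = np.random.default_rng(6)
D, q = 3, 2   # dim(Phi) and dim(lambda) = (K1-1)(K2-1)

# Hypothetical Hessian blocks at the MAR solution
A = rng.normal(size=(D, D))
H_phi_phi = -(A @ A.T + D * np.eye(D))   # d2l/dPhi dPhi', negative definite
H_phi_lam = rng.normal(size=(D, q))      # d2l/dPhi dlambda', cross block

# ISNI: first-order displacement of Phi-hat per unit change in lambda
ISNI = -np.linalg.solve(H_phi_phi, H_phi_lam)

# Linear approximation Phi(lambda) ~ Phi(0) + ISNI @ lambda
phi0 = rng.normal(size=D)                # hypothetical MAR estimate
lam = np.array([0.5, -0.3])              # a chosen perturbation
phi_lam = phi0 + ISNI @ lam

print(ISNI.shape)   # D x q: one column per sensitivity parameter
```

A global measure for one parameter, as suggested above, is then a summary (e.g. the norm) of the corresponding row `ISNI[d]`.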
First, we provide a description of the available covariates in section \ref{sec_preliminary} and describe the sample in terms of the demographic and genetic characteristics of the individuals participating in the study. Afterwards, we analyze the joint effect of these factors on the dynamics of the (transformed) MMSE score. Results are reported in sections \ref{sec_MARmodel} and \ref{sec_MNARmodel}. Last, in section \ref{sec_isni}, a sensitivity analysis is performed to give insight into changes in parameter estimates when we move away from the MAR assumption. Two scenarios are investigated and the corresponding results reported. \subsection{Preliminary analysis}\label{sec_preliminary} We start the analysis by summarizing in Table \ref{tabII} the individual features of the sample of subjects participating in the Leiden 85+ study, both in terms of covariates and MMSE scores, conditional on the observed pattern of participation. That is, we distinguish between those individuals who completed the study and those who did not. As highlighted before, subjects who present incomplete information are likely to leave the study because of poor health conditions, and this raises the question of whether an analysis based on the observed data only may lead to biased results. By looking at the overall results, we may observe that $64.88\%$ of the sample has a low level of education and that females represent $66.73\%$ of the whole sample. As regards the \textit{APOE} genotype, the most represented category is obviously $APOE_{33}$ $(58.96 \%)$, far ahead of $APOE_{34-44}$ $(21.08\%)$ and $APOE_{22-23}$ $(17.74 \%)$, while only a very small portion of the sample ($2.22\%$) is characterized by $APOE_{24}$. Last, we may notice that more than half of the study participants ($50.83\%$) leave the study before its scheduled end. This proportion is higher for participants with a low level of education ($52.71\%$), for males ($58.89\%$), and for those in the $APOE_{34-44}$ group ($61.40\%$).
\begin{table}[htb] \caption{Leiden 85+ Study: demographic and genetic characteristics of participants} \label{tabII} \begin{center} \begin{tabular}{l c c c } \hline Variable & Total & Completed (\%) & Did not complete (\%) \\ \hline \textbf{Gender} & & & \\ Male & 180 (33.27) & 74 (41.11) & 106 (58.89) \\ Female & 361 (66.73) & 192 (53.19) & 169 (46.81) \\ \textbf{Education} & & & \\ Primary & 351 (64.88) & 166 (47.29) & 185 (52.71) \\ Secondary & 190 (35.12) & 100 (52.63) & 90 (47.37) \\ \textbf{APOE} & & & \\ 22-23 & 96 (17.74) & 54 (56.25) & 42 (43.75) \\ 24 & 12 (2.22) & 6 (50.00) & 6 (50.00) \\ 33 & 319 (58.96) & 162 (50.78) & 157 (49.22) \\ 34-44 & 114 (21.08) & 44 (38.60) & 70 (61.40) \\ \hline Total & 541 (100) & 266 (49.17) & 275 (50.83) \\ \end{tabular} \end{center} \end{table} Figure \ref{fig:plot_cov} reports the evolution of the mean MMSE over time, stratified by the available covariates. As is clear, cognitive impairment is higher for males than for females, even if the differences seem to decrease with age, possibly due to a direct age effect or to a differential dropout by gender (Figure \ref{fig:plot_cov}a). By looking at Figure \ref{fig:plot_cov}b, we may also observe that participants with higher education are less cognitively impaired at the beginning of the study, and this difference persists over the analyzed time window. Rather than only a direct effect of education, this may suggest differential socio-economic statuses being associated with differential levels of education. Last, lower MMSE scores are observed for $APOE_{34-44}$, that is, when allele $\epsilon4$, which is deemed to be a risk factor for dementia, is present. The irregular pattern for $APOE_{24}$ may be due to the small sample size of this group (Figure \ref{fig:plot_cov}c). \begin{center} \begin{figure}[h!]
\caption{Leiden 85+ Study: mean MMSE score stratified by age and by gender (a), educational level (b), and APOE genotype (c)} \centerline{\includegraphics[scale=0.7]{plot_cov.pdf}} \label{fig:plot_cov} \end{figure} \end{center} \subsection{The MAR model}\label{sec_MARmodel} We start by estimating a MAR model, based on the assumption of independence between the longitudinal and the dropout process. In terms of equation (\ref{eq:log-likelihooddouble}), this is obtained by assuming $\pi_{g\ell}=\pi_{g\star}\pi_{\star \ell}$, for $g=1,\dots,K_{1}$ and $\ell=1,\dots,K_{2}$. Alternatively, we can derive it by fixing $\boldsymbol{\lambda}={\bf 0}$ in equation (\ref{pi:equation}) or $M=1$ in equation (\ref{eq:pidecomp}). To get insight into the effects of demographic and genetic features on the individual dynamics of the MMSE score, we focused on the following model specification: \begin{align*} \left\{ \begin{array}{ccc} Y_{it} \mid \textbf x_{it},b_{i1} \sim {\rm N}(\mu_{it}, \sigma^{2}) \\ R_{it} \mid \textbf v_{it},b_{i2} \sim {\rm Bin}(1, \phi_{it}) \end{array} \right. \end{align*} The canonical parameters are defined by the following regression models: \begin{eqnarray*} \mu_{it}&=&(\beta_{0}+b_{i1})+\beta_{1}\, (Age_{it}-85)+\beta_{2}\, Gender_{i}+\beta_{3}\, Educ_{it}+ \\&+&\beta_{4}\, APOE_{22-23}+\beta_{5}\, APOE_{24}+\beta_{6}\, APOE_{34-44}, \\[2mm] {\rm logit}(\phi_{it})&=&(\gamma_{0}+b_{i2})+\gamma_{1}\, (Age_{it}-85)+\gamma_{2}\, Gender_{i}+\gamma_{3}\, Educ_{it}+\\ &+&\gamma_{4}\, APOE_{22-23}+\gamma_{5}\, APOE_{24}+\gamma_{6}\, APOE_{34-44}. \end{eqnarray*} As regards the response variable, the transform $Y_{it} = \log[1+ (30 - \mbox{MMSE}_{it})]$ was adopted, as it is nearly optimal in a Box-Cox sense. Both a parametric and a semi-parametric specification of the random coefficient distribution were considered. In the former case, Gaussian random effects were inserted into the linear predictors for the longitudinal response and the dropout indicator.
In the latter case, for each margin, the algorithm was run for a varying number of locations and the solution corresponding to the lowest BIC index was retained, leading to the selection of $K_1 = 5$ and $K_2 = 3$ components for the longitudinal and the dropout process, respectively. Estimated parameters, together with the corresponding standard errors, are reported in Table \ref{tabmar}. \begin{table} \caption{Leiden 85+ Study: MAR models. Maximum likelihood estimates, standard errors, log-likelihood, and BIC value} \label{tabmar} \begin{center} \begin{tabular}{l|l r r r r } \hline Process & & \multicolumn{2}{c}{Semi-parametric}& \multicolumn{2}{c}{Parametric} \\ & Variable & Coeff. & Std. Err. & Coeff. & Std. Err. \\ \hline\hline & \textit{Intercept} & 1.686 & & 1.792 & 0.050 \\ & \textit{Age} & 0.090 & 0.008 & 0.089 & 0.005 \\ & \textit{Gender} & -0.137 & 0.042 & -0.085 & 0.066 \\ & \textit{Educ} & -0.317 & 0.068 & -0.623 & 0.065 \\ Y & $APOE_{22-23}$ & 0.062 & 0.072 & 0.056 & 0.083 \\ & $APOE_{24}$ & -0.105 & 0.062 & 0.096 & 0.211 \\ & $APOE_{34-44}$ & 0.347 & 0.060 & 0.369 & 0.079 \\ & $\sigma_{y}$ & 0.402 & & 0.398 & \\ & $\sigma_{b_{1}}$& 0.696 & & 0.684 & \\ \hline & \textit{Intercept} & -11.475 & & -3.877 & 0.520 \\ & \textit{Age} & 2.758 & 0.417 & 0.526 & 0.131 \\ & \textit{Gender} & 0.559 & 0.467 & 0.656 & 0.218 \\ & \textit{Educ} & -2.162 & 0.772 & -0.486 & 0.212 \\ R & $APOE_{22-23}$ & 0.476 & 0.409 & -0.246 & 0.252 \\ & $APOE_{24}$ & -0.026 & 0.939 & 0.131 & 0.618 \\ & $APOE_{34-44}$ & 0.805 & 0.461 & 0.565 & 0.237 \\ & $\sigma_{b_{2}}$& 5.393 & & 1.525 & \\ \hline & $\log L$ & -2685.32 & & -2732.84 & \\ & BIC & 5534.26 & & 5572.67 & \\ \hline \end{tabular} \end{center} \end{table} By looking at the results, a few findings are worth discussing. First, the estimates obtained via either the parametric or the semi-parametric approach are quite similar when we consider the longitudinal process.
That is, $\log\left[1+(30-MMSE)\right]$ increases (and MMSE decreases) with age. A significant gender effect can be observed, with males being less impaired (on average) than females. Furthermore, a strong protective effect seems to be linked to socio-economic status in early life, as may be deduced from the significant and negative effect of higher educational levels. Table \ref{tabmar} also highlights how $APOE_{34-44}$ represents a strong risk factor, with a positive estimate on the adopted response scale and, therefore, a negative effect on the MMSE. Only a few differences may be highlighted when comparing the estimates obtained under the parametric and the semi-parametric approach for the longitudinal data process. In particular, these differences are related to the gender effect, which is not significant in the parametric model, and to the effect of higher education, which is much stronger under the parametric specification. These differences may be due to the discrete nature of the random effect distribution in the semi-parametric case, which may lead to partial aliasing with the time-constant covariates. When the dropout process is considered, we may observe that the results are \emph{qualitatively} the same, but the size of the parameter estimates is quite different. This could be due, at least partially, to the different scale of the estimated random coefficient distribution, with $\sigma_{b_{2}}=5.393$ and $\sigma_{b_2} = 1.525$ in the semi-parametric and in the parametric model, respectively. Clearly, in the semi-parametric case, the estimated intercepts are considerably larger (in absolute value) than those predicted by a Gaussian distribution, and this leads to inflated effects for the set of observed covariates as well.
However, if we look at the estimated dropout probabilities resulting from either the semi-parametric or the parametric model, these are very close to each other, except for a few extreme cases which are better recovered by the semi-parametric model. \subsection{The MNAR model}\label{sec_MNARmodel} To provide further insight into the effect of demographic and genetic factors on the MMSE dynamics, while considering the potential non-ignorability of the dropout process, we fitted both a uni-dimensional and a bi-dimensional finite mixture model. For the former approach, we ran the estimation algorithm for $K = 1, \dots, 10$ and retained the optimal solution according to the BIC index. This corresponds to a model with $K = 5$ components. Similarly, for the proposed bi-dimensional finite mixture model, we ran the algorithm for $K_1 = 1, \dots, 10$ and $K_2 = 1, \dots, 5$ components and, as before, retained as the optimal solution the one with the lowest BIC, that is, the solution with $K_1 = 5$ and $K_2=3$ components for the longitudinal and the dropout process, respectively. This result is clearly coherent with that obtained by modeling the longitudinal response and the dropout indicator marginally. Parameter estimates and the corresponding standard errors for both model specifications are reported in Table \ref{tabmnar}. \begin{table} \caption{Leiden 85+ Study: MNAR models. Maximum likelihood estimates, standard errors, log-likelihood, and BIC value}\label{tabmnar} \begin{center} \begin{tabular}{l|l r r r r } \hline Process & & \multicolumn{2}{c}{Semipar. ``Uni-dim.''} & \multicolumn{2}{c}{Semipar. ``Bi-dim.''} \\ & Variable & Coeff. & Std. Err. & Coeff. & Std. Err.
\\ \hline\hline & Intercept & 1.682 & & 1.687 & \\ & Age & 0.094 & 0.007 & 0.094 & 0.007 \\ & Gender & -0.129 & 0.048 & -0.135 & 0.039 \\ Y & Educ & -0.310 & 0.051 & -0.317 & 0.050 \\ & APOE$_{22-23}$ & 0.091 & 0.061 & 0.086 & 0.058 \\ & APOE$_{24}$ & -0.098 & 0.055 & -0.099 & 0.056 \\ & APOE$_{34-44}$ & 0.345 & 0.050 & 0.344 & 0.051 \\ & $\sigma_{y}$ & 0.402 & & 0.402 & \\ & $\sigma_{b_{1}}$& 0.701 & & 0.699 & \\ \hline & Intercept & -3.361 & & -10.767 & \\ & Age & 0.367 & 0.037 & 2.406 & 0.384 \\ & Gender & 0.504 & 0.147 & 1.061 & 0.850 \\ R & Educ & -0.200 & 0.151 & -1.646 & 0.530 \\ & APOE$_{22-23}$ & -0.090 & 0.199 & 0.481 & 1.090 \\ & APOE$_{24}$ & -0.148 & 0.508 & -0.334 & 0.647 \\ & APOE$_{34-44}$ & 0.541 & 0.174 & 1.365 & 0.745 \\ & $\sigma_{b_{2}}$& 0.577 & & 4.891 & \\ & $\sigma_{b_{1},b_{2}}$ & 0.349 & & 0.985 & \\ & $\rho_{b_{1},b_{2}}$ & 0.863 & & 0.288 & \\ \hline \hline & $\log L$ & -2686.902 & & -2660.391 & \\ & BIC & 5537.433 & & 5534.758 & \\ \hline \end{tabular} \end{center} \end{table} When looking at the estimated parameters for the longitudinal data process and at their significance (left panel in the table), we may conclude that the estimates are coherent with those obtained in the MAR analysis. A small departure can be observed for the effect of age and gender. Males and patients with high education tend to be less cognitively impaired when compared to the rest of the sample, while subjects carrying $\epsilon4$ alleles, that is, with category $APOE_{34-44}$, present a steeper increase in the observed response, i.e.\ a steeper decline in MMSE values. Focusing on the dropout process, we may observe that age, gender and $APOE_{34-44}$ are all positively associated with an increased dropout probability. That is, older men carrying $\epsilon4$ alleles are more likely to leave the study prematurely than younger females carrying $\epsilon3$ alleles.
By comparing the estimates obtained under the uni- and the bi-dimensional finite mixture model, it seems that the above results hold regardless of the chosen model specification. The only remarkable difference is in the estimated magnitude of the effects for the dropout process and for the random coefficient distribution. For the bi-dimensional finite mixture model, we may observe a stronger impact of the covariates on the dropout probability. However, as for the models described in section \ref{sec_MARmodel}, this result is likely due to the estimated scale, with an intercept value which is much lower under the bi-dimensional specification than under the uni-dimensional one. Further, under the uni-dimensional model specification, the Gaussian process for the longitudinal response may have a much higher impact on the likelihood function when compared to the Bernoulli process for the dropout indicator. As a result, the estimates for the component-specific locations and the corresponding variability in the dropout equation substantially differ when comparing the uni-dimensional and the bi-dimensional model. In the uni-dimensional model, the estimated correlation is quite high due to the reduced variability of the random coefficients in the dropout equation, while it is substantially reduced in the bi-dimensional case. We also report in Table \ref{tabprob} the estimated random intercepts for the longitudinal and the dropout process, together with the corresponding conditional distribution, i.e.\ $\pi_{\ell \mid g} = \Pr(b_{i2}=\zeta_{2\ell} \mid b_{i1}=\zeta_{1g})$. When focusing on the estimated locations in the longitudinal data process, that is $\zeta_{1g}$, we may observe higher cognitive impairment when moving from the first to the last mixture component. On the other hand, for the dropout process, the estimated locations $\zeta_{2\ell}$ suggest that higher components correspond to a higher chance of dropping out from the study.
When looking at the estimated conditional probabilities, we may observe a link between lower (higher) values of $\zeta_{1g}$ and lower (higher) values of $\zeta_{2\ell}$. That is, participants with better cognitive functioning (i.e. with lower response values) are usually characterized by a lower probability of dropping out from the study. On the contrary, cognitively impaired participants (i.e. with higher response values) present a higher chance of dropping out prematurely from the study, even if there is still some overlap between the second and the third component in the dropout profile. \begin{table}[h!] \caption{Maximum likelihood estimates and conditional distribution for the random parameters}\label{tabprob} \centering \begin{tabular}{l | c c c | c} & \multicolumn{3}{c}{$\zeta_{2\ell}$} & \\ \hline $\zeta_{1g}$ & -15.053 & -8.701 & -3.378 & \\ \hline\hline 0.519 & 0.865 & 0.090 & 0.045 & 1 \\ 1.065 & 0.585 & 0.170 & 0.245 & 1 \\ 1.681 & 0.573 & 0.227 & 0.199 & 1 \\ 2.297 & 0.467 & 0.289 & 0.244 & 1 \\ 2.905 & 0.144 & 0.364 & 0.492 & 1 \\ \hline Tot. & 0.528 & 0.229 & 0.243 & 1 \end{tabular} \end{table} Looking at the parameter estimates obtained through the MNAR model approach, we may observe a certain degree of correlation between the random effects in the two equations. This suggests the presence of a potentially non-ignorable dropout process affecting the longitudinal outcome. However, such an influence cannot be formally tested, as we may fit the proposed model to the observed data only and derive estimates on the basis of strong assumptions on the behavior of the missing responses. Therefore, it is of interest to verify how assumptions on the missing data mechanism can influence parameter estimates.
\subsection{Sensitivity analysis: results}\label{sec_isni} To investigate the robustness of inference with respect to the assumptions on the missing data mechanism, we computed the matrix $ISNI_{\boldsymbol\Phi}$ according to the formulas provided in equation \eqref{eq:ISNIbase}. For each model parameter estimate $\hat \Phi_d$, we derived a global measure of its sensitivity to the MAR assumption by computing the norm, the minimum, and the maximum of $\lvert ISNI_{\hat \Phi_d}\rvert$, together with their ratios to the corresponding standard error estimates from the MAR model. \begin{table}[h!] \caption{MAR model estimates: ISNI norm, minimum and maximum (in absolute values), and ratio to the corresponding standard error.}\label{tabISNI} \centering \scalebox{0.85}{ \begin{tabular}{l | c c c c c c c} \hline & se & ISNI & norm(ISNI)/se & $\lvert ISNI\rvert$ & min$\lvert ISNI\rvert$/se & ISNI & max$\lvert ISNI\rvert$/se \\ Variable & & (norm) & & (min) & & (max) & \\ \hline $\boldsymbol{\zeta}_{11}$ & 0.117 & 0.0414 & 0.354 & 0.0014 & 0.012 & 0.0204 & 0.174 \\ $\boldsymbol{\zeta}_{12}$ & 0.074 & 0.0580 & 0.784 & 0.0016 & 0.022 & 0.0303 & 0.409 \\ $\boldsymbol{\zeta}_{13}$ & 0.074 & 0.044 & 0.595 & 0.0002 & 0.003 & 0.0255 & 0.345 \\ $\boldsymbol{\zeta}_{14}$ & 0.083 & 0.1044 & 1.258 & 0.0005 & 0.006 & 0.0527 & 0.635 \\ $\boldsymbol{\zeta}_{15}$ & 0.071 & 0.0088 & 0.124 & 0.0009 & 0.013 & 0.0045 & 0.063 \\ \textit{Age} & 0.008 & 0.0089 & 1.113 & 0.0001 & 0.013 & 0.0054 & 0.675 \\ \textit{Gender} & 0.042 & 0.0058 & 0.138 & 0.0003 & 0.007 & 0.0028 & 0.067 \\ \textit{Educ} & 0.068 & 0.0075 & 0.110 & 0.0001 & 0.001 & 0.004 & 0.059 \\ $APOE_{22-23}$ & 0.072 & 0.0111 & 0.154 & 0.0001 & 0.001 & 0.0074 & 0.103 \\ $APOE_{24}$ & 0.062 & 0.0123 & 0.198 & 0.0005 & 0.008 & 0.0051 & 0.082 \\ $APOE_{34-44}$ & 0.06 & 0.012 & 0.200 & 0.0009 & 0.015 & 0.0061 & 0.102 \\ $\sigma_{y}$ & 0.194 & 0.1123 & 0.579 & 0.0001 & 0.001 & 0.0824 & 0.425 \\ \hline \end{tabular} } \end{table} {By looking at the results 
reported in Table \ref{tabISNI}, we may observe that, as far as fixed model parameters are concerned, the global indexes we computed to investigate how estimates vary when moving away from the MAR assumption are all quite close to zero. The only remarkable exception is the \textit{age} variable. In this case, the \textit{ISNI} takes slightly higher values, and this is particularly evident when focusing on the standardized statistics. Higher \textit{ISNI} values may also be observed for the random intercepts. However, this is an expected result, as these parameters are (albeit indirectly) connected to the missingness process.} To further study the potential impact that assumptions on the missing data generating mechanism may have on the parameters of interest, we analyze how changes in the $\boldsymbol{\lambda}$ parameters affect the vector $\hat{\boldsymbol\Phi}$. In this respect, we considered the following two scenarios. \begin{itemize} \item[Scenario 1] We simulated $B=1000$ values for each element in $\boldsymbol{\lambda}$ from a uniform distribution, $\lambda_{g \ell}(b) \sim {\rm U}(-3,3)$ for $g=1,\dots,K_{1}-1$ and $\ell=1,\dots,K_{2}-1$. Then, based on the simulated values, we computed \[ \hat{\boldsymbol\Phi}(b)=\hat{\boldsymbol\Phi}({\bf 0})+ISNI_{\boldsymbol\Phi} \, \boldsymbol{\lambda}(b). \] \item[Scenario 2] We simulated $B=1000$ values for a scale constant $c$ from a uniform distribution, $c(b) \sim {\rm U}(-3,3)$. Then, based on the simulated values, we computed \[ \xi_{g \ell}(b) =\xi_{g \ell}({\bf 0}) + c(b) \hat{\lambda}_{g \ell}, \qquad g=1,\dots,K_{1}-1, \quad \ell=1,\dots,K_{2}-1, \] where $\hat{\lambda}_{g\ell}$ denotes the maximum likelihood estimate of $\lambda_{g\ell}$ under the MNAR model. This scenario allows us to consider perturbations in the component-specific masses, while preserving the overall dependence structure estimated through the proposed MNAR model. 
That is, it allows us to link changes in the longitudinal model parameters to increasing (respectively, decreasing) correlation between the random coefficients in the two profiles of interest. The corresponding (approximate) parameter estimates are computed as \[ \hat{\boldsymbol\Phi}(b)=\hat{\boldsymbol\Phi}({\bf 0})+ISNI_{\boldsymbol\Phi}\boldsymbol{\lambda}(b), \] where $\boldsymbol{\lambda}(b)=c(b) \hat{\boldsymbol{\lambda}}$. \end{itemize} {The first scenario is designed to study the general sensitivity of parameter estimates. That is, we aim at analyzing how model parameter estimates vary when random changes in $\boldsymbol{\lambda}$ (in any direction) occur. The second scenario starts from the estimated pattern of dependence between the random intercepts in the longitudinal and the missing data models and tries to gain insight into the changes in parameter estimates that would be registered if the correlation increased (in absolute value) with respect to the estimated one. In Figures \ref{FigureScenario1} and \ref{FigureScenario2} we report parameter estimates derived under Scenario 1 and Scenario 2, respectively. The red line and the grey bands in each graph correspond to the point and the $95\%$ interval estimates of model parameters under the MAR assumption.} When focusing on Figure \ref{FigureScenario1} (Scenario 1), it can be easily observed that only the parameter associated with the \textit{age} variable is slightly sensitive to changes in the assumptions about the ignorability of the dropout process. All the other estimates remain quite stable and, overall, within the corresponding $95\%$ MAR confidence interval. No particular pattern of dependence/correlation between the random coefficients can be linked to points outside the interval for the estimated effect of \textit{age}. 
{Rather, we observed that strong \emph{local} changes in the random coefficient probability matrix may cause positive (respectively, negative) changes in the \textit{age} effect. In particular, these correspond to changes in the upper-left or in the intermediate-right components of the matrix, that is, components with low values of both random coefficients (in the first case) or with high values of $\zeta_{2\ell}$ and intermediate values of $\zeta_{1g}$ (in the second). } Overall, the relative frequency of points within the corresponding MAR confidence interval is equal to $0.737$, which suggests a certain sensitivity to assumptions regarding the ignorability of the dropout process, even though the estimates always remain within a reasonable set. \begin{figure} \caption{Leiden 85+ Study: Sensitivity analysis according to Scenario 1} \centerline{\includegraphics[scale=0.55]{Sensitivity.pdf}} \label{FigureScenario1} \end{figure} \begin{figure} \caption{Leiden 85+ Study: Sensitivity analysis according to Scenario 2} \centerline{\includegraphics[scale=0.55]{Sensitivity2.pdf}} \label{FigureScenario2} \end{figure} When focusing on Figure \ref{FigureScenario2} (Scenario 2), we may observe that changes in the parameter estimates are more clearly linked to the correlation between the random effects in the two profiles. As in the former scenario, a slight sensitivity to departures from the MAR assumption is observed for the \textit{age} variable only. In this case, the relative frequency of points within the corresponding MAR confidence interval for the \textit{age} effect is equal to $0.851$, which suggests a lower sensitivity to assumptions on the ignorability of the dropout process when compared to that observed under Scenario 1. Here, high positive correlation between the random coefficients leads to MAR estimates that are lower than the corresponding MNAR counterparts. On the other hand, high negative correlation leads to MAR estimates that tend to be higher than the MNAR counterparts. 
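The two perturbation scenarios above can be sketched numerically as follows. The MAR estimates, the $ISNI_{\boldsymbol\Phi}$ matrix, and the direction $\hat{\boldsymbol\lambda}$ below are hypothetical stand-ins for the fitted quantities; the dimensions mirror the application, with $K_1=5$, $K_2=3$, and twelve model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: d parameters, (K1-1)*(K2-1) sensitivity parameters.
d = 12
n_lambda = (5 - 1) * (3 - 1)

# Hypothetical MAR estimates and ISNI matrix (stand-ins for the fitted values).
phi_mar = rng.normal(size=d)
isni = 0.01 * rng.normal(size=(d, n_lambda))

B = 1000

# Scenario 1: each lambda_{gl}(b) ~ U(-3, 3), propagated through the
# first-order approximation phi(b) = phi(0) + ISNI * lambda(b).
lam = rng.uniform(-3.0, 3.0, size=(B, n_lambda))
phi_scen1 = phi_mar + lam @ isni.T            # one row per replicate b

# Scenario 2: a common scale c(b) ~ U(-3, 3) multiplies a fixed direction
# lambda_hat, preserving the estimated dependence pattern.
lam_hat = rng.normal(size=n_lambda)           # hypothetical MNAR estimate
c = rng.uniform(-3.0, 3.0, size=(B, 1))
phi_scen2 = phi_mar + (c * lam_hat) @ isni.T
```

Each row of `phi_scen1` (or `phi_scen2`) can then be compared with the $95\%$ MAR interval estimates, as done in Figures \ref{FigureScenario1} and \ref{FigureScenario2}.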
The proposed approach to sensitivity analysis can be seen as a particular version of the local influence diagnostics developed in the context of regression models to check for influential observations via perturbations of individual-specific weights; see, e.g., \cite{Jans2003} and \cite{rakh2016,rakh2017} for more recent developments. Here, rather than perturbing individual observations, we perturb the weights associated with the group of subjects allocated to a given component. Obviously, a \emph{global} influence approach could be adopted as well, for example by looking at the \emph{mean score} approach detailed in \cite{whit2017}. \section{Conclusions} \label{sec:8} We defined a random coefficient based dropout model where the association between the longitudinal and the dropout process is modeled through discrete, outcome-specific, latent effects. A bi-dimensional representation for the random coefficient distribution was used, and a (possibly) different number of locations in each margin is allowed. A full probability matrix connecting the locations in one margin to those in the other was considered. The main advantage of this flexible representation of the random coefficient distribution is that the resulting MNAR model properly nests a model where the dropout mechanism is ignorable. This allows us to perform a (local) sensitivity analysis, based on the ISNI index, to check changes in model parameter estimates as we move away from the MAR assumption. The data application showed good robustness of all model parameter estimates. A slight sensitivity to assumptions on the missing data generating mechanism was observed only for the \textit{age} effect which, however, is always restricted to a reasonable set. \section*{Acknowledgement} We gratefully acknowledge Dr Ton de Craen and Dr Rudi Westendorp of the Leiden University Medical Centre for kindly providing the analyzed data.
\section{Introduction} \label{sec:intro} This paper is devoted to the mathematical analysis of the following class of parabolic systems: \begin{align}\label{CH1} & u_t - \Delta w = 0,\\ \label{CH2} & w = \delta \Delta^2 u - a(u) \Delta u - \frac{a'(u)}2 |\nabla u|^2 + f(u) + \epsi u_t, \end{align} on $(0,T) \times \Omega$, $\Omega$ being a bounded smooth subset of $\RR^3$ and $T>0$ an assigned final time. The restriction to the three-dimensional setting is motivated by physical applications. Similar, or even better, results are expected to hold in space dimensions 1 and 2. The system is coupled with the initial and boundary conditions \begin{align}\label{iniz-intro} & u|_{t=0} = u_0, \quext{in }\,\Omega,\\ \label{neum-intro} & \dn u = \dn w = \delta \dn \Delta u = 0, \quext{on }\,\partial\Omega,\ \quext{for }\,t\in(0,T) \end{align} and represents a variant of the Cahn-Hilliard model for phase separation in binary materials. The function $f$ stands for the derivative of a {\sl singular}\/ potential $F$ of {\sl double obstacle}\ type. Namely, $F$ is assumed to be $+\infty$ outside a bounded interval (assumed equal to $[-1,1]$ for simplicity), where the extrema correspond to the pure states. A physically significant example is given by the so-called Flory-Huggins logarithmic potential \begin{equation}\label{logpot} F(r)=(1-r)\log(1-r)+(1+r)\log(1+r) - \frac\lambda2 r^2, \quad \lambda\ge 0. \end{equation} As in this example, we will assume $F$ to be at least {\sl $\lambda$-convex}, i.e., convex up to a quadratic perturbation. In this way, we can also allow for singular potentials having more than two minima in the interval $[-1,1]$ (as happens in the case of the oil-water-surfactant models described below, where the third minimum appears in relation to the so-called ``microemulsion'' phase). 
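For the logarithmic potential \eqref{logpot}, the $\lambda$-convexity can be checked by a direct computation (recorded here for convenience):

```latex
F'(r)  = \log\frac{1+r}{1-r} - \lambda r, \qquad
F''(r) = \frac{1}{1-r} + \frac{1}{1+r} - \lambda
       = \frac{2}{1-r^2} - \lambda \;\ge\; 2-\lambda
\quad \text{for } r \in (-1,1),
```

so that $r \mapsto F(r) + \frac\lambda2 r^2$ is convex on $(-1,1)$, as required.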
We assume the coefficients $\delta,\epsi$ to be $\ge 0$, with the case $\delta>0$ giving rise to a {\sl sixth order}\ model and the case $\epsi>0$ related to possible viscosity effects that are likely to appear in several models of Cahn-Hilliard type (see, e.g., \cite{Gu}). The investigation of the limits as $\delta$ or $\epsi$ tends to zero provides a validation of these models as approximations of the limit fourth order model. The main novelty of system \eqref{CH1}-\eqref{CH2} is related to the presence of the nonlinear function $a$ in \eqref{CH2}, which is supposed to be smooth, bounded, and strongly positive (i.e., everywhere larger than some constant $\agiu>0$). Mathematically, the latter is an unavoidable assumption as we are mainly interested in the behavior of the problem as $\delta$ tends to $0$ and in the properties of the (fourth order) limit system $\delta=0$. On the other hand, at least in the physical context of the sixth order model, it would also be meaningful to allow $a$ to take negative values, as may happen in presence of the ``microemulsion'' phase (see \cite{GK93a,GK93b}). We will not deal with this situation, but we just point out that, as long as $\delta>0$ is fixed, it should create no additional mathematical difficulties since the nonlinear diffusion term is then dominated by the sixth order term. From the analytical point of view, as a basic observation we can notice that this class of systems has an evident variational structure. 
Indeed, (formally) testing \eqref{CH1} by $w$, \eqref{CH2} by $u_t$, taking the difference of the obtained relations, integrating with respect to~space variables, using the {\sl no-flux}\/ conditions \eqref{neum-intro}, and performing suitable integrations by parts, one readily gets the {\sl a priori}\/ bound \begin{equation}\label{energyineq} \ddt\calE\dd(u) + \| \nabla w \|_{L^2(\Omega)}^2 + \epsi \| u_t \|_{L^2(\Omega)}^2 = 0, \end{equation} which has the form of an {\sl energy equality} for the {\sl energy functional} \begin{equation}\label{defiE} \calE\dd(u)=\io \Big( \frac\delta2 |\Delta u|^2 + \frac{a(u)}2 |\nabla u|^2 + F(u) \Big), \end{equation} where the interface (gradient) part contains the nonlinear function $a$. In other words, the system \eqref{CH1}-\eqref{CH2} arises as the $(H^1)'$-gradient flow problem for the functional $\calE\dd$. While the literature on the fourth order Cahn-Hilliard model with logarithmic free energy is very wide (starting from the pioneering work \cite{DD} up to more recent works like, e.g., \cite{AW,GMS,MZ}; see also the recent review \cite{ChMZ} and the references therein), it seems that potentials of logarithmic type have never been considered in the case of a nonconstant coefficient $a$. Similarly, the sixth order Cahn-Hilliard type equations, which appear as models of various physical phenomena and have recently attracted notable interest in the mathematical literature (see the discussion below), seem not to have been studied so far in the case of logarithmic potentials. The sixth order system \eqref{CH1}-\eqref{CH2} arises as a model of the dynamics of ternary oil-water-surfactant mixtures in which three phases occupying a region $\Omega$ in $\RR^3$, microemulsion, almost pure oil and almost pure water, can coexist in equilibrium. The phenomenological Landau-Ginzburg theory for such mixtures has been proposed in a series of papers by Gompper et~al.~(see, e.g., \cite{GK93a,GK93b,GZ92} and other references in \cite{PZ11}). 
This theory is based on the free energy functional \eqref{defiE} with constant $\delta>0$ (in general, however, this coefficient can depend on $u$, see \cite{SS93}), and with $F(u)$, $a(u)$ approximated, respectively, by a sixth and a second order polynomial: \begin{equation}\label{apprFa} F(u)= (u+1)^2 (u^2+h_0) (u-1)^2, \qquad a(u) = g_0 + g_2 u^2, \end{equation} where the constant parameters $h_0,g_0,g_2$ are adjusted experimentally, $g_2>0$ and $h_0$, $g_0$ are of arbitrary sign. In this model, $u$ is the scalar, conserved order parameter representing the local difference between oil and water concentrations; $u=-1$, $u=1$, and $u=0$ correspond to oil-rich, water-rich and microemulsion phases, respectively, and the parameter $h_0$ measures the deviation from oil-water-microemulsion coexistence. The associated evolution system \eqref{CH1}-\eqref{CH2} has the standard Cahn-Hilliard structure. Equation \eqref{CH1} expresses the conservation law \begin{equation}\label{conslaw} u_t + \nabla \cdot j = 0 \end{equation} with the mass flux $j$ given by \begin{equation}\label{mflux} j = - M\nabla w. \end{equation} Here $M > 0$ is the constant mobility (we set $M=1$ for simplicity), and $w$ is the chemical potential difference between the oil and water phases. The chemical potential is defined by the constitutive equation \begin{equation}\label{chpot} w = \frac{\delta \calE\dd(u)}{\delta u} + \epsi u_t, \end{equation} where $ \frac{\delta \calE\dd(u)}{\delta u}$ is the first variation of the functional $ \calE\dd(u) $, and the constant $ \epsi \geq 0$ represents possible viscous effects. For energy \eqref{defiE} equation \eqref{chpot} yields \eqref{CH2}. We note also that the boundary conditions $ \dn u = \delta \dn \Delta u = 0 $ are standardly used in the frame of sixth order Cahn-Hilliard models due to their mathematical simplicity. Moreover, they are related to the variational structure of the problem in terms of the functional \eqref{defiE}. 
However, other types of boundary conditions for $u$ might be considered as well, at the price of technical complications in the proofs. Concerning, instead, the condition $ \dn w = 0$, in view of \eqref{mflux}, it simply represents mass isolation at the boundary of $\Omega$. The system \eqref{CH1}-\eqref{neum-intro} with the functions $F(u), a(u)$ in the polynomial form \eqref{apprFa}, and with no viscous term ($\epsi=0$), has recently been studied in~\cite{PZ11}. It has been proved there that for a sufficiently smooth initial datum $u_0$ the system admits a unique global solution in the strong sense. The sixth order Cahn-Hilliard type equation with the same structure as \eqref{CH1}-\eqref{CH2}, $\delta > 0$, polynomial $F(u)$, and negative constant $a$, also arises as the so-called phase field crystal (PFC) atomistic model of crystal growth, developed by Elder et al., see, e.g., \cite{EG04, BGE06, BEG08}, and \cite{GDL09} for an overview and up-to-date references. It is also worth mentioning a class of sixth order convective Cahn-Hilliard type equations with a different (nonconservative) structure than \eqref{CH1}-\eqref{CH2}. This type of equation arises in particular as a model of the faceting of a growing crystalline surface, derived by Savina et al.~\cite{S03} (for a review of other convective 4th and 6th order Cahn-Hilliard models see \cite{KEMW08}). In this model, contrary to~\eqref{CH1}-\eqref{CH2}, the order parameter $u$ is not a conserved quantity due to the presence of a force-like term related to the deposition rate. This class of models has recently been studied mathematically in the one- and two-dimensional cases by Korzec et al.~\cite{KEMW08, KNR11,KR11}. Finally, let us note that in the case $\delta=0$, $a(u)=\const>0$, the functional \eqref{defiE} represents the classical Cahn-Hilliard free energy \cite{Ca,CH}. The original Cahn-Hilliard free energy derivation has been extended by Lass et al. 
\cite{LJS06} to account for the composition dependence of the gradient energy coefficient $a(u)$. For a face-centered cubic crystal the following expressions for $a(u)$ have been derived, depending on the level of approximation of the nearest-neighbor interactions: \begin{equation}\label{appra} a(u) = a_0 + a_1 u + a_2 u^2, \end{equation} where $a_0>0$, $a_1,a_2\in\RR$ in the case of four-body interactions, $a_2=0$ in the case of three-body interactions, and $a_1=a_2=0$ in the case of pairwise interactions. Numerical experiments in \cite{LJS06} indicate that these three different approximations (all reflecting the face-centered cubic crystal symmetry) have a substantial effect on the shape of the equilibrium composition profile and on the interfacial energy. A specific free energy with a composition dependent gradient energy coefficient $a(u)$ also arises in the modelling of phase separation in polymers \cite{dG80}. This energy, known as the Flory-Huggins-de Gennes one, has the form \eqref{defiE} with $\delta=0$, $F(u)$ being the logarithmic potential \eqref{logpot}, and the singular coefficient \begin{equation}\label{adG} a(u) = \frac1{(1-u)(1+u)}. \end{equation} We mention also that various formulations of phase-field models with the gradient energy coefficient dependent on the order parameter (and possibly on other fields) appear, e.g., in~\cite{Aif86,BS96}. Our objective in this paper is threefold. First, we would like to extend the results of \cite{PZ11} both to the viscous problem ($\epsi>0$) and to the case when the configuration potential is {\sl singular}\ (e.g., of the form \eqref{logpot}). While the first extension is almost straightforward, considering constraint (singular) terms in fourth order equations (\eqref{CH2}, in the specific case) gives rise to regularity problems since it is not possible, to our knowledge, to estimate all the terms of equation~\eqref{CH2} in $L^p$-spaces. 
For this reason, the nonlinear term $f(u)$ has to be understood in a weaker form, namely, as a selection of a nonlinear, and possibly multivalued, mapping acting from $V=H^1(\Omega)$ to $V'$. This involves some monotone operator techniques that are developed in a specific section of the paper. As a second step, we investigate the behavior of the solutions to the sixth order system as the parameter $\delta$ tends to $0$. In particular, we would like to show that, at least up to subsequences, we can obtain in the limit suitably defined solutions to the fourth order system obtained by setting $\delta = 0$ in \eqref{CH2}. Unfortunately, we are able to prove this fact only under additional conditions. The reason is that the natural estimate required to control the second space derivatives of $u$, i.e., testing \eqref{CH2} by $-\Delta u$, is compatible with the nonlinear term in $\nabla u$ only under additional assumptions on $a$ (e.g., if $a$ is concave). This nontrivial fact depends on an integration by parts formula devised by Dal Passo, Garcke and Gr\"un in \cite{DpGG} in the frame of the thin-film equation, whose use is necessary to control the nonlinear gradient term. It is however likely that the use of more refined integration by parts techniques may allow one to control the nonlinear gradient term under more general conditions on $a$. Since we are able to take the limit $\delta\searrow0$ only in special cases, in the subsequent part of the paper we address the fourth order problem by using a direct approach. In this way, we can obtain existence of a weak solution under general conditions on $a$ (we notice, however, that uniqueness is no longer guaranteed for $\delta=0$). The proof of existence is based on an ``ad hoc'' regularization of the equations by means of a system of phase-field type. This kind of approach has proved to be effective also in the frame of other types of Cahn-Hilliard equations (see, e.g., \cite{BaPa05}). 
Local existence for the regularized system is then shown by means of the Schauder theorem, and, finally, the regularization is removed by means of suitable a priori estimates and compactness methods. This procedure involves some technicalities since parabolic spaces of H\"older type have to be used for the fixed point argument. Indeed, the use of Sobolev techniques seems unsuitable due to the nonlinearity in the highest order term, which prevents compactness of the fixed point map with respect to Sobolev norms. A further difficulty is related to the necessity of estimating the second order space derivatives of $u$ in the presence of the nonlinear term in the gradient. This is obtained by introducing a proper transformed variable and rewriting \eqref{CH2} in terms of it. Proceeding in this way, we can get rid of that nonlinearity while still exploiting the good monotonicity properties of $f$. We note here that a different method, based on entropy estimates, could also be used to estimate $\Delta u$ without making the change of variable; the latter, however, seems to be the simpler technique. Finally, in the last section of the paper, we discuss further properties of weak solutions. More precisely, we address the problems of uniqueness (only for the 4th order system, since in the case $\delta>0$ it is always guaranteed) and of parabolic time-regularization of solutions (both for the 6th and for the 4th order system). We are able to prove such properties only when the energy functional $\calE\dd$ is $\lambda$-convex (so that its gradient is monotone up to a linear perturbation). In terms of the coefficient $a$, this corresponds to asking that $a$ be a {\sl convex}\/ function and, moreover, that $1/a$ be {\sl concave}\/ (cf.~\cite{DNS} for generalizations and further comments regarding this condition). If these conditions fail, then the gradient of the energy functional exhibits a nonmonotone structure in terms of the space derivatives of the highest order. 
For this reason, proving an estimate of contractive type (which would be required for uniqueness) appears to be difficult in that case. As a final result, we will show that, both in the 6th and in the {\sl viscous}\/ 4th order case, all weak solutions satisfy the energy {\sl equality}\/ \eqref{energyineq}, at least in an integrated form (and not just an energy inequality). This property is the starting point for proving existence of the global attractor for the dynamical process associated with system \eqref{CH1}-\eqref{CH2}, an issue that we intend to investigate in a forthcoming paper. Actually, it is not difficult to show that the set of initial data having finite energy constitutes a complete metric space (see, e.g., \cite[Lemma~3.8]{RS}) which can be used as a {\sl phase space}\/ for the system. Then, by applying the so-called ``energy method'' (cf., e.g., \cite{MRW,Ba1}), one can see that the energy equality implies precompactness of trajectories for $t\nearrow\infty$ with respect to the metric of the phase space. In turn, this gives existence of the global attractor with respect to the same metric. On the other hand, the question whether the energy equality holds in the nonviscous 4th order case seems to be more delicate and, actually, we could not give a positive answer to it. It is also worth noticing an important issue concerning the sharp interface limit of the Cahn-Hilliard equation with a nonlinear gradient energy coefficient $a(u)$. To our knowledge, this issue has not been addressed in the literature so far. Let us mention that, using the method of matched asymptotic expansions, the sharp interface limits of the Cahn-Hilliard equation with constant coefficient $a$ have been investigated by Pego \cite{Peg89} and rigorously by Alikakos et al. \cite{ABC94}. Such a method has also been successfully applied to a number of phase field models of phase transition problems, see, e.g., \cite{CF88}, \cite{C90}. 
In view of the various physical applications described above, it would be of interest to apply matched asymptotic expansions in the case of a nonlinear coefficient $a(u)$ to investigate what kind of corrections it may introduce to the conditions on the sharp interface. The plan of the paper is as follows. In the next Section~\ref{sec:main}, we will report our notation and hypotheses, together with some general tools that will be used in the proofs. Section~\ref{sec:6th} will contain the analysis of the sixth order model. The limit $\delta\searrow 0$ will then be analyzed in Section~\ref{sec:6thto4th}. Section~\ref{sec:4th} will be devoted to the analysis of the fourth order model. Finally, in Section~\ref{sec:uniq} uniqueness and regularization properties of the solutions will be discussed, as well as the validity of the energy equality. \medskip \noindent {\bf Acknowledgment.}~~The authors are grateful to Prof.~Giuseppe Savar\'e for fruitful discussions about the strategy of some proofs. \section{Notations and technical tools} \label{sec:main} Let $\Omega$ be a smooth bounded domain of $\RR^3$ with boundary $\Gamma$, $T>0$ a given final time, and let $Q:=(0,T)\times\Omega$. We let $H:=L^2(\Omega)$, endowed with the standard scalar product $(\cdot,\cdot)$ and norm $\| \cdot \|$. For $s>0$ and $p\in[1,\infty]$, we use the notation $W^{s,p}(\Omega)$ to indicate Sobolev spaces of positive (possibly fractional) order. We also set $H^s(\Omega):=W^{s,2}(\Omega)$ and let $V:=H^1(\Omega)$. We denote by $\duav{\cdot,\cdot}$ the duality between $V'$ and $V$ and by $\|\cdot\|_X$ the norm in the generic Banach space $X$. We identify $H$ with $H'$ in such a way that $H$ can be seen as a subspace of $V'$ or, in other words, $(V,H,V')$ form a Hilbert triplet. 
We make the following assumptions on the nonlinear terms in \eqref{CH1}-\eqref{CH2}: \begin{align}\label{hpa1} & a \in C^2_b(\RR;\RR), \quad \esiste \agiu,\asu>0:~~ \agiu \le a(r)\le \asu~~\perogni r\in \RR;\\ \label{hpa2} & \esiste a_-,a_+\in [\agiu,\asu]:~~ a(r)\equiv a_-~~\perogni r\le-2, \quad a(r)\equiv a_+~~\perogni r\ge 2;\\ \label{hpf1} & f\in C^1((-1,1);\RR), \quad f(0)=0, \quad \esiste\lambda\ge 0:~~f'(r)\ge -\lambda~~\perogni r\in (-1,1);\\ \label{hpf2} & \lim_{|r|\to 1}f(r)r = \lim_{|r|\to 1}\frac{f'(r)}{|f(r)|} = + \infty. \end{align} The latter condition in \eqref{hpf2} is just a technical hypothesis which is actually verified in all significant cases. We also notice that, due to the choice of a singular potential (mathematically represented here by assumptions \eqref{hpf1}-\eqref{hpf2}), any weak solution $u$ will take its values only in the physical interval $[-1,1]$. For this reason, the behavior of $a$ is significant only in that interval, and we have extended it outside $[-1,1]$ just for the purpose of properly constructing the approximating problem (see Subsection~\ref{subsec:appr} below). Note that our assumptions on $a$ are not in conflict with \eqref{apprFa} or \eqref{appra}, since these conditions (or, more generally, any condition on the values of $a(u)$ for large $u$) make sense in the different situation of a function $f$ with polynomial growth (which does not constrain $u$ to the interval $(-1,1)$). It should be pointed out, however, that assumptions \eqref{hpa1}-\eqref{hpf2} do not admit the singular Flory-Huggins-de Gennes free energy model with $a(u)$ given by \eqref{adG}. We expect that the analysis of such a singular model could require different techniques. In \eqref{hpa1}, $C^2_b$ denotes the space of functions that are continuous and globally bounded together with their derivatives up to the second order. 
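For instance, for the logarithmic potential \eqref{logpot} the hypotheses \eqref{hpf1}-\eqref{hpf2} can be verified by a routine check (recorded here as an illustration):

```latex
f(r) = \log\frac{1+r}{1-r} - \lambda r, \qquad f(0)=0, \qquad
f'(r) = \frac{2}{1-r^2} - \lambda \;\ge\; -\lambda \quad \text{on } (-1,1),
% and, as r -> 1^- (the case r -> -1^+ being symmetric):
f(r)\, r \sim \log\frac{2}{1-r} \to +\infty, \qquad
\frac{f'(r)}{\lvert f(r)\rvert} \sim \frac{2}{(1-r^2)\log\frac{2}{1-r}} \to +\infty.
```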
Concerning $f$, \eqref{hpf1} states that it can be written in the form \begin{equation}\label{f0} f(r)=f_0(r)-\lambda r, \end{equation} i.e., as the difference between a (dominating) monotone part $f_0$ and a linear perturbation. By \eqref{hpf1}-\eqref{hpf2}, we can also set, for $r\in(-1,1)$, \begin{equation}\label{F0} F_0(r):=\int_0^r f_0(s)\,\dis \qquext{and~}\, F(r):=F_0(r)-\frac\lambda2 r^2, \end{equation} so that $F'=f$. Notice that $F_0$ may be bounded in $(-1,1)$ (e.g., this occurs in the case of the logarithmic potential \eqref{logpot}). If this is the case, we extend it by continuity to $[-1,1]$. Then, $F_0$ is set to be $+\infty$ either outside $(-1,1)$ (if it is unbounded in $(-1,1)$) or outside $[-1,1]$ (if it is bounded in $(-1,1)$). This standard procedure permits us to penalize the non-physical values of the variable $u$ and to interpret $f_0$ as the subdifferential of the (extended) convex function $F_0:\RR\to[0,+\infty]$. That said, we define a number of operators. First, we set \begin{equation}\label{defiA} A:V\to V', \qquad \duav{A v, z}:= \io \nabla v \cdot \nabla z, \quext{for }\, v,z \in V. \end{equation} Then, we define \begin{equation}\label{defiW} W:=\big\{z\in H^2(\Omega):~\dn z=0~\text{on }\Gamma\big\} \end{equation} and recall that (a suitable restriction of) $A$ can be seen as an unbounded linear operator on $H$ with domain $W$. The space $W$ is endowed with the natural $H^2$-norm. We then introduce \begin{equation}\label{deficalA} \calA: W \to H, \qquad \calA(z) := - a(z)\Delta z - \frac{a'(z)}2 |\nabla z|^2. \end{equation} It is a standard matter to check that, indeed, $\calA$ takes its values in~$H$. \subsection{Weak subdifferential operators} \label{sec:weak} To state the weak formulation of the 6th order system, we need to introduce a proper relaxed form of the maximal monotone operator associated with the function $f_0$ and acting in the duality between $V'$ and $V$ (rather than in the scalar product of $H$). 
Actually, it is well known (see, e.g., \cite[Ex.~2.1.3, p.~21]{Br}) that $f_0$ can be interpreted as a maximal monotone operator on~$H$ by setting, for $v,\xi\in H$, \begin{equation} \label{betaL2} \xi = f_0(v)\quext{in $H$}~~~\Longleftrightarrow~~~ \xi(x) = f_0(v(x))\quext{a.e.~in $\Omega$}. \end{equation} When no danger of confusion occurs, the new operator on $H$ will still be denoted by $f_0$. Correspondingly, $f_0$ is the $H$-subdifferential of the convex functional \begin{equation} \label{betaL2-2} \calF_0:H\to[0,+\infty], \qquad \calF_0(v):= \io F_0(v(x)), \end{equation} where the integral might possibly be $+\infty$ (this happens, e.g., when $|v|>1$ on a set of strictly positive Lebesgue measure). The weak form of $f_0$ can be introduced by setting \begin{equation} \label{betaV} \xi\in \fzw(v) \Longleftrightarrow \duav{\xi,z-v}\le \calF_0(z)-\calF_0(v) \quext{for any $z\in V$}. \end{equation} Actually, this is nothing but the definition of the subdifferential of (the restriction to $V$ of) $\calF_0$ with respect to the duality pairing between $V'$ and $V$. In general, $\fzw$ can be a {\sl multivalued}\/ operator; namely, $\fzw(v)$ is a {\sl subset}\/ of $V'$ that may contain more than one element. It is not difficult to prove (see, e.g., \cite[Prop.~2.5]{BCGG}) that, if $v\in V$ and $f_0(v)\in H$, then \begin{equation} \label{betavsbetaw} \{f_0(v)\}\subset\fzw(v). \end{equation} Moreover, \begin{equation} \label{betavsbetaw2} \text{if }\,v\in V~\,\text{and }\, \xi \in \fzw(v) \cap H, \quext{then }\,\xi = f_0(v) ~\,\text{a.e.~in }\,\Omega. \end{equation} In general, the inclusion in \eqref{betavsbetaw} is strict and, for instance, it can happen that $f_0(v)\not\in H$ (i.e., $v$ does not belong to the $H$-domain of $f_0$), while $\fzw(v)$ is nonempty. Nevertheless, we still have some ``automatic'' gain of regularity for any element of $\fzw(v)$: \bepr\label{misura} Let $v\in V$, $\xi\in\fzw(v)$. 
Then, $\xi$ can be seen as an element of the space ${\cal M}({\overline \Omega})=C^0(\barO)'$ of the bounded real-valued Borel measures on $\overline \Omega$. More precisely, there exists $T\in {\cal M}({\overline \Omega})$, such that \begin{equation} \label{identif} \duav{\xi,z}=\ibaro z\,\diT \qquext{for any~\,$z\in V\cap C^0(\overline\Omega)$}. \end{equation} \empr \begin{proof} Let $z\in C^0(\overline \Omega)\cap V$ with $z\not = 0$. Then, using definition \eqref{betaV}, it is easy to see that \begin{align} \label{prova-meas} \duav{\xi,z} & = 2\| z \|_{L^\infty(\Omega)} \duavg{\xi,\frac{z}{2\| z \|_{L^\infty(\Omega)}}} \le 2\| z \|_{L^\infty(\Omega)} \bigg( \duav{\xi,v}+\calF_0\Big( \frac{z}{2\| z \|_{L^\infty(\Omega)}} \Big)-\calF_0(v) \bigg)\\ & \le 2\| z \|_{L^\infty(\Omega)} \Big( |\duav{\xi,v}|+|\Omega|\big(F_0(-1/2)+F_0(1/2)\big) \Big). \end{align} This shows that the linear functional $z\mapsto\duav{\xi,z}$, defined on $C^0(\overline \Omega)\cap V$ (which is a dense subspace of $C^0(\overline \Omega)$, since $\Omega$ is smooth), is continuous with respect to the sup-norm. Thus, by the Riesz representation theorem, it can be represented over $C^0(\barO)$ by a measure $T\in {\cal M}({\overline \Omega})$. \end{proof} \noinden More generally, we say that a functional $\xi\in V'$ belongs to the space $V'\cap {\cal M}(\overline \Omega)$ provided that $\xi$ is continuous with respect to the sup-norm on $\overline \Omega$. In this case, we can use \eqref{identif} and say that the measure $T$ represents $\xi$ on ${\cal M}(\overline \Omega)$. We now recall a result \cite[Thm.~3]{brezisart} that will be exploited in the sequel. \bete\label{teobrezis} Let $v\in V$, $\xi\in \fzw(v)$. 
Then, denoting by $\xi_a+\xi_s=\xi$ the Lebesgue decomposition of $\xi$, with $\xi_a$ and $\xi_s$ standing for the absolutely continuous and the singular part of $\xi$, respectively, we have \begin{align}\label{bre1} & \xi_a v\in L^1(\Omega),\\ \label{bre2} & \xi_a(x) = f_0(v(x)) \qquext{for a.e.~$x\in\Omega$,}\\ \label{bre3} & \duav{\xi,v} - \io \xi_a v\,\dix = \sup \bigg\{\ibaro z\,\dixi_s,~z\in C^0(\overline\Omega),~ z(\overline\Omega)\subset[-1,1] \bigg\}. \end{align} \ente \noinden Actually, in \cite{brezisart} a slightly different result is proved, where $V$ is replaced by $H^1_0(\Omega)$ and, correspondingly, ${\cal M}(\overline \Omega)$ is replaced by ${\cal M}(\Omega)$ (i.e., the dual of $C_c^0(\Omega)$). Nevertheless, thanks to the smoothness of $\Omega$, one can easily realize that the approximation procedure used in the proof of the theorem can be extended to cover the present situation. The only difference is that the singular part $\xi_s$ may also be supported on the boundary. \smallskip Let us now recall that, given a pair $X,Y$ of Banach spaces, a sequence of (multivalued) operators ${\cal T}_n:X\to 2^Y$ is said to G-converge (strongly) to ${\cal T}$ iff \begin{equation}\label{defGconv} \perogni (x,y)\in {\cal T}, \quad \esiste (x_n,y_n)\in {\cal T}_n \quext{such that \, $(x_n,y_n)\to(x,y)$~~strongly in }\, X\times Y. \end{equation} We would like to apply this condition to an approximation of the monotone function $f_0$ that we now construct. Namely, for $\sigma\in(0,1)$ (intended to go to $0$ in the limit), we would like to have a family $\{f\ssi\}$ of monotone functions such that \begin{align}\label{defifsigma} & f\ssi\in C^{1}(\RR), \qquad f\ssi'\in L^{\infty}(\RR), \qquad f\ssi(0)=0, \\ \label{convcomp} & f\ssi\to f_0 \quext{uniformly on compact subsets of }\,(-1,1). 
\end{align} Moreover, setting \begin{equation}\label{defiFsigma} F\ssi(r):=\int_0^r f\ssi(s)\,\dis, \quext{for }\,r\in\RR, \end{equation} we ask that \begin{equation}\label{propFsigma} F\ssi(r) \ge \lambda r^2 - c, \end{equation} for some $c\ge 0$ independent of $\sigma$ and for all $r\in\RR$, $\sigma\in (0,1)$, where $\lambda$ is as in \eqref{hpf1} (note that the analogue of the above property holds for $F$ thanks to the first condition in \eqref{hpf2}). Moreover, we ask the monotonicity condition \begin{equation}\label{defifsigma2} F_{\sigma_1}(r)\le F_{\sigma_2}(r) \qquext{if }\,\sigma_2\le \sigma_1 \quext{and for all }\, r\in \RR. \end{equation} Finally, on account of the latter condition in \eqref{hpf2}, we require that \begin{equation}\label{goodmono} \perogni m>0,~~\esiste C_m\ge 0:~~~ f\ssi'(r) - m |f\ssi(r)| \ge - C_m, \quad \perogni r\in[-2,2] \end{equation} with $C_m$ being independent of $\sigma$. Notice that it is sufficient to ask the above property for $r\in [-2,2]$. The details of the construction of a family $\{f\ssi\}$ fulfilling \eqref{defifsigma}-\eqref{goodmono} are standard and hence we leave them to the reader. For instance, one can first take Yosida regularizations (see, e.g., \cite[Chap.~2]{Br}) and then mollify in order to get additional smoothness. Thanks to the monotonicity property \eqref{defifsigma2}, we can apply \cite[Thm.~3.20]{At}, which gives that \begin{align}\label{Gforte} & f\ssi\quext{G-converges to }\,f_0 \quext{in \,$H\times H$},\\ \label{Gdebole} & f\ssi\quext{G-converges to }\,\fzw \quext{in \,$V\times V'$}. \end{align} A notable consequence of G-convergence is the following property, whose proof can be obtained by slightly modifying \cite[Prop.~1.1, p.~42]{barbu}: \bele\label{limimono} Let $X$ be a Hilbert space, ${\cal B}\ssi$, ${\cal B}$ be maximal monotone operators in $X\times X'$ such that \begin{equation}\label{Gastr} {\cal B}\ssi\quext{G-converges to }\, {\cal B} \quext{in }\,X\times X', \end{equation} as $\sigma\searrow0$. 
Let also, for any $\sigma>0$, $v\ssi\in X$, $\xi\ssi\in X'$ such that $\xi\ssi\in{\cal B}\ssi(v\ssi)$. Finally, let us assume that, for some $v\in X$, $\xi\in X'$, there holds \begin{align}\label{Gastrnew} & v\ssi\to v\quext{weakly in }\, X, \qquad \xi\ssi\to \xi\quext{weakly in }\,X',\\[1mm] \label{Gastrnew-2} & \limsup_{\sigma\searrow0} \duavg{\xi\ssi,v\ssi}_X \le \duavg{\xi,v}_X. \end{align} Then, $\xi\in {\cal B}(v)$. \enle \noinden Next, we present an integration by parts formula: \bele\label{BSesteso} Let $u\in W\cap H^3(\Omega)$, $\xi\in V'$ such that $\xi\in \fzw(u)$. Then, we have that \begin{equation}\label{majozero} \duav{\xi,Au}\geq 0. \end{equation} \enle \begin{proof} Let us first note that the duality above surely makes sense in the assigned regularity setting. Actually, we have that $Au\in V$. We then consider the elliptic problem \begin{equation}\label{elpromon} u\ssi\in V, \qquad u\ssi + A^2 u\ssi + f\ssi (u\ssi) = u + A^2 u + \xi \text{~~~~in \,$V'$.} \end{equation} Since $f\ssi$ is monotone and Lipschitz continuous and the above \rhs\ lies in $V'$, it is not difficult to show that the above problem admits a unique solution $u\ssi \in W \cap H^3(\Omega)$. Moreover, the standard a priori estimates for $u\ssi$ lead to the following convergence relations, which hold, for some $v\in V$ and $\zeta\in V'$, up to the extraction of (non-relabelled) subsequences (in fact, uniqueness of the limit guarantees them for the whole family as $\sigma\searrow0$): \begin{align}\label{stlemma11} & u\ssi\longrightarrow v \quext{weakly in }\, H^3(\Omega)~~ \text{and strongly in }\,W,\\ \label{stlemma11.2} & A^2 u\ssi\longrightarrow A^2 v \quext{weakly in }\, V',\\ \label{stlemma11.3} & f\ssi(u\ssi)\longrightarrow \zeta \quext{weakly in }\, V'. \end{align} As a byproduct, the limit functions satisfy $ v+A^2v+\zeta=u+A^2u+\xi$ in~$V'$. 
Moreover, we deduce from \eqref{elpromon} \begin{equation}\label{contolemma11} \big(f\ssi(u\ssi),u\ssi\big) = \duavg{ u + A^2 u + \xi - u\ssi - A^2u\ssi, u\ssi}, \end{equation} whence \begin{equation}\label{old11} \lim_{\sigma \rightarrow 0} \,\big(f\ssi(u\ssi),u\ssi\big) = \duavg{u + A^2 u + \xi - v - A^2 v, v} = \duav{\zeta,v}. \end{equation} Then, on account of \eqref{stlemma11}, \eqref{stlemma11.3} and \eqref{old11}, Lemma~\ref{limimono} (with \eqref{Gastr} given by \eqref{Gdebole}) applied to the sequence $\{f\ssi(u\ssi)\}$ readily gives $\zeta\in \fzw(v)$. By uniqueness, $v=u$ and $\zeta=\xi$. Let us finally verify the required property. Actually, for $\sigma>0$, thanks to the monotonicity of $f\ssi$ we have \begin{equation}\label{contolemma12} 0 \leq \big(f\ssi(u\ssi), A u\ssi \big) = \duavg{ u + A^2 u+\xi-u\ssi-A^2u\ssi, A u\ssi}. \end{equation} Taking the supremum limit, we then obtain \begin{equation}\label{contolemma12b} 0 \leq \limsup_{\sigma\searrow 0} \duavg{ u + A^2 u+\xi-u\ssi-A^2u\ssi, A u\ssi} = \duavg{ u + A^2 u+ \xi - u, Au} - \liminf_{\sigma\searrow 0} \duavg{A^2u\ssi, A u\ssi}. \end{equation} Then, using \eqref{stlemma11} and semicontinuity of norms with respect to weak convergence, \begin{equation}\label{contolemma12c} - \liminf_{\sigma\searrow 0} \duavg{A^2u\ssi, A u\ssi} = - \liminf_{\sigma\searrow 0} \| \nabla A u\ssi \|^2 \le - \| \nabla A u \|^2 = - \duavg{A^2 u , A u}, \end{equation} whence we finally obtain \begin{equation}\label{old12} 0 \le \duav{u+A^2u+\xi-u-A^2u,Au} = \duav{\xi,Au}, \end{equation} as desired. \end{proof} \noinden Next, we recall a further integration by parts formula that extends the classical result \cite[Lemma~3.3, p.~73]{Br} (see, e.g., \cite[Lemma~4.1]{RS} for a proof): \bele\label{BResteso} Let $T>0$ and let $\calJ:H\to [0,+\infty]$ be a convex, lower semicontinuous and proper functional. 
Let $u\in \HUVp \cap \LDV$, $\eta\in \LDV$ and let $\eta(t)\in \de\calJ(u(t))$ for a.e.~$t\in(0,T)$, where $\de\calJ$ is the $H$-subdifferential of $\calJ$. Moreover, let us suppose the coercivity property \begin{equation}\label{coerccalJ} \esiste k_1>0,~k_2\ge 0 \quext{such that }\,\calJ(v) \ge k_1 \| v \|^2 - k_2 \quad\perogni v\in H. \end{equation} Then, the function $t\mapsto \calJ(u(t))$ is absolutely continuous in $[0,T]$ and \begin{equation}\label{ipepardiff} \ddt \calJ(u(t)) = \duav{u_t(t),\eta(t)} \quext{for a.e.~}\, t\in (0,T). \end{equation} In particular, integrating in time, we have \begin{equation}\label{ipepars} \int_s^t \duav{u_t(r),\eta(r)}\,\dir = \calJ(u(t)) - \calJ(u(s)) \quad\perogni s,t\in [0,T]. \end{equation} \enle \noinden We conclude this section by stating an integration by parts formula for the operator $\calA$. \bele\label{lemma:ipp} Let $a$ satisfy \eqref{hpa1} and let either \begin{equation} \label{x11} v \in \HUH \cap \LDW \cap L^\infty(Q), \end{equation} or \begin{equation} \label{x12} v \in \HUVp \cap \LIW \cap L^2(0,T;H^3(\Omega)). \end{equation} Then, the function \begin{equation} \label{x13} t\mapsto \io \frac{a(v(t))}2 | \nabla v(t) |^2 \end{equation} is absolutely continuous over $[0,T]$. Moreover, for all $s,t\in [0,T]$ we have that \begin{equation} \label{x14} \int_s^t \big(\calA(v(r)),v_t(r)\big)\,\dir = \io \frac{a(v(t))}2 |\nabla v(t)|^2 - \io \frac{a(v(s))}2 |\nabla v(s)|^2, \end{equation} where, in the case \eqref{x12}, the scalar product in the integral on the \lhs\ has to be replaced with the duality $\duav{v_t(r),\calA(v(r))}$. \enle \begin{proof} We first notice that \eqref{x13}-\eqref{x14} surely hold if $v$ is smoother. Then, we can proceed by first regularizing $v$ and then passing to the limit. Namely, we define $v\ssi$, a.e.~in~$(0,T)$, as the solution of the singular perturbation problem \begin{equation} \label{co93} v\ssi + \sigma A v\ssi = v, \quext{for }\, \sigma\in (0,1). 
\end{equation} Then, in the case \eqref{x11}, we have \begin{equation} \label{co93-b} v\ssi \in H^1(0,T;W) \cap L^2(0,T;H^4(\Omega)), \end{equation} whereas, if \eqref{x12} holds, we get \begin{equation} \label{co93-c} v\ssi \in H^1(0,T;V) \cap L^\infty(0,T;H^4(\Omega)). \end{equation} Moreover, proceeding as in \cite[Appendix]{CGG} (cf., in particular, Proposition~6.1 therein) and applying the Lebesgue dominated convergence theorem in order to control the dependence on the time variable, we can easily prove that \begin{equation} \label{y11} v\ssi \to v \quext{strongly in }\,\HUH \cap \LDW ~~\text{and weakly star in }\, L^\infty(Q) \end{equation} (the latter condition following from the maximum principle), if \eqref{x11} holds, or \begin{equation} \label{y12} v\ssi \to v \quext{strongly in }\,\HUVp \cap L^2(0,T;H^3(\Omega)) ~~\text{and weakly star in }\, \LIW, \end{equation} if \eqref{x12} is satisfied instead. Now, the functions $v\ssi$, being smooth, surely satisfy the analogue of \eqref{x14}: \begin{equation} \label{x14ssi} \int_s^t \big(\calA(v\ssi(r)),v_{\sigma,t}(r)\big)\,\dir = \io \frac{a(v\ssi(t))}2 |\nabla v\ssi(t)|^2 - \io \frac{a(v\ssi(s))}2 |\nabla v\ssi(s)|^2, \end{equation} for all $s,t\in[0,T]$. Let us prove that we can take the limit $\sigma\searrow0$, considering first the case \eqref{x11}. In this case, using \eqref{y11} and standard compactness results, it is not difficult to check that \begin{equation} \label{co93-b2} \calA(v\ssi) \to \calA(v), \quext{(at least) weakly in }\,L^2(0,T;H). 
\end{equation} In particular, to control the square gradient term in $\calA$, we use the Gagliardo-Nirenberg inequality (cf.~\cite{Ni}) \begin{equation}\label{ineq:gn} \| \nabla z \|_{L^4(\Omega)} \le c\OO \| z \|_{W}^{1/2} \| z \|_{L^\infty(\Omega)}^{1/2} + \| z \| \qquad \perogni z \in W, \end{equation} so that, thanks also to \eqref{hpa1}, \begin{equation}\label{conseq:gn} \big\| a'(v\ssi) |\nabla v\ssi|^2 \big\|_{\LDH} \le \| a'(v\ssi) \|_{L^\infty(Q)} \| \nabla v\ssi \|_{L^4(Q)}^2 \le c \| v\ssi \|_{L^\infty(Q)} \big( 1 + \| A v\ssi \|_{\LDH} \big), \end{equation} and \eqref{co93-b2} follows. Moreover, by \eqref{y11} and the continuous embedding $H^1(0,T;H) \cap L^2(0,T;W) \subset C^0([0,T];V)$, we also have that \begin{equation} \label{co93-d} v\ssi \to v \quext{strongly in }\,C^0([0,T];V). \end{equation} Combining \eqref{y11}, \eqref{co93-b2} and \eqref{co93-d}, we can take the limit $\sigma\searrow 0$ in \eqref{x14ssi} and get back \eqref{x14}. Then, the absolute continuity property of the functional in \eqref{x13} follows from the summability of the integrand on the \lhs\ of \eqref{x14}. Finally, let us come to the case \eqref{x12}. Then, \eqref{y12} and the Aubin-Lions theorem give directly \eqref{co93-d}, so that we can pass to the limit in the \rhs\ of \eqref{x14ssi}. To take the limit of the \lhs, on account of the first \eqref{y12}, it is sufficient to prove that \begin{equation} \label{x15} \calA(v\ssi) \to \calA(v) \quext{at least weakly in }\,L^2(0,T;V). \end{equation} Since weak convergence surely holds in $\LDH$, it is then sufficient to prove uniform boundedness in $\LDV$. 
With this aim, we compute \begin{align} \no & \nabla \Big( a(v\ssi)\Delta v\ssi + \frac{a'(v\ssi)}2 |\nabla v\ssi|^2 \Big)\\ \label{x16} & \mbox{}~~~~~ = a'(v\ssi) \nabla v\ssi \Delta v\ssi + a(v\ssi) \nabla \Delta v\ssi + \frac{a''(v\ssi)}2 | \nabla v\ssi |^2 \nabla v\ssi + a'(v\ssi) D^2 v\ssi \nabla v\ssi , \end{align} and, using \eqref{y12}, \eqref{hpa1}, and standard embedding properties of Sobolev spaces, it is a standard procedure to verify that the \rhs\ is uniformly bounded in $\LDH$ (and, consequently, so is the \lhs). This concludes the proof. \end{proof} \section{The 6th order problem} \label{sec:6th} We start by introducing the concept of {\sl weak solution}\ to the sixth order problem associated with system \eqref{CH1}-\eqref{neum-intro}: \bede\label{def:weaksol6th} Let $\delta>0$ and $\epsi\ge 0$. Let us consider the {\rm 6th order problem} given by the system \begin{align}\label{CH1w} & u_t + A w = 0, \quext{in }\,V',\\ \label{CH2w} & w = \delta A^2 u + \calA(u) + \xi - \lambda u + \epsi u_t, \quext{in }\,V',\\ \label{CH3w} & \xi \in \fzw(u) \end{align} together with the initial condition \begin{equation}\label{init} u|_{t=0}=u_0, \quext{a.e.~in }\,\Omega. \end{equation} A (global in time) {\rm weak solution} to the 6th order problem\/ \eqref{CH1w}-\eqref{init} is a triplet $(u,w,\xi)$, with \begin{align}\label{regou} & u\in \HUVp\cap L^\infty(0,T;W) \cap L^2(0,T;H^3(\Omega)), \qquad \epsi u\in \HUH,\\ \label{regoFu} & F(u) \in L^\infty(0,T;L^1(\Omega)),\\ \label{regofu} & \xi \in L^2(0,T;V'),\\ \label{regow} & w\in L^2(0,T;V), \end{align} satisfying\/ \eqref{CH1w}-\eqref{CH3w} a.e.~in~$(0,T)$ together with~\eqref{init}. \edde \noinden We can then state the main result of this section: \bete\label{teoesi6th} Let us assume\/ \eqref{hpa1}-\eqref{hpf2}. Let $\epsi\ge 0$ and $\delta>0$. 
Moreover, let us suppose that \begin{equation}\label{hpu0} u_0\in W, \quad F(u_0)\in L^1(\Omega), \quad (u_0)\OO \in (-1,1), \end{equation} where $(u_0)\OO$ is the spatial mean of $u_0$. Then, the sixth order problem admits one and only one weak solution. \ente \noinden The proof of the theorem will be carried out in several steps, presented as separate subsections. \beos\label{rem:mean} We observe that the last condition in \eqref{hpu0}, which is a common assumption when dealing with Cahn-Hilliard equations with constraints (cf.~\cite{KNP} for more details), does not simply follow from the requirement $F(u_0)\in L^1(\Omega)$. Indeed, $F$ may be bounded over $[-1,1]$, as it happens, for instance, with the logarithmic potential~\eqref{logpot}. In that case, $F(u_0)\in L^1(\Omega)$ simply means $-1 \le u_0 \le 1$ almost everywhere and, without the last condition in \eqref{hpu0}, we could have initial data that coincide almost everywhere with either of the pure states $\pm1$. However, solutions that assume (for example) the value $+1$ on a set of strictly positive measure cannot be considered, at least in our regularity setting. Indeed, if $|\{u=1\}|>0$, then regularity~\eqref{regofu} (which is crucial for passing to the limit in our approximation scheme) fails, because $f(r)$ is {\sl unbounded}\/ for $r\nearrow +1$ and $\xi$ is nothing but a relaxed version of $f(u)$. \eddos \subsection{Approximation and local existence} \label{subsec:appr} First of all, we introduce a suitable approximation of the problem. The monotone function $f_0$ is regularized by taking a family $\{f\ssi\}$, $\sigma\in(0,1)$, defined as in Subsection~\ref{sec:weak}. Next, we regularize $u_0$ by singular perturbation, similarly to what was done before (cf.~\eqref{co93}). 
Namely, we take $u\zzs$ as the solution to the elliptic problem \begin{equation}\label{defiuzzd} u\zzs + \sigma A u\zzs = u_0, \end{equation} and we clearly have, by Hilbert elliptic regularity results, \begin{equation}\label{regouzzd} u\zzs \in D(A^2) \quad\perogni \sigma\in(0,1). \end{equation} Other types of approximations of the initial datum are possible, of course. The choice \eqref{defiuzzd}, beyond its simplicity, has the advantage that it preserves the mean value. \smallskip \noinden {\bf Approximate problem.}~~For $\sigma\in(0,1)$, we consider the problem \begin{align}\label{CH1appr} & u_t + A w = 0,\\ \label{CH2appr} & w = \delta A^2 u + \calA(u) + f\ssi(u) - \lambda u + (\epsi+\sigma) u_t,\\ \label{inisd} & u|_{t=0}=u\zzs, \quext{a.e.~in }\,\Omega. \end{align} We shall now show that it admits at least one local in time weak solution. Namely, there holds the following \bele\label{teo:loc:appr} Let us assume\/ \eqref{hpa1}-\eqref{hpf2}. Then, for any $\sigma\in(0,1)$, there exist $T_0\in(0,T]$ (possibly depending on $\sigma$) and a pair $(u,w)$ with \begin{align}\label{regovsd} & u\in H^1(0,T_0;H) \cap L^\infty(0,T_0;W) \cap L^2(0,T_0;D(A^2)),\\ \label{regowsd} & w \in L^2(0,T_0;W), \end{align} such that \eqref{CH1appr}-\eqref{CH2appr} hold a.e.~in~$(0,T_0)$ and the initial condition~\eqref{inisd} is satisfied. \enle \begin{proof} The lemma will be proved by using the Schauder fixed point theorem. We take \begin{equation}\label{defiBR} B_R:=\big \{ v\in L^2(0,T_0;W)\cap L^4(0,T_0;W^{1,4}(\Omega)) : \| v \|_{L^2(0,T_0;W)} + \| v \|_{L^4(0,T_0;W^{1,4}(\Omega))}\le R \big\}, \end{equation} for $T_0$ and $R$ to be chosen below. Then, we take $\baru \in B_R$ and consider the problem given by \eqref{inisd} and \begin{align}\label{CH1schau} & u_t + A w = 0, \quext{in }\,H,\\ \label{CH2schau} & w = \delta A^2 u + \calA(\baru) + f\ssi(u) - \lambda u + (\epsi+\sigma) u_t, \quext{in }\,H. 
\end{align} Then, as $\baru\in B_R$ is fixed, we can notice that \begin{equation}\label{conto21c} \| \calA(\baru) \|_{L^2(0,T_0;H)}^2 \le c \big( \| \baru \|_{L^2(0,T_0;W)}^2 + \| \baru \|_{L^4(0,T_0;W^{1,4}(\Omega))}^4 \big) \le Q(R). \end{equation} Here and below, $Q$ denotes a computable function, possibly depending on $\sigma$, defined for any nonnegative value of its argument(s) and monotone increasing in (each of) its argument(s). Substituting into \eqref{CH1schau} the expression for $w$ given by \eqref{CH2schau} and applying the inverse operator $(\Id + (\epsi + \sigma )A )^{-1}$, we obtain a parabolic equation in $u$ which is linear up to the Lipschitz perturbation $f\ssi(u)$. Hence, owing to the regularity \eqref{conto21c} of the forcing term, to the regularity \eqref{regouzzd} of the initial datum, and to the standard Hilbert theory of linear parabolic equations, there exists a unique pair $(u,w)$ solving the problem given by \eqref{CH1schau}-\eqref{CH2schau} and the initial condition \eqref{inisd}. Such a pair satisfies the regularity properties \eqref{regovsd}-\eqref{regowsd} (as will also be apparent from the forthcoming a priori estimates). We then denote by $\calK$ the map $\baru \mapsto u$. To conclude the proof we will have to show the following three properties:\\[2mm] {\sl (i)}~~$\calK$ takes its values in $B_R$;\\[1mm] {\sl (ii)}~~$\calK$ is continuous with respect to the $L^2(0,T_0;W)$ and the $L^4(0,T_0;W^{1,4}(\Omega))$ norms;\\[1mm] {\sl (iii)}~~$\calK$ is a compact map.\\[2mm] To prove these facts, we perform a couple of a priori estimates. To start, we test \eqref{CH1schau} by $w$ and \eqref{CH2schau} by $u_t$ (energy estimate). 
This gives \begin{align} \no & \ddt \bigg( \frac\delta2 \| A u \|^2 + \io \Big( F\ssi(u) - \frac\lambda2 u^2 \Big) \bigg) + (\epsi+\sigma) \| u_t \|^2 + \| \nabla w \|^2\\ \label{contox11} & \mbox{}~~~~~ = - \big( \calA(\baru) , u_t \big) \le \frac\sigma2 \| u_t \|^2 + \frac1{2\sigma}\| \calA(\baru) \|^2 \end{align} and, after integration in time, the latter term can be estimated using \eqref{conto21c}. Next, we observe that, thanks to \eqref{propFsigma}, we have \begin{equation}\label{contox12} \frac\delta2 \| A u \|^2 + \io \Big( F\ssi(u) - \frac\lambda2 u^2 \Big) \ge \eta \| u \|_W^2 - c, \end{equation} for some $\eta>0$, $c\ge 0$ independent of $\sigma$ and for all $u$ in $W$. Thus, \eqref{contox11} provides the bounds \begin{equation}\label{boundx11} \| u \|_{L^\infty(0,T_0;W)} + \| u_t \|_{L^2(0,T_0;H)} + \| \nabla w \|_{L^2(0,T_0;H)} \le Q\big(R,T_0,\| u\zzs \|_W \big). \end{equation} Next, testing \eqref{CH2schau} by $A^2 u$ and performing some standard computations (in particular, the terms $(\calA(\baru),A^2 u)$ and $(f\ssi(u),A^2u)$ are controlled by using \eqref{conto21c}, H\"older's and Young's inequalities, and the Lipschitz continuity of $f\ssi$), we obtain the further bound \begin{equation}\label{st21} \| A^2 u \|_{L^2(0,T_0;H)} \le Q\big(R,T_0,\|u\zzs\|_{W}\big). \end{equation} Hence, estimates \eqref{boundx11} and \eqref{st21} and a standard application of the Aubin-Lions lemma show that the range of $\calK$ is relatively compact both in $L^2(0,T_0;W)$ and in $L^4(0,T_0;W^{1,4}(\Omega))$. Thus, {\sl (iii)}\/ follows. \medskip Concerning {\sl (i)}, we can now simply observe that, by \eqref{boundx11}, \begin{equation}\label{st31} \| u \|_{L^2(0,T_0;W)} \le T_0^{1/2} \| u \|_{L^\infty(0,T_0;W)} \le T_0^{1/2} Q\big(R,T_0,\|u\zzs\|_{W}\big), \end{equation} whence the \rhs\ can be made smaller than $R$ if $T_0$ is chosen small enough. 
A similar estimate works also for the $L^4(0,T_0;W^{1,4}(\Omega))$-norm since $W\subset W^{1,4}(\Omega)$ continuously. Thus, also {\sl (i)}\ is proved. \medskip Finally, to prove condition {\sl (ii)}, we first observe that, if $\{\baru_n\}\subset B_R$ converges strongly to $\baru$ in $L^2(0,T_0;W)\cap L^4(0,T_0;W^{1,4}(\Omega))$, then, using proper weak compactness theorems, it is not difficult to prove that \begin{equation}\label{conto31} \calA(\baru_n)\to \calA(\baru) \quext{weakly in }\,L^2(0,T_0;H). \end{equation} Consequently, if $u_n$ (respectively $u$) is the solution to \eqref{CH1schau}-\eqref{CH2schau} corresponding to $\baru_n$ (respectively $\baru$), then estimates \eqref{boundx11}-\eqref{st21} hold for the sequence $\{u_n\}$ with a function $Q$ independent of $n$. Hence, standard weak compactness arguments together with the Lipschitz continuity of $f\ssi$ allow us to prove that \begin{equation}\label{st33} u_n=\calK(\baru_n) \to u=\calK(\baru) \quext{strongly in }\,L^2(0,T_0;W) \cap L^4(0,T_0;W^{1,4}(\Omega)), \end{equation} i.e., condition {\sl (ii)}. The proof of the lemma is concluded. \end{proof} \subsection{A priori estimates} \label{sec:apriori} In this section we will show that the local solutions constructed in the previous subsection satisfy uniform estimates with respect both to the approximation parameter $\sigma$ and to the time $T_0$. By standard extension methods this will yield a global in time solution (i.e., defined over the whole of $(0,T)$) in the limit. However, to avoid technical complications, we will directly assume that the approximating solutions are already defined over $(0,T)$. Of course, to justify this, we will have to take care that all the constants appearing in the forthcoming estimates be independent of $T_0$. 
To be precise, in the sequel we will denote by $c>0$ a computable positive constant (whose value may vary from line to line) independent of all approximation parameters (in particular of $T_0$ and $\sigma$) and also of the parameters $\epsi$ and $\delta$. \smallskip \noinden {\bf Energy estimate.}~ First, integrating \eqref{CH1appr} in space and recalling \eqref{defiuzzd}, we obtain the {\sl mass conservation}\/ property \begin{equation}\label{consmedie} (u(t))\OO = (u\zzs)\OO = (u_0)\OO. \end{equation} Next, we can test \eqref{CH1appr} by $w$, \eqref{CH2appr} by $u_t$ and take the difference, arriving at \begin{equation}\label{conto41} \ddt \calE\ssid(u) + \| \nabla w \|^2 + (\epsi+\sigma) \| u_t \|^2 = 0, \end{equation} where the ``approximate energy'' $\calE\ssid(u)$ is defined as \begin{equation}\label{defiEssid} \calE\ssid(u)=\io \Big( \frac\delta2 | A u |^2 + \frac{a(u)}2 |\nabla u|^2 + F\ssi(u) - \frac{\lambda}2u^2 \Big). \end{equation} Actually, it is clear that the high regularity of approximate solutions (cf.~\eqref{regovsd}-\eqref{regowsd}) allows the integration by parts necessary to write \eqref{conto41} (at least) almost everywhere in time. Indeed, all single terms in \eqref{CH2appr} lie in $\LDH$ and the same holds for the test function $u_t$. Then, we integrate \eqref{conto41} in time and notice that, by \eqref{propFsigma}, \begin{equation}\label{Essicoerc} \calE\ssid(u) \ge \eta \big( \delta \| u \|_W^2 + \| u \|_V^2 \big) - c \quad\perogni t\in(0,T). \end{equation} Consequently, \eqref{conto41} provides the bounds \begin{align} \label{st41} & \| u \|_{\LIV} + \delta^{1/2} \| u \|_{\LIW} + (\epsi+\sigma)^{1/2} \| u_t \|_{\LDH} \le c,\\ \label{st43} & \| \nabla w \|_{\LDH} \le c,\\ \label{st44} & \| F\ssi(u) \|_{L^\infty(0,T;L^1(\Omega))} \le c, \end{align} where it is worth stressing once more that the above constants $c$ depend explicitly neither on $\delta$ nor on $\epsi$. 
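For the reader's convenience, we sketch how the coercivity \eqref{Essicoerc} follows from \eqref{hpa1} and \eqref{propFsigma}; the computation uses generic constants and is only an outline:

```latex
% Sketch of \eqref{Essicoerc}:
\begin{align*}
  \calE\ssid(u)
  &\ge \frac\delta2 \| A u \|^2 + \frac\agiu2 \| \nabla u \|^2
     + \io \Big( F\ssi(u) - \frac\lambda2 u^2 \Big)
  && \text{by \eqref{hpa1}}\\
  &\ge \frac\delta2 \| A u \|^2 + \frac\agiu2 \| \nabla u \|^2
     + \frac\lambda2 \| u \|^2 - c
  && \text{by \eqref{propFsigma}.}
\end{align*}
% If $\lambda>0$, the last two terms on the \rhs\ already control $\| u \|_V^2$;
% if $\lambda=0$, the $L^2$-part of the $V$-norm is recovered from
% $\| \nabla u \|^2$, the conserved mean \eqref{consmedie} and the
% Poincar\'e-Wirtinger inequality.  Finally, $\| u \|_W^2$ is controlled by
% $\| A u \|^2 + \| u \|_V^2$ thanks to elliptic regularity.
```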
\smallskip \noinden {\bf Second estimate.}~ We test \eqref{CH2appr} by $u-u\OO$, $u\OO$ denoting the (constant in time) spatial mean of $u$. Integrating by parts the term $\calA(u)$, we obtain \begin{align}\no & \delta \| A u \|^2 + \io a(u) | \nabla u |^2 + \io f\ssi(u)\big( u - u\OO \big)\\ \label{conto51} & \mbox{}~~~~~ \le \big( w + \lambda u - (\epsi+\sigma) u_t, u - u\OO \big) - \io \frac{a'(u)}2 | \nabla u |^2 ( u - u\OO ) \end{align} and we have to estimate some terms. First of all, we observe that there exists a constant $c$, depending on the value of $u\OO$ (which is fixed once $u_0$ is assigned), but {\sl independent of $\sigma$}, such that \begin{equation}\label{conto52} \io f\ssi(u)\big( u - u\OO \big) \ge \frac12 \| f\ssi(u) \|_{L^1(\Omega)} - c. \end{equation} To prove this inequality, one basically uses the monotonicity of $f\ssi$ and the fact that $f\ssi(0)=0$ (cf.~\cite[Appendix]{MZ} or \cite[Third a priori estimate]{GMS} for the details). Next, by \eqref{hpa2}, the function $r\mapsto a'(r)(r-u\OO)$ is uniformly bounded, whence \begin{equation}\label{conto53} - \io \frac{a'(u)}2 | \nabla u |^2 ( u - u\OO ) \le c \| \nabla u \|^2. \end{equation} Finally, using that $(w\OO+\lambda u\OO, u-u\OO)=0$ since $w\OO+\lambda u\OO$ is constant with respect to space variables, and applying the Poincar\'e-Wirtinger inequality, \begin{align}\no & \big( w + \lambda u - (\epsi+\sigma) u_t, u - u\OO \big) = \big( w - w\OO + \lambda (u-u\OO) - (\epsi+\sigma) u_t, u - u\OO \big)\\ \no & \mbox{}~~~~~ \le c \| \nabla w \| \| \nabla u \| + c \| \nabla u \|^2 + c (\epsi + \sigma) \| u_t \| \| \nabla u \| \\ \label{conto54} & \mbox{}~~~~~ \le c\big( \| \nabla w \| + (\epsi + \sigma) \| u_t \| + 1 \big), \end{align} the latter inequality following from estimate~\eqref{st41}. 
Thus, squaring \eqref{conto51}, using \eqref{conto52}-\eqref{conto54}, and integrating in time, we arrive, recalling \eqref{st41} and \eqref{st43}, at \begin{equation} \label{st51} \| f\ssi(u) \|_{L^2(0,T;L^1(\Omega))} \le c. \end{equation} Next, integrating \eqref{CH2appr} with respect to space variables (and, in particular, integrating by parts the term $\calA(u)$), using \eqref{st51}, and recalling \eqref{st43}, we obtain (still for $c$ independent of $\epsi$ and $\delta$) \begin{equation} \label{st52} \| w \|_{L^2(0,T;V)} \le c. \end{equation} \noinden {\bf Third estimate.}~ We test \eqref{CH2appr} by $Au$. Using the monotonicity of $f\ssi$ and \eqref{hpa1}, it is not difficult to arrive at \begin{equation}\label{conto61} \frac{\epsi+\sigma}2\ddt \| \nabla u \|^2 + \delta \| \nabla A u \|^2 + \frac{\agiu}2 \| A u \|^2 \le \big( \nabla w + \lambda \nabla u , \nabla u \big) + c \| \nabla u \|_{L^4(\Omega)}^4. \end{equation} Using the continuous embedding $H^{3/4}(\Omega)\subset L^4(\Omega)$ (so that, in particular, $H^{7/4}(\Omega)\subset W^{1,4}(\Omega)$) together with the interpolation inequality \begin{equation}\label{new-interp} \| v \|_{H^{7/4}(\Omega)} \le \| v \|^{3/8}_{H^3(\Omega)} \| v \|^{5/8}_{H^1(\Omega)} \quad \perogni v \in H^{3}(\Omega), \end{equation} and recalling estimate \eqref{st41}, the last term is treated as follows: \begin{equation}\label{conto62} c \| \nabla u \|_{L^4(\Omega)}^4 \le c \| u \|_{H^3(\Omega)}^{3/2} \| u \|_{V}^{5/2} \le \frac\delta2 \| \nabla A u \|^2 + c(\delta). \end{equation} Note that the latter constant $c(\delta)$ is expected to explode as $\delta\searrow 0$ but, on the other hand, is independent of $\sigma$. Next, noting that \begin{equation}\label{conto63} \big( \nabla w + \lambda \nabla u , \nabla u \big) \le c \big( \| \nabla u \|^2 + \| \nabla w \|^2 ), \end{equation} from \eqref{conto61} we readily deduce \begin{equation} \label{st61} \| u \|_{L^2(0,T;H^3(\Omega))} \le c(\delta). 
\end{equation} A similar (and even simpler) argument shows that also \begin{equation} \label{st62} \| \calA(u) \|_{L^2(0,T;H)} \le c(\delta). \end{equation} Thus, using \eqref{st41}, \eqref{st52}, \eqref{st61}-\eqref{st62} and comparing terms in \eqref{CH2appr}, we arrive at \begin{equation} \label{st63} \| f\ssi(u) \|_{L^2(0,T;V')} \le c(\delta). \end{equation} \subsection{Limit $\boldsymbol \sigma\searrow 0$} \label{sec:sigma} We now use the machinery introduced in Subsection~\ref{sec:weak} to take the limit $\sigma\searrow 0$ in \eqref{CH1appr}-\eqref{CH2appr}. For convenience, we then denote the solution by $(u\ssi,w\ssi)$. Then, recalling estimates \eqref{st41}-\eqref{st44}, \eqref{st52} and \eqref{st61}-\eqref{st63}, and using the Aubin-Lions compactness lemma, we deduce \begin{align} \label{conv41} & u\ssi \to u \quext{strongly in }\, C^0([0,T];H^{2-\epsilon}(\Omega)) \cap L^2(0,T;H^{3-\epsilon}(\Omega)),\\ \label{conv42} & u\ssi \to u \quext{weakly star in }\, H^1(0,T;V') \cap L^\infty(0,T;W) \cap L^2(0,T;H^3(\Omega)),\\ \label{conv42b} & (\epsi+\sigma) u_{\sigma,t} \to \epsi u_t \quext{weakly in }\, L^2(0,T;H),\\ \label{conv43} & w\ssi \to w \quext{weakly in }\, \LDV,\\ \label{conv44} & f\ssi(u\ssi) \to \xi \quext{weakly in }\, \LDVp, \end{align} for suitable limit functions $u,w,\xi$, where $\epsilon>0$ is arbitrarily small. It is readily checked that the above relations (\eqref{conv41} in particular) are strong enough to guarantee that \begin{equation} \label{conv45} \calA(u\ssi) \to \calA(u), \quext{strongly in }\, \LDH. \end{equation} This allows us to take the limit $\sigma\searrow 0$ in \eqref{CH1appr}-\eqref{inisd} (rewritten for $u\ssi,w\ssi$) and get \begin{align}\label{CH1delta} & u_t + A w = 0, \quext{in }\,V',\\ \label{CH2delta} & w = \delta A^2 u + \calA(u) + \xi - \lambda u + \epsi u_t, \quext{in }\,V',\\ \label{iniz-delta} & u|_{t=0} = u_0 \quext{a.e.~in }\,\Omega.
\end{align} To identify $\xi$, we observe that, thanks to \eqref{conv41}, \eqref{conv44}, and Lemma~\ref{limimono} applied with the choices of $X=V$, $X'=V'$, $\calB\ssi=f\ssi$, $\calB=\fzw$, $v\ssi=u\ssi$, $v=u$ and $\xi\ssi=f\ssi(u\ssi)$, it follows that \begin{equation} \label{incldelta} \xi\in \fzw(u). \end{equation} In other words, $\xi$ is identified in terms of the weak (duality) realization of the function $f_0$. This concludes the proof of Theorem~\ref{teoesi6th} as far as existence is concerned. \subsection{Uniqueness} \label{sec:uniq6th} To conclude the proof of Theorem~\ref{teoesi6th}, it remains to prove uniqueness. To this end, we write both \eqref{CH1w} and \eqref{CH2w} for two solutions $(u_1,w_1,\xi_1)$, $(u_2,w_2,\xi_2)$, and take the difference. This gives \begin{align}\label{CH1d0} & u_t + A w = 0, \quext{in }\,V',\\ \no & w = \delta A^2 u - a(u_1) \Delta u - \big( a(u_1) - a(u_2) \big) \Delta u_2 - \frac{a'(u_1)}2 \big( | \nabla u_1 |^2 - | \nabla u_2 |^2 \big)\\ \label{CH2d0} & \mbox{}~~~~~~~~~~ - \frac{a'(u_1) - a'(u_2)}2 | \nabla u_2 |^2 + \xi_1 - \xi_2 - \lambda u + \epsi u_t, \quext{in }\,V', \end{align} where we have set $(u,w,\xi):=(u_1,w_1,\xi_1)-(u_2,w_2,\xi_2)$. Then, we test \eqref{CH1d0} by $A^{-1}u$, \eqref{CH2d0} by $u$, and take the difference. Notice that, indeed, $u$ has zero mean value by \eqref{consmedie}. Thus, the operator $A^{-1}$ makes sense since $A$ is bijective from $W_0$ to $H_0$, the subscript $0$ indicating the zero-mean condition.
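For clarity, we point out that the splitting of the nonlinear terms in \eqref{CH2d0} simply follows from the elementary identities
\[
  a(u_1) \Delta u_1 - a(u_2) \Delta u_2 = a(u_1) \Delta u + \big( a(u_1) - a(u_2) \big) \Delta u_2,
\]
\[
  a'(u_1) | \nabla u_1 |^2 - a'(u_2) | \nabla u_2 |^2 = a'(u_1) \big( | \nabla u_1 |^2 - | \nabla u_2 |^2 \big) + \big( a'(u_1) - a'(u_2) \big) | \nabla u_2 |^2,
\]
obtained by adding and subtracting the mixed terms $a(u_1)\Delta u_2$ and $a'(u_1)|\nabla u_2|^2$, respectively.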
A straightforward computation involving standard embedding properties of Sobolev spaces then gives \begin{align}\no & \Big(- a(u_1) \Delta u - \big( a(u_1) - a(u_2) \big) \Delta u_2 - \frac{a'(u_1)}2 \big( | \nabla u_1 |^2 - | \nabla u_2 |^2 \big) - \frac{a'(u_1) - a'(u_2)}2 | \nabla u_2 |^2 , u \Big) \\ \label{uniq22} & \mbox{}~~~~~ \le Q\big( \| u_1 \|_{L^\infty(0,T;W)},\| u_2 \|_{L^\infty(0,T;W)} \big) \| u \|_W \| u \| \end{align} and we notice that the norms inside the function $Q$ are controlled thanks to \eqref{regou}. Thus, also on account of the monotonicity of $\fzw$, we arrive at \begin{align}\no & \ddt\Big( \frac12 \| u \|_{V'}^2 + \frac\epsi2 \| u \|^2 \Big) + \delta \| A u \|^2 \le c \| u \|_W \| u \| + \lambda \| u \|^2\\ \label{uniq23} & \mbox{}~~~~~ \le c \| u \|_W^{4/3} \| u \|_{V'}^{2/3} \le \frac\delta2 \| A u \|^2 + c(\delta) \| u \|_{V'}^2, \end{align} where, to deduce the last two inequalities, we used the interpolation inequality $\| u \| \le \| u \|_{V'}^{2/3} \| u \|_W^{1/3}$ (note that $(V,H,V')$ form a Hilbert triplet, cf., e.g., \cite[Chap.~5]{BrAF}) together with Young's inequality and the fact that $\| \cdot \|_{V'} + \| A \cdot \|$ is an equivalent norm on $W$. Thus, the assertion of Theorem~\ref{teoesi6th} follows by applying Gronwall's lemma to \eqref{uniq23}. \section{From the 6th order to the 4th order model} \label{sec:6thto4th} In this section, we analyze the behavior of solutions to the 6th order problem as $\delta$ tends to $0$. To start with, we specify the concept of weak solution in the 4th order case: \bede\label{def:weaksol4th} Let $\delta=0$ and $\epsi\ge 0$. Let us consider the\/ {\rm 4th order problem} given by the system \begin{align}\label{CH1w4th} & u_t + A w = 0, \quext{in }\,V',\\ \label{CH2th} & w = \calA(u) + f(u) + \epsi u_t, \quext{in }\,H, \end{align} together with the initial condition~\eqref{init}.
A\/ (global in time) {\rm weak solution} to the 4th order problem \eqref{CH1w4th}-\eqref{CH2th}, \eqref{init} is a pair $(u,w)$, with \begin{align}\label{regou4} & u\in \HUVp\cap L^\infty(0,T;V) \cap L^2(0,T;W), \qquad \epsi u\in \HUH,\\ \label{regoFu4} & F(u) \in L^\infty(0,T;L^1(\Omega)),\\ \label{regofu4} & f_0(u) \in L^2(0,T;H),\\ \label{regow4} & w\in L^2(0,T;V), \end{align} satisfying\/ \eqref{CH1w4th}-\eqref{CH2th} a.e.~in~$(0,T)$ together with\/ \eqref{init}. \edde \noinden \bete\label{teo6thto4th} Let us assume\/ \eqref{hpa1}-\eqref{hpf2} together with \begin{equation}\label{aconcave} a \quext{is concave on }\,[-1,1]. \end{equation} Let also $\epsi\ge 0$ and let, for all $\delta\in(0,1)$, $u\zzd$ be an initial datum satisfying\/ \eqref{hpu0}. Moreover, let us suppose \begin{equation}\label{convuzzd} u\zzd\to u_0 \quext{strongly in }\,V, \qquad \calE\dd(u\zzd)\to \calE_0(u_0), \quext{where }\,(u_0)\OO\in(-1,1). \end{equation} Let, for any $\delta\in (0,1)$, $(u\dd,w\dd,\xi\dd)$ be a weak solution to the 6th order system in the sense of\/ {\rm Definition~\ref{def:weaksol6th}}. Then, we have that, up to a (nonrelabelled) subsequence of $\delta\searrow 0$, \begin{align}\label{co4th11} & u\dd \to u \quext{weakly star in }\,\HUVp \cap \LIV \cap \LDW,\\ \label{co4th12} & \epsi u\dd \to \epsi u \quext{weakly in }\,\HUH,\\ \label{co4th13} & w\dd \to w \quext{weakly in }\,\LDV,\\ \label{co4th13b} & \delta u\dd \to 0 \quext{strongly in }\,L^2(0,T;H^3(\Omega)),\\ \label{co4th14} & \xi\dd \to f_0(u) \quext{weakly in }\,\LDVp, \end{align} and $(u,w)$ is a weak solution to the 4th order problem. \ente \noinden \begin{proof} The first part of the proof consists in repeating the ``Energy estimate'' and the ``Second estimate'' of the previous section. In fact, we could avoid this procedure since we already noted that the constants appearing in those estimates were independent of $\delta$. 
However, we choose to perform once more the estimates working directly on the 6th order problem (rather than on its approximation) for various reasons. First, this will show that the estimates do not depend on the chosen regularization scheme. Second, the procedure has an independent interest since we will see that the use of ``weak'' subdifferential operators still allows us to rely on suitable integration by parts formulas and on monotonicity methods. Of course, many passages, which were trivial in the ``strong'' setting, now need a precise justification. Finally, in this way we are able to prove, as a byproduct, that any solution to the 6th order system satisfies an energy {\sl equality}\/ (and not just an inequality). Actually, this property may be useful for addressing the long-time behavior of the system. \smallskip \noinden {\bf Energy estimate.}~~ As before, we would like to test \eqref{CH1w} by $w\dd$, \eqref{CH2w} by $u_{\delta,t}$, and take the difference. To justify this procedure, we start by observing that $w\dd\in L^2(0,T;V)$ by \eqref{regow}. Actually, since \eqref{CH1w} is in fact a relation in $L^2(0,T;V')$, the use of $w\dd$ as a test function makes sense. The problem, instead, arises when working on \eqref{CH2w} and, to justify the estimate, we can just consider the (more difficult) case $\epsi=0$. Then, it is easy to check that the assumptions of Lemma~\ref{lemma:ipp} are satisfied. In particular, we have \eqref{x12} thanks to \eqref{regou}. Hence, \eqref{x14} gives \begin{equation}\label{en-11} \duavb{u_{\delta,t},\calA(u\dd)} = \frac12 \ddt \io a(u\dd)|\nabla u\dd|^2, \quext{a.e.~in }\,(0,T). \end{equation} Thus, it remains to show that \begin{equation}\label{en-12} \duavg{u_{\delta,t},\delta A^2 u\dd + \xi\dd} = \ddt \io \Big( \frac\delta2 |A u\dd|^2 + F(u\dd) \Big), \quext{a.e.~in }\,(0,T). \end{equation} To prove this, we observe that \begin{equation}\label{comparis} \delta A^2 u\dd + \xi\dd \in \LDV \quad \perogni \delta\in(0,1).
\end{equation} Actually, we already noted above that $w\dd$, $\calA(u\dd)$ lie in $\LDV$. Since $u\dd\in \LDV$ by \eqref{regou} and we assumed $\epsi=0$, \eqref{comparis} simply follows by comparing terms in \eqref{CH2w}. Thus, the duality on the \lhs\ of \eqref{en-12} makes sense. Moreover, if we set \begin{equation}\label{en-13} \calJ\dd(v):= \io \Big( \frac\delta2 |A v|^2 + F(v) \Big), \end{equation} then a direct computation shows that \begin{equation}\label{en-14} \delta A^2 u\dd + \xi\dd \in \de \calJ\dd ( u\dd ) \quext{a.e.~in }\,(0,T). \end{equation} Indeed, by definition of $H$-subdifferential, this corresponds to the relation \begin{equation}\label{en-15} \duavb { \delta A^2 u\dd + \xi\dd , v - u\dd } \le \calJ\dd ( v ) - \calJ\dd ( u\dd ) \quad \perogni v\in H, \end{equation} and it is sufficient to check it for $v\in V$ since for $v\in H\setminus V$ the \rhs\ is $+\infty$ and consequently the relation is trivial. However, for $v\in V$, \eqref{en-15} follows by definition of the relaxed operator $\fzw$. Thanks to \eqref{en-14}, \eqref{en-12} is then a direct consequence of inequality \eqref{ipepardiff} of Lemma~\ref{BResteso}. Thus, the above procedure shows that any weak solution $(u\dd,w\dd,\xi\dd)$ to the 6th order problem satisfies the energy {\sl equality} \begin{equation}\label{energy-6th} \ddt \calE\dd(u(t)) + \| \nabla w(t) \|^2 + \epsi \| u_t(t) \|^2 = 0 \end{equation} for almost all $t\in[0,T]$. As a consequence, we get back the first two convergence relations in \eqref{co4th11} as well as \eqref{co4th12}. Moreover, we have \begin{equation}\label{6to4-01} \| \nabla w\dd \|_{\LDH} \le c. \end{equation} \smallskip \noinden {\bf Second estimate.}~~ Next, to get \eqref{co4th13} and \eqref{co4th14}, we essentially need to repeat the ``Second estimate'' of the previous section. Indeed, we see that $u\dd-(u\dd)\OO$ is an admissible test function in \eqref{CH2w}.
However, we now have to obtain an estimate of $\xi\dd$ from the duality product \begin{equation}\label{6to4-21} \duavb{\xi\dd, u\dd - (u\dd)\OO}. \end{equation} Actually, if $\xi\dd=\xi\dda+\xi\dds$ is the Lebesgue decomposition of the {\sl measure}\ $\xi\dd$ given in Theorem~\ref{teobrezis}, then, noting that for all $t\in [0,T]$ we have $u\dd(t)\in W\subset C^0(\barO)$, we can write \begin{equation}\label{6to4-22} \duavb{\xi\dd(t), u\dd(t) - (u\dd)\OO} = \io \xi\dda(t) \big(u\dd(t) - (u\dd)\OO\big)\,\dix + \io \big(u\dd(t) - (u\dd)\OO \big) \dixi_{\delta,s}(t). \end{equation} Next, we notice that, as a direct consequence of assumption~\eqref{convuzzd}, \begin{equation}\label{unifsep} \esiste\mu\in(0,1):~~ -1+\mu \le (u\zzd)\OO \le 1-\mu, \quad\perogni \delta\in(0,1), \end{equation} where $\mu$ is independent of $\delta$. In other words, the spatial means $(u\zzd)\OO$ are uniformly separated from $\pm1$. Then, recalling \eqref{bre2} and proceeding as in \eqref{conto52}, we have \begin{equation}\label{conto52dd} \io \xi\dda(t) \big(u\dd(t) - (u\dd)\OO\big)\,\dix \ge \frac12\| \xi\dda(t) \|_{L^1(\Omega)} - c, \end{equation} where $c$ does not depend on $\delta$. On the other hand, let us denote by $\deriv\!\xi\dds=\phi\dds \dixis$ the {\sl polar}\ decomposition of $\xi\dds$, where $|\xi\dds|$ is the {\sl total variation}\/ of $\xi\dds$ and $|\phi\dds|=1$ holds $|\xi\dds|$-a.e.~(cf., e.g., \cite[Chap.~6]{Ru}).
Then, introducing the bounded linear functional $\calS\dd:C^0(\barO)\to \RR$ given by \begin{equation}\label{deficalS} \calS\dd(z):= \ibaro z \,\dixi_{\delta,s} \end{equation} using, e.g., \cite[Thm.~6.19]{Ru}, and recalling \eqref{bre3}, we can estimate the norm of $\calS\dd$ as follows: \begin{align}\no |\xi\dds|(\barO) & = \ibaro \dixis = \| \calS\dd \|_{\calM(\barO)}\\ \no & = \sup\Big\{\ibaro z\, \dixi_{\delta,s},~ z\in C^0(\overline\Omega),~ z(\barO)\subset[-1,1] \Big\}\\ \label{6to4-23} & = \duav{\xi\dd,u\dd} - \io \xi\dda u\dd = \ibaro u\dd \,\dixi_{\delta,s} = \ibaro u\dd\phi\dds\,\dixis, \end{align} where we also used that $u\dd\in C^0(\barO)$. Comparing terms, it then follows that \begin{equation}\label{6to4-24} u\dd = \phi\dds, \quad |\xi\dds|-\text{a.e.~in $\barO$}. \end{equation} Then, since it is clear that \begin{equation}\label{6to4-25} u\dd=\pm 1 \implica \frac{u\dd-(u\dd)\OO}{|u\dd-(u\dd)\OO|}=\pm1, \end{equation} coming back to \eqref{6to4-23} we deduce \begin{equation}\label{6to4-26} \ibaro \dixis = \ibaro \phi\dds\frac{u\dd-(u\dd)\OO}{|u\dd-(u\dd)\OO|}\,\dixis \le c \ibaro \phi\dds \big( u\dd-(u\dd)\OO \big)\,\dixis = c \ibaro \big( u\dd-(u\dd)\OO \big) \, \dixi\dds. \end{equation} Here we used again in an essential way the uniform separation property \eqref{unifsep}. Collecting \eqref{6to4-22}-\eqref{6to4-26}, we then have \begin{equation}\label{6to4-21b} \duavb{\xi\dd, u\dd - (u\dd)\OO} \ge \frac12 \| \xi\dda(t) \|_{L^1(\Omega)} + \eta \ibaro \dixis - c, \end{equation} for some $c\ge0$, $\eta>0$ independent of $\delta$.
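Let us observe, incidentally, that \eqref{6to4-24} carries a clear measure-theoretic meaning. Indeed, since $|\phi\dds|=1$ holds $|\xi\dds|$-a.e.~(a general property of the polar decomposition), we deduce
\[
  u\dd = \phi\dds \quad \text{and} \quad |\phi\dds| = 1
  \qquad\Longrightarrow\qquad
  |u\dd| = 1, \quad |\xi\dds|-\text{a.e.~in }\barO,
\]
i.e., the singular part $\xi\dds$ of the measure $\xi\dd$ is concentrated on the {\sl contact set}\/ where $u\dd$ attains the values $\pm1$. This is exactly the reason why the implication \eqref{6to4-25} could be applied in \eqref{6to4-26}.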
On the other hand, mimicking \eqref{conto51}-\eqref{conto54}, we obtain \begin{equation}\label{6to4-27} \delta \| A u\dd \|^2 + \agiu \| \nabla u\dd \|^2 + \duavb{\xi\dd, u\dd - (u\dd)\OO} \le c \big(\| \nabla w\dd \| + \epsi \| u_{\delta,t} \| + 1 \big), \end{equation} whence squaring, integrating in time, and using \eqref{co4th12} and \eqref{6to4-01}, we obtain that the function \begin{equation}\label{6to4-28} t \mapsto \| \xi\dda(t) \|_{L^1(\Omega)} + \io \dixist \quext{is bounded in $L^2(0,T)$, independently of $\delta$.} \end{equation} Integrating now \eqref{CH2w} in space, we deduce \begin{equation}\label{6to4-29} \io w\dd = \frac12 \io a'(u\dd) | \nabla u\dd |^2 + \io \xi\dd - \lambda (u\dd)\OO, \end{equation} whence \begin{equation}\label{6to4-29b} \bigg| \io w\dd \bigg| \le c \bigg( \| \nabla u\dd \|^2 + \| \xi\dda(t) \|_{L^1(\Omega)} + \ibaro \dixist + 1 \bigg). \end{equation} Thus, squaring, integrating in time, and recalling \eqref{6to4-01} and \eqref{6to4-28}, we finally obtain \eqref{co4th13}. \smallskip \noinden {\bf Key estimate.}~ To take the limit $\delta\searrow 0$, we have to provide a bound on $\calA(u\dd)$ independent of $\delta$. This will be obtained by means of the following integration by parts formula due to Dal Passo, Garcke and Gr\"un (\cite[Lemma 2.3]{DpGG}): \bele\label{lemma:dpgg} Let $h\in W^{2,\infty}(\RR)$ and $z\in W$. Then, \begin{align} \no & \io h'(z) |\nabla z|^2 \Delta z = -\frac13 \io h''(z) |\nabla z|^4\\ \label{byparts} & \mbox{}~~~~~ + \frac23 \io h(z) \big( |D^2 z|^2 - | \Delta z|^2 \big) + \frac23 \iga h(z) II( \nabla z ), \end{align} where $II(\cdot)$ denotes the second fundamental form of $\Gamma$. \enle \noinden We then test \eqref{CH2w} by $Au\dd$ in the duality between $V'$ and $V$.
This gives the relation \begin{equation} \label{conto71} \frac\epsi2\ddt \| \nabla u\dd \|^2 + \delta \| \nabla A u\dd \|^2 +\big( \calA(u\dd), A u\dd \big) + \duavg{\xi\dd,Au\dd} = \big(\nabla w\dd, \nabla u\dd\big) + \lambda \| \nabla u\dd \|^2 \end{equation} and some terms have to be estimated. First, we note that \begin{equation} \label{conto71b} \big( \calA(u\dd), Au\dd \big) = \Big( a(u\dd)\Delta u\dd + \frac{a'(u\dd)}2 |\nabla u\dd|^2, \Delta u\dd \Big). \end{equation} Thus, using Lemma~\ref{lemma:dpgg} with the choice of $h(\cdot)=a(\cdot)/2$ (so that $h'(\cdot)=a'(\cdot)/2$), we obtain \begin{align} \no & \big( \calA(u\dd), Au\dd \big) = \io a(u\dd) | \Delta u\dd |^2 + \frac13 \io a(u\dd) \big( |D^2 u\dd|^2 - | \Delta u\dd|^2 \big)\\ \label{conto45} & \mbox{}~~~~~~~~~~ - \frac16 \io a''(u\dd)|\nabla u\dd|^4 + \frac13 \iga a(u\dd) II(\nabla u\dd). \end{align} Let us now point out that, since $\Gamma$ is smooth, we can estimate \begin{equation} \label{conto46} \frac13 \bigg| \iga a(u\dd) II(\nabla u\dd) \bigg| \le c \| \nabla u\dd \|_{L^2(\Gamma)}^2 \le \omega \| u\dd \|_{W}^2 + c_\omega \| u\dd \|^2, \end{equation} for small $\omega>0$ to be chosen below, the last inequality following from the continuity of the trace operator (applied to $\nabla u$) from $H^s(\Omega)$ into $L^2(\Gamma)$ for $s\in(1/2,1)$ and the compactness of the embedding $W\subset H^{1+s}(\Omega)$ for $s$ in the same range. Thus, using the {\sl concavity}\/ assumption \eqref{aconcave} on $a$ and the fact that $|u_\delta|\le 1$ almost everywhere in $(0,T)\times\Omega$, we get \begin{equation}\label{elliptic3} \big( \calA(u\dd), Au\dd \big) \ge \eta \| A u\dd \|^2 - c, \end{equation} for suitable strictly positive constants $\eta$ and $c$, both independent of $\delta$. Next, we observe that, by \eqref{incldelta} and Lemma~\ref{BSesteso}, we obtain $\duavg{\xi\dd,Au\dd}\ge 0$.
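For the reader's convenience, let us make explicit how \eqref{elliptic3} follows from \eqref{conto45}-\eqref{conto46}. Since $a''\le 0$ on $[-1,1]$ by \eqref{aconcave} and $|u\dd|\le 1$ a.e., the term $-\frac16 \io a''(u\dd)|\nabla u\dd|^4$ is nonnegative. Moreover, rearranging the first two terms on the \rhs\ of \eqref{conto45} and using the lower bound $a\ge\agiu$ coming from \eqref{hpa1},
\[
  \io a(u\dd) | \Delta u\dd |^2 + \frac13 \io a(u\dd) \big( |D^2 u\dd|^2 - | \Delta u\dd|^2 \big)
  = \frac23 \io a(u\dd) | \Delta u\dd |^2 + \frac13 \io a(u\dd) |D^2 u\dd|^2
  \ge \frac{2\agiu}3 \| A u\dd \|^2.
\]
Then, \eqref{elliptic3} follows by absorbing the boundary contribution \eqref{conto46} for $\omega$ small enough, since $\| \cdot \| + \| A \cdot \|$ is an equivalent norm on $W$ and $\| u\dd \|\le c$.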
Finally, we have \begin{equation}\label{conto73} - (\nabla w\dd, \nabla u\dd) \le c \| \nabla w\dd \| \| \nabla u\dd \|, \end{equation} and the \rhs\ is readily estimated thanks to \eqref{co4th12} and \eqref{co4th13}. Thus, on account of \eqref{elliptic3}, integrating \eqref{conto71} in time, we readily obtain the last of \eqref{co4th11} as well as \eqref{co4th13b}. Moreover, since $-1\le u\dd\le 1$ almost everywhere, we have for free \begin{equation} \label{st-key} \| u\dd \|_{L^\infty((0,T)\times\Omega)}\le 1. \end{equation} Thus, using the Gagliardo-Nirenberg inequality \eqref{ineq:gn}, we have also \begin{equation} \label{conv54} u\dd \to u \quext{weakly in }\, L^4(0,T;W^{1,4}(\Omega)). \end{equation} This readily entails \begin{equation} \label{conv55} \calA(u\dd) \to \calA(u) \quext{weakly in }\, L^2(0,T;H). \end{equation} Thus, a comparison of terms in \eqref{CH2w} gives also \begin{equation} \label{conv56} \xi\dd \to \xi \quext{weakly in }\, L^2(0,T;V'). \end{equation} Then, we can take the limit $\delta\searrow 0$ in \eqref{CH1w} and get \eqref{CH1w4th}. On the other hand, if we take the limit of \eqref{CH2w}, we obtain \begin{equation} \label{CH2provv} w = \calA(u) + \xi - \lambda u + \epsi u_t \end{equation} and we have to identify $\xi$. Actually, \eqref{conv56}, the strong convergence $u\dd\to u$ in $\LDV$ (following from \eqref{co4th11} and the Aubin-Lions lemma) and Lemma~\ref{limimono} allow us to show that \begin{equation} \label{incldelta2} \xi\in \fzw(u) \quext{a.e.~in }\,(0,T). \end{equation} On the other hand, a comparison argument in \eqref{CH2provv} shows that $\xi\in \LDH$, whence, thanks to \eqref{betavsbetaw2}, we obtain that $\xi(t)=f_0(u(t))\in H$ for a.e.~$t\in(0,T)$. This concludes the proof of Theorem~\ref{teo6thto4th}.
\end{proof} \section{Analysis of the fourth order problem} \label{sec:4th} In this section, we will prove existence of a weak solution to Problem~\eqref{CH1}-\eqref{neum-intro} in the fourth order case $\delta =0$ by means of a direct approach not relying on the 6th order approximation. This will allow us to consider a general function $a$ (without the concavity assumption \eqref{aconcave}). More precisely, we have the following \bete\label{teo:4th} Let assumptions\/ \eqref{hpa1}-\eqref{hpf2} hold, let $\epsi\ge 0$ and let \begin{equation}\label{hpu0-4} u_0\in V, \quad F(u_0)\in L^1(\Omega), \quad (u_0)\OO \in (-1,1). \end{equation} Then, there exists\/ {\rm at least} one weak solution to the 4th order problem, in the sense of\/ {\rm Definition~\ref{def:weaksol4th}.} \ente \noinden The rest of the section is devoted to the proof of the above result, which is divided into several steps. \smallskip \noinden {\bf Phase-field approximation.}~~For $\sigma\in(0,1)$, we consider the system \begin{align}\label{CH1-4ap} & u_t + \sigma w_t + A w = 0,\\ \label{CH2-4ap} & w = \calA(u) + f\ssi(u) - \lambda u + (\epsi+\sigma) u_t. \end{align} This will be endowed with the initial conditions \begin{equation}\label{init-4ap} u|_{t=0} = u\zzs, \qquad w|_{t=0} = 0. \end{equation} Similarly as before (compare with \eqref{defiuzzd}), we have set \begin{equation}\label{defiuzzs} u\zzs + \sigma A^2 u\zzs = u_0 \end{equation} and, by standard elliptic regularity, we have that \begin{equation}\label{propuzzs} u\zzs \in H^5(\Omega)\subset C^{3+\alpha}(\barO) \quext{for }\,\alpha\in(0,1/2), \qquad \dn u\zzs=\dn A u\zzs=0,~~\text{on }\,\Gamma. \end{equation} Moreover, of course, $u\zzs\to u_0$ in a suitable sense as $\sigma\searrow0$. \smallskip \noinden {\bf Fixed point argument.}~~We now prove existence of a local solution to the phase-field approximation by a further Schauder fixed point argument. 
Namely, we introduce the system \begin{align}\label{CH1-4pf} & u_t + \sigma w_t + A w = 0,\\ \label{CH2-4pf} & \barw = -a(\baru) \Delta u - \frac{a'(\baru)}2 | \nabla \baru |^2 + f\ssi(\baru) - \lambda \baru + (\epsi + \sigma) u_t, \qquad \dn u = 0~~\text{on }\,\Gamma, \end{align} which we still endow with the condition \eqref{init-4ap}. Here, $f\ssi$ is chosen as in \eqref{defifsigma}. Next, we set \begin{equation}\label{deficalU} \calU:=\left\{ u\in C^{0,1+\alpha} ([0,T_0]\times \barO):~ u|_{t=0}=u\zzs,~ \| u \|_{C^{0,1+\alpha}}\le 2R \right\}, \end{equation} where $R:=\max\{1,\|u\zzs\|_{C^{1+\alpha}(\barO)}\}$ and $T_0$ will be chosen at the end of the argument. It is clear that $R$ depends in fact on $\sigma$ (so that the same will happen for $T_0$); this dependence is however not emphasized here. For the definition of the parabolic H\"older spaces used in this proof we refer the reader to \cite[Chap.~5]{Lu}, whose notation is adopted. Moreover, in the sequel, in place of $C^{0,\alpha}([0,T_0]\times \barO)$ (and similar spaces) we will just write $C^{0,\alpha}$, for brevity. We then also define \begin{equation}\label{deficalW} \calW:=\left\{ w\in C^{0,\alpha}:~ w|_{t=0}=0,~ \| w \|_{C^{0,\alpha}}\le R \right\}, \end{equation} where $R$ is, for simplicity, the same number as in \eqref{deficalU}. Then, choosing $(\baru,\barw)$ in $\calU\times\calW$ and inserting it in \eqref{CH2-4pf}, we observe that, by the Lipschitz regularity of $a$ (cf.~\eqref{hpa1}) and standard multiplication properties of H\"older spaces, there exists a computable monotone function $Q$, also depending on $\sigma$, but independent of the time $T_0$, such that \begin{equation}\label{prop-norme} \|a(\baru)\|_{C^{0,\alpha}} + \big\|a'(\baru)|\nabla \baru|^2\big\|_{C^{0,\alpha}} + \|f\ssi(\baru)\|_{C^{0,\alpha}} \le Q(R). \end{equation} Thanks to \cite[Thm.~5.1.21]{Lu}, there then exists one and only one solution $u$ to \eqref{CH2-4pf} with the first initial condition \eqref{init-4ap}.
This solution satisfies \begin{equation}\label{regou-4pf} \| u \|_{C^{1,2+\alpha}} \le Q(R). \end{equation} Then, substituting $u_t$ in \eqref{CH1-4pf} and applying the same theorem of \cite{Lu} to this equation with the second initial condition \eqref{init-4ap}, we obtain one and only one solution $w$, with \begin{equation}\label{regow-4pf} \| w \|_{C^{1,2+\alpha}} \le Q(R). \end{equation} We then denote by $\calT$ the map such that $\calT: (\baru,\barw) \mapsto (u,w)$. As before, we need to show that:\\[2mm] {\sl (i)}~~$\calT$ takes its values in $\calU\times\calW$;\\[1mm] {\sl (ii)}~~$\calT$ is continuous with respect to the $C^{0,1+\alpha}\times C^{0,\alpha}$ norm of $\calU\times\calW$;\\[1mm] {\sl (iii)}~~$\calT$ is a compact map.\\[2mm] First of all, let us prove {\sl (i)}. We just refer to the component $u$, the argument for $w$ being analogous and in fact simpler. We start by observing that, if $u\in \Pi_1 (\calT (\calU\times \calW))$, $\Pi_1$ denoting the projection on the first component, then \begin{equation}\label{i-11} \| u(t) \|_{C^\alpha(\barO)} \le \| u\zzs \|_{C^\alpha(\barO)} + \int_0^t \| u_t(s) \|_{C^\alpha(\barO)} \,\dis \le R + T_0 Q(R), \quad \perogni t\in[0,T_0], \end{equation} which is smaller than $2R$ if $T_0$ is chosen suitably. Next, using the continuous embedding (cf.~\cite[Lemma~5.1.1]{Lu}) \begin{equation} \label{contemb} C^{1,2+\alpha} \subset C^{1/2}([0,T_0];C^{1+\alpha}(\barO)) \cap C^{\alpha/2}([0,T_0];C^2(\barO)), \end{equation} we obtain that, analogously, \begin{equation}\label{i-12} \| \nabla u(t) \|_{C^\alpha(\barO)} \le \| \nabla u\zzs \|_{C^\alpha(\barO)} + T_0^{1/2} \| u \|_{C^{1/2}([0,T_0];C^{1+\alpha}(\barO))} \le R + T_0^{1/2} Q(R). \end{equation} Hence, passing to the supremum for $t\in[0,T_0]$, we see that the norm of $u$ in $C^{0,1+\alpha}$ can be made smaller than $2R$ if $T_0$ is small enough. Thus, {\sl (i)}\ is proved. \medskip Let us now come to {\sl (iii)}. As before, we just deal with the component $u$.
Namely, on account of \eqref{regou-4pf}, we have to show that the space $C^{1,2+\alpha}$ is compactly embedded into $C^{0,1+\alpha}$. Actually, by \eqref{contemb} and using standard compact inclusion properties of H\"older spaces, this relation is easily proved. Hence, we have {\sl (iii)}. \medskip Finally, we have to prove {\sl (ii)}. This property is however straightforward. Actually, taking $(\baru_n,\barw_n)\to (\baru,\barw)$ in $\calU\times \calW$, we have that the corresponding solutions $(u_n,w_n)=\calT(\baru_n,\barw_n)$ are bounded in the sense of \eqref{regou-4pf}-\eqref{regow-4pf} uniformly in $n$. Consequently, a standard weak compactness argument, together with the uniqueness property for the initial value problems associated to \eqref{CH1-4pf} and to \eqref{CH2-4pf}, permits us to see that the {\sl whole sequence}\/ $(u_n,w_n)$ converges to a unique limit point $(u,w)$ solving \eqref{CH1-4pf}-\eqref{CH2-4pf} with respect to the limit data $(\baru,\barw)$. Moreover, by the compactness property proved in {\sl (iii)}, this convergence holds with respect to the original topology of $\calU\times \calW$. This proves that $(u,w) = \calT (\baru,\barw)$, i.e., {\sl (ii)}\ holds. \medskip \noinden {\bf A priori estimates.}~~For any $\sigma>0$, we have obtained a local (i.e., with a final time $T_0$ depending on $\sigma$) solution to \eqref{CH1-4ap}-\eqref{CH2-4ap} with the initial conditions \eqref{init-4ap}. To emphasize the $\sigma$-dependence, we will denote it by $(u\ssi,w\ssi)$ in the sequel. To let $\sigma\searrow 0$, we now derive some a priori estimates, uniform both with respect to $\sigma$ and with respect to $T_0$. As before, this will give a global solution in the limit and, to avoid technicalities, we can directly work on the time interval $[0,T]$. Notice that the high regularity of $(u\ssi,w\ssi)$ gives sense to all the calculations performed below (in particular, to all the integrations by parts).
That said, we repeat the ``Energy estimate'', exactly as in the previous sections. This now gives \begin{align} \label{st11ap} & \| u\ssi \|_{\LIV} + \| F\ssi(u\ssi) \|_{L^\infty(0,T;L^1(\Omega))} \le c,\\ \label{st12ap} & (\sigma+\epsi)^{1/2} \| u_{\sigma,t} \|_{\LDH} \le c,\\ \label{st13ap} & \sigma^{1/2} \| w\ssi \|_{L^\infty(0,T;H)} + \| \nabla w\ssi \|_{\LDH} \le c. \end{align} Next, working as in the ``Second estimate'' of Subsection~\ref{sec:apriori}, we obtain the analogue of \eqref{st51} and \eqref{st52}. To estimate $f\ssi(u\ssi)$ in $H$, we now test \eqref{CH2-4ap} by $f\ssi(u\ssi)$, to get \begin{align}\no & \frac{\epsi+\sigma}2 \ddt \io F\ssi(u\ssi) + \io \Big( a(u\ssi) f\ssi'(u\ssi) + \frac{a'(u\ssi)}2 f\ssi(u\ssi) \Big) | \nabla u\ssi |^2 + \| f\ssi(u\ssi) \|^2 \\ \label{conto51-4th} & \mbox{}~~~~~ = \big( w\ssi + \lambda u\ssi, f\ssi(u\ssi) \big), \end{align} and it is a standard matter to estimate the \rhs\ by using the last term on the \lhs, H\"older's and Young's inequalities, and properties \eqref{st11ap} and \eqref{st52}. Now, we notice that, thanks to \eqref{goodmono}, \begin{equation}\label{4th-21} a(r) f\ssi'(r) + \frac{a'(r)}2 f\ssi(r) \ge \agiu f\ssi'(r) - c | f\ssi(r) | \ge \frac{\agiu}2 f\ssi'(r) - c \quad \perogni r\in [-2,2], \end{equation} with the last $c$ being independent of $\sigma$. On the other hand, for $r\not\in [-2,2]$ we have that $a'(r)=0$ by \eqref{hpa2}. Hence, also thanks to \eqref{st11ap}, the second term on the \lhs\ of \eqref{conto51-4th} can be controlled. We then arrive at \begin{equation} \label{st21ap} \| f\ssi(u\ssi) \|_{L^2(0,T;H)} \le c. \end{equation} The key point is represented by the next estimate, which is used to control the second space derivatives of $u$. To do this, we first have to perform a change of variables.
Namely, we set \begin{equation} \label{defiphi} \phi(s):=\int_0^s a^{1/2} (r) \, \dir, \qquad z\ssi:=\phi(u\ssi) \end{equation} and notice that, by \eqref{hpa1}-\eqref{hpa2}, $\phi$ is monotone and Lipschitz together with its inverse. Then, by \eqref{st11ap}, \begin{equation} \label{st21ap2} \| z\ssi \|_{L^\infty(0,T;V)} \le c \end{equation} and it is straightforward to realize that \eqref{CH2-4ap} can be rewritten as \begin{equation} \label{CH2-z} w\ssi = - \phi'(u\ssi) \Delta z\ssi + f\ssi\circ\phi^{-1}(z\ssi) - \lambda u\ssi + (\epsi + \sigma) u_{\sigma,t}, \qquad \dn u\ssi = 0~~\text{on }\,\Gamma. \end{equation} By the H\"older continuity of $u\ssi$ up to its second space derivatives and the Lipschitz continuity of $a$ and $a'$ (cf.~\eqref{hpa1}-\eqref{hpa2}), $-\Delta z\ssi$ is also H\"older continuous in space. Thus, we can use it as a test function in \eqref{CH2-z}. Using the monotonicity of $f\ssi$ and $\phi^{-1}$, and recalling \eqref{st21ap}, we then easily obtain \begin{equation} \label{st31ap} \| z\ssi \|_{L^2(0,T;W)} \le c. \end{equation} \smallskip \noinden {\bf Passage to the limit.}~~As a consequence of \eqref{st11ap}-\eqref{st13ap}, \eqref{st51}-\eqref{st52} and \eqref{st21ap}, we have \begin{align} \label{co11ap} & u\ssi \to u \quext{weakly star in }\, \HUVp \cap \LIV,\\ \label{co12ap} & (\sigma+\epsi) u_{\sigma,t} \to \epsi u_t \quext{weakly in }\, \LDH,\\ \label{co13ap} & f\ssi(u\ssi) \to \barf \quext{weakly in }\, \LDH,\\ \label{co14ap} & w\ssi \to w \quext{weakly in }\, \LDV,\\ \label{co15ap} & u_{\sigma,t} + \sigma w_{\sigma,t} \to u_t \quext{weakly in }\, \LDVp, \end{align} for suitable limit functions $u$, $w$ and $\barf$. Here and below, all convergence relations have to be intended to hold up to (nonrelabelled) subsequences of $\sigma\searrow0$. Now, by the Aubin-Lions lemma, we have \begin{equation} \label{co21ap} u\ssi \to u \quext{strongly in }\, \CZH \quext{and a.e.~in }\,Q.
\end{equation} Then, \eqref{co13ap} and a standard monotonicity argument (cf.~\cite[Prop.~1.1]{barbu}) imply that $\barf=f(u)$ a.e.~in $Q$. Furthermore, by \eqref{hpa1}-\eqref{hpa2} and the generalized Lebesgue dominated convergence theorem, we have \begin{equation} \label{co21ap2} a(u\ssi) \to a(u),~~a'(u\ssi) \to a'(u),~~ \quext{strongly in }\, L^q(Q)~~\text{for all }\,q\in[1,+\infty). \end{equation} Analogously, recalling \eqref{st21ap2}, $z\ssi=\phi(u\ssi)\to \phi(u)=:z$, strongly in $L^q(Q)$ for all $q\in[1,6)$. Actually, the latter relation holds also weakly in $\LDW$ thanks to the bound \eqref{st31ap}. Moreover, by \eqref{st21ap2}, \eqref{st31ap} and interpolation, we obtain \begin{equation} \label{co22ap} \| \nabla z\ssi \|_{L^{10/3}(Q)} \le c, \end{equation} whence, clearly, we also have \begin{equation} \label{co22ap2} \| \nabla u\ssi \|_{L^{10/3}(Q)} \le c. \end{equation} As a consequence, since \begin{equation} \label{co22ap3} - \Delta u\ssi = - \frac1{a^{1/2}(u\ssi)} \Delta z\ssi + \frac{a'(u\ssi)}{2a(u\ssi)} | \nabla u\ssi |^2, \end{equation} we also have that \begin{equation} \label{co22ap4} \Delta u\ssi \to \Delta u \quext{weakly in }\, L^{5/3}(Q). \end{equation} Combining this with \eqref{co11ap} and using the generalized Aubin-Lions lemma (cf., e.g., \cite{Si}), we then arrive at \begin{equation} \label{co22ap5} u\ssi \to u \quext{strongly in }\, L^{5/3}(0,T;W^{2-\epsilon,5/3}(\Omega)) \cap C^0([0,T];H^{1-\epsilon}(\Omega)), \quad \perogni \epsilon > 0, \end{equation} whence, by standard interpolation and embedding properties of Sobolev spaces, we obtain \begin{equation} \label{co22ap6} \nabla u\ssi \to \nabla u \quext{strongly in }\, L^q(Q) \quext{for some }\,q>2. \end{equation} Consequently, recalling \eqref{co21ap2}, \begin{equation} \label{co22ap7} a'(u\ssi) |\nabla u\ssi|^2 \to a'(u) |\nabla u|^2, \quext{say, weakly in }\, L^1(Q). \end{equation} This is sufficient to take the limit $\sigma\searrow 0$ in \eqref{CH2-4ap} and get back \eqref{CH2th}.
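Let us briefly justify the interpolation leading to \eqref{co22ap}; the computation below is a standard parabolic interpolation, sketched under the (natural, in view of the exponents involved) assumption that $\Omega\subset\RR^3$. By \eqref{st21ap2} and \eqref{st31ap}, $\nabla z\ssi$ is bounded in $L^\infty(0,T;H)\cap L^2(0,T;V)$, and $V\subset L^6(\Omega)$. Hence, by the Lebesgue interpolation $\| v \|_{L^{10/3}(\Omega)} \le \| v \|_{L^2(\Omega)}^{2/5} \| v \|_{L^6(\Omega)}^{3/5}$,
\[
  \int_0^T \| \nabla z\ssi(t) \|_{L^{10/3}(\Omega)}^{10/3} \, dt
  \le \| \nabla z\ssi \|_{L^\infty(0,T;H)}^{4/3} \int_0^T \| \nabla z\ssi(t) \|_{L^6(\Omega)}^{2} \, dt \le c.
\]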
To conclude the proof, it only remains to show the regularity \eqref{regou4} as far as the second space derivatives of $u$ are concerned. Indeed, by \eqref{st31ap} and the Gagliardo-Nirenberg inequality \eqref{ineq:gn}, \begin{equation} \label{co22ap8} z \in L^2(0,T;W) \cap L^\infty(Q) \subset L^4(0,T;W^{1,4}(\Omega)). \end{equation} Thus, we have also $u \in L^4(0,T;W^{1,4}(\Omega))$ and, consequently, a comparison of terms in \eqref{CH2th} shows that $\Delta u \in L^2(0,T;H)$, whence \eqref{regou4} follows from elliptic regularity. The proof of Theorem~\ref{teo:4th} is concluded. \section{Further properties of weak solutions} \label{sec:uniq} \subsection{Uniqueness for the 4th order problem} \label{subsec:uniq} We will now prove that, if the interfacial (i.e., gradient) part of the free energy $\calE\dd$ satisfies a {\sl convexity}\/ condition (in the viscous case $\epsi>0$) or, respectively, a {\sl strict convexity}\/ condition (in the non-viscous case $\epsi=0$), then the solution is unique also in the 4th order case. The stronger assumption (corresponding to $\kappa>0$ in the statement below) required in the non-viscous case is needed to control the nonmonotone part of $f(u)$, while in the viscous case the term $\epsi u_t$ can be used for that purpose. It is worth noting that, also from a purely thermodynamical point of view, the convexity condition is a rather natural requirement. Indeed, it corresponds to requiring that the second differential of $\calE\dd$ be positive definite, which ensures that the stationary solutions are dynamically stable (cf., e.g., \cite{Su} for more details). \bete\label{teouniq} Let the assumptions of\ {\rm Theorem~\ref{teo:4th}} hold and assume that, in addition, \begin{equation} \label{1aconc} a''(r)\ge 0, \quad \Big(\frac1a\Big)''(r)\le -\kappa, \quad\perogni r\in[-1,1], \end{equation} where $\kappa>0$ if $\epsi=0$ and $\kappa\ge 0$ if $\epsi> 0$. Then, the 4th order problem admits a unique weak solution.
\ente \begin{proof} Let us denote by $J$ the gradient part of the energy, i.e., \begin{equation} \label{defiJ} J:V\to [0,+\infty), \qquad J(u):=\io \frac{a(u)}2 | \nabla u |^2. \end{equation} Then, we clearly have \begin{equation} \label{Jprime} \duavg{J'(u),v} = \io \Big( a(u) \nabla u \cdot \nabla v + \frac{a'(u)}2 |\nabla u|^2 v \Big). \end{equation} Correspondingly, we can compute the second derivative of $J$ as \begin{equation}\label{Jsecond} \duavg{J''(u)v,z} = \io \Big( \frac{a''(u)|\nabla u|^2 vz}2 + a'(u) v \nabla u\cdot\nabla z + a'(u) z \nabla u\cdot\nabla v + a(u) \nabla v\cdot\nabla z \Big). \end{equation} To be more precise, we have that $J'(u)\in V'$ and $J''(u)\in \calL(V,V')$ at least for $u\in W$ (this may fail if we only have $u\in V$, due to the quadratic terms in the gradient). This is, however, the case for the 4th order system, since any weak solution satisfies $u(t) \in W$ for a.e.~$t\in(0,T)$. From \eqref{Jsecond}, we then have in particular \begin{align} \no \duavg{J''(u)v,v} & = \io \Big( \frac{a''(u)|\nabla u|^2 v^2}2 + 2 a'(u) v \nabla u\cdot\nabla v + a(u) | \nabla v |^2 \Big)\\ \label{Jvv} & \ge \io \Big( a(u) - \frac{2 a'(u)^2}{a''(u)} \Big) | \nabla v|^2, \end{align} whence the functional $J$ is convex, at least when restricted to functions $u$ such that \begin{equation} \label{doveJconv} u \in W, \quad u(\Omega)\subset [-1,1], \end{equation} provided that $a$ satisfies \begin{equation} \label{aaprimo} a(r)a''(r) - 2a'(r)^2 \ge 0 \quad\perogni r\in [-1,1]. \end{equation} Noting that \begin{equation} \label{1asecondo} \Big(\frac1a\Big)'' = \frac{2(a')^2-aa''}{a^3}, \end{equation} we have that $J$ is (strictly) convex if $1/a$ is (strictly) concave, i.e., \eqref{1aconc} holds (cf.~also \cite[Sec.~3]{DNS} for related results). Note that, in deducing the last inequality in \eqref{Jvv}, we argued as if $a''>0$ everywhere.
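As a simple illustration of condition \eqref{1aconc} (the specific choice of $a$ below is ours and is used only as an example), one may take
\[
  a(r)=\frac1{2-r^2}, \qquad
  \Big(\frac1a\Big)''(r) = -2 \le -\kappa \ \text{ with }\ \kappa=2,
  \qquad
  a''(r) = \frac{2}{(2-r^2)^2} + \frac{8r^2}{(2-r^2)^3} \ge 0,
\]
for all $r\in[-1,1]$; hence both inequalities in \eqref{1aconc} hold, with $a''>0$ everywhere in this case.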
However, if $a''(r)=0$ for some $r$, then $a'(r)$ must vanish as well, due to \eqref{aaprimo}. Hence, on the set $\{u=r\}$ the first two summands on the \rhs\ of the first line of \eqref{Jvv} vanish identically. \smallskip That said, let us write both \eqref{CH1w} and \eqref{CH2w} for a pair of solutions $(u_1,w_1)$, $(u_2,w_2)$, and take the difference. Setting $(u,w):=(u_1,w_1)-(u_2,w_2)$, we obtain \begin{align}\label{CH1d} & u_t + A w = 0,\\ \label{CH2d} & w = J'(u_1) - J'(u_2) + f(u_1) - f(u_2) + \epsi u_t. \end{align} Then, we can test \eqref{CH1d} by $A^{-1}u$, \eqref{CH2d} by $u$, and take the difference. This is admissible since $u=u_1-u_2$ has zero mean value by \eqref{consmedie}. We obtain \begin{equation} \label{contod1} \frac12 \ddt \Big( \| u \|_{V'}^2 + \epsi \| u \|^2 \Big) + \duavg{J'(u_1) - J'(u_2),u} + \big( f(u_1) - f(u_2), u \big) = 0 \end{equation} and, using the convexity of $J$ coming from \eqref{1aconc} and the $\lambda$-monotonicity of $f$ (see~\eqref{hpf1}), we have, for some function $\xi$ belonging to $W$ a.e.~in time and taking its values in $[-1,1]$, \begin{equation} \label{contod2} \frac12 \ddt \Big( \| u \|_{V'}^2 + \epsi \| u \|^2 \Big) + \kappa \| \nabla u \|^2 \le \frac12 \ddt \Big( \| u \|_{V'}^2 + \epsi \| u \|^2 \Big) + \duavg{J''(\xi) u, u} \le \lambda \| u \|^2. \end{equation} Thus, in the case $\epsi>0$ (where possibly $\kappa=0$), we can just use Gronwall's lemma. Instead, if $\epsi=0$ (so that we assumed $\kappa>0$), by the Poincar\'e-Wirtinger inequality we have \begin{equation} \label{contod3} \lambda \| u \|^2 \le \frac\kappa2 \| \nabla u \|^2 + c \| u \|_{V'}^2, \end{equation} and the assertion follows again by applying Gronwall's lemma to \eqref{contod2}. \end{proof} \subsection{Additional regularity} \label{sec:add} Here we prove parabolic regularization properties of the solutions to the 4th order system in the case of a convex energy functional.
An analogous result also holds for the 6th order system under general conditions on $a$, since the bilaplacian in that case dominates the lower order terms (we omit the details). \bete\label{teoreg} Let the assumptions of\/ {\rm Theorem~\ref{teouniq}} hold. Then, the solution satisfies the additional regularity property \begin{equation} \label{add-reg} \| u \|_{L^\infty(\tau,T;W)} + \| u \|_{L^\infty(\tau,T;W^{1,4}(\Omega))} \le Q(\tau^{-1}) \quad \perogni \tau>0, \end{equation} where $Q$ is a computable monotone function whose expression depends on the data of the problem and, in particular, on $u_0$. \ente \begin{proof} The proof is based on a further a priori estimate, which unfortunately has a formal character in the present regularity setting. To justify it, one should proceed by regularization. For instance, a natural choice would be to refine the fixed point argument leading to existence of a weak solution (cf.~Sec.~\ref{sec:4th}) by showing (e.g., using a bootstrap regularity argument) that, at least locally in time, the solution lies in higher order H\"older spaces. We leave the details to the reader. That said, we test \eqref{CH1w} by $w_t$ and subtract the result from the time derivative of \eqref{CH2w} tested by $u_t$. We obtain \begin{equation} \label{contoe1} \frac 12 \ddt \| \nabla w \|^2 + \frac\epsi2 \ddt \| u_t \|^2 + \duavg{J''(u) u_t, u_t} + \io f'(u) u_t^2 \le 0. \end{equation} Then, by convexity of $J$, \begin{equation} \label{contoe2} \duavg{J''(u) u_t, u_t} \ge \kappa \| u_t \|_{\LDV}^2. \end{equation} On the other hand, the $\lambda$-monotonicity of $f$ gives \begin{equation} \label{contoe3} \io f'(u) u_t^2 \ge - \lambda \| u_t \|_{\LDH}^2 \end{equation} and, if $\epsi=0$ (so that $\kappa>0$), we have as before \begin{equation} \label{contoe3-b} - \lambda \| u_t \|_{\LDH}^2 \ge - \frac\kappa2 \| u_t \|_{\LDV}^2 - c \| u_t \|_{\LDVp}^2.
\end{equation} Thus, recalling the first of \eqref{regou} and applying the {\sl uniform}\/ Gronwall lemma (cf.~\cite[Lemma~I.1.1]{Te}), it is not difficult to infer \begin{equation} \label{ste1} \| \nabla w \|_{L^\infty(\tau,T;H)} + \epsi^{1/2} \| u_t \|_{L^\infty(\tau,T;H)} + \kappa \| u_t \|_{L^2(\tau,T;V)} \le Q(\tau^{-1}) \quad \perogni \tau>0. \end{equation} Next, testing \eqref{CH2w} by $u-u\OO$ and proceeding as in the ``Second estimate'' of Subsection~\ref{sec:apriori}, but taking now the essential supremum as time varies in $[\tau,T]$, we arrive at \begin{equation} \label{ste2} \| w \|_{L^\infty(\tau,T;V)} + \| f(u) \|_{L^\infty(\tau,T;L^1(\Omega))} \le Q(\tau^{-1}) \quad \perogni \tau>0. \end{equation} Thus, thanks to \eqref{ste2}, we can test \eqref{CH2th} by $-\Delta z$, with $z=\phi(u)$ (cf.~\eqref{defiphi}). Proceeding as in Section~\ref{sec:4th} (but taking now the supremum over $[\tau,T]$ rather than integrating in time), we easily get \eqref{add-reg}, which concludes the proof. \end{proof} \subsection{Energy equality} \label{sec:long} As noted in Section~\ref{sec:6thto4th}, any weak solution to the 6th order system satisfies the energy {\sl equality} \eqref{energy-6th}. We will now see that the same property holds also in the {\sl viscous}\/ 4th order case (i.e., if $\delta=0$ and $\epsi>0$). More precisely, we can prove the following. \bepr\label{prop:energy} Let the assumptions of\/ {\rm Theorem~\ref{teo:4th}} hold and let $\epsi>0$. Then, any weak solution to the 4th order system satisfies the\/ {\rm integrated} energy equality \begin{equation}\label{energy-4th-i} \calE_0(u(t)) = \calE_0(u_0) - \int_0^t \big( \| \nabla w(s) \|^2 - \epsi \| u_t(s) \|^2 \big) \dis \quad \perogni t\in[0,T]. \end{equation} \empr \begin{proof} As before, we proceed by testing \eqref{CH1w4th} by $w$, \eqref{CH2th} by $u_t$ and taking the difference.
As $u_t\in \LDH$ and $f_0(u)\in \LDH$ (cf.~\eqref{regou4} and \eqref{regofu4}), the integration by parts formula \begin{equation} \label{co91} \big(f(u),u_t\big) = \ddt \io F(u), \quext{a.e.~in }\,(0,T) \end{equation} is straightforward (it follows directly from \cite[Lemma~3.3, p.~73]{Br}). Moreover, in view of \eqref{regou4}, assumption \eqref{x11} of Lemma~\ref{lemma:ipp} is satisfied. Hence, by \eqref{x14}, we deduce that \begin{equation} \label{co92} \int_0^t \big(\calA(u(s)),u_t(s)\big)\,\dis = \io \frac{a(u(t))}2 |\nabla u(t)|^2 - \io \frac{a(u_0)}2 |\nabla u_0|^2. \end{equation} Combining \eqref{co91} and \eqref{co92}, the assertion follows immediately. \end{proof} \noinden It is worth noting that the energy equality obtained above plays a key role in the investigation of the long-time behavior of the system. In particular, given $m\in(-1,1)$ (the spatial mean of the initial datum, which is a conserved quantity due to~\eqref{consmedie}), we can define the {\sl phase space} \begin{equation} \label{defiXd} \calX\ddm:=\big\{u\in V:~\delta u\in W,~F(u)\in L^1(\Omega),~ u\OO = m\big\} \end{equation} and view the system (both for $\delta>0$ and for $\delta=0$) as a (generalized) dynamical process in $\calX\ddm$. Then, \eqref{energy-4th-i} (or its 6th order analogue) lies at the basis of the so-called {\sl energy method}\/ (cf.~\cite{Ba1,MRW}) for proving existence of the {\sl global attractor} with respect to the ``strong'' topology of the phase space. This issue will be analyzed in a forthcoming work. \beos\label{nonviscous} Whether the equality \eqref{energy-4th-i} still holds in the nonviscous case $\epsi=0$ seems to be a nontrivial question.
The answer would be positive if one could prove the integration by parts formula \begin{equation} \label{co101} \itt\duavb{u_t,\calA(u)+f(u)} = \io\Big(\frac{a(u(t))}2 |\nabla u(t)|^2 + F(u(t))\Big) - \io\Big(\frac{a(u_0)}2|\nabla u_0|^2 + F(u_0)\Big), \end{equation} under the conditions \begin{equation} \label{co102} u \in \HUVp \cap L^2(0,T;W) \cap L^\infty(Q), \qquad \calA(u)+f(u) \in \LDV, \end{equation} which are satisfied by our solution (in particular, the latter condition in \eqref{co102} follows by a comparison of terms in \eqref{CH2th}, where now $\epsi=0$). Indeed, if \eqref{co102} holds, then both sides of \eqref{co101} make sense. However, devising an approximation argument suitable for proving \eqref{co101} could be a rather delicate problem. \eddos
\section{Introduction} HERA is a prodigious source of quasi--real photons from reactions where the electron is scattered at very small angles. This permits the study of photoproduction reactions at photon--proton centre of mass (c.m.) energies an order of magnitude larger than in previous fixed target experiments. The majority of the $\gamma p$ collisions are due to interactions of the proton with the hadronic structure of the photon, a process that has been successfully described by the vector meson dominance model (VDM)\cite{VDM}. Here, the photon is pictured to fluctuate into a virtual vector meson that subsequently collides with the proton. Such collisions exhibit the phenomenological characteristics of hadron--hadron interactions. In particular they can proceed via diffractive or non--diffractive channels. The diffractive interactions are characterized by very small four momentum transfers and no colour exchange between the colliding particles leading to final states where the colliding particles appear either intact or as more massive dissociated states. However, it has been previously demonstrated that photoproduction collisions at high transverse momentum cannot be described solely in terms of the fluctuation of the photon into a hadron--like state \cite{omega,na14}. The deviations come from contributions of two additional processes called direct and anomalous. In the former process the photon couples directly to the charged partons inside the proton. The anomalous component corresponds to the process where the photon couples to a {\it q\={q}} pair without forming a bound state. The interactions of the photon via the hadron--like state and the anomalous component are referred to as resolved photoproduction, since both of them can be described in terms of the partonic structure of the photon \cite{Storrow}. In this paper we present the measurement of the transverse momentum spectra of charged particles produced in photoproduction reactions at an average c.m. 
energy of $\langle W \rangle = 180\GeV$ and in the laboratory pseudorapidity range $-1.2<\eta<1.4$ \footnote{Pseudorapidity $\eta$ is calculated from the relation $\eta = -\ln(\tan(\theta/2))$, where $\theta$ is the polar angle measured with respect to the proton beam direction.}. This range approximately corresponds to the c.m. pseudorapidity interval of $0.8<\eta_{c.m.}<3.4$, where the direction is defined such that positive $\eta_{c.m.}$ values correspond to the photon fragmentation region. The transverse momentum distributions of charged particles are studied for non--diffractive and diffractive reactions separately. The $p_{T}$ spectrum from non--diffractive events is compared to low energy photoproduction data and to hadron--hadron collisions at a similar c.m. energy. In the region of high transverse momenta we compare the data to the predictions of a next--to--leading order QCD calculation. The diffractive reaction ($\gamma p \rightarrow X p$), where $X$ results from the dissociation of the photon, was previously measured by the E612 Fermilab experiment at much lower c.m. energies, $11.8<W<16.6\GeV$ \cite{chapin}. It was demonstrated that the properties of the diffractive excitation of the photon resemble diffraction of hadrons in terms of the distribution of the dissociated mass, the distribution of the four-momentum transfer between the colliding objects \cite{chapin} and the ratio of the diffractive cross section to the total cross section \cite{zeus-sigmatot}. The hadronization of diffractively dissociated photons has not yet been systematically studied. In this analysis we present the measurement of inclusive $p_{T}$ spectra in two intervals of the dissociated photon mass with mean values of $\langle M_{X} \rangle = 5\GeV$ and $\langle M_{X} \rangle = 10\GeV$. \section{Experimental setup} The analysis is based on data collected with the ZEUS detector in 1993, corresponding to an integrated luminosity of $0.40\:{\rm pb}^{-1}$.
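The pseudorapidity relation recalled in the footnote above, together with its inverse, can be sketched numerically (a minimal illustration; the function names are ours):

```python
import math

def pseudorapidity(theta):
    """Pseudorapidity eta = -ln(tan(theta/2)), with the polar angle
    theta (radians) measured with respect to the proton beam direction."""
    return -math.log(math.tan(theta / 2.0))

def polar_angle(eta):
    """Inverse relation: theta = 2*atan(exp(-eta))."""
    return 2.0 * math.atan(math.exp(-eta))
```

For instance, $\eta=0$ corresponds to $\theta=90^\circ$, and the laboratory interval $-1.2<\eta<1.4$ corresponds to a wedge of polar angles around the beam axis.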
The HERA machine was operating at an electron energy of $26.7\GeV$ and a proton energy of $820\GeV$, with 84 colliding bunches. In addition 10 electron and 6 proton bunches were left unpaired for background studies (pilot bunches). A detailed description of the ZEUS detector may be found elsewhere \cite{status93,zeus-description}. Here, only a brief description of the detector components used for this analysis is given. Throughout this paper the standard ZEUS right--handed coordinate system is used, which has its origin at the nominal interaction point. The positive Z--axis points in the direction of the proton beam, called the forward direction, and X points towards the centre of the HERA ring. Charged particles created in $ep$ collisions are tracked by the inner tracking detectors which operate in a magnetic field of $1.43{\: \rm T}$ provided by a thin superconducting solenoid. Immediately surrounding the beampipe is the vertex detector (VXD), a cylindrical drift chamber which consists of 120 radial cells, each with 12 sense wires running parallel to the beam axis \cite{VXD}. The achieved resolution is $50\:{\rm \mu m}$ in the central region of a cell and $150\:{\rm \mu m}$ near the edges. Surrounding the VXD is the central tracking detector (CTD) which consists of 72 cylindrical drift chamber layers organized in 9 superlayers \cite{CTD}. These superlayers alternate between those with wires parallel to the collision axis and those with wires inclined at a small angle to provide a stereo view. The magnetic field is significantly inhomogeneous towards the ends of the CTD thus complicating the electron drift. With the present understanding of the chamber, a spatial resolution of $\approx 260\:{\rm \mu m}$ has been achieved. The hit efficiency of the chamber is greater than $95\%$. 
In events with charged particle tracks, using the combined data from both chambers, the position resolution of the reconstructed primary vertex is $0.6\:{\rm cm}$ in the Z direction and $0.1\:{\rm cm}$ in the XY plane. The resolution in transverse momentum for full length tracks is $\sigma_{p_{T}} / p_{T} \leq 0.005 \cdot p_{T} \oplus 0.016$ ($p_{T}$ in $\GeV$). The description of the track and the vertex reconstruction algorithms may be found in \cite{zeus-breit} and references therein. The solenoid is surrounded by the high resolution uranium--scintillator calorimeter (CAL) divided into the forward (FCAL), barrel (BCAL) and rear (RCAL) parts \cite{CAL}. Holes of $20 \times 20 {\rm\: cm}^{2}$ in the centre of FCAL and RCAL are required to accommodate the HERA beam pipe. Each of the calorimeter parts is subdivided into towers which in turn are segmented longitudinally into electromagnetic (EMC) and hadronic (HAC) sections. These sections are further subdivided into cells, which are read out by two photomultiplier tubes. Under test beam conditions, an energy resolution of the calorimeter of $\sigma_{E}/E = 0.18/\sqrt{E (\GeV)}$ for electrons and $\sigma_{E}/E = 0.35/\sqrt{E (\GeV)}$ for hadrons was measured. In the analysis presented here CAL cells with an EMC (HAC) energy below $60\MeV$ ($110\MeV$) are excluded to minimize the effect of calorimeter noise. This noise is dominated by uranium activity and has an r.m.s.~value below $19\MeV$ for EMC cells and below $30\MeV$ for HAC cells. The luminosity detector (LUMI) measures the rate of the Bethe--Heitler process $e p \rightarrow e \gamma p$. The detector consists of two lead--scintillator sandwich calorimeters installed in the HERA tunnel and is designed to detect electrons scattered at very small angles and photons emitted along the electron beam direction \cite{lumi}.
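The track momentum resolution quoted above, $\sigma_{p_T}/p_T \leq 0.005 \cdot p_T \oplus 0.016$, combines the two terms in quadrature (the usual reading of the $\oplus$ symbol); a small numerical sketch with names of our own choosing:

```python
import math

def rel_pt_resolution(pt):
    """Relative transverse-momentum resolution for full-length tracks:
    the terms 0.005*pt and 0.016 added in quadrature (pt in GeV)."""
    return math.hypot(0.005 * pt, 0.016)
```

At $p_T = 1\GeV$ this gives about $1.7\%$, rising to roughly $4.3\%$ at $p_T = 8\GeV$, the highest momentum reached by the measured spectra.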
Signals in the LUMI electron calorimeter are used to tag events and to measure the energy of the interacting photon, $E_{\gamma}$, from $E_{\gamma}=E_{e}-E'_{e}=26.7\GeV-E'_{e}$, where $E'_{e}$ is the energy measured in the LUMI. \section{Trigger} The events used in the following analysis were collected using a trigger requiring a coincidence of the signals in the LUMI electron calorimeter and in the central calorimeter. The small angular acceptance of the LUMI electron calorimeter implied that in all the triggered events the virtuality of the exchanged photon was in the range $4\cdot 10^{-8} < Q^{2}< 0.02 \GeV^{2}$. The central calorimeter trigger required an energy deposit in the RCAL EMC section of more than $464\:{\rm MeV}$ (excluding the towers immediately adjacent to the beam pipe) or $1250\:{\rm MeV}$ (including those towers). In addition we also used the events triggered by an energy in the BCAL EMC section exceeding $3400\:{\rm MeV}$. At the trigger level the energy was calculated using only towers with more than $464\:{\rm MeV}$ of deposited energy. \section{Event selection} In the offline analysis the energy of the scattered electron detected in the LUMI calorimeter was required to satisfy $15.2<E'_{e}<18.2\GeV$, limiting the $\gamma p$ c.m. energy to the interval $167<W<194\GeV$. The longitudinal vertex position determined from tracks was required to be $-35\cm < Z_{vertex} < 25\cm $. The vertex cut removed a substantial part of the beam gas background and limited the data sample to the region of uniform detector acceptance. The cosmic ray background was suppressed by requiring the transverse momentum imbalance of the deposits in the main calorimeter, $P_{missing}$, relative to the square root of the total transverse energy, $\sqrt{E_{T}}$, to be small: $P_{missing}/\sqrt{E_{T}} < 2\sqrt{\GeV}$.
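The quoted $W$ interval can be checked from the tagged electron energy: for quasi-real photons ($Q^2 \approx 0$, neglecting particle masses), $W \approx \sqrt{4 E_\gamma E_p}$. A hedged numerical sketch (the function name is ours):

```python
import math

E_E_BEAM = 26.7   # electron beam energy (GeV)
E_P_BEAM = 820.0  # proton beam energy (GeV)

def w_cm(e_scattered):
    """Approximate photon-proton c.m. energy W ~ sqrt(4*E_gamma*E_p),
    with E_gamma = E_e - E'_e measured by the LUMI calorimeter."""
    e_gamma = E_E_BEAM - e_scattered
    return math.sqrt(4.0 * e_gamma * E_P_BEAM)
```

The offline cut $15.2<E'_{e}<18.2\GeV$ indeed reproduces the quoted range $167<W<194\GeV$.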
The data sample was divided into a diffractive and a non--diffractive subset according to the pseudorapidity, $\eta_{max}$, of the most forward energy deposit in the FCAL with energy above $400\MeV$. The requirement of $\eta_{max} < 2$ selects events with a pronounced rapidity gap that are predominantly due to diffractive processes ($\approx 96\%$ according to Monte Carlo (MC) simulation, see section~6). The events with $\eta_{max} > 2$ are almost exclusively ($\approx 95\%$) due to non-diffractive reactions. The final non--diffractive data sample consisted of 149500 events. For the diffractive data sample ($\eta_{max} < 2$) an additional cut $\eta_{max} > -2$ was applied to suppress the production of light vector mesons $V$ in the diffractive reactions $\gamma p \rightarrow V p$ and $\gamma p \rightarrow V N$, where $N$ denotes a nucleonic system resulting from the dissociation of the proton. The remaining sample was analyzed as a function of the mass of the dissociated system reconstructed from the empirical relationship \[ M_{X\:rec} \approx A\cdot\sqrt{E^{2}-P_{Z}^{2}}+B = A\cdot\sqrt{(E+P_{Z}) \cdot E_{\gamma}}+B .\] The above formula exploits the fact that in tagged photoproduction the diffractively excited photon state has a relatively small transverse momentum. The total hadronic energy, $E$, and longitudinal momentum $P_{Z}=E \cdot \cos\theta$ were measured with the uranium calorimeter by summing over all the energy deposits of at least $160\MeV$. The correction factors $A=1.7$ and $B=1.0\GeV$ compensate for the effects of energy loss in the inactive material, beam pipe holes, and calorimeter cells that failed the energy threshold cuts. The formula was optimized to give the best approximation of the true invariant mass in diffractive photon dissociation events obtained from MC simulations, while being insensitive to the calorimeter noise.
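The empirical mass formula above translates directly into code (an illustration only; variable names are ours, and the inputs would come from the calorimeter sums described in the text):

```python
import math

A = 1.7  # multiplicative correction for energy losses
B = 1.0  # additive correction (GeV)

def m_x_rec(e_had, p_z_had, e_gamma):
    """Reconstructed dissociated mass M_X (GeV) from the total hadronic
    energy E, its longitudinal momentum P_Z and the tagged photon energy
    E_gamma, using M^2 = E^2 - P^2 and the small transverse momentum of
    the diffractively excited photon state."""
    return A * math.sqrt((e_had + p_z_had) * e_gamma) + B
```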
The diffractive data were analyzed in two intervals of the reconstructed mass, namely \nobreak{$4<M_{X\:rec}<7\GeV$} and $8<M_{X\:rec}<13\GeV$. According to the MC simulation the first cut selects events generated with a mass having a mean value and spread of $\langle M_{X\:GEN} \rangle = 5\GeV$ and ${\rm r.m.s.} = 1.8\GeV$. The second cut results in $\langle M_{X\:GEN} \rangle = 10\GeV$ and ${\rm r.m.s.} = 2.3\GeV$. Details of the MC simulation are given in section \ref{s:mc}. The final data sample consisted of 5123 events in the lower $M_{X}$ interval and of 2870 events in the upper interval. The contamination of the final data samples from e--gas background ranges from \nobreak{$<0.1\%$} (non-diffractive sample) to $\approx 10\%$ (diffractive sample, $\langle M_{X} \rangle = 5\GeV$). The p--gas contribution is between $1\%$ (non-diffractive sample) and $2\%$ (diffractive sample, $\langle M_{X} \rangle = 5\GeV$). The e--gas background was statistically subtracted using the electron pilot bunches. A similar method was used to correct for the p--gas background that survived the selection cuts because of an accidental coincidence with an electron bremsstrahlung $(ep \rightarrow \gamma ep)$. A large fraction of these background events were identified using the LUMI detector, since the energy deposits in the electron and photon calorimeters summed up to the electron beam energy. The identified background events were included with negative weights into all of the distributions in order to compensate for the unidentified part of the coincidence background. A detailed description of the statistical background subtraction method may be found in \cite{zeus-sigmatot,phdburow}. \section{Track selection} The charged tracks used for this analysis were selected with the following criteria:\ \begin{itemize} \item only tracks accepted by an event vertex fit were selected.
This eliminated most of the tracks that came from secondary interactions and decays of short lived particles; \item tracks must have hits in each of the first 5 superlayers of the CTD. This requirement ensures that only long, well reconstructed tracks are used for the analysis; \item $-1.2<\eta<1.4$ and $p_{T} > 0.3\GeV$. These two cuts select the region of high acceptance of the CTD where the detector response and systematics are best understood. \end{itemize} Using Monte Carlo events, we estimated that the efficiency of the charged track reconstruction convoluted with the acceptance of the selection cuts is about $90\%$ and is uniform in $p_{T}$. The contamination of the final sample from secondary interaction tracks, products of decays of short lived particles, and from spurious tracks (artifacts of the reconstruction algorithm) ranges from $5\%$ at $p_{T}=0.3\GeV$ to $3\%$ for $p_{T}>1\GeV$. The inefficiency and remaining contamination of the final track sample is accounted for by the acceptance correction described in the following section. The transverse momenta of the measured tracks displayed no correlation with $\eta$ over the considered interval and were symmetric with respect to the charge assigned to the track. \section{Monte Carlo models} \label{s:mc} For the acceptance correction and selection cut validation we used Monte Carlo events generated with a variety of programs. Soft, non-diffractive collisions of the proton with a VDM type photon were generated using HERWIG 5.7 with the minimum bias option \cite{HERWIG}. The generator was tuned to fit the ZEUS data on charged particle multiplicity and transverse energy distributions. For the evaluation of the model dependence of our measurements we also used events from the PYTHIA generator with the soft hadronic interaction option \cite{PYTHIA}. 
Hard resolved and direct subprocesses were simulated using the standard HERWIG 5.7 generator with the lower cut-off on the transverse momentum of the final--state partons, $p_{T min}$, chosen to be $2.5 \GeV$. For the parton densities of the colliding particles, the GRV--LO \cite{GRV} (for the photon) and MRSD$'$\_ \cite{MRS} (for the proton) parametrisations were used. As a cross--check we also used hard $\gamma p$ scattering events generated by PYTHIA with $p_{T min} = 5\GeV$. The soft and hard MC components were combined in a ratio that gave the best description of the transverse momentum distribution of the track with the largest $p_{T}$ in each event. For $p_{T min} = 2.5\GeV$, the hard component comprises $11\%$ of the non-diffractive sample and for $p_{T min} = 5\GeV$ only about $3\%$. Each diffractive subprocess was generated separately. The diffractive production of vector mesons $(\rho, \omega, \phi)$ was simulated with PYTHIA. The same program was used to simulate the double diffractive dissociation ($\gamma p \rightarrow X N$). The diffractive excitation of the photon ($\gamma p \rightarrow X p$) was generated with the EPDIF program which models the diffractive system as a quark--antiquark pair produced along the collision axis \cite{Solano}. Final state QCD radiation and hadronization were simulated using JETSET \cite{PYTHIA}. For the study of systematic uncertainties, a similar sample of events was obtained by enriching the standard PYTHIA diffractive events with the hard component simulated using the POMPYT Monte Carlo program (hard, gluonic pomeron with the direct photon option) \cite{bruni}. The MC samples corresponding to the diffractive subprocesses were combined with the non-diffractive component in the proportions given by the ZEUS measurement of the partial photoproduction cross sections \cite{zeus-sigmatot}. The MC events were generated without electroweak radiative corrections. 
In the considered $W$ range, the QED radiation effects result in a $\approx 2\%$ change in the number of measured events, so that the effect on the results of this analysis is negligible. The generated events were processed through the detector and trigger simulation programs and run through the standard ZEUS reconstruction chain. \section{Acceptance correction} \label{s:correction} The acceptance corrected transverse momentum spectrum was derived from the reconstructed spectrum of charged tracks, by means of a multiplicative correction factor, calculated using Monte Carlo techniques: \[ C(p_{T}) = \Big(\frac{1}{N_{gen\: ev}} \cdot \frac{d N_{gen}}{d p_{T\:gen}}\Big) \Big/ \Big(\frac{1}{N_{rec\: ev}} \cdot \frac{d N_{rec}}{d p_{T\:rec}}\Big) . \] $N_{gen}$ denotes the number of primary charged particles generated with a transverse momentum $p_{T\:gen}$ in the considered pseudorapidity interval and $N_{gen\: ev}$ is the number of generated events. Only the events corresponding to the appropriate type of interaction were included, e.g. for the lower invariant mass interval of the diffractive sample only the Monte Carlo events corresponding to diffractive photon dissociation with the generated invariant mass $4<M_{X\:gen}<7\GeV$ were used. $N_{rec}$ is the number of reconstructed tracks passing the experimental cuts with a reconstructed transverse momentum of $p_{T\:rec}$, while $N_{rec\: ev}$ denotes the number of events used. Only the events passing the trigger simulation and the experimental event selection criteria were included in the calculation. To account for the contribution of all the subprocesses, the combination of the MC samples described in section~\ref{s:mc} was used.
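The bin-by-bin correction factor defined above is simply the ratio of the per-event generated and reconstructed spectra; schematically (a sketch with our own names, not the experiment's actual code):

```python
def correction_factors(n_gen, n_gen_ev, n_rec, n_rec_ev):
    """Multiplicative acceptance correction, one value per pT bin:
    C = (dN_gen/dpT / N_gen_ev) / (dN_rec/dpT / N_rec_ev)."""
    return [(g / n_gen_ev) / (r / n_rec_ev) for g, r in zip(n_gen, n_rec)]
```

The corrected spectrum is then the reconstructed per-event track spectrum multiplied by $C(p_T)$ in each bin.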
This method corrects for the following effects in the data: \begin{itemize} \item the limited trigger acceptance; \item the inefficiencies of the event selection cuts, in particular the contamination of the diffractive spectra from non--diffractive processes and the events with a dissociated mass that was incorrectly reconstructed. Also the non-diffractive sample is corrected for the contamination from diffractive events with high dissociated mass; \item limited track finding efficiency and acceptance of the track selection cuts, as well as the limited resolution in momentum and angle; \item loss of tracks due to secondary interactions and contamination from secondary tracks; \item decays of charged pions and kaons, photon conversions and decays of lambdas and neutral kaons. Thus, in the final spectra the charged kaons appear, while the decay products of neutral kaons and lambdas do not. For all the other strange and charmed states, the decay products were included. \end{itemize} The validity of our acceptance correction method relies on the correct simulation of the described effects in the Monte Carlo program. The possible discrepancies between reality and Monte Carlo simulation were analyzed and the estimation of the effect on the final distributions was included in the systematic uncertainty, as described in the following section. \section{Systematic effects} \label{s:systematics} One of the potential sources of systematic inaccuracy is the tracking system and its simulation in the Monte Carlo events used for the acceptance correction. Using an alternative simulation code with artificially degraded tracking performance we verified that the efficiency to find a track which fulfills all the selection cuts is known with an accuracy of about $10\%$. The error due to an imprecise description of the momentum resolution at high $p_{T}$ is negligible compared to the statistical precision of the data. 
We also verified that the final spectra would not change significantly if the tracking resolution at high $p_{T}$ had non-Gaussian tails at the level of $10\%$ or if the measured momentum was systematically shifted from the true value by the momentum resolution. Another source of systematic uncertainty is the Monte Carlo simulation of the trigger response. We verified that even a very large ($20\%$) inaccuracy of the BCAL energy threshold would not produce a statistically significant effect. An incorrect RCAL trigger simulation would change the number of events observed, but would not affect the final $p_{T}$ spectra, since these are normalized to the number of events. The correlation between the RCAL energy and the $p_{T}$ of tracks is very small. To evaluate the model dependence we repeated the calculation of the correction factors using an alternative set of Monte Carlo programs (see section \ref{s:mc}) and compared the results with the original ones. The differences between the obtained factors varied between $5\%$ for the high mass diffractive sample and $11\%$ for the non--diffractive one. The sensitivity of the result to the assumed relative cross sections of the physics processes was checked by varying the subprocess ratios within the error limits given in \cite{zeus-sigmatot}. The effect was at most $3\%$. All the above effects were combined in quadrature, resulting in an overall systematic uncertainty of the charged particle rates as follows: $15\%$ in the non-diffractive sample, $15\%$ in the $\langle M_{X} \rangle = 5\GeV$ diffractive sample and $9\%$ in the $\langle M_{X} \rangle = 10\GeV$ diffractive sample. All these systematic errors are independent of $p_{T}$. 
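The quadrature combination of independent systematic effects can be sketched as follows; the $10\%$, $11\%$ and $3\%$ inputs are the tracking, model-dependence and subprocess-ratio effects quoted above, combined here purely for illustration:

```python
import math

# Combine independent fractional systematic uncertainties in quadrature.
def combine_in_quadrature(errors):
    return math.sqrt(sum(e * e for e in errors))

# Tracking (~10%), model dependence (up to 11%), subprocess ratios (3%)
total = combine_in_quadrature([0.10, 0.11, 0.03])  # ~0.15, i.e. ~15%
```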
\section{Results} The double differential rate of charged particle production in an event of a given type is calculated as the number of charged particles $\Delta N$ produced within $\Delta \eta$ and $\Delta p_{T}$ in $N_{ev}$ events as a function of $p_{T}$: \[\frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta} = \frac{1}{N_{ev}} \cdot \frac{1}{2 p_{T} \Delta \eta} \cdot \frac{\Delta N}{\Delta p_{T}} .\] The charged particle transverse momentum spectrum was derived from the transverse momentum distribution of observed tracks normalized to the number of data events by means of the correction factor described in section \ref{s:correction}. The resulting charged particle production rates in diffractive and non-diffractive events are presented in Fig.~\ref{f:corrected_pt} and listed in Tables \ref{t:results1}, \ref{t:results2} and \ref{t:results3}. In the figure the inner error bars indicate the statistical error. Quadratically combined statistical and systematic uncertainties are shown as the outer error bars. The $\langle M_{X} \rangle = 5\GeV$ diffractive spectrum extends to $p_{T}=1.75\GeV$ and the $\langle M_{X} \rangle = 10\GeV$ distribution extends to $p_{T}=2.5\GeV$. The non--diffractive distribution falls steeply in the low $p_{T}$ region but lies above the exponential fit at higher $p_{T}$ values. The measurements extend to $p_{T}=8\GeV$. 
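A minimal sketch of evaluating the double-differential rate in one $(p_T,\eta)$ bin from a corrected track count, following the formula above (all numbers are hypothetical, not the measured values):

```python
# Double-differential rate per event:
# (1/N_ev) d^2N/(dpT^2 deta) = (1/N_ev) * (1/(2 pT * deta)) * (dN/dpT),
# evaluated at the bin centre pT.
def rate_per_event(delta_n, n_ev, pt_centre, delta_pt, delta_eta):
    return delta_n / (n_ev * 2.0 * pt_centre * delta_eta * delta_pt)

# e.g. 500 corrected tracks in 0.30 < pT < 0.40 GeV,
# 1000 events, pseudorapidity interval -1.2 < eta < 1.4 (deta = 2.6)
r = rate_per_event(500, 1000, 0.35, 0.10, 2.6)
```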
\begin{table}[h] \centerline{\hbox{ \begin{tabular}{|c||c|c|c|} \hline \ \ $p_{T} [\GeV]$ \ \ & $\frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta} [\GeV^{-2}]$ & $\sigma_{stat} [\GeV^{-2}]$ & $\sigma_{syst} [\GeV^{-2}]$\\ \hline\hline 0.30-- 0.40 & 4.98 & 0.05 & 0.74 \\ 0.40-- 0.50 & 2.99 & 0.03 & 0.44 \\ 0.50-- 0.60 & 1.78 & 0.02 & 0.26 \\ 0.60-- 0.70 & 1.09 & 0.01 & 0.16 \\ 0.70-- 0.80 & 0.641 & 0.012 & 0.096 \\ 0.80-- 0.90 & 0.420 & 0.010 & 0.063 \\ 0.90-- 1.00 & 0.259 & 0.007 & 0.038 \\ 1.00-- 1.10 & 0.164 & 0.005 & 0.024 \\ 1.10-- 1.20 & 0.107 & 0.004 & 0.016 \\ 1.20-- 1.30 & 0.0764 & 0.0034 & 0.0114 \\ 1.30-- 1.40 & 0.0513 & 0.0017 & 0.0077 \\ 1.40-- 1.50 & 0.0329 & 0.0012 & 0.0049 \\ 1.50-- 1.60 & 0.0242 & 0.0010 & 0.0036 \\ 1.60-- 1.70 & 0.0175 & 0.0008 & 0.0026 \\ 1.70-- 1.80 & 0.0133 & 0.0006 & 0.0020 \\ 1.80-- 1.90 & 0.0082 & 0.0005 & 0.0012 \\ 1.90-- 2.00 & 0.00615 & 0.00038 & 0.00092 \\ 2.00-- 2.14 & 0.00454 & 0.00028 & 0.00068 \\ 2.14-- 2.29 & 0.00360 & 0.00024 & 0.00054 \\ 2.29-- 2.43 & 0.00215 & 0.00017 & 0.00032 \\ 2.43-- 2.57 & 0.00166 & 0.00013 & 0.00025 \\ 2.57-- 2.71 & 0.00126 & 0.00012 & 0.00018 \\ 2.71-- 2.86 & 0.00098 & 0.00010 & 0.00015 \\ 2.86-- 3.00 & 0.000625 & 0.000071 & 0.000093 \\ 3.00-- 3.25 & 0.000456 & 0.000048 & 0.000068 \\ 3.25-- 3.50 & 0.000252 & 0.000031 & 0.000037 \\ 3.50-- 3.75 & 0.000147 & 0.000020 & 0.000022 \\ 3.75-- 4.00 & 0.000094 & 0.000012 & 0.000014 \\ 4.00-- 4.50 & 0.000067 & 0.000008 & 0.000010 \\ 4.50-- 5.00 & 0.0000301 & 0.0000045 & 0.0000045 \\ 5.00-- 5.50 & 0.0000151 & 0.0000029 & 0.0000023 \\ 5.50-- 6.00 & 0.0000082 & 0.0000021 & 0.0000012 \\ 6.00-- 7.00 & 0.0000038 & 0.0000009 & 0.0000006 \\ 7.00-- 8.00 & 0.0000014 & 0.0000005 & 0.0000002 \\ \hline \end{tabular} }} \vspace{1cm} \bf\caption{\it The rate of charged particle production in an average non--diffractive event. The data correspond to $-1.2<\eta<1.4$. 
The $\sigma_{stat}$ and $\sigma_{syst}$ denote the statistical and systematic errors.} \label{t:results1} \end{table} \begin{table}[h] \centerline{\hbox{ \begin{tabular}{|c||c|c|c|} \hline \ \ $p_{T} [\GeV]$ \ \ & $\frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta} [\GeV^{-2}]$ & $\sigma_{stat} [\GeV^{-2}]$ & $\sigma_{syst} [\GeV^{-2}]$\\ \hline\hline 0.30-- 0.40 & 1.63 & 0.06 & 0.24 \\ 0.40-- 0.50 & 1.02 & 0.04 & 0.15 \\ 0.50-- 0.60 & 0.559 & 0.028 & 0.083 \\ 0.60-- 0.70 & 0.308 & 0.019 & 0.046 \\ 0.70-- 0.80 & 0.165 & 0.013 & 0.024 \\ 0.80-- 0.90 & 0.088 & 0.011 & 0.013 \\ 0.90-- 1.00 & 0.0479 & 0.0059 & 0.0071 \\ 1.00-- 1.10 & 0.0312 & 0.0052 & 0.0046 \\ 1.10-- 1.20 & 0.0196 & 0.0042 & 0.0029 \\ 1.20-- 1.35 & 0.0100 & 0.0018 & 0.0015 \\ 1.35-- 1.50 & 0.00304 & 0.00087 & 0.00045 \\ 1.50-- 1.75 & 0.00153 & 0.00052 & 0.00023 \\ \hline \end{tabular} }} \vspace{1cm} \bf\caption{\it The rate of charged particle production in an average event with a diffractively dissociated photon state of a mass $\langle M_{X} \rangle = 5\GeV$. The data correspond to $-1.2<\eta<1.4$. 
The $\sigma_{stat}$ and $\sigma_{syst}$ denote the statistical and systematic errors.} \label{t:results2} \end{table} \begin{table}[h] \centerline{\hbox{ \begin{tabular}{|c||c|c|c|} \hline \ \ $p_{T} [\GeV]$ \ \ & $\frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta} [\GeV^{-2}]$ & $\sigma_{stat} [\GeV^{-2}]$ & $\sigma_{syst} [\GeV^{-2}]$\\ \hline\hline 0.30-- 0.40 & 3.87 & 0.10 & 0.34 \\ 0.40-- 0.50 & 2.32 & 0.06 & 0.20 \\ 0.50-- 0.60 & 1.46 & 0.04 & 0.13 \\ 0.60-- 0.70 & 0.803 & 0.033 & 0.072 \\ 0.70-- 0.80 & 0.485 & 0.023 & 0.043 \\ 0.80-- 0.90 & 0.288 & 0.017 & 0.025 \\ 0.90-- 1.00 & 0.176 & 0.012 & 0.015 \\ 1.00-- 1.10 & 0.109 & 0.009 & 0.009 \\ 1.10-- 1.20 & 0.0732 & 0.0075 & 0.0065 \\ 1.20-- 1.35 & 0.0294 & 0.0035 & 0.0026 \\ 1.35-- 1.50 & 0.0186 & 0.0028 & 0.0016 \\ 1.50-- 1.75 & 0.0086 & 0.0014 & 0.0008 \\ 1.75-- 2.00 & 0.00260 & 0.00066 & 0.00023 \\ 2.00-- 2.50 & 0.00076 & 0.00023 & 0.00007 \\ \hline \end{tabular} }} \vspace{1cm} \bf\caption{\it The rate of charged particle production in an average event with a diffractively dissociated photon state of a mass $\langle M_{X} \rangle = 10\GeV$. The data correspond to $-1.2<\eta<1.4$. The $\sigma_{stat}$ and $\sigma_{syst}$ denote the statistical and systematic errors.} \label{t:results3} \end{table} The soft interactions of hadrons can be successfully described by thermodynamic models, which predict a steeply falling transverse momentum spectrum that can be approximated by the exponential form \cite{hagedorn}: \begin{equation} \frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta}= \exp\left(a - b \cdot \sqrt{p_{T}^{2} + m_{\pi}^{2}}\right) \label{e:exp} \end{equation} where $m_{\pi}$ is the pion mass. The results of the fits of this function to ZEUS data in the interval $0.3<p_{T}<1.2\GeV$ are also shown as the full line in Fig.~\ref{f:corrected_pt}. The resulting values of the exponential slope $b$ are listed in Table \ref{t:slopes}. 
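Since the exponential form is linear in the logarithm of the rate as a function of the transverse mass $m_T=\sqrt{p_T^2+m_\pi^2}$, the slope $b$ can be extracted with a simple least-squares fit. A sketch with noiseless pseudo-data generated from the published central values (this is an illustration, not the actual fit code):

```python
import numpy as np

M_PI = 0.1396  # charged pion mass [GeV]

def exp_form(pt, a, b):
    # rate = exp(a - b * sqrt(pT^2 + m_pi^2))
    return np.exp(a - b * np.sqrt(pt**2 + M_PI**2))

# Pseudo-data from the fitted central values a = 3.39, b = 4.94
pt = np.linspace(0.3, 1.2, 10)
y = exp_form(pt, 3.39, 4.94)

# Linear fit of ln(rate) versus transverse mass m_T recovers (a, b)
mt = np.sqrt(pt**2 + M_PI**2)
slope, intercept = np.polyfit(mt, np.log(y), 1)
a_fit, b_fit = intercept, -slope
```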
The systematic errors were estimated by varying the relative inclusive cross sections within the systematic error limits (see section \ref{s:systematics}) and by varying the upper boundary of the fitted interval from $p_{T}=1.0\GeV$ to $1.4\GeV$. In Fig.~\ref{f:pt_slopes} we present a comparison of the $b$ parameter resulting from the fits of (\ref{e:exp}) to proton-proton and proton-antiproton data as a function of the c.m. energy. The slope of the ZEUS non-diffractive spectrum agrees with the data from hadron--hadron scattering at an energy close to the ZEUS photon--proton c.m. energy. The diffractive slopes agree better with the hadronic data corresponding to a lower energy. In Fig.~\ref{f:pt_slopes} the ZEUS diffractive points are plotted at $5\GeV$ and $10\GeV$, the values of the invariant mass of the dissociated photon. A similar behaviour has been observed for the diffractive dissociation of protons, i.e. the scale of the fragmentation of the excited system is related to the invariant mass rather than to the total c.m. energy \cite{p-diff}. The dashed line in Fig.~\ref{f:pt_slopes} is a parabola in $\log(s)$ and was fitted to all the hadron--hadron points to indicate the trend of the data. As one can see, our photoproduction results are consistent with the hadronic data. \begin{table}[h] \centerline{\hbox{ \begin{tabular}{|c||c|c|c|c|c|c|} \hline \ \ sample \ \ & $b [\GeV^{-1}]$ & $\sigma_{stat}(b)$ & $\sigma_{syst}(b)$ & $a$ & $\sigma_{stat}(a)$ & $cov(a,b)$\\ \hline\hline non-diffractive & 4.94 & 0.09 & 0.19 & 3.39 & 0.09 & -0.011\\ diff $\langle M_{X} \rangle = 5\GeV$ & 5.91 & 0.17 & 0.19 & 2.78 & 0.10 & -0.016\\ diff $\langle M_{X} \rangle = 10\GeV$ & 5.28 & 0.10 & 0.17 & 3.34 & 0.06 & -0.006\\ \hline \end{tabular} }} \vspace{1cm} \bf\caption{\it The values of the parameters resulting from the fits of equation (\protect\ref{e:exp}) to ZEUS data in the interval $0.3<p_{T}<1.2\GeV$. 
The $\sigma_{stat}$ and $\sigma_{syst}$ indicate the statistical and systematic errors.} \label{t:slopes} \end{table} The non--diffractive spectrum in Fig.~\ref{f:corrected_pt} clearly departs from the exponential shape at high $p_{T}$ values. Such a behaviour is expected from the contribution of the hard scattering of partonic constituents of the colliding particles, a process that can be described in the framework of perturbative QCD. It results in a high $p_{T}$ behaviour of the inclusive spectrum that can be approximated by a power law formula: \begin{equation} \frac{1}{N_{ev}} \cdot \frac{d^{2}N}{dp^{2}_{T} d\eta}= A \cdot (1 + \frac{p_{T}}{p_{T\:0}})^{-n} \label{e:power} \end{equation} where $A$, $p_{T\:0}$ and $n$ are parameters determined from the data. The fit to the ZEUS points in the region of $p_{T}>1.2\GeV$ gives a good description of the data and results in the parameter values $p_{T\:0}=0.54$ GeV, $n=7.25$ and $A=394$ GeV$^{-2}$. The statistical precision of these numbers is described by the covariance matrix shown in Table \ref{t:cov}. The fitted function is shown in Fig.~\ref{f:corrected_pt} as the dotted line. \begin{table}[h] \centerline{\hbox{ \begin{tabular}{|c||c|c|c|} \hline & $p_{T\:0}$ & $n$ & $A$ \\ \hline\hline $p_{T\:0}$ & $0.32\cdot 10^{-3}$ & $0.48\cdot 10^{-3}$ & $-0.10\cdot 10^{1}$\\ $n$ & & $0.12\cdot 10^{-2}$ & $-0.12\cdot 10^{1}$\\ $A$ & & & $ 0.32\cdot 10^{4}$\\ \hline \end{tabular}}} \vspace{1cm} \bf\caption{\it The covariance matrix corresponding to the fit of equation (\protect\ref{e:power}) to the non--diffractive data for $p_{T}>1.2\GeV$.} \label{t:cov} \end{table} In Fig.~\ref{f:pt_comparison} the ZEUS data are presented together with the results of a similar measurement from the H1 collaboration at $\langle W \rangle = 200$ GeV \cite{H1} and the data from the WA69 photoproduction experiment at a c.m. energy of $\langle W \rangle = 18\GeV$ \cite{omega}. 
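For illustration, the power-law form of equation (2), evaluated with the quoted central values of the fit parameters, can be sketched as:

```python
# Power-law form: rate = A * (1 + pT/pT0)^(-n),
# with the central values quoted in the text (A in GeV^-2, pT0 in GeV).
def power_law_rate(pt, A=394.0, pt0=0.54, n=7.25):
    return A * (1.0 + pt / pt0) ** (-n)

# The fitted rate falls by several orders of magnitude
# between pT = 1.2 GeV and pT = 8 GeV
low, high = power_law_rate(1.2), power_law_rate(8.0)
```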
For the purpose of this comparison, the inclusive cross sections published by those experiments were divided by the corresponding total photoproduction cross sections \cite{H1-sigmatot,ALLM}. Our results are in agreement with the H1 data. The comparison with the WA69 data shows that the transverse momentum spectrum becomes harder as the energy of the $\gamma p$ collision increases. Figure~\ref{f:pt_comparison} also shows the functional fits of the form (\ref{e:power}) to {\it p\={p}} data from UA1 and CDF at various c.m. energies \cite{UA1,CDF}. Since the fits correspond to inclusive cross sections published by these experiments, they have been divided by the cross section values used by these experiments for the absolute normalization of their data. The inclusive $p_{T}$ distribution from our photoproduction data is clearly harder than the distribution for {\it p\={p}} interactions at a similar c.m. energy and in fact is similar to {\it p\={p}} at $\sqrt{s}=900\GeV$. This comparison indicates that in spite of the apparent similarity in the low $p_{T}$ region between photoproduction and proton--antiproton collisions at a similar c.m. energy, the two reactions are different in the hard regime. There are many possible reasons for this behaviour. Firstly, both of the {\it p\={p}} experiments used for the comparison measured the central rapidity region ($|\eta|<2.5$ for UA1 and $|\eta|<1$ for CDF), while our data correspond to $0.8<\eta_{c.m.}<3.4$. Secondly, according to VDM, the bulk of the $\gamma p$ collisions can be approximated as an interaction of a vector meson $V$ with the proton. The $p_{T}$ spectrum of $Vp$ collisions may be harder than {\it p\={p}} at a similar c.m. energy, since the parton momenta of quarks in mesons are on average larger than in baryons. 
Thirdly, in the picture where the photon consists of a resolved part and a direct part, both the anomalous component of the resolved photon and the direct photon become significant at high $p_{T}$ and make the observed spectrum harder compared to that of $Vp$ reactions. Figure \ref{f:pt_kniehl} shows the comparison of our non--diffractive data with the theoretical prediction obtained recently from NLO QCD calculations \cite{krammer}. The charged particle production rates in a non--diffractive event were converted to inclusive non-diffractive cross sections by multiplying by the non--diffractive photoproduction cross section of $\sigma_{nd}(\gamma p \rightarrow X)=91\pm 11{\rm \:\mu b}$ \cite{zeus-sigmatot}. The theoretical calculations relied on the GRV parametrisation of the parton densities in the photon and on the CTEQ2M parametrisation for partons in the proton\cite{CTEQ}. The NLO fragmentation functions describing the relation between the hadronic final state and the partonic level were derived from the $e^{+}e^{-}$ data \cite{krammer-fragm}. The calculation depends strongly on the parton densities in the proton and in the photon, yielding a spread in the predictions of up to $30\%$ due to the former and $20\%$ due to the latter. The factorization scales of the incoming and outgoing parton lines, as well as the renormalization scale, were set to $p_{T}$. The uncertainty due to the ambiguity of this choice was estimated by changing all three scales up and down by a factor of 2. The estimates of the theoretical errors were added in quadrature and indicated in Fig.~\ref{f:pt_kniehl} as a shaded band. The theoretical calculation is in good agreement with the ZEUS data. \section{Conclusions} We have measured the inclusive transverse momentum spectra of charged particles in diffractive and non--diffractive photoproduction events with the ZEUS detector. 
The inclusive transverse momentum spectra fall exponentially in the low $p_{T}$ region, with a slope that increases slightly going from the non--diffractive to the diffractive collisions with the lowest $M_{X}$. The diffractive slopes are consistent with hadronic data at a c.m. energy equal to the invariant mass of the diffractive system. The non--diffractive low $p_{T}$ slope is consistent with the result from {\it p\={p}} at a similar c.m. energy, but displays a high $p_{T}$ tail clearly departing from the exponential shape. Compared to photoproduction data at a lower c.m. energy, we observe a hardening of the transverse momentum spectrum as the collision energy increases. The shape of our $p_{T}$ distribution is comparable to that of {\it p\={p}} interactions at $\sqrt{s}=900\GeV$. The results from an NLO QCD calculation agree with the measured cross sections for inclusive charged particle production. \section{Acknowledgments} We thank the DESY Directorate for their strong support and encouragement. The remarkable achievements of the HERA machine group were essential for the successful completion of this work and are gratefully appreciated. We gratefully acknowledge the support of the DESY computing and network services. We would like to thank B.A. Kniehl and G. Kramer for useful discussions and for providing the NLO QCD calculation results. \pagebreak
\section{Introduction} The first report of a detection of \igr\ in the hard X-ray band is found in the {\it INTEGRAL}/IBIS all-sky survey catalogues, which are based on data taken before the end of 2006 \citep{krivonos07,bird07}. At that time, its nature was unknown, other than that it was a transient source, detected during a series of observations between December 2002 and February 2004 but below the threshold of the {\it INTEGRAL} detectors between 2004 and 2007 \citep{bikmaev08}. A {\it Chandra} observation was performed on 18 December 2006, that is, during the off state of the {\it INTEGRAL} instruments. However, a weak source consistent with the position of \igr\ was detected. The {\it Chandra} observation allowed the refinement of its X-ray position and the suggestion of an optical counterpart \citep{sazonov08}. Low-resolution ($FWHM \sim 15$ \AA) optical spectroscopic observations of the likely counterpart indicated a B3 star. Although the H$\alpha$ line was found in absorption, \igr\ was proposed to be a high-mass X-ray binary with a Be star companion. It was argued that the star was going through a disc-loss episode at the time of the observations \citep{bikmaev08}. Be/X-ray binaries are a class of high-mass X-ray binaries that consist of a Be star and a neutron star \citep{reig11}. The mass donor in these systems is a relatively massive ($\simmore 10 ~{\rm M}_\odot$) and fast-rotating ($\simmore$80\% of break-up velocity) star, whose equator is surrounded by a disc formed from photospheric plasma ejected by the star. \ha\ in emission is typically the dominant feature in the spectra of such stars. In fact, the strength of the Balmer lines in general and of \ha\ in particular (whether it has ever been in emission), together with a luminosity class III-V, constitute the defining properties of this class of objects. The equatorial discs are believed to be quasi-Keplerian and supported by viscosity \citep{okazaki01}. 
The shape and strength of the spectral emission lines are useful indicators of the state of the disc. Global disc variations include the transition from a Be phase, i.e., when the disc is present, to a normal B star phase, i.e., when the disc is absent and also cyclic V/R changes, i.e., variation in the ratio of the blue to red peaks of a split profile that are attributed to the precession of a density perturbation inside the disc \citep{okazaki91,papaloizou06}. In this work we present the first long-term study of the optical counterpart to the X-ray source \igr\ and report a disc-loss episode. The absence of the disc allows us to derive some of the fundamental physical parameters such as reddening, distance, and rotation velocity, without contamination from the disc. We also confirm that \igr\ is a Be/X-ray binary, although with an earlier spectral type than the one suggested by \citet{bikmaev08}. \begin{table} \caption{Log of the spectroscopic observations in the blue region.} \label{blue} \centering \begin{tabular}{@{~~}l@{~~}c@{~~}l@{~~}l@{~~}c} \noalign{\smallskip} \hline \noalign{\smallskip} Date &JD &Telescope &Wavelength &num. of \\ &(2,400,000+) & &coverage (\AA) &spectra \\ \noalign{\smallskip}\hline\noalign{\smallskip} 13-09-2012 &56184.34 &SKO &3810--5165 &3 \\ 14-09-2012 &56185.31 &SKO &3800--5170 &5 \\ 15-10-2012 &56216.74 &FLWO &3890--4900 &5 \\ 26-12-2012 &56288.36 &WHT &3870--4670 &3 \\ 15-07-2013 &56488.87 &FLWO &3900--4900 &3 \\ 23-08-2013 &56528.58 &WHT &3840--4670 &3 \\ \noalign{\smallskip} \hline \end{tabular} \end{table} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{./fig1.eps}} \caption[]{Evolution of the \ha\ and He I 6678 \AA\ lines. Absorption by the disc along the line of sight produces very narrow lines (shell profiles). 
} \label{haprof} \end{figure*} \section{Observations} Optical spectroscopic and photometric observations of the optical counterpart to the INTEGRAL source \igr\ were obtained with the 1.3-m telescope of the Skinakas observatory (SKO) in Crete (Greece) and at the Fred Lawrence Whipple Observatory (FLWO) at Mt. Hopkins (Arizona). In addition, \igr\ was observed in service time with the 4.2-m William Herschel Telescope (WHT) of El Roque de los Muchachos observatory in La Palma (Spain). The 1.3-m telescope of the Skinakas Observatory was equipped with a 2000$\times$800 ISA SITe CCD and a 1302 l~mm$^{-1}$ grating, giving a nominal dispersion of $\sim$1.04 \AA/pixel. On the nights of 29 September 2009 and 6 September 2011, a 2400 l~mm$^{-1}$ grating with a dispersion of $\sim$0.46 \AA/pixel was used. We also observed \igr\ in queue mode with the 1.5-m telescope at Mt. Hopkins (Arizona), using the FAST-II spectrograph \citep{fabricant98} plus the FAST3 CCD, a backside-illuminated 2688$\times$512 UA STA520A chip with 15$\mu$m pixels, and a 1200 l~mm$^{-1}$ grating (0.38 \AA/pixel). The WHT spectra were obtained in service mode on the nights of 26 December 2012 and 23 August 2013 with the ISIS spectrograph and the R1200B grating plus the EEV12 4096$\times$2048 13.5-$\mu$m pixel CCD (0.22 \AA/pixel) for the blue arm, and the R1200R grating and REDPLUS 4096$\times$2048 15-$\mu$m pixel CCD (0.25 \AA/pixel) for the red arm. The spectra were reduced with the dedicated spectroscopy packages of the {\tt STARLINK} and {\tt IRAF} projects following the standard procedure. In particular, the FAST spectra were reduced with the FAST pipeline \citep{tokarz97}. The images were bias subtracted and flat-field corrected. Spectra of comparison lamps were taken before each exposure in order to account for small variations of the wavelength calibration during the night. Finally, the spectra were extracted from an aperture encompassing more than 90\% of the flux of the object. 
Sky subtraction was performed by measuring the sky spectrum from an adjacent object-free region. To ensure the homogeneous processing of the spectra, they were normalized with respect to the local continuum, which was rectified to unity by employing a spline fit. The photometric observations were made from the 1.3-m telescope of the Skinakas Observatory. \igr\ was observed through the Johnson/Bessel $B$, $V$, $R$, and $I$ filters \citep{bessel90}. For the photometric observations the telescope was equipped with a 2048$\times$2048 ANDOR CCD with a 13.5 $\mu$m pixel size (corresponding to 0.28 arcsec on the sky) and thus provides a field of view of 9.5 arcmin $\times$ 9.5 arcmin. The gain and read out noise of the CCD camera at a read-out velocity of 2 $\mu$s/pixel are 2.7 $e^{-}$/ADU and 8 $e^{-}$, respectively. The FWHM (seeing estimate) of the point sources in the images varied from 4 to 6 pixels (1.1''--1.7'') during the different campaigns. Reduction of the data was carried out in the standard way using the IRAF tools for aperture photometry. First, all images were bias-frame subtracted and flat-field corrected using twilight sky flats to correct for pixel-to-pixel variations on the chip. The resulting images are therefore free from the instrumental effects. All the light inside an aperture with radius 4.5'' was summed up to produce the instrumental magnitudes. The sky background was determined as the statistical mode of the counts inside an annulus 5 pixels wide and 20 pixels from the center of the object. The absorption caused by the Earth's atmosphere was taken into account by nightly extinction corrections determined from measurements of selected stars that also served as standards. Finally, the photometry was accurately corrected for colour equations and transformed to the standard system using nightly observations of standard stars from Landolt's catalogue \citep{landolt92,landolt09}. 
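A minimal sketch of the aperture-photometry step described above (sum of counts inside a circular aperture, sky level from a surrounding annulus); here a synthetic image is used, and the median stands in for the statistical mode of the annulus counts:

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    # Sum counts inside a circular aperture and subtract the sky level
    # estimated in an annulus (median used as a simple mode estimator).
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = image[r <= r_ap]
    sky = np.median(image[(r >= r_in) & (r <= r_out)])
    return aperture.sum() - sky * aperture.size

img = np.full((64, 64), 10.0)   # flat sky background (counts/pixel)
img[32, 32] += 1000.0           # synthetic point source at the centre
flux = aperture_photometry(img, 32, 32, 4.5, 15.0, 25.0)  # -> 1000.0
```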
The error of the photometry was calculated as the root-mean-square of the difference between the derived final calibrated magnitudes of the standard stars and the magnitudes of the catalogue. The photometric magnitudes are given in Table~\ref{phot}, while information about the spectroscopic observations can be found in Tables~\ref{red} and \ref{blue}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{./fig2.eps}} \caption[]{From top to bottom, evolution of the \ha\ equivalent width, V/R ratio, $V$ magnitude, $(B-V)$ colour, and velocity shift with time.} \label{specpar} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=16cm,height=10cm]{./fig3.eps} \caption[]{WHT spectrum of \igr\ and identified lines used for spectral classification. The spectrum was smoothed with a Gaussian filter (FWHM=1).} \label{wht} \end{center} \end{figure*} \section{Results} \subsection{The \ha\ line: evolution of spectral parameters} \label{haevol} The \ha\ line is the prime indicator of the circumstellar disc state. In particular, its equivalent width (\ew) provides a good measure of the size of the circumstellar disc \citep{quirrenbach97,tycner05,grundstrom06}. \ha\ emission results from recombination of photoionised electrons by the optical and UV radiation from the central star. Thus, in the absence of the disc, no emission should be observed and the line should display an absorption profile. The \ha\ line of the massive companion in \igr\ is highly variable, both in strength and shape. When the line appears in emission it always shows a double-peaked profile, but the relative intensity of the blue (V) over the red (R) peaks varies. The central absorption that separates the two peaks goes beyond the continuum, placing \igr\ in the group of the so-called {\em shell} stars \citep{hanuschik95,hanuschik96a,hummel00,rivinius06}. Figure \ref{haprof} displays the evolution of the line profiles. V/R variability is clearly seen, indicating a distorted disc \citep{hummel97}. 
Significant changes in the structure of the equatorial disc on timescales of months are observed. In addition, a long-term growth/dissipation of the disc is suggested by the increase and subsequent decrease of the equivalent width. Table~\ref{red} gives the log of the spectroscopic observations and some important parameters that resulted from fitting Gaussian functions to the \ha\ line profile. Due to the deep central absorption, three Gaussian components (two in emission and one in absorption) were generally needed to obtain good fits. Column 5 gives the equivalent width of the entire (all components) \ha\ line. The main source of uncertainty in the equivalent width stems from the always difficult definition of the continuum. The \ew\ values given in Table~\ref{red} correspond to the average of twelve measurements, each with a different definition of the continuum, and the quoted error is the scatter (standard deviation) of those twelve measurements. Column 6 shows the ratio between the core intensity of the blue and red humps. The V/R ratio is computed as the logarithm of the ratio of the relative fluxes at the blue and red emission peak maxima. Thus negative values indicate a red-dominated peak, that is, $V<R$, and positive values a blue-dominated line, $V>R$. Columns 7 and 8 in Table~\ref{red} give the ratio of the peak flux of each component over the minimum flux of the deep absorption core. This ratio simply serves to confirm the shell nature of \igr\ in a more quantitative way. \citet{hanuschik96a} established an empirical {\em shell criterion} based on the H$\alpha$ line, whereby shell stars are those with $F_{\rm p}/F_{\rm cd}\simmore 1.5$, where $F_{\rm p}$ and $F_{\rm cd}$ are the mean peak and trough flux, respectively. Column 9 is the velocity shift of the central narrow absorption shell feature when the disc is present, or of the absorption profile of the \ha\ line in the absence of the disc. 
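The V/R measurement described for Column 6 — the logarithm of the ratio of the relative fluxes at the blue and red peak maxima — can be sketched on a synthetic double-peaked shell profile (all wavelengths and amplitudes below are invented):

```python
import numpy as np

def v_over_r(wave, flux, centre):
    # log10 of blue-peak over red-peak flux; positive => V > R
    v_peak = flux[wave < centre].max()
    r_peak = flux[wave >= centre].max()
    return np.log10(v_peak / r_peak)

# Synthetic shell profile: two emission peaks around a deep core near H-alpha
wave = np.linspace(6552.0, 6574.0, 441)
flux = (1.0
        + 1.2 * np.exp(-0.5 * ((wave - 6560.0) / 1.0) ** 2)   # blue (V) peak
        + 0.8 * np.exp(-0.5 * ((wave - 6566.0) / 1.0) ** 2)   # red (R) peak
        - 0.9 * np.exp(-0.5 * ((wave - 6562.8) / 0.5) ** 2))  # shell core
vr = v_over_r(wave, flux, 6562.8)  # > 0: blue-dominated profile
```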
Prior to the measurement of the velocity shift, all the spectra were aligned taking the value of the interstellar line at 6612.8 \AA\ as reference. Note that these shifts do not necessarily represent the radial velocity of the binary, as the \ha\ line is strongly affected by circumstellar matter \citep[see e.g.,][for a discussion of the various effects when circumstellar matter is present in the system]{harmanec03}. Typically, the He I lines in the blue end part of the spectrum are used for radial velocity studies. Note, however, that in \igr, even these lines are affected by disc emission (see Sect.~\ref{diskc}). Nevertheless, we measured the radial velocity of the binary by cross-correlating the higher resolution blue-end spectra obtained from the FLWO and WHT (Table~\ref{blue}) with a template using the {\em fxcor} task in the {\it IRAF} package. This template was generated from the BSTAR2006 grid of synthetic spectra \citep{lanz07} and corresponds to a model atmosphere with $T_{\rm eff}=25000$ K, $\log g=3.75$ convolved with a rotational profile with $v \sin i=380$ km s$^{-1}$. The results for the July and August 2013 observations, when the contribution of the disc is expected to be minimum, are $v_r=-127\pm30$ km s$^{-1}$ (HJD 2,456,488.863) and $v_r=-120\pm10$ km s$^{-1}$ (HJD 2,456,528.582), respectively. Figure \ref{specpar} shows the evolution of \ew, the V/R ratio, the V magnitude, the $(B-V)$ colour, and the velocity shift with time. In the top panel of this figure, different symbols represent the equivalent width of the different components of the line. Open circles give the sum of the equivalent widths of the individual V and R peaks, while the squares are the equivalent width of the deep central absorption. These values were obtained from the Gaussian fits. The overall equivalent width (filled circles) was measured directly from the spectra. 
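The cross-correlation step can be illustrated with a toy version on a uniform log-wavelength grid, where each pixel lag corresponds to a fixed velocity step. This is a crude stand-in for IRAF's {\em fxcor}; the line shape, grid and pixel shift are invented:

```python
import numpy as np

def rv_from_ccf(spec, template, dv):
    # Cross-correlate mean-subtracted spectra sampled on a common
    # log-lambda grid; dv is the velocity width of one pixel [km/s].
    ccf = np.correlate(spec - spec.mean(), template - template.mean(), "full")
    lag = ccf.argmax() - (len(template) - 1)
    return lag * dv

# Toy template: one Gaussian absorption line on a flat continuum
n, dv = 512, 10.0
pix = np.arange(n)
template = 1.0 - 0.5 * np.exp(-0.5 * ((pix - 256) / 8.0) ** 2)
spec = np.roll(template, -12)         # line shifted bluewards by 12 pixels
rv = rv_from_ccf(spec, template, dv)  # -> -120.0 km/s
```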
In the bottom panel of Fig.~\ref{specpar}, open symbols correspond to the velocity shifts measured from the \ha\ line, while filled symbols are radial velocities obtained by cross-correlating the 3950--4500 \AA\ spectra with the template. Black circles denote SKO spectra, blue triangles correspond to data taken from the FLWO, and red diamonds to spectra obtained with the WHT. \subsection{Spectral classification} \label{specl} Figure~\ref{wht} shows the average blue spectrum of \igr\ obtained with the 4.2-m WHT on the night of 23 August 2013. The main spectral features have been identified. The 3900--4600 \AA\ spectrum is dominated by hydrogen and neutral helium absorption lines, clearly indicating an early-type B star. The earliest classes (B0 and B0.5) can be ruled out because no ionised helium is present. However, \ion{Si}{III} 4552-68-75 is clearly detected, favouring a spectral type B1-B1.5. The relative weakness of \ion{Mg}{II} at 4481 \AA\ also indicates a spectral type earlier than B2. The strength of the \ion{C}{III}+\ion{O}{II} blends at 4070 \AA\ and 4650 \AA\ agrees with this range (B1--B2) and points toward an evolved star. A subgiant or giant star, i.e., luminosity class IV or III, is also favoured by the presence of \ion{O}{II} at 4415-17 \AA. However, in this case, the \ion{Si}{III} 4552-68-75 \AA\ triplet and \ion{Si}{IV} 4089 \AA\ should be stronger than observed. We conclude that the spectral type of the optical counterpart to \igr\ is in the range B1--B1.5 V--III, with a preferred classification of B1IV. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{./fig4.eps}} \caption[]{Comparison of two spectra of \igr\ at different epochs, when the disc was present (solid black line) and when the disc presumably had vanished (dashed red line). 
The spectra were taken from the FLWO on 15 October 2012 and 15 July 2013, respectively.} \label{widthcomp} \end{figure} \subsection{Contribution of the disc to the spectral lines and colours} \label{diskc} Figure~\ref{widthcomp} shows a comparison of two blue-end spectra of \igr\ taken from the FLWO at two different epochs. The October 2012 spectrum corresponds to a Be phase when the \ha\ line was strongly in emission, while the July 2013 spectrum corresponds to a B phase when this line displayed an absorption profile. One of the most striking results that can be directly derived from the visual comparison of the spectra is the significantly narrower width of the spectral lines, particularly those of the Balmer series and the He I lines, when the disc is present. To estimate the contribution of the disc to the width of the lines, we measured the FWHM of the \ion{He}{I} 6678 \AA\ line (Fig.~\ref{haprof}) as a function of time. The result can be seen in Fig.~\ref{hei}, where the evolution of \ew\ with time is also plotted. The width of the helium line was significantly narrower and the core deeper during the strong shell phase (spectra taken during 2012, MJD $\sim$56100--56200) than at instances when the disc was weak (in 2009 and 2013). The disc emission also affects the photometric magnitudes and colours. The observed $(B-V)$ colour when the disc was present (observation taken in 2011) is 0.05 mag larger than during 2013, when the disc disappeared. That is, the disc introduces an extra reddening component. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{./fig5.eps}} \caption[]{Evolution of the full width at half maximum of the He I $\lambda$6678 line. 
Note the progressive narrowing of the line as the shell phase develops and the large values when \ew\ is positive (interpreted as the absence of the disc).} \label{hei} \end{figure} \subsection{The He I lines: rotational velocity} \label{rotvel} Shell stars are Be stars whose strongly rotationally broadened photospheric lines display a deep absorption core \citep{rivinius06}. The rotational velocity is believed to be a crucial parameter in the formation of the circumstellar disc. A rotational velocity close to the break-up or critical velocity (i.e. the velocity at which centrifugal forces balance Newtonian gravity) reduces the effective equatorial gravity to the extent that weak processes such as gas pressure and/or non-radial pulsations may trigger the ejection of photospheric matter with sufficient energy and angular momentum for it to spin up into a Keplerian disc. Because stellar absorption lines in Be stars are rotationally broadened, their widths can be used to estimate the projected rotational velocity, $v \sin i$, where $v$ is the equatorial rotational velocity and $i$ the inclination angle toward the observer. However, to obtain a reliable measurement of the rotational velocity of the Be star companion, the He I lines have to be free of disc emission. As shown in the previous section, the width of the lines becomes narrower as the disc grows, leading to an underestimate of the rotational velocity. We estimated the projected rotational velocity of \igr\ by measuring the full width at half maximum (FWHM) of He I lines, following the calibration by \citet{steele99}. These authors used four neutral helium lines, namely 4026 \AA, 4143 \AA, 4387 \AA, and 4471 \AA, to derive rotational velocities. We measured the width of these lines from the August 2013 WHT spectrum, as it provides the highest resolution in our sample and corresponds to a disc-loss phase, as indicated by the absorption profile of the \ha\ line. 
We made five different selections of the continuum and fitted Gaussian profiles to these lines. We also corrected the lines for instrumental broadening by subtracting in quadrature the FWHM of a nearby line from the calibrated spectra. The projected rotational velocity, obtained as the average of the values from the four He I lines, was $v \sin i=365\pm15$ km s$^{-1}$. The quoted errors are the standard deviation of all the measurements. The rotational velocity can also be estimated by comparing the high-resolution August 2013 WHT spectrum with a grid of synthetic spectra broadened with various values of the rotational velocity. We employed the BSTAR2006 grid \citep{lanz07}, which uses the code TLUSTY \citep{hubeny88,hubeny92,hubeny94} to create the model atmosphere and SYNSPEC\footnote{http://nova.astro.umd.edu} to calculate the emergent spectrum. We assumed a model atmosphere with solar composition, $T_{\rm eff}=25000$ K, $\log g=3.50$, and a microturbulent velocity of 2 km s$^{-1}$. This spectrum was convolved with rotational and instrumental (Gaussian) profiles using ROTIN3. Thirteen rotational velocities from 300 km s$^{-1}$ to 420 km s$^{-1}$ in steps of 10 km s$^{-1}$ were considered. The rotational velocity that minimises the sum of the squares of the differences between data and model was $v \sin i=380$ km s$^{-1}$, consistent with the previous value. \subsection{Reddening and distance} To estimate the distance, the amount of reddening to the source has to be determined. In a Be star, the total measured reddening is made up of two components: one produced mainly by dust in the interstellar space through the line of sight and another produced by the circumstellar gas around the Be star \citep{dachs88,fabregat90}. Although the physical origin and wavelength dependence of these two reddenings are completely different, their final effect upon the colours is very difficult to disentangle \citep{torrejon07}. 
In fact, interstellar reddening is caused by {\em absorption} and {\em scattering} processes, while circumstellar reddening is due to extra {\em emission} at longer wavelengths. The disc-loss episode observed in \igr\ allows us to derive the true magnitudes and colours of the underlying Be star, without the contribution of the disc. Thus the total reddening measured during a disc-loss episode corresponds entirely to interstellar extinction. The observed colour of \igr\ in the absence of the disc is $(B-V)=0.50\pm0.02$ (Table~\ref{phot}), while the expected one for a B1--B1.5 V--IV star is $(B-V)_0=-0.26$ \citep{johnson66,fitzgerald70,gutierrez-moreno79,wegner94}. Thus we derive a colour excess of $E(B-V)=0.76\pm0.02$, or a visual extinction $A_{\rm V}=R \times E(B-V)= 2.4\pm0.1$, where the standard extinction law $R=3.1$ was assumed. Taking an average absolute magnitude of $M_V=-3.0$, typical of a star of this spectral type \citep{humphreys84,wegner06}, the distance to \igr\ is estimated to be 8.7$\pm$1.3 kpc. The final error was obtained by propagating the errors of $B-V$ (0.02 mag), $A_V$ (0.1 mag), and $M_V$ (0.3 mag). \section{Discussion} \label{discussion} We have investigated the long-term variability of the optical counterpart to the X-ray source \igr. \citet{bikmaev08} suggested that \igr\ is a high-mass X-ray binary with a B3e companion, even though their spectroscopic observations showed \ha\ in absorption. They argued that the system could be in a disc-loss phase. We confirm the Be nature of \igr, but suggest an earlier type companion, in agreement with the spectral type distribution of Be/X-ray binaries in the Milky Way: no spectroscopically identified optical companion of a Be/X-ray binary in the Galaxy has a spectral type later than B2 \citep{negueruela98b}. \begin{table*} \caption{Comparison of the characteristic time scales of \igr\ with other Be/X-ray binaries. 
$P_{\rm orb}$ is the orbital period, $T_{\rm V/R}$ is the time needed to complete a V/R cycle, and $T_{\rm disc}$ is the time for the formation and dissipation of the disc. } \label{comp} \centering \begin{tabular}{@{~~}l@{~~}l@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}l} \noalign{\smallskip} \hline \noalign{\smallskip} X-ray &Spectral &Disc-loss &P$_{\rm orb}$ &$T_{\rm V/R}$ &$T_{\rm disc}$ &Reference \\ source &type &episode\tablefootmark{$^\dag$}&(days) &(year) &(year) & \\ \noalign{\smallskip} \hline \noalign{\smallskip} \igr &B1V-IV &yes &-- &0.5--0.8 &5--6 &This work \\ 4U 0115+634 &B0.2V &yes &24.3 &0.5--1 &3--5 &1,2 \\ RX J0146.9+6121 &B1III-V &no &-- &3.4 &-- &3 \\ V 0332+53 &O8-9V &no &34.2 &1 &-- &4 \\ X-Per &O9.5III &yes &250 &0.6--2 &7 &5,6,7 \\ RX J0440.9+4431 &B1III-V &yes &-- &1.5--2 &$>10$ &8 \\ 1A 0535+262 &O9.7III &yes &111 &1--1.5 &4--5 &9,10,11 \\ IGR J06074+2205 &B0.5IV &yes &-- &-- &4--5 &12 \\ RX J0812.4-3114 &B0.5III-V &yes &81.3 &-- &3--4 &13 \\ 4U 1145-619 &B0.2III &no &187 &3 &-- &14 \\ 4U 1258-61 &B2V &yes &132 &0.36 &-- &15 \\ SAX J2103.5+4545&B0V &yes &12.7 &-- &1.5--2 &16 \\ \noalign{\smallskip} \hline \end{tabular} \tablefoot{ \tablefoottext{$^\dag$}{By disc-loss episodes we mean periods when the \ew\ was seen to be positive.} } \tablebib{ (1) \citet{negueruela01}; (2) \citet{reig07b}; (3) \citet{reig00}; (4) \citet{negueruela98a}; (5) \citet{lyubimkov97}; (6) \citet{delgado01}; (7) \citet{clark01}; (8) \citet{reig05b}; (9) \citet{clark98}; (10) \citet{haigh04}; (11) \citet{grundstrom07}; (12) \citet{reig10b}; (13) \citet{reig01}; (14) \citet{stevens97}; (15) \citet{corbetgx86}; (16) \citet{reig10a} } \end{table*} \subsection{Spectral evolution and variability time scales} Our monitoring of \igr\ reveals large amplitude changes in the shape and strength of the spectral lines and two different time scales associated with the variability of the disc: disc formation/dissipation is estimated to occur on time scales of years, while V/R 
variability is seen on time scales of months. Our first observation was performed in July 2009 and shows the contribution of a weak disc. Although the \ew\ is positive, indicating that absorption dominates over emission, its value is smaller than that expected from a pure photospheric line, which according to \citet{jaschek87} should be $\sim$3.5--4 \AA. Also, the two peaks, V and R, separated by the central depression, can already be distinguished in our first spectrum. The strength of the \ha\ line increased and its shape changed from an absorption-dominated profile into an emission-dominated one during the period July 2009--September 2012. As the intensity increased, the line became progressively more asymmetric. After September 2012, \ew\ began to decrease at a faster rate than it had risen. By summer 2013, the \ha\ line profile had turned into absorption, that is, the system entered a disc-loss episode. Owing to the observational gaps, it is difficult to determine the overall time scale for the formation and dissipation of the circumstellar disc. The observations of \igr\ by \citet{bikmaev08} were made in spring 2007 (low resolution) and autumn 2007 (high resolution) and show the \ha\ line in absorption, although in the high-resolution spectrum the shape of the line is reminiscent of a shell profile, i.e., with the contribution of a small disc. If the spring 2007 spectrum really showed an absorption profile similar to our latest observations, then we can estimate the formation/dissipation cycle to be about six years. In addition to large amplitude changes in the strength of the \ha\ line, \igr\ also displays marked variations in the shape of the spectral lines. The most prominent spectroscopic evidence of disc activity is the long-term V/R variability, that is, the cyclic variation of the relative intensity of the blue ($V$) and red ($R$) peaks in the split profile of the line. 
The V/R variability is believed to be caused by the gradual change of the amount of emitting gas approaching the observer relative to that receding from the observer, due to the precession of a density perturbation in the disc \citep{kato83,okazaki91,okazaki97,papaloizou06}. Double-peak symmetric profiles are expected when the high-density part is behind or in front of the star, while asymmetric profiles are seen when the high-density perturbation is on one side of the disc \citep{telting94}. In principle, it is possible from the data themselves to find out whether the perturbation travels in the same direction as the Keplerian orbits of the material in the disc (prograde precession) or in the opposite direction (retrograde precession). If the motion of the density perturbation is prograde and the disc is viewed at a high inclination angle (as in \igr), then the $V>R$ phase should be followed by a $V=R$ phase with a strong shell profile, corresponding to the case where the perturbation lies between the star and the observer \citep[see][for a sketch of prograde motion]{telting94}. In \igr, as the disc grew, the \ha\ line changed from a symmetric to an asymmetric profile. When the disc was weak, that is, when the equivalent width of the \ha\ line was a few Angstrom (before 2012), the intensity of the blue and red peaks was roughly equal, $V=R$. The data reveal that the perturbation developed between the end of 2011 and the beginning of 2012. From August 2012, a blue-dominated profile is clearly present ($V \gg R$). However, the V/R ratio showed a fast decrease through the 2012 observations (Fig.~\ref{specpar}), indicating that the $V>R$ phase was coming to an end. This was confirmed by the January 2013 observations, where an extreme red-dominated profile is seen, $V/R\approx -0.9$. It is worth noting the extremely fast V/R time scales. 
Although it is not possible to pin down the exact moment of the onset of the V/R cycle due to an observational gap of about a year (September 2011--August 2012), the changes occurred very rapidly once the cycle started. During the $V>R$ phase, the V/R ratio changed from $\sim+0.4$ to $\sim+0.1$ in less than one month, September--October 2012 (see Fig.~\ref{haprof} and Table~\ref{red}). Likewise, the change from a blue-dominated $V>R$ to a red-dominated $V<R$ profile occurred in just two months (October--December 2012). If the motion is prograde, a $V=R$ phase with a strong shell profile should be observed in between the blue-dominated and red-dominated phases. We seem to have missed most of this phase, which would have occurred between October and December 2012. That is, in just two months the density perturbation must have gone through the shell $V=R$ phase and most of the $V<R$ phase. Extrapolating this behaviour, we estimate the duration of a whole revolution to be 6--9 months. These changes are among the fastest in a BeXB. The very short time scale of the observed V/R variations raises the question of whether these spectral changes are modulated by the orbital period. Phase-locked V/R variations have been observed for various Be binaries with hot companions \citep[][and references therein]{harmanec01}, but possibly in only one BeXB (4U\,1258--61). \citet{corbetgx86} found that the probability that the V/R variability observed in 4U\,1258--61 was modulated by the X-ray flare period of $\sim 132$ d, which was proposed to be the orbital period of the system, was $\sim$87\%. Table~\ref{comp} gives a few well-studied characteristic time scales of BeXBs: the orbital period, $P_{\rm orb}$, the V/R quasi-periods, $T_{\rm V/R}$, and the approximate duration of the formation/dissipation of the disc, $T_{\rm disc}$. In common with other BeXBs \citep{reig05b,reig10b}, asymmetric profiles are not seen until the disc reaches a certain size and density. 
During the initial stages of disc growth, the shape of the \ha\ line is always symmetric. Only when \ew\ $\simless -6$ \AA\ does the \ha\ line display an asymmetric profile. \igr\ also agrees with this result. As can be seen in Fig.~\ref{haprof}, all the spectra during the period 2009--2011 show a symmetric profile and a small \ew. The star took all this time to build the disc. Once a critical size and density were reached (some time at the beginning of 2012), the density perturbation developed and started to travel around in the disc. This result is also apparent from a comparison of the two panels in Fig.~\ref{specpar}. Before MJD 55800, $V\approx R$ and \ew\ was low. After MJD 56000, a strong asymmetric emission profile is seen. Although the maximum \ew\ measured is relatively small, \ew $\approx -8$ \AA, compared to other BeXBs, it is consistent with a well-developed disc. In systems viewed edge-on, the maximum \ew\ is much smaller than in the face-on case, because the projected area of the optically thick disc on the sky is much smaller \citep{hummel94,sigut13}. On the other hand, although a fully developed disc must have formed, it may not extend too far away from the star. The fast V/R changes favour a relatively compact disc, in which the density perturbation is capable of completing a revolution in a few months. \subsection{Shell lines and disc contribution} The Be star companion in \igr\ is a shell star, as implied by the deep central absorption between the two peaks of the \ha\ line. This central depression clearly goes beyond the continuum. Shell profiles are thought to arise when the observer's line of sight toward the central star intersects parts of the disc, which is cooler than the stellar photosphere \citep{hanuschik95,rivinius06}. 
Statistical studies of the distribution of rotational velocities of Be stars are consistent with the idea that Be shell stars are simply normal Be stars seen nearly edge-on, that is, at a large inclination angle \citep{porter96}. Our observations agree with this idea. Figure~\ref{widthcomp} clearly shows that the spectral lines are significantly narrower and deeper when the disc is present. The narrower lines would result from the fact that the disc conceals the equator of the star, where the contribution to the rotational velocity is largest. The deeper lines would result from absorption of the photospheric emission by the disc. Both circumstances require a high inclination angle. Further evidence for a high inclination angle is provided by the photometric observations. Both positive and negative correlations between the emission-line strength and light variations have been observed and attributed to geometrical effects \citep{harmanec83,harmanec00}. Stars viewed at a very high inclination angle show the inverse correlation because the inner parts of the Be envelope partly block the stellar photosphere, while the small projected area of the disc on the sky keeps the disc emission to a minimum. In stars seen at lower inclination angles, $i\simless i_{\rm crit}$, the effect of the disc is to increase the effective radius of the star, that is, as the disc grows an overall (star plus disc) increase in brightness is expected. The value of the critical inclination angle is not known, but a rough estimate based on available data suggests $i_{\rm crit}\sim 75^{\circ}$ \citep{sigut13}. \igr\ exhibits the inverse correlation, that is, it becomes fainter at the beginning of a new emission episode ($\sim$ JD 2,455,800, see Fig.~\ref{specpar}). 
We have assessed the contribution of the disc to the width of the helium lines, which is the main parameter used to estimate the star's rotational velocity, by measuring the FWHM of the \ion{He}{I} 6678 \AA\ line over time and by determining the rotational velocity when the disc was present. A difference of up to 7 \AA\ was measured between the width of the \ion{He}{I} 6678 \AA\ line with and without the disc (see Fig.\ref{hei}). Repeating the calculation performed in Sect.~\ref{rotvel} on the \ion{He}{I} 4026 \AA, 4387 \AA, and 4471 \AA\ lines when the disc was present, using the WHT December 2012 spectra, we obtain $v\sin i=170\pm20$ km s$^{-1}$. Thus, the contribution of the disc clearly leads to an underestimate of the true rotational velocity. In \igr, the shell profile seems to be a permanent feature. We do not observe any transition from a shell absorption profile to an ``ordinary'' Be (i.e., pure emission) profile. In models that favour geometrically thin discs with small opening angles, this result implies that the inclination angle must be well above $70^{\circ}$ \citep{hanuschik96a}. Thus, with such a large inclination angle, the true rotational velocity is estimated to be $v_{\rm rot}\sim$380--400 km s$^{-1}$, and the ratio of the equatorial rotational velocity to the critical break-up velocity is $w=v_{\rm rot}/v_{\rm crit}\sim0.8$.\footnote{The break-up velocity of a B1Ve star is $\sim 500$ km s$^{-1}$ \citep{porter96,townsend04,cranmer05}.} If gravity darkening is taken into account, then the fractional rotational velocity would be even larger. Gravity darkening results from fast rotation. Rapidly rotating B stars have centrifugally distorted shapes, with the equatorial radius larger than the polar radius. As a result, the poles have a higher surface gravity, and thus a higher temperature. 
Gravity darkening breaks the linear relationship between the line width and the projected rotational velocity and makes fast rotators display narrower profiles; hence the true rotational velocity is underestimated. The reduction of the measured rotational velocities with respect to the true critical velocity amounts to 10--30\%, with the larger values corresponding to the later spectral subtypes \citep{townsend04}. Correcting for gravity darkening, the rotational velocity would be $v_{\rm rot} \approx 450$ km s$^{-1}$ (assuming $i=80^{\circ}$) and the fractional rotational velocity of the Be companion of \igr\ $w \approx0.9$, confirming the idea that shell stars are Be stars rotating near the critical limit. We caution the reader that the values of the break-up velocity assume that it is possible to assign to each Be star a mass and radius equal to those of a much less rapidly rotating B star, e.g., from well-studied eclipsing binaries. Given that there is not a single direct measurement of the mass and radius for any known Be star, the break-up velocity should be taken as an approximation \citep[see e.g.,][]{harmanec00}. \section{Conclusion} We have performed optical photometric and spectroscopic observations of the optical counterpart to \igr. Our observations show that \igr\ is a high-mass X-ray binary with a Be shell type companion. Its long-term optical spectroscopic variability is characterised by global changes in the structure of the equatorial disc. These global changes manifest observationally as asymmetric profiles and significant intensity variability of the \ha\ line. The changes in the strength of the line are associated with the formation and dissipation of the circumstellar disc. At least since 2009, \igr\ has been in an active Be phase that ended in mid 2013, when a disc-loss episode was observed. 
The entire formation/dissipation cycle is estimated to be six years, although given the lack of data before 2009, this figure needs to be confirmed by future observations. In contrast, the V/R variability is among the fastest observed in Be/X-ray binaries, with characteristic timescales of the order of a few weeks for each V/R phase. The absence of the disc left the underlying B star exposed, allowing us to derive its astrophysical parameters. From the ratios of various metallic lines we have derived a spectral type B1IVe. The width of the He I lines implies a rotational velocity of $\sim370$ km s$^{-1}$. Using the photometric magnitudes and colours, we have estimated the interstellar colour excess $E(B-V)\sim0.76$ mag and the distance $d\sim$ 8.7 kpc. The presence of shell absorption lines indicates that the line of sight to the star lies nearly perpendicular to its rotation axis. Although the Balmer lines show the most clearly marked shell variability, the helium lines are also strongly affected by disc emission, making them narrower than in the absence of the disc. \begin{acknowledgements} We thank the referee, P. Harmanec, for his useful comments and suggestions, which have improved the clarity of this paper. We also thank observers P. Berlind and M. Calkins for performing the FLWO observations and I. Psaridaki for helping with the Skinakas observations. Skinakas Observatory is a collaborative project of the University of Crete, the Foundation for Research and Technology-Hellas and the Max-Planck-Institut f\"ur Extraterrestrische Physik. The WHT and its service programme (service proposal references SW2012b14 and SW2013a19) are operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'{\i}sica de Canarias. This paper uses data products produced by the OIR Telescope Data Center, supported by the Smithsonian Astrophysical Observatory. 
This work has made use of NASA's Astrophysics Data System Bibliographic Services and of the SIMBAD database, operated at the CDS, Strasbourg, France. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} If $C$ is a general curve of genus $g$, equipped with a general map $f \colon C \to \pp^3$ of degree $d$, it is natural to ask whether the intersection $f(C) \cap Q$ of its image with a general quadric $Q$ is a general collection of $2d$ points on $Q$. Interest in this question historically developed as a result of the work of Hirschowitz \cite{mrat} on the maximal rank conjecture for rational space curves, and the later extension of this method by Ballico and Ellia \cite{ball} to nonspecial space curves: The heart of these arguments revolves precisely around understanding the intersection of a general curve with a general quadric. In hopes of both simplifying and extending these results, Ellingsrud and Hirschowitz \cite{eh}, and later Perrin \cite{perrin}, using the technique of liaison, gave partial results on the generality of this intersection. However, a complete analysis has so far remained conjectural. To state the problem precisely, we make the following definition: \begin{defi} We say a stable map $f \colon C \to \pp^r$ from a curve $C$ to $\pp^r$ (with $r \geq 2$) is a \emph{Weak Brill-Noether curve (WBN-curve)} if it corresponds to a point in a component of $\bar{M}_g(\pp^r, d)$ which both dominates $\bar{M}_g$, and whose generic member is a map from a smooth curve, which is an immersion if $r \geq 3$, and birational onto its image if $r = 2$; and which is either nonspecial or nondegenerate. In the latter case, we refer to it as a \emph{Brill-Noether curve} (\emph{BN-curve}). \end{defi} \noindent The celebrated Brill-Noether theorem then asserts that BN-curves of degree~$d$ and genus~$g$ to~$\pp^r$ exist if and only if \[\rho(d, g, r) := (r + 1)d - rg - r(r + 1) \geq 0.\] Moreover, for $\rho(d, g, r) \geq 0$, the parameter space of BN-curves is irreducible. (In particular, it makes sense to talk about a ``general BN-curve''.) 
\medskip In this paper, we give a complete answer to the question posed above: For $f \colon C \to \pp^3$ a general BN-curve of degree $d$ and genus $g$ (with, of course, $\rho(d, g, 3) \geq 0$), we show the intersection $f(C) \cap Q$ is a general collection of $2d$ points on $Q$ except in exactly six cases. Furthermore, in these six cases, we compute precisely what the intersection is. A natural generalization of this problem is to study the intersection of a general BN-curve $f \colon C \to \pp^r$ (for $r \geq 2$) with a hypersurface $H$ of degree $n \geq 1$: In particular, we ask when this intersection consists of a general collection of $dn$ points on $H$ (in all but finitely many cases). For $r = 2$, the divisor $f(C) \cap H$ on $H$ is linearly equivalent to $\oo_H(d)$; in particular, it can only be general if $H$ is rational, i.e.\ if $n = 1$ or $n = 2$. In general, we note that in order for the intersection to be general, it is evidently necessary for \[(r + 1)d - (r - 3)g \sim (r + 1)d - (r - 3)(g - 1) = \dim \bar{M}_g(\pp^r, d)^\circ \geq (r - 1) \cdot dn.\] (Here $\bar{M}_g(\pp^r, d)^\circ$ denotes the component of $\bar{M}_g(\pp^r, d)$ corresponding to the BN-curves, and $A \sim B$ denotes that $A$ differs from $B$ by a quantity bounded by a function of $r$ alone.) If the genus of $C$ is as large as possible (subject to the constraint that $\rho(d, g, r) \geq 0$), i.e.\ if \[g \sim \frac{r + 1}{r} \cdot d,\] then the intersection can only be general when \[(r + 1) \cdot d - (r - 3) \cdot \left(\frac{r + 1}{r} \cdot d \right) \gtrsim (r - 1) n \cdot d;\] or equivalently if \[(r + 1) - (r - 3) \cdot \frac{r + 1}{r} \geq (r - 1) n \quad \Leftrightarrow \quad n \leq \frac{3r + 3}{r^2 - r}.\] For $r = 3$, this implies $n = 1$ or $n = 2$; for $r = 4$, this implies $n = 1$; and for $r \geq 5$, this is impossible. 
\medskip To summarize, there are only five pairs $(r, n)$ where this intersection could be, with the exception of finitely many $(d, g)$ pairs, a collection of $dn$ general points on $H$: The intersection of a plane curve with a line, the intersection of a plane curve with a conic, the intersection of a space curve with a quadric, the intersection of a space curve with a plane, and the intersection of a curve to $\pp^4$ with a hyperplane. Our three main theorems (five counting the first two cases which are trivial) give a complete description of this intersection in these cases: \begin{thm} \label{main-2} Let $f \colon C \to \pp^2$ be a general BN-curve of degree~$d$ and genus~$g$. Then the intersection $f(C) \cap Q$, of $C$ with a general conic $Q$, consists of a general collection of $2d$ points on~$Q$. \end{thm} \begin{thm} \label{main-2-1} Let $f \colon C \to \pp^2$ be a general BN-curve of degree~$d$ and genus~$g$. Then the intersection $f(C) \cap L$, of $C$ with a general line $L$, consists of a general collection of $d$ points on~$L$. \end{thm} \begin{thm} \label{main-3} Let $f \colon C \to \pp^3$ be a general BN-curve of degree~$d$ and genus~$g$. Then the intersection $f(C) \cap Q$, of $C$ with a general quadric $Q$, consists of a general collection of $2d$ points on $Q$, unless \[(d, g) \in \{(4, 1), (5, 2), (6, 2), (6, 4), (7, 5), (8, 6)\}.\] And conversely, in the above cases, we may describe the intersection $f(C) \cap Q \subset Q \simeq \pp^1 \times \pp^1$ in terms of the intrinsic geometry of $Q \simeq \pp^1 \times \pp^1$ as follows: \begin{itemize} \item If $(d, g) = (4, 1)$, then $f(C) \cap Q$ is the intersection of two general curves of bidegree $(2, 2)$. \item If $(d, g) = (5, 2)$, then $f(C) \cap Q$ is a general collection of $10$ points on a curve of bidegree~$(2, 2)$. 
\item If $(d, g) = (6, 2)$, then $f(C) \cap Q$ is a general collection of $12$ points $p_1, \ldots, p_{12}$ lying on a curve $D$ which satisfy: \begin{itemize} \item The curve $D$ is of bidegree $(3, 3)$ (and so is in particular of arithmetic genus $4$). \item The curve $D$ has two nodes (and so is in particular of geometric genus $2$). \item The divisors $\oo_D(2,2)$ and $p_1 + \cdots + p_{12}$ are linearly equivalent when pulled back to the normalization of $D$. \end{itemize} \item If $(d, g) = (6, 4)$, then $f(C) \cap Q$ is the intersection of two general curves of bidegrees $(2, 2)$ and $(3,3)$ respectively. \item If $(d, g) = (7, 5)$, then $f(C) \cap Q$ is a general collection of $14$ points $p_1, \ldots, p_{14}$ lying on a curve $D$ which satisfy: \begin{itemize} \item The curve $D$ is of bidegree $(3, 3)$. \item The divisor $p_1 + \cdots + p_{14} - \oo_D(2, 2)$ on $D$ is effective. \end{itemize} \item If $(d, g) = (8, 6)$, then $f(C) \cap Q$ is a general collection of $16$ points on a curve of bidegree~$(3,3)$. \end{itemize} In particular, the above descriptions show $f(C) \cap Q$ is not a general collection of $2d$ points on~$Q$. \end{thm} \begin{thm} \label{main-3-1} Let $f \colon C \to \pp^3$ be a general BN-curve of degree~$d$ and genus~$g$. Then the intersection $f(C) \cap H$, of $C$ with a general plane $H$, consists of a general collection of $d$ points on $H$, unless \[(d, g) = (6, 4).\] And conversely, for $(d, g) = (6, 4)$, the intersection $f(C) \cap H$ is a general collection of $6$ points on a conic in $H \simeq \pp^2$; in particular, it is not a general collection of $d = 6$ points. \end{thm} \begin{thm} \label{main-4} Let $f \colon C \to \pp^4$ be a general BN-curve of degree~$d$ and genus~$g$. 
Then the intersection $f(C) \cap H$, of $C$ with a general hyperplane $H$, consists of a general collection of $d$ points on $H$, unless \[(d, g) \in \{(8, 5), (9, 6), (10, 7)\}.\] And conversely, in the above cases, we may describe the intersection $f(C) \cap H \subset H \simeq \pp^3$ in terms of the intrinsic geometry of $H \simeq \pp^3$ as follows: \begin{itemize} \item If $(d, g) = (8, 5)$, then $f(C) \cap H$ is the intersection of three general quadrics. \item If $(d, g) = (9, 6)$, then $f(C) \cap H$ is a general collection of $9$ points on a curve $E \subset \pp^3$ of degree~$4$ and genus~$1$. \item If $(d, g) = (10, 7)$, then $f(C) \cap H$ is a general collection of $10$ points on a quadric. \end{itemize} \end{thm} The above theorems can be proven by studying the normal bundle of the general BN-curve $f \colon C \to \pp^r$: For any hypersurface $S$ of degree $n$, and unramified map $f \colon C \to \pp^r$ dimensionally transverse to $S$, basic deformation theory implies that the map \[f \mapsto (f(C) \cap S)\] (from the corresponding Kontsevich space of stable maps, to the corresponding symmetric power of $S$) is smooth at $[f]$ if and only if \[H^1(N_f(-n)) = 0.\] Here, $N_f(-n) = N_f \otimes f^* \oo_{\pp^r}(-n)$ denotes the twist of the normal bundle $N_f$ of the map $f \colon C \to \pp^r$; this is the vector bundle on the domain $C$ of $f$ defined via \[N_f = \ker(f^* \Omega_{\pp^r} \to \Omega_C)^\vee.\] Since a map between reduced irreducible varieties is dominant if and only if it is generically smooth, the map $f \mapsto (f(C) \cap S)$ is therefore dominant if and only if $H^1(N_f(-n)) = 0$ for $[f]$ general. This last condition being visibly open, our problem is thus to prove the existence of an unramified BN-curve $f \colon C \to \pp^r$ of specified degree and genus, for which $H^1(N_f(-n)) = 0$. For this, we will use a variety of techniques, most crucially specialization to a map from a reducible curve $X \cup_\Gamma Y \to \pp^r$.
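As a quick plausibility check (this computation is standard, though not carried out above, and indicates where the list of five pairs comes from): the normal bundle exact sequence $0 \to T_C \to f^* T_{\pp^r} \to N_f \to 0$ shows that $N_f$ has rank $r - 1$ and degree $(r+1)d + 2g - 2$, so Riemann--Roch gives \[\chi(N_f(-n)) = \bigl((r+1) - (r-1)n\bigr)d - (r-3)(g-1).\] Since $H^1(N_f(-n)) = 0$ forces $\chi(N_f(-n)) = h^0(N_f(-n)) \geq 0$, a pair $(r, n)$ can work outside finitely many $(d, g)$ only if $(r+1) - (r-1)n \geq 0$, i.e.\ only if $n \leq 3$ for $r = 2$, $n \leq 2$ for $r = 3$, and $n = 1$ for $r \geq 4$. Of the remaining candidates beyond the five pairs above, $(r, n) = (2, 3)$ fails because $N_f(-3) \simeq K_C$ by adjunction, so $h^1(N_f(-3)) = 1$ for every $(d, g)$; and $n = 1$ with $r \geq 5$ fails because the Brill--Noether inequality $\rho(d, g, r) \geq 0$ reads $(r+1)d \geq rg + r(r+1)$, so for $d$ minimal, $\chi(N_f(-1)) = 2d - (r-3)(g-1)$ becomes negative for $g$ large, as $\frac{2r}{r+1} < r - 3$ when $r \geq 5$.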
We begin, in Section~\ref{sec:reducible}, by giving several tools for studying the normal bundle of a map from a reducible curve. Then in Section~\ref{sec:inter}, we review results on the closely-related \emph{interpolation problem} (c.f.\ \cite{firstpaper}). In Section~\ref{sec:rbn}, we review results about when certain maps from reducible curves, of the type we shall use, are BN-curves. Using these techniques, we then concentrate our attention in Section~\ref{sec:indarg} on maps from reducible curves $X \cup_\Gamma Y \to \pp^r$ where $Y$ is a line or canonical curve. Consideration of these curves enables us to make an inductive argument that reduces our main theorems to finite casework. This finite casework is then taken care of in three steps: First, in Sections~\ref{sec:hir}--\ref{sec:hir-3}, we again use degeneration to a map from a reducible curve, considering the special case when $Y \to \pp^r$ factors through a hyperplane. Second, in Section~\ref{sec:in-surfaces}, we specialize to immersions of smooth curves contained in Del Pezzo surfaces, and study the normal bundle of our curve using the normal bundle exact sequence for a curve in a surface. Lastly, in Section~\ref{sec:51} we use the geometry of the cubic scroll in $\pp^4$ to construct an example of an immersion of a smooth curve $f \colon C \hookrightarrow \pp^3$ of degree $5$ and genus $1$ with $H^1(N_f(-2)) = 0$. Finally, in Section~\ref{sec:converses}, we examine each of the cases in our above theorems where the intersection is not general. In each of these cases, we work out precisely what the intersection is (and show that it is not general). \subsection*{Conventions} In this paper we make the following conventions: \begin{itemize} \item We work over an algebraically closed field of characteristic zero. \item A \emph{curve} shall refer to a nodal curve, which is assumed to be connected unless otherwise specified. 
\end{itemize} \subsection*{Acknowledgements} The author would like to thank Joe Harris for his guidance throughout this research. The author would also like to thank Gavril Farkas, Isabel Vogt, and members of the Harvard and MIT mathematics departments for helpful conversations; and to acknowledge the generous support both of the Fannie and John Hertz Foundation, and of the Department of Defense (NDSEG fellowship). \section{Normal Bundles of Maps from Reducible Curves \label{sec:reducible}} In order to describe the normal bundle of a map from a reducible curve, it will be helpful to introduce some notions concerning modifications of vector bundles. The interested reader is encouraged to consult \cite{firstpaper} (Sections 2, 3, and~5), where these notions are developed in full; we include here only a brief summary, which will suffice for our purposes. \begin{defi} If $f \colon X \to \pp^r$ is a map from a scheme $X$ to $\pp^r$, and $p \in X$ is a point, we write $[T_p X] \subset \pp^r$ for the \emph{projective realization of the tangent space} --- i.e.\ for the linear subspace $L \subset \pp^r$ containing $f(p)$ and satisfying $T_{f(p)} L = f_*(T_p X)$. \end{defi} \begin{defi} Let $\Lambda \subset \pp^r$ be a linear subspace, and $f \colon C \to \pp^r$ be an unramified map from a curve. Write $U_{f, \Lambda} \subset C$ for the open subset of points $p \in C$ so that the projective realization of the tangent space $[T_p C]$ does not meet $\Lambda$. Suppose that $U_{f, \Lambda}$ is nonempty, and contains the singular locus of $C$. Define \[N_{f \to \Lambda}|_{U_{f, \Lambda}} \subset N_f|_{U_{f, \Lambda}}\] as the kernel of the differential of the projection from $\Lambda$ (which is regular on a neighborhood of $f(U_{f, \Lambda})$). We then let $N_{f \to \Lambda}$ be the unique extension of $N_{f \to \Lambda}|_{U_{f, \Lambda}}$ to a sub-vector-bundle (i.e.\ a subsheaf with locally free quotient) of $N_f$ on $C$.
For a more thorough discussion of this construction (written for $f$ an immersion but which readily generalizes), see Section~5 of \cite{firstpaper}. \end{defi} \begin{defi} Given a subbundle $\mathcal{F} \subset \mathcal{E}$ of a vector bundle on a scheme $X$, and a Cartier divisor $D$ on $X$, we define \[\mathcal{E}[D \to \mathcal{F}]\] as the kernel of the natural map \[\mathcal{E} \to (\mathcal{E} / \mathcal{F})|_D.\] Note that $\mathcal{E}[D \to \mathcal{F}]$ is naturally isomorphic to $\mathcal{E}$ on $X \smallsetminus D$. Additionally, note that $\mathcal{E}[D \to \mathcal{F}]$ depends only on $\mathcal{F}|_D$. For a more thorough discussion of this construction, see Sections~2 and~3 of \cite{firstpaper}. \end{defi} \begin{defi} Given a subspace $\Lambda \subset \pp^r$, an unramified map $f \colon C \to \pp^r$ from a curve, and a Cartier divisor $D$ on $C$, we define \[N_f[D \to \Lambda] := N_f[D \to N_{f \to \Lambda}].\] \end{defi} We note that these constructions can be iterated on a smooth curve: Given subbundles $\mathcal{F}_1, \mathcal{F}_2 \subset \mathcal{E}$ of a vector bundle on a smooth curve, there is a unique subbundle $\mathcal{F}_2' \subset \mathcal{E}[D_1 \to \mathcal{F}_1]$ which agrees with $\mathcal{F}_2$ away from $D_1$ (c.f.\ Proposition~3.1 of \cite{firstpaper}). We may then define: \[\mathcal{E}[D_1 \to \mathcal{F}_1][D_2 \to \mathcal{F}_2] := \mathcal{E}[D_1 \to \mathcal{F}_1][D_2 \to \mathcal{F}_2'].\] Basic properties of this construction (as well as precise conditions when such iterated modifications make sense for higher-dimensional varieties) are investigated in \cite{firstpaper} (Sections~2 and~3). For example, we have natural isomorphisms $\mathcal{E}[D_1 \to \mathcal{F}_1][D_2 \to \mathcal{F}_2] \simeq \mathcal{E}[D_2 \to \mathcal{F}_2][D_1 \to \mathcal{F}_1]$ in several cases, including when $\mathcal{F}_1 \subseteq \mathcal{F}_2$. 
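A minimal example (ours, included for illustration) may help fix ideas: take $C$ a smooth curve, $\mathcal{E} = \oo_C \oplus \oo_C$, let $\mathcal{F} = \oo_C \oplus 0$ be the first factor, and let $D = p$ be a reduced point. Then $\mathcal{E}/\mathcal{F} \simeq \oo_C$, and \[\mathcal{E}[p \to \mathcal{F}] = \ker\bigl(\oo_C \oplus \oo_C \to \oo_C|_p\bigr) \simeq \oo_C \oplus \oo_C(-p).\] That is, the modification leaves $\mathcal{E}$ unchanged along $\mathcal{F}$ and twists down the complementary directions along $D$; in particular, for $D$ effective, \[\deg \mathcal{E}[D \to \mathcal{F}] = \deg \mathcal{E} - (\deg D) \cdot \rk(\mathcal{E}/\mathcal{F}).\]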
Using these constructions, we may give a partial characterization of the normal bundle $N_f$ of an unramified map from a reducible curve $f \colon X \cup_\Gamma Y \to \pp^r$: \begin{prop}[Hartshorne-Hirschowitz] Let $f \colon X \cup_\Gamma Y \to \pp^r$ be an unramified map from a reducible curve. Write $\Gamma = \{p_1, p_2, \ldots, p_n\}$, and for each $i$ let $q_i \neq f(p_i)$ be a point on the projective realization $[T_{p_i} Y]$ of the tangent space to $Y$ at $p_i$. Then we have \[N_f|_X = N_{f|_X}(\Gamma)[p_1 \to q_1][p_2 \to q_2] \cdots [p_n \to q_n].\] \end{prop} \begin{proof} This is Corollary~3.2 of \cite{hh}, re-expressed in the above language. (Hartshorne and Hirschowitz state this only for $r = 3$ and $f$ an immersion; but the argument they give works for $r$ arbitrary.) \end{proof} Our basic strategy to study the normal bundle of an unramified map from a reducible curve $f \colon C \cup_\Gamma D \to \pp^r$ is given by the following lemma: \begin{lm} \label{glue} Let $f \colon C \cup_\Gamma D \to \pp^r$ be an unramified map from a reducible curve, and let $E$ and $F$ be divisors supported on $C \smallsetminus \Gamma$ and $D \smallsetminus \Gamma$ respectively. Suppose that the natural map \[\alpha \colon H^0(N_{f|_D}(-F)) \to \bigoplus_{p \in \Gamma} \left(\frac{T_p (\pp^r)}{f_* (T_p (C \cup_\Gamma D))}\right)\] is surjective (respectively injective), and that \begin{gather*} H^1(N_f|_D (-F)) = 0 \quad \text{(respectively } H^0(N_f|_D (-F)) = H^0(N_{f|_D} (-F))\text{)} \\ H^1(N_{f|_C} (-E)) = 0 \quad \text{(respectively } H^0(N_{f|_C} (-E)) = 0\text{)}.
\end{gather*} Then we have \[H^1(N_f(-E-F)) = 0 \quad \text{(respectively } H^0(N_f(-E-F)) = 0\text{)}.\] \end{lm} \begin{proof} Write $\mathcal{K}$ for the sheaf supported along $\Gamma$ whose stalk at $p \in \Gamma$ is the quotient of tangent spaces: \[\mathcal{K}_p = \frac{T_p(\pp^r)}{f_*(T_p(C \cup_\Gamma D))}.\] Additionally, write $\mathcal{N}$ for the (not locally-free) subsheaf of $N_f$ ``corresponding to deformations which do not smooth the nodes $\Gamma$''; or in symbols, as the kernel of the natural map \[N_f \to T^1_\Gamma,\] where $T^1$ is the Lichtenbaum-Schlessinger $T^1$-functor. We have the following exact sequences of sheaves: \[\begin{CD} 0 @>>> \mathcal{N} @>>> N_f @>>> T^1_\Gamma @>>> 0 \\ @. @VVV @VVV @| @. \\ 0 @>>> N_{f|_D} @>>> N_f|_D @>>> T^1_\Gamma @>>> 0 \\ @. @. @. @. @. \\ 0 @>>> \mathcal{N} @>>> N_{f|_C} \oplus N_{f|_D} @>>> \mathcal{K} @>>> 0. \\ \end{CD}\] The first sequence above is just the definition of $\mathcal{N}$. Restriction of the first sequence to~$D$ yields the second sequence (we have $\mathcal{N}|_D \simeq N_{f|_D}$); the map between them being of course the restriction map. The final sequence expresses $\mathcal{N}$ as the gluing of $\mathcal{N}|_C \simeq N_{f|_C}$ to $\mathcal{N}|_D \simeq N_{f|_D}$ along $\mathcal{N}|_\Gamma \simeq \mathcal{K}$. Twisting everything in sight by $-E-F$, we obtain new sequences: \[\begin{CD} 0 @>>> \mathcal{N}(-E-F) @>>> N_f(-E-F) @>>> T^1_\Gamma @>>> 0 \\ @. @VVV @VVV @| @. \\ 0 @>>> N_{f|_D}(-F) @>>> N_f|_D(-F) @>>> T^1_\Gamma @>>> 0 \\ @. @. @. @. @. \\ 0 @>>> \mathcal{N}(-E-F) @>>> N_{f|_C}(-E) \oplus N_{f|_D}(-F) @>>> \mathcal{K} @>>> 0. \\ \end{CD}\] The commutativity of the rightmost square in the first diagram implies that the image of $H^0(N_f(-E-F)) \to H^0(T^1_\Gamma)$ is contained in the image of $H^0(N_f|_D(-F)) \to H^0(T^1_\Gamma)$.
Consequently, we have \begin{align} \dim H^0(N_f(-E-F)) &= \dim H^0(\mathcal{N}(-E-F)) + \dim \im\left(H^0(N_f(-E-F)) \to H^0(T^1_\Gamma)\right) \nonumber \\ &\leq \dim H^0(\mathcal{N}(-E-F)) + \dim \im\left(H^0(N_f|_D(-F)) \to H^0(T^1_\Gamma)\right) \nonumber \\ &= \dim H^0(\mathcal{N}(-E-F)) + \dim H^0(N_f|_D(-F)) - \dim H^0(N_{f|_D}(-F)). \label{glue-dim} \end{align} Next, our assumption that $H^0(N_{f|_D}(-F)) \to H^0(\mathcal{K})$ is surjective (respectively our assumptions that $H^0(N_{f|_C}(-E)) = 0$ and $H^0(N_{f|_D}(-F)) \to H^0(\mathcal{K})$ is injective) implies in particular that $H^0(N_{f|_C}(-E) \oplus N_{f|_D}(-F)) \to H^0(\mathcal{K})$ is surjective (respectively injective). In the ``respectively'' case, this yields $H^0(\mathcal{N}(-E-F)) = 0$, which combined with \eqref{glue-dim} and our assumption that $H^0(N_f|_D(-F)) = H^0(N_{f|_D}(-F))$ implies $H^0(N_f(-E-F)) = 0$ as desired. In the other case, we have a bit more work to do; the surjectivity of $H^0(N_{f|_D}(-F)) \to H^0(\mathcal{K})$ yields \[\dim H^0(\mathcal{N}(-E-F)) = \dim H^0(N_{f|_C}(-E) \oplus N_{f|_D}(-F)) - \dim H^0(\mathcal{K});\] or upon rearrangement, \begin{align*} \dim H^0(\mathcal{N}(-E-F)) - \dim H^0(N_{f|_D}(-F)) &= \dim H^0(N_{f|_C}(-E)) - \dim H^0(\mathcal{K}) \\ &= \chi(N_{f|_C}(-E)) - \chi(\mathcal{K}). \end{align*} (For the last equality, $\dim H^0(N_{f|_C}(-E)) = \chi(N_{f|_C}(-E)) + \dim H^1(N_{f|_C}(-E)) = \chi(N_{f|_C}(-E))$ because $H^1(N_{f|_C}(-E)) = 0$ by assumption. Additionally, $\dim H^0(\mathcal{K}) = \chi(\mathcal{K})$ because $\mathcal{K}$ is punctual.) 
Substituting this into \eqref{glue-dim}, and noting that $\dim H^0(N_f|_D(-F)) = \chi(N_f|_D(-F))$ because $H^1(N_f|_D(-F)) = 0$ by assumption, we obtain: \begin{align} \dim H^0(N_f(-E-F)) &\leq \dim H^0(N_f|_D(-F)) + \dim H^0(\mathcal{N}(-E-F)) - \dim H^0(N_{f|_D}(-F)) \nonumber \\ &= \chi(N_f|_D(-F)) + \chi(N_{f|_C}(-E)) - \chi(\mathcal{K}) \nonumber \\ &= \chi(N_f|_D(-F)) + \chi(N_f|_C(-E - \Gamma)) \nonumber \\ &= \chi(N_f(-E - F)). \label{glue-done} \end{align} For the final two equalities, we have used the exact sequences of sheaves \begin{gather*} 0 \to N_f|_C(-E - \Gamma) \to N_{f|_C}(-E) \to \mathcal{K} \to 0 \\[1ex] 0 \to N_f|_C(-E - \Gamma) \to N_f(-E-F) \to N_f|_D(-F) \to 0; \end{gather*} which are just twists by $-E-F$ of the exact sequences: \begin{gather*} 0 \to N_f|_C(-\Gamma) \to N_{f|_C} \to \mathcal{K} \to 0 \\[1ex] 0 \to N_f|_C(-\Gamma) \to N_f \to N_f|_D \to 0. \end{gather*} \noindent To finish, we note that, by \eqref{glue-done}, \[\dim H^1(N_f(-E-F)) = \dim H^0(N_f(-E-F)) - \chi(N_f(-E - F)) \leq 0,\] and so $H^1(N_f(-E-F)) = 0$ as desired. \end{proof} In the case where $f|_D$ factors through a hyperplane, the hypotheses of Lemma~\ref{glue} become easier to check: \begin{lm} \label{hyp-glue} Let $f \colon C \cup_\Gamma D \to \pp^r$ be an unramified map from a reducible curve, such that $f|_D$ factors as a composition of $f_D \colon D \to H$ with the inclusion of a hyperplane $\iota \colon H \subset \pp^r$, while $f|_C$ is transverse to $H$ along $\Gamma$. Let $E$ and $F$ be divisors supported on $C \smallsetminus \Gamma$ and $D \smallsetminus \Gamma$ respectively. Suppose that, for some $i \in \{0, 1\}$, \[H^i(N_{f_D}(-\Gamma-F)) = H^i(\oo_D(1)(\Gamma-F)) = H^i(N_{f|_C} (-E)) = 0.\] Then we have \[H^i(N_f(-E-F)) = 0.\] \end{lm} \begin{proof} If $i = 0$, we note that $H^0(\oo_D(1)(\Gamma - F)) = 0$ implies $H^0(\oo_D(1)(-F)) = 0$. 
In particular, using the exact sequences \[\begin{CD} 0 @>>> N_{f_D}(-F) @>>> N_{f|_D}(-F) @>>> \oo_D(1)(-F) @>>> 0 \\ @. @| @VVV @VVV @. \\ 0 @>>> N_{f_D}(-F) @>>> N_f|_D(-F) @>>> \oo_D(1)(\Gamma - F) @>>> 0, \end{CD}\] we conclude from the first sequence that $H^0(N_{f_D}(-F)) \to H^0(N_{f|_D}(-F))$ is an isomorphism, and from the $5$-lemma applied to the corresponding map between long exact sequences that $H^0(N_{f|_D}(-F)) = H^0(N_f|_D(-F))$. Similarly, when $i = 1$, we note that $H^1(N_{f_D}(-\Gamma-F)) = 0$ implies $H^1(N_{f_D}(-F)) = 0$; we thus conclude from the second sequence that $H^1(N_f|_D(-F)) = 0$. It thus remains to check that the map $\alpha$ in Lemma~\ref{glue} is injective if $i = 0$ and surjective if $i = 1$. For this we use the commutative diagram \[\begin{CD} \displaystyle H^0(N_{f_D}(-F)) @>\beta>> N_{f_D}|_\Gamma \simeq \displaystyle \bigoplus_{p \in \Gamma} \left(\frac{T_p H}{f_*(T_p D)}\right) \\ @VgVV @VV{\iota_*}V \\ \displaystyle H^0(N_{f|_D}(-F)) @>\alpha>> \displaystyle \bigoplus_{p \in \Gamma} \left(\frac{T_p (\pp^r)}{f_*(T_p (C \cup_\Gamma D))}\right). \end{CD}\] Since $f|_C$ is transverse to $H$ along $\Gamma$, the map $\iota_*$ above is an isomorphism. In particular, since $g$ is an isomorphism when $i = 0$, it suffices to check that $\beta$ is injective if $i = 0$ and surjective if $i = 1$. But using the exact sequence \[0 \to N_{f_D}(-\Gamma-F) \to N_{f_D}(-F) \to N_{f_D}|_\Gamma \to 0,\] this follows from our assumption that $H^i(N_{f_D}(-\Gamma-F)) = 0$. \end{proof} \section{Interpolation \label{sec:inter}} If we generalize $N_f(-n)$ to $N_f(-D)$, where $D$ is a general effective divisor, we get the problem of ``interpolation.'' Geometrically, this corresponds to asking if there is a curve of degree $d$ and genus $g$ which passes through a collection of points which are general in $\pp^r$ (as opposed to general in a hypersurface $S$). 
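For instance (a classical example, included here for orientation): for a rational normal curve $f \colon \pp^1 \to \pp^3$, so $(d, g, r) = (3, 0, 3)$, the normal bundle has $\rk N_f = 2$ and $\deg N_f = (r+1)d + 2g - 2 = 10$; hence for a general effective divisor $D$ of degree $m$, \[\chi(N_f(-D)) = 10 - 2m + 2 = 12 - 2m,\] which first vanishes at $m = 6$. Interpolation for $N_f$ thus predicts that a rational normal curve in $\pp^3$ passes through $6$ general points, in agreement with the classical fact that there is a unique twisted cubic through $6$ general points.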
This condition is analogous in some sense to the conditions of semistability and section-semistability (see Section~3 of~\cite{nasko}), as well as to the Raynaud condition (property $\star$ of \cite{raynaud}); although we shall not make use of these analogies here. \begin{defi} \label{def:inter} We say a vector bundle $\mathcal{E}$ on a curve $C$ \emph{satisfies interpolation} if it is nonspecial, and for a general effective divisor $D$ of any degree, \[H^0(\mathcal{E}(-D)) = 0 \tor H^1(\mathcal{E}(-D)) = 0.\] \end{defi} We have the following results on interpolation from \cite{firstpaper}. To rephrase them in our current language, note that if $f \colon C \to \pp^r$ is a general BN-curve for $r \geq 3$, then $f$ is an immersion, so $N_f$ coincides with the normal bundle $N_{f(C)/\pp^r}$ of the image. Note also that, from Brill-Noether theory, a general BN-curve $f \colon C \to \pp^r$ of degree $d$ and genus $g$ is nonspecial (i.e.\ satisfies $H^1(f^* \oo_{\pp^r}(1)) = 0$) if and only if $d \geq g + r$. \begin{prop}[Theorem~1.3 of~\cite{firstpaper}] \label{inter} Let $f \colon C \to \pp^r$ (for $r \geq 3$) be a general BN-curve of degree $d$ and genus $g$, where \[d \geq g + r.\] Then $N_f$ satisfies interpolation, unless \[(d, g,r) \in \{(5,2,3), (6,2,4), (7,2,5)\}.\] \end{prop} \begin{prop}[Proposition~4.12 of~\cite{firstpaper}] \label{twist} Let $\mathcal{E}$ be a vector bundle on a curve $C$, and $D$ be a divisor on $C$. If $\mathcal{E}$ satisfies interpolation and \[\chi(\mathcal{E}(-D)) \geq (\rk \mathcal{E}) \cdot (\operatorname{genus} C),\] then $\mathcal{E}(-D)$ satisfies interpolation. In particular, \[H^1(\mathcal{E}(-D)) = 0.\] \end{prop} \begin{lm} \label{g2} Let $f \colon C \to \pp^r$ (for $r \in \{3, 4, 5\}$) be a general BN-curve of degree $r + 2$ and genus $2$. Then $H^1(N_f(-1)) = 0$. 
\end{lm} \begin{proof} We will show that there exists an immersion $C \hookrightarrow \pp^r$, which is a BN-curve of degree $r + 2$ and genus $2$, and whose image meets a hyperplane $H$ transversely in a general collection of $r + 2$ points. For this, we first find a rational normal curve $R \subset H$ passing through $r + 2$ general points, which is possible by Corollary~1.4 of~\cite{firstpaper}. This rational normal curve is then the hyperplane section of some rational surface scroll $S \subset \pp^r$ (and we can freely choose the projective equivalence class of $S$). It thus suffices to prove that there exists a smooth curve $C \subset S$, for which $C \subset S \subset \pp^r$ is a BN-curve of degree $r + 2$ and genus $2$, such that $C \cap (H \cap S)$ is a set of $r + 2$ general points on $H \cap S$; or alternatively such that the map \[C \mapsto (C \cap (H \cap S)),\] from the Hilbert scheme of curves on $S$, to the Hilbert scheme of points on $H \cap S$, is smooth at $[C]$; this in turn would follow from $H^1(N_{C/S}(-1)) = 0$. But by Corollary~13.3 of \cite{firstpaper}, the general BN-curve $C' \subset \pp^r$ (which is an immersion since $r \geq 3$) of degree $r + 2$ and genus $2$ in $\pp^r$ is contained in some rational surface scroll $S'$, and satisfies $\chi(N_{C'/S'}) = 11$. Since we can choose $S$ projectively equivalent to $S'$, we may thus find a BN-curve $C \subset S$ of degree~$r + 2$ and genus~$2$ with $\chi(N_{C/S}) = 11$. But then, \[\chi(N_{C/S}(-1)) = 11 - d \geq g \quad \Rightarrow \quad H^1(N_{C/S}(-1)) = 0. \qedhere\] \end{proof} \noindent Combining these results, we obtain: \begin{lm} \label{from-inter} Let $f \colon C \to \pp^r$ (for $r \geq 3$) be a general BN-curve of degree $d$ and genus $g$. Suppose that $d \geq g + r$. \begin{itemize} \item If $r = 3$ and $g = 0$, then $H^1(N_f(-2)) = 0$. In fact, $N_f(-2)$ satisfies interpolation. \item If $r = 3$, then $H^1(N_f(-1)) = 0$.
In fact, $N_f(-1)$ satisfies interpolation except when $(d, g) = (5, 2)$. \item If $r = 4$ and $d \geq 2g$, then $H^1(N_f(-1)) = 0$. In fact, $N_f(-1)$ satisfies interpolation except when $(d, g) = (6, 2)$. \end{itemize} \end{lm} \begin{proof} When $(d, g, r) \in \{(5, 2, 3), (6, 2, 4)\}$, the desired result follows from Lemma~\ref{g2}. Otherwise, from Proposition~\ref{inter}, we know that $N_f$ satisfies interpolation. Hence, the desired conclusion follows by applying Proposition~\ref{twist}: If $r = 3$, then \begin{align*} \chi(N_f(-1)) &= 2d \geq 2g = (r - 1) g\\ \chi(N_f(-2)) &= 0 = (r - 1)g; \end{align*} and if $r = 4$ and $d \geq 2g$, then \[\chi(N_f(-1)) = 2d - g + 1 \geq 3g = (r - 1)g. \qedhere \] \end{proof} \begin{lm} \label{addone-raw} Suppose $f \colon C \cup_u L \to \pp^3$ is an unramified map from a reducible curve, with $L \simeq \pp^1$, and $u$ a single point, and $f|_L$ of degree~$1$. Write $v \neq f(u)$ for some other point on $f(L)$. If \[H^1(N_{f|_C}(-2)(u)[2u \to v]) = 0,\] then we have \[H^1(N_f(-2)) = 0.\] \end{lm} \begin{proof} We apply Lemma~8.5 of \cite{firstpaper} (which is stated for $f$ an immersion, in which case $N_f = N_{C \cup L}$ and $N_{f|_C} = N_C$, but the same proof works whenever $f$ is unramified); we take $N_C' = N_{f|_C}(-2)$ and $\Lambda_1 = \Lambda_2 = \emptyset$. This implies $N_f(-2)$ satisfies interpolation (c.f.\ Definition~\ref{def:inter}) provided that $N_{f|_C}(-2)(u)[u \to v][u \to v]$ satisfies interpolation. But we have \[\chi(N_f(-2)) = \chi(N_{f|_C}(-2)(u)[u \to v][u \to v]) = 0;\] so both of these interpolation statements are equivalent to the vanishing of $H^1$. That is, we have $H^1(N_f(-2)) = 0$, provided that \[H^1(N_{f|_C}(-2)(u)[u \to v][u \to v]) = H^1(N_{f|_C}(-2)(u)[2u \to v]) = 0,\] as desired. \end{proof} We finish this section with the following proposition, which immediately implies Theorems~\ref{main-2} and~\ref{main-2-1}: \begin{prop} \label{p2} Let $f \colon C \to \pp^2$ be a curve.
Then $N_f(-2)$ satisfies interpolation. In particular, $H^1(N_f(-2)) = H^1(N_f(-1)) = 0$. \end{prop} \begin{proof} By adjunction, \[N_f \simeq K_C \otimes f^* K_{\pp^2}^{-1} \simeq K_C(3) \imp N_f(-2) \simeq K_C(1).\] By Serre duality, \[H^1(K_C(1)) \simeq H^0(\oo_C(-1))^\vee = 0;\] which since $K_C(1)$ is a line bundle implies it satisfies interpolation. \end{proof} \section{Reducible BN-Curves \label{sec:rbn}} \begin{defi} Let $\Gamma \subset \pp^r$ be a finite set of $n$ points. A pair $(f \colon C \to \pp^r, \Delta \subset C_{\text{sm}})$, where $C$ is a curve, $f$ is a map from $C$ to $\pp^r$, and $\Delta$ is a subset of $n$ points on the smooth locus $C_{\text{sm}}$, shall be called a \emph{marked curve (respectively marked BN-curve, respectively marked WBN-curve) passing through $\Gamma$} if $f \colon C \to \pp^r$ is a map from a curve (respectively a BN-curve, respectively a WBN-curve) and $f(\Delta) = \Gamma$. Given a marked curve $(f \colon C \to \pp^r, \Delta)$ passing through $\Gamma$, we realize $\Gamma$ as a subset of $C$ via $\Gamma \simeq \Delta \subset C$. For $p \in \Gamma$, we then define the \emph{tangent line $T_p (f, \Gamma)$ at $p$} to be the unique line $\ell \subset \pp^r$ through $p$ with $T_p \ell = f_* T_p C$. \end{defi} Let $\Gamma \subset \pp^r$ be a finite set of $n$ general points, and $(f_i \colon C_i \to \pp^r, \Gamma_i)$ be marked WBN-curves passing through $\Gamma$. We then write $C_1 \cup_\Gamma C_2$ for the curve obtained from $C_1$ and $C_2$ by gluing $\Gamma_1$ to $\Gamma_2$ via the isomorphism $\Gamma_1 \simeq \Gamma \simeq \Gamma_2$. The maps $f_i$ give rise to a map $f \colon C_1 \cup_\Gamma C_2 \to \pp^r$ from a reducible curve. Then we have the following result: \begin{prop}[Theorem~1.3 of \cite{rbn}] \label{prop:glue} Suppose that, for at least one $i \in \{1, 2\}$, we have \[(r + 1) d_i - r g_i + r \geq rn.\] Then $f \colon C_1 \cup_\Gamma C_2 \to \pp^r$ is a WBN-curve.
\end{prop} \begin{prop} \label{prop:interior} In Proposition~\ref{prop:glue}, suppose that $[f_1, \Gamma_1]$ is general in some component of the space of marked WBN-curves passing through $\Gamma$, and that $H^1(N_{f_2}) = 0$. Then $H^1(N_f) = 0$. \end{prop} \begin{proof} This follows from combining Lemmas~3.2 and~3.4 of~\cite{rbn}. \end{proof} The following lemmas give information about the spaces of marked BN-curves passing through small numbers of points. \begin{lm} \label{small-irred} Let $\Gamma \subset \pp^r$ be a general set of $n \leq r + 2$ points, and $d$ and $g$ be integers with $\rho(d, g, r) \geq 0$. Then the space of marked BN-curves of degree $d$ and genus $g$ to $\pp^r$ passing through $\Gamma$ is irreducible. \end{lm} \begin{proof} First note that, since $n \leq r + 2$, any $n$ points in linear general position are related by an automorphism of $\pp^r$. Fix some ordering on $\Gamma$. The space of BN-curves of degree $d$ and genus $g$ is irreducible, and the source of the generic BN-curve is irreducible; consequently the space of such BN-curves with an ordered collection of $n$ marked points, and the open subset thereof where the images of the marked points are in linear general position, is irreducible. It follows that the space of such marked curves endowed with an automorphism bringing the images of the ordered marked points to~$\Gamma$ (respecting our fixed ordering on $\Gamma$) is also irreducible. But by applying the automorphism to the curve and forgetting the order of the marked points, this latter space dominates the space of such BN-curves passing through~$\Gamma$; the space of such BN-curves passing through~$\Gamma$ is thus irreducible. \end{proof} \begin{lm} \label{gen-tang-rat} Let $\Gamma \subset \pp^r$ be a general set of $n \leq r + 2$ points, and $\{\ell_p : p \in \Gamma\}$ be a set of lines with $p \in \ell_p$. 
Then the general marked rational normal curve passing through $\Gamma$ has tangent lines at each point $p \in \Gamma$ distinct from $\ell_p$. \end{lm} \begin{proof} Since the intersection of dense opens is a dense open, it suffices to show the general marked rational normal curve $(f \colon C \to \pp^r, \Delta)$ passing through $\Gamma$ has tangent line at $p$ distinct from $\ell_p$ for any one $p \in \Gamma$. For this we consider the map, from the space of such marked rational normal curves, to the space of lines through $p$, which associates to the curve its tangent line at $p$. Basic deformation theory implies this map is smooth (and thus nonconstant) at $(f, \Delta)$ so long as $H^1(N_f(-\Delta)(-q)) = 0$, where $q \in \Delta$ is the point sent to $p$ under $f$, which follows from combining Propositions~\ref{inter} and~\ref{twist}. \end{proof} \begin{lm} \label{contains-rat} A general BN-curve $f \colon C \to \pp^r$ can be specialized to an unramified map from a reducible curve $f^\circ \colon X \cup_\Gamma Y \to \pp^r$, where $f^\circ|_X$ is a rational normal curve. \end{lm} \begin{proof} Write $d$ and $g$ for the degree and genus of $f$. We first note it suffices to produce a marked WBN-curve $(f^\circ_2 \colon Y \to \pp^r, \Gamma_2)$ of degree $d - r$ and genus $g' \geq g - r - 1$, passing through a set $\Gamma$ of $g + 1 - g'$ general points. Indeed, $g + 1 - g' \leq g + 1 - (g - r - 1) = r + 2$ by assumption; by Lemma~\ref{gen-tang-rat}, there is a marked rational normal curve $(f^\circ_1 \colon X \to \pp^r, \Gamma_1)$ passing through $\Gamma$, whose tangent lines at $\Gamma$ are distinct from the tangent lines of $(f_2^\circ, \Gamma_2)$ at~$\Gamma$. Then $f^\circ \colon X \cup_\Gamma Y \to \pp^r$ is unramified (as promised by our conventions) and gives the required specialization by Proposition~\ref{prop:glue}. It remains to construct $(f_2^\circ \colon Y \to \pp^r, \Gamma_2)$. 
If $g \leq r$, then we note that since $d$ and $g$ are integers, \[d \geq d - \frac{\rho(d, g, r)}{r + 1} = g + r - \frac{g}{r + 1} \imp d \geq g + r \quad \Leftrightarrow \quad g + 1 \leq (d - r) + 1.\] Consequently, by inspection, there is a marked rational curve $(f_2^\circ \colon Y \to \pp^r, \Gamma_2)$ of degree $d - r$ passing through a set $\Gamma$ of $g + 1$ general points. On the other hand, if $g \geq r + 1$, then we note that \[\rho(d - r, g - r - 1, r) = (r + 1)(d - r) - r(g - r - 1) - r(r + 1) = (r + 1)d - rg - r(r + 1) = \rho(d, g, r) \geq 0.\] We may therefore let $(f_2^\circ \colon Y \to \pp^r, \Gamma_2)$ be a marked BN-curve of degree $d - r$ and genus $g - r - 1$ passing through a set $\Gamma$ of $r + 2$ general points. \end{proof} \begin{lm} \label{gen-tang} Let $\Gamma \subset \pp^r$ be a general set of $n \leq r + 2$ points, $\{\ell_p : p \in \Gamma\}$ be a set of lines with $p \in \ell_p$, and $d$ and $g$ be integers with $\rho(d, g, r) \geq 0$. Then the general marked BN-curve $(f \colon C \to \pp^r, \Delta)$ of degree $d$ and genus $g$ passing through $\Gamma$ has tangent lines at every $p \in \Gamma$ which are distinct from $\ell_p$. \end{lm} \begin{proof} By Lemma~\ref{contains-rat}, we may specialize $f \colon C \to \pp^r$ to $f^\circ \colon X \cup_\Gamma Y \to \pp^r$ where $f^\circ|_X$ is a rational normal curve. Specializing the marked points $\Delta$ to lie on $X$ (which can be done since a marked rational normal curve can pass through $n \leq r + 2$ general points by Proposition~\ref{inter}), it suffices to consider the case when $f$ is a rational normal curve. But this case was already considered in Lemma~\ref{gen-tang-rat}. \end{proof} \begin{lm} \label{contains-rat-sp} Lemma~\ref{contains-rat} remains true even if we instead ask $f^\circ|_X$ to be an arbitrary nondegenerate specialization of a rational normal curve. 
\end{lm} \begin{proof} We employ the construction used in the proof of Lemma~\ref{contains-rat}, but flipping the order in which we construct $X$ and $Y$: First we fix $(f_1^\circ \colon X \to \pp^r, \Gamma_1)$; then we construct $(f_2^\circ \colon Y \to \pp^r, \Gamma_2)$ passing through $\Gamma$, whose tangent lines at $\Gamma$ are distinct from the tangent lines of $(f_1^\circ, \Gamma_1)$ at $\Gamma$ thanks to Lemma~\ref{gen-tang}. \end{proof} \section{Inductive Arguments \label{sec:indarg}} Let $f \colon C \cup_u L \to \pp^r$ be an unramified map from a reducible curve, with $L \simeq \pp^1$, and $u$ a single point, and $f|_L$ of degree~$1$. By Proposition~\ref{prop:glue}, these curves are BN-curves. \begin{lm} \label{p4-add-line} If $H^1(N_{f|_C}(-1)) = 0$, then $H^1(N_f(-1)) = 0$. \end{lm} \begin{proof} This is immediate from Lemma~\ref{glue} (taking $D = L$). \end{proof} \begin{lm} \label{p3-add-line} If $H^1(N_{f|_C}(-2)) = 0$, and $f$ is a general map of the above type extending $f|_C$, then $H^1(N_f(-2)) = 0$. \end{lm} \begin{proof} By Lemma~\ref{addone-raw}, it suffices to prove that for $(u, v) \in C \times \pp^3$ general, \[H^1(N_{f|_C}(-2)(u)[2u \to v]) = 0.\] Since $H^1(N_{f|_C}(-2)) = 0$, we also have $H^1(N_{f|_C}(-2)(u)) = 0$; in particular, Riemann-Roch implies \begin{align*} \dim H^0(N_{f|_C}(-2)(u)) &= \chi(N_{f|_C}(-2)(u)) = 2 \\ \dim H^0(N_{f|_C}(-2)) &= \chi(N_{f|_C}(-2)) = 0. \end{align*} The above dimension estimates imply there is a unique section $s \in \pp H^0(N_{f|_C}(-2)(u))$ with $s|_u \in N_{f|_C \to v}|_u$; it remains to show that for $(u, v)$ general, $\langle s|_{2u} \rangle \neq N_{f|_C \to v}|_{2u}$. For this, it suffices to verify that if $v_1$ and $v_2$ are points with $\{v_1, v_2, f(2u)\}$ coplanar --- but neither $\{v_1, v_2, f(u)\}$, nor $\{v_1, f(2u)\}$, nor $\{v_2, f(2u)\}$ collinear; and $\{v_1, v_2, f(3u)\}$ not coplanar --- then $N_{f|_C \to v_1}|_{2u} \neq N_{f|_C \to v_2}|_{2u}$. 
To show this, we choose a local coordinate $t$ on $C$, and coordinates on an appropriate affine open $\aa^3 \subset \pp^3$, so that: \begin{align*} f(t) &= (t, t^2 + O(t^3), O(t^3)) \\ v_1 &= (1 , 0 , 1) \\ v_2 &= (-1 , 0 , 1). \end{align*} It remains to check that the vectors $f(t) - v_1$, $f(t) - v_2$, and $\frac{d}{dt} f(t)$ are linearly independent at first order in $t$. That is, we want to check that the determinant \[\left|\begin{array}{ccc} t - 1 & t^2 + O(t^3) & O(t^3) - 1 \\ t + 1 & t^2 + O(t^3) & O(t^3) - 1 \\ 1 & 2t + O(t^2) & O(t^2) \end{array}\right|\not\equiv 0 \mod t^2.\] Or, reducing the entries of the left-hand side modulo $t^2$, that \[-4t = \left|\begin{array}{ccc} t - 1 & 0 & - 1 \\ t + 1 & 0 & - 1 \\ 1 & 2t & 0 \end{array}\right|\not\equiv 0 \mod t^2,\] which is clear. \end{proof} \begin{lm} \label{add-can-3} Let $\Gamma \subset \pp^3$ be a set of $5$ general points, $(f_1 \colon C \to \pp^3, \Gamma_1)$ be a general marked BN-curve passing through $\Gamma$, and $(f_2 \colon D \to \pp^3, \Gamma_2)$ be a general marked canonical curve passing through $\Gamma$. If $H^1(N_{f_1}(-2)) = 0$, then $f \colon C \cup_\Gamma D \to \pp^3$ satisfies $H^1(N_f(-2)) = 0$. \end{lm} \begin{rem} By Lemma~\ref{small-irred}, it makes sense to speak of a ``general marked BN-curve (respectively general marked canonical curve) passing through $\Gamma$''; by Lemma~\ref{gen-tang}, the resulting curve $f$ is unramified.
\end{rem} \begin{proof} By Lemma~\ref{glue}, our problem reduces to showing that the natural map \[H^0(N_{f_2} (-2)) \to \bigoplus_{p \in \Gamma} \left(\frac{T_p (\pp^r)}{f_* (T_p (C \cup_\Gamma D))}\right)\] is surjective, and that \[H^1(N_f|_D (-2)) = 0.\] These conditions both being open, we may invoke Lemma~\ref{contains-rat} to specialize $(f_1 \colon C \to \pp^3, \Gamma_1)$ to a marked BN-curve with reducible source $(f_1^\circ \colon C_1 \cup_\Delta C_2 \to \pp^3, \Gamma_1^\circ)$, with $f_1^\circ|_{C_1}$ a rational normal curve and $\Gamma_1^\circ \subset C_1$. It thus suffices to prove the above statements in the case when $f_1 = f_1^\circ$ is a rational normal curve. For this, we first observe that $f(C) \cap f(D) = \Gamma$: Since there is a unique rational normal curve through any $6$ points, and a $1$-dimensional family of possible sixth points on $D$ once $D$ and $\Gamma$ are fixed --- but there is a $2$-dimensional family of rational normal curves through $5$ points in linear general position --- dimension counting shows $f_1(C)$ and $f_2(D)$ cannot meet at a sixth point for $([f_1, \Gamma_1], [f_2, \Gamma_2])$ general. In particular, $f$ is an immersion. Next, we observe that the space of cubics vanishing on $f(D)$ is $5$-dimensional. Since it is one linear condition for a cubic vanishing on $f(D)$ to be tangent to $f(C)$ at a given point of $\Gamma$, there is necessarily a cubic surface $S$ containing $f(D)$ which is tangent to $f(C)$ at four points of $\Gamma$. If $S$ were a multiple of the unique quadric $Q$ containing $f(D)$, say $Q \cdot H$ where $H$ is a hyperplane, then since $f(C)$ is transverse to $Q$, it would follow that $H$ contains four points of $\Gamma$. But any $4$ points on $f(C)$ are in linear general position. Consequently, $S$ is not a multiple of $Q$. Equivalently, $f(D) = Q \cap S$ gives a presentation of $f(D)$ as a complete intersection. 
If $S$ were tangent to $f(C)$ at all five points of $\Gamma$, then restricting the equation of $S$ to $f(C)$ would give a section of $\oo_C(3) \simeq \oo_{\pp^1}(9)$ which vanished with multiplicity two at five points. Since the only such section is the zero section, we would conclude that $f(C) \subset S$. But then $f(C)$ would meet $f(D)$ at all $6$ points of $f(C) \cap Q$, which we already ruled out above. Thus, $S$ is tangent to $f(C)$ at precisely four points of $\Gamma$. Write $\Delta$ for the divisor on $D$ defined by these four points, and $p$ for the fifth point. Note that for $q \neq p$ in the tangent line to $(f_1, \Delta \cup \{p\})$ at $p$, \begin{align*} N_f|_D &\simeq \big(N_{f(D)/S}(\Delta + p) \oplus N_{f(D)/Q}(p)\big)[p \to q] \\ &\simeq \big(\oo_D(2)(\Delta + p) \oplus \oo_D(3)(p)\big)[p \to q] \\ \Rightarrow \ N_f|_D(-2) &\simeq \big(\oo_D(\Delta + p) \oplus \oo_D(1)(p)\big)[p \to q] \\ &\simeq \big(\oo_D(\Delta + p) \oplus K_D(p)\big)[p \to q]. \end{align*} By Riemann-Roch, $\dim H^0(K_D(p)) = 4 = \dim H^0(K_D)$; so every section of $K_D(p)$ vanishes at $p$. Consequently, the fiber of every section of $\oo_D(\Delta + p) \oplus K_D(p)$ at $p$ lies in the fiber of the first factor. 
Since the fiber $N_{f_2 \to q}|_p$ does not lie in the fiber of the first factor, we have an isomorphism \[H^0(N_f|_D(-2)) \simeq H^0\Big(\big(\oo_D(\Delta + p) \oplus K_D(p)\big)(-p)\Big) \simeq H^0(\oo_D(\Delta)) \oplus H^0(K_D).\] Consequently, \[\dim H^0(N_f|_D(-2)) = \dim H^0(\oo_D(\Delta)) + \dim H^0(K_D) = 1 + 4 = 5 = \chi(N_f|_D(-2)),\] which implies \[H^1(N_f|_D(-2)) = 0.\] \noindent Next, we prove the surjectivity of the evaluation map \[\text{ev} \colon H^0(N_{f_2}(-2)) \to \bigoplus_{x \in \Gamma} \left(\frac{T_x (\pp^r)}{f_* (T_x (C \cup_\Gamma D))}\right)\] For this, we use the isomorphism \[N_{f_2}(-2) \simeq N_{f(D)/\pp^3}(-2) \simeq N_{f(D)/S}(-2) \oplus N_{f(D)/Q}(-2) \simeq \oo_D \oplus K_D.\] The restriction of $\text{ev}$ to $H^0(N_{f(D)/S}(-2) \simeq \oo_D)$ maps trivially into the quotient $\frac{T_x (\pp^r)}{f_*(T_x (C \cup_\Gamma D))}$ for $x \in \Delta$, since $S$ is tangent to $f(C)$ along $\Delta$. Because $S$ is not tangent to $f(C)$ at $p$, the restriction of $\text{ev}$ to $H^0(N_{f(D)/S}(-2) \simeq \oo_D)$ thus maps isomorphically onto the factor $\frac{T_p (\pp^r)}{f_*(T_p (C \cup_\Gamma D))}$. It is therefore sufficient to show that the evaluation map \[H^0(N_{f(D)/Q}(-2) \simeq K_D) \to \bigoplus_{x \in \Delta} \left(\frac{T_x (\pp^r)}{f_*(T_x (C \cup_\Gamma D))}\right)\] is surjective. Or equivalently, since $Q$ is not tangent to $f(C)$ at any $x \in \Delta$, that the evaluation map \[H^0(K_D) \to K_D|_\Delta\] is surjective. But this is clear since $\dim H^0(K_D) = 4 = \# \Delta$ and $\Delta$ is a general effective divisor of degree~$4$ on $D$. \end{proof} \begin{lm} \label{to-3-skew} Let $f \colon C \to \pp^4$ be a general BN-curve in $\pp^4$, of arbitrary degree and genus. 
Then we can specialize $f$ to an unramified map from a reducible curve $f^\circ \colon C' \cup L_1 \cup L_2 \cup L_3 \to \pp^4$, so that each $L_i$ is rational, $f^\circ|_{L_i}$ is of degree~$1$, and the images of the $L_i$ under $f^\circ$ are in linear general position. \end{lm} \begin{proof} By Lemma~\ref{contains-rat-sp}, our problem reduces to the case where $f\colon C \to \pp^4$ is a rational normal curve. In this case, we begin by taking three general lines in $\pp^4$. The locus of lines meeting any one of our lines has class $\sigma_2$ in the Chow ring of the Grassmannian $\mathbb{G}(1, 4)$ of lines in $\pp^4$. By the standard calculus of Schubert cycles, we have $\sigma_2^3 = \sigma_{3,3} \neq 0$ in the Chow ring of $\mathbb{G}(1, 4)$. Thus, there exists a line meeting each of our three given lines. The (immersion of the) union of these four lines is then a specialization of a rational normal curve. \end{proof} \begin{lm} \label{add-can-4} Let $\Gamma \subset \pp^4$ be a set of $6$ points in linear general position; $(f_1 \colon C \to \pp^4, \Gamma_1)$ be either a general marked immersion of three disjoint lines, or a general marked BN-curve in $\pp^4$, passing through $\Gamma$; and $(f_2 \colon D \to \pp^4, \Gamma_2)$ be a general marked canonical curve passing through~$\Gamma$. If $H^1(N_{f_1}(-1)) = 0$, then $f \colon C \cup_\Gamma D \to \pp^4$ satisfies $H^1(N_f(-1)) = 0$. \end{lm} \begin{proof} By Lemma~\ref{glue}, it suffices to prove that the natural map \[H^0(N_{f_2}(-1)) \to \bigoplus_{p \in \Gamma} \left(\frac{T_p(\pp^r)}{f_*(T_p(C \cup_\Gamma D))}\right)\] is surjective, and that \[H^1(N_f|_D(-1)) = 0.\] These conditions both being open, we may apply Lemma~\ref{to-3-skew} to specialize $(f_1, \Gamma_1)$ to a marked curve with reducible source $(f_1^\circ \colon C_1 \cup C_2 \to \pp^r, \Gamma_1^\circ)$, with $C_1 = L_1 \cup L_2 \cup L_3$ a union of $3$ disjoint lines, and $\Gamma_1^\circ \subset C_1$ with $2$ points on each line. 
It thus suffices to prove the above statements in the case when $C = C_1 = L_1 \cup L_2 \cup L_3$ is the union of $3$ general lines. Write $\Gamma = \Gamma_1 \cup \Gamma_2 \cup \Gamma_3$, where $\Gamma_i \subset L_i$. It is well known that a general canonical curve in $\pp^4$ is the complete intersection of three quadrics; write $V$ for the vector space of quadrics vanishing along $f(D)$. For any $2$-secant line $L$ to $f(D)$, containing $L$ is evidently one linear condition on quadrics in $V$; and moreover, general $2$-secant lines impose independent conditions unless there is a quadric which contains all $2$-secant lines. Now the projection of $f(D)$ from a general line in $\pp^4$ yields a nodal plane curve of degree $8$ and geometric genus $5$, which in particular must have \[\binom{8 - 1}{2} - 5 = 16\] nodes. Consequently, the secant variety to $f(D)$ is a hypersurface of degree $16$; and is thus not contained in a quadric. Thus, vanishing on general $2$-secant lines imposes independent conditions on~$V$. As $f(L_1)$, $f(L_2)$, and $f(L_3)$ are general, we may thus choose a basis $V = \langle Q_1, Q_2, Q_3 \rangle$ so that $Q_i$ contains $L_j$ if and only if $i \neq j$ (where the $Q_i$ are uniquely defined up to scaling). By construction, $f(D)$ is the complete intersection $Q_1 \cap Q_2 \cap Q_3$. 
We now consider the direct sum decomposition \[N_{f_2} \simeq N_{f(D)/\pp^4} \simeq N_{f(D)/(Q_1 \cap Q_2)} \oplus N_{f(D)/(Q_2 \cap Q_3)} \oplus N_{f(D)/(Q_3 \cap Q_1)},\] which induces a direct sum decomposition \[N_f|_D \simeq N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3) \oplus N_{f(D)/(Q_2 \cap Q_3)}(\Gamma_1) \oplus N_{f(D)/(Q_3 \cap Q_1)}(\Gamma_2).\] To show that $H^1(N_f|_D(-1)) = 0$, it is sufficient by symmetry to show that \[H^1(N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3)(-1)) = 0.\] But we have \[N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3)(-1) \simeq \oo_D(2)(\Gamma_3)(-1) \simeq \oo_D(1)(\Gamma_3) = K_D(\Gamma_3);\] so by Serre duality, \[H^1(N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3)(-1)) \simeq H^0(\oo_D(-\Gamma_3))^\vee = 0.\] \noindent Next, we examine the evaluation map \[H^0(N_{f_2}(-1)) \to \bigoplus_{p \in \Gamma} \left(\frac{T_p(\pp^r)}{f_*(T_p(C \cup_\Gamma D))}\right).\] For this, we use the direct sum decomposition \[N_{f_2}(-1) \simeq N_{f(D)/\pp^4}(-1) \simeq N_{f(D)/(Q_1 \cap Q_2)}(-1) \oplus N_{f(D)/(Q_2 \cap Q_3)}(-1) \oplus N_{f(D)/(Q_3 \cap Q_1)}(-1),\] together with the decomposition (for $p \in \Gamma_i$): \[\frac{T_p (\pp^r)}{f_*(T_p(C \cup_{\Gamma_i} L_i))} \simeq \bigoplus_{j \neq i} N_{f(D)/(Q_i \cap Q_j)}|_p.\] This reduces our problem to showing (by symmetry) the surjectivity of \[H^0(N_{f(D)/(Q_1 \cap Q_2)}(-1)) \to \bigoplus_{p \in \Gamma_1 \cup \Gamma_2} N_{f(D)/(Q_1 \cap Q_2)}|_p.\] But for this, it is sufficient to note that $\Gamma_1 \cup \Gamma_2$ is a general collection of $4$ points on $D$, and \[N_{f(D)/(Q_1 \cap Q_2)}(-1) \simeq \oo_D(2)(-1) = \oo_D(1) \simeq K_D.\] It thus remains to show \[H^0(K_D) \to K_D|_{\Gamma_1 \cup \Gamma_2}\] is surjective, where $\Gamma_1 \cup \Gamma_2$ is a general collection of $4$ points on $D$. But this is clear because $K_D$ is a line bundle and $\dim H^0(K_D) = 5 \geq 4$. 
\end{proof} \begin{cor} \label{finite} To prove the main theorems (excluding the ``conversely\ldots'' part), it suffices to verify them in the following special cases: \begin{enumerate} \item For Theorem~\ref{main-3}, it suffices to consider the cases where $(d, g)$ is one of: \begin{gather*} (5, 1), \quad (7, 2), \quad (6, 3), \quad (7, 4), \quad (8, 5), \quad (9, 6), \quad (9, 7), \\ (10, 9), \quad (11, 10), \quad (12, 12), \quad (13, 13), \quad (14, 14). \end{gather*} \item For Theorem~\ref{main-3-1}, it suffices to consider the cases where $(d, g)$ is one of: \[(7, 5), \quad (8, 6).\] \item For Theorem~\ref{main-4}, it suffices to consider the cases where $(d, g)$ is one of: \[(9, 5), \quad (10, 6), \quad (11, 7), \quad (12, 9), \quad (16, 15), \quad (17, 16), \quad (18, 17).\] \end{enumerate} In proving the theorems in each of these cases, we may suppose the corresponding theorem holds for curves of smaller genus. \end{cor} \begin{proof} For Theorem~\ref{main-3}, note that by Lemma~\ref{p3-add-line} and Proposition~\ref{prop:glue}, it suffices to show Theorem~\ref{main-3} for each pair $(d, g)$, where $d$ is minimal (i.e.,\ where $\rho(d, g) = \rho(d, g, r = 3) \geq 0$ and $(d, g)$ is not in our list of counterexamples; but either $\rho(d - 1, g) < 0$, or $(d - 1, g)$ is in our list of counterexamples). If $\rho(d, g) \geq 0$ and $g \geq 15$, then $(d - 6, g - 8)$ is not in our list of counterexamples, and $\rho(d - 6, g - 8) = \rho(d, g) \geq 0$. By induction, we know $H^1(N_f(-2)) = 0$ for $f$ a general BN-curve of degree $d - 6$ and genus $g - 8$. Applying Lemma~\ref{add-can-3} (and Proposition~\ref{prop:glue}), we conclude the desired result. If $\rho(d, g) \geq 0$ and $g \leq 14$, and $d$ is minimal as above, then either $(d, g)$ is in our above list, or $(d, g) \in \{(3, 0), (9, 8), (12, 11)\}$. The case of $(d, g) = (3, 0)$ follows from Lemma~\ref{from-inter}. 
But in these last two cases, Lemma~\ref{add-can-3} again implies the desired result (using Theorem~\ref{main-3} for $(d', g') = (d - 6, g - 8)$ as our inductive hypothesis). For Theorem~\ref{main-3-1}, we note that if $H^1(N_f(-2)) = 0$, then it follows that $H^1(N_f(-1)) = 0$. It therefore suffices to check the list of counterexamples appearing in Theorem~\ref{main-3} besides the counterexample $(d, g) = (6, 4)$ listed in Theorem~\ref{main-3-1}. The cases $(d, g) \in \{(4, 1), (5, 2), (6, 2)\}$ follow from Lemma~\ref{from-inter}, so we only have to consider the remaining cases (which form the given list). Finally, for Theorem~\ref{main-4}, Lemma~\ref{p4-add-line} implies it suffices to show Theorem~\ref{main-4} for each pair $(d, g)$ with $d$ minimal. If $\rho(d, g) \geq 0$ and $g \geq 18$, then $(d - 8, g - 10)$ is not in our list of counterexamples, and $\rho(d - 8, g - 10) = \rho(d, g) \geq 0$. By induction, we know $H^1(N_f(-1)) = 0$ for $f$ a general BN-curve of degree $d - 8$ and genus $g - 10$. Applying Lemma~\ref{add-can-4}, we conclude the desired result. If $\rho(d, g) \geq 0$ and $g \leq 17$, and $d$ is minimal as above, then either $(d, g)$ is in our above list, or \[(d, g) \in \{(4, 0), (5, 1), (6, 2), (7, 3), (8, 4)\},\] or \[(d, g) \in \{(11, 8), (12, 10), (13, 11), (14, 12), (15, 13), (16, 14)\}.\] In the first set of cases above, Lemma~\ref{from-inter} implies the desired result. But in the last set of cases, Lemma~\ref{add-can-4} again implies the desired result. Here, for $(d, g) = (11, 8)$, our inductive hypothesis is that $H^1(N_f(-1)) = 0$ for $f \colon L_1 \cup L_2 \cup L_3 \to \pp^4$ an immersion of three skew lines. In the remaining cases, we use Theorem~\ref{main-4} for $(d', g') = (d - 8, g - 10)$ as our inductive hypothesis. 
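The equalities $\rho(d - 6, g - 8) = \rho(d, g)$ (for $r = 3$) and $\rho(d - 8, g - 10) = \rho(d, g)$ (for $r = 4$) used in the proof above are elementary; as a sanity check, the following short sketch verifies them using the standard Brill-Noether number $\rho(d, g, r) = g - (r+1)(g - d + r)$:

```python
# Standard Brill-Noether number rho(d, g, r) = g - (r+1)(g - d + r).
def rho(d, g, r):
    return g - (r + 1) * (g - d + r)

# r = 3: the inductive step of the proof changes (d, g) by (+6, +8).
for d in range(1, 30):
    for g in range(0, 30):
        assert rho(d + 6, g + 8, 3) == rho(d, g, 3)

# r = 4: the inductive step changes (d, g) by (+8, +10).
for d in range(1, 30):
    for g in range(0, 30):
        assert rho(d + 8, g + 10, 4) == rho(d, g, 4)
```

Both invariances are identities in $d$ and $g$, as one sees by expanding $\rho$; the loops above merely spot-check them on a range of values.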
\end{proof} \section{Adding Curves in a Hyperplane \label{sec:hir}} In this section, we explain an inductive strategy involving adding curves contained in hyperplanes, which will help resolve many of our remaining cases. \begin{lm} \label{smoothable} Let $H \subset \pp^r$ (for $r \geq 3$) be a hyperplane, and let $(f_1 \colon C \to \pp^r, \Gamma_1)$ and \mbox{$(f_2 \colon D \to H, \Gamma_2)$} be marked curves, both passing through a set $\Gamma \subset H \subset \pp^r$ of $n \geq 1$ points. Assume that $f_2$ is a general BN-curve of degree $d$ and genus $g$ to $H$, that $\Gamma_2$ is a general collection of $n$ points on $D$, and that $f_1$ is transverse to $H$ along $\Gamma$. If \[H^1(N_{f_1}(-\Gamma)) = 0 \quad \text{and} \quad n \geq g - d + r,\] then $f \colon C \cup_\Gamma D \to \pp^r$ satisfies $H^1(N_f) = 0$ and is a limit of unramified maps from smooth curves. If in addition $f_1$ is an immersion, $f(C) \cap f(D)$ is exactly equal to $\Gamma$, and $\oo_D(1)(\Gamma)$ is very ample away from $\Gamma$ --- i.e.\ if $\dim H^0(\oo_D(1)(\Gamma)(-\Delta)) = \dim H^0(\oo_D(1)(\Gamma)) - 2$ for any effective divisor $\Delta$ of degree $2$ supported on $D \smallsetminus \Gamma$ --- then $f$ is a limit of immersions of smooth curves. \end{lm} \begin{rem} \label{very-ample-away} The condition that $\oo_D(1)(\Gamma)$ is very ample away from $\Gamma$ is immediate when $\oo_D(1)$ is very ample (which in particular happens for $r \geq 4$). It is also immediate when $n \geq g$, in which case $\oo_D(1)(\Gamma)$ is a general line bundle of degree $d + n \geq g + r \geq g + 3$ and is thus very ample. \end{rem} \begin{proof} Note that $N_{f_1}$ is a subsheaf of $N_f|_C$ with punctual quotient (supported at $\Gamma$). Twisting down by $\Gamma$, we obtain a short exact sequence \[0 \to N_{f_1}(-\Gamma) \to N_f|_C(-\Gamma) \to * \to 0,\] where $*$ denotes a punctual sheaf, which in particular has vanishing $H^1$. 
Since $H^1(N_{f_1}(-\Gamma)) = 0$ by assumption, we conclude that $H^1(N_f|_C(-\Gamma)) = 0$ too. Since $f_2$ is a general BN-curve, $H^1(N_{f_2}) = 0$. The exact sequences \begin{gather*} 0 \to N_f|_C(-\Gamma) \to N_f \to N_f|_D \to 0 \\ 0 \to N_{f_2} \to N_f|_D \to N_H|_D(\Gamma) \simeq \oo_D(1)(\Gamma) \to 0 \end{gather*} then imply that, to check $H^1(N_f) = 0$, it suffices to check $H^1(\oo_D(1)(\Gamma)) = 0$. They moreover imply that every section of $N_H|_D(\Gamma) \simeq \oo_D(1)(\Gamma)$ lifts to a section of $N_f$, which, as $H^1(N_f) = 0$, lifts to a global deformation of $f$. To check $f$ is a limit of unramified maps from smooth curves, it remains to see that the generic section of $N_H|_D(\Gamma) \simeq \oo_D(1)(\Gamma)$ corresponds to a first-order deformation which smoothes the nodes $\Gamma$ --- or equivalently does not vanish at $\Gamma$. Since by assumption $f_1$ is an immersion and there are no other nodes where $f(C)$ and $f(D)$ meet besides $\Gamma$, to see that $f$ is a limit of immersions of smooth curves, it remains to note in addition that the generic section of $N_H|_D(\Gamma) \simeq \oo_D(1)(\Gamma)$ separates the points of $D$ identified under $f_2$ --- which is true by assumption that $\oo_D(1)(\Gamma)$ is very ample away from $\Gamma$. To finish the proof, it thus suffices to check $H^1(\oo_D(1)(\Gamma)) = 0$, and that the generic section of $\oo_D(1)(\Gamma)$ does not vanish at any point $p \in \Gamma$. Equivalently, it suffices to check $H^1(\oo_D(1)(\Gamma)(-p)) = 0$ for $p \in \Gamma$. Since $f_2$ is a general BN-curve, we obtain \[\dim H^1(\oo_D(1)) = \max(0, g - d + (r - 1)) \leq n - 1.\] Twisting by $\Gamma \smallsetminus \{p\}$, which is a set of $n - 1$ general points, we therefore obtain \[H^1(\oo_D(1)(\Gamma \smallsetminus \{p\})) = 0,\] as desired. 
\end{proof} \begin{lm} \label{lm:hir} Let $k \geq 1$ be an integer, $\iota \colon H \hookrightarrow \pp^r$ ($r \geq 3$) be a hyperplane, and $(f_1 \colon C \to \pp^r, \Gamma_1)$ and \mbox{$(f_2 \colon D \to H, \Gamma_2)$} be marked curves, both passing through a set $\Gamma \subset H \subset \pp^r$ of $n \geq 1$ points. Assume that $f_2$ is a general BN-curve of degree $d$ and genus $g$ to $H$, that $\Gamma_2$ is a general collection of $n$ points on $D$, and that $f_1$ is transverse to $H$ along $\Gamma$. Suppose moreover that: \begin{enumerate} \item The bundle $N_{f_2}(-k)$ satisfies interpolation. \item We have $H^1(N_{f_1}(-k)) = 0$. \item We have \[(r - 2) n \leq rd - (r - 4)(g - 1) - k \cdot (r - 2) d.\] \item We have \[n \geq \begin{cases} g & \text{if $k = 1$;} \\ g - 1 + (k - 1)d & \text{if $k > 1$.} \end{cases}\] \end{enumerate} Then $f \colon C \cup_\Gamma D \to \pp^r$ satisfies \[H^1(N_f(-k)) = 0.\] \end{lm} \begin{proof} Since $N_{f_2}(-k)$ satisfies interpolation by assumption and \[(r - 2) n \leq \chi(N_{f_2}(-k)) = rd - (r - 4)(g - 1) - k \cdot (r - 2) d,\] we conclude that $H^1(N_{f_2}(-k)(-\Gamma)) = 0$. Since $H^1(N_{f_1} (-k)) = 0$ by assumption, to apply Lemma~\ref{hyp-glue} it remains to check \[H^1(\oo_D(1 - k)(\Gamma)) = 0.\] It is therefore sufficient for \[n = \#\Gamma \geq \dim H^1(\oo_D(1 - k)) = \begin{cases} g & \text{if $k = 1$;} \\ g - 1 + (k - 1)d & \text{if $k > 1$.} \end{cases}\] But this is precisely our final assumption. \end{proof} \section{Curves of Large Genus \label{sec:hir-2}} In this section, we will deal with a number of our special cases, of larger genus. Taking care of these cases separately is helpful --- since in the remaining cases, we will not have to worry about whether our curve is a BN-curve, thanks to results of~\cite{iliev} and~\cite{keem} on the irreducibility of the Hilbert scheme of curves. 
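Several of the applications below and in Section~\ref{sec:hir-3} invoke Lemma~\ref{lm:hir}, whose numerical hypotheses (3) and (4) are mechanical to verify. The following sketch checks them for the parameter tuples $(r, k, d, g, n)$ used in Lemmas~\ref{lm:ind:3} and~\ref{lm:ind:4} (the tuples are read off from those lemmas; the helper function is purely illustrative):

```python
# Conditions (3) and (4) of the lemma on adding a curve in a hyperplane:
#   (3)  (r - 2) n <= r d - (r - 4)(g - 1) - k (r - 2) d
#   (4)  n >= g            if k = 1
#        n >= g - 1 + (k-1) d   if k > 1
def conditions_ok(r, k, d, g, n):
    cond3 = (r - 2) * n <= r * d - (r - 4) * (g - 1) - k * (r - 2) * d
    cond4 = n >= (g if k == 1 else g - 1 + (k - 1) * d)
    return cond3 and cond4

# r = 3, k = 2: the three (d_2, g_2, n) triples used for P^3.
assert conditions_ok(3, 2, 3, 1, 3)
assert conditions_ok(3, 2, 4, 0, 3)
assert conditions_ok(3, 2, 4, 2, 5)
# r = 4, k = 1: the triple (6, 3, 4) used for P^4.
assert conditions_ok(4, 1, 6, 3, 4)
```

In each of these cases the inequality (3) is in fact an equality or near-equality, so the number of gluing points $n$ is chosen as large as interpolation permits.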
\begin{lm} \label{bn3} Let $H \subset \pp^3$ be a plane, $\Gamma \subset H \subset \pp^3$ a set of $6$ general points, $(f_1 \colon C \to \pp^3, \Gamma_1)$ a general marked BN-curve passing through $\Gamma$ of degree and genus one of \[(d, g) \in \{(6, 1), (7, 2), (8, 4), (9, 5), (10, 6)\},\] and $(f_2 \colon D \to H, \Gamma_2)$ a general marked canonical curve passing through $\Gamma$. Then $f \colon C \cup_\Gamma D \to \pp^3$ is a BN-curve which satisfies $H^1(N_f) = 0$. \end{lm} \begin{proof} Note that the conclusion is an open condition; we may therefore freely specialize $(f_1, \Gamma_1)$. Write $\Gamma = \{s, t, u, v, w, x\}$. In the case $(d, g) = (6, 1)$, we specialize $(f_1, \Gamma_1)$ to $(f_1^\circ \colon C^\circ = C_1 \cup_p C_2 \cup_{\{q, r\}} C_3 \to \pp^3, \Gamma_1^\circ)$, where $f_1^\circ|_{C_1}$ is a conic, $f_1^\circ|_{C_2}$ is a line with $C_2$ joined to $C_1$ at one point $p$, and $f_1^\circ|_{C_3}$ is a rational normal curve with $C_3$ joined to $C_1$ at two points $\{q, r\}$; note that $f_1^\circ$ is a BN-curve by (iterative application of) Proposition~\ref{prop:glue}. We suppose that $(f_1^\circ|_{C_1}, \Gamma_1^\circ \cap C_1)$ passes through $\{s, t\}$, while $(f_1^\circ|_{C_2}, \Gamma_1^\circ \cap C_2)$ passes through $u$, and $(f_1^\circ|_{C_3}, \Gamma_1^\circ \cap C_3)$ passes through $\{v, w, x\}$; it is clear this can be done so $\{s, t, u, v, w, x\}$ are general. Writing \[f^\circ \colon C^\circ \cup_\Gamma D = C_2 \cup_{\{p, u\}} C_3 \cup_{\{q, r, v, w, x\}} (C_1 \cup_{\{s, t\}} D) \to \pp^3,\] it suffices by Propositions~\ref{prop:glue} and~\ref{prop:interior} to show that $f^\circ|_{C_1 \cup D}$ is a BN-curve which satisfies $H^1(N_{f^\circ|_{C_1 \cup D}}) = 0$. 
For $(d, g) = (8, 4)$, we specialize $(f_1, \Gamma_1)$ to $(f_1^\circ \colon C^\circ = C_1 \cup_{\{p, q, r\}} C_2 \cup_{\{y, z, a\}} C_3 \to \pp^3, \Gamma_1^\circ)$, where $f_1^\circ|_{C_1}$ is a conic, and $f_1^\circ|_{C_2}$ and $f_1^\circ|_{C_3}$ are rational normal curves, with both $C_2$ and $C_3$ joined to $C_1$ at $3$ points (at $\{p, q, r\}$ and $\{y, z, a\}$ respectively); note that $f_1^\circ$ is a BN-curve by (iterative application of) Proposition~\ref{prop:glue}. We suppose that $(f_1^\circ|_{C_1}, \Gamma_1^\circ \cap C_1)$ passes through $\{s, t\}$, while $(f_1^\circ|_{C_2}, \Gamma_1^\circ \cap C_2)$ passes through $\{u, v\}$, and $(f_1^\circ|_{C_3}, \Gamma_1^\circ \cap C_3)$ passes through $\{w, x\}$; it is clear this can be done so $\{s, t, u, v, w, x\}$ are general. Writing \[f^\circ \colon C^\circ \cup_\Gamma D = C_2 \cup_{\{p, q, r, u, v\}} C_3 \cup_{\{w, x, y, z, a\}} (C_1 \cup_{\{s, t\}} D) \to \pp^3,\] it again suffices by Propositions~\ref{prop:glue} and~\ref{prop:interior} to show that $f^\circ|_{C_1 \cup D}$ is a BN-curve which satisfies $H^1(N_{f^\circ|_{C_1 \cup D}}) = 0$. For this, we first note that $f^\circ|_{C_1 \cup D}$ is a curve of degree $6$ and genus $4$, and that the moduli space of smooth curves of degree $6$ and genus $4$ in $\pp^3$ is irreducible (they are all canonical curves). Moreover, by Lemma~\ref{smoothable} (c.f.\ Remark~\ref{very-ample-away} and note that $\oo_D(1) \simeq K_D$ is very ample), $f^\circ|_{C_1 \cup D}$ is a limit of immersions of smooth curves, and satisfies $H^1(N_{f^\circ|_{C_1 \cup D}}) = 0$; this completes the proof. 
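As a bookkeeping check on the degree and genus in this lemma: gluing curves of degrees $d_1, d_2$ and genera $g_1, g_2$ at $n$ points yields degree $d_1 + d_2$ and arithmetic genus $g_1 + g_2 + n - 1$ (a standard formula, assumed here). With $f_2$ a canonical curve in the plane $H$ (a plane quartic, of degree $4$ and genus $3$) and $n = 6$, the five cases of the lemma produce exactly the pairs $(10, 9), \dots, (14, 14)$ appearing in the list of Corollary~\ref{finite}:

```python
def glue(d1, g1, d2, g2, n):
    # degree adds; gluing at n points adds n - 1 to the sum of the genera
    return (d1 + d2, g1 + g2 + n - 1)

cases = [(6, 1), (7, 2), (8, 4), (9, 5), (10, 6)]
glued = [glue(d, g, 4, 3, 6) for (d, g) in cases]
assert glued == [(10, 9), (11, 10), (12, 12), (13, 13), (14, 14)]
```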
\end{proof} \begin{lm} \label{bn4} Let $H \subset \pp^4$ be a hyperplane, $\Gamma \subset H \subset \pp^4$ a set of $7$ general points, \mbox{$(f_1 \colon C \to \pp^4, \Gamma_1)$} a general marked BN-curve passing through $\Gamma$ of degree and genus one of \[(d, g) \in \{(7, 3), (8, 4), (9, 5)\},\] and $(f_2 \colon D \to H, \Gamma_2)$ a general marked BN-curve of degree~$9$ and genus~$6$ passing through $\Gamma$. Then $f \colon C \cup_\Gamma D \to \pp^4$ is a BN-curve which satisfies $H^1(N_f) = 0$. \end{lm} \begin{proof} Again, we note that the conclusion is an open statement; we may therefore freely specialize $(f_1, \Gamma_1)$. Write $\Gamma = \{t, u, v, w, x, y, z\}$. First, we claim it suffices to consider the case $(d, g) = (7, 3)$. Indeed, suppose $(f_1, \Gamma_1)$ is a marked BN-curve of degree $7$ and genus $3$ passing through $\Gamma$. Then $f_1' \colon C \cup_{\{p, q\}} L \to \pp^4$ and $f_1'' \colon C \cup_{\{p, q\}} L \cup_{\{r, s\}} L' \to \pp^4$ (where $f_1'|_L$ and $f_1''|_L$ and $f_1''|_{L'}$ are lines with $L$ and $L'$ joined to $C$ at two points) are BN-curves by Proposition~\ref{prop:glue}, of degree and genus $(8, 4)$ and $(9, 5)$ respectively. If $f \colon C \cup_\Gamma D \to \pp^4$ is a BN-curve with $H^1(N_f) = 0$, then invoking Propositions~\ref{prop:glue} and~\ref{prop:interior}, both \begin{gather*} f' \colon (C \cup_{\{p, q\}} L) \cup_\Gamma D = (C \cup_\Gamma D) \cup_{\{p, q\}} L \to \pp^4 \\ \text{and} \quad f'' \colon (C \cup_{\{p, q\}} L \cup_{\{r, s\}} L') \cup_\Gamma D = (C \cup_\Gamma D) \cup_{\{p, q\}} L \cup_{\{r, s\}} L' \to \pp^4 \end{gather*} are BN-curves, which satisfy $H^1(N_{f'}) = H^1(N_{f''}) = 0$. So it remains to consider the case $(d, g) = (7, 3)$. 
In this case, we begin by specializing $(f_1, \Gamma_1)$ to $(f_1^\circ \colon C^\circ = C' \cup_{\{p, q\}} L \to \pp^4, \Gamma_1^\circ)$, where $f_1^\circ|_{C'}$ is a general BN-curve of degree $6$ and genus $2$, and $f_1^\circ|_L$ is a line with $L$ joined to $C'$ at two points $\{p, q\}$. We suppose that $(f_1^\circ|_L, \Gamma_1^\circ \cap L)$ passes through $t$, while $(f_1^\circ|_{C'}, \Gamma_1^\circ \cap C')$ passes through $\{u, v, w, x, y, z\}$; we must check this can be done so $\{t, u, v, w, x, y, z\}$ are general. To see this, it suffices to show that the intersection $f_1^\circ(C') \cap H$ and the points $\{f_1^\circ(p), f_1^\circ(q)\}$ are independently general. In other words, we are claiming that the map \[(f_1^\circ|_{C'} \colon C' \to \pp^4, p, q) \mapsto (f_1^\circ|_{C'}(C') \cap H, f_1^\circ|_{C'}(p), f_1^\circ|_{C'}(q))\] is dominant; equivalently, that it is smooth at a generic point $(f_1^\circ|_{C'}, p, q)$. But the obstruction to smoothness lies in $H^1(N_{f_1^\circ|_{C'}}(-1)(-p-q))$, which vanishes because $N_{f_1^\circ|_{C'}}(-1)$ satisfies interpolation by Lemma~\ref{from-inter}. We next specialize $(f_2, \Gamma_2)$ to $(f_2^\circ \colon D^\circ = D' \cup_\Delta D_1 \to H, \Gamma_2^\circ)$, where $f_2^\circ|_{D'}$ is a general BN-curve of degree $6$ and genus $3$, and $f_2^\circ|_{D_1}$ is a rational normal curve with $D_1$ joined to $D'$ at a set $\Delta$ of $4$ points; note that $f_2^\circ$ is a BN-curve by Proposition~\ref{prop:glue}. 
We suppose that $(f_2^\circ|_{D_1}, \Gamma_2^\circ \cap D_1)$ passes through $t$, while $(f_2^\circ|_{D'}, \Gamma_2^\circ \cap D')$ passes through $\{u, v, w, x, y, z\}$; this can be done so $\{t, u, v, w, x, y, z\}$ are still general, since $f_2^\circ|_{D'}$ (marked at general points of the source) can pass through $6$ general points, while $f_2^\circ|_{D_1}$ (again marked at general points of the source) can pass through $5$ general points, both by Corollary~1.4 of~\cite{firstpaper}. In addition, $(f_2^\circ|_{D_1}, (\hat{t} = \Gamma_2^\circ \cap D_1) \cup \Delta)$ has a general tangent line at $t$; to see this, note that we are asserting that the map sending $(f_2^\circ|_{D_1}, \hat{t} \cup \Delta)$ to its tangent line at $t$ is dominant; equivalently, that it is smooth at a generic point of the source. But the obstruction to smoothness lies in $H^1(N_{f_2^\circ|_{D_1}}(-\Delta - 2\hat{t} \, ))$, which vanishes because $N_{f_2^\circ|_{D_1}}(-2\hat{t} \, )$ satisfies interpolation by combining Propositions~\ref{inter} and~\ref{twist}. As $\{p, q\} \subset C'$ is general, we thus know that the tangent lines to $(f_2^\circ|_{D_1}, \hat{t} \cup \Delta)$ at $t$, and to $(f_1^\circ|_{C'}, \{p, q\})$ at $f_1^\circ(p)$ and $f_1^\circ(q)$, together span all of $\pp^4$; write $\bar{t}$, $\bar{p}$, and $\bar{q}$ for points on each of these tangent lines distinct from $t$, $f_1^\circ(p)$, and $f_1^\circ(q)$ respectively. We then use the exact sequences \begin{gather*} 0 \to N_{f^\circ}|_L(-\hat{t} - p - q) \to N_{f^\circ} \to N_{f^\circ}|_{C' \cup D^\circ} \to 0 \\ 0 \to N_{f^\circ|_{C' \cup D^\circ}} \to N_{f^\circ}|_{C' \cup D^\circ} \to * \to 0, \end{gather*} where $*$ is a punctual sheaf (which in particular has vanishing $H^1$). 
Write $H_t$ for the hyperplane spanned by $f_1^\circ(L)$, $\bar{p}$, and $\bar{q}$; and $H_p$ for the hyperplane spanned by $f_1^\circ(L)$, $\bar{t}$, and $\bar{q}$; and $H_q$ for the hyperplane spanned by $f_1^\circ(L)$, $\bar{t}$, and $\bar{p}$. Then $f_1^\circ(L)$ is the complete intersection $H_t \cap H_p \cap H_q$, and so we get a decomposition \[N_{f^\circ}|_L \simeq N_{f_1^\circ(L) / H_t}(\hat{t} \, ) \oplus N_{f_1^\circ(L) / H_p}(p) \oplus N_{f_1^\circ(L) / H_q}(q),\] which upon twisting becomes \[N_{f^\circ}|_L(-\hat{t} - p - q) \simeq N_{f_1^\circ(L) / H_t}(-p-q) \oplus N_{f_1^\circ(L) / H_p}(-\hat{t}-q) \oplus N_{f_1^\circ(L) / H_q}(-\hat{t} - p).\] Note that $N_{f_1^\circ(L) / H_t}(-p-q) \simeq \oo_L(-1)$ has vanishing $H^1$, and similarly for the other factors; consequently, $H^1(N_{f^\circ}|_L(-\hat{t} - p - q)) = 0$. We conclude that $H^1(N_{f^\circ}) = 0$ provided that $H^1(N_{f^\circ|_{C' \cup D^\circ}}) = 0$. Moreover, writing $C' \cup_{\{u, v, w, x, y, z\}} D^\circ = D_1 \cup_\Delta (D' \cup_{\{u, v, w, x, y, z\}} C')$ and applying Proposition~\ref{prop:interior}, we know that $H^1(N_{f^\circ|_{C' \cup D^\circ}}) = 0$ provided that $H^1(N_{f^\circ|_{C' \cup D'}}) = 0$. And if $f^\circ|_{C' \cup D'}$ is a BN-curve, then $f^\circ \colon (C' \cup_{\{u, v, w, x, y, z\}} D') \cup_{\Delta \cup \{p, q\}} (D_1 \cup_t L) \to \pp^4$ is a BN-curve too by Proposition~\ref{prop:glue}. Putting this all together, it is sufficient to show that $f^\circ|_{C' \cup D'}$ is a BN-curve which satisfies $H^1(N_{f^\circ|_{C' \cup D'}}) = 0$. Our next step is to specialize $(f_1^\circ|_{C'}, \Gamma_1^\circ \cap C')$ to $(f_1^{\circ\circ} \colon C^{\circ\circ} = C'' \cup_{\{r, s\}} L' \to \pp^4, \Gamma_1^{\circ\circ})$, where $f_1^{\circ\circ}|_{C''}$ is a general BN-curve of degree~$5$ and genus~$1$, and $f_1^{\circ\circ}|_{L'}$ is a line with $L'$ joined to $C''$ at two points $\{r, s\}$. 
We suppose that $(f_1^{\circ\circ}|_{L'}, \Gamma_1^{\circ\circ} \cap L')$ passes through $u$, while $(f_1^{\circ\circ}|_{C''}, \Gamma_1^{\circ\circ} \cap C'')$ passes through $\{v, w, x, y, z\}$; as before this can be done so $\{u, v, w, x, y, z\}$ are general. We also specialize $(f_2^\circ|_{D'}, \Gamma_2^\circ \cap D')$ to $(f_2^{\circ\circ} \colon D'' \cup_\Delta D_2 \to H, \Gamma_2^{\circ\circ})$, where $f_2^{\circ\circ}|_{D''}$ and $f_2^{\circ\circ}|_{D_2}$ are both rational normal curves with $D''$ and $D_2$ joined at a set $\Delta$ of $4$ general points. We suppose that $(f_2^{\circ\circ}|_{D_2}, \Gamma_2^{\circ\circ} \cap D_2)$ passes through $u$, while $(f_2^{\circ\circ}|_{D''}, \Gamma_2^{\circ\circ} \cap D'')$ passes through $\{v, w, x, y, z\}$; as before this can be done so $\{u, v, w, x, y, z\}$ are general. The same argument as above, mutatis mutandis, then implies it is sufficient to show that $f^{\circ\circ}|_{C'' \cup D''} \colon C'' \cup_{\{v, w, x, y, z\}} D'' \to \pp^4$ is a BN-curve which satisfies $H^1(N_{f^{\circ\circ}|_{C'' \cup D''}}) = 0$. For this, we first note that $f^{\circ\circ}|_{C'' \cup D''}$ is a curve of degree $8$ and genus $5$, and that the moduli space of smooth curves of degree $8$ and genus $5$ in $\pp^4$ is irreducible (they are all canonical curves). To finish the proof, it suffices to note by Lemma~\ref{smoothable} that $f^{\circ\circ}|_{C'' \cup D''}$ is a limit of immersions of smooth curves and satisfies $H^1(N_{f^{\circ\circ}|_{C'' \cup D''}}) = 0$. 
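The degree and genus bookkeeping for this lemma can be checked in the same way as before (assuming the standard formula that gluing at $n$ points adds degrees and yields arithmetic genus $g_1 + g_2 + n - 1$): gluing each of the three cases to the degree-$9$, genus-$6$ curve $f_2$ along the $n = 7$ points of $\Gamma$ gives exactly the pairs $(16, 15)$, $(17, 16)$, $(18, 17)$ from the list of Corollary~\ref{finite}:

```python
def glue(d1, g1, d2, g2, n):
    # degree adds; gluing at n points adds n - 1 to the sum of the genera
    return (d1 + d2, g1 + g2 + n - 1)

cases = [(7, 3), (8, 4), (9, 5)]
assert [glue(d, g, 9, 6, 7) for (d, g) in cases] == [(16, 15), (17, 16), (18, 17)]
```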
\end{proof} \begin{cor} \label{smooth-enough} To prove the main theorems (excluding the ``conversely\ldots'' part), it suffices to show the existence of (nondegenerate immersions of) smooth curves, of the following degrees and genera, which satisfy the conclusions: \begin{enumerate} \item For Theorem~\ref{main-3}, it suffices to show the existence of smooth curves, satisfying the conclusions, where $(d, g)$ is one of: \[(5, 1), \quad (7, 2), \quad (6, 3), \quad (7, 4), \quad (8, 5), \quad (9, 6), \quad (9, 7).\] \item For Theorem~\ref{main-3-1}, it suffices to show the existence of smooth curves, satisfying the conclusions, where $(d, g)$ is one of: \[(7, 5), \quad (8, 6).\] \item For Theorem~\ref{main-4}, it suffices to show the existence of smooth curves, satisfying the conclusions, where $(d, g)$ is one of: \[(9, 5), \quad (10, 6), \quad (11, 7), \quad (12, 9).\] \end{enumerate} (And in constructing the above smooth curves, we may suppose the corresponding theorem holds for curves of smaller genus.) \end{cor} \begin{proof} By Lemmas~\ref{bn3} and~\ref{lm:hir}, and Proposition~\ref{p2}, we know that Theorem~\ref{main-3} holds for $(d, g)$ one of \[(10, 9), \quad (11, 10), \quad (12, 12), \quad (13, 13), \quad (14, 14).\] Similarly, by Lemmas~\ref{bn4}, \ref{lm:hir}, and~\ref{from-inter}, we know that Theorem~\ref{main-4} holds for $(d, g)$ one of \[(16, 15), \quad (17, 16), \quad (18, 17).\] Eliminating these cases from the lists in Corollary~\ref{finite}, we obtain the given lists of pairs $(d, g)$. Moreover --- in each of the cases appearing in the statement of this corollary --- results of \cite{keem} (for $r = 3$) and \cite{iliev} (for $r = 4$) state that the Hilbert scheme of curves of degree $d$ and genus $g$ in $\pp^r$ has a \emph{unique} component whose points represent smooth irreducible nondegenerate curves. The condition that our curve be a BN-curve may thus be replaced with the condition that our curve be smooth irreducible nondegenerate. 
\end{proof} \section{More Curves in a Hyperplane \label{sec:hir-3}} In this section, we give several more applications of the technique developed in the previous two sections. Note that from Corollary~\ref{smooth-enough}, it suffices to show the existence of curves satisfying the desired conclusions which are limits of immersions of smooth curves; it is not necessary to check that these curves are BN-curves. \begin{lm} \label{lm:ind:3} Suppose $N_f(-2)$ satisfies interpolation, where $f \colon C \to \pp^3$ is a general BN-curve of degree $d$ and genus $g$. Then the same is true for some smooth curve of degree and genus: \begin{enumerate} \item \label{33} $(d + 3, g + 3)$ (provided $d \geq 3$); \item \label{42} $(d + 4, g + 2)$ (provided $d \geq 3$); \item \label{46} $(d + 4, g + 6)$ (provided $d \geq 5$). \end{enumerate} \end{lm} \begin{proof} We apply Lemma~\ref{lm:hir} for $f_2$ a curve of degree up to $4$ (and note that $N_{f_2}(-2)$ satisfies interpolation by Proposition~\ref{p2}), namely: \begin{enumerate} \item $(d_2, g_2) = (3, 1)$ and $n = 3$; \item $(d_2, g_2) = (4, 0)$ and $n = 3$; \item $(d_2, g_2) = (4, 2)$ and $n = 5$. \end{enumerate} Finally, we note that $C \cup_\Gamma D \to \pp^r$ as above is a limit of immersions of smooth curves by Lemma~\ref{smoothable}. \end{proof} \begin{cor} Suppose that Theorem~\ref{main-3} holds for $(d, g) = (5, 1)$. Then Theorem~\ref{main-3} holds for $(d, g)$ one of: \[(7, 2), \quad (6, 3), \quad (9, 6), \quad (9, 7).\] \end{cor} \begin{proof} For $(d, g) = (7, 2)$, we apply Lemma~\ref{lm:ind:3}, part~\ref{42} (taking as our inductive hypothesis the truth of Theorem~\ref{main-3} for $(d', g') = (3, 0)$). Similarly, for $(d, g) = (6, 3)$ and $(d, g) = (9, 6)$, we apply Lemma~\ref{lm:ind:3}, part~\ref{33} (taking as our inductive hypothesis the truth of Theorem~\ref{main-3} for $(d', g') = (3, 0)$, and the just-established $(d', g') = (6, 3)$, respectively). 
Finally, for $(d, g) = (9, 7)$, we apply Lemma~\ref{lm:ind:3}, part~\ref{46} (taking as our inductive hypothesis the yet-to-be-established truth of Theorem~\ref{main-3} for $(d', g') = (5, 1)$). \end{proof} \begin{lm} Suppose that Theorem~\ref{main-3-1} holds for $(d, g) = (7, 5)$. Then Theorem~\ref{main-3-1} holds for $(d, g) = (8, 6)$. \end{lm} \begin{proof} We simply apply Lemma~\ref{glue} with $f\colon C \cup_\Gamma D \to \pp^3$ such that $f|_C$ is a general BN-curve of degree $7$ and genus $5$, and $f|_D$ is a line, with $C$ joined to $D$ at a set $\Gamma$ of two points. \end{proof} \begin{lm} \label{lm:ind:4} Suppose $N_f(-1)$ satisfies interpolation, where $f$ is a general BN-curve of degree $d$ and genus $g$ in $\pp^4$. Then the same is true for some smooth curve of degree $d + 6$ and genus $g + 6$, provided $d \geq 4$. \end{lm} \begin{proof} We apply Lemmas~\ref{lm:hir} and~\ref{smoothable} for $f_2$ a curve of degree $6$ and genus $3$ to $\pp^3$, with $n = 4$. Note that $N_{f_2}(-1)$ satisfies interpolation by Propositions~\ref{inter} and~\ref{twist}. \end{proof} \begin{lm} Theorem~\ref{main-4} holds for $(d, g)$ one of: \[(10, 6), \quad (11, 7), \quad (12, 9).\] \end{lm} \begin{proof} We simply apply Lemma~\ref{lm:ind:4} (taking as our inductive hypothesis the truth of Theorem~\ref{main-4} for $(d', g') = (d - 6, g - 6)$). \end{proof} To prove the main theorems (excluding the ``conversely\ldots'' part), it thus remains to produce five smooth curves: \begin{enumerate} \item For Theorem~\ref{main-3}, it suffices to find smooth curves, satisfying the conclusions, of degrees and genera $(5, 1)$, $(7, 4)$, and $(8, 5)$. \item For Theorem~\ref{main-3-1}, it suffices to find a smooth curve, satisfying the conclusions, of degree $7$ and genus $5$. \item For Theorem~\ref{main-4}, it suffices to find a smooth curve, satisfying the conclusions, of degree $9$ and genus $5$. 
\end{enumerate} \section{Curves in Del Pezzo Surfaces \label{sec:in-surfaces}} In this section, we analyze the normal bundles of certain curves by specializing to immersions $f \colon C \hookrightarrow \pp^r$ of smooth curves whose images are contained in Del Pezzo surfaces $S \subset \pp^r$ (where the Del Pezzo surface is embedded by its complete anticanonical series). Since $f$ will be an immersion, we shall identify $C = f(C)$ with its image, in which case the normal bundle $N_f$ becomes the normal bundle $N_C$ of the image. Our basic method in this section will be to use the normal bundle exact sequence associated to $C \subset S \subset \pp^r$: \begin{equation} \label{nb-exact} 0 \to N_{C/S} \to N_C \to N_S|_C \to 0. \end{equation} Since $S$ is a Del Pezzo surface, we have by adjunction an isomorphism \begin{equation} \label{ncs} N_{C/S} \simeq K_C \otimes K_S^\vee \simeq K_C(1). \end{equation} \begin{defi} \label{pic-res} Let $S \subset \pp^r$ be a Del Pezzo surface, $k$ be an integer with $H^1(N_S(-k)) = 0$, and $\theta \in \pic S$ be any divisor class. Let $F$ be a general hypersurface of degree $k$. We consider the moduli space $\mathcal{M}$ of pairs $(S', \theta')$, with $S'$ a Del Pezzo surface containing $S \cap F$, and $\theta' \in \pic S'$. Define $V_{\theta, k} \subseteq \pic(S \cap F)$ to be the subvariety obtained by restricting $\theta'$ to $S \cap F \subseteq S'$, as $(S', \theta')$ varies over the component of $\mathcal{M}$ containing $(S, \theta)$. Note that there is a unique such component, since $\mathcal{M}$ is smooth at $[(S, \theta)]$ thanks to our assumption that $H^1(N_S(-k)) = 0$. \end{defi} Our essential tool is given by the following lemma, which uses the above normal bundle sequence together with the varieties $V_{\theta, k}$ to analyze $N_C$. 
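Before stating it, we spell out where \eqref{ncs} comes from. Since $C \subset S$ is a divisor, $N_{C/S} \simeq \oo_S(C)|_C$, while the adjunction formula gives $K_C \simeq (K_S \otimes \oo_S(C))|_C$; combining these,
\[N_{C/S} \simeq \oo_S(C)|_C \simeq K_C \otimes K_S^\vee|_C.\]
As $S$ is embedded by its complete anticanonical series, $K_S^\vee \simeq \oo_S(1)$, which yields $N_{C/S} \simeq K_C(1)$.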
\begin{lm} \label{del-pezzo} Let $C \subset S \subset \pp^r$ be a general curve (of any fixed class) in a general Del Pezzo surface $S \subset \pp^r$, and $k$ be a natural number with $H^1(N_S(-k)) = 0$. Suppose that (for $F$ a general hypersurface of degree $k$): \[\dim V_{[C], k} = \dim H^0(\oo_C(k - 1)) \quad \text{and} \quad H^1(N_S|_C(-k)) = 0,\] and that the natural map \[H^0(N_S(-k)) \to H^0(N_S|_C(-k))\] is an isomorphism. Then, \[H^1(N_{C}(-k)) = 0.\] \end{lm} \begin{proof} Twisting our earlier normal bundle exact sequence \eqref{nb-exact}, and using the isomorphism \eqref{ncs}, we obtain the exact sequence: \[0 \to K_C(1-k) \to N_C(-k) \to N_S|_C(-k) \to 0.\] This gives rise to a long exact sequence in cohomology: \[\cdots \to H^0(N_C(-k)) \to H^0(N_S|_C(-k)) \to H^1(K_C(1 - k)) \to H^1(N_C(-k)) \to H^1(N_S|_C(-k)) \to \cdots.\] Since $H^1(N_S|_C(-k)) = 0$ by assumption, it suffices to show that the image of the natural map $H^0(N_C(-k)) \to H^0(N_S|_C(-k))$ has codimension \[\dim H^1(K_C(1 - k)) = \dim H^0(\oo_C(k - 1)) = \dim V_{[C], k}.\] Because the natural map $H^0(N_S(-k)) \to H^0(N_S|_C(-k))$ is an isomorphism, we may interpret sections of $N_S|_C(-k)$ as first-order deformations of the Del Pezzo surface $S$ fixing $S \cap F$. So it remains to show that the space of such deformations coming from a deformation of $C$ fixing $C \cap F$ has codimension $\dim V_{[C], k}$. The key point here is that deforming $C$ on $S$ does not change its class $[C] \in \pic(S)$, and every deformation of $S$ comes naturally with a deformation of the element $[C] \in \pic(S)$. It thus suffices to prove that the space of first-order deformations of $S$ which leave invariant the restriction $[C]|_{S \cap F} \in \pic(S \cap F)$ has codimension $\dim V_{[C], k}$. But since the map $\mathcal{M} \to V_{[C], k}$ is smooth at $(S, [C])$, the vertical tangent space has codimension in the full tangent space equal to the dimension of the image. 
\end{proof} In applying Lemma~\ref{del-pezzo}, we will first consider the case where $S \subset \pp^3$ is a general cubic surface, which is isomorphic to the blowup $\bl_\Gamma \pp^2$ of $\pp^2$ along a set \[\Gamma = \{p_1, \ldots, p_6\} \subset \pp^2\] of six general points. Recall that this is a Del Pezzo surface, which is to say that the embedding $\bl_\Gamma \pp^2 \simeq S \hookrightarrow \pp^3$ as a cubic surface is via the complete linear system for the inverse of the canonical bundle: \[-K_{\bl_\Gamma \pp^2} = 3L - E_1 - \cdots - E_6,\] where $L$ is the class of a line in $\pp^2$ and $E_i$ is the exceptional divisor in the blowup over $p_i$. Note that by construction, \[N_S \simeq \oo_S(3).\] In particular, $H^1(N_S(-1)) = H^1(N_S(-2)) = 0$ by Kodaira vanishing. \begin{lm} \label{cubclass} Let $C \subset \bl_\Gamma \pp^2 \simeq S \subset \pp^3$ be a general curve of class either: \begin{enumerate} \item \label{74} $5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$; \item \label{85} $5L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6$; \item \label{86} $6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$; \item \label{75} $6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$. \end{enumerate} Then $C$ is smooth and irreducible. In the first two cases, $H^1(\oo_C(1)) = 0$. \end{lm} \begin{proof} We first show the above linear series are basepoint-free. To do this, we write each as a sum of terms which are evidently basepoint-free: \begin{align*} 5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\ &\qquad + (L - E_1) + (L - E_2) \\ 5L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) + (L - E_1) \\ 6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\ &\qquad + L + (2L - E_3 - E_4 - E_5 - E_6) \\ 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\ &\qquad + (L - E_2) + (2L - E_3 - E_4 - E_5 - E_6). 
\end{align*} Since all our linear series are basepoint-free, the Bertini theorem implies that $C$ is smooth. Moreover, by basepoint-freeness, we know that $C$ does not contain any of our exceptional divisors. We conclude that $C$ is the proper transform in the blowup of a curve $C_0 \subset \pp^2$. This curve satisfies: \begin{itemize} \item In case~\ref{74}, $C_0$ has exactly two nodes, at $p_1$ and $p_2$, and is otherwise smooth. In particular, $C_0$ (and thus $C$) must be irreducible, since otherwise (by B\'ezout's theorem) it would have at least $4$ nodes (where the components meet). \item In case~\ref{85}, $C_0$ has exactly one node, at $p_1$, and is otherwise smooth. As above, $C_0$ (and thus $C$) must be irreducible. \item In case~\ref{86}, $C_0$ has exactly four nodes, at $\{p_3, p_4, p_5, p_6\}$, and is otherwise smooth. As above, $C_0$ (and thus $C$) must be irreducible. \item In case~\ref{75}, $C_0$ has exactly $5$ nodes, at $\{p_2, p_3, p_4, p_5, p_6\}$, and is otherwise smooth. As above, $C_0$ must either be irreducible, or the union of a line and a quintic. (Otherwise, it would have at least $8$ nodes.) But in the second case, all $5$ nodes must be collinear, contradicting our assumption that $\{p_2, p_3, p_4, p_5, p_6\}$ are general. Consequently, $C_0$ (and thus $C$) must be irreducible. \end{itemize} We now turn to showing $H^1(\oo_C(1)) = 0$ in the first two cases. In the first case, we note that $\Gamma$ contains $4 = \operatorname{genus}(C)$ general points $\{p_3, p_4, p_5, p_6\}$ on $C$; consequently, $E_3 + E_4 + E_5 + E_6$ --- and therefore $\oo_C(1) = (3L - E_1 - E_2) - (E_3 + E_4 + E_5 + E_6)$ --- is a general line bundle of degree $7$, which implies $H^1(\oo_C(1)) = 0$. Similarly, in the second case, we note that $\Gamma$ contains $5 = \operatorname{genus}(C)$ general points $\{p_2, p_3, p_4, p_5, p_6\}$ on $C$. As in the first case, this implies $H^1(\oo_C(1)) = 0$, as desired. 
\end{proof} \begin{lm} \label{foo} Let $C \subset \pp^3$ be a general BN-curve of degree and genus $(7, 4)$ or $(8, 5)$. Then we have $H^1(N_C(-2)) = 0$. \end{lm} \begin{proof} We take $C \subset S$, as constructed in Lemma~\ref{cubclass}, parts~\ref{74} and~\ref{85} respectively. These curves have degrees and genera $(7, 4)$ and $(8, 5)$ respectively, which can be seen by calculating the intersection product with the hyperplane class and using adjunction. For example, for the curve in part~\ref{74} of class $5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$, we calculate \[\deg C = (5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6) \cdot (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) = 7,\] and \[\operatorname{genus} C = 1 + \frac{K_S \cdot C + C^2}{2} = 1 + \frac{-\deg C + C^2}{2} = 1 + \frac{-7 + 13}{2} = 4.\] Because $N_S \simeq \oo_S(3)$, we have \[H^1(N_S|_C(-2)) = H^1(\oo_C(1)) = 0.\] Moreover, $\oo_S(1)(-C)$ is either $-2L + E_1 + E_2$ or $-2L + E_1$ respectively; in either case we have $H^0(\oo_S(1)(-C)) = 0$. Consequently, the restriction map \[H^0(\oo_S(1)) \to H^0(\oo_C(1))\] is injective. Since \[\dim H^0(\oo_S(1)) = 4 = \dim H^0(\oo_C(1)),\] the above restriction map is therefore an isomorphism. Applying Lemma~\ref{del-pezzo}, it thus suffices to show that \[\dim V_{[C], 2} = \dim H^0(\oo_C(1)) = 4.\] To do this, we first observe that $[C]$ is always a linear combination $aH + bL_1 + cL_2$ of the hyperplane class $H$, and two nonintersecting lines $L_1$ and $L_2$, such that both $b$ and $c$ are nonvanishing. Indeed: \begin{align*} 5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6 &= 3(3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\ &\quad - (2L - E_1 - E_3 - E_4 - E_5 - E_6) \\ &\quad - (2L - E_2 - E_3 - E_4 - E_5 - E_6) \\ 5L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6 &= 3(3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) + E_1 \\ &\quad - 2(2L - E_2 - E_3 - E_4 - E_5 - E_6). \end{align*} Writing $F$ for a general quadric hypersurface, and $D = F \cap S$, we observe that $\pic(D)$ is $4$-dimensional. 
It is therefore sufficient to prove that for a general class $\theta \in \pic^{6a + 2b + 2c}(D)$, there exists a smooth cubic surface $S$ containing $D$ and a pair $(L_1, L_2)$ of disjoint lines on $S$, such that the restriction $(aH + bL_1 + cL_2)|_D = \theta$. Since $H|_D = \oo_D(1)$ is independent of $S$ and the choice of $(L_1, L_2)$, we may replace $\theta$ by $\theta(-a)$ and set $a = 0$. We thus seek to show that for $b, c \neq 0$ and $\theta \in \pic^{2b + 2c}(D)$ general, there exists a smooth cubic surface $S$ containing $D$, and a pair $(L_1, L_2)$ of disjoint lines on $S$, with $(bL_1 + cL_2)|_D = \theta$. Equivalently, we want to show the map \[\{(S, E_1, E_2) : E_1, E_2 \subset S \supset D\} \mapsto \{(E_1, E_2)\},\] from the space of smooth cubic surfaces $S$ containing $D$ with a choice of pair of disjoint lines $(E_1, E_2)$, to the space of pairs of $2$-secant lines to $D$, is dominant. For this, it suffices to check the vanishing of $H^1(N_S(-D -E_1 - E_2))$, for any smooth cubic $S$ containing $D$ and disjoint lines $(E_1, E_2)$ on $S$, in which lies the obstruction to smoothness of this map. But $N_S(-D -E_1 - E_2) = 3L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$ has no higher cohomology by Kawamata-Viehweg vanishing. \end{proof} \begin{lm} Let $C \subset \pp^3$ be a general BN-curve of degree $7$ and genus $5$. Then we have $H^1(N_C(-1)) = 0$. \end{lm} \begin{proof} We take $C \subset S$, as constructed in Lemma~\ref{cubclass}, part~\ref{75}. Because $N_S \simeq \oo_S(3)$, we have \[H^1(N_S|_C(-1)) = H^1(\oo_C(2)) = 0.\] Moreover, $\oo_S(2)(-C) \simeq \oo_S(-E_1)$ has no sections. Consequently, the restriction map \[H^0(\oo_S(2)) \to H^0(\oo_C(2))\] is injective. Since \[\dim H^0(\oo_S(2)) = 10 = \dim H^0(\oo_C(2)),\] the above restriction map is therefore an isomorphism. 
Applying Lemma~\ref{del-pezzo}, it thus suffices to show that \[\dim V_{[C], 1} = \dim H^0(\oo_C) = 1.\] Writing $F$ for a general hyperplane, and $D = F \cap S$, we observe that $\pic(D)$ is $1$-dimensional. Since $[C] = 2H + E_1$, it is therefore sufficient to prove that for a general class $\theta \in \pic^7(D)$, there exists a cubic surface $S$ containing $D$ and a line $L$ on $S$, such that the restriction $(2H + L)|_D = \theta$. Since $H|_D = \oo_D(1)$ is independent of $S$ and the choice of $L$, we may replace $\theta$ by $\theta(-2)$ and look instead for $L|_D = \theta \in \pic^1(D)$. Equivalently, we want to show the map \[\{(S, E_1) : E_1 \subset S \supset D\} \mapsto \{E_1\},\] from the space of smooth cubic surfaces $S$ containing $D$ with a choice of line $E_1$, to the space of $1$-secant lines to $D$, is dominant; it suffices to check the vanishing of $H^1(N_S(-D-E_1))$, for any smooth cubic $S$ containing $D$ and line $E_1$ on $S$, in which lies the obstruction to smoothness of this map. But $N_S(-D-E_1) = 6L - 3E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$ has no higher cohomology by Kodaira vanishing. \end{proof} Next, we consider the case where $S \subset \pp^4$ is the intersection of two quadrics, which is isomorphic to the blowup $\bl_\Gamma \pp^2$ of $\pp^2$ along a set \[\Gamma = \{p_1, \ldots, p_5\}\] of five general points. Recall that this is a Del Pezzo surface, which is to say that the embedding $\bl_\Gamma \pp^2 \simeq S \hookrightarrow \pp^4$ as the intersection of two quadrics is via the complete linear system for the inverse of the canonical bundle: \[-K_{\bl_\Gamma \pp^2} = 3L - E_1 - \cdots - E_5,\] where $L$ is the class of a line in $\pp^2$ and $E_i$ is the exceptional divisor in the blowup over $p_i$. Note that by construction, \[N_S \simeq \oo_S(2) \oplus \oo_S(2).\] In particular, $H^1(N_S(-1)) = 0$ by Kodaira vanishing. 
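For later use, we record how degrees and genera of curves on $S$ are computed: intersection numbers on $S$ are determined by $L^2 = 1$, $E_i^2 = -1$, and $L \cdot E_i = E_i \cdot E_j = 0$ for $i \neq j$. For instance, a curve $C$ of class $5L - 2E_1 - E_2 - E_3 - E_4 - E_5$ satisfies
\[\deg C = (5L - 2E_1 - E_2 - E_3 - E_4 - E_5) \cdot (3L - E_1 - \cdots - E_5) = 15 - 2 - 4 = 9\]
and, by adjunction,
\[\operatorname{genus} C = 1 + \frac{-\deg C + C^2}{2} = 1 + \frac{-9 + 17}{2} = 5.\]
Similarly, a curve of class $6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5$ has degree $18 - 1 - 8 = 9$ and genus $1 + \frac{-9 + 19}{2} = 6$.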
\begin{lm} \label{qclass} Let $C \subset \bl_\Gamma \pp^2 \simeq S \subset \pp^4$ be a general curve of class either: \begin{enumerate} \item $5L - 2E_1 - E_2 - E_3 - E_4 - E_5$; \item $6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5$. \end{enumerate} Then $C$ is smooth and irreducible. In the first case, $H^1(\oo_C(1)) = 0$. \end{lm} \begin{proof} We first show the above linear series are basepoint-free. To do this, we write them as a sum of terms which are evidently basepoint-free: \begin{align*} 5L - 2E_1 - E_2 - E_3 - E_4 - E_5 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5) + (L - E_1) + L \\ 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5) \\ &\qquad + (2L - E_2 - E_3 - E_4 - E_5) + L \end{align*} As in Lemma~\ref{cubclass}, we conclude that $C$ is smooth and irreducible. In the first case, we have $\deg \oo_C(1) = 9 > 8 = 2g - 2$, which implies $H^1(\oo_C(1)) = 0$ as desired. \end{proof} \begin{lm} Let $C \subset \pp^4$ be a general BN-curve of degree $9$ and genus $5$. Then we have $H^1(N_C(-1)) = 0$. \end{lm} \begin{proof} We take $C \subset S$, as constructed in the first case of Lemma~\ref{qclass}. Because $N_S \simeq \oo_S(2) \oplus \oo_S(2)$, we have \[H^1(N_S|_C(-1)) = H^1(\oo_C(1) \oplus \oo_C(1)) = 0.\] Moreover, $\oo_S(1)(-C) \simeq \oo_S(-2L + E_1)$ has no sections. Consequently, the restriction map \[H^0(\oo_S(1) \oplus \oo_S(1)) \to H^0(\oo_C(1) \oplus \oo_C(1))\] is injective. Since \[\dim H^0(\oo_S(1) \oplus \oo_S(1)) = 10 = \dim H^0(\oo_C(1) \oplus \oo_C(1)),\] the above restriction map is therefore an isomorphism. Applying Lemma~\ref{del-pezzo}, it thus suffices to show that \[\dim V_{[C], 1} = \dim H^0(\oo_C) = 1.\] Writing $F$ for a general hyperplane, and $D = F \cap S$, we observe that $\pic(D)$ is $1$-dimensional. 
Since $[C] = 3(3L - E_1 - E_2 - E_3 - E_4 - E_5) - 2(2L - E_1 - E_2 - E_3 - E_4 - E_5) - E_1$, it is therefore sufficient to prove that for a general class $\theta \in \pic^9(D)$, there exists a quartic Del Pezzo surface $S$ containing $D$, and a pair $\{L_1, L_2\}$ of intersecting lines on $S$, such that the restriction $(3H - 2L_1 - L_2)|_D = \theta$. Since $H|_D = \oo_D(1)$ is independent of $S$ and the choice of $L$, we may replace $\theta$ by $\theta^{-1}(3)$ and look instead for $(2L_1 + L_2)|_D = \theta \in \pic^3(D)$. For this, it suffices to show the map \[\{(S, L_1, L_2) : L_1, L_2 \subset S \supset D\} \mapsto \{(L_1, L_2)\},\] from the space of smooth quartic Del Pezzo surfaces $S$ containing $D$ with a choice of pair of intersecting lines $(L_1, L_2)$, to the space of pairs of intersecting $1$-secant lines to $D$, is dominant. Taking $[L_1] = E_1$ and $[L_2] = L - E_1 - E_2$, it suffices to check the vanishing of the first cohomology of the vector bundle $N_S(-D - E_1 - (L - E_1 - E_2))$ --- which is isomorphic to a direct sum of two copies of the line bundle $2L - E_1 - E_3 - E_4 - E_5$ --- for any smooth quartic Del Pezzo surface $S$ containing $D$, in which lies the obstruction to smoothness of this map. But $2L - E_1 - E_3 - E_4 - E_5$ has no higher cohomology by Kodaira vanishing. \end{proof} To prove the main theorems (excluding the ``conversely\ldots'' part), it thus remains to produce a smooth curve $C \subset \pp^3$ of degree $5$ and genus $1$, with $H^1(N_C(-2)) = 0$. \section{\boldmath Elliptic Curves of Degree $5$ in $\pp^3$ \label{sec:51}} In this section, we construct an immersion $f \colon C \hookrightarrow \pp^3$ of degree~$5$ from a smooth elliptic curve, with $H^1(N_f(-2)) = 0$. As in the previous section, we shall identify $C = f(C)$ with its image, in which case the normal bundle $N_f$ becomes the normal bundle $N_C$ of the image. Our basic method in this section will be to use the geometry of the cubic scroll $S \subset \pp^4$. 
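Note that this case is exactly balanced. For a nondegenerate smooth curve $C \subset \pp^3$ of degree $5$ and genus $1$, combining the Euler sequence with the normal bundle sequence (as in the proof of the lemma below) gives $\wedge^2 N_C \simeq \oo_C(4) \otimes T_C^\vee \simeq \oo_C(4)$, so
\[\chi(N_C(-2)) = \deg N_C(-2) + \operatorname{rk} N_C(-2) \cdot (1 - g) = (4 \cdot 5 - 2 \cdot 2 \cdot 5) + 2 \cdot 0 = 0.\]
Thus $H^1(N_C(-2)) = 0$ is equivalent to $H^0(N_C(-2)) = 0$; neither vanishing follows from a dimension count alone, which is why a geometric construction is required.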
Recall that the cubic scroll can be constructed in two different ways: \begin{enumerate} \item Let $Q \subset \pp^4$ and $M \subset \pp^4$ be a plane conic, and a line disjoint from the span of $Q$, respectively. As abstract varieties, $Q \simeq \pp^1 \simeq M$. Then $S$ is the ruled surface swept out by lines joining pairs of points identified under some choice of the above isomorphism. \item Let $x \in \pp^2$ be a point, and consider the blowup $\bl_x \pp^2$ of $\pp^2$ at the point $x$. Then, $S$ is the image of $f \colon \bl_x \pp^2 \hookrightarrow \pp^4$ under the complete linear series attached to the line bundle \[2L - E,\] where $L$ is the class of a line in $\pp^2$, and $E$ is the exceptional divisor in the blowup. \end{enumerate} To relate these two constructions, we fix a line $L \subset \pp^2$ not passing through $x$ in the second construction, and consider the isomorphism $L \simeq \pp^1 \simeq E$ defined by sending $p \in L$ to the intersection with $E$ of the proper transform of the line joining $p$ and $x$. Then $f(L)$ and $f(E)$ are $Q$ and $M$ respectively in the first construction; the proper transforms of lines through $x$ are the lines of the ruling. \medskip Now take two points $p, q \in L$. Since $f(L)$ is a plane conic, the tangent lines to $f(L)$ at $p$ and $q$ intersect; we let $y$ be their point of intersection. From the first description of $S$, it is clear that any line through $y$ intersects $S$ quasi-transversely --- except for the lines joining $y$ to $p$ and $q$, each of which meets $S$ in a degree~$2$ subscheme of $f(L)$. Write $\bar{S}$ for the image of $S$ under projection from $y$; by construction, the projection $\pi \colon S \to \bar{S} \subseteq \pp^3$ is unramified away from $\{p, q\}$, an immersion away from $f(L)$, and when restricted to $f(L)$ is a double cover of its image with ramification exactly at $\{p, q\}$. At $\{p, q\}$, the differential drops rank transversely, with kernel the tangent space to $f(L)$. 
(By ``drops rank transversely'', we mean that the section $d\pi$ of $\hom(T_S, \pi^* T_{\pp^3})$ is transverse to the subvariety of $\hom(T_S, \pi^* T_{\pp^3})$ of maps with less-than-maximal rank.) If $C \subset \bl_x \pp^2 \simeq S$ is a curve passing through $p$ and $q$, but transverse to $L$ at each of these points, then any line through $y$ intersects $C$ quasi-transversely. In particular, if $C$ meets $L$ in at most one point outside of $\{p, q\}$, the image $\bar{C}$ of $C$ under projection from $y$ is smooth. Moreover, the above analysis of $d\pi$ on $S$ implies that the natural map \[N_{C/S} \to N_{\bar{C}/\pp^3}\] induced by $\pi$ is fiberwise injective away from $\{p, q\}$, and has a simple zero at both $p$ and $q$. That is, we have an exact sequence \begin{equation} \label{51} 0 \to N_{C/S}(p + q) \to N_{\bar{C}/\pp^3} \to \mathcal{Q} \to 0, \end{equation} with $\mathcal{Q}$ a vector bundle. \medskip We now specialize to the case where $C$ is the proper transform of a plane cubic, passing through $\{x, p, q\}$, and transverse to $L$ at $\{p, q\}$. By inspection, $\bar{C}$ is an elliptic curve of degree $5$ in $\pp^3$; it thus suffices to show $H^1(N_{\bar{C}/\pp^3}(-2)) = 0$. \begin{lm} In this case, \begin{align*} N_{C/S}(p + q) &\simeq \oo_C(3L - E + p + q) \\ \mathcal{Q} &\simeq \oo_C(5L - 3E - p - q). 
\end{align*} \end{lm} \begin{proof} We first note that \[N_{C/S} \simeq N_{C/\pp^2}(-E) \simeq \oo_C(3L)(-E) \quad \Rightarrow \quad N_{C/S}(p + q) \simeq \oo_C(3L - E + p + q).\] Next, the Euler exact sequence \[0 \to \oo_{\bar{C}} \to \oo_{\bar{C}}(1)^4 \to T_{\pp^3}|_{\bar{C}} \to 0\] implies \[\wedge^3 (T_{\pp^3}|_{\bar{C}}) \simeq \oo_C(4).\] Combined with the normal bundle exact sequence \[0 \to T_C \to T_{\pp^3}|_{\bar{C}} \to N_{\bar{C}/\pp^3} \to 0,\] and the fact that $C$ is of genus $1$, so $T_C \simeq \oo_C$, we conclude that \[\wedge^2(N_{\bar{C}/\pp^3}) \simeq \oo_C(4) \otimes T_C^\vee \simeq \oo_C(4) = \oo_C(4(2L - E)) = \oo_C(8L - 4E).\] The exact sequence \eqref{51} then implies \[\mathcal{Q} \simeq \wedge^2(N_{\bar{C}/\pp^3}) \otimes (N_{C/S}(p + q))^\vee \simeq \oo_C(8L - 4E)(-3L + E - p - q) = \oo_C(5L - 3E - p - q),\] as desired. \end{proof} \noindent Twisting by $\oo_C(-2) \simeq \oo_C(-4L + 2E)$, we obtain isomorphisms: \begin{align*} N_{C/S}(p + q)(-2) &\simeq \oo_C(-L + E + p + q) \\ \mathcal{Q}(-2) &\simeq \oo_C(L - E - p - q). \end{align*} We thus have an exact sequence \[0 \to \oo_C(-L + E + p + q) \to N_{\bar{C}/\pp^3}(-2) \to \oo_C(L - E - p - q) \to 0.\] Since $\oo_C(-L + E + p + q)$ and $\oo_C(L - E - p - q)$ are both general line bundles of degree zero on a curve of genus $1$, we have \[H^1(\oo_C(-L + E + p + q)) = H^1(\oo_C(L - E - p - q)) = 0,\] which implies \[H^1(N_{\bar{C}/\pp^3}(-2)) = 0.\] This completes the proof of the main theorems, except for the ``conversely\ldots'' parts. \section{The Converses \label{sec:converses}} In this section, we show that the intersections appearing in our main theorems fail to be general in all listed exceptional cases. We actually go further, describing precisely the intersection of a general BN-curve $f \colon C \to \pp^r$ with the quadric or hyperplane in terms of the intrinsic geometry of $Q \simeq \pp^1 \times \pp^1$, $H \simeq \pp^2$, and $H \simeq \pp^3$, respectively. 
Since the general BN-curve $f \colon C \to \pp^r$ is an immersion, we can identify $C = f(C)$ with its image as in the previous two sections, in which case the normal bundle $N_f$ becomes the normal bundle $N_C$ of its image. There are two basic phenomena which explain the majority of our exceptional cases: cases where $C$ is a complete intersection, and cases where $C$ lies on a surface of low degree. The first two subsections will be devoted to the exceptional cases that arise for these two reasons respectively. In the final subsection, we will consider the two remaining exceptional cases. \subsection{Complete Intersections} We begin by dealing with those exceptional cases which are complete intersections. \begin{prop} Let $C \subset \pp^3$ be a general BN-curve of degree $4$ and genus $1$. Then the intersection $C \cap Q$ is the intersection of two general curves of bidegree $(2, 2)$ on $Q \simeq \pp^1 \times \pp^1$. In particular, it is not a collection of $8$ general points. \end{prop} \begin{proof} It is easy to see that $C$ is the complete intersection of two general quadrics. Restricting these quadrics to $Q \simeq \pp^1 \times \pp^1$, we see that $C \cap Q$ is the intersection of two general curves of bidegree $(2, 2)$. Since general points impose independent conditions on the $9$-dimensional space of curves of bidegree $(2, 2)$, a general collection of $8$ points will lie on only one curve of bidegree $(2, 2)$. The intersection of two general curves of bidegree $(2, 2)$ is therefore not a collection of $8$ general points. \end{proof} \begin{prop} \label{64-to-Q} Let $C \subset \pp^3$ be a general BN-curve of degree $6$ and genus $4$. Then the intersection $C \cap Q$ is the intersection of two general curves of bidegrees $(2, 2)$ and $(3,3)$ respectively on $Q \simeq \pp^1 \times \pp^1$. In particular, it is not a collection of $12$ general points. 
\end{prop} \begin{proof} It is easy to see that $C$ is the complete intersection of a general quadric and cubic. Restricting these to $Q \simeq \pp^1 \times \pp^1$, we see that $C \cap Q$ is the intersection of two general curves of bidegrees $(2, 2)$ and $(3,3)$ respectively. Since general points impose independent conditions on the $9$-dimensional space of curves of bidegree $(2, 2)$, a general collection of $12$ points will not lie on any curve of bidegree $(2,2)$, and in particular will not be such an intersection. \end{proof} \begin{prop} Let $C \subset \pp^3$ be a general BN-curve of degree $6$ and genus $4$. Then the intersection $C \cap H$ is a general collection of $6$ points lying on a conic. In particular, it is not a collection of $6$ general points. \end{prop} \begin{proof} As in Proposition~\ref{64-to-Q}, we see that $C \cap H$ is the intersection of a general conic and a general cubic. In particular, $C \cap H$ lies on a conic. Conversely, any $6$ points lying on a conic are the complete intersection of a conic and a cubic by Theorem~\ref{main-2} (with $(d, g) = (3, 1)$). Since general points impose independent conditions on the $6$-dimensional space of plane conics, a general collection of $6$ points will not lie on a conic. We thus see our intersection is not a collection of $6$ general points. \end{proof} \begin{prop} Let $C \subset \pp^4$ be a general BN-curve of degree $8$ and genus $5$. Then the intersection $C \cap H$ is the intersection of three general quadrics in $H \simeq \pp^3$. In particular, it is not a collection of $8$ general points. \end{prop} \begin{proof} It is easy to see that $C$ is the complete intersection of three general quadrics. Restricting these quadrics to $H \simeq \pp^3$, we see that $C \cap H$ is the intersection of three general quadrics. Since general points impose independent conditions on the $10$-dimensional space of quadrics, a general collection of $8$ points will lie on only two linearly independent quadrics. 
The intersection of three general quadrics is therefore not a collection of $8$ general points. \end{proof} \subsection{Curves on Surfaces} Next, we analyze those cases which are exceptional because $C$ lies on a surface $S$ of small degree. To show the intersection is general subject to the constraint imposed by $C \subset S$, it will be useful to have the following lemma: \begin{lm} \label{pic-res-enough} Let $D$ be an irreducible curve of genus $g$ on a surface $S$, and $p_1, p_2, \ldots, p_n$ be a collection of $n$ distinct points on $D$. Suppose that $n \geq g$, and that $p_1, p_2, \ldots, p_g$ are general. Let $\theta \in \pic(S)$, with $\theta|_D \sim p_1 + p_2 + \cdots + p_n$. Suppose that \[\dim H^0(\theta) - \dim H^0(\theta(-D)) \geq n - g + 1.\] Then some curve $C \subset S$ of class $\theta$ meets $D$ transversely at $p_1, p_2, \ldots, p_n$. \end{lm} \begin{proof} Since $p_1, p_2, \ldots, p_g$ are general, and $\theta|_D = p_1 + p_2 + \cdots + p_n$, it suffices to show there is a curve of class $\theta$ meeting $D$ dimensionally-transversely and passing through $p_{g + 1}, p_{g + 2}, \ldots, p_n$; the remaining $g$ points of intersection are then forced to be $p_1, p_2, \ldots, p_g$. For this, we note there is a $\dim H^0(\theta) - (n - g) > \dim H^0(\theta(-D))$ dimensional space of sections of $\theta$ which vanish at $p_{g + 1}, \ldots, p_n$. In particular, there is some section which does not vanish along $D$. Its zero locus then gives the required curve $C$. (The curve $C$ meets $D$ dimensionally-transversely, because $C$ does not contain $D$ and $D$ is irreducible.) \end{proof} \begin{prop} Let $C \subset \pp^3$ be a general BN-curve of degree $5$ and genus $2$. Then the intersection $C \cap Q$ is a collection of $10$ general points lying on a curve of bidegree $(2, 2)$ on $Q \simeq \pp^1 \times \pp^1$. In particular, it is not a collection of $10$ general points. 
\end{prop} \begin{proof} Since $\dim H^0(\oo_C(2)) = 9$ (by Riemann-Roch, as $\deg \oo_C(2) = 10 > 2g - 2 = 2$) and $\dim H^0(\oo_{\pp^3}(2)) = 10$, we conclude that $C$ lies on a quadric. Restricting to $Q$, we see that $C \cap Q$ lies on a curve of bidegree $(2,2)$. Conversely, given $10$ points $p_1, p_2, \ldots, p_{10}$ lying on a curve $D$ of bidegree $(2, 2)$, we may first find a pair of points $\{x, y\} \subset D$ so that $x + y + 2H \sim p_1 + \cdots + p_{10}$. We then claim there is a smooth quadric containing $D$ and the general $2$-secant line $\overline{xy}$ to $D$. Equivalently, we want to show the map \[\{(S, L) : L \subset S \supset D\} \mapsto \{L\},\] from the space of smooth quadric surfaces $S$ containing $D$ with a choice of line $L$, to the space of $2$-secant lines to $D$, is dominant; it suffices to check the vanishing of $H^1(N_S(-D-L))$ for any smooth quadric $S$ containing $D$ and line $L$ on $S$, since this group contains the obstruction to smoothness of this map. But $N_S(-D-L) = \oo_S(0, -1)$ has no higher cohomology by Kodaira vanishing. Writing $L \in \pic(S)$ for the class of the line $\overline{xy}$, we see that $(L + 2H)|_D \sim p_1 + \cdots + p_{10}$ as divisor classes. Applying Lemma~\ref{pic-res-enough}, and noting that $\dim H^0(\oo_{S}(2H + L)) = 12$ while $\dim H^0(\oo_{S}(L)) = 2$, there is a curve $C$ of class $2H + L$ meeting $D$ transversely at $p_1, \ldots, p_{10}$. Since $\oo_{S}(2H + L)$ is very ample by inspection, $C$ is smooth (for $p_1, \ldots, p_{10}$ general). By results of \cite{keem}, this implies $C$ is a BN-curve. Since general points impose independent conditions on the $9$-dimensional space of curves of bidegree $(2, 2)$, a general collection of $10$ points does not lie on a curve of bidegree $(2, 2)$. A collection of $10$ general points on a general curve of bidegree $(2,2)$ is therefore not a collection of $10$ general points. \end{proof} \begin{prop} Let $C \subset \pp^3$ be a general BN-curve of degree $7$ and genus $5$.
Then the intersection $C \cap Q$ is a collection of $14$ points lying on a curve $D \subset Q \simeq \pp^1 \times \pp^1$, which is general subject to the following conditions: \begin{enumerate} \item The curve $D$ is of bidegree $(3, 3)$. \item The divisor $C \cap Q - 2H$ on $D$ (where $H$ is the hyperplane class) is effective. \end{enumerate} In particular, it is not a collection of $14$ general points. \end{prop} \begin{proof} First, we claim the general such curve $C$ lies on a smooth cubic surface $S$, on which it has class $2H + E_1 = 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$. Indeed, by Lemma~\ref{cubclass} part~\ref{75}, a general curve of this class is smooth and irreducible; such a curve has degree~$7$ and genus~$5$, and in particular is a BN-curve by results of \cite{keem}. It remains to see there are no obstructions to lifting a deformation of $C$ to a deformation of the pair $(S, C)$, i.e.\ that $H^1(N_S(-C)) = 0$. But $N_S(-C) = 3L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6$, which has no higher cohomology by Kodaira vanishing. Thus, $C \cap Q - 2H$ is the restriction to $D$ of the class of a line on $S$; in particular, $C \cap Q - 2H$ is an effective divisor on $D$. Conversely, suppose that $p_1, p_2, \ldots, p_{14}$ are a general collection of $14$ points lying on a curve $D$ of bidegree $(3,3)$ with $p_1 + \cdots + p_{14} - 2H \sim x + y$ effective. We then claim there is a smooth cubic containing $D$ and the general $2$-secant line $\overline{xy}$ to $D$. Equivalently, we want to show the map \[\{(S, L) : L \subset S \supset D\} \mapsto \{L\},\] from the space of smooth cubic surfaces $S$ containing $D$ with a choice of line $L$, to the space of $2$-secant lines to $D$, is dominant; for this it suffices to check the vanishing of $H^1(N_S(-D-L))$. But $N_S(-D-L) = 3L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6$, which has no higher cohomology by Kodaira vanishing.
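For the degree and genus claims: writing classes on the cubic surface in the basis $L, E_1, \ldots, E_6$, with hyperplane class $H = 3L - E_1 - \cdots - E_6$ and canonical class $K_S = -3L + E_1 + \cdots + E_6$, the class $C = 6L - E_1 - 2E_2 - \cdots - 2E_6$ satisfies
\[C \cdot H = 18 - 1 - 10 = 7, \qquad C^2 = 36 - 1 - 20 = 15, \qquad C \cdot K_S = -18 + 1 + 10 = -7,\]
so adjunction gives $g = \tfrac{1}{2}(C^2 + C \cdot K_S) + 1 = \tfrac{1}{2}(15 - 7) + 1 = 5$, as claimed.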
Choosing an isomorphism $S \simeq \bl_\Gamma \pp^2$ where $\Gamma = \{q_1, q_2, \ldots, q_6\}$, so that the line $\overline{xy} = E_1$ is the exceptional divisor over $q_1$, we now look for a curve $C \subset S$ of class \[[C] = 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6.\] Again by Lemma~\ref{cubclass}, the general such curve is smooth and irreducible; such a curve has degree~$7$ and genus~$5$, and in particular is a BN-curve by results of \cite{keem}. Note that \[\dim H^0(\oo_S(6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6)) = 12 \quad \text{and} \quad \dim H^0(\oo_S(E_1)) = 1.\] Applying Lemma~\ref{pic-res-enough}, we conclude that some curve of our given class meets $D$ transversely at $p_1, p_2, \ldots, p_{14}$, as desired. It remains to see from this description that $C \cap Q$ is not a general collection of $14$ points. For this, first note that there is a $15$-dimensional space of such curves $D$ (as $\dim H^0(\oo_Q(3,3)) = 16$). On each such curve, there is a $2$-dimensional family of effective divisors $\Delta$; and for fixed $\Delta$, a $10$-dimensional family of divisors linearly equivalent to $2H + \Delta$ (because $\dim H^0(\oo_D(2H + \Delta)) = 11$ by Riemann-Roch). Putting this together, there is an (at most) $15 + 2 + 10 = 27$-dimensional family of such collections of points. But $\sym^{14}(Q)$ has dimension $28$. In particular, collections of such points cannot be general. \end{proof} \begin{prop} Let $C \subset \pp^3$ be a general BN-curve of degree $8$ and genus $6$. Then the intersection $C \cap Q$ is a general collection of $16$ points on a curve of bidegree $(3,3)$ on $Q \simeq \pp^1 \times \pp^1$. In particular, it is not a collection of $16$ general points. \end{prop} \begin{proof} Since $\dim H^0(\oo_C(3)) = 19$ and $\dim H^0(\oo_{\pp^3}(3)) = 20$, we conclude that $C$ lies on a cubic surface. Restricting this cubic to $Q$, we see that $C \cap Q$ lies on a curve of bidegree $(3,3)$.
Conversely, take a general collection $p_1, \ldots, p_{16}$ of $16$ points on a curve $D$ of bidegree $(3,3)$. The divisor $p_1 + \cdots + p_{16} - 2H$ is of degree $4$ on a curve $D$ of genus $4$; it is therefore effective, say \[p_1 + \cdots + p_{16} - 2H \sim x + y + z + w.\] We then claim there is a smooth cubic containing $D$ and the general $2$-secant lines $\overline{xy}$ and $\overline{zw}$ to $D$. Equivalently, we want to show the map \[\{(S, E_1, E_2) : E_1, E_2 \subset S \supset D\} \mapsto \{(E_1, E_2)\},\] from the space of smooth cubic surfaces $S$ containing $D$ with a choice of pair of disjoint lines $(E_1, E_2)$, to the space of pairs of $2$-secant lines to $D$, is dominant; for this it suffices to check the vanishing of $H^1(N_S(-D-E_1 - E_2))$. But $N_S(-D-E_1 - E_2) = 3L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$, which has no higher cohomology by Kawamata-Viehweg vanishing. We now look for a curve $C \subset S$ of class \[[C] = 6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6,\] which is of degree $8$ and genus $6$. By Lemma~\ref{cubclass}, we conclude that $C$ is smooth and irreducible; by results of \cite{keem}, this implies the general curve of this class is a BN-curve. Note that \[\dim H^0(\oo_S(6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6)) = 14 \quad \text{and} \quad \dim H^0(\oo_S(E_1 + E_2)) = 1.\] Applying Lemma~\ref{pic-res-enough}, we conclude that some curve of our given class meets $D$ transversely at $p_1, p_2, \ldots, p_{16}$, as desired. Since general points impose independent conditions on the $16$-dimensional space of curves of bidegree $(3, 3)$, a general collection of $16$ points will not lie on any curve of bidegree $(3,3)$. Our collection of points is therefore not general. \end{proof} \begin{prop} Let $C \subset \pp^4$ be a general BN-curve of degree $9$ and genus $6$. Then the intersection $C \cap H$ is a general collection of $9$ points on an elliptic normal curve in $H \simeq \pp^3$.
In particular, it is not a collection of $9$ general points. \end{prop} \begin{proof} Since $\dim H^0(\oo_C(2)) = 13$ and $\dim H^0(\oo_{\pp^4}(2)) = 15$, we conclude that $C$ lies on the intersection of two quadrics. Restricting these quadrics to $H \simeq \pp^3$, we see that $C \cap H$ lies on the intersection of two quadrics, which is an elliptic normal curve. Conversely, let $p_1, p_2, \ldots, p_9$ be a collection of $9$ points lying on an elliptic normal curve $D \subset \pp^3$. Since $D$ is an elliptic curve, there exists (a unique) $x \in D$ with \[\oo_D(p_1 + \cdots + p_9)(-2) \simeq \oo_D(x).\] Let $M$ be a general line through $x$. We then claim there is a quartic Del Pezzo surface containing $D$ and the general $1$-secant line $M$. Equivalently, we want to show the map \[\{(S, E_1) : E_1 \subset S \supset D\} \mapsto \{E_1\},\] from the space of smooth Del Pezzo surfaces $S$ containing $D$ with a choice of line $E_1$, to the space of $1$-secant lines to $D$, is dominant; for this it suffices to check the vanishing of $H^1(N_S(-D-E_1))$. But $N_S(-D-E_1)$ is a direct sum of two copies of the line bundle $3L - 2E_1 - E_2 - E_3 - E_4 - E_5$, which has no higher cohomology by Kodaira vanishing. We now consider curves $C \subset S$ of class \[[C] = 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5,\] which are of degree $9$ and genus $6$. By Lemma~\ref{qclass}, we conclude that $C$ is smooth and irreducible; by results of \cite{iliev}, this implies the general curve of this class is a BN-curve. Note that \[\dim H^0(\oo_S(6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5)) = 15 \quad \text{and} \quad \dim H^0(\oo_S(3L - E_2 - E_3 - E_4 - E_5)) = 6.\] Applying Lemma~\ref{pic-res-enough}, we conclude that some curve of our given class meets $D$ transversely at $p_1, p_2, \ldots, p_9$, as desired. By Corollary~1.4 of \cite{firstpaper}, there does not exist an elliptic normal curve in $\pp^3$ passing through $9$ general points. 
\end{proof} \subsection{The Final Two Exceptional Cases} We have exactly two remaining exceptional cases: The intersection of a general BN-curve of degree $6$ and genus $2$ in $\pp^3$ with a quadric, and the intersection of a general BN-curve of degree $10$ and genus $7$ in $\pp^4$ with a hyperplane. We will show in the first case that the intersection fails to be general since $C$ is the projection of a curve $\tilde{C} \subset \pp^4$, where $\tilde{C}$ lies on a surface of small degree (a cubic scroll). In the second case, the intersection fails to be general since $C$ is contained in a quadric hypersurface. \begin{prop} Let $C \subset \pp^3$ be a general BN-curve of degree $6$ and genus $2$. Then the intersection $C \cap Q$ is a collection of $12$ points lying on a curve $D \subset Q \simeq \pp^1 \times \pp^1$, which is general subject to the following conditions: \begin{enumerate} \item The curve $D$ is of bidegree $(3, 3)$ (and so is in particular of arithmetic genus $4$). \item The curve $D$ has two nodes (and so is in particular of geometric genus $2$). \item The divisors $\oo_D(2,2)$ and $C \cap D$ are linearly equivalent when pulled back to the normalization of $D$. \end{enumerate} In particular, it is not a collection of $12$ general points. \end{prop} \begin{proof} We first observe that $\dim H^0(\oo_C(1)) = 5$, so $C$ is the projection from a point $p \in \pp^4$ of a curve $\tilde{C} \subset \pp^4$ of degree $6$ and genus $2$. Write $\pi \colon \pp^4 \dashrightarrow \pp^3$ for the map of projection from $p$, and define the quadric hypersurface $\tilde{Q} = \pi^{-1}(Q)$. Let $S \subset \pp^4$ be the surface swept out by joining pairs of points on $\tilde{C}$ conjugate under the hyperelliptic involution. By Corollary~13.3 of \cite{firstpaper}, $S$ is a cubic surface; in particular, since $S$ has a ruling, $S$ is a cubic scroll. Write $H$ for the hyperplane section on $S$, and $F$ for the class of a line of the ruling.
The curve $\tilde{D} = \tilde{Q} \cap S$ (which for $C$ general is smooth by Kleiman transversality) is of degree $6$ and genus $2$. By construction, the intersection $C \cap Q$ lies on $D = \pi(\tilde{D})$. Since $D = \pi(S) \cap Q$, it is evidently a curve of bidegree $(3, 3)$ on $Q \simeq \pp^1 \times \pp^1$. Moreover, since $\tilde{D}$ has genus $2$, the geometric genus of $D$ is $2$. In particular, $D$ has two nodes. Next, we note that on $S$, the curve $\tilde{C}$ has class $2H$. Indeed, if $[\tilde{C}] = a \cdot H + b \cdot F$, then $a = \tilde{C} \cdot F = 2$ and $3a + b = \tilde{C} \cdot H = 6$; solving for $a$ and $b$, we obtain $a = 2$ and $b = 0$. Consequently, $\tilde{C} \cap \tilde{D}$ has class $2H$ on $\tilde{D}$. Equivalently, $C \cap D = \pi(\tilde{C} \cap \tilde{D})$ has class equal to $\oo_D(2) = \oo_D(2,2)$ when pulled back to the normalization. Conversely, take $12$ points on $D$ satisfying our assumptions. Write $\tilde{D}$ for the normalization of $D$, and $p_1, p_2, \ldots, p_{12}$ for the preimages of our points in $\tilde{D}$. We begin by noting that $\dim H^0(\oo_{\tilde{D}}(1)) = 5$, so $D$ is the projection from a point $p \in \pp^4$ of $\tilde{D} \subset \pp^4$ of degree $6$ and genus $2$. As before, write $\pi \colon \pp^4 \dashrightarrow \pp^3$ for the map of projection from $p$, and define the quadric hypersurface $\tilde{Q} = \pi^{-1}(Q)$. Again, we let $S \subset \pp^4$ be the surface swept out by joining pairs of points on $\tilde{D}$ conjugate under the hyperelliptic involution. As before, $S$ is a cubic scroll; write $H$ for the hyperplane section on $S$, and $F$ for the class of a line of the ruling. Note that $\tilde{D} \subseteq \tilde{Q} \cap S$; and since both sides are curves of degree $6$, we have $\tilde{D} = \tilde{Q} \cap S$. It now suffices to find a curve $\tilde{C} \subset S$ of class $2H$, meeting $\tilde{D}$ transversely in $p_1, \ldots, p_{12}$.
For this, note that \[\dim H^0(\oo_S(2H)) = 12 \quad \text{and} \quad \dim H^0(\oo_S) = 1.\] Applying Lemma~\ref{pic-res-enough} yields the desired conclusion. It remains to see from this description that $C \cap Q$ is not a general collection of $12$ points. For this, we first note that such a curve $D \subset \pp^1 \times \pp^1$ is the same as specifying an abstract curve of genus $2$, two line bundles of degree $3$ (corresponding to the pullbacks of $\oo_{\pp^1}(1)$ from each factor), and bases-up-to-scaling for their spaces of sections (giving us two maps $D \to \pp^1$). Since there is a $3$-dimensional moduli space of abstract curves $D$ of genus $2$, and $\dim \pic^3(D) = 2$, and there is a $3$-dimensional family of bases-up-to-scaling of a $2$-dimensional vector space, the dimension of the space of such curves $D$ is $3 + 2 + 2 + 3 + 3 = 13$. Our condition $p_1 + \cdots + p_{12} \sim 2H$ then implies collections of such points on a fixed $D$ are in bijection with elements of $\pp H^0(\oo_D(2H)) \simeq \pp^{10}$. Putting this together, there is an (at most) $13 + 10 = 23$-dimensional family of such collections of points. But $\sym^{12}(Q)$ has dimension $24$. In particular, collections of such points cannot be general. \end{proof} \begin{prop} Let $C \subset \pp^4$ be a general BN-curve of degree $10$ and genus $7$. Then the intersection $C \cap H$ is a general collection of $10$ points on a quadric in $H \simeq \pp^3$. In particular, it is not a collection of $10$ general points. \end{prop} \begin{proof} Since $\dim H^0(\oo_C(2)) = 14$ and $\dim H^0(\oo_{\pp^4}(2)) = 15$, we conclude that $C$ lies on a quadric. Restricting this quadric to $H \simeq \pp^3$, we see that $C \cap H$ lies on a quadric. For the converse, we take general points $p_1, \ldots, p_{10}$ lying on a general (thus smooth) quadric~$Q$. Since $\dim H^0(\oo_Q(3,3)) = 16$, we may find a curve $D \subset Q$ of type $(3,3)$ passing through $p_1, \ldots, p_{10}$.
As divisor classes on $D$, suppose that \[p_1 + p_2 + \cdots + p_{10} - H \sim x + y + z + w.\] We now pick a general (quartic) rational normal curve $R \subset \pp^4$ whose hyperplane section is $\{x, y, z, w\}$. We then claim there is a smooth sextic K3 surface $S \subset \pp^4$ containing $D$ and the general $4$-secant rational normal curve $R$. Equivalently, we want to show the map \[\{(S, R) : R \subset S\} \mapsto \{(R, D)\},\] from the space of pairs $(S, R)$ of a smooth sextic K3 surface $S$ and a rational normal curve $R \subset S$, to the space of pairs $(R, D)$ where $R$ is a rational normal curve meeting the canonical curve $D = S \cap H$ in four points, is dominant; for this it suffices to check the vanishing of $H^1(N_S(-H-R))$ at any smooth sextic K3 containing a rational normal curve $R$ (where $H = [D]$ is the hyperplane class on $S$). We first note that a sextic K3 surface $S$ containing a rational normal curve $R$ exists, by Theorem~1.1 of~\cite{knutsen}. On this K3 surface, our vector bundle $N_S(-H-R)$ is the direct sum of the line bundles $H - R$ and $2H - R$; consequently, it suffices to show $H^1(\oo_S(n)(-R)) = 0$ for $n \geq 1$. For this we use the exact sequence \[0 \to \oo_S(n)(-R) \to \oo_S(n) \to \oo_S(n)|_R = \oo_R(n) \to 0,\] and note that $H^1(\oo_S(n)) = 0$ by Kodaira vanishing, while $H^0(\oo_S(n)) \to H^0(\oo_R(n))$ is surjective since $R$ is projectively normal. This shows the existence of the desired K3 surface $S$ containing $D$ and the general $4$-secant rational normal curve $R$. Next, we claim that the linear series $H + R$ on $S$ is basepoint-free. To see this, we first note that $H$ is basepoint-free, so any basepoints must lie on the curve $R$.
Now the short exact sequence of sheaves \[0 \to \oo_S(H) \to \oo_S(H + R) \to \oo_S(H + R)|_R \to 0\] gives a long exact sequence in cohomology \[\cdots \to H^0(\oo_S(H + R)) \to H^0(\oo_S(H + R)|_R) \to H^1(\oo_S(H)) \to \cdots.\] Since the complete linear series attached to $\oo_S(H + R)|_R \simeq \oo_{\pp^1}(2)$ is basepoint-free, it suffices to show that $H^0(\oo_S(H + R)) \to H^0(\oo_S(H + R)|_R)$ is surjective. For this, it suffices to note that $H^1(\oo_S(H)) = 0$ by Kodaira vanishing. Thus, $H + R$ is basepoint-free. In particular, the Bertini theorem implies the general curve of class $H + R$ is smooth. Such a curve is of degree~$10$ and genus~$7$; in particular it is a BN-curve by results of \cite{iliev}. So it suffices to find a curve of class $H + R$ on $S$ passing through $p_1, p_2, \ldots, p_{10}$. By construction, as divisors on $D$, we have \[p_1 + p_2 + \cdots + p_{10} \sim H + R.\] By Lemma~\ref{pic-res-enough}, it suffices to show $\dim H^0(\oo_S(H + R)) = 8$ and $\dim H^0(\oo_S(R)) = 1$. More generally, for any smooth curve $X \subset S$ of genus $g$, we claim $\dim H^0(\oo_S(X)) = 1 + g$. To see this, we use the exact sequence \[0 \to \oo_S \to \oo_S(X) \to \oo_S(X)|_X \to 0,\] which gives rise to a long exact sequence in cohomology \[0 \to H^0(\oo_S) \to H^0(\oo_S(X)) \to H^0(\oo_S(X)|_X) \to H^1(\oo_S) \to \cdots.\] Because $H^1(\oo_S) = 0$, we thus have \begin{align*} \dim H^0(\oo_S(X)) &= \dim H^0(\oo_S(X)|_X) + \dim H^0(\oo_S) \\ &= \dim H^0(K_S(X)|_X) + 1 \\ &= \dim H^0(K_X) + 1 \\ &= g + 1. \end{align*} In particular, $\dim H^0(\oo_S(H + R)) = 8$ and $\dim H^0(\oo_S(R)) = 1$, as desired. Since general points impose independent conditions on the $10$-dimensional space of quadrics, a general collection of $10$ points will not lie on a quadric. In particular, our hyperplane section here is not a general collection of $10$ points. \end{proof}
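We record the adjunction computation behind the degree and genus claims in the last proof: on the K3 surface $S$ we have $K_S = 0$, $H^2 = 6$, $H \cdot R = 4$, and $R^2 = 2g(R) - 2 = -2$ since $R$ is rational, so
\[(H + R) \cdot H = 6 + 4 = 10, \qquad (H + R)^2 = 6 + 8 - 2 = 12, \qquad g = \tfrac{1}{2}(H + R)^2 + 1 = 7,\]
consistent with $\dim H^0(\oo_S(H + R)) = g + 1 = 8$ computed above.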
\section{Introduction} In this paper, we further develop a technique from \cite{RY} and apply it to study the Kobayashi Conjecture, $0$-cycles on hypersurfaces of general type, and Seshadri constants of very general hypersurfaces. The idea of the technique is to translate results about very general points on very general hypersurfaces to results about arbitrary points on very general hypersurfaces. Our first application is to hyperbolicity. Recall that a complex variety is Brody hyperbolic if it admits no nonconstant holomorphic maps from $\mathbb{C}$. \begin{conjecture}[Kobayashi Conjecture] \label{conj-Kobayashi} A very general hypersurface $X$ of degree $d$ in $\mathbb{P}^n$ is Brody hyperbolic if $d$ is sufficiently large. Moreover, the complement $\mathbb{P}^n \setminus X$ is also Brody hyperbolic for large enough $d$. \end{conjecture} First conjectured in 1970 \cite{K}, the Kobayashi Conjecture has been the subject of intense study, especially in recent years \cite{Siu, Deng, Brotbeck, D2}. The suspected optimal bound for $d$ is approximately $d \geq 2n-1$. However, the best current bound is for $d$ greater than about $(en)^{2n+2}$ \cite{D2}. A related conjecture is the Green-Griffiths-Lang Conjecture. \begin{conjecture}[Green-Griffiths-Lang Conjecture] \label{conj-GGL} If $X$ is a variety of general type, then there is a proper subvariety $Y \subset X$ containing all the entire curves of $X$. \end{conjecture} The Green-Griffiths-Lang Conjecture says that holomorphic images of $\mathbb{C}$ under nonconstant maps do not pass through a general point of $X$. Conjecture \ref{conj-GGL} is well-studied for general hypersurfaces, and it is a natural result to prove on the way to proving Conjecture \ref{conj-Kobayashi}. We provide a new proof of the Kobayashi Conjecture using previous results on the Green-Griffiths-Lang Conjecture.
\begin{theorem} A general hypersurface in $\mathbb{P}^n$ of degree $d$ admits no nonconstant holomorphic maps from $\mathbb{C}$ for $d \geq d_{2n-3}$, where $d_2 = 286, d_3 = 7316$ and \[ d_n = \left\lfloor \frac{n^4}{3} (n \log(n \log(24n)))^n \right\rfloor .\] \end{theorem} Our proof appears to be substantially simpler than the previous proofs (compare with \cite{Siu, Deng, Brotbeck, D2}), and can be adapted in a straightforward way to incorporate the improved bounds for Conjecture \ref{conj-GGL} that others obtain using jet bundles. Unfortunately, the bound of about $(2n \log(n\log(n)))^{2n+1}$ that we obtain is slightly worse than Demailly's bound of $(en)^{2n+2}$. However, assuming the optimal result on the Green-Griffiths-Lang Conjecture, our technique allows us to prove the conjectured bound of $d \geq 2n-1$ for the Kobayashi Conjecture. The Kobayashi Conjecture for complements of hypersurfaces has also been studied by several authors \cite{Dar, BD}. Using results of Darondeau \cite{Dar} along with our Grassmannian technique, we are able to prove the Kobayashi Conjecture for complements as well. \begin{theorem} If $X$ is a general hypersurface in $\mathbb{P}^n$ of degree at least $d_{2n}$, where $d_n = (5n)^2 n^n$, then $\mathbb{P}^n \setminus X$ is Brody hyperbolic. \end{theorem} Our bound of about $100 \cdot 2^n n^{2n+2}$ is slightly worse than Brotbeck and Deng's bound of about $e^3 n^{2n+6}$ \cite{BD}, but our proof is substantially shorter. Our second application concerns the Chow equivalence of points on very general complete intersections. Chen, Lewis, and Sheng \cite{CLS} make the following conjecture, which is inspired by work of Voisin \cite{voisinChow, V2,V3}. \begin{conjecture} \label{conj-CLS} Let $X \subset \mathbb{P}^n$ be a very general complete intersection of multidegree $(d_1, \dots, d_k)$. Then for every $p \in X$, the space of points of $X$ rationally equivalent to $p$ has dimension at most $2n-k-\sum_{i=1}^k d_i$.
If $2n-k-\sum_{i=1}^k d_i < 0$, we understand this to mean that $p$ is equivalent to no other points of $X$. \end{conjecture} If this Conjecture holds, then the result is sharp \cite{CLS}. Voisin \cite{voisinChow, V2, V3} proves Conjecture \ref{conj-CLS} for hypersurfaces in the case $2n-d-1 < -1$. Chen, Lewis, and Sheng \cite{CLS} extend the result to $2n-d-1 = -1$, and also prove the analog of Voisin's bound for complete intersections. Both papers use fairly involved Hodge theory arguments. Roitman \cite{R1,R2} proves the $2n-k-\sum_{i=1}^k d_i = n-2$ case. Using Roitman's result, we prove all but the $2n-k-\sum_{i=1}^k d_i = -1$ case of Conjecture \ref{conj-CLS}, and in this case we prove the result holds with the exception of possibly countably many points. \begin{theorem} \label{thm-chow} If $X \subset \mathbb{P}^n$ is a very general complete intersection of multidegree $(d_1, \dots, d_k)$, then no two points of $X$ are rationally Chow equivalent if $2n-k-\sum_{i=1}^k d_i < -1$. If $2n-k-\sum_{i=1}^k d_i = -1$, then the set of points rationally equivalent to another point of $X$ is a countable union of points. If $2n-k-\sum_{i=1}^k d_i \geq 0$, then the space of points of $X$ rationally equivalent to a fixed point $p \in X$ has dimension at most $2n-k-\sum_{i=1}^k d_i$ in $X$. \end{theorem} Together with Chen, Lewis and Sheng's result, this completely resolves Conjecture \ref{conj-CLS} in the case of hypersurfaces. Our method appears substantially simpler than the previous work of Voisin \cite{voisinChow, V2, V3} and Chen, Lewis, and Sheng \cite{CLS}, although in the case of hypersurfaces, we do not recover the full strength of Chen, Lewis, and Sheng's result. The third result relates to Seshadri constants. Let $\epsilon(p,X)$ be the Seshadri constant of $X$ at the point $p$, defined to be the infimum of $\frac{\deg C}{\operatorname{mult}_p C}$ over all curves $C$ in $X$ passing through $p$. 
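For instance, if $X$ happens to contain a line $\ell$ through $p$, then taking $C = \ell$ gives
\[ \epsilon(p,X) \leq \frac{\deg \ell}{\operatorname{mult}_p \ell} = 1, \]
while curves with high multiplicity at $p$ can force $\epsilon(p,X)$ to be smaller still.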
Let $\epsilon(X)$ be the Seshadri constant of $X$, defined to be the infimum of the $\epsilon(p,X)$ as $p$ varies over the hypersurface. \begin{theorem} \label{thm-seshadri} Let $r > 0$ be a real number. If for a very general hypersurface $X_0 \subset \mathbb{P}^{2n-1}$ of degree $d$ the Seshadri constant $\epsilon(p,X_0)$ of $X_0$ at a general point $p$ is at least $r$, then for a very general $X \subset \mathbb{P}^n$ of degree $d$, the Seshadri constant $\epsilon(X)$ of $X$ is at least $r$. \end{theorem} The layout of the paper is as follows. In Section \ref{sec-technique}, we lay out our general technique, and immediately use it to prove Theorem \ref{thm-seshadri}. In Section \ref{sec-hyperbolicity}, we discuss how to use the results of Section \ref{sec-technique} to prove hyperbolicity results. In Section \ref{sec-cycles}, we discuss how to prove Theorem \ref{thm-chow}. \subsection*{Acknowledgements} We would like to thank Xi Chen, Izzet Coskun, Jean-Pierre Demailly, Mihai P\u{a}un, Chris Skalit, and Matthew Woolf for helpful discussions and comments. \section{The Technique} \label{sec-technique} We set some notation. Let $B$ be the moduli space of complete intersections of multidegree $(d_1, \dots, d_k)$ in $\mathbb{P}^{n+k}$ and $\mathcal{U}_{n,\underline{d}} \subset \mathbb{P}^{n+k} \times B$ be the variety of pairs $([p],[X])$ with $[X] \in B$ and $p \in X$. We refer to elements of $\mathcal{U}_{n,\underline{d}}$ as pointed complete intersections. When we talk about the codimension of a countable union of subvarieties of $\mathcal{U}_{n,\underline{d}}$, we mean the minimum of the codimensions of each component. We need the following result from \cite{RY}. \begin{proposition} \label{GrassmanProp} Let $C \subset \mathbb{G}(r-1,m)$ be a nonempty family of $(r-1)$-planes of codimension $\epsilon > 0$, and let $B \subset \mathbb{G}(r,m)$ be the space of $r$-planes that contain some $(r-1)$-plane $c$ with $[c] \in C$.
Then $\operatorname{codim}(B \subset \mathbb{G}(r,m)) \leq \epsilon -1$. \end{proposition} \begin{proof} For the reader's convenience, we sketch the proof. Consider the incidence-correspondence $I = \{ ([b],[c]) | \: [b] \in B, [c] \in C \} \subset \mathbb{G}(r,m) \times \mathbb{G}(r-1,m)$. The fibers of $\pi_2$ over $C \subset \mathbb{G}(r-1,m)$ are all $\mathbb{P}^{m-r}$'s, while for a general $[b] \in B$, the fiber $\pi_1^{-1}([b])$ has codimension at least $1$ in the $\mathbb{P}^r$ of $(r-1)$-planes contained in $b$ (since otherwise it can be shown that $C = \mathbb{G}(r-1,m)$). The result follows by a dimension count. \end{proof} We need a few other notions for the proof. A parameterized $r$-plane in $\mathbb{P}^m$ is a degree one injective map $\Lambda: \mathbb{P}^r \to \mathbb{P}^m$. Let $G_{r,m,p}$ be the space of parameterized $r$-planes in $\mathbb{P}^m$ whose images pass through $p$. If $(p,X)$ is a pointed hypersurface in $\mathbb{P}^m$, a parameterized $r$-plane section of $(p,X)$ is a pair $(\Lambda^{-1}(p), \Lambda^{-1}(X)) =: \Lambda^* (p,X)$, where $\Lambda: \mathbb{P}^r \to \mathbb{P}^m$ is a parameterized $r$-plane whose image does not lie entirely in $X$. We say that $\Lambda: \mathbb{P}^r \to \mathbb{P}^m$ contains $\Lambda': \mathbb{P}^{r-1} \to \mathbb{P}^m$ if $\Lambda(\mathbb{P}^r)$ contains $\Lambda'(\mathbb{P}^{r-1})$. \begin{corollary} \label{GrassmanCor} If $C \subset G_{r-1,m,p}$ is a nonempty subvariety of codimension $\epsilon > 0$ and $B \subset G_{r,m,p}$ is the subvariety of parameterized $r$-planes that contain some $(r-1)$-plane $[c] \in C$, then $\operatorname{codim}(B \subset G_{r,m,p}) \leq \epsilon -1$. \end{corollary} Let $\mathcal{X}_{n,\underline{d}} \subset \mathcal{U}_{n,\underline{d}}$ be an open subset. For instance, $\mathcal{X}_{n,\underline{d}}$ might be equal to $\mathcal{U}_{n,\underline{d}}$ or it might be the universal complete intersection over the space of smooth complete intersections.
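For completeness, the dimension count concluding the proof of Proposition~\ref{GrassmanProp} can be made explicit: the fibers of $\pi_2$ are $(m-r)$-dimensional, and the general fiber of $\pi_1$ over $B$ is at most $(r-1)$-dimensional, so
\[ \dim B \geq \dim C + (m - r) - (r - 1) = r(m-r+1) - \epsilon + (m - r) - (r - 1); \]
since $\dim \mathbb{G}(r,m) = (r+1)(m-r)$, subtracting gives $\operatorname{codim}(B \subset \mathbb{G}(r,m)) \leq \epsilon - 1$.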
Our main technical tool is the following. \begin{theorem} \label{thm-technicaltool} Suppose we have an integer $m$ and for each $n \leq m$ we have $Z_{n,\underline{d}} \subset \mathcal{X}_{n,\underline{d}}$ a countable union of locally closed varieties satisfying: \begin{enumerate} \item If $(p,X) \in Z_{n,\underline{d}}$ and is a parameterized hyperplane section of $(p',X')$, then $(p',X') \in Z_{n+1,\underline{d}}$. \item $Z_{m-1,\underline{d}}$ has codimension at least $1$ in $\mathcal{X}_{m-1,\underline{d}}$. \end{enumerate} Then the codimension of $Z_{m-c,\underline{d}}$ in $\mathcal{X}_{m-c,\underline{d}}$ is at least $c$. \end{theorem} \begin{proof}[Proof of Theorem \ref{thm-technicaltool}] We adopt the method from \cite{RY}. We prove that for a very general point $(p_0,X_0)$ of a component of $Z_{m-c,\underline{d}}$, there is a variety $\mathcal{F}_{m-c}$ and a map $\phi: \mathcal{F}_{m-c} \to \mathcal{U}_{m-c,\underline{d}}$ with $(p_0,X_0) \in \phi(\mathcal{F}_{m-c})$ and $\operatorname{codim}(\phi^{-1}(Z_{m-c,\underline{d}}) \subset \mathcal{F}_{m-c}) \geq c$. This suffices to prove the result. So, let $(p_0, X_0)$ be a general point of a component of $Z_{m-c,\underline{d}}$, and let $(p_1,X_1) \in \mathcal{X}_{m-1,\underline{d}}$ be very general, so that $(p_1,X_1)$ is not in the closure of any component of $Z_{m-1,\underline{d}}$ by hypothesis 2. Choose $(p,Y) \in \mathcal{X}_{N,\underline{d}}$ for some sufficiently large $N$ such that $(p_0,X_0)$ and $(p_1,X_1)$ are parameterized linear sections of $(p,Y)$. Then for all $n < N$, let $\mathcal{F}_{n}$ be the space of parameterized $n$-planes in $\mathbb{P}^N$ passing through $p$ such that for $\Lambda \in \mathcal{F}_{n}$, $\Lambda^*(p,Y)$ is in $\mathcal{X}_{n,\underline{d}}$. This means that $\mathcal{F}_n$ is an open subset of $G_{n,N,p}$. Let $\phi: \mathcal{F}_{n} \to \mathcal{X}_{n,\underline{d}}$ be the map sending $\Lambda: \mathbb{P}^n \to \mathbb{P}^N$ to $\Lambda^*(p,Y)$.
We prove that $\operatorname{codim}(\phi^{-1}(Z_{m-c,\underline{d}}) \subset \mathcal{F}_{m-c}) \geq c$ by induction on $c$. For the $c=1$ case, we see by construction that $\phi^{-1}(Z_{m-1,\underline{d}})$ has codimension at least $1$ in $\mathcal{F}_{m-1}$, since $(p_1,X_1)$ is a parameterized $(m-1)$-plane section of $(p,Y)$ but is not in the closure of any component of $Z_{m-1,\underline{d}}$. Now suppose we know that $\operatorname{codim}(\phi^{-1}(Z_{m-c,\underline{d}}) \subset \mathcal{F}_{m-c}) \geq c$. We use Corollary \ref{GrassmanCor} with $C$ equal to $\phi^{-1}(Z_{m-c-1,\underline{d}})$. By hypothesis 1, we see that $B$ is contained in $\phi^{-1}(Z_{m-c,\underline{d}})$. It follows from this that \[ c \leq \operatorname{codim}( \phi^{-1}(Z_{m-c,\underline{d}}) \subset \mathcal{F}_{m-c}) \leq \operatorname{codim}(B \subset \mathcal{F}_{m-c}) \leq \operatorname{codim}(C \subset \mathcal{F}_{m-c-1}) -1 . \] Rearranging, we see that \[ \operatorname{codim}( \phi^{-1}(Z_{m-c-1,\underline{d}}) \subset \mathcal{F}_{m-c-1}) = \operatorname{codim}(C \subset \mathcal{F}_{m-c-1}) \geq c+1 . \] The result follows. \end{proof} As an immediate application, we prove Theorem \ref{thm-seshadri}. \begin{proof}[Proof of Theorem \ref{thm-seshadri}] Let $r$ be given. Let $Z_{m,d} \subset \mathcal{U}_{m,d}$ be the set of pairs $(p,X)$ where $\epsilon(p,X) < r$. We apply Theorem \ref{thm-technicaltool} to $Z_{m,d}$. We see that $Z_{m,d}$ is a countable union of algebraic varieties, and by hypothesis, $Z_{2n-1,d} \subset \mathcal{U}_{2n-1,d}$ has codimension at least $1$. Now suppose that $(p_0,X_0) \in Z_{m,d}$. Then there is some curve $C$ in $X_0$ with $\frac{\deg C}{\operatorname{mult}_{p_0} C} < r$. Thus, for any $X$ containing $X_0$ (and hence $C$), the Seshadri constant of $X$ at $p_0$ is at most $\frac{\deg C}{\operatorname{mult}_{p_0} C} < r$.
This shows that the $Z_{m,d}$ satisfy the conditions of Theorem \ref{thm-technicaltool}, which shows that $Z_{n,d} \subset \mathcal{U}_{n,d}$ has codimension at least $n$. For dimension reasons, this means that $Z_{n,d}$ cannot dominate the space of hypersurfaces, so the result follows. \end{proof} \section{Hyperbolicity} \label{sec-hyperbolicity} Let $\mathcal{X}_{n,d}$ be the universal hypersurface in $\mathbb{P}^n$ over the open subset $U$ of the moduli space of all degree $d$ hypersurfaces in $\mathbb{P}^n$ consisting of the smooth hypersurfaces. Many people have developed techniques for restricting the entire curves contained in a fiber of the map $\pi_2: \mathcal{X}_{n,d} \to U$. See the article of Demailly for a detailed description of some of this work \cite{D2}. For a variety $X$, let $J_k(X)$ be the space of $k$-jets of $X$, with evaluation map $\operatorname{ev}: J_k(X) \to X$. Then, if $X \subset \mathbb{P}^n$ is a smooth degree $d$ hypersurface, there is a vector bundle $E_{k,m}^{GG} T_X^*$ whose sections act on $J_k(X)$. Global sections of $E_{k,m}^{GG} T_X^* \otimes \mathcal{O}(-H)$ vanish on the $k$-jets of entire curves. This means that sections of $E_{k,m}^{GG} T_X^* \otimes \mathcal{O}(-H)$ cut out a closed subvariety $S'_{k,m}(X) \subset J_k(X)$ such that any entire curve is contained in $\operatorname{ev}(S'_{k,m}(X))$. In fact, it can be shown that any entire curve is contained in the closure of $\operatorname{ev}(S_{k,m}(X))$, where $S_{k,m}(X) \subset J_k(X)$ is $S'_{k,m}(X)$ minus the space of singular $k$-jets. The construction is functorial. In particular, if $V$ is the relative tangent bundle of the map $\pi_2$, there is a vector bundle $E_{k,m}^{GG} V^*$ whose restriction to each fiber of $\pi_2$ is $E_{k,m}^{GG} T_{X}^*$. Let $\mathcal{Y}_{n,d} \subset \mathcal{X}_{n,d}$ be the locus of $(p,X) \in \mathcal{X}_{n,d}$ such that $p \in \operatorname{ev}(S_{k,m}(X))$. Then by functoriality, $\mathcal{Y}_{n,d}$ is a finite union of locally closed varieties.
\begin{theorem} \label{thm-Hyperbol} Suppose that $\mathcal{Y}_{r-1,d} \subset \mathcal{X}_{r-1,d}$ has codimension at least 1. Then $\mathcal{Y}_{r-c,d} \subset \mathcal{X}_{r-c,d}$ has codimension at least $c$. In particular, if $\mathcal{Y}_{2n-3, d}$ has codimension at least $1$ in $\mathcal{X}_{2n-3,d}$ and $d \geq 2n-1$, then a very general $X \subset \mathbb{P}^n$ of degree $d$ is hyperbolic. \end{theorem} \begin{proof} We check that $\mathcal{Y}_{r-1,d}$ satisfies both conditions of Theorem \ref{thm-technicaltool}. Condition 2 is a hypothesis. Condition 1 follows by the functoriality of Demailly's construction. Namely, if $(p,X_0)$ is a parameterized linear section of $(p,X)$, then the natural map $X_0 \to X$ induces a pullback map on sections \[ H^0(E_{k,m}^{GG} T_X^* \otimes \mathcal{O}(-H)) \to H^0(E_{k,m}^{GG} T_{X_0}^* \otimes \mathcal{O}(-H)) , \] compatible with the natural inclusion of jets $J_k(X_0) \to J_k(X)$. In particular, if some section $s$ of $H^0(E_{k,m}^{GG} T_X^* \otimes \mathcal{O}(-H))$ takes a nonzero value on a jet $\alpha(j)$, where $j \in J_k(X_0)$ and $\alpha:J_k(X_0) \to J_k(X)$ is the natural inclusion, then the restriction of $s$ to $X_0$ takes a nonzero value on the original jet $j \in J_k(X_0)$. Thus, if $X_0$ has a nonsingular $k$-jet at $p$ which is annihilated by every section in $H^0(E_{k,m}^{GG} T_{X_0}^* \otimes \mathcal{O}(-H))$, then $X$ has such a $k$-jet as well. To see the second statement, observe that by Theorem \ref{thm-technicaltool}, $\mathcal{Y}_{n,d}$ has codimension in $\mathcal{X}_{n,d}$ at least $2n-3-n+1 = n-2$. Since the fibers of $\pi_2: \mathcal{X}_{n,d} \to U$ have dimension $n-1$, it follows that for a very general $X$ of degree $d$ in $\mathbb{P}^n$, the locus $\operatorname{ev}(S_{k,m}(X))$ has dimension at most $(n-1)-(n-2)=1$, so the image of any entire curve is contained in an algebraic curve. Since $d \geq 2n-1$, by a theorem of Voisin \cite{V2, V3}, any algebraic curve in $X$ is of general type, so $X$ contains no entire curves. The result follows. \end{proof} The current best bound for the Green-Griffiths-Lang Conjecture is due to Demailly \cite{D1,D2}.
The version we use comes out of Demailly's proof. \begin{theorem}[\cite{D2}, Section 10] If $k$ and $m$ are sufficiently large, we have that $\mathcal{Y}_{n,d} \subset \mathcal{X}_{n,d}$ has codimension at least $1$ for $d \geq d_n$, where $d_2 = 286, d_3 = 7316$ and \[ d_n = \left\lfloor \frac{n^4}{3} (n \log(n \log(24n)))^n \right\rfloor .\] \end{theorem} Using this bound, we obtain the following. \begin{corollary} The Kobayashi Conjecture holds for hypersurfaces in $\mathbb{P}^n$ when $d \geq d_{2n-3}$, where $d_2 = 286, d_3 = 7316$ and \[ d_n = \left\lfloor \frac{n^4}{3} (n \log(n \log(24n)))^n \right\rfloor .\] \end{corollary} This bound of about $(2n \log (n\log n))^{2n+1}$ is slightly worse than the best current bound for the Kobayashi Conjecture from \cite{D2}, which is about $(en)^{2n+2}$. However, our technique is strong enough to allow us to prove the optimal bound for the Kobayashi Conjecture, provided one could prove the optimal result for the Green-Griffiths-Lang Conjecture. \begin{corollary} If $\mathcal{Y}_{d-2,d}$ has codimension at least $1$ in $\mathcal{X}_{d-2,d}$ (as we would expect from the Green-Griffiths-Lang Conjecture), then a very general hypersurface of degree $d \geq 2n-1$ in $\mathbb{P}^n$ is hyperbolic. \end{corollary} \begin{proof} We apply Theorem \ref{thm-Hyperbol}. We know that if $\mathcal{Y}_{2n-3, d}$ has codimension at least $1$ in $\mathcal{X}_{2n-3,d}$, then the Kobayashi Conjecture holds for hypersurfaces in $\mathbb{P}^n$ of degree $d$. We apply this result with $d = 2n-1$. \end{proof} Work has also been done on the hyperbolicity of complements of hypersurfaces in $\mathbb{P}^n$. There are similar jet bundle techniques in this case. Given a variety $Z$, an ample line bundle $A$, a subsheaf $V \subset T_Z$, and a subvariety $X \subset Z$, one can construct the vector bundles $E_{k,m} V^*(\log X)$, sections of which act on $k$-jets of $Z \setminus X$.
It can be shown that any section of $H^0(E_{k,m} V^*(\log X) \otimes A^{*})$ must vanish on the $k$-jet of any entire curve in $Z \setminus X$. Then sections of $H^0(E_{k,m} V^*(\log X) \otimes A^{*})$ cut out a subvariety $S_{k,m}$ in the locus of nonsingular $k$-jets on $Z \setminus X$ such that any entire curve lies in the closure of $\operatorname{ev}(S_{k,m})$. Darondeau \cite{Dar} studies these objects for hypersurfaces in $\mathbb{P}^n$. Namely, he proves the following. \begin{theorem}[Darondeau] \label{thm-Darondeau} Let $d \geq (5n)^2 n^n$, let $U_{n,d}$ be the space of smooth degree $d$ hypersurfaces in $\mathbb{P}^n$ and consider the space $\mathbb{P}^n \times U_{n,d}$, with divisor $\mathcal{X}_{n,d}$ corresponding to the universal hypersurface. Let $V = T_{\pi_2}$ be the relative tangent bundle of the projection of $\mathbb{P}^n \times U_{n,d}$ onto $U_{n,d}$, and consider the locus $S_{k,m}$ cut out by sections in $H^0(E_{k,m} V^*(\log \mathcal{X}_{n,d}) \otimes \mathcal{O}(-H))$. Then $\operatorname{ev}(S_{k,m})$ has codimension at least $2$ in $\mathbb{P}^n \times U_{n,d}$. \end{theorem} Using our technique, we obtain the following effective form of the Kobayashi Conjecture for complements. \begin{theorem} Let $Z_{n,d} \subset (\mathbb{P}^n \times U_{n,d}) \setminus \mathcal{X}_{n,d}$ be the locus of pairs $(p,X)$, with $p \in \mathbb{P}^n \setminus X$, lying in $\operatorname{ev}(S_{k,m})$. Then if $Z_{r-1,d}$ has codimension at least $1$ in $(\mathbb{P}^{r-1} \times U_{r-1,d}) \setminus \mathcal{X}_{r-1,d}$, we have that $Z_{r-c,d}$ has codimension at least $c$ in $(\mathbb{P}^{r-c} \times U_{r-c,d}) \setminus \mathcal{X}_{r-c,d}$. \end{theorem} \begin{proof} The proof is very similar in spirit to the proof of Theorem \ref{thm-technicaltool}, but we spell it out here for completeness. Let $(p, X_0)$ be a general point of a component of $Z_{r-c,d}$.
We will find a family $\phi: \mathcal{F}_{r-c} \to (\mathbb{P}^{r-c} \times U_{r-c,d}) \setminus \mathcal{X}_{r-c,d}$ with $\phi^{-1}(Z_{r-c,d})$ having codimension at least $c$ in $\mathcal{F}_{r-c}$. Let $(p, X_1) \in (\mathbb{P}^{r-1} \times U_{r-1,d}) \setminus \mathcal{X}_{r-1,d}$ be very general, so that $(p,X_1)$ is not in the closure of any component of $Z_{r-1,d}$. Let $Y$ be a hypersurface in some high-dimensional projective space $\mathbb{P}^N$, together with a point $p \in \mathbb{P}^N \setminus Y$, such that $(p,X_0)$ and $(p,X_1)$ are both parameterized linear sections of $(p,Y)$. Let $\mathcal{F}_{r-c}$ be the space of parameterized $(r-c)$-planes in $\mathbb{P}^N$ passing through $p$. Then we have a natural map $\phi: \mathcal{F}_{r-c} \to (\mathbb{P}^{r-c} \times U_{r-c,d}) \setminus \mathcal{X}_{r-c,d}$ sending the parameterized $(r-c)$-plane $\Lambda$ to $\Lambda^* (p,Y)$. By construction, the image of $\phi$ will certainly contain the point $(p,X_0)$, and since $(p,X_1)$ is very general, $\phi^{-1}(Z_{r-1,d})$ will have codimension at least $1$ in $\mathcal{F}_{r-1}$. Observe that if $(p,Y_0) \in Z_{r-c-1,d}$ is a linear section of $(p,Y_1)$, then $(p,Y_1) \in Z_{r-c,d}$: if there is a nonsingular jet $j$ at $p$ such that all the sections of $E_{k,m} T_{\mathbb{P}^{r-c-1}}^*(\log Y_0) \otimes \mathcal{O}(-H)$ vanish on $j$, then certainly all the sections of $E_{k,m} T_{\mathbb{P}^{r-c}}^*(\log Y_1) \otimes \mathcal{O}(-H)$ will vanish on $j$. By repeated application of Corollary \ref{GrassmanCor}, we see that the codimension of $\phi^{-1}(Z_{r-c,d})$ in $\mathcal{F}_{r-c}$ is at least $c$. This concludes the proof. \end{proof} It follows that if $Z_{n,d} \subset (\mathbb{P}^n \times U_{n,d}) \setminus \mathcal{X}_{n,d}$ has codimension at least $1$ for $d \geq d_n$, then a very general hypersurface complement in $\mathbb{P}^n$ is Brody hyperbolic when $d \geq d_{2n-1}$.
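For concreteness, the degree bounds appearing above can be tabulated with a short script. This is only a sketch: the values for $n = 2, 3$ are the ones quoted explicitly in the theorem, we read the closed-form expression with natural logarithms as applying for larger $n$, and the helper names are ours.

```python
import math

def demailly_bound(n):
    """Degree bound d_n above which Y_{n,d} has codimension >= 1.

    The n = 2, 3 values are the ones quoted explicitly in the text;
    the closed-form expression (read with natural logarithms) is
    assumed to apply for n >= 4.
    """
    if n == 2:
        return 286
    if n == 3:
        return 7316
    return math.floor(n**4 / 3 * (n * math.log(n * math.log(24 * n)))**n)

def kobayashi_bound(n):
    """Degree bound d_{2n-3} for the Kobayashi Conjecture in P^n (n >= 3)."""
    return demailly_bound(2 * n - 3)

print(kobayashi_bound(3))  # prints 7316, i.e. d_3
```

The values grow roughly like $(2n \log(n \log n))^{2n+1}$, as noted above.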
Using this together with Darondeau's result, we obtain the following corollary, weakening the bound a bit for brevity. \begin{corollary} Let $d \geq (10n)^2 (2n)^{2n}$. Then if $X \subset \mathbb{P}^n$ is very general, the complement $\mathbb{P}^n \setminus X$ is Brody hyperbolic. \end{corollary} \section{$0$-cycles} \label{sec-cycles} Let $R_{\mathbb{P}^1, X, p} = \{ q \in X \mid Nq \sim Np \text{ for some integer $N$} \}$, where $\sim$ denotes Chow equivalence. The goal of this section is to prove all but the $2n - k - \sum_{i=1}^k d_i = -1$ case of the following conjecture of Chen, Lewis and Sheng \cite{CLS}. \begin{conjecture} \label{mainConj} Let $X \subset \mathbb{P}^{n}$ be a very general complete intersection of multidegree $(d_1, \dots, d_k)$. Then for every $p \in X$, $\operatorname{dim} R_{\mathbb{P}^1, X, p} \leq 2n -k - \sum_{i=1}^k d_i$. \end{conjecture} Here, we adopt the convention that $\operatorname{dim} R_{\mathbb{P}^1, X, p}$ is negative if $R_{\mathbb{P}^1, X, p} = \{p \}$. Together with the main result of \cite{CLS}, this completely resolves Conjecture \ref{mainConj} in the case of hypersurfaces. Chen, Lewis and Sheng consider the more general notion of $\Gamma$-equivalence, although we are unable to prove the $\Gamma$-equivalence version here. The special case $\sum_i d_i = n+1$ is a theorem of Roitman \cite{R1, R2}, and the case $2n -k - \sum_{i=1}^k d_i \leq -2$ is a theorem of Chen, Lewis and Sheng \cite{CLS} building on work of Voisin \cite{voisinChow, V2, V3}, whose results apply only to hypersurfaces. Chen, Lewis and Sheng prove Conjecture \ref{mainConj} for hypersurfaces and for arbitrary $\Gamma$ in the boundary case $2n - k - \sum_{i=1}^k d_i = -1$ in \cite{CLS}. The case $2n - k - \sum_{i=1}^k d_i = -1$ appears to be the most difficult, and is the only one we cannot completely resolve with our technique. We provide an independent proof of all but the $2n - k - \sum_{i=1}^k d_i = -1$ case of Conjecture \ref{mainConj}.
Aside from Roitman's result, this is the first result we are aware of addressing the case $2n - k - \sum_{i=1}^k d_i \geq 0$. We rely on the result of Roitman in our proof, but not the results of Voisin \cite{voisinChow, V2} or Chen, Lewis, and Sheng \cite{CLS}. Let $E_{n,\underline{d}} \subset \mathcal{U}_{n,\underline{d}}$ be the set of $(p,X)$ such that $R_{\mathbb{P}^1, X, p}$ has dimension at least $1$. Let $G_{n,\underline{d}} \subset \mathcal{U}_{n,\underline{d}}$ be the set of $(p,X)$ such that $R_{\mathbb{P}^1,X,p}$ is not equal to $\{p\}$. Both $E_{n,\underline{d}}$ and $G_{n,\underline{d}}$ are countable unions of closed subvarieties of $\mathcal{U}_{n,\underline{d}}$. When we talk about the codimension of $E_{n,\underline{d}}$ or $G_{n,\underline{d}}$ in $\mathcal{U}_{n,\underline{d}}$, we mean the minimum of the codimensions of each component. We prove Conjecture \ref{mainConj} by proving the following theorem. \begin{theorem} \label{mainThm} The codimension of $E_{n,\underline{d}}$ in $\mathcal{U}_{n,\underline{d}}$ is at least $-n + \sum_i d_i$ and the codimension of $G_{n,\underline{d}}$ in $\mathcal{U}_{n,\underline{d}}$ is at least $-n-1 + \sum_i d_i$. \end{theorem} \begin{corollary} Conjecture \ref{mainConj} holds for $2n- k - \sum_{i=1}^k d_i \neq -1$. In the special case $2n-k- \sum_{i=1}^k d_i = -1$, the space of $p \in X$ Chow-equivalent to some other point of $X$ has dimension $0$ (i.e., is a countable union of points) but might not be empty as Conjecture \ref{mainConj} predicts. \end{corollary} \begin{proof} First we consider the case $2n - k - \sum_i d_i \geq 0$. Let $\pi_1: \mathcal{U}_{n,\underline{d}} \to B$ be the projection map. If $\pi_1|_{E_{n,\underline{d}}}$ is not dominant, then the result holds trivially. Thus, we may assume that the very general fiber of $\pi_1|_{E_{n,\underline{d}}}$ has dimension $n - k - \operatorname{codim}(E_{n,\underline{d}} \subset \mathcal{U}_{n,\underline{d}})$. 
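Spelling out the count: combining this fiber dimension with the bound on $E_{n,\underline{d}}$ from Theorem \ref{mainThm} gives
\[
\dim \bigl(\pi_1|_{E_{n,\underline{d}}}\bigr)^{-1}([X]) = (n-k) - \operatorname{codim}\bigl(E_{n,\underline{d}} \subset \mathcal{U}_{n,\underline{d}}\bigr) \leq (n-k) - \Bigl(\sum_i d_i - n\Bigr) = 2n - k - \sum_i d_i .
\]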
If the bound on $E_{n,\underline{d}}$ from Theorem \ref{mainThm} holds, then the space of points $p$ of $X$ with positive-dimensional $R_{\mathbb{P}^1,X,p}$ has dimension at most $2n - k - \sum_i d_i$, which implies Conjecture \ref{mainConj} in the case $2n - k - \sum_i d_i \geq 0$. Now we consider the situation for $2n - k - \sum_i d_i \leq -1$. Conjecture \ref{mainConj} states that $\pi_1|_{G_{n,\underline{d}}}$ is not dominant for this range. By Theorem \ref{mainThm}, we see that the dimension of $G_{n,\underline{d}}$ is less than the dimension of $B$ if $2n - k - \sum_i d_i \leq -2$, proving Conjecture \ref{mainConj}. In the case $2n - k-\sum_i d_i = -1$, the dimension of $G_{n,\underline{d}}$ is at most that of $B$, which shows that there are at most countably many points of $X$ equivalent to another point of $X$. This proves the result. \end{proof} Our technique would prove Conjecture \ref{mainConj} in all cases if we knew that $G_{n,\underline{d}}$ had codimension $-n + \sum_{i=1}^k d_i$ in $\mathcal{U}_{n,\underline{d}}$. However, this is not true for Calabi-Yau hypersurfaces. \begin{proposition} A general point of a very general Calabi-Yau hypersurface $X$ is rationally equivalent to at least one other point of the hypersurface. \end{proposition} \begin{proof} Let $X$ be a very general Calabi-Yau hypersurface. Then we claim that a general point of $X$ is Chow equivalent to another point of $X$. To see this, observe that any point $p$ of $X$ has finitely many lines meeting $X$ to order $d-1$ at $p$. Such a line meets $X$ in a single other point. Moreover, every point of $X$ has a line passing through it that meets $X$ at another point of $X$ with multiplicity $d-1$. Thus, let $q_1$ be a general point of $X$, let $\ell_1$ be a line through $q_1$ meeting $X$ at a second point $p$ to order $d-1$, and let $\ell_2$ be a different line meeting $X$ at $p$ to order $d-1$. Let $q_2$ be the residual intersection of $\ell_2$ with $X$. Then $q_1 \sim q_2$.
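In symbols, the equivalence can be sketched as follows. The intersection $0$-cycles are
\[
\ell_1 \cdot X = q_1 + (d-1)p, \qquad \ell_2 \cdot X = q_2 + (d-1)p,
\]
and since $\ell_1$ and $\ell_2$ both pass through $p$, they belong to the pencil of lines through $p$ in the plane they span; the $0$-cycles cut on $X$ by this pencil are all rationally equivalent, so $\ell_1 \cdot X \sim \ell_2 \cdot X$ in $\operatorname{CH}_0(X)$, and subtracting $(d-1)p$ gives $q_1 \sim q_2$.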
\end{proof} \begin{proof}[Proof of Theorem \ref{mainThm}] First consider the bound on $E_{n,\underline{d}}$. By Roitman's Theorem plus the fact that $R_{\mathbb{P}^1,X,p}$ is a countable union of closed varieties, we see that $E_{-1+\sum_{i=1}^k d_i,\underline{d}}$ has codimension at least one in $\mathcal{U}_{-1 + \sum_{i=1}^k d_i,\underline{d}}$. We note that if $p \sim q$ as points of $Y$, and $Y \subset Y'$, then $p \sim q$ as points of $Y'$ as well. The rest of the result follows from Theorem \ref{thm-technicaltool} using $Z_{n,\underline{d}} = E_{n,\underline{d}}$. Now consider $G_{n,\underline{d}}$. From Roitman's theorem, it follows that a very general point of a Calabi-Yau complete intersection $X$ is equivalent to at most countably many other points of $X$. Thus, a very general hyperplane section of such an $X$ satisfies the property that the very general point is equivalent to no other points of $X$. From this, we see that $G_{-2+\sum_{i=1}^k d_i,\underline{d}}$ has codimension at least $1$ in $\mathcal{U}_{-2+\sum_{i=1}^k d_i,\underline{d}}$. Together with Theorem \ref{thm-technicaltool}, this implies the result. \end{proof} \bibliographystyle{plain}
\section{Introduction} Periodic comet 12P/Pons-Brooks was discovered by J. L. Pons on 1812 July 21 from Marseille, France; the comet was followed during that apparition until September 28. It was re-discovered by W. R. Brooks on 1883 September 2 from Phelps, NY, USA, and followed until 1884 June 2; during that apparition, the comet experienced several outbursts. At its next apparition, in 1954, comet 12P also exhibited several outbursts. Condensed observational details of all three apparitions can be found in Kronk (2003, 2009). Despite the fairly high absolute magnitude of around 4-5 and the comet's apparent tendency to have occasional outbursts, it seems that no searches for earlier appearances have been made using historical data. In February 2020, the first author integrated the orbit of 12P backward to about the year 1000. The calculations used data from the apparitions of 1883-1884 and 1953-1954 (taken from the Minor Planet Center's online database). The backward integration showed that the orbit of this comet is very stable and does not experience strong planetary gravitational perturbations over the covered period. A check of different cometographies then showed that the first comet of 1457 and the comet of 1385 were almost perfect matches concerning the perihelion time. As a next step, these backward-integrated orbits were compared with catalogued orbits for the 1385 and 1457 comets, using a planetarium software program (``GUIDE", by B. Gray; cf. website URL {\tt https://www.projectpluto.com}), which showed that the integrated orbits for 12P were fully compatible with the observed paths and the observational circumstances of the 1457 and 1385 comets. Not only did 12P appear within the area indicated by the ancient observations, but the sense of movement also fit perfectly. By adjusting the perihelion time by a few days for each of these apparitions, the match could be brought even closer.
\section{The Comet of 1457} In 1864 a manuscript by the Italian cartographer and astronomer P. Toscanelli was found in the National Library in Florence, Italy. The manuscript contained observations of six comets seen by Toscanelli in the 15th century, with their positions drawn into celestial maps of his own making. After the discovery of the manuscripts, G. Celoria performed in-depth investigations of Toscanelli's manuscripts and derived orbits from the positions drawn by Toscanelli. The first comet of 1457 was observed on five consecutive nights, 1457 January 23-27. Celoria published his analyses, including the derived orbits, several times (Celoria 1884, 1894, 1921). His orbit is based on three of the five observations and was checked via comparison with the other two. He correctly states that the orbit is not of much accuracy due to the very short arc. The Toscanelli drawings show a short tail extending to about $0.5^{\circ}$. There are indications of a coordinate grid in Toscanelli's drawings that at times seems incomplete, but it helped Celoria to identify the area of the sky where the comet was seen. A more contemporary analysis of Toscanelli's maps can be found in Jervis (1985). Figures 1 and 2 show the original drawing by Toscanelli and the representation by Celoria, respectively. It can be seen from the images that the accuracy can only be good to maybe a few degrees, and any orbit derived from them is prone to considerable uncertainty, something that Celoria (1894) acknowledged quite clearly: ``The observations, despite their nature, are quite well represented from my orbital elements, but they, even if of a remarkable precision for that time, are too close to each other and too few to judge with certainty how much the orbital elements themselves are close to the real ones."\\ \begin{center} \includegraphics[width=0.8\textwidth]{"figure1.jpg"}\\ \end{center} Figure 1: Drawing of the positions of the comet of January 1457 by P. Toscanelli.
Image courtesy F. Stoppa, Milan, Italy.\\ But Toscanelli was apparently not the only observer of this comet. There exists a Chinese observation of the 1457 comet that, however, bears a problem. The Chinese reports are as follows (Pankenier, Xu, and Jiang 2008) and are given for 1457 January 14: ``7th year of the Jingtai reign period of Emperor Yingzong of the Ming Dynasty, 12th month, day jiayin [51], at night, a broom star with bright rays 5 cun long reappeared in lunar mansion BI [LM 19], slowly traveling southeastward. Its bright rays gradually lengthened from this day through day guihai [60]. [\textit{Ming Yingzong shilu}] ch. 273". Another Chinese source given there (\textit{Ming shi: tianwen zhi}, ch. 27) supplies basically the same information.\\ \begin{center} \includegraphics[width=0.80\textwidth]{"figure2.jpg"}\\ \end{center} Figure 2: Representation of Toscanelli's drawing by G. Celoria (1894). Image courtesy F. Stoppa, Milan, Italy.\\ Struyck (1740, p.\ 247) wrote: ``Besides the story about the Comet one finds in the last mentioned Chronicle [{\it Anton.\ de Ripalta Annal.\ Placent., col.\ 905}] that in the Year 1456, in the Month of December, and in the Year 1457, in January, four strange Stars appeared, moving from the East to the West, almost in the shape of a Cross. This could easily be some Fixed Stars or one or more Planets. {\it Ludov.\ Cavitel.\ Cremonen.\ Annal.}\ (col.\ 1456, tom.\ 3, par.\ 2) tells about a Comet that was seen in the Year 1456, the 5th of December, and another one in January ..." Interestingly, both Struyck and Lubienietz (1667) mention the comets of June 1456 and June 1457, but nothing more on the comet of Jan. 1457 (other than that above).
Neither of these two cometographers mentioned the 1385 comet at all; since both had access to many European historical materials on comets, it says a lot that these comets were apparently not widely known in Europe, so they must not have been very bright and thus not easily seen from Europe. What can be deduced from the Chinese texts? First, the object was seen as early as 1457 January 14 -- but possibly earlier, since the word `re-appeared' is used. Second, the tail was about $0.5^{\circ}$ long; 1 cun is about $0.1^{\circ}$. This nicely agrees with the length derived by Celoria from Toscanelli's drawings. Third, the object was seen for about 10 days, until January 23. The main problem with the Chinese observation (which was also discussed at length by Celoria) is the position in the lunar mansion Bi (or Pi), which corresponds to lunar mansion 19 and refers to the area around the Hyades and $\varepsilon$ Tau, also called the ``Hunting Net". However, using the orbital elements of 12P, the comet would have been near $\omega$ Psc and $\gamma$ Peg on that date. It can be concluded that the Chinese position is not at all compatible with the Toscanelli observations, unless there is an error in the Chinese sources (see also the similar remark by Celoria). On the date of the Chinese observation, 12P would have been in the 14th lunar mansion, called Dongbi or Tung-bi but sometimes also given as Pi (Ho 1962). If the 14th and 19th lunar mansions were mixed up, then 12P would be correctly placed on 1457 January 14; but, of course, this cannot be proven anymore. Finally, another argument for the identity with 12P: based on the magnitude parameters derived for the apparition of 1953, the magnitude of 12P in 1457 was perhaps 3-4 (assuming no outburst), with the comet being close to perihelion and about 0.95 AU from the earth. This would explain why it was not such a conspicuous object.
If Celoria's orbit were correct, the comet would have brightened further with increasing elongation in the following days, which raises the question of why it was not observed further by Toscanelli. The orbit of comet 12P shows that it in fact became fainter with slowly increasing elongation, which explains the short observation period. As a side note, it should be mentioned that the 1457 comet was long suspected to be identical with what later became known as comet 27P/Crommelin (Schulhof 1885; Galle 1894, p.\ 157; Procter and Crommelin 1937). Modern calculations were not able to confirm this and, moreover, that identity can be ruled out (Marsden 1975; Festou, Morando, and Rocher 1985). \\ The orbit by Celoria is given here for reference:\\ $ q = 0.703\ AU, \omega = 195^\circ , \Omega = 258^\circ , i = 13^\circ $ \\ \section{The Comet of 1385} For the 1385 apparition, we only have the description of the comet's apparent movement from Asian sources: the \textit{Ming Taizu shilu}, ch. 175, and \textit{Ming shi: tianwen zhi}, ch. 27 (Pankenier et al. 2008). On 1385 October 23, the comet appeared near Coma Berenices, Leo, and Virgo, and after that moved towards $\beta$ Vir and left the area of $\beta$ and $\eta$ Vir. On 1385 October 30, the comet entered Crater; on November 4 it `trespassed against' an asterism in Hydra. The comet had a 10-degree tail according to Biot (1843a; see also Carl 1864, p.\ 42). The widely cited orbit by Hasegawa (1979) of course resembles this general movement. The orbit of 12P is perfectly consistent with the above description and moves similarly to comet C/1385 U1; its apparent path in the sky fits the description from the Chinese records even better. Using the magnitude parameters from the 1953 apparition, the brightness was perhaps around magnitude 2 (assuming no outburst), since the apparition was very favorable due to a close approach to the earth. This agrees well with the Chinese observations, too.
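These brightness estimates can be reproduced with the standard comet total-magnitude law $m = H + 5\log_{10}\Delta + 2.5\,n\,\log_{10} r$. A minimal sketch follows; the parameters used ($H \approx 4.5$, $n \approx 4$) are assumed round values consistent with the absolute magnitude of 4-5 quoted in the introduction, not the parameters actually derived from the 1953 apparition, and the heliocentric distances are rough values near perihelion.

```python
import math

def comet_magnitude(r_au, delta_au, H=4.5, n=4.0):
    """Standard comet total-magnitude law m = H + 5 log10(Delta) + 2.5 n log10(r).

    H (absolute magnitude) and n (activity parameter) are assumed round
    values, not the parameters actually fitted to the 1953 apparition.
    """
    return H + 5.0 * math.log10(delta_au) + 2.5 * n * math.log10(r_au)

# 1457: near perihelion (r ~ q ~ 0.78 AU), Delta ~ 0.95 AU  ->  roughly mag 3-4
m_1457 = comet_magnitude(r_au=0.78, delta_au=0.95)

# 1385: close approach to the earth (Delta ~ 0.41 AU), r ~ 0.8 AU  ->  roughly mag 2
m_1385 = comet_magnitude(r_au=0.80, delta_au=0.41)

print(round(m_1457, 1), round(m_1385, 1))
```

With these assumed parameters the law gives about magnitude 3.3 for 1457 and 1.6 for 1385, in line with the estimates quoted above.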
\\ Several orbits have been calculated from the Chinese descriptions in the past. The orbits (2000.0) derived by Peirce (1846), Hind (1846) and Hasegawa are given here for reference.\\ Peirce: $q = 0.755\ AU, \omega = 155^\circ, \Omega = 270^\circ, i = 105^\circ $\\ Hind: $q = 0.738\ AU, \omega = 130^\circ, \Omega = 296^\circ, i = 52^\circ $\\ Hasegawa: $q = 0.79\ AU, \omega = 289^\circ, \Omega = 103^\circ, i = 103^\circ $ \\ As noted above for the 1457 comet, neither Struyck nor Lubienietz mentioned the 1385 comet, again suggesting that it was not widely known in Europe and thus not easily seen from there. Nevertheless, the comet was not completely missed in Europe. J. Meyerus Baliolanus (1561) gives an account of a comet seen on the feast day of Saints Cosmas and Damian, which corresponds to September 27, 1385, when the comet would have been at magnitude 5, but it may have been in outburst then. However, his text also says that the comet appeared in October. He goes on to say that the comet shone in many colours. A similar account can be found in a later annal by E. Sueyro (1624), with the only difference that he put it in the year 1386. From the description it could also relate to an aurora. Another mention can be found in the annals of the German city of Trier (1838); the editors of this edition remark on a manuscript which states that in 1385 a terrible comet appeared. The comet can also be found in annals of Iceland (1847), which simply state for 1385 that a comet appeared. \section{Linkage} For the linkage of the apparitions of 1385 and 1457, the following positions were derived from the descriptions in the historical sources.\\
\begin{verbatim}
1385  UT     R.A. (2000)  Decl.   Mag.
Oct. 22.9      12 00     +12 00    2
     29.9      11 55     - 8 00
Nov.  3.9      11 55     -35 00

1457  UT     R.A. (2000)  Decl.   Mag.  Observer
Jan. 23.7       0 40     - 3 40    3    Toscanelli
     25.7       0 47     - 5 20         "
     27.7       0 56     - 7 15         "
\end{verbatim}
The observations of the apparitions of 1812 and 1883-1884 were re-reduced recently by co-author T. Kobayashi, from which a linked orbit could be derived; it was published in Green (2020a) and Nakano (2020a). Prompted by these announcements, the comet was recovered on June 10 and 17, 2020, with the Lowell Observatory 4.3-m Discovery Telescope and the Large Monolithic Imager by Ye et al. (2020a), with the comet at a distance of 11.9 AU. On stacked images, a broad tail of 3' length was visible, implying that the comet was already active. The correction to the orbit by Kobayashi based on the data from 1385-1954 was only +0.16 day (see Green 2020b). Including these recovery observations, which are listed in Table 1, the following linked orbital elements were derived. His elements are based on a total of 1052 astrometric observations and include perturbations by Mercury-Neptune and Ceres, Pallas, and Vesta. Non-gravitational effects were included in the orbit computation. The weighted mean residual is 1$''$\llap.41. The comet passed 3.71 AU from Uranus on 1819 Apr. 26 and 1.62 AU from Saturn on 1957 July 29 UT. The comet has made numerous close approaches to the earth (0.41 AU on 1385 Oct. 29, 0.90 AU on 1457 Jan. 10, and 0.63 AU on 1884 Jan. 9 UT). It should be noted that a correction in ET-UT for the 1385 and 1457 observations was ignored, since no definitive values for ET-UT are available. However, following Stephenson (1997) and using approximate values for (ET - UT) of +330 s for 1385 and +220 s for 1457, the residuals amount to about 48$''$ and 12$''$, and the perihelion time corrections to only about -0.002 and +0.005 day, respectively. \\
\begin{verbatim}
Epoch = 1385 Nov. 8.0 TT
T = 1385 Nov. 6.327 TT           Peri. = 200.036
e = 0.95505                      Node  = 255.125    2000.0
q = 0.78362 AU                   Incl. =  73.829
a = 17.431967 AU    n = 0.013542     P = 72.78 years

Epoch = 1457 Jan. 14.0 TT
T = 1457 Jan. 30.1002 TT         Peri. = 199.9041
e = 0.954800                     Node  = 255.2502   2000.0
q = 0.778438 AU                  Incl. =  74.0399
a = 17.22216 AU     n = 0.0137903    P = 71.47 years

Epoch = 1812 Aug. 30.0 TT
T = 1812 Sep. 15.82612 TT        Peri. = 199.29022
e = 0.9553274                    Node  = 255.63879  2000.0
q = 0.7771051 AU                 Incl. =  73.95643
a = 17.3955643 AU   n = 0.01358458   P = 72.55 years

Epoch = 1884 Jan. 25.0 TT
T = 1884 Jan. 26.21681 TT        Peri. = 199.17679
e = 0.9550368                    Node  = 255.77454  2000.0
q = 0.7757320 AU                 Incl. =  74.04048
a = 17.2526163 AU   n = 0.01375377   P = 71.66 years

Epoch = 1954 May 18.0 TT
T = 1954 May 22.88058 TT         Peri. = 199.02746
e = 0.9548317                    Node  = 255.89097  2000.0
q = 0.7736564 AU                 Incl. =  74.17689
a = 17.1283021 AU   n = 0.01390377   P = 70.89 years

Epoch = 2024 May 10.0 TT
T = 2024 Apr. 20.99698 TT        Peri. = 198.98718
e = 0.9545914                    Node  = 255.85595  2000.0
q = 0.7807641 AU                 Incl. =  74.19138
a = 17.1941867 AU   n = 0.01382393   P = 71.30 years
\end{verbatim}
Figures 3 and 4 show the apparent paths for both apparitions based on Celoria's and Hasegawa's orbits, respectively. They also show the paths based on the linked orbit by Kobayashi. It can be seen that the apparent paths from the linked orbit are quite similar to those from the orbits of Celoria and Hasegawa.\\ \begin{center} \includegraphics[width=0.9\textwidth]{"1457_chart_new.jpg"}\\ \end{center} Figure 3: Comparison of the paths of comet C/1457 A1 based on Celoria's orbit and comet 12P/Pons-Brooks based on the linked orbit.\\ \begin{center} \includegraphics[width=0.9\textwidth]{"1385_chart_new.jpg"}\\ \end{center} Figure 4: Comparison of the paths of comet C/1385 U1 based on Hasegawa's orbit and comet 12P/Pons-Brooks based on the linked orbit.\\ \section{Discussion of sightings at other apparitions} On the basis of the orbits given in Table 2 below, a search was conducted in historical comet reports for other sightings of comet 12P/Pons-Brooks.
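As an internal consistency check on the linked elements, the quantities $P$, $n$, and $e$ are determined by $a$ and $q$ through the usual two-body relations $P = a^{3/2}$ (years), $n = 360^\circ/(365.25\,P)$ (degrees per day), and $e = 1 - q/a$. A quick sketch for the 1385 epoch:

```python
# Two-body consistency check for the 1385-epoch elements quoted above:
# P = a^(3/2) in years, n = 360/(365.25 P) deg/day, e = 1 - q/a.
a = 17.431967   # semimajor axis, AU
q = 0.78362     # perihelion distance, AU

P = a ** 1.5                # orbital period, years
n = 360.0 / (365.25 * P)    # mean daily motion, deg/day
e = 1.0 - q / a             # eccentricity

print(round(P, 2), round(n, 6), round(e, 5))   # 72.78 0.013542 0.95505
```

The recovered values agree with the quoted elements (P = 72.78 years, n = 0.013542, e = 0.95505).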
It is reasonable to assume that non-gravitational forces, which are also present for this comet, should not change the predicted orbits before 1385 by a large amount, since this would require a substantial change in forces that remained quite constant between 1385 and now. It should be stressed that for a comet to be noticed by chance without a telescope, it needs to be placed in dark skies at solar elongations > $40^{\circ}-50^{\circ}$, and at a certain brightness (say, brighter than visual magnitude 3-4 in a moonless sky). For most of the apparitions discussed in the text below, this is not the case. The apparitions of 1457 and 1385 were very favorable, when the comet was close to the earth and bright enough to be easily seen. From the orbital circumstances, a perihelion occurring between August and January provides the most favorable viewing conditions. However, one has to take into account that 12P is prone to outbursts. This is why it nevertheless seems useful to look at each apparition and see whether other historical candidates are available.\\ Table 2: Orbital elements by T. Kobayashi for 12P/Pons-Brooks, based on observations from 1385-2020.
\begin{small}
\begin{verbatim}
Comet   T (yr mo day, TT) q (AU)   e        Peri.   Node    Incl.    Epoch
0012   33 03 24.392 0.783058 0.955528 201.801 253.835 73.443   330331
0012  105 08 27.621 0.777178 0.955319 201.719 253.956 73.656  1050820
0012  176 05 17.343 0.774058 0.954814 201.597 254.078 73.899  1760519
0012  245 09 15.466 0.772882 0.954562 201.409 254.157 74.028  2450914
0012  315 03 12.893 0.785689 0.954230 201.456 254.037 73.992  3150331
0012  386 09 08.388 0.787368 0.954884 201.372 254.025 73.741  3860825
0012  459 07 16.245 0.787975 0.955172 201.303 254.095 73.647  4590802
0012  532 06 28.502 0.783021 0.955135 201.252 254.266 73.771  5320709
0012  603 07 03.510 0.777660 0.954684 201.134 254.378 74.057  6030627
0012  673 01 08.923 0.777703 0.954382 200.940 254.435 74.145  6730110
0012  742 10 28.225 0.779050 0.954594 200.797 254.473 74.053  7421015
0012  813 08 06.236 0.780271 0.955070 200.671 254.407 73.909  8130823
0012  886 04 02.499 0.782316 0.955414 200.607 254.504 73.748  8860402
0012  959 01 24.841 0.777944 0.955474 200.548 254.627 73.733  9590129
0012 1030 07 12.737 0.772739 0.955157 200.430 254.763 73.951 10300625
0012 1100 06 12.277 0.770363 0.954827 200.273 254.872 74.143 11000617
0012 1170 02 27.511 0.781288 0.954490 200.288 254.843 74.039 11700210
0012 1241 04 11.464 0.783938 0.954852 200.192 254.899 73.886 12410418
0012 1313 05 01.651 0.784435 0.955108 200.093 254.962 73.812 13130510
0012 1385 11 06.327 0.783616 0.955047 200.036 255.125 73.829 13851108
0012 1457 01 30.100 0.778438 0.954800 199.904 255.250 74.040 14570114
0012 1527 03 12.625 0.776586 0.954673 199.720 255.326 74.106 15270328
0012 1597 07 03.752 0.777688 0.954791 199.582 255.410 74.060 15970618
0012 1668 04 17.408 0.777538 0.955174 199.441 255.389 73.979 16680426
0012 1740 07 14.017 0.779445 0.955366 199.371 255.513 73.900 17400628
0012 1812 09 15.826 0.777105 0.955327 199.290 255.639 73.956 18120830
0012 1884 01 26.217 0.775732 0.955037 199.177 255.775 74.040 18840125
0012 1954 05 22.881 0.773656 0.954832 199.027 255.891 74.177 19540518
0012 2024 04 20.997 0.780764 0.954591 198.987 255.856 74.191 20240510
\end{verbatim}
\end{small}
In the following we discuss earlier returns to perihelion that were missed by observers. Returns to perihelion prior to 1385 are only discussed if a promising candidate was identified or the comet experienced a particularly favorable apparition. \\ \textbf{Perihelion 1740 July 14} \\ This apparition was not favorable concerning the observing geometry. The comet might have become brighter than magnitude 10 in April, but already at an elongation below $50^{\circ}$. In May the brightness may have attained magnitude 7, but the elongation was then below $40^{\circ}$. Perihelion was reached almost behind the sun, the elongation then being around $16^{\circ}$ and the brightness perhaps magnitude 4. The comet then moved quickly southward and remained at elongations below $45^{\circ}$. No promising candidate is to be found in any known records. \\ \textbf{Perihelion 1668 April 17} \\ This return to perihelion was also not favorable, the observing geometry being very poor. In January the comet might have been at magnitude 9 at elongations just below $60^{\circ}$. Perihelion was reached at only $22^{\circ}$ elongation, with a magnitude of perhaps 4. One comet was observed in 1668, from March 3 to 30, known as C/1668 E1. Its orbit is in no way compatible with 12P and can clearly be excluded. This was a very bright comet, a sunskirter with a perihelion distance of only 0.066 AU, that had a long tail and was brighter than Venus. There are other records of a comet seen earlier in 1668. In a paper by Park and Chae (2007), a comet is mentioned that was seen by Korean observers from March 11. While Park and Chae attribute this object to 12P, it is more likely another description of C/1668 E1. It would have been strange to see a second bright comet in the same general region of the sky, as only one bright known comet was then observed widely throughout the world.
And it again has to be stressed that 12P was likely near magnitude 11 on 1668 March 11! It would have taken a very large and long-lived outburst to bring it to a brightness level at which it could be seen with the unaided eye by the Koreans (and then only by the Koreans). \\ \textbf{Perihelion 1597 July 3} \\ The year 1597 saw another unfavorable return to perihelion for comet 12P, similar to or even worse than the one of 1740. There is also no candidate record in historical sources. \pagebreak \\ \textbf{Perihelion 1527 March 12} \\ In November 1526, the comet might have become brighter than magnitude 10 at an elongation of just below $70^{\circ}$. It continued to perihelion in March 1527, which was reached at an elongation of just above $30^{\circ}$ and with a brightness of perhaps magnitude 4. There are historical records of comets in 1523 and 1529, but the descriptions do not fit. \\ \textbf{Perihelion 1313 May 1} \\ The next perihelion before 1385 occurred in 1313, and it was again an unfavorable apparition for earth-based viewers (comparable to 1668). The comet remained at low elongations, and perihelion was attained with magnitude perhaps 4 at elongations below $15^{\circ}$. There was a comet seen on 1313 April 13, about 1.5 months prior to the perihelion passage of 12P (when it would have been at perhaps magnitude 5 and $15^{\circ}$ elongation). Park and Chae have suggested this comet as a candidate for 12P, too. Unfortunately, the indicated position in Gemini is not consistent with the position in Aries given by our orbit. So this object can be clearly ruled out. Pingre (p.\ 425) gives a description of the same or another object following the Asian account, based on the manuscripts of the historian Mussati (1723, p.\ 554), who lived from 1261 to 1329: ``In Europe, on April 16, Jupiter and Venus were in conjunction in the sign of Gemini.
Four days later a comet was seen in Italy towards the place in the sky where the Sun appeared, when it was about to enter the waters of the ocean: its hairy tail, similar to a whitish smoke, extended to the distance of twenty feet on the west side (it should be read, on the east side). After gradually weakening for a fortnight, this Comet finally vanished. Other Historians similarly testify that the Comet was seen from the west side; therefore his tail could not look to the West." On that date - April 17 - 12P/Pons-Brooks would have been at an elongation of only $13^{\circ}$ and visible low above the western horizon in twilight with a magnitude of maybe 4.5. Had the comet experienced an outburst around that time, it might have been visible even under such conditions, but the description of a long tail seems to contradict a recent outburst. Probably this account also relates to the Asian Gemini object. \\ \textbf{Perihelion 1241 April 11} \\ Another unfavorable apparition, with the comet becoming brighter than magnitude 6 already at a small elongation of below $35^{\circ}$. Maximum brightness of about magnitude 4.5 was attained at an elongation of $20^{\circ}$. In May the comet had traveled southward and became fainter than magnitude 6 at an elongation of about $45^{\circ}$. The Japanese text {\it Dai Nihon shi} reports that on February 17, ``a broom star was seen" (Pankenier et al. 2008, p.\ 149). At that time the comet might have been as bright as magnitude 6.5-7 and at an elongation of $43^{\circ}$. It should have been visible only if there was an outburst. \\ \textbf{Perihelion 959 January 24} \\ The 959 return to perihelion of 12P is similar to that of 1457, when Toscanelli saw the comet from Italy. There is one comet in historical records in 959, but the details are very uncertain.
They come from a Byzantine text dated 990 and provide no observational details, but rather relate it to the death of Constantine VII Porphyrogenitus (who died on 959 November 9; Kronk 1999). The comet would then be expected to be bright in January. Hasegawa (1980) gives a date of 959 Oct. 17 for this comet and lists another for 959 May, seen from Arabia. Struyck (1740, p.\ 217) wrote: ``In the Year 959, a Comet was seen as a dim [literally ``sad and dark"] light. ({\it Constantin.\ Porphyr.\ incerti Continuat.}, p.\ 289 [e.g., cf.\ Niebuhr 1838]; {\it Symeon Magist \& Logoth. Annal.}, p.\ 496). When the Comet was seen at the death of the mentioned Emperor, then it must have appeared in the middle of November." Struyck also suggested identity with the 1652 comet: ``This was the Comet that was seen in the Year 1652." Chambers (1889, p.\ 572) cites two sources for a comet in 959, one saying ``a gloomy and obscure star" (citing the extension of the {\it Chronographia} of Theophanes the Confessor by Constantine VII, likely taking his citation directly from Struyck) and the other saying that it appeared from Oct.\ 17 to Nov.\ 1 [but a careful reading of the second source, Tackio (1653), doesn't appear to mention either the 959 comet or these specific dates]. \\ \textbf{Perihelion 886 April 2} \\ This apparition is comparable to that of 1241. The comet was already at an elongation of below $40^{\circ}$ when it became brighter than magnitude 6 at the end of February. Maximum magnitude of 4.5 was attained at an elongation of $22^{\circ}$. It then moved southward and became fainter than magnitude 6 in mid-May at an elongation of about $45^{\circ}$. Three Chinese texts mention a comet seen between June 6 and July 5 (Pankenier et al. 2008, p.\ 102). The {\it Xin Tang shu: Xizong ji}, the {\it Xin Tang shu: tianwen zhi} and the {\it Jiu Tang shu: Xizong ji} say that a ``star became fuzzy" in JI (lunar mansion 7, near $\gamma$ Sgr) and WEI (lunar mansion 6, near $\alpha$ Peg).
It then passed through BEIDOU (near $\alpha$ UMa) and SHETI (near $o$ and $\eta$ Boo). This cannot be 12P, since 12P was already situated far south. \\ \textbf{Perihelion 813 August 6} \\ The geometrical conditions of the 813 return to perihelion were not perfect. The comet may have become brighter than magnitude 6 in July at an elongation of about $30^{\circ}$. Maximum brightness, with magnitude perhaps 4, was attained at the beginning of August at a similar elongation. The comet then moved southward and became fainter than magnitude 6 in September. Interestingly, Pingre (1783, pp.\ 337-338) lists a comet for 813 August 4, but his description (based on the medieval author Theophanes the Confessor) leaves great doubt as to whether this was indeed a comet: ``On August 4 a comet was seen, which resembled two moons joined together; they separated, and having taken different forms, at length appeared like a man without a head" (translation from Chambers 1889, p.\ 568). The description sounds more like that of a short-lived transient such as a bright meteor/fireball. This object of Theophanes was not included in the catalogues of Williams (1871), Ho (1962), and Hasegawa (1980), and is probably not related to 12P. \\ \textbf{Perihelion 742 October 28} \\ The 742 return to perihelion was quite favorable. At the beginning of September, comet 12P would be expected to have become brighter than magnitude 6, while being at an elongation of $65^{\circ}$ and a declination of about $+62^{\circ}$. It then moved southward and attained a maximum magnitude of perhaps 1.5 in October, then with an elongation of $50^{\circ}$-$55^{\circ}$ and moving in declination from $0^{\circ}$ to $-20^{\circ}$ around its closest approach to the earth. Despite these favorable observing conditions, no object can be identified unambiguously in historical records. Pingre (1783, p.\ 336) cites several sources for objects around this year.
For 742 and 743, Cedreni (1647, pp.\ 460-461) mentions ``a sign in the sky" and ``a sign in the sky appearing towards the North, which fell down to the ground like dust", respectively. For 743, Hoyland (2011, p.\ 242) gives four different chronicles describing a ``sign in the sky" which more or less agree with each other, probably due to copying. This sign is said to have appeared in June and looked like three ``columns of fire that flickered and then remained constant". This sounds very much like an aurora. In June comet 12P would have been at perhaps magnitude 10. An identity, if real at all, is very unlikely. Two of the chronicles go on to say that another such sign was seen in September. Here the month would match the visibility of 12P, but the details are too scarce to suspect a misdated description of a comet. Pingre, Cedreni and Hoyland mention another comet seen from Syria in 744 or 745, possibly in January. This object may have also been seen from Asia (Ho, p.\ 171). Lubienietz says that in 745 a comet was seen in Cancer according to an anonymous report from the German city of Nuremberg. The comet was seen for 39 days. All these objects - if real - probably have no relation to comet 12P. \\ \textbf{Perihelion 673 January 8} \\ The 673 return to perihelion of comet 12P geometrically falls between the favorable apparitions of 1385 and 1457. The comet would be predicted to have become brighter than magnitude 6 at the beginning of November 672, at an elongation of around $75^{\circ}$. Being at almost $+50^{\circ}$ declination, it then moved southward, reaching maximum brightness of perhaps magnitude 2 at the end of the year. The elongation was then around $50^{\circ}$-$55^{\circ}$, and the declination around $0^{\circ}$. The comet then continued to move southward and should have become fainter than magnitude 6 in March 673.
Pingre (1783, p.\ 331) lists a comet for this year -- however, with no details that help to decide on any identity with comet 12P/Pons-Brooks. First, he cites two sentences from ancient chronicles: ``In the first year of Thierry's reign, we saw a Comet. A fire appeared in the sky for ten days. An extraordinary iris caused so much fright, that it was believed that the last day was near." The king mentioned was Theoderic III, who became king of Neustria in 673 and king of Austrasia (and thus of all Franks) in 679. Pingre then concludes: ``All this may be reduced to an aurora borealis. Of the Authors quoted [...], only one calls it a comet; while he is contemporary, the word comet is sometimes very ambiguous." Struyck (1740, p.\ 209) wrote: ``In the Year 673, in the Month of March, a Fire shined 10 days in the Sky. ({\it Centuria. Magdeburg.}, cent.\ 7, cap.\ 13, p.\ 564)"; he also suggested identity of the comet of 673 with the comets of 1337 and 1558 (Struyck 1753, pp.\ 19-20). Hevelius (1668, p.\ 812) and Funccius (1578, p.\ 124) also mention a comet seen for 10 days in 673. Lubienietz (1667, p.\ 116) gives a comet for 674 and refers to Alstedius (1650, p.\ 506) and Berckringeri (1665, p.\ 32). The latter two sources are based on de Cesarea (1483), who also gives the phrase about the fire in the sky for ten days. Asian sources do not help in this case. For the period 672 September 27 to October 25, the Korean \textit{Samguk sagi} and \textit{Jeungbo munheon bigo} speak of a ``broom star" that ``emerged seven times in the north" (Pankenier et al. 2008, p.\ 74; Ho 1962). This could have been comet 12P if it had been unusually bright and experiencing an outburst. \\ \textbf{Perihelion 386 September 8} \\ The return to perihelion in 386 would have produced a quite favorable apparition.
By the end of July, the comet would have appeared in the morning sky at about magnitude 6 at an elongation of $46^{\circ}$. At perihelion, the comet would have reached a maximum magnitude of about 3.5 at an elongation of $44^{\circ}$; it then started to move southward and would presumably have been fainter than magnitude 6 by the end of October, at an elongation of $45^{\circ}$. Pingre (1783, p.\ 303) reports a comet seen in this year in Sagittarius but says that it was a misdated account of comet C/390 Q1. Ho (1962) reports Asian records stating that a comet was seen from April/May, was situated in Sagittarius, and disappeared in July/August. Biot (1843b) noted a comet seen in China in Sgr in April that was visible until July (see also Carl 1864, p.\ 18). Hasegawa (1980) considered this to be a nova. Since the time and position do not match, an identity with 12P is impossible. \\ \textbf{Perihelion 245 September 15} \\ A comet observed in the year 245 is probably an earlier sighting of comet 12P/Pons-Brooks. Pankenier et al. (2008, p.\ 40) provide: ``6th year of the Zhengshui reign period of King Qi of Wei, 8th month, day wuwu [55]; a white broom star 2 chi long appeared in QIXING [LM 25]. It advanced as far as ZHANG [LM 26] for 23 days in total then was extinguished. [\textit{Song shu: tianwen zhi}] ch. 23". Other authors (Kronk 1999; Ho 1962; Williams 1871; Pingre 1783) use a slightly different wording, as exemplified by the report of Ho: ``On a wu-wu day in the eighth month of the sixth year of the Cheng-Shih reign-period a white (hui) comet measuring 2 ft (chhih) appeared at the Chhi-Hsing (25th lunar mansion) moving towards the Chang (26th lunar mansion) and disappeared after 23 days." This means that a comet appeared on 245 Sept. 18 (probably 17.9 UT) close to $\alpha$, $\iota$, and $\tau$ Hya (QIXING or Chhi-Hsing), and moved towards $\kappa$, $\lambda$, $\mu$, and $\nu$ Hya (ZHANG or Chang). The tail was about $3^{\circ}$ long.
The following derived position assumes that the comet was situated within QIXING. \\ 245 Sep 17.9: 09h 35m $-05^{\circ}$ 00' (2000.0). \\ Using Kobayashi's orbit for that epoch, the comet is situated about $8^{\circ}$ from the above position on 245 Sept. 17.9. Adjusting the perihelion time by +2.9 days, the distance to QIXING would be around $6.5^{\circ}$ (cf. fig. 5). The brightness would have been at magnitude 2-3 around that time.\\ \includegraphics[width=0.9\textwidth]{"245_chart_new.jpg"}\\ Figure 5: Position of comet 12P/Pons-Brooks in relation to the Chinese constellation QIXING, where the comet was seen on 245 Sep. 17.9 UT. Shown are the positions based on Kobayashi's nominal linked orbit and after an adjustment of T + 2.9 days.\\ The general direction of movement then indeed carries the comet in the direction of $\kappa$, $\lambda$, $\mu$, and $\nu$ Hya (also at around $7^{\circ}$ distance). The problem here is that Pankenier et al. say that the comet disappeared after 23 days in the region of ZHANG. This should not have been the case: after 23 days (Oct. 10), it would already have been in Cen, some $23^{\circ}$ away from ZHANG (or even farther when using the perihelion date of Sept. 9). None of the other sources connects ZHANG with the date of the last sighting, and they can be understood as describing a scenario in which the comet was moving in the direction of ZHANG and disappeared after 23 days, which would agree with the expected path of comet 12P/Pons-Brooks. Apparently the original text contains some ambiguity of interpretation, and it can indeed be read both ways. Upon request from the authors of this paper, a word-by-word translation by Ye (2020b) gives: ``...advanced and arrived Zhang and settled for 23 days and then extinguished." From a linguistic point of view, it is not fully clear whether the word `settled' refers to the apparition or to the position of the comet with respect to ZHANG.
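The angular offsets quoted above (about $8^{\circ}$ from the derived QIXING position, about $6.5^{\circ}$ after shifting the perihelion time) can be reproduced with the standard great-circle separation between two equatorial positions. A minimal Python sketch; the second position passed in the example is purely illustrative, not a value from our ephemeris:

```python
import math

def ang_sep_deg(ra1_h, dec1_d, ra2_h, dec2_d):
    """Great-circle separation in degrees between two equatorial positions.

    RA in decimal hours, declination in decimal degrees (haversine form,
    numerically stable for small separations)."""
    ra1, ra2 = math.radians(ra1_h * 15.0), math.radians(ra2_h * 15.0)
    d1, d2 = math.radians(dec1_d), math.radians(dec2_d)
    s = (math.sin((d2 - d1) / 2.0) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((ra2 - ra1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(s)))

# Derived position of the 245 comet within QIXING (2000.0): 09h 35m, -05 deg 00'
ra_qixing, dec_qixing = 9.0 + 35.0 / 60.0, -5.0
# Illustrative comparison position only (NOT the computed ephemeris position):
sep = ang_sep_deg(ra_qixing, dec_qixing, 10.0, -11.0)
```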
We nevertheless think that the identification of 12P/Pons-Brooks with the comet of 245 is highly probable; it would make 12P the comet with the third-longest observational arc after 1P/Halley and 109P/Swift-Tuttle. \section{The apparition of 2024} The impending apparition of 2024 is not very favorable, but it will be better than the last one in 1954. An analysis by the renowned visual observer Max Beyer (1958) included 76 observations made by himself with the 26-cm equatorial of the Hamburg-Bergedorf observatory (Germany) and shows numerous outbursts; he notes that the amplitude of these outbursts decreases with decreasing distance from the sun. At least five outbursts with amplitudes of at least $1^{mag}$ can be seen in the combined lightcurve of his visual observations and of photographic ones by G. van Biesbroeck. He finally gives lightcurve parameters of H0 = $4.66^{mag}$ and n = 4.33. This generally agrees with the parameters derived by Green (2020) from observations in the database of the ICQ (H0 = $4.0^{mag}$, n = 3.2), which also roughly agree with the limited brightness information for the apparitions of the 19th century. An analysis of the historic brightness information in Kronk (2003) confirmed not only the tendency for outbursts but also showed that the comet exhibits a rather steep decline in brightness after perihelion, hinting at a lightcurve asymmetry. It should also be taken into account that the comet has never been observed at distances farther away than 4.5 AU from the sun pre-perihelion and 2.2 AU post-perihelion. Especially for the pre-perihelion behavior, predictions are therefore very difficult. A new analysis of the apparition of 1954, using data from the ICQ database and including the observations by Beyer, shows a clear difference between the pre- and the post-perihelion parts of the lightcurve (fig. 6).
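The parameters $H_0$ and $n$ quoted in this section refer to the power law commonly used for total comet magnitudes, $m = H_0 + 5\log_{10}\Delta + 2.5\,n\,\log_{10} r$ (with $\Delta$ and $r$ in AU). A short Python sketch reproducing one point of the ephemeris given later in this section (2024 May 10: $\Delta = 1.577$ AU, $r = 0.859$ AU):

```python
import math

def comet_total_mag(h0, n, delta_au, r_au):
    """Comet total-magnitude power law: m = H0 + 5 log10(Delta) + 2.5 n log10(r)."""
    return h0 + 5.0 * math.log10(delta_au) + 2.5 * n * math.log10(r_au)

# Ephemeris point 2024 May 10: Delta = 1.577 AU, r = 0.859 AU
m_a = comet_total_mag(4.0, 3.2, 1.577, 0.859)  # Green's parameters -> ~4.5 (Mag(a))
m_b = comet_total_mag(4.5, 5.2, 1.577, 0.859)  # pre-perihelion fit -> ~4.6 (Mag(b))
```

Both values match the Mag(a) and Mag(b) columns of the ephemeris to within 0.1 mag.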
\\
\begin{verbatim}
pre-perihelion:  H0 = 4.5, n = 5.2
post-perihelion: H0 = 5.2, n = 5.1
\end{verbatim}
It should be noted, however, that all post-perihelion estimates come from one observer only (A. Jones), and it is possible that the difference between pre- and post-perihelion is due to observer bias. The lightcurve also shows that the tendency for small outbursts is much more apparent prior to perihelion.\\ \includegraphics[width=1.0\textwidth]{"figure6.png"}\\ Figure 6: Lightcurve of comet 12P/Pons-Brooks based on the ICQ database and Max Beyer's visual observations (1958). (Lightcurve prepared with \textit{``Comet for Windows"}, URL {\tt http://www.aerith.net/project/comet.html})\\ The ephemeris below uses the values of Green (labelled Mag(a)) and the pre-perihelion parameters from the new analysis (labelled Mag(b)). \\
\begin{small}
\begin{verbatim}
Date TT     R. A. (2000) Decl.   Delta      r   Elong. Phase Mag(a) Mag(b)
2020 03 12  18 17.81  +16 22.8  12.557  12.411   79.3   4.5  18.2  24.2
2020 04 21  18 17.16  +18 44.6  11.841  12.200  108.7   4.5  18.1  24.0
2020 05 31  18 07.69  +20 37.1  11.290  11.987  131.5   3.6  17.9  23.8
2020 07 10  17 53.70  +21 05.0  11.064  11.770  132.1   3.7  17.8  23.6
2020 08 19  17 42.81  +19 59.5  11.163  11.550  110.1   4.7  17.7  23.6
2020 09 28  17 40.62  +18 06.7  11.430  11.328   81.6   5.0  17.7  23.5
2020 11 07  17 47.72  +16 29.6  11.643  11.102   54.9   4.2  17.7  23.4
2020 12 17  18 01.02  +15 55.5  11.613  10.873   39.6   3.3  17.6  23.3
2021 01 26  18 15.65  +16 47.2  11.259  10.640   49.1   4.0  17.5  23.1
2021 03 07  18 26.20  +19 01.2  10.636  10.404   73.9   5.3  17.3  22.9
2021 04 16  18 27.77  +22 03.7   9.913  10.164  101.7   5.5  17.0  22.6
2021 05 26  18 18.08  +24 46.4   9.308   9.920  124.7   4.8  16.8  22.3
2021 07 05  18 00.88  +25 49.0   8.993   9.672  129.6   4.6  16.7  22.1
2021 08 14  17 45.81  +24 42.2   8.994   9.420  112.0   5.7  16.6  21.9
2021 09 23  17 41.22  +22 19.7   9.183   9.163   85.7   6.3  16.5  21.8
2021 11 02  17 48.69  +20 07.2   9.352   8.902   60.3   5.6  16.5  21.7
2021 12 12  18 04.79  +19 10.3   9.318   8.635   43.9   4.5  16.3  21.5
2022 01 21  18 23.98  +20 03.5   8.991   8.364   47.9   5.0  16.1  21.3
2022 03 02  18 39.94  +22 52.2   8.402   8.086   68.2   6.5  15.9  20.9
2022 04 11  18 46.04  +27 07.4   7.685   7.803   93.1   7.4  15.6  20.5
2022 05 21  18 36.94  +31 29.9   7.035   7.514  114.7   7.0  15.2  20.1
2022 06 30  18 13.92  +33 53.8   6.618   7.218  122.7   6.8  15.0  19.8
2022 08 09  17 50.09  +32 56.6   6.487   6.915  110.9   7.9  14.8  19.5
2022 09 18  17 40.46  +29 38.9   6.546   6.604   88.9   8.8  14.6  19.2
2022 10 28  17 48.98  +26 16.1   6.614   6.284   66.5   8.3  14.5  19.0
2022 12 07  18 11.31  +24 33.7   6.530   5.956   50.8   7.4  14.3  18.6
2023 01 16  18 40.85  +25 30.4   6.208   5.618   49.4   7.6  14.0  18.2
2023 02 25  19 10.55  +29 32.8   5.663   5.268   61.8   9.5  13.5  17.6
2023 04 06  19 32.17  +36 38.0   4.991   4.907   79.4  11.6  13.0  17.0
2023 05 16  19 33.79  +45 46.6   4.332   4.532   94.9  12.8  12.4  16.2
2023 06 25  18 59.15  +53 48.6   3.807   4.141  102.1  13.9  11.8  15.4
2023 08 04  17 55.68  +55 27.5   3.456   3.734   97.9  15.6  11.3  14.6
2023 09 13  17 19.54  +50 07.6   3.211   3.306   86.5  17.7  10.7  13.8
2023 10 23  17 33.16  +43 09.6   2.952   2.854   74.6  19.6  10.0  12.8
2023 12 02  18 27.92  +38 36.5   2.591   2.376   66.4  22.4   9.1  11.5
2024 01 11  20 06.47  +37 53.3   2.143   1.868   60.6  27.3   7.8   9.7
2024 02 20  22 48.67  +37 21.1   1.751   1.339   49.5  34.1   6.2   7.4
2024 03 31  02 05.11  +23 33.2   1.611   0.875   28.6  33.1   4.6   4.8
2024 05 10  04 31.52  -03 41.3   1.577   0.859   29.6  35.5   4.5   4.6
2024 06 19  07 00.37  -29 55.0   1.578   1.313   55.9  39.9   5.9   7.0
2024 07 29  10 07.58  -44 45.7   1.983   1.842   67.0  30.5   7.6   9.4
2024 09 07  12 40.01  -47 29.2   2.730   2.351   57.7  21.2   9.2  11.5
2024 10 17  14 23.46  -47 30.8   3.520   2.831   40.0  13.1  10.3  13.1
2024 11 26  15 39.66  -47 42.0   4.128   3.283   27.4   7.9  11.2  14.3
2025 01 05  16 36.61  -48 20.5   4.428   3.712   38.8   9.5  11.8  15.1
\end{verbatim}
\end{small}
\section{References}
Alstedius, J. H. (1650). {\it Thesaurus Chronologiae} (Herbornae).\\
Berckringeri, D. (1665). {\it Dissertatio Historico-Politica de Cometis} (Ultrajectum: Meinard).\\
Beyer, M. (1958). ``Physische Beobachtungen von Kometen. X", {\it A.N.}\ {\bf 284}, 112-128.\\
Biot, E. C.
(1843a). ``Catalogue Des Com\`etes observ\'ees en Chine depuis l'an 1230 jusqu'\`a l'an 1640 de notre \`ere", in {\it Connaissance des Temps ... Pour L'An 1846} (Paris: Bachelier, Imprimeur-Libraire du Bureau des Longitudes), Additions, p.\ 57.\\ Biot, E. C. (1843b). ``Catalogue Des \'Etoiles extraordinaires observ\'ees en Chine depuis les temps anciens jusqu'\`a l'an 1203 de notre \`ere", in {\it Connaissance des Temps ... Pour L'An 1846} (Paris: Bachelier, Imprimeur-Libraire du Bureau des Longitudes), Additions, p.\ 64.\\ Carl, P. (1864). {\it Repertorium der Cometen-Astronomie} (Muenchen: M. Rieger'sche Universitaets-Buchhandlung).\\ Cedreni, G. (1647). {\it Compendium Historiarum. Tomus II} (Paris).\\ Celoria, G. (1884). ``Comete del 1457", {\it A.N.}\ {\bf 110}, 174.\\ Celoria, G. (1894). ``Con un capitolo sui lavori astronomici del Toscanelli", in {\it Raccolta di documenti e studi pubblicati dalla R.\ Commissione colombiana} (Roma: Ministero della pubblica istruzione), part.\ 5, vol.\ 2.\\ Celoria, G. (1921). ``Sulle osservazioni di comete fatte da Paolo Dal Pozzo Toscanelli e sui lavori astronomici in generale", {\it Pubblicazioni del Reale Osservatorio Astronomico di Brera in Milano}, 55.\\ Chambers, G. F. (1889). {\it A Handbook of Descriptive and Practical Astronomy}, 4th ed. (Oxford: Clarendon Press), Vol.\ 1.\\ de Cesarea, E. (1483). {\it Chronicon} (Venezia: Erhard Ratdolt).\\ Festou, M. C.; B. Morando; and P. Rocher (1985). ``The orbit of periodic comet Crommelin between the years 1000 and 2100", {\it Astron. Ap.}\ {\bf 142}, 421-429.\\ Funccio, J. (1578). {\it Chronologia} (Witebergae).\\ Galle, J. C. (1894). {\it Verzeichniss der Elemente der bisher berechneten Cometenbahnen} (Leipzig: Verlag von Wilhelm Kugelmann).\\ Gould, G. P., transl.\ (1977). {\it Manilius: Astronomica} (Cambridge, MA: Harvard University Press), p.\ 77.\\ Green, D. W. E. (2020a). {\it CBET} No. 4727.\\ Green, D. W. E. (2020b). {\it CBET} No. 4805.\\ Hasegawa, I. (1979).
``Orbits of ancient and medieval comets", {\it Publ.\ Astron.\ Soc.\ Japan} {\bf 31}, 257-270.\\ Hasegawa, I. (1980). ``Catalogue of Ancient and Naked-Eye Comets", {\it Vistas Astron.}\ {\bf 24}, 59-102.\\ Hevelii, J. (1668). {\it Cometographia, totam naturam cometarum} (Gedani: Simon Reininger).\\ Hind, J. R. (1846). ``Schreiben des Herrn J. R. Hind an den Herausgeber", {\it A.N.}\ {\bf 23}, 177.\\ Ho, P. Y. (1962). ``Ancient and mediaeval observations of comets and novae in Chinese sources", {\it Vistas Astron.}\ {\bf 5}, 127-225.\\ Hoyland, R. G. (2011). {\it Theophilus of Edessa's Chronicle}, Translated Texts for Historians, vol. 57 (Liverpool Univ. Press).\\ Jervis, J. L. (1985). {\it Cometary Theory in Fifteenth-Century Europe}, Studia Copernicana (Wroclaw).\\ Kobayashi, T. (2020). {\it Nakano Note}, No. 4136 (2020 June 28); posted at website URL\\ {\tt http://www.oaa.gr.jp/\textasciitilde oaacs/nk/nk4136.htm}.\\ Kronk, G. W. (1999). {\it Cometography}, Vol.\ 1 (Cambridge University Press).\\ Kronk, G. W. (2003). {\it Cometography}, Vol.\ 2 (Cambridge University Press).\\ Kronk, G. W. (2007). {\it Cometography}, Vol.\ 4 (Cambridge University Press).\\ Lubienietz, S. (1667). {\it Systematis Cometici Tomus Secundus . . . Theatri Cometici pars posterior . . . Historia Cometarum, . . .} (Amsterdam: Francisco Cuyper).\\ Marsden, B. G. (1975). {\it Catalogue of Cometary Orbits}, 2nd ed. (Cambridge, MA: Smithsonian Astrophysical Observatory).\\ Meyerus Baliolanus, J. M. (1561). {\it Commentarii sive annales rerum Flandricarum} (Anvers: Steelsius). \\ Mussati, A. (1723). ``Historia Augusta", {\it Rerum Italicarum Scriptores}. Tomus Decimus. (Mediolanum. Flandricarum)\\ Nakano, S. (2020a). {\it Nakano Note}, No. 4048 (2020 Mar. 3); posted at website URL \\ {\tt http://www.oaa.gr.jp/\textasciitilde oaacs/nk/nk4048.htm}.\\ Nakano, S. (2020b). {\it Nakano Note}, No.
4136 (2020 Jun 28); posted at website URL\\ {\tt http://www.oaa.gr.jp/\textasciitilde oaacs/nk/nk4136.htm}.\\ Niebuhr, B. G., ed.\ (1838). {\it Corpus Scriptorum Historiae Byzantinae. Theophanes Continuatus} (Bonn: Impensis Ed.\ Weberi).\\ Pankenier, D. W.; Z. Xu; and Y. Jiang (2008). {\it Archaeoastronomy in East Asia} (Amherst, NY: Cambria Press).\\ Park, S.-Y.; and J. Chae (2007). ``Analysis of Korean Historical Comet Records", {\it Publ.\ Korean Astron. Soc.}\ {\bf 22}, 151-168.\\ Peirce, B. (1846). {\it American Almanac for 1847}, p.\ 83.\\ Pingre, A. G. (1783). {\it Cometographie ou traite historique et theoretique des cometes. Tome premier} (Imprimerie Royale, Paris).\\ Procter, M.; and A. C. D. Crommelin (1937). {\it Comets: Their Nature, Origin, and Place in the Science of Astronomy} (The Technical Press, London).\\ Ramsey, J. T.; and A. L. Licht (1997). {\it The Comet of 44 B.C.\ and Caesar's Funeral Games} (Atlanta, GA: Scholar's Press), p.\ 94.\\ Schulhof, L. (1885). ``Ueber muthmassliche frühere Erscheinungen des Cometen 1873 VII", {\it A.N.}\ {\bf 113}, 143-144.\\ Stephenson, F. R. (1997). {\it Historical Eclipses and Earth's Rotation} (Cambridge University Press), p.\ 515.\\ Struyck, N. (1740). ``Inleiding tot de Algemeene Kennis der Comeeten, of Staarsterren", in {\it Inleiding tot de Algemeene Geographie, benevens eenige Sterrekundige en andere Verhandelingen} (Amsterdam: Isaak Tirion).\\ Struyck, N. (1753). ``Vervolg van de Beschryving der Comeeten of Staartsterren", in {\it Vervolg van de Beschryving der Staartsterren, en Nader Ontdekkingen Omtrent den Staat van 't Menschelyk Geslagt} (Amsterdam: Isaak Tirion).\\ Sueyro, E. (1624). {\it Anales de Flandes, Segunda Parte} (Anvers: Pedro y Iuan Belleros).\\ Tackio, J. (1653). {\it Coeli Anomalon, id est, De Cometis, sive Stellis Crinitis ...} (Giessen, Germany: Ex officina Typographica Chemliniana).\\ Werlauff, E. C. (1847). 
{\it Íslenzkir annálar, sive Annales Islandici ab anno Christi 803 ad annum 1430} (sumptibus Legati Arnæ-Magnæani).\\ Williams, J. (1871). {\it Observations of Comets from BC 611 to AD 1640} (London: Stangeways and Walden).\\ Wyttenbach, J. H.; and M. F. J. Müller (1838). {\it Gesta Trevirorum integra lectionis varietate et animadversionibus illustrata ac indice duplici instructa. Vol. II} (Trier: Lintz).\\ Ye, Q.; et al. (2020a). ``Recovery of Returning Halley-Type Comet 12P/Pons-Brooks with the Lowell Discovery Telescope", {\it RNAAS}\ {\bf 4}, No. 7, 2020 July 7.\\ Ye, Q. (2020b). Personal communication.\\ \end{document}
{'timestamp': '2021-01-01T02:35:30', 'yymm': '2012', 'arxiv_id': '2012.15583', 'language': 'en', 'url': 'https://arxiv.org/abs/2012.15583'}
ArXiv
\section{Preliminaries and basic results} \par\medskip \par\medskip We begin this section by recalling the definitions and notation concerning $C^*$-algebraic bundles. First of all, we refer the reader to [7, VIII.2.2] for the notion of a Banach algebraic bundle $\cal B$ over a locally compact group $G$. Let $B$ be the bundle space of $\cal B$ and $\pi$ be the bundle projection. Let $B_t = \pi^{-1}(t)$ be the fiber over $t\in G$. It is clear that $B_e$ is a $C^*$-algebra if $\cal B$ is a $C^*$-algebraic bundle (see [7, VIII.16.2] for a definition). We will use material from [7, VIII] implicitly. Following the notation of [7], we denote by $\cal L(B)$ the set of all continuous cross-sections of $\cal B$ with compact support. Moreover, for any $f\in \cal L(B)$, ${\rm supp}(f)$ is the closed support of $f$. Furthermore, let $({\cal L}_p(\mu; {\cal B}), \|\ \|_p)$ be the normed space as defined in [7, II.15.7] (where $\mu$ is the left Haar measure on $G$). For simplicity, we will denote ${\cal L}_p(\mu; \cal B)$ by ${\cal L}_p(\cal B)$. By [7, II.15.9], $\cal L(B)$ is dense in ${\cal L}_p(\cal B)$. We also need the theory of operator-valued integration from [7, II]; in particular, we would like to draw the readers' attention to [7, II.5.7] and [7, II.\S 16]. \par\medskip Throughout this paper, $\cal B$ is a $C^*$-algebraic bundle over a locally compact group $G$ with bundle space $B$ and bundle projection $\pi$. Denote by $C^*(\cal B)$ the cross-sectional $C^*$-algebra of $\cal B$ (see [7, VIII.17.2]). We recall from [7, VIII.5.8] that there exists a canonical map $m$ from the bundle space $B$ to the set of multipliers of ${\cal L}_1(\cal B)$ (or $C^*(\cal B)$). \par\medskip \begin{lemma} \label{1.4} The map $m$ from $B$ to $M(C^*(\cal B))$ is {\it faithful} in the sense that if $m_a = m_b$, then either $a=b$ or $a=0_r$ and $b=0_s$ ($r,s\in G$). \end{lemma} \noindent {\bf Proof}: Suppose that $\pi(a) = \pi(b) = r$. 
Then $m_{a-b} = 0$ will imply that $a-b = 0_r$ (since $\cal B$ has a strong approximate unit and enough continuous cross-sections). Suppose that $\pi(a)=r\neq s=\pi(b)$. Then there exists a neighbourhood $V$ of $e$ such that $rV\cap sV = \emptyset$. For any $f\in \cal L(B)$, $a(f(r^{-1}t)) = m_a(f)(t) = m_b(f)(t) = b(f(s^{-1}t))$. Now let $b_i$ be a strong approximate unit of $\cal B$ and $\{ f_i \}$ be elements in $\cal L(B)$ such that ${\rm supp}(f_i)\subseteq V$ and $f_i(e) = b_i$. Therefore $ab_i = a(f_i(e)) = b(f_i(s^{-1}r)) = 0$ (as $s^{-1}r\notin V$) and hence $a=0_r$. Similarly, $b=0_s$. \par\medskip From now on, we will identify $B_r$ ($r\in G$) with its image in $M(C^*({\cal B}))$. \par\medskip Let ${\cal B}\times G$ be the $C^*$-algebraic bundle over $G\times G$ with the Cartesian product $B\times G$ as its bundle space such that the bundle projection $\pi'$ is given by $\pi'(b,t) = (\pi(b),t)$ ($b\in B$; $t\in G$). It is not hard to see that any non-degenerate representation $T'$ of ${\cal B}\times G$ is of the form $T'_{(b,t)} = T_bu_t$ ($b\in B$; $t\in G$) for a non-degenerate representation $T$ of ${\cal B}$ and a unitary representation $u$ of $G$ with commuting ranges. This gives the following lemma. \par\medskip \begin{lemma} \label{1.5} $C^*({\cal B}\times G) = C^*({\cal B})\otimes_{\max}C^*(G)$. \end{lemma} Consider the map $\delta_{\cal B}$ from $B_r$ to $M(C^*({\cal B})\otimes_{\max}C^*(G))$ given by $\delta_{\cal B} (b) = b\otimes \Delta_r$ where $\Delta_r$ is the canonical image of $r$ in $M(C^*(G))$. Denote again by $\delta_{\cal B}$ the integral form of $\delta_{\cal B}$. Then we have the following equalities. \begin{eqnarray*} \delta_{\cal B}(f)(1\otimes k)(g\otimes l)(r,s) & = & \int_G f(t)g(t^{-1}r)[\int_G k(u)l(u^{-1}t^{-1}s)du]dt\\ & = & \int_G\int_G f(t)g(t^{-1}r)k(t^{-1}v)l(v^{-1}s) dvdt \end{eqnarray*} for any $f,g\in \cal L(B)$ and $k,l\in K(G)$. 
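\par\medskip It may be worth recording the routine verification that $\delta_{\cal B}$ satisfies the coaction identity on the canonical images of the fibres: for $b\in B_r$ we have $\delta_{\cal B}(b) = b\otimes \Delta_r$, and since the comultiplication $\delta_G$ on $C^*(G)$ satisfies $\delta_G(\Delta_r) = \Delta_r\otimes \Delta_r$, $$(\delta_{\cal B}\otimes {\rm id})\delta_{\cal B}(b)\ =\ b\otimes \Delta_r\otimes \Delta_r\ =\ ({\rm id}\otimes \delta_G)\delta_{\cal B}(b).$$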
If we set $f\bullet k(r,s)=f(r)k(r^{-1}s)$, then $\delta_{\cal B}(f)(1\otimes k)(g\otimes l)(r,s) = (f\bullet k)(g\otimes l)(r,s)$. It is not hard to see that $\delta_{\cal B}$ is a full coaction (note that $\delta_{\cal B}$ extends to a representation of ${\cal L}_1({\cal B})$ and hence of $C^*({\cal B})$). \par\medskip \begin{lemma} \label{1.6} The set $N = \{ \sum_{i=1}^n f_i\bullet k_i:f_i\in {\cal L(B)}, k_i\in K(G)\}$ is dense in $C^*({\cal B}\times G)$. Consequently, $\delta_{\cal B}$ is a non-degenerate full coaction. \end{lemma} \noindent {\bf Proof}: It is sufficient to show that for any $f\in \cal L(B)$ and $k\in K(G)$, $f\otimes k$ can be approximated by elements in $N$ with respect to the ${\cal L}_1$-norm. Let $M$ and $K$ be the closed supports of $f$ and $k$ respectively. Since $k$ is uniformly continuous, for a given $\epsilon>0$, there exists a neighbourhood $V$ of $e$ such that $|k(u)-k(v)| < \epsilon/(\mu(M)\cdot \mu(MK)\cdot \sup_{t\in M}\|f(t)\|)$ if $u^{-1}v\in V$ (where $\mu$ is the left Haar measure of $G$). Since $M$ is compact, there exist $r_1,..., r_n$ in $M$ such that $\cup_{i=1}^n r_iV$ covers $M$. Let $k_i(s) = k(r_i^{-1}s)$ and let $g_1,...,g_n$ be a partition of unity subordinate to $\{ r_iV\}_{i=1}^n$. Then \begin{eqnarray*} \| f\bullet k(r,s) - \sum_{i=1}^n (g_if\otimes k_i)(r,s)\| & \leq & \|f(r)\| \cdot \mid \sum_{i=1}^n g_i(r)k(r^{-1}s) - \sum_{i=1}^n g_i(r)k(r_i^{-1}s)\mid \\ & \leq & \|f(r)\|\cdot (\ \sum_{i=1}^n\mid g_i(r)\mid\cdot \mid k(r^{-1}s) - k(r_i^{-1}s) \mid\ ). \end{eqnarray*} As $g_i(r)\neq 0$ only if $r\in r_iV$, we have $\int_{MK}\int_M \|f\bullet k(r,s) - \sum_{i=1}^n (g_if\otimes k_i)(r,s)\| drds \leq \epsilon$. This proves the lemma. \par\medskip The following lemma is about general coactions of $C^*(G)$. It implies, in particular, that $\delta_{\cal B}$ is injective. 
Note that the trivial representation of $G$ on $\mathbb{C}$ induces a $*$-homomorphism $\cal E$ from $C^*(G)$ to $\mathbb{C}$ which is a coidentity in the sense that $({\cal E}\otimes {\rm id}) \delta_G = {\rm id} = ({\rm id}\otimes {\cal E})\delta_G$ (where $\delta_G$ is the comultiplication on $C^*(G)$). \par\medskip \begin{lemma} \label{1.1} Let $\epsilon$ be a coaction on $A$ by $C^*(G)$. Set $\epsilon_{\cal E} = ({\rm id}\otimes {\cal E})\epsilon$ and $A_{\cal E} = \epsilon_{\cal E} (A)$. \par\noindent (a) If $\epsilon$ is non-degenerate, then it is automatically injective. \par\noindent (b) $A = A_{\cal E}\oplus \ker (\epsilon)$ (as Banach spaces). \end{lemma} \noindent {\bf Proof}: (a) We first note that $\epsilon_{\cal E}$ is a $*$-homomorphism from $A$ to itself and so $A_{\cal E}$ is a $C^*$-subalgebra of $A$. It is clear that $\epsilon$ is injective on $A_{\cal E}$ and we want to show that $A=A_{\cal E}$. For any $a\in A$ and any $s\in C^*(G)$ such that ${\cal E}(s)=1$, $a\otimes s$ can be approximated by elements of the form $\sum \epsilon(b_i)(1\otimes t_i)$ (as $\epsilon$ is non-degenerate). Therefore, $\sum \epsilon_{\cal E}(b_i){\cal E}(t_i) = ({\rm id}\otimes {\cal E})(\sum \epsilon(b_i)(1\otimes t_i))$ converges to $a$. \par\noindent (b) Note that for any $a\in A$, $\epsilon(a-\epsilon_{\cal E}(a)) = 0$. Since $\epsilon_{\cal E}$ is a projection on $A$, $A=A_{\cal E}\oplus \ker (\epsilon)$. \par\medskip The above lemma actually holds for a general Hopf $C^*$-algebra with a co-identity instead of $C^*(G)$ (for a brief review of Hopf $C^*$-algebras, please see e.g. [5] or [10]). \par\medskip \begin{remark} By [5, 7.15], if $\Gamma$ is a discrete amenable group, then any injective coaction of $C^*_r(\Gamma)$ is automatically non-degenerate. More generally, the arguments in [5, \S7] actually show that for any discrete group $G$, any injective coaction of $C^*(G)$ is non-degenerate. 
Hence, a coaction of $C^*(G)$ is injective if and only if it is non-degenerate (when $G$ is discrete). \end{remark} We end this section with the following technical lemma. \par\medskip \begin{lemma} \label{1.3} Let $A$ be a $C^*$-algebra and $E$ be a Hilbert $A$-module. Suppose that $({\cal H},\pi)$ is a faithful representation of $A$. Then \par\noindent (a) $\|x\| = \sup \{\|x\otimes_\pi \xi\|: \|\xi\|\leq 1\}$ for any $x\in E$; \par\noindent (b) the canonical map from ${\cal L}(E)$ to ${\cal L}(E\otimes_\pi {\cal H})$ (which sends $a$ to $a\otimes 1$) is injective. \end{lemma} Part (a) follows from a direct computation and part (b) is a consequence of part (a). \par\medskip \par\medskip \par\medskip \section{Reduced cross-sectional $C^*$-algebras} \par\medskip \par\medskip In this section, we will define the reduced cross-sectional $C^*$-algebras for $C^*$-algebraic bundles and show that they carry canonical reduced coactions. The intuitive idea is to consider the representation of ${\cal L}_1({\cal B})$ as bounded operators on ${\cal L}_2({\cal B})$. However, since ${\cal L}_2({\cal B})$ is not a Hilbert $C^*$-module, it seems unlikely that we can get a $C^*$-algebra out of this representation. Instead, we will consider a slightly different version of ``${\cal L}_2({\cal B})$'' which is a Hilbert $B_e$-module. The difficulty then is to show that the representation is well defined and bounded. This can be proved directly by a rather heavy analytical argument, but we will instead deduce it from Lemma \ref{2.8}. We will also define the interesting notion of proper $C^*$-algebraic bundles which will be needed in the next section. \par\medskip \begin{lemma} \label{2.1} Consider the map $\langle \ ,\ \rangle _e$ from $\cal L(B)\times \cal L(B)$ to $B_e$ defined by $$\langle f,g\rangle _e = \int_G f(t)^*g(t) dt$$ for all $f,g\in \cal L(B)$. Then $\langle \ ,\ \rangle _e$ is a $B_e$-valued inner product on $\cal L(B)$. 
\end{lemma} \noindent {\bf Proof}: It is easily seen that $\langle \ ,\ \rangle_e$ is a well defined $B_e$-valued pre-inner product. Moreover, for all $f\in \cal L(B)$, $\langle f,f\rangle _e = 0$ if and only if $\int_G \varphi(f(t)^*f(t))dt = 0$ for all $\varphi\in (B_e)^*_+$, which implies that $f(t)^*f(t) = 0$ for all $t\in G$, i.e. $f=0$. \par\medskip \begin{definition} \label{2.2} The completion of $\cal L(B)$ with respect to the $B_e$-valued inner product in Lemma \ref{2.1} is a Hilbert $B_e$-module and is denoted by \mbox{$(L^2_e({\cal B}), \|\cdot\|_e)$}. \end{definition} It is clear that $\|\langle f,g\rangle _e\|\leq \|f\|_2\|g\|_2$ (by [7, II.5.4] and H\"older's inequality). Hence there is a continuous map $J$ from ${\cal L}_2(\cal B)$ to $L^2_e({\cal B})$ with dense range. In fact, it is not hard to see that ${\cal L}_2(\cal B)$ is a right Banach $B_e$-module and $J$ is a module map. \par\medskip Throughout this paper, $T$ is a non-degenerate *-representation of $\cal B$ on a Hilbert space ${\cal H}$ and $\phi$ is the restriction of $T$ to $B_e$. Moreover, $\mu_T$ is the representation of $C^*({\cal B})$ on $\cal H$ induced by $T$. By [7, VIII.9.4], $\phi$ is a non-degenerate representation of $B_e$. \par\medskip \begin{lemma} \label{2.4} There exists an isometry $$V:L^2_e({\cal B})\otimes_\phi {\cal H}\rightarrow L^2(G; {\cal H}),$$ such that for all $f\in \cal L(B)$, $\xi\in\cal H$, and $t\in G$ one has $$V(f\otimes \xi)(t) = T_{f(t)}\xi.$$ \end{lemma} \noindent {\bf Proof}: It is easy to check that the map $V$ defined as above is inner product preserving and hence extends to the required map. \par\medskip One technical difficulty in the study of reduced cross-sectional $C^*$-algebras is that $V$ is not necessarily surjective. \par\medskip \begin{example} \label{2.6} (a) If $\cal B$ is saturated, then $V$ is surjective. In fact, let $K= V(L^2_e({\cal B})\otimes_\phi {\cal H})$ and $\Theta$ be an element in the orthogonal complement of $K$. 
For any $g\in \cal L(B)$ and $\eta\in {\cal H}$, $\int_G \langle T_{g(r)}\eta, \Theta(r)\rangle \ dr = 0$ which implies that $\int_G T_{g(r)}^*\Theta(r)\ dr = 0$. Now for any $f\in \cal L(B)$, we have $$(\mu_T\otimes \lambda_G)\delta_{\cal B}(f)(\Theta)(t) = \int_G T_{f(s)}\Theta(s^{-1}t)\ ds = \int_G T_{f(tr^{-1})}\Theta(r)\Delta(r)^{-1}\ dr.$$ Moreover, for any $b\in B_{t^{-1}}$, let $g(r) = \Delta(r)^{-1}f(tr^{-1})^*b^*$. Then $g\in \cal L(B)$ and $$T_b (\mu_T\otimes \lambda_G) \delta_{\cal B}(f)(\Theta)(t) = \int_G T_{g(r)}^*\Theta(r)\ dr = 0$$ for any $b\in B_{t^{-1}}$ (by the above equality). Since $\cal B$ is saturated and the restriction $\phi$ of $T$ is non-degenerate, $(\mu_T\otimes \lambda_G)\delta_{\cal B}(f)(\Theta)=0$ for any $f\in \cal L(B)$. Thus, $\Theta=0$ (because $(\mu_T\otimes \lambda_G)\circ\delta_{\cal B}$ is non-degenerate). \par\noindent (b) Let $\cal B$ be the trivial bundle over a discrete group $G$ (i.e. $B_e = \mathbb{C}$ and $B_t = (0)$ if $t\neq e$). Then $L^2_e({\cal B})\otimes_\phi {\cal H} \cong {\cal H}$ is a proper subspace of $L^2(G; {\cal H})$ whenever $G$ is non-trivial. \end{example} For any $b\in B$, let $\hat T_b$ be the map from $L^2_e({\cal B})$ to itself defined by $\hat T_b(f) = b\cdot f$ for any $f\in \cal L(B)$ (where $b\cdot f(t) = bf(\pi(b)^{-1}t)$). It does not seem easy to argue directly that $\hat T_b$ is bounded. Instead, we will consider the corresponding representation of ${\cal L}_1({\cal B})$ and show that it is well defined. \par\medskip For any $f\in {\cal L}({\cal B})$, define a map $\lambda_{\cal B} (f)$ from ${\cal L}({\cal B})$ to itself by $\lambda_{\cal B} (f)(g) = f\ast g$ ($g\in {\cal L}({\cal B})$). We would like to show that this map is bounded and induces a bounded representation of ${\cal L}_1({\cal B})$. 
In order to prove this, we will first consider a map $\tilde\lambda_{\cal B}(f)$ from ${\cal L}({\cal B})\otimes_{\rm alg} {\cal H}$ to itself given by $\tilde \lambda_{\cal B}(f)(g\otimes \xi) = f\ast g\otimes \xi$ ($g\in {\cal L}({\cal B})$; $\xi\in \cal H$). In the following, we will not distinguish between ${\cal L}({\cal B})\otimes_{\rm alg} {\cal H}$ and its image in $L_e^2({\cal B})\otimes_\phi{\cal H}$. \par\medskip \begin{lemma} \label{2.8} For any $f\in \cal L(B)$, $\tilde \lambda_{\cal B}(f)$ extends to a bounded linear operator on $L_e^2({\cal B})\otimes_\phi{\cal H}$ such that $\mu_{\lambda,T} (f)\circ V = V\circ (\tilde \lambda_{\cal B}(f))$ (where $\mu_{\lambda,T}$ is the composition: $C^*({\cal B})\stackrel{\delta_{\cal B}}{\longrightarrow} C^*({\cal B})\otimes_{\rm max} C^*(G) \stackrel{\mu_T\otimes \lambda_G}{\longrightarrow} {\cal B}({\cal H}\otimes L^2(G))$). \end{lemma} \noindent {\bf Proof:} For any $g\in \cal L(B)$, $\xi\in \cal H$ and $s\in G$, we have, \begin{eqnarray*} \mu_{\lambda,T}(f)V(g\otimes \xi)(s) &=& \int_G (T_{f(t)}\otimes \lambda_t) V(g\otimes \xi)(s) dt \ \ =\ \ \int_G T_{f(t)}T_{g(t^{-1}s)}\xi dt\\ &=& T_{f\ast g(s)}\xi \ \ =\ \ V(f\ast g\otimes \xi)(s) \ \ =\ \ V(\tilde \lambda_{\cal B}(f)(g\otimes\xi))(s). \end{eqnarray*} Since $V$ is an isometry, $\tilde \lambda_{\cal B}(f)$ extends to a bounded linear operator on $L_e^2({\cal B})\otimes_\phi{\cal H}$ and satisfies the required equality. \par\medskip Now by considering a representation $T$ for which $\phi$ is injective and using Lemmas \ref{1.3}(a) and \ref{2.4}, we see that $\lambda_{\cal B}(f)$ extends to a bounded linear map from $L_e^2({\cal B})$ to itself. It is not hard to show that $\langle f\ast g, h\rangle_e = \langle g,f^*\ast h\rangle_e$ (for any $g,h\in {\cal L}({\cal B})$). Hence $\lambda_{\cal B}(f)\in {\cal L}(L_e^2({\cal B}))$. Moreover, we have the following proposition. 
\par\medskip \begin{proposition} \label{2.10} The map $\lambda_{\cal B}$ from ${\cal L}_1(\cal B)$ to ${\cal L}(L^2_e(\cal B))$ given by $\lambda_{\cal B} (f)(g) = f\ast g$ ($f,g\in \cal L(B)$) is a well-defined, norm-decreasing, non-degenerate *-homomorphism such that $ \mu_{\lambda,T}(f)\circ V = V\circ (\lambda_{\cal B}(f) \otimes_\phi 1)$ ($f\in \cal L(B)$). \end{proposition} \begin{definition} \label{2.11} (a) $\lambda_{\cal B}$ is called the {\it reduced representation of $C^*({\cal B})$} and $C^*_r({\cal B}) = \lambda_{\cal B}(C^*(\cal B))$ is called the {\it reduced cross-sectional $C^*$-algebra of $\cal B$}. \par\noindent (b) $\cal B$ is said to be {\it amenable} if $\lambda_{\cal B}$ is injective. \end{definition} \begin{example} \label{2.12} Suppose that $\cal B$ is the semi-direct product bundle corresponding to an action $\alpha$ of $G$ on a $C^*$-algebra $A$. Then $C^*({\cal B}) = A\times_\alpha G$ and $C^*_r({\cal B}) = A\times_{\alpha, r} G$. \end{example} As in the case of full cross-sectional $C^*$-algebras, we can define non-degenerate reduced coactions on reduced cross-sectional $C^*$-algebras. First of all, let us consider (as in the case of reduced group $C^*$-algebras) an operator $W$ from ${\cal L(B}\times G)$ to itself defined by $W(F)(r,s) = F(r,r^{-1}s)$ ($F\in {\cal L(B}\times G)$). Note that for any $f\in\cal L(B)$ and $k\in K(G)$, $W(f\otimes k) = f\bullet k$ (where $f\bullet k$ is defined in the paragraph before Lemma \ref{1.6}) and that $L^2_e({\cal B}\times G) = L^2_e({\cal B})\otimes L^2(G)$ as Hilbert $B_e$-modules. \par\medskip \begin{lemma} \label{2.13} $W$ is a unitary in ${\cal L}(L^2_e({\cal B})\otimes L^2(G))$. 
\end{lemma} \par \noindent {\bf Proof}: For any $f,g\in \cal L(B)$ and $k,l\in K(G)$, we have the following equality: \begin{eqnarray*} \langle W(f\otimes k), W(g\otimes l)\rangle & = & \int_G\int_G (f\bullet k)(r,s)^*(g\bullet l)(r,s) dsdr\\ & = & \int_G\int_G f(r)^*\overline{k(r^{-1}s)} g(r)l(r^{-1}s) drds\\ & = & \int_G\int_G f(r)^*g(r)\overline{k(t)}l(t) dtdr \quad = \quad \langle f\otimes k, g\otimes l\rangle. \end{eqnarray*} Hence $W$ preserves the inner product and extends to a bounded operator on $L^2_e({\cal B})\otimes L^2(G)$. Moreover, if we define $W^*$ by $W^*(f\otimes k)(r,s) = f(r)k(rs)$, then $W^*$ is the adjoint of $W$ and $WW^* = 1 = W^*W$. \par\medskip As in [12], we can define a *-homomorphism $\delta^r_{\cal B}$ from $C^*_r(\cal B)$ to ${\cal L}(L^2_e({\cal B})\otimes L^2(G))$ by $\delta^r_{\cal B}(x) = W(x\otimes 1)W^*$ ($x\in C^*_r(\cal B)$). Moreover, for any $b\in B\subseteq M(C^*(\cal B))$ (see Lemma \ref{1.4}), $\delta^r_{\cal B}(\lambda_{\cal B}(b)) = \lambda_{\cal B}(b)\otimes \lambda_{\pi(b)}$ (where $\lambda_t$ is the canonical image of $t$ in $M(C^*_r(G))$). \par\medskip \begin{proposition} \label{2.14} The map $\delta^r_{\cal B}$ defined above is an injective non-degenerate coaction on $C^*_r(\cal B)$ by $C^*_r(G)$. \end{proposition} \noindent {\bf Proof}: It is clear that $\delta^r_{\cal B}$ is an injective *-homomorphism. Moreover, $(\lambda_{\cal B}\otimes\lambda_G)\circ\delta_{\cal B} = \delta^r_{\cal B}\circ\lambda_{\cal B}$, which implies that $\delta^r_{\cal B}$ is a non-degenerate coaction (see Lemma \ref{1.6}). 
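\par\medskip For the reader's convenience, we record the elementary computation behind the formula $\delta^r_{\cal B}(\lambda_{\cal B}(b)) = \lambda_{\cal B}(b)\otimes \lambda_{\pi(b)}$: for $b\in B_t$, $f\in \cal L(B)$ and $k\in K(G)$, $$W\big((b\cdot f)\otimes k\big)(r,s)\ =\ b\,f(t^{-1}r)\,k(r^{-1}s)\ =\ \big((\lambda_{\cal B}(b)\otimes \lambda_t)W(f\otimes k)\big)(r,s),$$ because $k((t^{-1}r)^{-1}(t^{-1}s)) = k(r^{-1}s)$. Hence $W(\lambda_{\cal B}(b)\otimes 1) = (\lambda_{\cal B}(b)\otimes \lambda_t)W$, and multiplying by $W^*$ on the right gives the stated formula.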
\par\medskip There is an alternative natural way to define the ``reduced'' cross-sectional $C^*$-algebra (similar to the corresponding situation of full and reduced crossed products): $C^*_R({\cal B}) := C^*({\cal B})/\ker (\epsilon_{\cal B})$ (where $\epsilon_{\cal B}$ is the composition: $C^*({\cal B})\stackrel {\delta_{\cal B}}{\longrightarrow}C^*({\cal B})\otimes_{\rm max}C^*(G) \stackrel{{\rm id}\otimes \lambda_G}{\longrightarrow}C^*({\cal B})\otimes C^*_r(G)$). \par\medskip \begin{remark} \label{2.b} (a) It is clear that $\mu_{\lambda,T} = (\mu_T\otimes \lambda_G)\circ\delta_{\cal B}$ (see Lemma \ref{2.8}) induces a representation of $C^*_R({\cal B})$ on $L^2(G; {\cal H})$. If $\mu_T$ is faithful, then this induced representation is also faithful and $C^*_R({\cal B})$ can be identified with the image of $\mu_{\lambda,T}$. \par\noindent (b) If $\mu_T$ is faithful, then so is $\phi$ and $\lambda_{\cal B}\otimes_\phi 1$ is a faithful representation of $C^*_r({\cal B})$ (by Lemma \ref{1.3}(b)). Therefore, part (a) and Proposition \ref{2.10} imply that $C^*_r({\cal B})$ is a quotient of $C^*_R({\cal B})$. \end{remark} In [12, 3.2(1)], it was proved that these two reduced cross-sectional $C^*$-algebras coincide in the case of semi-direct product bundles. The corresponding result in the case of $C^*$-algebraic bundles over discrete groups was proved implicitly in [6, 4.3]. In the following we shall see that this is true in general. \par\medskip The idea is to define a map $\varphi$ from $C_r^*(\cal B)$ to ${\cal L}(L^2(G; {\cal H}))$ such that $\varphi\circ\lambda_{\cal B} = \mu_{\lambda,T}$ (see Remark \ref{2.b}(a)). As noted above, the difficulty is that $V$ may not be surjective and, by Proposition \ref{2.10}, $\lambda_{\cal B}\otimes_\phi 1$ may only be a proper subrepresentation of $\mu_{\lambda,T}$ (see Example \ref{2.6}(b)). 
However, we may ``move it around'', filling out the whole representation space for $\mu_{\lambda,T}$ using the right regular representation $\rho$ of $G$ (on $L^2(G)$): $\rho_r(g)(s)=\Delta(r)^{1/2}g(sr)$ ($g\in L^2(G)$; $r,s\in G$) where $\Delta$ is the modular function for $G$. \par\medskip \begin{lemma} \label{2.a} For each $r\in G$, \par\noindent (a) The unitary operator $\rho_r\otimes 1$ on $L^2(G)\otimes {\cal H}= L^2(G;{\cal H})$ lies in the commutant of $\mu_{\lambda,T} (C^*(\cal B))$. \par\noindent (b) Consider the isometry $$ V^r: L^2_e({\cal B})\otimes_\phi{\cal H} \to L^2(G;{\cal H}), $$ given by $V^r = (\rho_r\otimes 1) V$. Then for all $a\in C^*({\cal B})$ one has $V^r (\lambda_{\cal B}(a)\otimes 1) = \mu_{\lambda,T}(a) V^r$. \par\noindent (c) Let $K_r$ be the range of\/ $V^r$. Then $K_r$ is invariant under $\mu_{\lambda,T}$ and the restriction of $\mu_{\lambda,T}$ to $K_r$ is equivalent to $\lambda_{\cal B}\otimes 1$. \end{lemma} \noindent {\bf Proof:} It is clear that $\rho_r\otimes 1$ commutes with $\mu_{\lambda,T}(b_t) = \lambda_t\otimes T_{b_t}$ for any $b_t\in B_t$ (see Lemma \ref{1.4}). It then follows that $\rho_r\otimes 1$ also commutes with the range of the integrated form of $\mu_{\lambda,T}$, whence (a). Part (b) follows immediately from (a) and Proposition \ref{2.10}. Finally, (c) follows from (b). \par\medskip Our next result is intended to show that the $K_r$'s do indeed fill out the whole of $L^2(G;{\cal H})$. \par\medskip \begin{proposition} \label{2.c} The linear span of\/ $\bigcup_{r\in G} K_r$ is dense in $L^2(G;{\cal H})$. \end{proposition} \noindent {\bf Proof:} Let $$ \Gamma = {\rm span}\{V^r( f\otimes \eta): r\in G,\ f\in {\cal L}({\cal B}), \ \eta\in{\cal H}\}. 
$$ Since for any $t\in G$, $$ V^r( f\otimes \eta)(t) = (\rho_r\otimes 1)V( f\otimes \eta)(t) = \Delta(r)^{1/2}V(f\otimes \eta)(tr) = \Delta(r)^{1/2}T_{f(tr)}\eta, $$ and since we are taking $f$ in ${\cal L}({\cal B})$ above, it is easy to see that $\Gamma$ is a subset of $C_c(G,{\cal H})$. Our strategy will be to use [7, II.15.10] (on the Banach bundle ${\cal H}\times G$ over $G$) for which we must prove that: \par \noindent (I) If $f$ is a continuous complex function on $G$ and $\zeta\in\Gamma$, then the pointwise product $f\zeta$ is in $\Gamma$; \par \noindent (II) For each $t\in G$, the set $\{\zeta(t):\zeta\in \Gamma\}$ is dense in ${\cal H}$. \par\noindent The proof of (I) is elementary in view of the fact that ${\cal L}({\cal B})$ is closed under pointwise multiplication by continuous scalar-valued functions [7, II.13.14]. In order to prove (II), let $\xi\in{\cal H}$ have the form $\xi=T_b\eta=\phi(b)\eta$, where $b\in B_e$ and $\eta\in{\cal H}$. By [7, II.13.19], let $f\in {\cal L}({\cal B})$ be such that $f(e)=b$. It follows that $\zeta_r:=V^r(f\otimes \eta)$ is in $\Gamma$ for all $r$. Also note that, setting $r=t^{-1}$, we have $$ \zeta_{t^{-1}}(t) = \Delta(t)^{-1/2}T_{ f(e)}\eta = \Delta(t)^{-1/2}T_{b}\eta = \Delta(t)^{-1/2}\xi. $$ This shows that $\xi\in\{\zeta(t):\zeta\in \Gamma\}$. Since the set of such $\xi$'s is dense in ${\cal H}$ (because $\phi$ is non-degenerate by assumption), this proves (II). As already indicated, it now follows from [7, II.15.10] that $\Gamma$ is dense in $L^2(G;{\cal H})$. Since $\Gamma$ is contained in the linear span of $\bigcup_{r\in G} K_r$, the conclusion follows. \par\medskip We can now obtain the desired result. \par\medskip \begin{theorem} \label{2.d} For all $a\in C^*({\cal B})$ one has that $\|\mu_{\lambda,T}(a)\| \leq \|\lambda_{\cal B}(a)\|$. Consequently, $C^*_R({\cal B})=C^*_r({\cal B})$. 
\end{theorem} \noindent {\bf Proof:} We first claim that for all $a\in C^*({\cal B})$ one has $$ \lambda_{\cal B}(a) = 0 \quad \Longrightarrow \quad \mu_{\lambda,T}(a) = 0. $$ Suppose that $\lambda_{\cal B}(a) = 0$. Then for each $r\in G$ we have by Lemma \ref{2.a}(b) that $$ \mu_{\lambda,T}(a) V^r = V^r (\lambda_{\cal B}(a)\otimes 1) =0. $$ Therefore $\mu_{\lambda,T}(a)$ vanishes on the range $K_r$ of $V^r$. By Proposition \ref{2.c} it follows that $\mu_{\lambda,T}(a)=0$, thus proving our claim. Now define a map $$ \varphi : C^*_r({\cal B}) \longrightarrow {\cal B}(L^2(G;{\cal H})) $$ by $\varphi (\lambda_{\cal B}(a)) := \mu_{\lambda,T}(a)$, for all $a$ in $C^*({\cal B})$. By the claim above we have that $\varphi$ is well defined. Also, it is easy to see that $\varphi$ is a *-homomorphism. It follows that $\varphi$ is contractive and hence that for all $a$ in $C^*({\cal B})$ $$ \|\mu_{\lambda,T}(a)\| = \|\varphi (\lambda_{\cal B}(a))\| \leq \|\lambda_{\cal B}(a)\|. $$ For the final statement, we note that if $\mu_T$ is faithful, then the map $\varphi$ defined above is the inverse of the quotient map from $C^*_R({\cal B})$ to $C^*_r({\cal B})$ given in Remark \ref{2.b}(b). \par\medskip The following generalises [11, 7.7.5] to the context of $C^*$-algebraic bundles: \par\medskip \begin{corollary} \label{2.e} Let $T:{\cal B}\rightarrow {\cal L}({\cal H})$ be a non-degenerate $*$-representation of the $C^*$-algebraic bundle ${\cal B}$ and let $\mu_{\lambda,T}$ be the representation of ${\cal B}$ on $L^2(G;{\cal H})$ given by $\mu_{\lambda,T}(b_t) = \lambda_t\otimes T_{b_t}$, for $t\in G$, and $b_t\in B_t$. Then $\mu_{\lambda,T}$ is a well defined representation and induces a representation of $C^*({\cal B})$ (again denoted by $\mu_{\lambda,T}$). In this case, $\mu_{\lambda,T}$ factors through $C^*_r({\cal B})$. Moreover, if $\phi = T\!\mid_{B_e}$ is faithful, the representation of $C^*_r({\cal B})$ arising from this factorisation is also faithful. 
\end{corollary} \noindent {\bf Proof:} By Remark \ref{2.b}(a), $\mu_{\lambda,T}$ factors through a representation of $C^*_R({\cal B})=C^*_r({\cal B})$ (Theorem \ref{2.d}). Now if $\phi$ is faithful, then by Theorem \ref{2.d}, Proposition \ref{2.10} and Lemma \ref{1.3}(a), $\|\mu_{\lambda,T}(a)\| = \|\lambda_{\cal B}(a)\|$. This proves the second statement. \par\medskip \par\medskip \par\medskip \section{The approximation property of $C^*$-algebraic bundles} \par\medskip \par\medskip From now on, we assume that $\mu_T$ (see the paragraph before Lemma \ref{2.4}) is faithful. Moreover, we will not distinguish between ${\cal L}({\cal B})$ and its image in $C^*({\cal B})$. \par\medskip The material in this section is similar to the discrete case in [6]. Let ${\cal B}_e$ be the $C^*$-algebraic bundle $B_e\times G$ over $G$. We will first define a map from $L^2_e({\cal B}_e)\times C^*_r({\cal B})\times L^2_e({\cal B}_e)$ to $C^*(\cal B)$. For any $\alpha\in {\cal L(B}_e)$, let $V_\alpha$ be a map from ${\cal H}$ to $L^2(G; {\cal H})$ given by $$V_\alpha(\xi)(s) = \phi(\alpha(s))\xi$$ ($\xi\in {\cal H}$; $s\in G$). It is clear that $V_\alpha$ is continuous and $\|V_\alpha\|\leq \|\alpha\|$. Moreover, we have $V_\alpha^*(\Theta) = \int_G \phi(\alpha(r)^*)\Theta(r)\ dr$ ($\alpha\in {\cal L(B}_e)$; $\Theta\in L^2(G)\otimes {\cal H} = L^2(G;{\cal H})$) and $\|V_\alpha^*\|\leq \|\alpha\|$. Thus, for any $\alpha, \beta\in L^2_e({\cal B}_e)$, we obtain a continuous linear map $\Psi_{\alpha,\beta}$ from ${\cal L}(L^2(G)\otimes {\cal H})$ to ${\cal L}({\cal H})$ defined by $$\Psi_{\alpha,\beta}(x) = V_\alpha^*xV_\beta$$ with $\|\Psi_{\alpha,\beta}\|\leq \|\alpha\|\|\beta\|$. Recall from Remark \ref{2.b}(a) and Theorem \ref{2.d} that $C^*_r(\cal B)$ is isomorphic to the image of $C^*(\cal B)$ in ${\cal L}(L^2(G)\otimes {\cal H})$ under $\mu_{\lambda,T} = (\mu_T\otimes \lambda_G)\circ \delta_{\cal B}$. \par\medskip \begin{lemma} \label{3.1} Let $\alpha, \beta\in {\cal L(B}_e)$ and $f\in \cal L(B)$. 
Then $\Psi_{\alpha,\beta}(\mu_{\lambda,T}(f)) = \alpha\cdot f\cdot \beta$ where $\alpha\cdot f\cdot \beta\in {\cal L(B)}$ is defined by $\alpha\cdot f\cdot \beta(s) = \int_G \alpha(t)^*f(s)\beta(s^{-1}t)\ dt$. \end{lemma} \noindent {\bf Proof}: For any $\xi \in {\cal H}$, we have \begin{eqnarray*} \Psi_{\alpha,\beta}(\mu_{\lambda,T}(f))\xi & = & \int_G \phi(\alpha(t)^*)(\mu_{\lambda,T}(f)V_\beta\xi)(t)\ dt\\ & = & \int_G\int_G \phi(\alpha(t)^*)T_{f(s)}\phi(\beta(s^{-1}t)) \xi\ ds dt\\ & = & \int_G T_{(\alpha\cdot f\cdot \beta)(s)}\xi\ ds. \end{eqnarray*} \par\medskip Hence we have a map from $L^2_e({\cal B}_e)\times C^*_r({\cal B})\times L^2_e({\cal B}_e)$ to $C^*(\cal B)$ such that $\|\alpha\cdot x\cdot \beta\|\leq \|\alpha\|\|x\|\|\beta\|$. Next, we will show that this map sends $L^2_e({\cal B}_e)\times \mu_{\lambda,T}({\cal L(B}))\times L^2_e({\cal B}_e)$ to $\cal L(B)$. \par\medskip \begin{lemma} \label{3.2} For any $\alpha, \beta\in L^2_e({\cal B}_e)$ and $f\in \cal L(B)$, $\alpha\cdot f\cdot \beta\in \cal L(B)$. \end{lemma} \noindent {\bf Proof}: If $\alpha', \beta' \in {\cal L(B}_e)$, then \begin{eqnarray*} \|(\alpha'\cdot f\cdot \beta')(s)\| & = & \sup \{\mid \langle \eta,\int_G T_{\alpha'(t)^*f(s)\beta'(s^{-1}t)} \xi\ dt\rangle \mid: \|\eta\|\leq 1; \|\xi\|\leq 1\}\\ & \leq & \sup \{ \|f(s)\| (\int_G \|\phi(\alpha'(t))\eta\|^2\ dt)^{1/2} (\int_G \|\phi(\beta'(t))\xi\|^2\ dt)^{1/2}: \|\eta\|\leq 1; \|\xi\|\leq 1\}\\ & \leq & \|f(s)\|\|\alpha'\|\|\beta'\|. \end{eqnarray*} Let $\alpha_n$ and $\beta_n$ be two sequences of elements in ${\cal L(B}_e)$ that converge to $\alpha$ and $\beta$ respectively. Then $(\alpha_n\cdot f\cdot \beta_n)(s)$ converges to an element $g(s)\in B_s$. Moreover, since $f$ is of compact support and the convergence is uniform, $g\in \cal L(B)$ and ${\rm supp}(g)\subseteq {\rm supp}(f)$. In fact, this convergence actually takes place in ${\cal L}_1({\cal B})$ and hence in $C^*(\cal B)$. 
Therefore $\Psi_{\alpha,\beta}(\mu_{\lambda,T}(f))=\mu_T(\alpha\cdot f\cdot \beta)$. \par\medskip \begin{remark} \label{3.3} The proof of the above lemma also shows that $\Psi_{\alpha,\beta}$ sends the image of $B_t$ in $M(C^*_r(\cal B))$ to the image of $B_t$ in $M(C^*(\cal B))$. Hence $\Psi_{\alpha,\beta}$ induces a map $\Phi_{\alpha,\beta}$ from $B$ to $B$ which preserves fibers. \end{remark} \begin{definition} \label{3.4} Let $\{\Phi_i\}$ be a net of maps from $B$ to itself such that they preserve fibers and are linear on each fiber. \par\noindent (a) $\{\Phi_i\}$ is said to be {\it converging to $1$ uniformly on compact slices of $B$} if for any $f\in {\cal L(B})$ and any $\epsilon > 0$, there exists $i_0$ such that for any $i\geq i_0$, $\|\Phi_i(b)-b\|<\epsilon$ for any $b\in f(G)$ ($f(G)$ is called a {\it compact slice} of $B$). \par\noindent (b) $\{\Phi_i\}$ is said to be {\it converging to $1$ uniformly on compact-bounded subsets of $B$} if for any compact subset $K$ of $G$ and any $\epsilon > 0$, there exists $i_0$ such that for any $i\geq i_0$, $\|\Phi_i(b)-b\|<\epsilon$ if $\pi(b)\in K$ and $\|b\|\leq 1$. \end{definition} \begin{lemma} \label{3.5} Let $\{\Phi_i\}$ be a net as in Definition \ref{3.4}. Then each of the following conditions is stronger than the next one. \begin{enumerate} \item[i.] $\{\Phi_i\}$ converges to $1$ uniformly on compact-bounded subsets of $B$. \item[ii.] $\{\Phi_i\}$ converges to $1$ uniformly on compact slices of $B$. \item[iii.] For any $f\in \cal L(B)$, the net $\Phi_i\circ f$ converges to $f$ in ${\cal L}_1(\cal B)$. \end{enumerate} \end{lemma} \par\noindent {\bf Proof:} Since every element in ${\cal L(B})$ has compact support and is bounded, it is clear that (i) implies (ii), and it is obvious that (ii) implies (iii). \par\medskip Following the idea of [6], we define the approximation property of $\cal B$. \par\medskip \begin{definition} \label{3.6} (a) Let $\cal B$ be a $C^*$-algebraic bundle. 
For $M>0$, $\cal B$ is said to have the {\it $M$-approximation property} (respectively, {\it strong $M$-approximation property}) if there exist nets $(\alpha_i)$ and $(\beta_i)$ in ${\cal L(B}_e)$ such that \begin{enumerate} \item[i.] $\sup_i \|\alpha_i\|\|\beta_i\| \leq M$; \item[ii.] the net $\Phi_i = \Phi_{\alpha_i,\beta_i}$ (see Remark \ref{3.3}) converges to $1$ uniformly on compact slices of $B$ (respectively, uniformly on compact-bounded subsets of $B$). \end{enumerate} $\cal B$ is said to have the (respectively, {\it strong}) {\it approximation property} if it has the (respectively, strong) $M$-approximation property for some $M > 0$. \par\noindent (b) We will use the terms ({\it strong}) {\it positive $M$-approximation property} and ({\it strong}) {\it positive approximation property} if we can choose $\beta_i = \alpha_i$ in part (a). \end{definition} Because of Remark 3.7(b) below as well as [11, 7.3.8], we believe that the above is the weakest condition one can think of to ensure the amenability of the $C^*$-algebraic bundle. \par\medskip \begin{remark} \label{3.7} (a) Since any compact subset of a discrete group is finite and any $C^*$-algebraic bundle has enough cross-sections, the approximation property defined in [6] is the same as the positive 1-approximation property defined above. \par\noindent (b) It is easy to see that the amenability of $G$ implies the positive 1-approximation property of $\cal B$ (note that the positive 1-approximation property is similar to the condition in [11, 7.3.8]). In fact, let $\xi_i$ be the net given by [11, 7.3.8] and let $\eta_i(t) = \overline{\xi_i(t)}$. If $u_j$ is an approximate unit of $B_e$ (which is also a strong approximate unit of $\cal B$ by [7, VIII.16.3]), then the net $\alpha_{i,j} = \beta_{i,j} = \eta_i u_j$ will satisfy the required property. 
\par\noindent (c) We can also formulate the approximation property as follows: there exists $M>0$ such that for any compact slice $S$ of $B$ and any $\epsilon > 0$, there exist $\alpha,\beta\in L^2_e({\cal B}_e)$ with $$\|\alpha\|\|\beta\|\leq M\qquad {\rm and} \qquad \|\alpha\cdot b\cdot \beta - b\| < \epsilon$$ if $b\in S$. In fact, we can replace $L^2_e({\cal B}_e)$ by ${\cal L(B}_e)$ and consider the directed set $D= \{ (K,\epsilon): K$ is a compact subset of $G$ and $\epsilon > 0 \}$. For any $d=(K,\epsilon)\in D$, we take $\alpha_d$ and $\beta_d$ satisfying the above condition. These are the required nets. \end{remark} We can now prove the main results of this section. \par\medskip \begin{proposition} \label{3.8} If $\cal B$ has the approximation property, then the coaction $\epsilon_{\cal B} = ({\rm id}\otimes \lambda_G)\circ\delta_{\cal B}$ is injective. \end{proposition} \noindent {\bf Proof}: Let $\Phi_i = \Phi_{\alpha_i,\beta_i}$ be the map from $B$ to itself as given by Definition \ref{3.6}(a)(ii) and $\Psi_i = \Psi_{\alpha_i,\beta_i}$. Let $J_i=\Psi_i\circ\mu_{\lambda,T}$. By Lemma \ref{3.2}, for any $f\in\cal L(B)$, $J_i(f)\in {\cal L}({\cal B})$ (note that we regard ${\cal L}({\cal B})\subseteq C^*({\cal B})$) and $J_i(f)(s) = \Phi_i(f(s))$ ($s\in G$). Since $\Phi_i\circ f$ converges to $f$ in ${\cal L}_1(\cal B)$ (by Lemma \ref{3.5}), $J_i(f)$ converges to $f$ in $C^*(\cal B)$. Now because $\|J_i\|\leq \|\Psi_i\|\leq \sup_i\|\alpha_i\| \|\beta_i\|\leq M$, we know that $J_i(x)$ converges to $x$ for all $x\in C^*(\cal B)$ and $\epsilon_{\cal B}$ is injective. \par\medskip Note that if $G$ is amenable, we can also obtain directly from Lemma \ref{1.1}(a) that $\epsilon_{\cal B}$ is injective. \par\medskip \begin{theorem} \label{3.9} Let $\cal B$ be a $C^*$-algebraic bundle having the approximation property (in particular, if $G$ is amenable). Then $\cal B$ is amenable. 
\end{theorem} \noindent {\bf Proof:} Proposition \ref{3.8} implies that $C_R^*({\cal B}) = C^*({\cal B})$ (see the paragraph before Remark \ref{2.b}). Now the amenability of ${\cal B}$ clearly follows from Theorem \ref{2.d}. \par\medskip \par\medskip \par\medskip \section{Two special cases} \par\medskip \par\medskip \noindent {\em I. Semi-direct product bundles and nuclearity of crossed products.} \par\medskip Let $A$ be a $C^*$-algebra with action $\alpha$ by a locally compact group $G$. Let ${\cal B}$ be the semi-direct product bundle of $\alpha$. \par\medskip \begin{remark} \label{4.1} $\cal B$ has the (respectively, strong) $M$-approximation property if there exist nets $\{\gamma_i\}_{i\in I}$ and $\{\theta_i\}_{i\in I}$ in $K(G;A)$ such that $$\|\int_G \gamma_i(r)^*\gamma_i(r)\ dr\|\cdot\|\int_G \theta_i(r)^*\theta_i(r)\ dr\| \leq M^2$$ and for any $f\in K(G;A)$ (respectively, for any compact subset $K$ of $G$), $\int_G \gamma_i(r)^*a\alpha_t (\theta_i(t^{-1}r))\ dr$ converges to $a\in A$ uniformly for $(t,a)$ in the graph of $f$ (respectively, uniformly for $t\in K$ and $\|a\|\leq 1$). \end{remark} \begin{definition} \label{4.2} An action $\alpha$ is said to have the (respectively, strong) {\it ($M$-)approximation property} (respectively, $\alpha$ is said to be {\it weakly amenable}) if the $C^*$-algebraic bundle $\cal B$ associated with $\alpha$ has the (respectively, strong) ($M$-)approximation property (respectively, ${\cal B}$ is amenable). \end{definition} Let $G$ and $H$ be two locally compact groups. Let $A$ and $B$ be $C^*$-algebras with actions $\alpha$ and $\beta$ by $G$ and $H$ respectively. Suppose that $\tau = \alpha\otimes \beta$ is the product action on $A\otimes B$ by $G\times H$. \par\medskip \begin{lemma} \label{4.3} With the notation as above, if $A$ is nuclear and both $\alpha$ and $\beta$ have the approximation property, then $(A\otimes B) \times_\tau(G\times H)=(A\otimes B)\times_{\tau,r}(G\times H)$. 
\end{lemma} \noindent {\bf Proof}: Let $\cal B$, $\cal D$ and $\cal F$ be the semi-direct product bundles of $\alpha$, $\beta$ and $\tau$ respectively. Then $C^*_r({\cal F})=C^*_r({\cal B})\otimes C^*_r({\cal D})$ (by Example \ref{2.12}). Moreover, since $A$ is nuclear, $C^*({\cal F}) = C^*({\cal B})\otimes_{\max} C^*({\cal D})$ (by Example \ref{2.12} and [10, 3.2]). It is not hard to see that the coaction, $\delta_{\cal F}$, on $C^*(\cal F)$ is the tensor product of the coactions on $C^*(\cal B)$ and $C^*(\cal D)$. Suppose that $C^*(\cal F)$ is a $C^*$-subalgebra of $\cal L(H)$. Consider, as in Section 2, the composition: $$\mu_{\cal F}: C^*({\cal F})\stackrel{\delta_{\cal F}} {\longrightarrow} C^*({\cal F})\otimes_{\rm max} C^*(G\times H) \stackrel{{\rm id}\otimes \lambda_{G\times H}}{\longrightarrow} {\cal L} ({\cal H}\otimes L^2(G\times H))$$ and identify its image with $C^*_R({\cal F})=C^*_r({\cal F})$ (see Remark \ref{2.b}(a)). We similarly consider the maps $\mu_{\cal B}$ and $\mu_{\cal D}$ from $C^*(\cal B)$ and $C^*(\cal D)$ to ${\cal L}({\cal H}\otimes L^2(G))$ and ${\cal L}({\cal H}\otimes L^2(H))$ respectively. Now for any $f\in \cal L(B)$ and $g\in \cal L(D)$, we have $\mu_{\cal F}(f\otimes g) = (\mu_{\cal B}(f))_{12} (\mu_{\cal D}(g))_{13}\in {\cal L}({\cal H}\otimes L^2(G)\otimes L^2(H))$. As in Section 3, we define, for any $k\in {\cal L(B}_e)$ and $l\in {\cal L(D}_e)$, an operator $V_{k\otimes l}$ from $\cal H$ to ${\cal H}\otimes L^2(G\times H)$ by $V_{k\otimes l}\zeta (r,s) = k(r)(l(s)\zeta)$ ($r\in G$; $s\in H$; $\zeta\in {\cal H}$). It is not hard to see that $V_{k\otimes l}(\zeta) = (V_k\otimes 1)V_l(\zeta)$ and $$V_{k\otimes l}^*\mu_{\cal F} (f\otimes g)V_{k'\otimes l'} = (V_k^*\mu_{\cal B}(f)V_{k'}) (V_l^*\mu_{\cal D}(g)V_{l'})\in \cal L(H)$$ (note that $B_r$ commutes with $D_s$ in ${\cal L}(\cal H)$). 
Now let $k_i, k'_i\in {\cal L(B}_e)$ and $l_j, l'_j\in {\cal L(D}_e)$ be the nets that give the corresponding approximation property on $\cal B$ and $\cal D$ respectively. Then $V_{k_i}^*\mu_{\cal B}(f)V_{k'_i}$ converges to $f$ in $C^*(\cal B)$ and $V_{l_j}^*\mu_{\cal D}(g)V_{l'_j}$ converges to $g$ in $C^*(\cal D)$. Hence $J_{i,j}(z) = V_{k_i\otimes l_j}^*\mu_{\cal F}(z)V_{k'_i\otimes l'_j}$ converges to $z$ in $C^*(\cal F)$ for all $z\in {\cal L(B)}\otimes_{alg} {\cal L(D)}$. Since $\|J_{i,j}\|$ is uniformly bounded, $\mu_{\cal F}$ is injective and $C^*({\cal F}) = C^*_r({\cal F})$. \par\medskip An interesting consequence of this lemma is the nuclearity of the crossed products of group actions with the approximation property (which is a generalisation of the case of actions of amenable groups). Note that in the case of discrete groups, this was also proved by Abadie in [1]. \par\medskip \begin{theorem} \label{4.4} Let $A$ be a $C^*$-algebra and $\alpha$ be an action on $A$ by a locally compact group $G$. If $A$ is nuclear and $\alpha$ has the approximation property, then $A\times_{\alpha} G = A\times_{\alpha , r} G$ is also nuclear. \end{theorem} \noindent {\bf Proof}: By Lemma \ref{4.3} (or Theorem \ref{3.9}), $A\times_{\alpha} G = A\times_{\alpha , r} G$. For any $C^*$-algebra $B$, let $\beta$ be the trivial action on $B$ by the trivial group $\{e\}$. Then $(A\times_\alpha G)\otimes_{\max} B = (A\otimes_{\max}B)\times_{\alpha\otimes\beta} G = (A\otimes B)\times_{\alpha\otimes\beta, r} G = (A\times_{\alpha , r} G)\otimes B$ (by Lemma \ref{4.3} again). \par\medskip One application of Theorem \ref{4.4} is to relate the amenability of Anantharaman-Delaroche (see [4, 4.1]) to the approximation property in the case when $A$ is nuclear and $G$ is discrete. The following corollary clearly follows from this theorem and [4, 4.5]. \par\medskip \begin{corollary} \label{4.5} Let $A$ be a nuclear $C^*$-algebra with an action $\alpha$ by a discrete group $G$. 
If $\alpha$ has the approximation property, then $\alpha$ is amenable in the sense of Anantharaman-Delaroche. \end{corollary} We do not know whether the two properties coincide in general. However, in the case of commutative $C^*$-algebras we can compare them more directly and show that they are the same. Furthermore, they also coincide in the case of finite dimensional $C^*$-algebras. \par\medskip \begin{corollary} \label{4.a} Let $A$ be a $C^*$-algebra with an action $\alpha$ by a discrete group $G$. \par\noindent (a) If $A$ is commutative, the following are equivalent: \begin{enumerate} \item[i.] $\alpha$ is amenable in the sense of Anantharaman-Delaroche; \item[ii.] $\alpha$ has the positive 1-approximation property; \item[iii.] $\alpha$ has the approximation property. \end{enumerate} \par\noindent (b) If $A$ is unital and commutative or if $A$ is finite dimensional, then (i)-(iii) are also equivalent to the following conditions: \begin{enumerate} \item[iv.] $\alpha$ has the strong positive 1-approximation property; \item[v.] $\alpha$ has the strong approximation property. \end{enumerate} \end{corollary} \noindent {\bf Proof:} (a) By [4, 4.9(h')], $\alpha$ is amenable in the sense of Anantharaman-Delaroche if and only if there exists a net $\{\gamma_i\}$ in $K(G;A)$ such that $\|\sum_{r\in G} \gamma_i(r)^*\gamma_i(r)\|\leq 1$ and $\sum_{r\in G} \gamma_i(r)^*\alpha_t(\gamma_i(t^{-1}r))$ converges to $1$ strictly for any $t\in G$. This is exactly the original definition of the approximation property given in [1]. Hence condition (i) is equivalent to condition (ii) (see Remark \ref{3.7}(a)). Now part (a) follows from Corollary \ref{4.5}. \par\noindent (b) Suppose that $A$ is both unital and commutative. Let $\alpha$ satisfy condition (i) and $\{\gamma_i\}$ be the net as given in the proof of part (a) above. As $A$ is unital, the strict convergence and the norm convergence are equivalent. 
Moreover, as $G$ is discrete, any compact subset $K$ of $G$ is finite. These, together with the commutativity of $A$, imply that $\sum_{r\in G} \gamma_i(r)^*\alpha_t(\gamma_i(t^{-1}r))$ converges to 1 strictly for any $t\in G$ if and only if $\sum_{r\in G} \gamma_i(r)^*a\alpha_t(\gamma_i(t^{-1}r))$ converges to $a\in A$ uniformly for $t\in K$ and $\|a\|\leq 1$. Thus, by Remark \ref{4.1}, we have the equivalence of (i) and (iv) in the case of commutative unital $C^*$-algebras and the equivalence of (i)-(v) follows from Lemma \ref{3.5} and Corollary \ref{4.5}. Now suppose that $A$ is a finite dimensional $C^*$-algebra (but not necessarily commutative). By [4, 4.1] and [4, 3.3(b)], $\alpha$ satisfies condition (i) if and only if there exists a net $\{\gamma_i\}$ in $K(G; Z(A))$ (where $Z(A)$ is the centre of $A=A^{**}$) such that $\|\sum_{r\in G} \gamma_i(r)^*\gamma_i(r)\|\leq 1$ and for any $t\in G$, $\sum_{r\in G} \gamma_i(r)^*\alpha_t (\gamma_i(t^{-1}r))$ converges to $1$ weakly (and hence converges to $1$ in norm as $A$ is finite dimensional). Let $K$ be any compact (and hence finite) subset of $G$. Since $\alpha_t(\gamma_i(t^{-1}r))\in Z(A)$, $\sum_{r\in G} \gamma_i(r)^*a \alpha_t(\gamma_i(t^{-1}r))$ converges to $a\in A$ uniformly for $t\in K$ and $\|a\|\leq 1$. This shows that $\alpha$ satisfies condition (iv) (see Remark \ref{4.1}). The equivalence follows again from Lemma \ref{3.5} and Corollary \ref{4.5}. \par\medskip Because of the above results, we believe that the approximation property is a good candidate for the notion of amenability of actions of locally compact groups on general $C^*$-algebras. \par\medskip \par\medskip \noindent {\em II. Discrete groups: $G$-gradings and coactions.} \par\medskip Let $G$ be a discrete group and let $D$ be a $C^*$-algebra with a $G$-grading (i.e. $D=\overline{\oplus_{r\in G} D_r}$ such that $D_r\cdot D_s \subseteq D_{rs}$ and $D_r^*\subseteq D_{r^{-1}}$). 
Then there exists a canonical $C^*$-algebraic bundle structure (over $G$) on $D$. We denote this bundle by $\cal D$. Now by [6, \S 3], $D$ is a quotient of $C^*(\cal D)$. Moreover, if the grading is topological in the sense that there exists a continuous conditional expectation from $D$ to $D_e$ (see [6, 3.4]), then $C^*_r({\cal D})$ is a quotient of $D$ (see [6, 3.3]). Hence by [12, 3.2(1)] (or [10, 2.17]), there is an induced non-degenerate coaction on $D$ by $C^*_r(G)$ which defines the given grading. Now the proofs of [9, 2.6] and [6, 3.3], together with the above observation, imply the following equivalence. \par\medskip \begin{proposition} \label{4.6} Let $G$ be a discrete group and $D$ be a $C^*$-algebra. Then a $G$-grading $D=\overline{\oplus_{r\in G} D_r}$ is topological if and only if it is induced by a non-degenerate coaction of $C^*_r(G)$ on $D$. \end{proposition} \begin{corollary} \label{4.7} Let $D$ be a $C^*$-algebra with a non-degenerate coaction $\epsilon$ by $C^*_r(G)$. Then it can be ``lifted'' to a full coaction, i.e., there exist a $C^*$-algebra $A$ with a full coaction $\epsilon_A$ by $G$ and a quotient map $q$ from $A$ to $D$ such that $\epsilon\circ q = (q\otimes \lambda_G)\circ\epsilon_A$. \end{corollary} In fact, if $\cal D$ is the bundle as defined above, then we can take $A=C^*({\cal D})$ and $\epsilon_A = \delta_{\cal D}$. \par\medskip \par\medskip \par\medskip \noindent {\bf References} \par\medskip \noindent [1] F. Abadie, Tensor products of Fell bundles over discrete groups, preprint (funct-an/9712006), Universidade de S\~{a}o Paulo, 1997. \par\noindent [2] C. Anantharaman-Delaroche, Action moyennable d'un groupe localement compact sur une alg\`ebre de von Neumann, Math. Scand. 45 (1979), 289--304. \par\noindent [3] C. Anantharaman-Delaroche, Action moyennable d'un groupe localement compact sur une alg\`ebre de von Neumann II, Math. Scand. 50 (1982), 251--268. \par\noindent [4] C. 
Anantharaman-Delaroche, Syst\`emes dynamiques non commutatifs et moyennabilit\'e, Math. Ann. 279 (1987), 297--315. \par\noindent [5] S. Baaj and G. Skandalis, $C^{*}$-alg\`ebres de Hopf et th\'eorie de Kasparov \'equivariante, $K$-Theory 2 (1989), 683--721. \par\noindent [6] R. Exel, Amenability for Fell bundles, J. Reine Angew. Math. 492 (1997), 41--73. \par\noindent [7] J. M. G. Fell and R. S. Doran, {\it Representations of *-algebras, locally compact groups, and Banach *-algebraic bundles vol. 1 and 2}, Academic Press, 1988. \par\noindent [8] K. Jensen and K. Thomsen, {\it Elements of $KK$-Theory}, Birkh\"auser, 1991. \par\noindent [9] C. K. Ng, Discrete coactions on $C^*$-algebras, J. Austral. Math. Soc. (Series A) 60 (1996), 118--127. \par\noindent [10] C. K. Ng, Coactions and crossed products of Hopf $C^{*}$-algebras, Proc. London Math. Soc. (3) 72 (1996), 638--656. \par\noindent [11] G. K. Pedersen, {\it $C^*$-algebras and their automorphism groups}, Academic Press, 1979. \par\noindent [12] I. Raeburn, On crossed products by coactions and their representation theory, Proc. London Math. Soc. (3) 64 (1992), 625--652. \par\noindent [13] M. A. Rieffel, Induced representations of $C^*$-algebras, Adv. Math. 13 (1974), 176--257. \par\noindent \par \medskip \noindent Departamento de Matem\'{a}tica, Universidade Federal de Santa Catarina, 88010-970 Florian\'{o}polis SC, Brazil. \par \noindent $E$-mail address: [email protected] \par\medskip \noindent Mathematical Institute, Oxford University, 24-29 St. Giles, Oxford OX1 3LB, United Kingdom. \par \noindent $E$-mail address: [email protected] \par \end{document}
\section{Introduction} Online harassment is pervasive in regions around the world. Users post hate speech that demeans and degrades people based on their gender, race, sexual identity, or position in society \cite{blackwell2017classification,lenhart2016online}; users post insults and spread rumors, disproportionately harming those with fewer resources in society to cope with or respond to the attacks \cite{marwick2021morally, lenhart2016online, ybarra2008risky}; and users share private, sensitive content, like home addresses or sexual images, without the consent of those whose information is being shared \cite{goldberg2019nobody}. These behaviors introduce multiple types of harm with varied levels of severity, ranging from minor nuisances to psychological harm to economic precarity to life threats \cite{jiang2021understanding, schoenebeck2020reimagining, sambasivan2019they}. \textcolor{black}{Gaining a global understanding of online harassment} is important for designing online experiences that meet the needs of diverse communities around the world. Social media platforms have struggled to govern online harassment, relying on human and algorithmic moderation systems that cannot easily adjudicate content that is as varied as the human population that creates it \cite{goldman2021content, roberts2019behind}. Platforms maintain community guidelines that dictate what type of content is allowed or not allowed and then use the combination of human and automated pipelines to identify and address violations \cite{roberts2019behind, gillespie2018custodians}. However, identifying and categorizing what type of content is harmful or not is difficult for both humans and algorithms to do effectively and consistently. 
These challenges are magnified in multilingual environments where people may be trying to assess content in different languages or cultural contexts than they are familiar with, while algorithms are inadequately developed to work across these languages and contexts \cite{york2021silicon, gupta2022adima}. \textcolor{black}{Investigations} of harms associated with online harassment have disproportionately focused on U.S. contexts. Most prominent technology companies are centered in the U.S., employing U.S. workers in executive positions and centering U.S. laws, norms, corporations, and people \cite{york2021silicon, wef2022ceo}. Scholars have called attention to this problem, pointing out how experiences differ for people and communities globally (e.g. \cite{york2021silicon, sambasivan2019they, sultana2021unmochon}). For example, a study of 199 South Asian women shows that they refrain from reporting abuse because platforms rarely have the contextual knowledge to understand local experiences \cite{sambasivan2019they}. Across countries, social media users have expressed distrust in platforms' ability to govern behavior effectively, especially systems that are vague, complicated, and U.S.- and European-centric \cite{crawford2016flag, sambasivan2019they, blackwell2017classification}. Governing social media across the majority of the world requires understanding how to design platforms with policies and values that are aligned with the communities who use them. Towards that goal, this article examines perceptions of harm and preferences for remedies associated with online harassment via a survey conducted in 14 countries\footnote{\textcolor{black}{Data was collected from 13 countries plus a collection of Caribbean countries. We use the term "country" throughout for readability.}} around the world, selected for their diversity in location, culture, and economies. 
Results from this study shed light on similarities and differences in attitudes about harms and remedies in countries around the world. This work also demonstrates the complexities of measuring and making sense of these differences, which cannot be explained by a single factor and should not be assumed to be stable over time. This article advances scholarship on online harassment in majority contexts, and seeks to expand understanding of how to design platforms that meet the needs of the communities that use them. \section{Impacts of Online Harassment} Online harassment is an umbrella term that encompasses myriad types of online behaviors including insults, hate speech, slurs, threats, doxxing, and non-consensual image sharing, among others. A rich body of literature has described characteristics of online harassment including what it is, who experiences it, and how platforms try to address it (e.g. \cite{schoenebeck2020reimagining, jhaver2019did, matias2019preventing, douek2020governing, chandrasekharan2017bag, thomas2021sok}). Microsoft's Digital Civility surveys and Google's state of abuse, hate, and harassment surveys indicate how harassment is experienced globally \cite{thomas2021sok, msft2022dci}. Harassment can be especially severe when it is networked and coordinated, where groups of people threaten one or many other people's safety and wellbeing \cite{marwick2021morally}. Other types of harassment are especially pernicious in-the-moment, such as reporting ``crimes'' so that law enforcement agencies investigate a home \cite{bernstein2016investigating} or sharing a person's home address online with the intent of encouraging mobs of people to threaten that person at their home. 
Across types of harm, marginalized groups experience disproportionate harm associated with harassment online, including racial minorities, religious minorities, caste minorities, sexual and gender minorities, and people who have been incarcerated \cite{walker2017systematic, pewonline, powell2014blurred,maddocks2018non, englander2015coerced, poole2015fighting}. Sometimes users post malicious content that is intended to bypass community guidelines and is difficult to detect algorithmically \cite{dinakar2012common,vitak2017identifying}. As a result, it is relatively easy to deceive automatic detection models by subtly modifying an otherwise highly toxic phrase so that the detection model assigns it a significantly lower toxicity score \cite{hosseini2017deceiving}. In addition, due to limited training on non-normative behavior, these automatic detection and classification tools can exacerbate existing structural inequities \cite{hosseini2017deceiving}. For instance, Facebook’s removal of a photograph of two men kissing after flagging it as ``graphic sexual content'' highlighted the lack of inclusivity of non-dominant behavior in their automatic detection tools \cite{fbcencershipprob}. This valorization of certain viewpoints highlights that power resides among those who create these labels by embedding their own values and worldviews (mostly U.S.-centric) to classify particular behaviors as appropriate or inappropriate \cite{blackwell2017classification, hosseini2017deceiving}. The effects of harassment vary across experiences and individuals but might include anxiety, stress, fear, humiliation, self-blame, anger, and illness. There is not yet a standard framework for measuring harms associated with online harassment, which can include physical harm, sexual harm, psychological harm, financial harm, reproductive harm, and relational harm \cite{unwomen}. 
These can manifest in myriad ways: online harassment can cause changes to technology use or privacy behaviors, increased safety and privacy concerns, and disruptions of work, sleep, and personal responsibilities \cite{pittaro2007cyber,griffiths2002occupational,duggan2017online}. Other consequences can include public shame and humiliation, an inability to find new romantic partners, job loss or problems securing new employment, offline harassment and stalking, and mental health effects such as post-traumatic stress disorder (PTSD), depression, anxiety, self-blame, self-harm, trust issues, low self-esteem and confidence, and loss of control \cite{walker2017systematic, powell2014blurred, bates2017revenge,eckert2020doxxing, barrense2020non, ryan2018european, pampati2020having}. These effects can be experienced for long periods of time due in part to the persistence and searchability of content \cite{goldberg2019nobody}. Targets often choose to temporarily or permanently abstain from social media sites, despite the resulting isolation from information resources and support networks \cite{goldberg2019nobody, lenhart2016online}. Microsoft's Digital Civility Index, a yearly survey of participants in over 20 countries, indicates that men are more confident than women in managing online risks \cite{msft2022dci}. Sexual images of women and girls are disproportionately created, sent, and redistributed without consent, which can severely impact women's lives \cite{burkett2015sex, dobson2016sext, bates2017revenge,eckert2020doxxing, barrense2020non}. In a study of unsolicited \textcolor{black}{nude} images and their effect on user engagement \cite{hayes2018unsolicited, shaw2016bitch}, victims reported being bombarded with unwelcome explicit imagery and faced further insults when they attempted to reduce interaction. A survey by Maple et al. 
with 353 participants from the United Kingdom (68\% of respondents were women) listed damage to their reputation as the primary fear of victims of cyberharassment \cite{maple2011cyberstalking}. The consequences of gendered and reputational harm can be devastating. In South Korea, celebrities Hara Goo and Sulli (Jin-ri Choi) died by suicide, which many attributed to the large-scale cyberbullying, sexual harassment, and gender violence they experienced online \cite{goo2019}. A Pakistani social media celebrity was murdered by her brother, who perceived her social media presence as a blemish on the family's honor \cite{QandeelBaloch}. Two girls and their mother were allegedly gunned down by a stepson and his friends over the non-consensual filming and sharing of a video of the girls enjoying the rain with their family \cite{twogirls}. Many of these harms are ignited and fueled by victim-blaming, where society places the responsibility solely on women and other marginalized groups to avoid being assaulted \cite{walker2017systematic, powell2014blurred, chisala2016gender}. This blaming is also perpetuated digitally; for instance, a review of qualitative studies on non-consensual sharing highlighted that women are perceived as responsible if their images are shared because they voluntarily posed for and sent these images in the first place \cite{walker2017systematic}. \section{Challenges in Governing Online Harassment} Most social media sites have reporting systems aimed at flagging inappropriate content or behavior online \cite{crawford2016flag}. Though platform policies do not explicitly define what constitutes online harassment \cite{pater2016characterizations}, platforms have highlighted several activities and behaviors in their community guidelines including abuse, bullying, defaming, impersonation, stalking, and threats \cite{pater2016characterizations, jiang2020characterizing}. 
Content that is reported goes into a processing pipeline where human workers evaluate the content and determine whether it violates community guidelines or not \cite{roberts2019behind}. If it does, they may take it down and sanction the user who posted it, with sanctions ranging in severity from warnings to suspensions to permanent bans \cite{schoenebeck2021drawing, goldman2021content}. Platforms use machine learning to automatically classify and filter out malicious content, abusive language, and offensive behaviors \cite{chandrasekharan2017bag,wulczyn2017ex,yin2009detection}. These approaches range from adding contextual and semantic features in detection tools to generating computational models using preexisting data from online communities to using these machine learning models to assign ``toxicity scores'' \cite{wulczyn2017ex, chandrasekharan2017bag}. Though harassment detection approaches have improved dramatically, fundamental limitations remain \cite{blackwell2017classification}, including false positives and false negatives, where content is taken down that should have stayed up and vice versa \cite{haimson2021disproportionate, schoenebeck2020reimagining}. Many of these problems are deeply embedded in algorithmic systems, which can reinforce Western tropes, such as associating the word ``Muslim'' with terrorists \cite{abid2021persistent}. Algorithms to detect problematic content also perform substantially worse in non-English languages, perpetuating inequalities rather than remediating them \cite{debre2021facebook}. Dominant voices can overrule automatically detected flagged content through situated judgments \cite{crawford2016flag}. For instance, a widely distributed video of Neda, an Iranian woman caught up in street protests and shot by military police in 2009, was heavily flagged as violating YouTube's community guidelines for graphic violence, but YouTube justified leaving it up because the video was newsworthy \cite{Neda}. 
Platform policies are written in complex terms that are inaccessible to many social media users, which makes it difficult for them to seek validation of their online harassment experiences \cite{fiesler2016reality}. Further, platform operators do not specify which prohibited activities are associated with which responses \cite{pater2016characterizations}. When combined with the punitive nature of sanctions, online governance systems may be confusing and ineffective at remediating user behavior, while overlooking the harms faced by victims of the behavior \cite{schoenebeck2021drawing}. One alternative that has been proposed more recently is a focus on rehabilitation and reparation in the form of apologies, restitution, mediation, or validation of experiences \cite{blackwell2017classification, schoenebeck2021drawing, xiao2022sensemaking}. Implementing responses to online harassment requires that users trust platforms' ability to select and implement that response \cite{wilkinson2022many}; however, public trust in technology companies has decreased in recent years, and there is also distrust of social media platforms' ability to effectively govern online behavior \cite{schoenebeck2021youth, americanstrust2020, blackwell2017classification, musgrave2022experiences}. 84\% of social media users in the U.S. believe that it is the platform's responsibility to protect them from social media harassment \cite{Americans}, yet Lenhart et al.'s survey suggests that only 27\% of victims reported harassing activities on these platforms \cite{onlineharassmentAmerica}. A different survey by Wolak et al. with 1631 victims of sextortion found that 79\% of victims did not report their situation to social media sites because they did not think it would be helpful to report \cite{wolak2016sextortion}. 
Their participants indicated that platform reporting might be helpful only when victims are connected to perpetrators exclusively online, which might be addressable through in-app reporting \cite{wolak2016sextortion}. Sambasivan et al.'s study with 199 South Asian women revealed that participants refrain from reporting through platforms due to platforms' limited contextual understanding of victims' regional issues, which is further slowed by the platforms' requirements to fill out lengthy forms providing detailed contexts \cite{sambasivan2019they}. Musgrave et al. find that U.S. Black women and femmes do not report gendered and racist harassment because they do not believe reporting will help them \cite{musgrave2022experiences}. Wolak et al. also found that only 16\% of victims of sextortion reported their incidents to the police \cite{wolak2016sextortion}. Many of those who reported to police described having a negative reporting experience, which deterred them from pursuing criminal charges against offenders \cite{wolak2016sextortion}. Such experiences include police arguing for the inadequacy of proof to file complaints, that sextortion is a non-offensive act, lack of jurisdiction to take actions, and being generally rude, insensitive, and mocking \cite{wolak2016sextortion}. Sambasivan et al. also reported that only a few of their nearly 200 participants reported abusive behaviors to police because they perceived law enforcement officers to have low technical literacy, to be likely to shame women, or to be abusers themselves \cite{sambasivan2019they}. When abusers are persistent, even reporting typically does not address the ongoing harassment \cite{marwick2021morally, goldberg2019nobody}. Sara Ahmed introduces the concept ``strategic inefficiency'' to explain how institutions slow down complaint procedures that can then deter complaints from constituents \cite{ahmed2021complaint}. 
The lack of formal reporting channels leads users to be largely self-reliant for mitigating and avoiding abuse. The techniques they use include limiting content, modifying privacy settings, self-censorship, using anonymous and gender-neutral identities, using humor, avoiding communication with others, ignoring abuse, confronting abusers, avoiding location sharing, deleting accounts, blocklists, changing contact information, changing passwords, using multiple email accounts for different purposes, creating a new social media profile under a different name, blocking or unfriending someone, and untagging themselves from photos \cite{onlineharassmentAmerica, wolak2016sextortion, vitak2017identifying, fox2017women, mayer2021now, such2017photo, corple2016beyond, dimond2013hollaback, vitis2017dick, jhaver2018online}. Whether reporting to companies or \textcolor{black}{police}, these approaches all put the burden of addressing harassment on the victims. If we want to better govern online behavior globally, we need to better understand what harms users experience and how platforms and policies can systematically better support them after those harms. \section{Study Design} We conducted a cross-country online survey in 14 countries (13 countries plus multiple Caribbean countries). \textcolor{black}{We aimed for a minimum of 250 respondents in each country, a target that reflected our desire for age variance and gender representation among men and women but not the higher sample size needed for representative samples or subgroup analyses}. The survey focused on online harassment harms and remedies and included questions about demographics, personal values, societal issues, social media habits, and online harassment. This paper complements a prior paper from the same project that focused on gender \cite{im2022women}; this paper focuses on country-level differences though it also engages with gender as part of the narrative. 
We iteratively designed the survey as a research team, discussing and revising questions over multiple months. When we had a stable draft of a survey, members of our research team translated surveys manually and compared those versions to translations via paid human translation services for robustness. We pilot tested translations with 2-4 people for each language and revised the survey further. \textcolor{black}{Our goal was to have similar wording across languages; though this resulted in some overlapping terms in the prompts (e.g. malicious), participants seemed to comprehend each prompt in our pilots.} We deployed the survey in a dominant local language for each country (see Table \ref{table:participant-demographics}). The survey contained four parts: harassment scenarios, harm measures, possible remedies, and demographics and values. Below, we describe each stage in detail: \textit{Harassment scenarios}. We selected four online harassment scenarios to capture participants' perceptions about a range of harassment experiences without making the survey too long, which leads to participant fatigue. We selected the four harassment scenarios by reviewing prior scholarly literature, reports, and \textcolor{black}{news articles} and prioritizing diversity in types of harm and severity of harm. We prioritized harassment types that would be globally relevant and legible among participants and could be described succinctly. Participants were presented with one scenario along with the harm and remedy questions (described below), and completed this sequence four times, once for each harassment scenario. The harassment scenario prompt asked participants to ``Imagine a person has:'' and then presented each of the experiences below. 
\begin{itemize} \item spread malicious rumors about you on social media \item taken sexual photos of you without your permission and shared them on social media \item insulted or disrespected you on social media \item created fake accounts and sent you malicious comments through direct messages on social media \end{itemize} \textit{Harm measures}. We developed four measures of harm to ask about with each harassment scenario. We again prioritized types of harmful experiences that would be relevant to participants globally. Drawing on our literature review on harms in other disciplines (e.g. medicine) and more nascent discussions of technological harms (e.g. privacy harms \cite{citron2021privacy}), we chose to prioritize three prominent categories of harm used in scholarly literature and by the World Health Organization---psychological, physical, and sexual harm. We then added a fourth category---reputational harm---because harm to family reputation is a prominent concern in many cultures and these concerns may be exacerbated on social media. We prioritized question wording that could be translated and understood across languages. For example, our testing revealed that the concept of ``physical harm'' was confusing to participants when translated, so we iterated on wording until we landed on ``personal safety.'' The final wording we used was: \begin{itemize} \item Would you be concerned for your psychological wellbeing? \item Would you be concerned for your personal safety? \item Would you be concerned for your family reputation? \item Would you consider this sexual harassment against you? \end{itemize} Perceived harm options were presented on 5-point scales of ``Not at all concerned'' (1) to ``Extremely concerned'' (5) for the first three questions and ``Definitely not'' (1) to ``Definitely'' (5) for the last question. 
We chose these response stems to avoid Agree/Disagree options, which may promote acquiescence bias \cite{saris2010comparing}, and because these could be translated consistently across languages. \textit{Harassment remedies}. Current harassment remedies prioritize content removal and user bans after a policy violation. However, scholars are increasingly arguing that a wider range of remedies is needed for addressing widespread harms. Goldman proposes that expanded remedies can improve the efficacy of content moderation, promote free expression, promote competition among Internet services, and improve Internet services’ community-building functions \cite{goldman2021content}. Goldman's taxonomy of remedies is categorized by content regulation, account regulation, visibility reductions, monetary remedies, and ``other.'' Schoenebeck et al. \cite{schoenebeck2020drawing} have also proposed that expanding remedies can create more appropriate and contextualized justice systems online. They see content removal and user bans as a form of criminal legal moderation, where harmful behavior is removed from the community, and propose adding complementary justice frameworks. For example, restorative justice suggests alternative remedies like apologies, education, or mediation. Building on this work, we developed a set of proposed remedies and, for each harassment scenario, asked participants, ``How desirable would you find the following responses?'' with response options on a 5-point scale of ``Not at all desirable for me (1)'' to ``Extremely desirable for me (5).'' The seven remedies we displayed were chosen to reflect a diversity of types of remedies while keeping the total number relatively low to reduce participant fatigue. We also asked one free response question: ``What do you think should be done to address the problem of harassment on social media?'' \begin{itemize} \item removing the content from the site. \item labeling the content as a violation of the site’s rules. 
\item banning the person from the site. \item paying you money. \item requiring a public apology from the person. \item revealing the person’s real name and photograph publicly on the site. \item giving a negative rating to the person. \end{itemize} \textit{Demographics}. The final section contained social media use, values, and demographic questions. The values and demographic questions were derived from the World Values Survey (WVS) \cite{inglehart2014world}, a long-standing cross-country survey of values. This paper focuses on six measures from the WVS. \begin{itemize} \item Generally speaking, would you say that most people can be trusted or that you need to be very careful in dealing with people? \item How much confidence do you have in police? \item How much confidence do you have in the courts? \item How secure do you feel these days in your neighborhood? \item What is your gender? \item Have you had any children? \end{itemize} The response options ranged from ``None at all'' (1) to ``A great deal'' (4) for police and courts and from ``Not at all secure'' (1) to ``Very secure'' (4) for neighborhood. We omitted the police and courts questions in Saudi Arabia. For trust, options were ``Most people can be trusted'' (1) and ``Need to be very careful'' (2). For gender, the response options were ``Male'', ``Female'', ``Prefer not to disclose'', and ``Prefer to self-describe.'' We chose not to include non-binary or transgender options because participants in some countries cannot safely answer those questions, though participants could choose to write them in. We recruited participants from 14 countries (see Table \ref{table:participant-demographics}): 13 countries plus the Caribbean countries (Antigua and Barbuda, Barbados, Dominica, Grenada, Jamaica, Montserrat, St. Kitts and Nevis, St. Lucia, and St. Vincent). 
\textcolor{black}{We decided to analyze the Caribbean countries together because of the small sample sizes and their relative similarities, while recognizing that each country has its own economics, culture and politics.} This study was exempted from review by our institution’s Institutional Review Board. Participants completed a consent form in the language of the survey. Participants were recruited via the survey company Cint in most countries, Prolific in the U.S., and manually via the research team in the Caribbean countries and Mongolia. Participants were compensated based on exchange rates and pilot tests of time taken in each country. \begin{table} \caption{Participant demographics} \label{table:participant-demographics} \begin{tabular}{ l c c } \toprule Country & Language & Num Participants\\ \midrule Austria & German & 251 \\ Cameroon & English & 263 \\ Caribbean & English & 254 \\ China & Mandarin & 283 \\ Colombia & Spanish (Colombian) & 296 \\ India & Hindi/English & 277 \\ South Korea & Korean & 252 \\ Malaysia & Malay & 298 \\ Mexico & Spanish (Mexican) & 306 \\ Mongolia & Mongolian & 367 \\ Pakistan & Urdu & 302 \\ Russia & Russian & 282 \\ Saudi Arabia & Arabic & 258 \\ USA & English & 304 \\ \textbf{Total} & & \textbf{3993} \end{tabular} \end{table} \subsection{Participant Demographics} The gender ratio between men and women participants was similar across countries \textcolor{black}{(ranging from 50\% women and 50\% men in China to 43\% women and 57\% men in India)}, except for the Caribbean countries \textcolor{black}{(women: 69\%, men: 27\%) and Mongolia (women: 59\%, men: 41\%)} (see details about gender in \cite{im2022women}). The median age was typically in the 30s; Mongolia was lowest at 21 while South Korea and the United States were 41.5 and 44, respectively. Participants skew young but roughly reflect each country's population, e.g. 
Mongolia’s median age is 28.2 years while South Korea and U.S. medians are 43.7 and 38.3, respectively, according to United Nations estimates \cite{united20192019}. Participants’ self-reported income also varied across countries, with participants in Austria reporting higher incomes and participants in Caribbean countries reporting lower incomes. More than half of the participants had education equivalent to a bachelor's degree in eight countries (Cameroon, China, Colombia, India, Malaysia, Russia, Saudi Arabia, United States); the other countries did not. Participants placed their political views as more ``left'' than ``right.'' \subsection{Data analysis} We discarded low-quality responses based on duration (completed too quickly) and data quality (too many skipped questions). Table \ref{table:participant-demographics} shows the final number of participants per country after data cleaning. For the qualitative analysis, we separately discarded responses that were low quality (empty fields, meaningless text); the number of participants was slightly higher overall (N=4127) since some participants completed that section but did not finish the subsequent quantitative portions of the survey. We analyzed data using R software. We used group means to describe perceived harms and preferred remedies. Levene's tests to measure variance were significant for both harm and remedy analyses, indicating that the homogeneity of variance assumption is violated. Thus, we used Welch one-way tests, which do not assume equal variances, and posthoc pairwise t-tests, which we deemed appropriate given our sufficiently large sample size \cite{fagerland2012t}. We used the Benjamini–Hochberg (BH) procedure to correct for multiple comparisons \cite{bretz2016multiple}. 
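The comparison pipeline described above (Welch's unequal-variances test followed by Benjamini-Hochberg correction of the pairwise p-values) can be sketched in pure Python. This is an illustrative reimplementation, not the authors' R code; it approximates the t distribution with a normal CDF, which is reasonable at this study's sample sizes.

```python
from math import sqrt
from statistics import NormalDist, mean, variance

def welch_t(a, b):
    """Welch's t statistic, Welch-Satterthwaite df, and approximate p-value."""
    m1, m2 = mean(a), mean(b)
    v1, v2 = variance(a), variance(b)       # sample variances (ddof=1)
    n1, n2 = len(a), len(b)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    # For large samples the t distribution is close to normal, so we
    # approximate the two-sided p-value with a normal CDF.
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, df, p

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, returned in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m - 1, -1, -1):       # walk from the largest p downward
        i = order[rank]
        running_min = min(running_min, pvals[i] * m / (rank + 1))
        adjusted[i] = running_min
    return adjusted
```

In practice one would compute `welch_t` for every pair of countries (or harassment types) and pass the resulting list of p-values through `bh_adjust` before judging significance.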
We also ran linear regressions with harassment - harm and harassment - remedy pairings as the dependent variables and demographics and country as the independent variables (4 harassment scenarios x 4 harm types = 16 harm models; 4 harassment scenarios x 7 remedies = 28 remedy models). We used adjusted R-squared to identify demographic variables that were more likely to explain model variance. Welch test and posthoc tests for harm (16 harassment-harm pairings) and remedy (28 harassment - remedy pairings) comparisons are available in the Appendix. Regression outputs and confidence intervals for demographic predictors are also available in the Appendix. We analyzed the qualitative responses to the free-response question using an iterative, inductive process. Our approach was to familiarize ourselves with the data, develop a codebook, iteratively refine the codebook, code the data, then revisit the data to make sense of themes from the coding process. To do this, four members of the research team first read through a sample of responses across countries and then co-developed a draft codebook. Three members of the team then coded a sample of responses and calculated interrater reliability (IRR) for each code using Cohen's Kappa. Across the 26 codes tested, Kappa values ranged from -0.1 to 1 with a median of .35. We used the IRR values as well as our manual review of differences to refine the codebook. We removed codes that coders did not interpret consistently, generally those with agreement below about .4, as well as those with low prevalence in the data. We revised the remaining codes, especially those that had lower agreement, and discussed them again. The final codebook contained 21 codes (see Appendix) that focused on moderation practices, user responsibility, government involvement, and other areas of interest. 
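Cohen's Kappa, used above for interrater reliability, compares the coders' observed agreement to the agreement expected by chance from each coder's marginal label frequencies. A minimal sketch (illustrative; the authors' actual tooling is not specified):

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two coders' categorical labels on the same items.

    Undefined (division by zero) when expected agreement is exactly 1.
    """
    assert len(coder1) == len(coder2)
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement under independent coding with each coder's marginals.
    c1, c2 = Counter(coder1), Counter(coder2)
    expected = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (observed - expected) / (1 - expected)
```

For codebook work like that described above, each code would typically be scored as a binary label (applied / not applied) per response, with kappa computed code by code.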
\subsection{Limitations and Mitigation} Cross-country surveys are known to have a range of challenges that are difficult to overcome completely, but they remain useful, even indispensable, if designed and interpreted thoughtfully and cautiously~\cite{kaminska2017survey,kish1994multipopulation,smith2011opportunities}. In our case, the key issues have to do with language, sampling methodologies, and response biases that might have differed across our participants. Language differences were addressed as described above, through a process of careful translation, validation through back-translation, and survey piloting, but topics like non-consensual image sharing are inevitably shaped by the language they are discussed in and there may be differences in interpretation we did not capture. Sampling methodologies within countries were as consistent as we could make them, but a number of known differences should be mentioned: First, we used three different mechanisms for recruiting -- a market research firm (Cint) for 11 countries; a research survey firm (Prolific) for the United States; and our own outreach for the Caribbean and Mongolia. These mechanisms differ in the size of their pool of participants, as well as their baseline ability to draw a representative sample. Some differences were built into the recruitment process; for example, we requested a diverse age range of participants explicitly with Cint and Prolific, which should have yielded more older adults. In contrast, our researcher recruitment method for the Caribbean and Mongolia simply sought a range of participants through word of mouth, but did not specifically recruit or screen for older adults. Second, while we sought representative samples of the national/regional population in all cases, we know that we came up short. 
For example, while online surveys are increasingly able to achieve good representation in better-educated countries with high internet penetration, they are known to be skewed toward affluent groups in lower-income, less-connected contexts~\cite{mohorko2013internet,tijdens2016web}. \textcolor{black}{Oversampling from groups who are active online may be more tolerable for a study of online harassment, but it still overlooks important experiences from those who may be online but less likely to participate in a survey.} Third, differences in local culture and current events are known to cause a range of response biases across countries. Subjective questions about perception of harm, for example, might depend on a country's average stoicism; questions about ``trust in courts'' might be affected by the temporary effects of high-profile scandals. The issues above are common to cross-country survey research, and our mitigation strategies are consistent with the survey methodology literature~\cite{kaminska2017survey,kish1994multipopulation}. To provide some assurance of our data's validity, we benchmarked against the World Values Survey, on which some of our demographic and social-issues questions were based. We compared responses from our participants to responses from the WVS for countries that had WVS data (China, Colombia, partial India, South Korea, Malaysia, Mexico, Pakistan, Russia, United States). We used the more recent Wave 7 (2017-20) where data was available, with Wave 6 (2010-14) as a back-up. We expected that our responses should correlate somewhat with the WVS, even though there were substantial differences, including that our sample was recruited via online panels with questions optimized for mobile devices whereas the WVS sample was recruited door-to-door with questions and answer choices administered orally. Sample means for our data and the WVS for similar questions are presented in plots in the Appendix. 
In countries where corresponding data is available, we find that the means in our data about trust -- in police, or in courts -- align with WVS results. We also find the anticipated biases with respect to online surveys and socio-economic status. In particular, our participants reported better health and more appreciation for gender equality than WVS participants. Still, because of the above, we present our results with some caution, especially for between-country comparisons; specific pair-wise comparisons between countries should be treated with particular care. We include specific comparisons primarily in the Appendix for transparency; we focus on patterns in the Results which we expect to be more reliable, especially patterns within countries and holistic trends across the entire dataset. In the following sections, we strive to be explicit about how our findings can be interpreted. \section{Results} Results are organized into two sections: perceptions of harm associated with online harassment and preferences for remedies associated with online harassment. Each section follows the same structure: first we look at which harassment types are perceived as most harmful and which remedy types are most preferred, respectively, then we examine demographic predictors of perceptions of harm and preferences for remedies, respectively. \subsection{Perceptions of Harm Associated with Online Harassment} First, we differentiate between the four types of harassment. Figure \ref{fig:harm_plot} shows perceptions of overall harm by harassment type. One-way Welch tests showed that means of perceptions of harm were significantly different, $F$(3, 35313) = 3186.4, $p$ < 0.001, with sexual photos being the highest in harm (M=4.20, SD=1.15), followed by spreading rumors (M=3.42; SD=1.30), malicious messages (M=3.20; SD=1.35), and insults or disrespect (M=2.93; SD=1.36) (see Figure \ref{fig:harm_plot}). 
Plots and posthoc tests for comparisons by type of harassment by country are available in the Appendix. To display an overall measure of perceived harms associated with online harassment by country, we aggregated each of the four harm measures together -- sexual harassment, psychological harm, physical safety, and family reputation -- for a combined measure of overall harm. Results suggest that participants in Colombia, India, and Malaysia rated perceived harm highest, on average, while participants in the United States, Russia, and Austria perceived it the lowest. Means are presented here and shown visually in Figure \ref{fig:harm_region}: Colombia (M=3.98; SD=1.18); India (M=3.86; SD=1.31); Malaysia (M=3.79; SD=1.21); Korea (M=3.67; SD=1.22); China (M=3.59; SD=1.19); Mongolia (M=3.55; SD=1.29); Cameroon (M=3.50; SD=1.38); Caribbean (M=3.44; SD=1.43); Mexico (M=3.38; SD=1.38); Pakistan (M=3.36; SD=1.36); Saudi Arabia (M=3.34; SD=1.35); Austria (M=2.99; SD=1.42); Russia (M=2.80; SD=1.43); United States (M=2.79; SD=1.45). \begin{figure} \centering \includegraphics[width=.7\linewidth]{harmplot4.jpg} \caption{Perceptions of harm by harassment type} \label{fig:harm_plot} \Description{Plot with error bars from lowest harm to highest: insulted or disrespected, malicious messages, spreading rumors, sexual photos.} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{harmcountry3.png} \caption{Perceptions of harm by country} \label{fig:harm_region} \Description{Plot with error bars from lowest harm to highest: United States, Russia, Austria, Saudi Arabia, Pakistan, Mexico, Caribbean, Cameroon, Mongolia, China, Korea, Malaysia, India, Colombia.} \end{figure} Most of the ratings by country were significantly different from one another (one-way Welch tests and posthoc tests are reported in the Appendix), though we remind readers that these differences should be interpreted with caution. 
In general, the wealthier countries per capita perceive lower harm, but beyond that the key takeaway is that there is substantial variance which is unlikely to be explained by one or even a few differences across any of those countries. \subsubsection{Predictors of Perceptions of Harm} Here we home in on more granular differences across harassment types and harm types and how they vary by country and other demographic data. Note that responses from Saudi Arabia participants are excluded from regressions because they did not complete questions about confidence in courts or police. The distribution of R-squared values for the 16 harassment - harm pairings is shown in Figure \ref{fig:harm_ridgeline} (ranging from close to 0 to 18\% variance). Country was the most predictive of perception of harm, though with variance across harassment and harm pairings as indicated by the multiple peaks in Figure \ref{fig:harm_ridgeline}. Gender was next most predictive, followed by security in neighborhood, number of children, trust of people, trust in courts, and trust in police. We also ran exploratory factor analyses to look for underlying constructs across measured variables. When all variables we measured were in the analysis, perceptions of harm and preferred remedies loaded into constructs, as expected, but demographic and value variables did not. Analyses with only the demographic and values variables suggest some trends but they were not substantial predictors of variance (e.g. trust and courts loaded together; marriage and age inversely loaded together). We show some factor analyses results in the Appendix but do not focus on them further. 
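Adjusted R-squared, used above to rank demographic predictors across models, penalizes the ordinary R-squared for the number of predictors in each regression. A minimal sketch of the formula (illustrative; in the study `y_hat` would come from each fitted harassment - harm model):

```python
def adjusted_r_squared(y, y_hat, n_predictors):
    """Adjusted R^2: R^2 penalized for the number of model predictors."""
    n = len(y)
    mean_y = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # residual sum of squares
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)              # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)
```

Unlike raw R-squared, this quantity can only increase when an added predictor explains more variance than chance would, which makes it a fairer basis for comparing models with different predictor sets.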
\begin{figure}[ht] \centering \includegraphics[width=1.05\linewidth]{harm_ridgeline2.png} \caption{Adjusted R squared of demographic variables for predicting harm across 16 harassment scenario x harm types.} \label{fig:harm_ridgeline} \Description{Ridgeline plot (looks like a wave with one or a few peaks) showing which variables predict harm from highest to lowest: country, gender, secure, children, trust, courts, police.} \end{figure} We ran regression analyses for the 16 harassment type - harm pairings using country, gender, security in neighborhood, number of children, trust in other people, trust in courts, and trust in police as independent variables. We used the U.S. as the reference choice for country and men as the reference for gender. Complete results with confidence intervals are available in the Appendix. To communicate patterns across models, we present a heatmap (see Figure \ref{figure:harm_heatmap}) of regression coefficients with harassment type - harm pairings on the x-axis and the predictors in Figure \ref{fig:harm_ridgeline} on the y-axis. We also plotted participant responses to the courts, police, security, and trust questions with WVS ratings to benchmark that our participants' attitudes reflect those of a broader population; those plots are in the Appendix. \begin{figure*}[ht] \centering \includegraphics[width=0.98\linewidth]{harm_heatmap_8.jpg} \caption{Heatmap of regression coefficients of harassment types and harm pairings by country and demographics. Darker blue is positive coefficient (i.e. higher harm); darker gold is negative coefficient.} \Description{Heatmap showing darker blue shades for countries, especially for the insult scenario. One exception is the photos scenario and sexual harassment which has gold (i.e. 
negative) coefficients.} \label{figure:harm_heatmap} \end{figure*} Results of the seven predictors in the regression models are summarized here: \textit{Country}: Participants in most countries perceive higher harm for most pairings than the U.S., with the exception of \textcolor{black}{the sexual photos and sexual harm pairing} where some countries perceive lower harm than the U.S. \textit{Gender}: Women perceive greater harm than men for all 16 harassment - harm pairings. \textit{Secure}: Participants who were more likely to give low ratings to the question ``How secure do you feel these days in your neighborhood?'' were more likely to perceive higher harm associated with online harassment for 8 of the harassment - harm pairings; however, security in neighborhood is negatively correlated with ratings for the sexual photos - sexual harassment pairing. \textit{Children}: Having more children is a predictor of greater perceptions of harm for 9 of the 16 pairings, except for the insulted or disrespected - sexual harassment pairing which is negatively correlated. \textit{Trust}: Participants who were more likely to be low in trust of other people were more likely to perceive higher harm associated with online harassment for 11 of the 16 harassment - harm pairings. The relationship was stronger for the sexual photos and spreading rumors scenarios, whereas there were no relationships for the malicious harassment scenario. \textit{Courts}: Higher trust in courts is correlated with increases in perceptions of harm for 14 of the 16 pairings. The two exceptions are the spreading rumors - sexual harassment and sexual photos - sexual harassment pairings. \textit{Police}: Trust in police is correlated with increases in perceptions of harm for 4 of the 16 pairings. We return to these results in the Discussion. 
\subsection{Preferences for Remedies Associated with Online Harassment} The prior section presented perceptions of harm; this section presents preferences for remedies. Specifically, we report respondents' perceived desirability of the remedies to address harassment-related harms. First, we differentiate between the remedies themselves. One-way Welch tests showed that means of preferences for remedies were significantly different, $F$(6, 49593) = 1130.9, $p$ < 0.001 (see Figure \ref{fig:remedy_plot}). Removing content and banning offenders are rated highest, followed by labeling, then apologies and rating. Revealing identities and payment are \textcolor{black}{rated} lowest. 
Posthoc comparisons showed that all pairings were significantly different from each other except for apology and rating: removing (M=4.18; SD=1.12); banning (M=4.07; SD=1.17); labeling (M=4.00; SD=1.19); apology (M=3.72; SD=1.34); rating (M=3.72; SD=1.32); revealing (M=3.56; SD=1.39); paying (M=3.16; SD=1.46). To display overall preferences for remedies associated with online harassment by country, we aggregate the seven remedy types together for a combined measure of overall remedies. Results show that Colombia, Russia, and Saudi Arabia were highest overall in support for remedies while Pakistan, Mongolia, and Cameroon were lowest. Means are again presented here and shown visually in Figure \ref{fig:remedy_region}: Colombia (M=4.07; SD=1.13); Russia (M=4.03; SD=1.25); Saudi Arabia (M=3.97; SD=1.27); Mexico (M=3.93; SD=1.25); Malaysia (M=3.89; SD=1.20); China (M=3.89; SD=1.07); Caribbean (M=3.86; SD=1.38); Austria (M=3.86; SD=1.36); Korea (M=3.72; SD=1.29); India (M=3.70; SD=1.39); United States (M=3.60; SD=1.50); Cameroon (M=3.57; SD=1.40); Mongolia (M=3.45; SD=1.40); Pakistan (M=3.40; SD=1.41). As with harms, most of the ratings by country were significantly different from one another (one-way Welch tests and posthoc tests are reported in the Appendix), though we again caution that these differences should be interpreted carefully. Regression outputs and confidence intervals for demographic predictors are also available in the Appendix. 
\begin{figure} \centering \includegraphics[width=.9\linewidth]{remedyplot3.png} \caption{Preferences for remedies by \textcolor{black}{remedy} type} \label{fig:remedy_plot} \Description{Plot with error bars from lowest remedy preference to highest: paying, revealing, rating, apology, labeling, banning, removing.} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{remedycountry3.png} \caption{Preferences for remedies by country} \label{fig:remedy_region} \Description{Plot with error bars from lowest remedy preference to highest: Pakistan, Mongolia, Cameroon, United States, India, Korea, Austria, Caribbean, China, Malaysia, Mexico, Saudi Arabia, Russia, Colombia.} \end{figure} \subsubsection{Predictors of Preferences for Remedies} We plotted R-squared values with the same variables used in the harm regression models (see Figure \ref{fig:remedy_ridgeline}). Results are broadly similar to the harm ridgeline plot, though there is less overall variance explained in the remedy plots (0-15\%). Country is most predictive of preference for remedy, followed by number of children, gender, security in neighborhood, trust in police, trust in courts, and trust in other people. We ran regression analyses for the 28 harassment type - remedy pairings (4 harassment types and 7 remedies). \begin{figure}[ht] \centering \includegraphics[width=1.05\linewidth]{remedy_ridgeline2.png} \caption{Adjusted R squared of demographic variables for remedy preferences across 28 harassment scenario x remedy types.} \Description{Ridgeline plot (looks like a wave with one or a few peaks) showing which variables predict remedy preferences from highest to lowest: country, children, gender, secure, police, courts, trust.} \label{fig:remedy_ridgeline} \end{figure} We visually show model results in a heatmap (see Figure \ref{fig:remedy_heatmap}). 
Results are summarized here: \textit{Country:} Most countries tend to prefer payment, apologies, revealing users, and rating users, but are less favorable towards removing content, labeling content, or banning users compared to the U.S. These patterns are observed for three of the four harassment types, with the exception of insults or disrespect, where countries tend to prefer all remedies compared to the U.S. \textit{Gender:} Women tend to prefer most remedies compared to men, except for payment, which they are less favorable towards for all four harassment types. \textit{Children:} Having more children is associated with higher preferences for most remedies. \textit{Secure:} Security in neighborhood is negatively associated with preference for remedies for 8 of the 28 pairings, primarily for removing content and labeling content. \textit{Trust:} Trust in other people is negatively associated with preferences for remedies for 19 of the 28 pairings. \textit{Courts:} Confidence in courts is associated with preference for the payment remedy for all harassment types but few other remedies. \textit{Police:} Confidence in police is not correlated with remedy preferences. \begin{figure*}[ht] \centering \includegraphics[width=0.98\linewidth]{remedy_heatmap.jpg} \caption{Heatmap of regression coefficients of harassment types and remedy pairings by country and demographics. Darker blue is positive coefficient (i.e. higher preference for remedy); darker gold is negative coefficient.} \Description{Heatmap showing preferences for remedies. The heatmap has small squares whose color reflects coefficient value for that variable. An overall visual pattern is that apology, revealing, and rating tend to be dark blue meaning that most countries prefer them to the U.S. 
The other remedies are gold or more mixed.} \label{fig:remedy_heatmap} \end{figure*} \subsection{Qualitative responses} We asked participants one free response question about how harassment on social media should be addressed. The most prevalent code related to the site being responsible for addressing the problem; nearly 50\% of responses referred to site responsibility, ranging from about 20\% to 60\% across countries. This was most prevalent in Malaysia, Cameroon, and the United States and least prevalent in Mongolia. Responses included content about setting policies, enforcing policy, supporting users, and protecting users. For example, a participant in Korea said: ``The social media company is primarily responsible for it, though the person who harassed others also has responsibility quite a lot.'' As part of this site responsibility, many participants described what they thought the site should do, such as: ``The social media site should give the offender a negative rating and ban them for a specific time period. This time period being months or years or indefinitely; as well as disallowing them from creating further accounts'' (Caribbean). Some participants described why they thought social media sites were responsible, such as this one from Pakistan who said: ``This problem should be solved only by the social media website as they have all the data of the user through which they can take action against it.'' The second most prevalent code, found in nearly 25\% of responses, referred to government involvement, including regulation, police, courts, arrest, prison, criminal behavior, and juries. References to government involvement were highest in China and Pakistan and lowest in Russia, Mexico, and Cameroon. Many participants who mentioned government responsibility for online harassment indicated it should be in collaboration with law enforcement. 
For example, one participant from Malaysia said: ``The responsible party (social media workers) need to take this issue seriously and take swift action to `ban' the perpetrator and bring this matter to court and the police so that the perpetrator gets a fair return. This is to avoid the trauma suffered by the victim and also reduce mental illness.'' Participants varied in their indication of whether the user or the platform should report the behavior to the government. One in India said ``First of all the person who has been harassed he should simply go to police station to report this incident then he should report the account and telling his friends to report it then he should mail to instagram some screenshots of that person's chat.'' Some participants focused only on government responsibility, such as one in Colombia saying: ``Having permission for the police to see all your material on the network and electronic devices.'' References to managing content (e.g. removing or filtering content) and account sanctions (e.g. warnings, suspensions, bans) showed up in about 20\% of responses. These were highest in the Caribbean and the United States and lowest in China and Mongolia. Sometimes posts only recommended one step, like content removal, but more often they mentioned a multi-step process for accountability. A participant in China recommended a user rating approach: ``in serious cases, he should be banned and blacklisted, and the stains of his behavior should be recorded on his personal file.'' Many participants proposed policies that required real identities linked to the account to deter harassment and allow punishment. One person in Austria said: ``Login to the networks only with the name known to the operator. 
Strict consequences, first mark the content, then also delete the profile and transfer the data to the public prosecutor's office.'' About 13\% of responses referred to user responsibility, in which the person who may be experiencing or targeted by harassment should handle it themselves. These responses suggested that people should ignore harassment, stop complaining or whining about it, deal with it, and understand people will disagree. These responses were highest in Colombia and Malaysia and lowest in Austria, Cameroon, and the Caribbean. For example, a participant in Colombia indicated that users should take steps to protect themselves: ``1. Being on social media is everyone's responsibility. 2. In social networks, you should limit yourself in admitting requests from strangers. 3. Remove and block malicious invitations.'' One in Malaysia indicated that people should work out the problems themselves: ``All parties must cast off feelings of hatred and envy towards others. To deal with this problem all parties need to be kind to each other and help each other.'' Another Malaysian participant was more explicit, saying: ``The responsible party is myself, if it happens to me, I will definitely block the profile of the person who is harassing me. The self is also responsible for not harassing others despite personal problems. It is best to complete it face to face.'' Some responses, about 7\%, referred to public awareness or public shaming as a response, which could be through formal media coverage or offender lists or through informal user behaviors. These were highest in the Caribbean and China. One participant in Mongolia said: ``This role belongs to the private organization that runs the first social platform and to the police of the country. 
Disclosure to the public of the crimes committed by the perpetrators and the number of convictions related to this issue, and the damage caused by the crime.'' About 6\% of responses directly addressed the offender's role in the harassment, indicating that they are responsible and should address the problem and change their behavior. This was most prevalent in Korea and Malaysia. About 6\% referred to restitution in some way, which could involve the offender paying a fine to the site or the victim or the victim receiving compensation from the site or offender. About 6\% referred to blocking other users as a remedy. Other codes showed up in around 5\% of responses, including verifying accounts (checking for bots, fake accounts), educating users about appropriate behaviors, and offender apologies for behavior. Apologies, when specified, were often supposed to be public rather than private, such as a participant from Cameroon's response: ``They should demand a public apology from the person, and if the person does not give the apology, the account should be banned.'' A person in China associated the public apology with reputational damage: ``Infringement of my right of reputation requires a public apology to compensate for the loss.'' In terms of account verification, some participants talked about real names or use of IPs. One from Colombia said: ``Social networks should block people's IPs and allow them to have a maximum of 2 accounts per IP, since most of the people who harass do not do it from their main accounts but rather hide inside other personalities that are not them, by Doing this would greatly reduce this type of bullying.'' \section{Discussion} Our findings coalesce into three broad themes about global perceptions of social media harassment harms and remedies: (1) Location has a large influence on perceptions. 
(2) The causes are complex -- no single factor, nor even a straightforward subset of factors, emerges as a dominant predictor of perceptions of harm. (3) One-size-fits-all approaches to governance will overlook substantial differences in experiences across countries. \subsection{Key Role of Local Cultural Context} Our results suggest that local cultural context plays the greatest role in determining people's perceptions of online harassment among the factors we measured. In our analysis, country emerged as the most predictive of perceptions of harm across harm types and also with respect to remedies. This is striking, especially when considering that country explained more of the variation in perceptions than gender. As is widely understood, women and girls bear a greatly disproportionate brunt of harassment in general~\cite{burkett2015sex, dobson2016sext, bates2017revenge,eckert2020doxxing, barrense2020non}, and though women in each country consistently perceived greater harm than the men in the same country, women's perception of harm globally depended even more on their country. Thus, for example, our data indicates that women in the United States perceive less harm from social media harassment (M=2.99, SD=1.44) than men in China (M=3.47, SD=1.2) or India (M=3.18, SD=1.33). Those are just three data points, and we do not claim that this particular set of comparisons is necessarily reliable, but it illustrates a broader point that we believe is robust from our data: \emph{some} countries' women, on average, perceive less harm under some social media harassment cases than \emph{other} countries' men, on average. 
It is unclear what it is about local cultures that has this impact (our findings suggest that there is unlikely to be a simple set of causes), and we also wish to avoid an unresolvable discussion about what exactly constitutes ``culture.'' Yet, it seems safe to conclude that a range of complex social factors that have some coherence at the national/regional level has a profound effect on how citizens of those countries perceive social media harms and remedies. These are also inevitably shaped by policies and regulations in those countries. For example, some of our Malaysian participants said that online harassment should be the responsibility of the ``MCMC.'' The Malaysian Communications and Multimedia Commission is responsible for monitoring online speech, including social media, though it has little power to remove content on platforms hosted outside of Malaysia. Though all countries we studied have some laws governing the extent of critique users can express towards their own governments, these laws vary in severity. For example, in 2015, the Malaysian government asked Facebook and YouTube to take down posts by blogger Alvin Tan which insulted Muslims \cite{bbc2015muslims}. More recently, the Indian government not only sanctioned individual users who critiqued Modi, but it sought to sanction Twitter for not taking down those posts -- Twitter has recently launched a lawsuit against the Indian government in response \cite{cnn2022twitter}. Though insults are lower in harm than other types of harassment, they are higher in some countries in our study, and they are the most prevalent type of harassment among participants in Google's 22-country survey, suggesting that they may have cumulative harmful effects for users \cite{thomas2021sok}. At the same time, it is important to remember that experiences and concerns within countries inevitably vary and span across boundaries. Our data indicates that reputational harm is lower in the U.S. 
and Austria and this may be true for the majority in our sample from those countries, but reputational harm can persist within and across boundaries. For instance, Arab women living in the U.S. may deal both with Arab and Western patriarchal structures and orientalism, thereby experiencing a form of intersectional discrimination that requires specific support measures and remedies \cite{al2016influence}. Similarly, refugees and undocumented migrants may be less likely to report online harassment for fear of repercussion to their status in the country \cite{guberek2018keeping}. Though a focus on country-level governance is important, additional work is required to protect and support people within countries who may experience marginalization, despite or because of, local governance. \subsection{No Simple Causal Factors for Harm Perception} The second broad conclusion of our study is that perceptions of harm about online harassment are complex; no simple mechanism, nor any small set of variables, easily explains relative perceptions among countries. Harm perceptions might, for example, reasonably be expected to correlate with how much people trust others, how safe they feel in their own neighborhoods, or how much they trust institutions like the police and the courts. Yet, our results find no such easy explanations: sense of neighborhood security correlated positively with greater perceptions of harm for some forms of harassment, but negatively for the nonconsensual sharing of sexual photos and sexual harassment pairing; number of children predicted greater harm for half of the harassment - harm pairings, but not the other half. Some correlations did emerge in our data, but it is not straightforward to interpret them. For example, trust in courts was associated with perceptions of harm in a majority of our countries. This pattern is surprising, and could indicate a desire to normalize online harassment as harmful to enable greater judicial oversight over those harms. 
Interestingly, trust in courts is mostly not correlated with the remedies we measured, \textit{except} for payment, which is negatively correlated. Lower trust in courts to procure compensation may be correlated with a higher reliance on platforms, but we would need additional data to confirm this interpretation. Somewhat easier to explain is that trust in other people was correlated with lower perception of harm in most cases. It may be that people who are low in trust in others assume online harassment will be severe and persistent. There was substantial variance in trust levels between countries, with the Caribbean lowest and China highest. This suggests that harms associated with online harassment may reflect offline relationships and communities. Our results show that there is little or no relationship between confidence in police and harm or remedies, which may indicate that people do not see online harassment as a problem that police can or should address. This interpretation also aligns with previous research which has highlighted how police are often an inadequate organization to deal with concerns around harassment and online safety, and can sometimes cause more harm \cite{sambasivan2019they}. Instead, experts have called for investments in human rights and civil society groups who are specifically trained to support people in the communities who experience harassment \cite{york2021silicon}. Such experts could also mediate between affected people and other institutions such as the police and legal institutions. An exploration of factors we did not consider may find simpler or more coherent causal explanations for perceptions of harm and remedies, but we conjecture that the complexity is systemic. Online harassment, though relatively easy to discuss as a single type of phenomenon, touches on many social, cultural, political, and institutional factors, and the interplay among them is correspondingly complex. 
A highly patriarchal honor culture that leads women to fear the least sensitive of public exposures might be partially countered by effective law enforcement that prioritizes those women's rights; deep concerns about one's children might be offset by a high level of societal trust; close-knit communities might on the one hand provide victims with healthy support, but they might also judge and impose harsh social sanctions. \subsection{One Size Does Not Fit All in Online Governance} The four types of harassment we studied all differed from each other in perceived harm, both in type of harm and severity of that harm. Non-consensual sharing of sexual photos was highest in harm, consistent with work on sexual harms that has focused on non-consensual sharing of sexual images \cite{citron2021privacy, goldberg2019nobody,dad2020most}. This work has advocated for legal protection and recourse for people who are victims of non-consensual image sharing and has brought attention to the devastating consequences it can have on victims' lives. Much of this transformative work in U.S. contexts focuses on sexual content like nude photos, which are now prohibited in some states in the U.S. (though there is no federal law) \cite{citron2019evaluating}. However, in many parts of the world there are consequences for sharing photos of women even if they do not contain nude content. Our findings show substantial variance in perceptions of reputational harm as well as physical harm between countries. India (medians of 4.09 and 4.01, respectively) and Colombia (4.02, 4.24) are highest in both of those categories whereas the U.S. is lowest (2.73, 2.69). Our results corroborate Microsoft's Digital Civility Index, which found high rates of incivility in Colombia, India, and Mexico (and the U.S. being relatively low), though Russia was also high which deviates from our results. 
Google's survey similarly shows Colombia, India, and also Mexico as highest in prevalence of hate, abuse, and harassment \cite{thomas2021sok}. While shame associated with reputation persists globally, it may be a particularly salient factor where cultures of honor are high \cite{rodriguez2008attack}. In qualitative studies conducted in South Asian countries, including India, Pakistan, and Bangladesh, participants linked reputational harm with personal content leakage and impersonation, including non-consensual creation and sharing of sexually explicit photos \cite{sambasivan2019they}. Because women in conservative countries like India are expected to represent part of what the family considers its ``honor,'' reputational harm impacts not just the individual's personal reputation but also their family's and community's reputation. As one South Asian activist described technology-facilitated sexual violence (quoted from \cite{maddocks2018non}): \textit{``A lot of times, there's an over-emphasis on sexually explicit photos. But in [this country], just the fact that somebody is photographed with another boy can lead to many problems, and we've seen honor killings emerging from that.''} In these cases, women are expected to represent part of what the family considers its ``honor'' \cite{sambasivan2019they} and protecting this honor becomes the role of the family, and especially men in the family, who seek to regulate behavior to preserve that honor. Unfortunately, when a person becomes a victim of online abuse, it becomes irrelevant whether she is guilty or not; what matters is other people's perception of her guilt. At an extreme, families will engage in honor killings of women to preserve the honor of the family \cite{QandeelBaloch, jinsook2021resurgence, goo2019}. 
When women experience any kind of abuse, they may need to bring men with them to file reports, and then they may be mocked by officials who further shame and punish them for the abuse they experienced \cite{sambasivan2019they}. In Malaysia, legal scholars raise concern about the inadequacy of law in addressing cyberstalking in both the National Cyber Security Policy and the Sustainable Development Goals \cite{rosli2021non}. Sexual harassment, sexual harm, and reputation are strongly linked, and the threat of reputational damage empowers abusers. Many European countries have taken proactive stances against online harassment, but the efficacy of their policies is not yet known. Unfortunately, any efforts to regulate content also risk threats to free expression, such as TikTok and WeChat's suppression of LGBTQ+ topics \cite{walker2020more}. Concerns about human rights and civil rights may be especially pronounced in countries where there is not sufficient mass media interest to protest them, such as the rape of a girl in India by a high-profile politician that did not attract attention because it was outside of major cities \cite{guha2021hear}. In Latin American contexts, there is similar evidence that societies that place a premium on family reputation are likely to be afflicted by higher rates of interpersonal harm \cite{osterman2011culture, dietrich2013culture}. For example, constitutional laws against domestic violence in Colombia decree that family relations are based on the equality of rights and duties for all members, and that violations are subject to imprisonment. Yet recent amendments have called for retribution against domestic violence to be levied \textit{only} when charges with more severe punishment do not apply. 
Human rights activists from the World Organisation Against Torture have claimed that such negligent regulations send the message that domestic violence, including harassment, is not as serious as other types \cite{omct2004violencecolombia, randall2015criminalizing}. Even with the existence of laws on domestic violence in countries like Colombia and Mexico, prevailing attitudes view harassment as a ``private'' matter, perhaps because of traditional norms that value family cohesion over personal autonomy. One speculation is that fears of reporting harassment because of familial backlash may explain why survey respondents from these countries do not find exposing their abusers online satisfying. \subsection{Recommendations for Global Platform Design and Regulation} Our recommendations for global platform design and regulation \textcolor{black}{build on work done by myriad civil society groups and follow from our own findings. In short, harms associated with online harassment are greater in non-U.S. countries, and platform governance should be more actively co-shaped by community leaders in those countries. } Above all, we discourage any idea that a \emph{single} set of platform standards, features, and regulations can apply across the entire world. While a default set of standards might be necessary, the ideal would be for platforms and regulations to be further customized to local context. \textcolor{black}{A reasonable start is for platforms to regulate at the country level, though governance should be sensitive to the blurriness of geopolitical and cultural boundaries.} Digital technology is highly customizable, and it would be possible to have platform settings differ by country. Similarly, regulation of social media, as well as policy for harm caused through online interaction, should also be set locally. To a great extent the latter already happens, as applicable policy tends to be set at a national level. 
It should also be the case that technology companies engage with local policymakers, without assuming that one-size-fits-all approaches are sufficient. According to the findings discussed above, local cultural context can play an important role in helping platforms define harassment and prioritize online speech and behavior that will likely have the most impact in a given local context. For example, posting non-consensual images, whether sexual or not, can have a more severe impact in countries where women's visibility and autonomy are contentious issues. Customizing definitions of harms would also align with the task of determining the effectiveness of a remedy. If certain behaviors are criminalized offline, that would likely have an impact on how seriously platforms should take online manifestations of such harassment, and how easy it would be for users in that locality to seek help from police or courts. Lastly, due to the great variance on how local laws are shaped and implemented, platforms can play a key role in determining the effectiveness of rules as applied to them and their users. The resulting observations about what laws are effective on the ground can help platforms both customize their own policies, and engage with stakeholders more productively. Platform features, settings, and regulation ought to be determined by multistakeholder discussions with representation from local government, local civil society, researchers, and platform creators. Input from entities familiar with the local laws, customs, and values is essential, \textcolor{black}{as others have recommended (e.g.~\cite{cammaerts2020digital,york2021silicon})}. As our study also finds, the specifics of how users respond to online harassment are localized and not given to easily generalized explanation. Of course, such discussions must be designed well. 
For example, we recommend that platform creators -- who have international scope yet often tend toward Western, educated, industrial, rich, and democratic (WEIRD) sensibilities~\cite{henrich2010weirdest,linxen2021weird} -- take a back seat and turn to local community leaders to lead these discussions. Platform creators have the power to determine final features anyway; additional exertion of power in such discussions will suppress local voices. Tech companies must also be willing to adopt the resulting recommendations~\cite{powell2013argument}. Beyond platform and regulatory customization within countries, there should be transnational bodies that consider governance at a global level, and which might also serve to mediate issues that bring countries into contention. Technology companies already sponsor such bodies -- for instance, Meta has a Stakeholder Engagement Team that includes policymakers, NGOs, academics, and outside experts who support the company in developing Facebook community standards and Instagram community guidelines \cite{meta2022stakeholder}. Even better would be for such bodies to have more independence, set up for autonomous governance via external organizations. We recognize that customization by country raises new challenges, such as the question of whose policy should take precedence when cross-country interaction occurs on a platform. Or, how platforms should handle users who travel across countries (or claim to do so). Or the substantial problem, though not the focus of this paper, of how to address authoritarian regimes that are not aligned with human rights \cite{york2021silicon}. It will take work, and diplomacy, to resolve these issues, but if the aim is to prevent or mitigate harassment's harms in a locally appropriate way, the effort cannot be avoided. As to what kinds of customization such bodies might suggest, our study gestures toward features and regulations that might differ from place to place. 
For example, there appears to be wide variation across countries in terms of what is considered invasive disclosure. Russians generally care much less than Pakistanis whether photographs of an unmarried/unrelated man and woman are posted publicly. Thus, in some contexts, the default setting might require the explicit consent of all tagged, commented, or (automatically) recognized parties for a photo or comment to be posted. Another possibility is to adjust the ease with which a request to take down content is granted. The possibilities span a range from (A) automatically taking down any content as requested by \emph{anyone} to (Z) refusing to take down any content regardless of the volume or validity of requests. In between, there is a rich range of possibilities that could vary based on type of content and on country. With respect to how platforms manage content-removal requests, they might establish teams drawn from each geographic context, so that decision-makers address requests from cultures they are most familiar with (and based on standards recommended by the aforementioned local bodies). \section{Conclusion} We studied perceptions of harm and preferences for remedies associated with online harassment in 14 countries around the world. Results show that all countries perceive greater harm with online harassment compared to the U.S. and that non-consensual sharing of sexual photos is highest in harm, while insults and disrespect is lowest. In terms of remedies, participants prefer removing content and banning users compared to revealing identities and payment, though they are more positive than not about all remedies we studied. Country is the biggest predictor of ratings, with people in non-U.S. and lower income countries perceiving higher harm associated with online harassment in most cases. 
Most countries prefer payment, apologies, revealing identities, and rating users compared to the U.S., but are less favorable towards removing content, banning users, and labeling content. One exception to these trends is non-consensual sharing of sexual photos, which the U.S. rates more highly as sexual harassment than other countries. We discuss the importance of local contexts in governing online harassment, and emphasize that experiences cannot be easily disentangled or explained by a single factor. \begin{acks} This material is based upon work supported by the National Science Foundation under Grants \#1763297 and \#1552503 and by a gift from Instagram. We thank members of the Social Media Research Lab for their feedback at various stages of the project. We thank Anandita Aggarwal, Ting-Wei Chang, Chao-Yuan Cheng, Yoojin Choi, Banesa Hernandez, Kseniya Husak, Jessica Jamaica, Wafa Khan, and Nurfarihah Mirza Mustaheren for their contributions to this project. We thank Michaelanne Thomas, David Nemer, and Katy Pearce for early conversations about these ideas. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Although existing observations of the large scale structure of the Universe overwhelmingly favour cold dark matter cosmologies with hierarchical structure formation, the paradigm faces challenges both from the existence of luminous passive galaxies at high redshift, and the abundance of low-mass galaxies in the local universe \citep[e.g.][]{baugh06,Balogh08}. Galaxies dominate the visible universe and any cosmological model is expected to reproduce the observed global properties of galaxies, at least statistically, in the first instance. A significant fraction of the evolutionary life of many galaxies is spent in the environment of small systems (i.e. groups), where close interactions and mergers of galaxies occur with higher efficiency than in massive haloes such as galaxy clusters \citep[e.g.][]{miles04}. The observable properties of the baryonic content of a group, which consists of the constituent galaxies and the inter-galactic medium (IGM), should be linked to mass assembly of the host group and its subsequent evolution. Among such observable properties, the ``magnitude gap'', i.e., the difference in the magnitudes of the two brightest galaxies, has been widely used as an optical parameter related to mass assembly of groups and clusters. Several studies show that a system of galaxies, where most of the mass has been assembled very early, develops a larger magnitude gap compared to systems that form later. The idea is supported in observational samples \citep[e.g.][]{Ponman94,habib04,Habib07}, theoretical studies \citep[e.g.][]{Milos06,Van07} and in detailed analysis of N-body numerical simulations \citep{Barnes89,donghia05,Dariush07}. These studies also predict that such early-formed galaxy groups or clusters should be relaxed and relatively more isolated systems in comparison to their later-formed counterparts. 
\citet{Jones03} defined such early-formed systems (also known as {\it fossils}) to have a minimum X-ray luminosity of $L_{\rm X,bol} \geq 0.25 \times 10^{42} h^{-2}$erg s$^{-1}$, and a large magnitude gap in the $R$-band between their first two brightest galaxies, i.e. $\Delta m_{\rm 12} \geq 2.0$, to distinguish them from late-formed groups and clusters. Recent research based on the Sloan Digital Sky Survey has either used only the optical criterion \citep{Santos07} or both optical and X-ray criteria \citep{Eigen09,Voevo09,labarb09} to identify fossils. If the optical definition alone is employed, the chance of identifying truly early-formed systems diminishes, since a large fraction of the systems detected might be collapsing for the first time, and so would not be X-ray luminous. From numerical simulations, \citet{vbb08} found that $\Delta m_{\rm 12}$ may not be a good indicator for identifying early-formed groups, since the condition would no longer be fulfilled when a galaxy of intermediate magnitude fell into the group. That study was based on simulations of dark matter particles only, however, and does not reveal how frequently such a situation would arise. Other, potentially more robust, magnitude gap criteria have been considered in the literature. For example, \citet{Sales07} find that the difference in magnitude between the first and 10$^{th}$ brightest galaxies in three fossil groups spans a range of $\sim$3--5, in agreement with their results from the analysis of the Millennium data together with the semi-analytic catalogue of \citet{Croton06}. The overall number of observed fossil galaxy groups is small, making it difficult for observed systems to be statistically compared to simulated systems.
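The two-part definition above amounts to a simple joint cut. The following is a minimal illustrative sketch, not code from this work; the function name and the convention of expressing $L_{\rm X,bol}$ in units of $10^{42}\,h^{-2}$erg s$^{-1}$ are our own assumptions:

```python
def is_fossil(lx_bol_1e42, delta_m12):
    """Return True if a group satisfies both conventional fossil criteria
    of Jones et al. (2003)."""
    x_ray_bright = lx_bol_1e42 >= 0.25   # L_X,bol >= 0.25e42 h^-2 erg/s
    large_gap = delta_m12 >= 2.0         # Delta m_12 >= 2 mag within 0.5 R_200
    return x_ray_bright and large_gap

print(is_fossil(0.4, 2.3))   # True: X-ray bright with a 2.3-mag gap
print(is_fossil(0.4, 1.5))   # False: the magnitude gap is too small
```

A group must pass both cuts; dropping the X-ray condition is exactly the optical-only selection discussed above.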
In spite of their low space density, fossils have been used to test models of cosmological evolution \citep{Milos06,Habib07,vbb08,dm08}, since the criteria for observationally identifying such systems are simple \citep{Jones03,habib06}, and it is generally assumed that they are the archetypal relaxed systems, consisting of a group-scale X-ray halo, the optical image being dominated by a giant elliptical galaxy at the core \citep{Ponman94}. Indeed, if they are relaxed early-formed systems, fossil groups can be the ideal systems in which to study mechanisms of feedback, and the interaction of the central AGN with the IGM of the group, since the effect of the AGN would not be complicated by the effect of recent mergers \citep{jetha08,jetha09}. The lack of recent merging activity would also imply the absence of current or recent star formation in early-type galaxies belonging to fossil groups \citep[e.g.][]{nolan07}, and the relative dearth of red star-forming galaxies compared to similar elliptical-dominated non-fossil groups and clusters \citep{sm09}. The use of cosmological simulations in the study of the evolution of galaxy groups will have to employ a semi-analytic scheme for simulating galaxies, and the results will depend on the appropriate characterisation of the models that describe galaxy formation and evolution. Once the hierarchical buildup of dark matter haloes is computed from N-body simulations, galaxy formation is modelled by considering the rate at which gas can cool within these haloes. This involves assumptions for the rate of galaxy merging (driven by dynamical friction) and the rate and efficiency of star formation and the associated feedback in individual galaxies \citep{Croton06,Bower06}.
In a previous paper \citep{Dariush07}, we studied the formation of fossil groups in the Millennium Simulations, and showed that the conventional definition of fossils (namely a large magnitude gap between the two brightest galaxies within half a virial radius, and a lower limit to the X-ray luminosity of $L_{\rm X,bol} \geq 0.25 \times 10^{42} h^{-2}$erg s$^{-1}$) identifies haloes that, at the epoch when the Universe was half its current age, were $\approx$10--20\% more massive than the rest of the population of galaxy groups with the same present-day halo mass and X-ray luminosity. This clearly indicates an early formation epoch for fossils. In addition, it was shown that the conventional fossil selection criteria filter out spurious systems, since there is only a very small probability for a large magnitude gap to occur in a halo at random. The fraction of late-formed systems that are spuriously identified as fossils was found to be $\approx$4--8\%, almost independent of halo mass \citep{Dariush07,Smith09}. Another important outcome of the \citet{Dariush07} study was the consistency between the space density of fossils found in the simulations and that from observational samples. Although the results from this previous analysis of the Millennium simulations are in fair agreement with observation, we did not investigate the evolution of the magnitude gap in either fossil or control groups with redshift. Furthermore, the number of (fossil or control) groups used was small ($\sim$400), which did not allow us to fully explore the connection between the halo mass and magnitude gap in such systems. In this paper, we select early-formed galaxy groups from the Millennium simulations, purely on the basis of their halo mass evolution from the present time up to redshift $z \approx 1.0$, and, with the help of associated semi-analytic catalogues, study the evolution of the magnitude gap between their brightest galaxies.
Our aim is to (a) investigate how well the conventional optical selection criterion, namely the $\Delta m\ge 2$ gap between the two brightest galaxies, is able to identify early-formed galaxy groups, and (b) to find whether there is a better criterion to identify groups that have assembled most of their mass at an early epoch. In \S2, we describe the various simulation suites used in this work, and in \S3 the data we extract from them. In \S4, we study in detail the evolution with epoch of various measurable parameters for a large sample of early-formed ``fossil systems'' and two comparable samples of control systems, and compare these properties. In \S5, we examine the case for a revision of the criteria used to observationally identify fossils, in order to ensure a higher incidence of genuine early-formed systems. We summarise our conclusions in \S6. We adopt $H_0 = 100\, h$ km~s$^{-1}$ Mpc$^{-1}$ for the Hubble constant, with $h=0.73$. \section{Description of the Simulations} \subsection{The Millennium Simulation} The Millennium Run consists of a simulation, in a Universe consistent with the concordance $\Lambda$CDM cosmology, of 2160$^3$ particles of individual mass $8.6\times10^{8}h^{-1}$ M$_{\odot}$, within a co-moving periodic box of side 500$h^{-1}$ Mpc, employing a gravitational softening of 5$h^{-1}$ kpc, from redshift $z=127$ to the present day \citep{Springel05}. The basic setup is that of an inflationary Universe, dominated by dark matter particles, leading to a bottom-up hierarchy of structure formation, which involves the collapse and merger of small dense haloes at high redshifts into the large virialised systems, such as groups and clusters, observed today. The cosmological parameters used by the Millennium Simulation were $\Omega_\Lambda = 0.75$, $\Omega_M = 0.25$, $\Omega_b = 0.045$, $n = 1$, $\sigma_8 = 0.9$, and a Hubble parameter $h = 0.73$.
Dark matter haloes are found in this simulation down to a resolution limit of 20 particles, yielding a minimum halo mass of 1.72$\times 10^{10}h^{-1}$ M$_{\odot}$. Haloes in the simulation are found using a friends-of-friends (FOF) group finder, configured to extract haloes with overdensities of at least 200 relative to the critical density \citep{Springel05}. Within a FOF halo, substructures or subhaloes are identified using the SUBFIND algorithm developed by \citet{Springel01}, and the treatment of the orbital decay of satellites is described in the next section. During the Millennium Simulation, 64 time-slices of the locations and velocities of all the particles were stored, spread approximately logarithmically in time between $z=127$ and $z=0$. From these time-slices, merger trees were built by combining the tables of all haloes found at any given output epoch, thus enabling us to trace the growth of haloes and their subhaloes through time within the simulation. \subsection{Semi-analytic galaxy catalogues} \subsubsection{The Croton et al. semi-analytic catalogue} \label{SAMcroton} \citet{Croton06} simulated the growth of galaxies, and their central supermassive black holes, by self-consistently implementing semi-analytic models of galaxies on the dark matter haloes of the \citet{Springel05} simulation. Their semi-analytic catalogue contains 9 million galaxies at $z=0$ brighter than absolute magnitude $M_R\!-\!5 \log\, h = -16.6$, ``observed'' in $B$, $V$, $R$, $I$ and $K$ filters. The models focus on the growth of black holes and AGN as sources of feedback. The inclusion of AGN feedback in the semi-analytic model (allowing central cooling to be suppressed in massive haloes that undergo quasi-static cooling), and its good agreement with the observed galaxy luminosity function, the distribution of galaxy colours, and the clustering properties of galaxies, make this catalogue suitable for our study.
In this semi-analytic formulation, galaxies initially form within small dark matter haloes. Such a halo may fall into a larger halo as the simulation evolves. The ``galaxy'' within this halo then becomes a satellite galaxy within the main halo, and follows the track of its original dark matter halo (now a subhalo), until the mass of the subhalo drops below 1.72$\times 10^{10}h^{-1}$ M$_{\odot}$. This limit corresponds to the 20-particle limit for dark haloes in the original Millennium Simulation. At this point the galaxy is assumed to spiral into the centre of the halo, on some fraction of the dynamical friction timescale, where it merges with the central galaxy of the larger halo \citep{Croton06}. \subsubsection{The Bower et al. semi-analytic catalogue} \label{SAMbower} The \citet{Bower06} model also makes use of the Millennium Simulation, but utilises merger trees constructed with the algorithm described by \citet{Harker06}. The cooling of gas and the subsequent formation of galaxies and black holes is followed through the merging hierarchy of each tree utilising the {\sc Galform} semi-analytic model \citep{Cole00,Bower06,Malbon07}. At $z=0$ this results in 4,491,139 galaxies brighter than a limiting absolute magnitude of $M_K\!-\!5 \log\, h =-19.4$. In addition to feedback from supernovae, the \citet{Bower06} model accounts for energy input from AGN, resulting in a suppression of cooling in the hot atmospheres of massive haloes. The resulting galaxy population is in excellent agreement with the observed $z=0$ galaxy luminosity function in $B$ and $K$ bands, the $z=0$ colour distribution and also with the evolution of the galaxy stellar mass function from $z=0$ to $z\approx 5$. This model is therefore similarly well-suited to our study of fossil systems. 
If a halo in a merger tree has multiple progenitors, all but the most massive are considered to become subhaloes orbiting within the larger host halo, and any galaxies they contain therefore become satellite galaxies in that halo. Due to the limited resolution of the Millennium Simulation (which may cause dynamical friction timescales to be poorly estimated), the time between becoming a satellite and merging with the central galaxy of the halo is computed from the analytic dynamical friction timescale. Specifically, each new satellite is randomly assigned orbital parameters from a distribution measured from N-body simulations, and the appropriate dynamical friction timescale is computed following the approach of \citet{Cole00}, but multiplied by a factor of 1.5. This was found to produce the best fit to the luminosity function in \protect\citet{Bower06}, but is also in good agreement with the results of \protect\citet{Boylan07}, who compared the predictions of analytic dynamical friction timescales with those from idealised N-body simulations. The satellite is allowed to orbit for the period of time calculated above, after which it is merged with the central galaxy of the halo. If the host halo doubles in mass before a satellite can merge, the satellite orbit is assumed to be reset by the merging which led to that mass growth, and so a new set of orbital parameters is assigned and a new merging timescale computed. \subsection{The Millennium Gas Simulations} The Millennium Gas Simulations are a series of hydrodynamical models constructed within the same volume, and with the same initial perturbation amplitudes and phases, as the parent dark-matter-only Millennium Simulation \citep[see, e.g.,][]{hartley08}.
Three principal models have been completed in this work, each incorporating different baryonic physics: (i) the first does not follow the effects of radiative cooling and so significantly overpredicts the luminosities of group-scale objects, (ii) the second includes a simple preheating scheme that is tuned to match the observed X-ray properties of clusters at the present day, and (iii) the third includes a simple feedback model that matches the observed properties of clusters today. We have used the second of these models in this work, as we only utilise the hydrodynamical properties of the groups at $z=0$, where the observational and simulation results are well matched. The Millennium Gas Simulations consist of $5 \times 10^8$ particles of each species, resulting in a dark matter mass of $1.422 \times 10^{10}h^{-1}$ M$_\odot$ per particle and a gas mass of $3.12 \times 10^{9}h^{-1}$ M$_\odot$ per particle. The Millennium Simulation has roughly 20 times better mass resolution than this, and so some perturbation of the dark matter halo locations is to be expected. In practice the positions and masses of dark matter haloes above $10^{13}h^{-1}$ M$_\odot$ are recovered to within $50\,h^{-1}$kpc between the two volumes, allowing straightforward halo-halo matching in the large majority of cases. The Millennium gas simulations used exactly the same cosmological parameters as the dark matter simulations. With the inclusion of a gaseous component, additional care needs to be taken in choosing the gravitational softening length in order to avoid spurious heating \citep{Steinmetz97}. We use a comoving value of $25(1+z)h^{-1}$ kpc, roughly 4\% of the mean inter-particle separation \citep{Borgani}, until $z=3$, above which a maximum comoving value of $100h^{-1}$ kpc is adopted. A different output strategy is followed in the Millennium Gas Simulations, where the results are output uniformly in time with an interval roughly corresponding to the dynamical time of objects of interest.
This strategy results in 160 rather than 64 outputs, and places particular emphasis on the late stages of the simulation. \section{Datasets used in this work } \label{Data} We start with a catalogue of groups extracted by the friends-of-friends (FoF) algorithm employed in the Millennium dark matter runs. Hereafter a ``group'' or ``group halo'' refers to a group taken from this catalogue. In order to follow the evolution of groups from $z\!\sim$1 to the present epoch, we have to combine various sets of information from this FoF group catalogue and the associated semi-analytic catalogues of galaxies, as well as the gas simulations. We select all groups of $M(R_{\rm 200}) \!\geq\! 10^{13}\, h^{-1}\,$M$_{\odot}$ from the FoF group catalogue at $z=0.998$. The mass cut-off is intended to ensure that the progenitors of the present day galaxy groups are indeed groups at $z \sim$1 with at least four or five members (galaxies) above the magnitude cut of the catalogue. The evolution of each group was followed from $z=0.998$ to $z=0$ (at 23 discrete values, equally spaced in $\log z$) by matching the position of each halo to its descendants at later redshifts. The position of the central galaxy of each galaxy group, and the corresponding dark matter halo, were used to identify the member galaxies of each group. At each redshift and for each group halo, optical properties were extracted for its corresponding galaxies from the semi-analytic galaxy catalogue. The model galaxies become incomplete below a magnitude limit of M$_{K}-5\log(h) \sim -19.7$, due to the limited mass resolution of the Millennium simulation. We applied a $K$-band absolute magnitude cut-off of M$_K \lesssim -19$ on galaxies at all redshifts. During the matching process, for more than 99\% of the groups at each redshift, corresponding galaxies were found in the semi-analytic galaxy catalogue. The remaining groups were excluded from our final compiled list.
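The two selection cuts described above (the $z\sim1$ halo-mass threshold and the $K$-band magnitude cut) can be sketched as simple filters. This is an illustrative reconstruction under our own assumptions; the dictionary keys `m200` and `MK` are hypothetical, not actual Millennium catalogue column names:

```python
# Thresholds quoted in the text.
M_MIN = 1e13      # minimum M(R_200) at z = 0.998, in h^-1 Msun
MK_MAX = -19.0    # K-band cut, M_K - 5 log h (catalogue complete to ~ -19.7)

def select_groups(groups):
    """Keep FoF groups above the z ~ 1 mass threshold."""
    return [g for g in groups if g["m200"] >= M_MIN]

def select_galaxies(galaxies):
    """Keep semi-analytic galaxies brighter than the magnitude cut
    (brighter means a more negative absolute magnitude)."""
    return [gal for gal in galaxies if gal["MK"] <= MK_MAX]

print(len(select_groups([{"m200": 5e12}, {"m200": 2e13}])))   # 1
print(len(select_galaxies([{"MK": -20.5}, {"MK": -18.2}])))   # 1
```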
In order to find the gas properties of all groups at $z\! =\! 0$, we cross-correlated our list of groups with the Millennium gas catalogue, and obtained the bolometric X-ray luminosity of our selected groups at $z$=0. Out of 19066 dark matter group haloes with $M(R_{\rm 200}) \geq 10^{13}\, h^{-1}\,$M$_{\odot}$ selected at $z=0.998$, optical properties from the semi-analytic catalogue (as well as gas properties from the gas simulations at $z$=0) and the entire history of evolution at all redshifts up to $z$=0 were found for 17866 group haloes ($\sim$94\% of the initial sample at $z\sim1$). Fig.~\ref{alifig01} shows the bolometric X-ray luminosity from the Millennium gas simulation, plotted against the corresponding dark matter halo mass of each group, at redshift $z$=0 for all of the 17866 matched groups. The vertical dashed line in Fig.~\ref{alifig01} corresponds to the conventional X-ray luminosity threshold ($L_{\rm X,bol} = 0.25\times 10^{42}\,h^{-2}$erg s$^{-1}$) for fossil groups \citep{Jones03}, adopted in Sec.~\ref{fossil} to define X-ray bright groups. There are 14628 groups above this threshold, out of 17866 groups. These X-ray bright groups will constitute the main data set for the rest of our analysis, except in \S\ref{SAM} and \S\ref{Delta4}, where the whole range of halo mass will be explored to study the magnitude gap statistics and the local environment of groups. \begin{figure} \epsfig{file=ali-fig-01.eps,width=1.05\hsize} \caption{The relation between the mass of group haloes (within $R_{\rm 200}$) at $z=0$ from the Millennium DM simulation, and the bolometric X-ray luminosity of the corresponding haloes in the Millennium gas simulation. All groups have $M(R_{\rm 200}) \geq 10^{13}\, h^{-1}\,$M$_{\odot}$ at $z\sim1.0$. The {\it vertical dashed-line} corresponds to the X-ray luminosity threshold $L_{\rm X,bol} = 0.25\times 10^{42}\,h^{-2}$erg s$^{-1}$ generally adopted to define fossil groups (see Sec.~\ref{fossil}).
Of the 17866 groups matched in the two catalogues, 14628 groups lie above this threshold. In this paper, we call these ``X-ray bright groups''. } \label{alifig01} \end{figure} \section{Results} \label{results} \subsection{The $R$-band Magnitude Gap Statistic} \label{SAM} Dynamical friction will cause the more luminous galaxies in a group to merge on a time scale that depends upon the velocity dispersion of the group; since the dynamical friction force scales as $f_{\rm dyn} \propto v^{-2}$, such mergers are more frequent in poorer groups than in clusters \citep[e.g.][]{miles04,miles06}. As a result, on group scales, the likelihood is higher that a few of the brightest galaxies merge to form the brightest galaxy, leading to a large magnitude gap within a Hubble time. Thus, the distribution of the magnitude gap between the brightest galaxy, and the second and third brightest galaxies, in each group, is often used as an indicator of the dynamical age of a group, particularly in fossil groups \citep{Milos06,Van07,Dariush07,vbb08}. We determine the magnitude gaps from the Millennium semi-analytic models of \citet{Bower06} and \citet{Croton06}, and compare them with observational results from the Sloan Digital Sky Survey (SDSS) C4 cluster catalogue of \citet{Miller05} and the 2-degree Field Galaxy Redshift Survey (2dFGRS) group catalogue of \citet{Van07}. The 2dFGRS group catalogue is constructed using the halo-based group finder algorithm of \citet{Yang05} and contains $\sim 6300$ groups within the mass range $\log (M/h^{-1}\,$M$_{\odot}) \geq 13.0$, where the mass of each group has been determined from the total luminosity of all group members brighter than $M_{b_j} -5\log h=-18$. The C4 catalogue of \citet{Miller05} consists of $\sim 730$ clusters identified in the spectroscopic sample of the Second Data Release (DR2) of the SDSS, within the mass range $13.69 \leq \log (M/h^{-1}\,$M$_{\odot}) \leq 15.0$, estimated from the total $r$-band optical luminosity of cluster galaxies.
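The magnitude gap statistics $\Delta m_{\rm 12}$ and $\Delta m_{13}$ used throughout this section amount to differences between ranked member magnitudes. A minimal sketch, assuming a hypothetical list of member-galaxy absolute magnitudes within the chosen search radius ($0.5R_{\rm 200}$ or $R_{\rm 200}$):

```python
def magnitude_gaps(mags):
    """Return (dm12, dm13): the gaps between the brightest galaxy and the
    second and third brightest. Magnitudes are brighter when more negative,
    so an ascending sort puts the brightest member first."""
    ranked = sorted(mags)
    return ranked[1] - ranked[0], ranked[2] - ranked[0]

# Example: a dominant galaxy at -23.1 with next members at -20.9 and -20.2.
dm12, dm13 = magnitude_gaps([-20.9, -23.1, -20.2, -18.5])
print(round(dm12, 1), round(dm13, 1))   # 2.2 2.9
```

With $\Delta m_{\rm 12} = 2.2$, this example group would pass the conventional $\Delta m_{\rm 12} \geq 2.0$ fossil cut.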
\begin{figure*} \begin{center} \epsfig{file=ali-fig-02.eps,width=1.07\hsize} \caption{The $R$-band magnitude gap distribution for haloes from the Millennium semi-analytic models of \citet{Bower06} ({\it red triangles}) and \citet{Croton06} ({\it black circles}) superposed on the data from the 2dFGRS group catalogue of \citet{Van07} as well as the SDSS C4 cluster catalogue of \citet{Miller05} ({\it blue histograms}). (a) The magnitude gap $\Delta m_{\rm 12}$ between the first and second most luminous galaxies, compared with galaxies from the SDSS C4 catalogue of clusters, computed within a projected radius of 500$h^{-1}$kpc. (b) The same as in (a) but for the magnitude gap $\Delta m_{13}$ between the first and the third most luminous galaxies. (c) The magnitude gap $\Delta m_{\rm 12}$ estimated within $R_{\rm 200}$, compared with galaxies from the 2dFGRS group catalogue. The $\sim 6300$ 2dFGRS groups are within the mass range $\log (M(R_{\rm 200})/h^{-1}\,$M$_{\odot}) \geq 13.0$, and those from the SDSS C4 catalogue consist of $\sim 730$ clusters within the mass range $13.69 \leq \log (M(R_{\rm 200})/h^{-1}\,$M$_{\odot}) \leq 15.0$. } \label{alifig02} \end{center} \end{figure*} Fig.~\ref{alifig02} compares the distributions of the estimated $R$-band magnitude gaps from the semi-analytic models of \citet{Bower06} and \citet{Croton06}, based on the Millennium simulation ({\it red triangles} and {\it black circles} respectively), with the observed results from the C4 cluster catalogue and the 2dFGRS group catalogue ({\it blue histograms}). The magnitude gap statistics $\Delta m_{\rm 12}$ and $\Delta m_{13}$ from \citet{Bower06} are in excellent agreement with those obtained from the 2dFGRS group catalogue and the SDSS C4 catalogue of clusters. However, the semi-analytic galaxy catalogue of \citet{Croton06} predicts a larger fraction of groups with $\Delta m_{\rm 12} \geq 2.0$ than found in both the SDSS and 2dFGRS samples.
This is of particular importance for the determination of the space density of fossil galaxy groups, and for the comparison of fossil samples drawn from simulated and observed catalogues, which use the magnitude gap as a key discriminant \citep[e.g., see Table~1 of][]{Dariush07}. The shift of the \citet{Croton06} distributions of $\Delta m_{\rm 12}$ and $\Delta m_{13}$ to larger values than observed may reflect the fact that the treatment of dynamical friction in the \citet{Bower06} model differs from that used by \citet{Croton06}. In both models, N-body dynamics are used to follow the orbital decay of satellite galaxies whose subhalo can be resolved. However, when the subhalo can no longer be reliably followed, the dynamical friction calculations differ. In the \citet{Bower06} model, the dynamical friction timescale is initially calculated following \citet{Cole00}. However, if the host halo of the satellite is deemed to undergo a formation event (corresponding to a mass doubling since the previous formation event) before the satellite merges, then a new orbit for the satellite is selected at random, and the dynamical friction timescale for the satellite is recalculated. This calculation takes into account the scattering of galaxies to larger energy orbits during the merger of their parent halo. Another possible cause for the success of the \citet{Bower06} model in matching the magnitude gap statistics is that it predicts a large scatter in the relation between galaxy stellar mass and halo mass---significantly more than in the \citet{Croton06} model. As a result, rather large satellite haloes sometimes arrive carrying relatively small galaxies, resulting in a large difference between the magnitude of the dominant object and the next most luminous.
This difference occurs because the AGN feedback is not guaranteed to switch off the cooling at a particular halo mass in the \citet{Bower06} model, as it depends on the merging and cooling history of each halo. For the purposes of this work, the fact that the \citet{Bower06} method for computing merging timescales results in good agreement with the observed magnitude gap distributions makes it well suited for the remainder of our study. \subsection{Evolution of galaxy groups} In cosmological simulations, the age of galaxy groups can be expressed in terms of the rate of the mass assembly of the groups. This means that for a given group halo mass, groups that formed early assembled most of their mass at an earlier epoch in comparison to younger groups. Thus the {\it assembly time} of a dark matter halo, defined as the look-back time at which its main progenitor reaches a mass that is half of the halo's present-day mass, is larger in ``older'' systems than in ``younger'' ones. Of course, in cosmological simulations such as the Millennium runs, where structures in the Universe form hierarchically, massive systems, which form later, turn out to have shorter assembly times than low mass groups. Therefore one should take into account the mass of systems when comparing the mass assembly of various types of groups and clusters. \subsubsection{Fossil groups of galaxies} \label{fossil} How is the history of mass assembly of a group or cluster related to its present observable parameters? It is expected that groups which have formed earlier tend to be more dynamically relaxed, thus having a hotter intergalactic medium (IGM) and being more likely to be X-ray luminous \citep{miles04,forbes06}. It has been shown that in X-ray luminous systems, the brightest galaxies are bigger and more optically luminous than those belonging to systems that have little or no diffuse X-ray emission \citep[e.g.,][]{habib-gems}.
On the other hand, groups of the same mass that have formed late, and are still in a state of collapse, would not show X-ray emission associated with their IGM \citep{Jesper06,bai10}, and are less likely to be dominated by a massive elliptical in their cores. Hitherto the so-called {\it fossil galaxy groups}, which are supposed to be canonical examples of groups that have formed early, have been identified by requiring that their X-ray luminosities satisfy $L_{\rm X,bol} \geq 0.25 \times 10^{42} h^{-2}$erg s$^{-1}$ \citep[e.g.][]{Habib07,Jones03}. In addition, a fossil group needs to have, within half a virial radius of the group's centre, a second brightest galaxy that is at least 2 mag fainter than the brightest galaxy, i.e. $\Delta m_{\rm 12} \geq 2.0$ \footnote{This condition can be replaced by $\log (L_2/L_1) \leq -0.8$ where $L_1$ and $L_2$ are the luminosities of the first two brightest galaxies.}. So far these two observational criteria have been jointly used to explore fossil groups and clusters of galaxies. Therefore, in Fig.~\ref{alifig03}, which displays all the {\it X-ray bright groups} fulfilling the X-ray criterion of Fig.~\ref{alifig01}, the optical criterion $\Delta m_{\rm 12} \geq 2.0$ ({\it dotted horizontal line}) should separate groups which formed earlier from their counterparts with $\Delta m_{\rm 12}<2.0$. Note that, in numerical simulations, fossils are identified as groups with $\Delta m_{\rm 12} \geq 2.0$ within either $R_{\rm 200}$ or $0.5R_{\rm 200}$. Our results from this study, as well as those presented in \citet{Dariush07}, show that the fraction of fossils (and therefore their space density) depends on the search radius within which $\Delta m_{\rm 12}$ is estimated, whereas the history of mass assembly does not change much.
\subsubsection{The mass assembly of X-ray fossil groups} \label{evolution1} Let us introduce the parameter $\alpha_z$, which for an individual group is the ratio of its mass at redshift $z$ to its final mass at $z=0$, i.e. $\alpha_z\equiv M_{z}/M_{z=0}$. Thus at a given redshift $z$, groups with larger $\alpha_z$ have assembled a larger fraction of their final mass by $z$ than groups with smaller values of $\alpha_z$. In Fig.~\ref{alifig03}, we plot the magnitude gap $\Delta m_{\rm 12}$ (within $0.5R_{\rm 200}$), estimated for all 14628 X-ray bright groups (i.e. groups with $L_{\rm X,bol} \geq 0.25 \times 10^{42} h^{-2}$erg s$^{-1}$) at $z=0$, as a function of their mass fraction $\alpha_{1.0}$ at $z\!=\! 1$. Groups are colour-coded according to their dark matter halo mass. The horizontal dashed line separates groups into fossils ($\Delta m_{\rm 12} \geq 2.0$) and non-fossils ($\Delta m_{\rm 12} < 2.0$). All data points on the right side of the {\it vertical dashed-line} have assembled more than 50\% of their mass by $z \sim$1.0 and hence have a minimum assembly time of $\sim$7.7 Gyr. The contour lines represent the number of data points (groups) in each of $25 \times 25$ cells of an overlaid grid which is equally spaced along both the horizontal and vertical axes. \begin{figure*} \epsfig{file=ali-fig-03.eps,width=1.05\hsize} \caption{ The magnitude gap $\Delta m_{\rm 12}$ within $0.5\,R_{\rm 200}$, estimated for all 14628 X-ray bright groups in Fig.~\ref{alifig01} (i.e. groups with $L_{\rm X,bol} \geq 0.25 \times 10^{42} h^{-2}$erg s$^{-1}$) at $z=0$, versus the ratio of the group halo mass at redshift $z=1$ to its mass at $z=0$ ($\alpha_{1.0}$). The {\it horizontal dashed-line} separates groups into fossils ($\Delta m_{\rm 12} \geq 2.0$) and non-fossils ($\Delta m_{\rm 12} < 2.0$). The {\it vertical dashed-line} corresponds to $\alpha_{1.0}=0.5$.
Groups with $\alpha_{1.0} \geq 0.5$ have formed more than half of their mass by $z \sim 1.0$ and hence have a minimum assembly time of $\sim 7.7$ Gyr. Data points are colour-coded according to the FoF group halo mass $M_{R200}$ at the present epoch. The density of data points is represented by {\it black contour lines}, showing the number of groups in each of $25 \times 25$ cells of an overlaid grid, equally spaced horizontally and vertically. The {\it upper panel} shows the histogram of X-ray bright fossil groups, i.e. all groups with $\Delta m_{\rm 12} \geq 2.0$ and $L_{\rm X,bol} \geq 0.25 \times 10^{42} h^{-2}$erg s$^{-1}$.} \label{alifig03} \end{figure*} Three results emerge from this plot: (i) As expected, the average rate of mass growth in massive systems is higher than in low mass groups, as the majority of massive groups and clusters have assembled less than 50\% of their final mass by $z \sim$1.0. (ii) Less massive groups (which tend to be early-formed) develop larger magnitude gaps in comparison to massive groups and clusters. Consequently the fraction of massive fossils, identified in this way, is smaller than that of low mass fossil groups. (iii) For any given $\alpha_{1.0} \gtrsim 0.5$, the majority of groups have magnitude gaps $\Delta m_{\rm 12} < 2.0$, as is evident from the density of contours. In other words, the number of early-formed groups with $\Delta m_{\rm 12} < 2.0$ exceeds the number of fossil groups with $\Delta m_{\rm 12} \geq 2.0$. Unlike the first two results, the third conclusion is not in agreement with our current view that early-formed groups necessarily develop larger magnitude gaps. Clearly, the majority of groups with similar values of $\alpha_{1.0} \gtrsim 0.5$ have smaller magnitude gaps. Without doubt, the parameter $\Delta m_{\rm 12}$ is influenced by the infall and merging of galaxies and sub-groups within galaxy groups.
This could result in an increase (in the case of merging) or a decrease (in the case of the infall of new galaxies) in $\Delta m_{\rm 12}$. Indeed, in the work of \citet{vbb08}, one finds that the ``fossil'' phase of any fossil group is transient, since the magnitude gap criterion will sooner or later be violated by a galaxy comparable to the brightest galaxy falling into the core of the group. \begin{figure*} \epsfig{file=ali-fig-04.eps,width=1.1\hsize} \caption{ The evolution with redshift of various physical parameters of X-ray bright groups, in various ranges of group mass. {\it Left panel:} Haloes are classified as {\bf X-ray fossil} ($\Delta m_{\rm 12}\geq 2.0$, {\it red triangles}) and {\bf control} ($\Delta m_{\rm 12}\leq 0.5$, {\it blue circles}) groups based on the magnitude gap between the first and the second brightest galaxies within $0.5R_{\rm 200}$. {\it Right panel:} Groups are divided into {\bf old} ($\alpha_{1.0} \geq 0.5$, {\it red stars}) and {\bf young} ($\alpha_{1.0} \leq 0.5$, {\it blue squares}) populations respectively. Each row represents the evolution of a parameter characteristic of galaxy groups that can be measured from the simulations (but not necessarily from observations). From top to bottom the parameters on the y-axis represent: {\bf (i)}~$\alpha_z$, the ratio of the group halo mass at redshift $z$ to its mass at $z=0$ ($a1 ,..., a6$), {\bf (ii)}~$\Delta m_{\rm 12}$, i.e. the magnitude gap between the first two brightest group galaxies found within 0.5$R_{\rm 200}$ ($b1 ,..., b6$), {\bf (iii)}~$G_z$, the ratio of the number of galaxies within $0.5\,R_{\rm 200}$ at redshift $z$ of a given galaxy group to the number of galaxies within $0.5\, R_{\rm 200}$ at redshift $z=0$ of the same group ($c1 ,..., c6$), and {\bf (iv)}~the group velocity dispersion $\sigma_V$ in km~s$^{-1}$ ($d1,...,d6$). In the third row of panels from the top, the horizontal {\it green dashed-lines} intersect the y-axis at $G_z=1$.
} \label{alifig04} \end{figure*} To quantify the above results, we study the evolution with redshift of various physical parameters for two different samples of groups, drawn from the distribution of galaxy groups in Fig.~\ref{alifig03}. In the first sample, haloes are divided into {\bf old} ($\alpha_{1.0} \geq 0.5$, $b+c$ in Fig.~\ref{alifig03}) and {\bf young} ($\alpha_{1.0} \leq 0.5$, $a+d$ in Fig.~\ref{alifig03}) groups respectively. In the second sample, haloes are classified as {\bf X-ray fossil} ($\Delta m_{\rm 12}\geq 2.0$, $a+b$ in Fig.~\ref{alifig03}) and {\bf control} ($\Delta m_{\rm 12}\leq 0.5$, $c+d$ in Fig.~\ref{alifig03}) groups based on the magnitude gap between the first and the second brightest galaxies within half a virial radius of the centre of the group. For each sample, the evolution of various parameters is shown in the two panels of Fig.~\ref{alifig04}. From top to bottom these parameters are: \begin{itemize} \item $\alpha_z$, i.e. the ratio of the group halo mass at redshift $z$ to its mass at $z=0$, \item $\Delta m_{\rm 12}$ within 0.5$R_{\rm 200}$, \item $G_z$, i.e. the ratio of the number of galaxies within $0.5R_{\rm 200}$ at redshift $z$ of a given galaxy group to the number of galaxies within $0.5R_{\rm 200}$ at redshift $z=0$ of the same group, \item The group velocity dispersion $\sigma_V$ in km~s$^{-1}$. \end{itemize} In each panel of Fig.~\ref{alifig04}, the left, middle, and right columns correspond to different ranges in group mass, as indicated. The left panel in Fig.~\ref{alifig04} illustrates X-ray fossil ({\it red triangles}) and control ({\it blue circles}) groups respectively, while the right panel shows old ({\it red stars}) and young ({\it blue squares}) groups. The horizontal {\it green dashed-lines} intersect the y-axes at $G_z=1$. Errors on data points are the standard error on the mean, i.e. $\sigma / \sqrt{N}$, where $\sigma$ is the standard deviation of the original distribution and $N$ is the sample size.
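The error bars just described can be computed directly. The sketch below, in plain NumPy, illustrates the standard error on the mean, $\sigma/\sqrt{N}$; the velocity-dispersion sample is synthetic and purely illustrative, not taken from the simulation.

```python
import numpy as np

def standard_error(values):
    """Standard error on the mean: sigma / sqrt(N), where sigma is the
    standard deviation of the distribution and N is the sample size."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=0) / np.sqrt(values.size)

# Synthetic velocity dispersions (km/s) for a hypothetical group sample
rng = np.random.default_rng(0)
sigma_v = rng.normal(400.0, 80.0, size=256)

mean = sigma_v.mean()
err = standard_error(sigma_v)
print(f"sigma_V = {mean:.1f} +/- {err:.1f} km/s")
```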
A comparison between Figs.~\ref{alifig04}$a1,a2,a3$ and Figs.~\ref{alifig04}$a4,a5,a6$ shows that older groups, which have been selected according to their lower rate of mass growth (i.e. larger $\alpha_{1.0}$), represent a {\it perfect class} of fossils, though they develop a magnitude gap $\Delta m_{\rm 12}$ which is not as large as those seen in X-ray fossils (see also Figs.~\ref{alifig04}$b1,...,b6$). On the other hand, unlike old groups, X-ray fossils develop large magnitude gaps which do not necessarily reflect an early mass assembly, especially in massive groups with $\log (M(R_{\rm 200})/h^{-1}\,$M$_{\odot}) \geq 14.0$. This reflects the fact that the majority of genuinely passive groups have a small magnitude gap between their two brightest galaxies. Thus the criterion $\Delta m_{\rm 12} \geq 2$ only partially separates genuinely old/passive groups from young/forming groups, as there is a larger fraction of genuinely old groups with small $\Delta m_{\rm 12}$. From Figs.~\ref{alifig04}$c4,c5,c6$, it is clear that older groups are essentially more relaxed systems, which have not recently experienced a major merger, as the rate of infall of galaxies is equal to or even less than the rate at which galaxies merge with the central group galaxy. Therefore in old groups the parameter $G_z$ is more or less constant with time, compared to that of the younger groups within the same group mass bin. The situation is somewhat different in X-ray fossils with $\log (M(R_{\rm 200})/h^{-1}\,$M$_{\odot}) \leq 14.0$ (Figs.~\ref{alifig04}$d1,d2$), since, in these cases, the rate of galaxy merging is noticeably larger than that of the infall of galaxies. As a result, very large magnitude gaps can develop in X-ray fossil groups. It is also evident that both massive X-ray fossils and control groups with $\log (M(R_{\rm 200})/h^{-1}\,$M$_{\odot}) \geq 14.0$ (Fig.~\ref{alifig04}$d3$) are in a state of rapid mass growth.
As a consequence, massive X-ray groups are not dynamically relaxed systems, as they are influenced by the infall of galaxies and substructures. Finally, it is worth considering how the velocity dispersion, plotted in Figs.~\ref{alifig04}$d1,...,d6$, changes with time in the different kinds of groups. As Figs.~\ref{alifig04}$d4,d5,d6$ show, as long as $G_z$ remains close to 1.0 ({\it green dashed-line}), the velocity dispersion does not change significantly with time, which in turn is a sign that these groups are relaxed systems. \begin{figure*} \epsfig{file=ali-fig-05.eps,width=0.8\hsize} \caption{ The absolute $R$-band magnitude of BCGs for all X-ray bright groups versus $\Delta m_{\rm 12}$ within 0.5R$_{\rm 200}$ in four different mass bins. A grid of $45\times 55$ cells has been superposed in each panel, and each cell is colour-coded according to the median value of $\alpha_{1.0}$ within the cell. The contours trace the distribution of median $\alpha_{1.0}$ values in each panel. All panels have the same scale.} \label{alifig05} \end{figure*} \subsubsection{BCG magnitudes} Since the central galaxy in a fossil group is a product of numerous mergers, many of them with luminosities close to those of L$_{\star}$ galaxies, X-ray fossils are expected to be dominated by optically luminous brightest galaxies (BCGs) more often than their non-fossil counterparts. Here we explore the correlation between the luminosity of the central galaxies of groups with large magnitude gaps and their mass assembly history. The four panels in Fig.~\ref{alifig05} demonstrate the relation between the absolute $R$-band magnitude of the BCGs for all X-ray bright groups and the magnitude gap $\Delta m_{\rm 12}$ within $0.5\,R_{200}$ in four different mass bins. In each panel of Fig.~\ref{alifig05}, a grid of $45\times 55$ cells is overlaid, where each cell is colour-coded according to the median of $\alpha_{1.0}$ in that cell.
Accordingly, contours in each panel trace the distribution of median $\alpha_{1.0}$ values. Fig.~\ref{alifig05} clearly shows that both $\alpha_{1.0}$ and the BCG $R$-band magnitudes increase as the group halo mass decreases. However, it does not show a tight correlation between the $R$-band luminosity of BCGs and the group magnitude gaps $\Delta m_{\rm 12}$, though the correlation is more pronounced in clusters with $M(R_{\rm 200}) \geq 10^{14}\,h^{-1}\,$M$_{\odot}$ (Fig.~\ref{alifig05}d). Therefore, imposing a magnitude cut for the BCGs would result in the loss from the sample of a large number of genuinely old groups which are not X-ray fossil systems according to the optical condition involving $\Delta m_{\rm 12}$. \subsection{The Fossil phase in the life of groups} \label{Phase1} The existence of large magnitude gaps in X-ray fossils in Figs.~\ref{alifig04}$b1,b2,b3$ is expected, as these groups were initially selected according to their $\Delta m_{\rm 12}$ at $z$=0. It would be interesting if they could be shown to have maintained such large magnitude gaps for a longer time than control groups, which would be the case if X-ray fossils were relaxed groups without recent major mergers. Also, if fossil groups in general are the end results of galaxy merging, then we expect the majority of fossils selected at higher redshifts to still be detected as fossils at the present epoch. In other words, the {\it fossil phase} in the life of a galaxy group should be long-lasting. To test this, we select three sets of fossil groups with $\Delta m_{\rm 12}\geq 2.0$ at three different redshifts. By tracing the magnitude gap of each set from $z$=1.0 to $z$=0, we examine the fossil phase of each set over time. In Fig.~\ref{alifig06}, fossils ({\it black shaded histogram}) and control ({\it black thick line histogram}) groups are selected at $z$=0 ({\it left column}), $z$=0.4 ({\it middle column}), and $z$=1.0 ({\it right column}).
Fractions of fossil and control groups in each column of Fig.~\ref{alifig06} have been estimated separately, by normalising the number of fossil and control groups at other redshifts to their total numbers at the redshift at which they were initially selected. This plot shows that, contrary to expectation, no matter at what redshift the fossils are selected, after $\sim$4 Gyr more than $\sim$90\% of them change their status and become non-fossils according to the magnitude gap criterion. Over the span of 7.7~Gyr, which is the time interval between $z=0$ and $z=1$, very few groups retain a two-magnitude gap between the two brightest galaxies. This means that the fossil phase is a temporary phase in the life of fossil groups \citep[also see][]{vbb08}, and there is no guarantee that an observed fossil group, at a relatively high redshift, remains a fossil until the present time, if fossils are selected according to their magnitude gap $\Delta m_{\rm 12}\geq 2.0$. \begin{figure} \epsfig{file=ali-fig-06.eps,width=3.7in} \caption{ The fate of fossil groups identified at different redshifts. Fossil groups ({\it dark shaded histogram}) and control groups ({\it grey shaded histogram}) are initially selected at $z$=0 ({\it left column}), $z$=0.4 ({\it middle column}), and $z$=1.0 ({\it right column}). For these objects, we explore what fraction remain ``fossil'' or ``control'' groups at two other epochs in the redshift range $0 \lesssim z \lesssim 1.0$. It is clear that the fossil phase does not last in $>90$\% of groups after 4~Gyr, no matter at which epoch they are identified.
} \label{alifig06} \end{figure} \section{Revising the Optical Criterion for finding fossil groups} \label{RevOptDef} Using the Millennium simulation DM runs, as well as the gas and semi-analytic galaxy catalogues based on them, it appears from the above that the conventional optical condition $\Delta m_{\rm 12}\geq 2$, used to classify groups as fossils, does not ensure that these systems represent a class of old galaxy groups, in which the central galaxy has grown through the merging of other comparable group galaxy members. Having said that, it is true that the magnitude gap in a galaxy group is related to the mass assembly history of the group, since we saw that such a gap develops gradually with time. However, the difference between the luminosities of the two brightest galaxies in groups is not always reliable for the identification of fossil systems, as this quantity is vulnerable to the assimilation of a comparable galaxy into the core of the group, as a result of infall or merger with another group. We therefore attempt to identify a more robust criterion, in terms of the difference of optical magnitudes among the brightest galaxies in a group, which might be better suited to identifying systems where most of the mass has been assembled at an early epoch. As introduced in Sec.~\ref{evolution1}, we quantify the age of a group in terms of the mass assembly parameter $\alpha_{1.0}$, which for an individual group is the ratio of its mass at redshift $z\!=\! 1.0$ to its present mass at $z\!=\! 0$, i.e. $\alpha_{1.0}\equiv M_{z=1.0}/M_{z=0}$. We begin by considering the effect of the radius within which the magnitude gap is calculated.
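For concreteness, the mass assembly parameter defined above reduces to a single array operation; in the sketch below, the halo masses are hypothetical placeholders rather than catalogue values.

```python
import numpy as np

def alpha(m_z, m_z0):
    """Mass assembly parameter: alpha_z = M_z / M_{z=0}.
    Groups with alpha_{1.0} >= 0.5 assembled more than half of their
    final mass by z = 1 (a minimum assembly time of ~7.7 Gyr)."""
    return np.asarray(m_z, dtype=float) / np.asarray(m_z0, dtype=float)

# Hypothetical halo masses (in units of 1e13 h^-1 Msun) at z = 1 and z = 0
m_z1 = np.array([0.8, 2.0, 5.5])
m_z0 = np.array([1.0, 5.0, 20.0])
a10 = alpha(m_z1, m_z0)
old = a10 >= 0.5          # flags the early-formed ("old") groups
print(a10, old)
```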
\subsection{A general criterion for the magnitude gap} Consider a general optical condition for defining early-formed groups, based on the magnitude gap between the brightest group galaxy and other group members, of the following form: \begin{equation} \Delta m_{1i} \geq j, \label{dm1i} \end{equation} where $\Delta m_{1i}$ is the difference in $R$-band magnitude between the first brightest group galaxy and the $i^{\rm th}$ brightest group galaxy within 0.5R$_{\rm 200}$ (or R$_{\rm 200}$) of the group centre. The current definition of fossils involves $i=2$ and $j=2$. Obviously, any group satisfying Eq.~\ref{dm1i} must contain at least $i$ galaxies. We do not consider $i>10$, since then we would have to exclude most groups in our sample, nor would it be very useful for observers. \begin{figure*} \centering \epsfig{file=ali-fig-07-A.eps,bb=50 20 290 250,clip=,width=0.49\hsize} \epsfig{file=ali-fig-07-B.eps,bb=50 20 290 250,clip=,width=0.49\hsize} \caption{The dependence on ($i$, $j$), defined in Eq.~\ref{dm1i}, of the mass assembly parameter $\alpha_{1.0}$, which is defined as the ratio of the mass of a group at redshift $z\!=\! 1.0$ to its mass at $z\!=\! 0$, i.e. $\alpha_{1.0}\equiv M_{z=1.0}/M_{z=0}$. For each value of $i$, groups are sorted according to the value of their magnitude gaps $\Delta m_{1i}$ calculated within a certain radius (different for the two panels). Then, for different values of $j$, the average for $\alpha_{\rm 1.0}$ is calculated for all groups satisfying Eq.~\ref{dm1i}. The plot is colour-coded according to $\alpha_{\rm 1.0}$. The {\it black contours} are drawn such that the fraction of the total number of groups identified is constant along each line. The magnitude gap $\Delta m_{1i}$ is calculated {\bf (Panel a.)} within half the overdensity radius, i.e. 0.5R$_{\rm 200}$, and {\bf (Panel b.)} within the overdensity radius, i.e.
R$_{\rm 200}$.} \label{alifig07} \end{figure*} As we consider the magnitude gap from the brightest to the $i^{\rm th}$ ($=2, 3,..., 9, 10$) brightest galaxy, the value of the magnitude gap varies from $j\gtrsim$0 to $j\lesssim$5. Our aim is to find a pair ($i$, $j$) in Eq.~\ref{dm1i} which yields the best selection of genuinely old groups with a history of early mass assembly. In Fig.~\ref{alifig07}, we show how the parameter $\alpha_{1.0}\equiv M_{z=1.0}/M_{z=0}$, which represents the mass assembly of groups since redshift $z$=1.0, depends upon the selection of $i$ and $j$ in Eq.~\ref{dm1i}. For each value of $i$ in Fig.~\ref{alifig07}, groups are first sorted according to their magnitude gaps $\Delta m_{1i}$, estimated within 0.5R$_{\rm 200}$ or R$_{\rm 200}$, where the latter is the overdensity radius of the group. For each $i$, the average value of $\alpha_{\rm 1.0}$ is calculated, for each value of $j$, over all groups satisfying Eq.~\ref{dm1i}. The plot is colour-coded according to the values of $\alpha_{\rm 1.0}$. The {\it black contours} give an idea of the number of groups involved: the fraction of the total number of groups identified by parameters ($i$, $j$) is constant along each of these lines. From Fig.~\ref{alifig07}a, for instance, we find that systems with $i=4$ and $j=3$, i.e. systems for which $\Delta m_{14} \geq 3$ (within 0.5R$_{\rm 200}$), yield $\sim$2.4$\%$ of groups, with $\alpha_{\rm 1.0} \sim$0.54. The corresponding fraction is $\sim$1.2$\%$, with $\alpha_{\rm 1.0} \sim$0.56, if $\Delta m_{14} \geq 3$ is estimated within R$_{\rm 200}$, according to Fig.~\ref{alifig07}b. In fact, by changing our search radius from 0.5R$_{\rm 200}$ to R$_{\rm 200}$, we find $\sim$50$\%$ fewer groups satisfying Eq.~\ref{dm1i}, for the same value of $\alpha_{\rm 1.0}$. Therefore, hereafter, we use only Panel~{\bf a} of Fig.~\ref{alifig07}, in which the magnitude gap $\Delta m_{1i}$ is estimated within half of the overdensity radius, i.e. 0.5R$_{\rm 200}$.
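The scan over ($i$, $j$) described above can be sketched as follows. The input arrays, per-group member magnitudes sorted brightest-first and the corresponding $\alpha_{1.0}$ values, are hypothetical stand-ins for the simulation catalogue.

```python
import numpy as np

def scan_gap_criteria(mags, alpha10, i_vals, j_vals):
    """For each pair (i, j), select groups with Delta m_{1i} >= j and
    record the fraction of groups selected and their mean alpha_{1.0}.
    `mags` is an (n_groups, n_members) array of R-band magnitudes,
    sorted brightest-first within each group (hypothetical input)."""
    out = {}
    for i in i_vals:
        gap = mags[:, i - 1] - mags[:, 0]   # Delta m_{1i} per group
        for j in j_vals:
            sel = gap >= j
            frac = sel.mean()
            mean_a = alpha10[sel].mean() if sel.any() else np.nan
            out[(i, j)] = (frac, mean_a)
    return out

# Toy example: 4 groups, the 4 brightest members of each (hypothetical)
mags = np.array([
    [-23.0, -20.5, -20.1, -19.8],
    [-22.0, -21.8, -21.5, -21.0],
    [-21.5, -19.0, -18.8, -18.5],
    [-22.5, -22.0, -20.2, -19.9],
])
alpha10 = np.array([0.6, 0.3, 0.55, 0.45])
res = scan_gap_criteria(mags, alpha10, i_vals=(2, 4), j_vals=(2.0, 2.5))
print(res[(2, 2.0)])
```

Scanning such a grid of ($i$, $j$) pairs, and weighing the selected fraction against the mean $\alpha_{1.0}$, mirrors the trade-off shown by the colour scale and contours of Fig. 7.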
From this plot, the fraction of groups picked out by applying the conventional fossil criterion $\Delta m_{\rm 12} \geq 2$ is $\sim$4.0$\%$, with $\alpha_{\rm 1.0} \sim$0.52. If we were to find an improved criterion for finding fossils, a better set of parameters ($i$, $j$) in Eq.~\ref{dm1i} should \begin{enumerate} \item identify groups with larger values of $\alpha_{\rm 1.0}$, and/or \item find a larger fraction of groups with the same or larger value of $\alpha_{\rm 1.0}$, \end{enumerate} than found for conventional fossils, i.e. groups with ($i$, $j$)=(2, 2). For example, by choosing $i\!=\! 4$ and $j\!=\! 3$, the fraction of groups found with $\Delta m_{\rm 14} \geq 3$ turns out to be $\sim$40$\%$ less than when $i$=$j$=2, but it would identify slightly older groups, with an average $\alpha_{\rm 1.0} \sim$0.55, whereas the average is $\alpha_{\rm 1.0} \sim$0.52 in fossils with $i$=$j$=2. In other words, ($i$, $j$)=(4, 3) identifies marginally older groups, at the expense of losing a large number of early-formed groups, compared to the case of conventional fossils ($i$, $j$)=(2, 2). Exploring Fig.~\ref{alifig07}a, we adopt ($i$, $j$)=(4, 2.5) as an example of how the fossil search criterion can be improved. If we define all groups with $\Delta m_{\rm 14} \geq 2.5$ within $0.5\,R_{\rm 200}$ as fossils, then we would find groups with on average the same mass assembly history, i.e. the same average value of $\alpha_{\rm 1.0}$, but we would identify $\sim$50$\%$ more such groups, compared to groups identified with the conventional parameters ($i$, $j$)=(2, 2). We will examine such groups further in the next section. Meanwhile, Fig.~\ref{alifig07} allows the reader to find their favourite combination of ($i$, $j$) for both 0.5R$_{\rm 200}$ and R$_{\rm 200}$. \begin{table*} \begin{minipage}{120mm} \centering \caption{Peak values (from Gaussian fits) of the histograms of the mass assembly parameter $\alpha_{1.0}$ for various classes of groups.
(see Fig.~\ref{alifig08}). Group halo mass $M(R_{\rm 200})$ is in units of $\,h^{-1}\,$M$_{\odot}$. F$_{\rm 14}$ consists of all groups with $\Delta m_{\rm 14} \geq 2.5$, and F$_{\rm 12}$, those with $\Delta m_{\rm 12} \geq 2.0$, both within 0.5R$_{200}$ of the group centre. } \begin{tabular}{lccc} \\ \hline Group type & $ 13.0 \leq \log M(R_{\rm 200}) $ & $13.0 \leq \log M(R_{\rm 200}) \leq 13.5$ & $\log M(R_{\rm 200}) \geq 13.5$ \\ & Panel {\bf a} & Panel {\bf b} & Panel {\bf c} \\ \hline All X-ray bright groups & $0.41 \pm 0.01$ & $0.52\pm0.01$ & $0.38\pm0.01$ \\ F$_{\rm 12}$ & $0.53 \pm 0.01$ & $0.56\pm0.01$ & $0.49\pm0.01$ \\ F$_{\rm 14}$ & $0.52 \pm 0.01$ & $0.55\pm0.01$ & $0.48\pm0.01$ \\ F$_{\rm 12}$ $\cap$ F$_{\rm 14}$ & $0.54 \pm 0.01$ & $0.55\pm0.01$ & $0.50\pm0.02$ \\\hline \\ \end{tabular} \label{fitPARAM} \end{minipage} \end{table*} \begin{figure*} \epsfig{file=ali-fig-08.eps,width=7.25in} \caption{Histograms of the mass assembly parameter $\alpha_{\rm 1.0}$ for X-ray bright groups ({\it blue histogram}), groups that satisfy the criterion F$_{\rm 14}$ ({\it red thick histogram}), those that satisfy F$_{\rm 12}$ ({\it gray shaded histogram}), and groups that satisfy both criteria, i.e. F$_{\rm 12}$ $\cap$ F$_{\rm 14}$ ({\it green shaded histogram}). Overlaid are Gaussian fits to each histogram (see Table~\ref{fitPARAM}). Panels {\bf a}, {\bf b}, and {\bf c} correspond to the logarithm of the group halo mass in the range $ 13.0 \leq \log M(R_{\rm 200}) $, $13.0 \leq \log M(R_{\rm 200}) \leq 13.5$, and $\log M(R_{\rm 200}) \geq 13.5$ respectively, the unit of $M(R_{\rm 200})$ being $\,h^{-1}\,$M$_{\odot}$. 
The {\it green dash dotted} line intersects the $x$-axis at $\alpha_{\rm 1.0}=0.5$, where haloes have assembled 50$\%$ of their mass by redshift $z$=1.0.} \label{alifig08} \end{figure*} \subsection{The optical criterion $\Delta m_{\rm 14} \geq 2.5$ within 0.5R$_{\rm 200}$} \label{altcrit} Having explored alternative criteria for identifying groups with a history of early formation, we now compare the history of mass assembly of groups selected according to $\Delta m_{\rm 14} \geq 2.5$ (these groups are hereafter collectively referred to as {\it F$_{\rm 14}$}) with those selected according to $\Delta m_{\rm 12} \geq 2.0$ (hereafter {\it F$_{\rm 12}$}), both within 0.5R$_{200}$ of the group centre. The latter category comprises the conventional fossil groups. The {\it blue histogram} in Fig.~\ref{alifig08} represents the distribution of the mass assembly parameter $\alpha_{\rm 1.0}$ for all X-ray bright groups (as defined in Fig.~\ref{alifig01}) in our sample. It also shows the groups in the categories F$_{\rm 14}$ ({\it red thick histogram}) and F$_{\rm 12}$ ({\it grey shaded histogram}). The {\it green shaded histogram} corresponds to groups that satisfy both criteria, i.e. F$_{\rm 12}$ $\cap$ F$_{\rm 14}$. The {\it green dash dotted} line intersects the x-axis at $\alpha_{\rm 1.0}=0.5$, representing groups for which half of their mass had been assembled by redshift $z\!=\! 1$. Gaussian fits to each histogram are overlaid. Panels {\bf a}, {\bf b}, and {\bf c} in Fig.~\ref{alifig08} correspond to different ranges of the logarithm of the group halo mass, $ 13.0 \leq \log M\,(R_{\rm 200}) $, $13.0 \leq \log M\,(R_{\rm 200}) \leq 13.5$, and $\log M\,(R_{\rm 200}) \geq 13.5$ respectively, where $M\,(R_{\rm 200})$ is in units of $\,h^{-1}\,$M$_{\odot}$.
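Peak values of this kind come from Gaussian fits to the $\alpha_{1.0}$ histograms; a minimal sketch with SciPy is given below, using a synthetic $\alpha_{1.0}$ sample (centred on 0.52 purely for illustration) rather than the actual catalogue.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_alpha_peak(alpha10, bins=30):
    """Histogram alpha_{1.0} over [0, 1] and fit a Gaussian;
    return the fitted peak position mu."""
    counts, edges = np.histogram(alpha10, bins=bins, range=(0.0, 1.0))
    centres = 0.5 * (edges[:-1] + edges[1:])
    # Initial guess: tallest bin sets the amplitude and peak location
    p0 = (counts.max(), centres[np.argmax(counts)], 0.1)
    (amp, mu, sigma), _ = curve_fit(gaussian, centres, counts, p0=p0)
    return mu

# Synthetic alpha_{1.0} values drawn around 0.52 (illustrative only)
rng = np.random.default_rng(1)
sample = np.clip(rng.normal(0.52, 0.12, size=5000), 0.0, 1.0)
print(f"fitted peak: {fit_alpha_peak(sample):.2f}")
```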
These figures, as well as the values of the peaks of the Gaussian fits to the distribution of $\alpha_{\rm 1.0}$ in each case (given in Table~\ref{fitPARAM}), lead to the following observations: \begin{enumerate} \item The groups belonging to F$_{\rm 12}$ and F$_{\rm 14}$ are older than the overall population of X-ray bright groups (for all values of halo mass), though the difference is less pronounced in low-mass systems. Since haloes are thought to be hierarchically assembled, one expects to find a higher incidence of early-formed groups among low-mass systems than among massive ones. \item Within the errors, groups belonging to F$_{\rm 14}$ are almost as old as those in F$_{\rm 12}$, i.e. the estimated $\alpha_{\rm 1.0}$ in F$_{\rm 12}$ systems (as given in Table~\ref{fitPARAM}) is more or less the same as that found in F$_{\rm 14}$ systems. However, the fraction of groups in category F$_{\rm 14}$ is at least 50$\%$ larger than in F$_{\rm 12}$. This tells us that, in general, the criterion $\Delta m_{\rm 14} \geq 2.5$ has a higher efficiency of identifying early-formed systems than $\Delta m_{\rm 12} \geq 2.0$. \item Interestingly, $\sim 75\%$ of F$_{\rm 12}$ haloes in Fig.~\ref{alifig08}{\bf a} also fulfil the $\Delta m_{\rm 14} \geq 2.5$ condition. Conversely, $\sim 35\%$ of F$_{\rm 14}$ haloes satisfy the $\Delta m_{\rm 12} \geq 2.0$ criterion. This means that a large proportion of the population of early-formed groups in the category F$_{\rm 14}$ is different from those in F$_{\rm 12}$. Groups which satisfy both criteria, i.e. F$_{\rm 12}$ $\cap$ F$_{\rm 14}$, are not necessarily older than those belonging to either F$_{\rm 12}$ or F$_{\rm 14}$ alone (see Table~\ref{fitPARAM}). \item Fig.~\ref{alifig08}{\bf b} shows that, in fact, neither the criterion $\Delta m_{\rm 12} \geq 2.0$ nor $\Delta m_{\rm 14} \geq 2.5$ is efficient in finding early-formed groups in the low-mass regime, even among X-ray bright groups.
\end{enumerate} In the following section, we compare the environment and abundance of the groups belonging to the F$_{\rm 12}$ and F$_{\rm 14}$ categories. \subsection{The local environment of fossil groups} \label{Delta4} If galaxy mergers are responsible for the absence of bright galaxies in groups such as X-ray bright fossils, then most of the matter infall into these systems would have happened at a relatively early epoch. Consequently, at the present time, old groups should be more isolated than groups which have recently formed \citep[e.g.][]{labarb09}. Here we examine the local environment of groups using the density parameter $\Delta_{4}$, based on the number of haloes within a distance of $4\,h^{-1}$~Mpc from the centre of each group. The local densities are calculated at $z=0$ according to \begin{equation} \Delta_4=\frac{\rho_4}{\rho_{bg}}-1, \label{D4} \end{equation} where $\rho_{4}$ is the number density of haloes within a spherical volume of radius 4$h^{-1}$~Mpc, and $\rho_{bg}$ is the background density of haloes within the whole volume of the Millennium simulation. Since the mass assembly of groups is mostly influenced by the infall of subgroups, which individually have masses typically below $\sim$10\% (and often substantially smaller) of the parent halo mass, it is important to take into account all haloes with $M\,(R_{\rm 200}) \geq 10^{11}\, h^{-1}\,$M$_{\odot}$ from the FoF group catalogue in order to estimate $\Delta_4$. From Gaussian fits to the histograms of the local density $\Delta_4$ of F$_{\rm 12}$ and F$_{\rm 14}$ groups, control groups, as well as all X-ray bright groups, we find \begin{equation} \Delta_4 = \left\{ \begin{array}{rll} 6.31 \pm 0.17 & {\rm for } & \mbox{Control groups ($\Delta m_{\rm 12}\leq 0.5$)}\\ 6.25 \pm 0.18 & {\rm for } & \mbox{X-ray bright groups}\\ 5.10 \pm 0.35 & {\rm for } & {\rm F_{\rm 12}}\\ 5.19 \pm 0.26 & {\rm for } & {\rm F_{\rm 14}} \end{array} \right.
\end{equation} where $\Delta_4$ is estimated using Eq.~\ref{D4}. It appears that both F$_{\rm 12}$ and F$_{\rm 14}$ groups are more likely to lie in lower-density regions than control groups and X-ray bright groups. This agrees with our expectation that early-formed groups reside in low-density local environments. However, the local density around F$_{\rm 12}$ and F$_{\rm 14}$ groups is more or less the same. This is consistent with the fact that F$_{\rm 12}$ and F$_{\rm 14}$ have similar values of $\alpha_{\rm 1.0}$ (see Table~\ref{fitPARAM}). A similar trend is seen for the density measures $\Delta_{5}$ and $\Delta_{6}$, but as the sampling volume increases further (beyond $\Delta_{6}$) the trend disappears, showing that it is related to the immediate environment of groups. \subsection{The abundance of fossil groups} \label{abundance} Various studies have shown that the fraction of early-formed groups increases as the group halo mass decreases \citep[e.g.][]{Milos06,Dariush07}. This reflects the fact that structures form hierarchically, with small virialised groups forming early, whereas the most massive clusters form late. As the merging of galaxies in clusters is less efficient than in groups, due to the high velocity dispersion of cluster galaxies, clusters are less likely to develop large magnitude gaps. At the same time, in low-mass groups \citep[see, e.g.,][]{miles04,miles06} dynamical friction is more effective in making galaxies fall to the core of the system, due to the smaller relative velocities involved. As a result, large magnitude gaps should be found more frequently in groups than in clusters. Thus, finding an old population of groups according to some criterion, and studying the way the criterion depends on group halo mass, is a good test of the validity of the condition.
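The mass-dependence test just described amounts to binning a boolean gap selection by halo mass; a minimal sketch is given below, where the sample arrays are hypothetical and purely illustrative.

```python
import numpy as np

def abundance_vs_mass(log_mass, is_fossil, bin_edges):
    """Fraction of groups satisfying a gap criterion (e.g. F_12 or F_14)
    in bins of log halo mass."""
    frac = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (log_mass >= lo) & (log_mass < hi)
        frac.append(is_fossil[in_bin].mean() if in_bin.any() else np.nan)
    return np.array(frac)

# Hypothetical sample: the gap fraction declines with halo mass
log_mass = np.array([13.5, 13.6, 13.7, 14.1, 14.2, 14.3])
is_fossil = np.array([True, True, False, False, True, False])
edges = np.array([13.4, 14.0, 14.6])
print(abundance_vs_mass(log_mass, is_fossil, edges))
```

A declining fraction towards the high-mass bins would be the expected signature of hierarchical assembly for any valid age-related criterion.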
The {\it top panel} of Fig.~\ref{alifig09} displays the abundance of F$_{\rm 12}$ ({\it gray shaded histogram}) and F$_{\rm 14}$ ({\it red thick line}) groups, defined as the fraction of haloes in each category, as a function of halo mass. The range of halo mass explored is $\log M(R_{\rm 200}) \gtrsim 13.4$ in units of $\,h^{-1}\,$M$_{\odot}$. Below this mass limit, the number of groups abruptly decreases, since all groups here have been chosen to be X-ray bright (see Fig.~\ref{alifig01}). The plot shows that, in comparison to the F$_{\rm 12}$ groups, the F$_{\rm 14}$ groups are populated by less massive haloes. This can be seen more clearly in the {\it bottom panel} of Fig.~\ref{alifig09}, where the relative fraction of F$_{\rm 14}$ groups over F$_{\rm 12}$ is shown. It can be inferred that, on average, the fraction of F$_{\rm 14}$ groups with halo mass $M(R_{\rm 200}) \leq 10^{14}\,h^{-1}\,$M$_{\odot}$ is at least 50$\%$ larger than the fraction of F$_{\rm 12}$ groups. However, in the mass range $M(R_{\rm 200}) \geq 10^{14}\,h^{-1}\,$M$_{\odot}$, the fraction of F$_{\rm 14}$ groups decreases, though since the overall numbers in the extremely high mass range are low, the statistics are poorer. \begin{figure} \epsfig{file=ali-fig-09.eps,width=3.5in} \caption{{\bf Top panel:} The abundance of F$_{\rm 12}$ ({\it gray shaded histogram}) and F$_{\rm 14}$ ({\it red thick line}) groups, i.e., the fraction of groups in each category as a function of halo mass. {\bf Bottom panel:} The relative fraction of F$_{\rm 14}$ over F$_{\rm 12}$ groups as a function of halo mass.} \label{alifig09} \end{figure} \subsection{The survival of the magnitude gap: F$_{\rm 12}$ vs. F$_{\rm 14}$} \label{FossilPhase2} In Sec.~\ref{Phase1}, it was found that, in general, for the conventional fossil groups (F$_{\rm 12}$), the {\it fossil phase} is transient, with $\gtrsim 90$\% of such groups ceasing to remain fossils after 4~Gyr. Here we examine whether the fossil phase of F$_{\rm 14}$ groups fares better.
The histograms in Fig.~\ref{alifig10} represent the fractions of F$_{\rm 12}$ ({\it black line}) and F$_{\rm 14}$ ({\it thick red line}) groups, as a function of look-back time in Gyr. The plot shows that, in comparison to the F$_{\rm 12}$ groups, the {\it fossil phase} lasts almost 1~Gyr longer for the same fraction of F$_{\rm 14}$ groups. For example, the fraction of F$_{\rm 12}$ groups that maintain their magnitude gap falls to 28$\%$ after $\sim$2.2 Gyr, while the corresponding period is $\sim$3.2 Gyr for F$_{\rm 14}$. Thus, not only does the $\Delta m_{\rm 14} \geq 2.5$ condition identify at least 1.5 times as many fossil groups as the F$_{\rm 12}$ condition, it also identifies groups in which the fossil phase lasts significantly longer. This can be explained by our analysis of the halo mass distribution within F$_{\rm 12}$ and F$_{\rm 14}$, already discussed in Sec.~\ref{abundance}. \begin{figure} \epsfig{file=ali-fig-10.eps,width=3.5in} \caption{The fractions of F$_{\rm 12}$ ({\it black line}) and F$_{\rm 14}$ ({\it thick red line}) groups, identified at redshift $z\!=\! 0$, that survive as fossils, as a function of look-back time.} \label{alifig10} \end{figure} \subsection{Comparison with observed groups} \label{comparison} When making detailed comparisons between simulations and catalogues of galaxies and groups compiled from observations, one has to be aware that simulated dark matter haloes have limited resolution, as mentioned in \S~\ref{SAMcroton} and \S~\ref{SAMbower}. This means that not all galaxies in the semi-analytic models assigned to a particular halo necessarily belong to dark matter subhaloes. We find that, even after applying a magnitude cut, there would be a significant number of modelled galaxies, whose orbits are analytically calculated, that would end up not being members of any sub-halo, and would thus not be classified as group members.
While dealing with the magnitude gaps $\Delta m_{12}$ and $\Delta m_{14}$, it is worth examining to what extent these quantities are vulnerable to resolution effects such as the above. A direct way would be to compare the magnitude gap distribution of groups, selected based on their $\Delta m_{\rm 14}$, between simulations and observations. This is not a straightforward task, as groups identified from observational sky surveys are biased due to incompleteness in measured magnitude and redshift. Furthermore, a variety of group-finding algorithms have been adopted to identify groups in simulations and observations, which adds uncertainties to any such comparison. Here, we use the group catalogue of \citet{Yang07}, which applies a halo-based group finder to the Sloan Digital Sky Survey (SDSS DR4). They define groups as systems whose dark matter haloes have an overdensity of 180, determined from dynamics. This makes the catalogue suitable for comparison with the Millennium simulation, where dark matter haloes have an overdensity of 200. From Sample~II of the catalogue, groups with the following properties are selected: \begin{itemize} \item they have at least four members, \item they are within the redshift range $0.01 \leq z \leq 0.1$, and \item their estimated halo mass satisfies $\log (M(R_{\rm 180})/h^{-1}\,$M$_{\odot}) \geq 13.25$, since our Millennium X-ray groups have a similar mass threshold (see Fig.~\ref{alifig01}). \end{itemize} After applying the above criteria, 1697 groups were identified, and their magnitude gaps were compared with those of galaxy groups selected from the Millennium simulation at redshift $z \sim 0.041$. The magnitude gap distributions from both SDSS-DR4 and the Millennium simulation are shown in Fig.~\ref{alifig11}. Panels {\bf a} and {\bf b} of Fig.~\ref{alifig11} refer to the magnitude gap distributions of $\Delta m_{\rm 12}$ and $\Delta m_{\rm 14}$ respectively. The results show that the estimated magnitude gaps are in fair agreement with observation.
The fraction of groups with $\Delta m_{\rm 12} \geq 2.0$ is more or less the same, while the fraction of galaxy groups with $\Delta m_{\rm 14} \geq 2.5$ differs by $\sim 1\%$ (see Table~\ref{sdssMill}). This shows that the incompleteness resulting from limited resolution does not affect our statistics. \begin{figure} \epsfig{file=ali-fig-11-new.eps,width=3.5in} \caption{The $R$-band magnitude gap distribution for haloes from the Millennium semi-analytic models of \citet{Bower06} ({\it red histograms}) superposed on the $r$-band data from Sample~II of the SDSS-DR4 group catalogue of \citet{Yang07} ({\it black histograms}). (a) The magnitude gap $\Delta m_{\rm 12}$ between the first and second most luminous galaxies, compared with galaxies from the SDSS-DR4 catalogue of groups, computed within the group radius. (b) The same as in (a), but for the magnitude gap $\Delta m_{14}$ between the first and the fourth most luminous galaxies. The 1697 SDSS-DR4 groups are within the mass range $\log (M(R_{\rm 180})/h^{-1}\,$M$_{\odot}) \geq 13.25$, and redshift range $0.01 \leq z \leq 0.1$.
Those from the Millennium simulations consist of 14612 X-ray groups selected at redshift $z \sim 0.04$ in the same mass range.} \label{alifig11} \end{figure} \begin{table} \centering \caption{Comparison between the observed and simulated fraction of groups with magnitude gaps $\Delta m_{\rm 12}$ and $\Delta m_{\rm 14}$, estimated from the histograms presented in Fig.~\ref{alifig11}.} \begin{tabular}{lll} \\ \hline Selection criterion & SDSS (DR4) & Millennium simulation \\ & \citep{Yang07} & \citep{Bower06} \\ \hline $\Delta m_{\rm 12} \geq$2.0 & $2.0\% \pm 0.4$ & $2.1\%\pm0.2$ \\ $\Delta m_{\rm 14} \geq$2.5 & $6.2\% \pm 0.6$ & $5.1\%\pm0.2$ \\\hline \\ \end{tabular} \label{sdssMill} \end{table} \section{Discussion and conclusions} \label{Discussion} In this work, we analysed the evolution of the magnitude gap (the difference in magnitude of the brightest and the $n$th brightest galaxies) in galaxy groups. Using the Millennium dark matter simulations and associated semi-analytical galaxy catalogues and gas simulations, we investigated how the magnitude gap statistics are related to the history of mass assembly of the group, assessing whether its use as an age indicator is justified. A catalogue of galaxy groups, compiled from the Millennium dark matter simulations, was cross-correlated with catalogues resulting from hot gas simulations, and from semi-analytic galaxy evolution models based on these simulations. This resulted in a list of groups, with various properties of the associated dark matter haloes and galaxies, at 21 time steps, over the redshift range $z \simeq 1.0$ to $z\! =\! 0$. The simulated X-ray emitting hot IGM properties were known for these haloes only at $z\! =\! 0$, and these were used to define a sample of X-ray emitting groups. This is necessary since our objective was to examine the evolution of fossil groups, which are observationally defined in terms of both optical and X-ray parameters.
We compared the estimated magnitude gaps in these galaxy groups from the two different semi-analytic models of \citet{Bower06} and \citet{Croton06}, based on the Millennium dark matter simulations, and found that the model of \citet{Bower06} better matches the observed present-day distribution of the difference in magnitude between the brightest galaxy in each group and the second and third brightest galaxies, $\Delta m_{12}$ and $\Delta m_{13}$. We decided to use the \citet{Bower06} catalogue for the rest of this study. We examined the evolution with time of fossil galaxy groups, conventionally defined as those with an $R$-band difference in magnitude between the two brightest galaxies $\Delta m_{\rm 12}\!\geq\! 2$ (within $0.5 \, R_{\rm 200}$ of the group centre). We explored the nature of the groups that would be selected if the radius of the group were extended to $R_{\rm 200}$, and the definition of the magnitude gap in terms of $\Delta m_{\rm 1i}$ were varied. Our major conclusions from the analyses can be summarised as follows: \begin{enumerate} \item The parameter $\Delta m_{1i}$, defined for a galaxy system as the magnitude gap between the first and $i^{th}$ brightest galaxies (estimated within a radius of $0.5\,R_{\rm 200}$ or $R_{\rm 200}$), can be shown to be linked to the halo mass assembly of the system, $\alpha_{\rm 1.0}$ (Fig.~\ref{alifig07}), such that {\it galaxy systems with larger magnitude gaps $\Delta m_{1i}$ are more likely to be early-formed than those with smaller magnitude gaps}. \item Fig.~\ref{alifig06} shows that, contrary to expectation, irrespective of the redshift at which fossil groups are identified according to the usual criteria, after $\sim$4 Gyr, more than $\sim$90\% of them become non-fossils according to the magnitude gap criterion. Over the span of 7.7~Gyr, the time interval between $z=1$ and $z=0$, very few groups retain a two-magnitude gap between the two brightest galaxies.
This provides clear evidence that the fossil phase is a temporary phase in the life of fossil groups \citep[also see][]{vbb08}. \item In a given galaxy group, the merging of the $i^{th}$ brightest (or a brighter) galaxy with the brightest galaxy in the group (often the central galaxy, if there is one) results in an increase of $\Delta m_{1i}$. However, one of the main reasons for the fossil phase being transient is that such a magnitude gap can be filled by the infall of equally massive galaxies into the core of the group, which leads to a decrease in $\Delta m_{1i}$. Therefore, groups with smaller magnitude gaps are not necessarily late-formed systems. Many groups spend a part of their life in such a fossil phase, though an overwhelming majority of them would not fulfil the criteria of the ``fossil'' label at all epochs. \item For our sample of X-ray bright groups, the optical criterion $\Delta m_{\rm 14} \geq 2.5$ in the $R$-band is more efficient in identifying early-formed groups than the condition $\Delta m_{\rm 12} \geq 2.0$ (for the same filter), and is shown to identify at least $50\%$ more early-formed groups. Furthermore, for the groups selected by the $\Delta m_{\rm 14} \geq 2.5$ criterion, the {\it fossil phase} is seen on average to last $\sim 1.0~$Gyr longer than for their counterparts selected using the conventional criterion. \item Groups selected according to $\Delta m_{\rm 14} \geq 2.5$ at $z=0$ correspond to $\sim 75\%$ of those identified using the $\Delta m_{\rm 12} \geq 2.0$ criterion. On comparing different panels in Fig.~\ref{alifig08}, one finds that early-formed groups identified from their large magnitude gaps (either $\Delta m_{\rm 12} \geq 2.0$ or $\Delta m_{\rm 14} \geq 2.5$) represent a small fraction (18\% for F$_{\rm 14}$ and 8\% for F$_{\rm 12}$) of the overall population of early-formed systems. This is especially noticeable in the high-mass regime. \item Finally, Fig.~\ref{alifig09} shows that in comparison to conventional fossils (i.e.
F$_{\rm 12}$ groups), the F$_{\rm 14}$ groups identified based on $\Delta m_{\rm 14} \geq 2.5$ predominantly correspond to systems with halo masses $M(R_{\rm 200}) \leq 10^{14}\,h^{-1}\,$M$_{\odot}$. This makes the criterion $\Delta m_{\rm 12} \geq 2.0$ marginally more efficient than $\Delta m_{\rm 14} \geq 2.5$ in identifying massive early-formed systems. \end{enumerate} These results depend to some extent on the semi-analytic model employed in our current analysis, and the statistics might change if one uses a different semi-analytic model of galaxy formation. Physical prescriptions such as galaxy merging, supernova and AGN feedback used in such models differ somewhat from one another. Furthermore, superfluous mergers may result from the algorithm used for the identification of haloes in the Millennium DM simulation, and this may affect the merger rates calculated from various studies, including ours, using these catalogues \citep[e.g.][]{Genel09}. Though merging is the most important process affecting galaxies in groups, there are other physical processes, such as ram pressure stripping, interactions and harassment, the group tidal field, and gas loss, that are not fully characterised by current semi-analytic models. This is partially due to the limited spatial resolution of the Millennium simulation. The new release of the current simulation, the Millennium-II Simulation, might help to address some of the above issues. The latter has 5 times better spatial resolution and 125 times better mass resolution \citep{Boylan09}. Future semi-analytic models based upon high-resolution simulations, incorporating such effects, would be worth employing in a similar investigation to find better observational indicators of the ages of galaxy groups and clusters. \section*{Acknowledgments} The Millennium Simulations used in this paper were carried out by the Virgo Supercomputing Consortium at the Computing Center of the Max-Planck Society in Garching.
The semi-analytic galaxy catalogues used in this study are publicly available at http://galaxy-catalogue.dur.ac.uk:8080/MyMillennium/. The Millennium Gas Simulations were carried out at the Nottingham HPC facility, as was much of the analysis required by this work. The SDSS-DR4 group catalogue of \citet{Yang07} used in this study is publicly available at http://www.astro.umass.edu/~xhyang/Group.html. AAD gratefully acknowledges Graham Smith, Malcolm Bremer and the anonymous referee for helpful discussions. The 2dFGRS group catalogue data \citep{Yang05,Yang07} used in this study was kindly provided by Frank C. van den Bosch and X. Yang.
\section{Introduction} \section{General Assumptions} Motivated by \cite{Tramontana2015L}, we consider a market served by firms with heterogeneous decision mechanisms producing homogeneous products. We use $q_i(t)$ to denote the output of firm $i$ at period $t$. The cost function of firm $i$ is supposed to be quadratic, i.e., $C_i(q_i)=c q_i^2$. Note that $c$ is a positive parameter and identical for all our firms. Furthermore, assume that the demand function of the market is isoelastic, which is founded on the hypothesis that the consumers have the Cobb-Douglas utility function. Hence, the price of the product should be $$p(Q)=\frac{1}{Q}=\frac{1}{\sum_i q_i},$$ where $Q=\sum_i q_i$ is the total supply. \section{Game of Two Firms}\label{sec:duopoly} First, let us consider a duopoly game, where the first firm adopts a so-called \emph{gradient adjustment mechanism}, while the second firm adopts the \emph{best response mechanism}. Both of these mechanisms are boundedly rational. To be exact, the first firm increases/decreases its output according to the information given by the marginal profit of the last period, i.e., at period $t+1$, \begin{equation} q_1(t+1)=q_1(t) + k q_1(t) \frac{\partial \Pi_1(t)}{\partial q_1(t)}, \end{equation} where $\Pi_1(t)=\frac{q_1(t)}{q_1(t)+q_2(t)}-cq_1^2(t)$ is the profit of firm 1 at period $t$, and $k>0$ is a parameter controlling the adjustment speed. It is worth noting that the adjustment speed depends not only upon the parameter $k$ but also upon the size of the firm $q_1(t)$. The second firm knows exactly the form of the price function, and thus can estimate its profit at period $t+1$ to be \begin{equation} \Pi_2^e(t+1)=\frac{q_2(t+1)}{q_1^e(t+1)+q_2(t+1)}-cq_2^2(t+1), \end{equation} where $q_1^e(t+1)$ is its expectation of the output at period $t+1$ of firm 1. It is realistic that firm 2 has no idea about its rival's production plan of the present period.
We suppose that firm 2 has a naive expectation that its competitor will produce the same quantity as in the last period, i.e., $q_1^e(t+1)=q_1(t)$. Hence, \begin{equation} \Pi_2^e(t+1)=\frac{q_2(t+1)}{q_1(t)+q_2(t+1)}-cq_2^2(t+1). \end{equation} In order to maximize the expected profit, the second firm tries to solve the first-order condition $\partial \Pi_2^e(t+1) / \partial q_2(t+1)=0$, i.e., \begin{equation}\label{eq:gb-first-order} q_1(t)-2\,cq_2(t+1)(q_1(t)+q_2(t+1))^2=0. \end{equation} It should be noted that \eqref{eq:gb-first-order} is a cubic polynomial equation. Although a general cubic polynomial has at most three real roots, it can be shown that \eqref{eq:gb-first-order} has exactly one positive real solution for $q_2(t+1)$, but its closed-form expression is particularly complex. However, we suppose that firm 2, by observing the rival's output at the last period, has the computational ability to find the best response, which is denoted by $R_2(q_1(t))$. Therefore, the model can be described by the following discrete dynamic system. \begin{equation}\label{eq:gb-map} T_{GB}(q_1,q_2): \left\{\begin{split} &q_1(t+1)=q_1(t) + k q_1(t)\left[\frac{q_2(t)}{(q_1(t)+q_2(t))^2}-2\,cq_1(t)\right],\\ &q_2(t+1)=R_2(q_1(t)). \end{split} \right. \end{equation} By setting $q_1(t+1)=q_1(t)=q_1$ and $q_2(t+1)=q_2(t)=q_2$, the equilibria can be identified from \begin{equation} \left\{\begin{split} &q_1=q_1 + k q_1\left(\frac{q_2}{(q_1+q_2)^2}-2\,cq_1\right),\\ &q_2=R_2(q_1), \end{split} \right. \end{equation} where $q_2=R_2(q_1)$ can be reformulated as $q_1-2\,cq_2(q_1+q_2)^2=0$ according to \eqref{eq:gb-first-order}. Thus, we have \begin{equation} \left\{\begin{split} &k q_1\left(\frac{q_2}{(q_1+q_2)^2}-2\,cq_1\right)=0,\\ &q_1-2\,cq_2(q_1+q_2)^2=0, \end{split} \right.
\end{equation} which has the unique solution $$E_{GB}^1=\left(\frac{1}{\sqrt{8c}},\frac{1}{\sqrt{8c}}\right).$$ It should be noted that $(0,0)$ is not an equilibrium, since the iteration map \eqref{eq:gb-map} is not defined there. In order to investigate the local stability of an equilibrium $(q_1^*,q_2^*)$, we consider the Jacobian matrix of the form \begin{equation} J_{GB}(q_1^*,q_2^*)=\left[\begin{matrix} \frac{\partial q_1(t+1)}{\partial q_1(t)}\big|_{(q_1^*,q_2^*)} & \frac{\partial q_1(t+1)}{\partial q_2(t)}\big|_{(q_1^*,q_2^*)}\\ \frac{\partial q_2(t+1)}{\partial q_1(t)}\big|_{(q_1^*,q_2^*)} & \frac{\partial q_2(t+1)}{\partial q_2(t)}\big|_{(q_1^*,q_2^*)}\\ \end{matrix} \right]. \end{equation} It is easy to obtain that \begin{equation} \begin{split} & \frac{\partial q_1(t+1)}{\partial q_1(t)}\big|_{(q_1^*,q_2^*)}= 1+kq_2^*\frac{q_2^*-q_1^*}{(q_1^*+q_2^*)^3}-4\,ckq_1^*,\\ &\frac{\partial q_1(t+1)}{\partial q_2(t)}\Big|_{(q_1^*,q_2^*)}= kq_1^*\frac{q_1^*-q_2^*}{(q_1^*+q_2^*)^3}. \end{split} \end{equation} Furthermore, the derivative of $q_2(t+1)$ with respect to $q_2(t)$ is $0$, as $R_2$ does not involve $q_2(t)$. However, the derivative of $q_2(t+1)$ with respect to $q_1(t)$ cannot be obtained directly. By implicit differentiation, we obtain \begin{equation} \frac{\partial q_2(t+1)}{\partial q_1(t)}\Big|_{(q_1^*,q_2^*)} =-\frac{4\,cq_1^*q_2^*+4\,cq_2^{*2}-1}{2\,c(q_1^{*2}+4\,q_1^*q_2^*+3\,q_2^{*2})}. \end{equation} At $E_{GB}^1=(1/\sqrt{8c},1/\sqrt{8c})$, we have \begin{equation} J_{GB}(E_{GB}^1)=\left[\begin{matrix} 1-k\sqrt{2\,c} & 0\\ 0 & 0\\ \end{matrix} \right]. \end{equation} Obviously, its eigenvalues are $\lambda_1=1-k\sqrt{2\,c}$ and $\lambda_2=0$. Hence, $E_{GB}^1$ is locally stable if and only if $|1-k\sqrt{2\,c}|<1$, i.e., $k\sqrt{c}<\sqrt{2}$. We summarize the above results in the following proposition.
\begin{proposition} The $T_{GB}$ model described by \eqref{eq:gb-map} has a unique equilibrium $$\left(\frac{1}{\sqrt{8c}}, \frac{1}{\sqrt{8c}}\right),$$ which is locally stable provided that \begin{equation}\label{eq:gb-stable-cd} k\sqrt{c}<\sqrt{2}. \end{equation} \end{proposition} \section{Game of Three Firms} In this section, we introduce a new boundedly rational player and add it to the model of the previous section. This player is assumed to adopt an \emph{adaptive mechanism}, which means that at each period $t+1$ it decides the quantity of production $q_3(t+1)$ according to the previous output $q_3(t)$ as well as its expectations of the other two competitors. It is also supposed that this player naively expects that at period $t+1$ firms 1 and 2 will produce the same quantities as at period $t$. Therefore, the third firm can calculate the best response $R_3(q_1(t),q_2(t))$ to maximize its expected profit. Similarly to \eqref{eq:gb-first-order}, $R_3(q_1(t),q_2(t))$ is the solution for $q_3'(t+1)$ of the following equation. \begin{equation} q_1(t)+q_2(t)-2\,cq_3'(t+1)(q_1(t)+q_2(t)+q_3'(t+1))^2=0. \end{equation} The adaptive decision mechanism for firm 3 is that it chooses the output $q_3(t+1)$ proportionally as $$q_3(t+1)=(1-l)q_3(t)+lR_3(q_1(t),q_2(t)),$$ where $l\in(0,1]$ is a parameter controlling the proportion. Hence, the triopoly can be described by \begin{equation}\label{eq:gba-map} T_{GBA}(q_1,q_2,q_3): \left\{\begin{split} &q_1(t+1)=q_1(t) + k q_1(t)\left[\frac{q_2(t)+q_3(t)}{(q_1(t)+q_2(t)+q_3(t))^2}-2\,cq_1(t)\right],\\ &q_2(t+1)=R_2(q_1(t),q_3(t)),\\ &q_3(t+1)=(1-l)q_3(t)+l R_3(q_1(t),q_2(t)). \end{split} \right. \end{equation} Similarly to Section \ref{sec:duopoly}, the equilibria satisfy \begin{equation} \left\{\begin{split} &k q_1\left(\frac{q_2+q_3}{(q_1+q_2+q_3)^2}-2\,cq_1\right)=0,\\ &q_1+q_3-2\,cq_2(q_1+q_2+q_3)^2=0,\\ &q_1+q_2-2\,cq_3(q_1+q_2+q_3)^2=0, \end{split} \right.
\end{equation} whose solutions are \begin{equation} \begin{split} E_{GBA}^1=&~\left(0,\frac{1}{\sqrt{8c}},\frac{1}{\sqrt{8c}}\right),\\ E_{GBA}^2=&~\left(\frac{1}{\sqrt{9c}},\frac{1}{\sqrt{9c}},\frac{1}{\sqrt{9c}}\right). \end{split} \end{equation} For an equilibrium $(q_1^*,q_2^*,q_3^*)$, the Jacobian matrix of $T_{GBA}$ takes the form \begin{equation} J_{GBA}(q_1^*,q_2^*,q_3^*)=\left[\begin{matrix} \frac{\partial q_1(t+1)}{\partial q_1(t)}\big|_{(q_1^*,q_2^*,q_3^*)} & \frac{\partial q_1(t+1)}{\partial q_2(t)}\big|_{(q_1^*,q_2^*,q_3^*)} & \frac{\partial q_1(t+1)}{\partial q_3(t)}\big|_{(q_1^*,q_2^*,q_3^*)}\\ \frac{\partial q_2(t+1)}{\partial q_1(t)}\big|_{(q_1^*,q_2^*,q_3^*)} & \frac{\partial q_2(t+1)}{\partial q_2(t)}\big|_{(q_1^*,q_2^*,q_3^*)} & \frac{\partial q_2(t+1)}{\partial q_3(t)}\big|_{(q_1^*,q_2^*,q_3^*)}\\ \frac{\partial q_3(t+1)}{\partial q_1(t)}\big|_{(q_1^*,q_2^*,q_3^*)} & \frac{\partial q_3(t+1)}{\partial q_2(t)}\big|_{(q_1^*,q_2^*,q_3^*)} & \frac{\partial q_3(t+1)}{\partial q_3(t)}\big|_{(q_1^*,q_2^*,q_3^*)}\\ \end{matrix} \right]. \end{equation} The first and second rows of the matrix can be computed as in Section \ref{sec:duopoly}. For the third row, we have \begin{equation} \begin{split} \frac{\partial q_3(t+1)}{\partial q_1(t)}=&~l\frac{\partial R_3(q_1(t),q_2(t))}{\partial q_1(t)},\\ \frac{\partial q_3(t+1)}{\partial q_2(t)}=&~l\frac{\partial R_3(q_1(t),q_2(t))}{\partial q_2(t)},\\ \frac{\partial q_3(t+1)}{\partial q_3(t)}=&~1-l,\\ \end{split} \end{equation} where ${\partial R_3(q_1(t),q_2(t))}/{\partial q_1(t)}$ and ${\partial R_3(q_1(t),q_2(t))}/{\partial q_2(t)}$ can be obtained by implicit differentiation.
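Since neither $R_2$ nor $R_3$ has a convenient closed form, in practice they can be evaluated numerically. The sketch below (our own illustration, not part of the paper; the values $c=0.25$, $q_1=1$ are arbitrary) computes the duopoly best response $R_2$ of \eqref{eq:gb-first-order} by bisection and checks the implicit-differentiation formula for $\partial q_2(t+1)/\partial q_1(t)$ against a finite difference:

```python
# Numerical sketch (our illustration): evaluate the duopoly best response
# R_2 by bisection and check the implicit-differentiation derivative
# against a finite-difference estimate.  c = 0.25 and q1 = 1.0 are
# arbitrary test parameters.

def best_response(q1, c, lo=1e-9, hi=100.0, tol=1e-12):
    """Solve q1 - 2*c*q2*(q1 + q2)**2 = 0 for q2.

    The left-hand side is strictly decreasing in q2 for q2 > 0, so the
    positive root is unique and bisection applies."""
    f = lambda q2: q1 - 2.0 * c * q2 * (q1 + q2) ** 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def dR2_dq1(q1, q2, c):
    """Implicit-differentiation formula for dq2(t+1)/dq1(t) from the text."""
    return -(4*c*q1*q2 + 4*c*q2**2 - 1) / (2*c*(q1**2 + 4*q1*q2 + 3*q2**2))

c, q1 = 0.25, 1.0
q2 = best_response(q1, c)          # R_2(q1)
h = 1e-6
numeric = (best_response(q1 + h, c) - best_response(q1 - h, c)) / (2 * h)
analytic = dR2_dq1(q1, q2, c)
print(q2, numeric, analytic)       # numeric and analytic derivatives agree
```

The same scheme applies to $R_3$, with $q_1(t)$ replaced by the aggregate rival output $q_1(t)+q_2(t)$.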
From an economic point of view, we only consider the positive equilibrium $E^2_{GBA}$, where the Jacobian matrix is \begin{equation} J_{GBA}(E^2_{GBA})=\left[\begin{matrix} 1-{10\,k\sqrt{c}}/{9} & -k\sqrt{c}/9 & -k\sqrt{c}/9\\ -{1}/{10} & 0 & -{1}/{10}\\ -{l}/{10} & -{l}/{10} & 1-l \end{matrix} \right]. \end{equation} Let $A$ be the characteristic polynomial of a Jacobian matrix $J$. The eigenvalues of $J$ are simply the roots of the polynomial $A$ in $\lambda$. So the problem of stability analysis can be reduced to that of determining whether all the roots of $A$ lie in the open unit disk $|\lambda|<1$. To the best of our knowledge, in addition to the Routh-Hurwitz criterion \cite{Oldenbourg1948T} generalized from the corresponding criterion for continuous systems, there are two other criteria, the Schur-Cohn criterion \cite[pp.\,246--248]{Elaydi2005U} and the Jury criterion \cite{Jury1976I}, available for discrete dynamical systems. In what follows, we provide a short review of the Schur-Cohn criterion. \begin{proposition}[Schur-Cohn Criterion]\label{prop:Schur-Cohn} For an $n$-dimensional discrete dynamic system, assume that the characteristic polynomial of its Jacobian matrix is \begin{equation*} A= \lambda^n + a_{n-1}\lambda^{n - 1} + \cdots + a_0. \end{equation*} Consider the sequence of determinants $D^\pm_1$, $D^\pm_2$, $\ldots$, $D^\pm_n$, where \begin{equation*} \begin{split} D^{\pm}_i =&\left| \left( \begin{array}{ccccc} 1&a_{n-1}&a_{n-2}&\cdots&a_{n-i+1}\\ 0&1&a_{n-1}&\cdots&a_{n-i+2}\\ 0&0&1&\cdots&a_{n-i+3}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&1\\ \end{array} \right)\pm\left( \begin{array}{ccccc} a_{i-1}&a_{i-2}&\cdots&a_{1}&a_0\\ a_{i-2}&a_{i-3}&\cdots&a_{0}&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ a_{1}&a_0&\cdots&0&0\\ a_0&0&\cdots&0&0\\ \end{array} \right)\right|.
\end{split} \end{equation*} The characteristic polynomial $A$ has all its roots inside the open unit disk if and only if \smallskip \begin{enumerate} \item $A(1)>0$ and $(-1)^nA(-1)>0$, \item $D^\pm_1>0, D^\pm_3>0, \ldots, D^\pm_{n-3}>0, D^\pm_{n-1}>0$ (when $n$ is even), or\\[2pt] \smallskip $D^\pm_2>0, D^\pm_4>0, \ldots, D^\pm_{n-3}>0, D^\pm_{n-1}>0$ (when $n$ is odd). \end{enumerate} \end{proposition} \begin{corollary} Consider a $3$-dimensional discrete dynamic system with the characteristic polynomial of its Jacobian matrix of the form $$A=\lambda^3+a_2\lambda^2+a_1\lambda+a_0.$$ An equilibrium $E$ is locally stable if and only if the following inequalities are satisfied at $E$. \begin{equation}\label{eq:cd4-stable} \left\{\begin{split} &1+a_2+a_1+a_0>0,\\ &1-a_2+a_1-a_0>0,\\ &-a_0^2-a_0a_2+a_1+1>0,\\ &-a_0^2+a_0a_2-a_1+1>0. \end{split}\right. \end{equation} \end{corollary} For the $3$-dimensional discrete dynamic system \eqref{eq:gba-map}, it is easy to verify that at the unique positive equilibrium $E_{GBA}^2$ the local stability condition \eqref{eq:cd4-stable} can be reformulated as \begin{equation}\label{eq:inequality-4} CD_{GBA}^1>0,~CD_{GBA}^2>0,~CD_{GBA}^3<0,~CD_{GBA}^4<0, \end{equation} where \begin{equation} \begin{split} CD_{GBA}^1=&~ kl\sqrt{c},\\ CD_{GBA}^2=&~ 504\,kl\sqrt{c}-1010\,k\sqrt{c}-909\,l+1800,\\ CD_{GBA}^3=&~ 324\,ck^2l^2-18360\,ck^2l+10100\,ck^2-16524\,kl^2\sqrt{c}-840420\,kl\sqrt{c}\\ &+8181\,l^2+891000\,k\sqrt{c}+801900\,l-1620000,\\ CD_{GBA}^4=&~ 36\,ck^2l^2+1960\,ck^2l+1764\,kl^2\sqrt{c}-1100\,ck^2+93420\,kl\sqrt{c}\\ &-99000\,k\sqrt{c}-891\,l^2-89100\,l. \end{split} \end{equation} Obviously, $CD_{GBA}^1>0$ can be ignored, as it always holds for all parameter values with $k>0$, $c>0$ and $0<l\leq 1$. A further question is whether the other three inequalities can be simplified. To answer this question, we investigate the inclusion relations of these inequalities.
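Sign checks of this kind are easy to automate. For instance, the following snippet (our own, not part of the paper) evaluates the four quantities at the sample parameter point $(k,l,c)=(455/256,\,71/256,\,1/4)$, one admissible choice with $k>0$, $0<l\leq 1$, $c>0$:

```python
from math import sqrt

# Evaluate CD_GBA^1..4 (as defined in the text) at one sample parameter
# point; (k, l, c) = (455/256, 71/256, 1/4) lies inside the stability region.
k, l, c = 455/256, 71/256, 1/4
s = sqrt(c)

cd1 = k*l*s
cd2 = 504*k*l*s - 1010*k*s - 909*l + 1800
cd3 = (324*c*k**2*l**2 - 18360*c*k**2*l + 10100*c*k**2
       - 16524*k*l**2*s - 840420*k*l*s + 8181*l**2
       + 891000*k*s + 801900*l - 1620000)
cd4 = (36*c*k**2*l**2 + 1960*c*k**2*l + 1764*k*l**2*s
       - 1100*c*k**2 + 93420*k*l*s - 99000*k*s
       - 891*l**2 - 89100*l)

# All four stability conditions hold at this point.
print(cd1 > 0, cd2 > 0, cd3 < 0, cd4 < 0)
```

Repeating this at one sample point per region carved out by the surfaces $CD_{GBA}^i=0$ determines where the full condition holds.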
It is worth noticing that the surfaces $CD_{GBA}^2=0$, $CD_{GBA}^3=0$ and $CD_{GBA}^4=0$ divide the parameter space $\{(k,l,c)\,|\,k>0,1\geq l>0,c>0\}$ of our concern into a number of connected regions. Moreover, in a given region, the signs of $CD_{GBA}^i$ ($i=1,2,3,4$) are invariant. This means that in each of these regions we can identify whether the inequalities in \eqref{eq:inequality-4} are satisfied by checking them at a single sample point. For simple cases, the selection of sample points might be done by hand. Generally, however, the selection can be automated by using, e.g., the partial cylindrical algebraic decomposition (PCAD) method \cite{Collins1991P}. \begin{table}[htbp] \centering \caption{Stability Condition of $T_{GBA}$ at Selected Sample Points} \label{tab:gba-sample} \begin{tabular}{|l|c|c|c|c|} \hline sample point of $(k,l,c)$ & $CD_{GBA}^1>0$ & $CD_{GBA}^2>0$ & $CD_{GBA}^3<0$ & $CD_{GBA}^4<0$\\ \hline (455/256, 71/256, 1/4) & true & true & true & true \\ \hline (31/8, 71/256, 1/4) & true & false & true & true \\ \hline (601/128, 71/256, 1/4)&true& false& false& true\\ \hline (453/256, 183/256, 1/4)&true & true& true& true\\ \hline (1439/256, 183/256, 1/4) &true& false& true& true\\ \hline (1577/16, 183/256, 1/4)&true& false& false& true\\ \hline (49855/256, 183/256, 1/4)&true& false& true& true \\ \hline (25673/128, 183/256, 1/4)& true& false& true& false\\ \hline (451/256, 15/16, 1/4)&true& true& true& true\\ \hline (5237/256, 15/16, 1/4)&true& false& true& true\\ \hline (2425/64, 15/16, 1/4)&true& false& true& false\\ \hline \end{tabular} \end{table} In Table \ref{tab:gba-sample}, we list the selected sample points such that there is at least one point in each region. The four inequalities in \eqref{eq:inequality-4} are verified at these sample points one by one, with the results also given in Table \ref{tab:gba-sample}. It is observed that at the sample points where $CD_{GBA}^2>0$ is true, the other three inequalities are also true.
Hence, if $CD_{GBA}^2>0$ is satisfied, then all four inequalities in \eqref{eq:inequality-4} are satisfied. In other words, only $CD_{GBA}^2>0$ is needed to detect local stability. Furthermore, $CD_{GBA}^2>0$ is equivalent to \begin{equation*} k\sqrt{c}<\frac{9(101\,l-200)}{2(252\,l-505)}. \end{equation*} Therefore, we summarize the obtained results in the following proposition. \begin{proposition} The $T_{GBA}$ model described by \eqref{eq:gba-map} has a unique positive equilibrium $$\left(\frac{1}{\sqrt{9c}}, \frac{1}{\sqrt{9c}}, \frac{1}{\sqrt{9c}}\right),$$ which is locally stable provided that \begin{equation} k\sqrt{c}<\frac{9(101\,l-200)}{2(252\,l-505)}. \end{equation} \end{proposition} Furthermore, we have the following result. \begin{proposition} The stability region of the $T_{GBA}$ model is strictly larger than that of $T_{GB}$. \end{proposition} \begin{proof} It suffices to prove that $$\frac{9(101\,l-200)}{2(252\,l-505)}>\sqrt{2},$$ which is equivalent to \begin{equation*}\label{eq:gba-gb} 9(101\,l-200) < 2\sqrt{2}(252\,l-505) \end{equation*} since $252\,l-505<0$. It is easy to see that the above inequality can be reformulated as $$(909-504\sqrt{2})l<(1800-1010\sqrt{2}),$$ which holds for all $0<l\leq 1$, since the left-hand side is linear in $l$ and the inequality is satisfied at both endpoints $l=0$ and $l=1$. This completes the proof. \end{proof} \section{Game of Four Firms} In this section, we introduce an additional player. The fourth firm adopts the so-called \emph{local monopolistic approximation} (LMA) mechanism \cite{Tuinstra2004A}, which is also a boundedly rational adjustment process. In this process, the player has only limited knowledge of the demand function. To be exact, the firm can observe the current market price $p(t)$ and the corresponding total supply $Q(t)$, and is able to correctly estimate the slope $p'(Q(t))$ of the price function around the point $(p(t),Q(t))$.
Then, the firm uses this information to conjecture the demand function and expects the price at period $t+1$ to be $$p^e(t+1)=p(Q(t))+p'(Q(t))(Q^e(t+1)-Q(t)),$$ where $Q^e(t+1)$ represents the expected aggregate production at period $t+1$. Moreover, firm $4$ is also assumed to hold naive expectations about its rivals, i.e., $$Q^e(t+1)=q_1(t)+q_2(t)+q_3(t)+q_4(t+1).$$ Thus, we have that $$p^e(t+1)=\frac{1}{Q(t)}-\frac{1}{Q^2(t)}(q_4(t+1)-q_4(t)).$$ The expected profit of the fourth firm is $$\Pi^e_4(t+1)=p^e(t+1)q_4(t+1)-cq_4^2(t+1).$$ To maximize the expected profit, firm $4$ chooses its output at period $t+1$ to be the solution of the first-order condition, namely $$q_4(t+1)=\frac{2\,q_4(t)+q_1(t)+q_2(t)+q_3(t)}{2(1+c(q_1(t)+q_2(t)+q_3(t)+q_4(t))^2)}.$$ Therefore, the new model can be described by the following $4$-dimensional discrete dynamic system. \begin{equation}\label{eq:gbal-map} \begin{split} &T_{GBAL}(q_1,q_2,q_3,q_4): \\ &\left\{\begin{split} &q_1(t+1)=q_1(t) + k q_1(t)\left[\frac{q_2(t)+q_3(t)+q_4(t)}{(q_1(t)+q_2(t)+q_3(t)+q_4(t))^2}-2\,cq_1(t)\right],\\ &q_2(t+1)=R_2(q_1(t),q_3(t),q_4(t)),\\ &q_3(t+1)=(1-l)q_3(t)+l R_3(q_1(t),q_2(t),q_4(t)),\\ &q_4(t+1)=\frac{2\,q_4(t)+q_1(t)+q_2(t)+q_3(t)}{2(1+c(q_1(t)+q_2(t)+q_3(t)+q_4(t))^2)}. \end{split} \right. \end{split} \end{equation} Similarly, we know that the equilibria are described by \begin{equation} \left\{\begin{split} &k q_1\left(\frac{q_2+q_3+q_4}{(q_1+q_2+q_3+q_4)^2}-2\,cq_1\right)=0,\\ &q_1+q_3+q_4-2\,cq_2(q_1+q_2+q_3+q_4)^2=0,\\ &q_1+q_2+q_4-2\,cq_3(q_1+q_2+q_3+q_4)^2=0,\\ &q_4-\frac{2\,q_4+q_1+q_2+q_3}{2(1+c(q_1+q_2+q_3+q_4)^2)}=0, \end{split} \right.
\end{equation} which has the two solutions \begin{equation} \begin{split} E_{GBAL}^1=&~\left(0,\frac{1}{\sqrt{9c}},\frac{1}{\sqrt{9c}},\frac{1}{\sqrt{9c}}\right),\\ E_{GBAL}^2=&~\left(\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}}\right).\\ \end{split} \end{equation} Hence, there exists a unique positive equilibrium $E_{GBAL}^2$, where the Jacobian matrix of $T_{GBAL}$ is \begin{equation} J_{GBAL}(E^2_{GBAL})=\left[\begin{matrix} 1-{3\,k\sqrt{6c}}/8 & -k\sqrt{6c}/{24} & -k\sqrt{6c}/{24} & -k\sqrt{6c}/{24}\\ -{1}/{9} & 0 & -{1}/{9} & -{1}/{9}\\ -{l}/{9} & -{l}/{9} & 1-l & -{l}/{9}\\ -{1}/{10} & -{1}/{10} & -{1}/{10} & {1}/{10}\\ \end{matrix} \right]. \end{equation} By virtue of Proposition \ref{prop:Schur-Cohn}, we have the following corollary. \begin{corollary}\label{cor:4-dim-stable} Consider a $4$-dimensional discrete dynamic system with the characteristic polynomial of its Jacobian matrix of the form $$A=\lambda^4+a_3\lambda^3+a_2\lambda^2+a_1\lambda+a_0.$$ An equilibrium $E$ is locally stable if and only if the following inequalities are satisfied at $E$. \begin{equation}\label{eq:cd6-stable} \left\{\begin{split} &1+a_3+a_2+a_1+a_0>0,\\ &1-a_3+a_2-a_1+a_0>0,\\ &-a_0^3-a_0^2a_2+a_0a_1a_3+a_0a_3^2-a_0^2-a_1^2-a_1a_3+a_0+a_2+1>0,\\ &a_0^3-a_0^2a_2+a_0a_1a_3-a_0a_3^2-a_0^2+2\,a_0a_2-a_1^2+a_1a_3-a_0-a_2+1>0,\\ &1+a_0>0,\\ &1-a_0>0. \end{split}\right.
\end{equation} \end{corollary} For the $4$-dimensional discrete dynamic system \eqref{eq:gbal-map}, it is easy to verify that at the unique positive equilibrium $E_{GBAL}^2$ the above condition \eqref{eq:cd6-stable} could be reformulated to \begin{equation}\label{eq:inequality-6} \begin{split} CD_{GBAL}^1>0,~CD_{GBAL}^2>0,~CD_{GBAL}^3>0,\\ CD_{GBAL}^4<0,~CD_{GBAL}^5<0,~CD_{GBAL}^6>0, \end{split} \end{equation} where \begin{equation} \begin{split} CD_{GBAL}^1=&~ kl\sqrt{32c/3},\\ CD_{GBAL}^2=&~ (512\, k l - 1017\, k )\sqrt{32c/3} - 3616\, l + 7056,\\ CD_{GBAL}^3=&~ (28672\,k^3l^3-1062432\,k^3l^2+9180054\,k^3l-12603681\,k^3)(\sqrt{32c/3})^3\\ &+(-3777536\,k^2l^3+179157888\,k^2l^2-1194862752\,k^2l+945483840\,k^2)(\sqrt{32c/3})^2\\ &+(116054016\,kl^3-4248400896\,kl^2-5573546496\,kl+13237426944\,k)\sqrt{32c/3}\\ &-566525952\,l^3+11952783360\,l^2+47066406912\,l-133145026560,\\ CD_{GBAL}^4=&~ (3616\,k^3l^3-132966\,k^3l^2-512973\,k^3l+1226907\,k^3)(\sqrt{32c/3})^3\\ &+(-472768\,k^2l^3+16419744\,k^2l^2+77813136\,k^2l-83525904\,k^2)(\sqrt{32c/3})^2\\ &+(-6484992\,kl^3+276668928\,kl^2+1145829888\,kl-1868106240\,k)\sqrt{32c/3}\\ &+55148544\,l^3-1055932416\,l^2-6642155520\,l,\\ CD_{GBAL}^5=&~ (16\,kl-27\,k)\sqrt{32c/3}-96\,l-12816,\\ CD_{GBAL}^6=&~ (16\,kl-27\,k)\sqrt{32c/3}-96\,l+13104,\\ \end{split} \end{equation} \begin{table}[htbp] \centering \caption{Stability Condition of $T_{GBAL}$ at Selected Sample Points} \label{tab:gbal-sample} \begin{tabular}{|l|c|c|c|} \hline sample point of $(k,l,c)$ & $CD_{GBAL}^1>0$ & $CD_{GBAL}^2>0$ & $CD_{GBAL}^3>0$ \\ \hline (55/64, 109/256, 3/2) & true& true& true \\ \hline (243/128, 109/256, 3/2) & true& false& true \\ \hline (301/32, 109/256, 3/2)& true& false& false\\ \hline (271/16, 109/256, 3/2)& true& false& true\\ \hline (5725/64, 109/256, 3/2)& true& false& true\\ \hline (20771/128, 109/256, 3/2)& true& false& true\\ \hline (109/128, 119/128, 3/2)& true& true& true\\ \hline (1275/256, 119/128, 3/2)& true& false& true\\ \hline 
(35405/256, 119/128, 3/2)& true& false& true\\ \hline (34413/128, 119/128, 3/2)& true& false& true\\ \hline \end{tabular} \begin{tabular}{|l|c|c|c|} \hline sample point of $(k,l,c)$ & $CD_{GBAL}^4<0$ & $CD_{GBAL}^5<0$ & $CD_{GBAL}^6>0$\\ \hline (55/64, 109/256, 3/2) & true& true& true \\ \hline (243/128, 109/256, 3/2) & true& true& true \\ \hline (301/32, 109/256, 3/2)& true& true& true\\ \hline (271/16, 109/256, 3/2)& true& true& true\\ \hline (5725/64, 109/256, 3/2)& false& true& true\\ \hline (20771/128, 109/256, 3/2)& false& true& false\\ \hline (109/128, 119/128, 3/2)& true& true& true \\ \hline (1275/256, 119/128, 3/2)& true& true& true\\ \hline (35405/256, 119/128, 3/2)& false& true& true\\ \hline (34413/128, 119/128, 3/2)& false& true& false\\ \hline \end{tabular} \end{table} In order to simplify condition \eqref{eq:inequality-6}, it is also helpful to explore the inclusion relations of these inequalities. Bear in mind that the surfaces $CD_{GBAL}^i=0$ ($i=1,\ldots,6$) divide the parameter space $\{(k,l,c)\,|\,k>0,1\geq l>0,c>0\}$ into regions, and in each of them the signs of $CD_{GBAL}^i$ ($i=1,\ldots,6$) are invariant. Similarly, we use the PCAD method to select at least one sample point from each region. Table \ref{tab:gbal-sample} lists the selected sample points and shows the verification results of the six inequalities in \eqref{eq:inequality-6} at these sample points. It is observed that at all the sample points where $CD_{GBAL}^2>0$ is true, the remaining inequalities are also true. In other words, if $CD_{GBAL}^2>0$, then the local stability condition \eqref{eq:cd6-stable} is satisfied. Thus, condition \eqref{eq:inequality-6} can be simplified to a single inequality. Furthermore, it is easy to see that $CD_{GBAL}^2>0$ is equivalent to \begin{equation*} k\sqrt{c}<\frac{2\sqrt{6}(226\,l-441)}{512\,l-1017}. \end{equation*} Therefore, we summarize the results in the following proposition.
\begin{proposition} The $T_{GBAL}$ model described by \eqref{eq:gbal-map} has a unique positive equilibrium $$\left(\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}}\right),$$ which is locally stable provided that \begin{equation*} k\sqrt{c}<\frac{2\sqrt{6}(226\,l-441)}{512\,l-1017}. \end{equation*} \end{proposition} Furthermore, we have the following result. \begin{proposition} The stability region of the $T_{GBAL}$ model is strictly larger than that of $T_{GBA}$. \end{proposition} \begin{proof} It suffices to prove that $$\frac{9(101\,l-200)}{2(252\,l-505)}<\frac{2\sqrt{6}(226\,l-441)}{512\,l-1017},$$ which is equivalent to $$9(101\,l-200)(512\,l-1017) < 4\sqrt{6}(252\,l-505)(226\,l-441),$$ and further to $$ (-227808\sqrt{ 6} + 465408)l^2 + (901048 \sqrt{6} - 1846053)l - 890820\sqrt{6} + 1830600<0.$$ This inequality is satisfied for $0<l\leq 1$ since the left-hand side, viewed as a quadratic in $l$, has a negative leading coefficient and both of its roots greater than $1$, which completes the proof. \end{proof} \section{Game of Five Firms} Finally, we introduce a special firm, a \emph{rational player}, into the model of this section. A \emph{rational player}, quite different from the second player, not only knows the form of the price function but also has complete information about its rivals' decisions. Having no information about its rivals, firm 2 naively expects that all its competitors produce the same quantities as in the last period. Thus, the expected profit of firm 2 at period $t+1$ is $$\Pi_2^e(t+1)=\frac{q_2(t+1)}{q_1(t)+q_2(t+1)+q_3(t)+q_4(t)+q_5(t)}-cq_2^2(t+1).$$ In comparison, firm 5 has complete information and knows exactly the production plans of all its rivals.
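Firm 2 chooses $q_2(t+1)$ to maximize this expected profit, which amounts to solving a first-order condition with the rivals' total output held fixed. A minimal pure-Python sketch of that step (the name `best_response` is ours, for illustration only):

```python
def best_response(q_others_sum, c):
    """Solve Q0/(Q0 + q)^2 - 2*c*q = 0 for q > 0 by bisection.

    This is the first-order condition of q/(Q0 + q) - c*q**2,
    i.e. the expected profit when the rivals' total output Q0 is held fixed.
    """
    Q0 = q_others_sum
    phi = lambda q: Q0 / (Q0 + q) ** 2 - 2 * c * q   # strictly decreasing in q
    lo, hi = 0.0, 1.0
    while phi(hi) > 0:                               # bracket the unique root
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# consistency check at the symmetric five-firm equilibrium q* = sqrt(2/(25c)):
c = 0.5
q_star = (2 / (25 * c)) ** 0.5                       # = 0.4
assert abs(best_response(4 * q_star, c) - q_star) < 1e-9
```

Since $\phi$ is strictly decreasing with $\phi(0)=1/Q_0>0$, the best response is unique and bisection is guaranteed to converge.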
Hence, the expected profit of firm 5 is the real profit, i.e., $$\Pi_5^e(t+1)=\Pi_5(t+1)=\frac{q_5(t+1)}{q_1(t+1)+q_2(t+1)+q_3(t+1)+q_4(t+1)+q_5(t+1)}-cq_5^2(t+1).$$ In order to maximize its profit, firm 5 needs to solve the first-order condition $\partial \Pi_5(t+1) / \partial q_5(t+1)=0$ for $q_5(t+1)$. We denote the solution by $$q_5(t+1)=R_5(q_1(t+1),q_2(t+1),q_3(t+1),q_4(t+1)).$$ It is worth noting that the form of the solution is similar to that for firm 2, but with the arguments replaced by the rivals' output quantities of the current period. In short, we have the $5$-dimensional iteration map \begin{equation}\label{eq:gbalr-map} \begin{split} &T_{GBALR}(q_1,q_2,q_3,q_4,q_5): \\ &\left\{\begin{split} &q_1(t+1)=q_1(t) + k q_1(t)\left[\frac{q_2(t)+q_3(t)+q_4(t)+q_5(t)}{(q_1(t)+q_2(t)+q_3(t)+q_4(t)+q_5(t))^2}-2\,cq_1(t)\right],\\ &q_2(t+1)=R_2(q_1(t),q_3(t),q_4(t),q_5(t)),\\ &q_3(t+1)=(1-l)q_3(t)+l R_3(q_1(t),q_2(t),q_4(t),q_5(t)),\\ &q_4(t+1)=\frac{2\,q_4(t)+q_1(t)+q_2(t)+q_3(t)+q_5(t)}{2(1+c(q_1(t)+q_2(t)+q_3(t)+q_4(t)+q_5(t))^2)},\\ &q_5(t+1)=R_5(q_1(t+1),q_2(t+1),q_3(t+1),q_4(t+1)). \end{split} \right. \end{split} \end{equation} Therefore, the equilibria are described by \begin{equation} \left\{\begin{split} &k q_1\left(\frac{q_2+q_3+q_4+q_5}{(q_1+q_2+q_3+q_4+q_5)^2}-2\,cq_1\right)=0,\\ &q_1+q_3+q_4+q_5-2\,cq_2(q_1+q_2+q_3+q_4+q_5)^2=0,\\ &q_1+q_2+q_4+q_5-2\,cq_3(q_1+q_2+q_3+q_4+q_5)^2=0,\\ &q_4-\frac{2\,q_4+q_1+q_2+q_3+q_5}{2(1+c(q_1+q_2+q_3+q_4+q_5)^2)}=0,\\ &q_1+q_2+q_3+q_4-2\,cq_5(q_1+q_2+q_3+q_4+q_5)^2=0, \end{split} \right. \end{equation} which yields two solutions \begin{equation} \begin{split} E_{GBALR}^1=&~\left(0,\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}}\right),\\ E_{GBALR}^2=&~\left(\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}}\right).
\end{split} \end{equation} For simplicity, we denote the first and the fourth equations in \eqref{eq:gbalr-map} by $$q_1(t+1)=G_1(q_1(t),q_2(t),q_3(t),q_4(t),q_5(t))$$ and $$q_4(t+1)=L_4(q_1(t),q_2(t),q_3(t),q_4(t),q_5(t)),$$ respectively. One may find that \eqref{eq:gbalr-map} can be reformulated as the following $4$-dimensional map. \begin{equation}\label{eq:gbalr-map4} \begin{split} &T_{GBALR}(q_1,q_2,q_3,q_4): \\ &\left\{\begin{split} &q_1(t+1)=G_1(q_1(t),q_2(t),q_3(t),q_4(t),R_5(q_1(t),q_2(t),q_3(t),q_4(t))),\\ &q_2(t+1)=R_2(q_1(t),q_3(t),q_4(t),R_5(q_1(t),q_2(t),q_3(t),q_4(t))),\\ &q_3(t+1)=(1-l)q_3(t)+l R_3(q_1(t),q_2(t),q_4(t),R_5(q_1(t),q_2(t),q_3(t),q_4(t))),\\ &q_4(t+1)=L_4(q_1(t),q_2(t),q_3(t),q_4(t),R_5(q_1(t),q_2(t),q_3(t),q_4(t))). \end{split} \right. \end{split} \end{equation} Hence, the local stability analysis reduces to investigating the Jacobian matrix of \eqref{eq:gbalr-map4}, which takes the form \begin{equation} J_{GBALR}=\left[\begin{matrix} \frac{\partial q_1(t+1)}{\partial q_1(t)} & \frac{\partial q_1(t+1)}{\partial q_2(t)} & \frac{\partial q_1(t+1)}{\partial q_3(t)} & \frac{\partial q_1(t+1)}{\partial q_4(t)}\\ \frac{\partial q_2(t+1)}{\partial q_1(t)} & \frac{\partial q_2(t+1)}{\partial q_2(t)} & \frac{\partial q_2(t+1)}{\partial q_3(t)} & \frac{\partial q_2(t+1)}{\partial q_4(t)}\\ \frac{\partial q_3(t+1)}{\partial q_1(t)} & \frac{\partial q_3(t+1)}{\partial q_2(t)} & \frac{\partial q_3(t+1)}{\partial q_3(t)} & \frac{\partial q_3(t+1)}{\partial q_4(t)}\\ \frac{\partial q_4(t+1)}{\partial q_1(t)} & \frac{\partial q_4(t+1)}{\partial q_2(t)} & \frac{\partial q_4(t+1)}{\partial q_3(t)} & \frac{\partial q_4(t+1)}{\partial q_4(t)}\\ \end{matrix} \right], \end{equation} where \begin{equation} \begin{split} \frac{\partial q_1(t+1)}{\partial q_i(t)}=&~\frac{\partial G_1}{\partial q_i}+\frac{\partial G_1}{\partial q_5}\frac{\partial R_5}{\partial q_i},~~i=1,2,3,4,\\ \frac{\partial q_2(t+1)}{\partial q_i(t)}=&~\frac{\partial
R_2}{\partial q_i} + \frac{\partial R_2}{\partial q_5}\frac{\partial R_5}{\partial q_i},~~i=1,3,4,\\ \frac{\partial q_2(t+1)}{\partial q_2(t)}=&~\frac{\partial R_2}{\partial q_5}\frac{\partial R_5}{\partial q_2},\\ \frac{\partial q_3(t+1)}{\partial q_i(t)}=&~l\frac{\partial R_3}{\partial q_i} + l\frac{\partial R_3}{\partial q_5}\frac{\partial R_5}{\partial q_i},~~i=1,2,4,\\ \frac{\partial q_3(t+1)}{\partial q_3(t)}=&~(1-l)+l\frac{\partial R_3}{\partial q_5}\frac{\partial R_5}{\partial q_3},\\ \frac{\partial q_4(t+1)}{\partial q_i(t)}=&~\frac{\partial L_4}{\partial q_i}+\frac{\partial L_4}{\partial q_5}\frac{\partial R_5}{\partial q_i},~~i=1,2,3,4.\\ \end{split} \end{equation} Likewise, we focus on the positive equilibrium $E^2_{GBALR}$, where the Jacobian matrix $J_{GBALR}$ becomes \begin{equation} J_{GBALR}(E^2_{GBALR})=\left[\begin{matrix} 1-{31\,k\sqrt{2c}}/56 & -3\,k\sqrt{2c}/{56} & -3\,k\sqrt{2c}/{56} & -3\,k\sqrt{2c}/{56}\\ -{75}/{784} & 9/784 & -{75}/{784} & -{75}/{784}\\ 0 & 0 & 1-25\,l/28 & 0\\ -{5}/{56} & -{5}/{56} & -{5}/{56} & {13}/{168}\\ \end{matrix} \right]. \end{equation} According to Corollary \ref{cor:4-dim-stable}, the unique positive equilibrium $E_{GBALR}^2$ is locally stable if and only if the following condition is satisfied.
\begin{equation}\label{eq:inequality-6-5firm} \begin{split} CD_{GBALR}^1>0,~CD_{GBALR}^2>0,~CD_{GBALR}^3<0,\\ CD_{GBALR}^4<0,~CD_{GBALR}^5<0,~CD_{GBALR}^6>0, \end{split} \end{equation} where \begin{equation} \begin{split} CD_{GBALR}^1=&~ kl\sqrt{25c/2},\\ CD_{GBALR}^2=&~ (25\,l-56)(5737\,k\sqrt{25c/2}-50860),\\ CD_{GBALR}^3=&~ (3934321875\,k^3l^3-104905111500\,k^3l^2+1172129631120\,k^3l\\ &-1186719653952\,k^3)(\sqrt{25c/2})^3+(-439562531250\,k^2l^3+19054516460000\,k^2l^2\\ &-144796527937600\,k^2l+134072666053760\,k^2)(\sqrt{25c/2})^2+(19706242500000\,kl^3\\ &-579386747450000\,kl^2-1721529608680000\,kl+3133067852544000\,k)\sqrt{25c/2}\\ &-113004562500000\,l^3+1975821995000000\,l^2+12875890524000000\,l\\ &-37485773024000000,\\ CD_{GBALR}^4=&~ (9423\,k^2(\sqrt{25c/2})^2-981050\,k\sqrt{25c/2}-33575000)((3375\,kl^3-89180\,kl^2\\ &-629552\,kl+812224\,k)\sqrt{25c/2}\\ &-22500\,l^3+343000\,l^2+3332000\,l),\\ CD_{GBALR}^5=&~ (225\,kl-252\,k)\sqrt{25c/2}-1500\,l-217840,\\ CD_{GBALR}^6=&~ (225\,kl-252\,k)\sqrt{25c/2}-1500\,l+221200.\\ \end{split} \end{equation} By observing Table \ref{tab:gbalr-sample}, where $CD_{GBALR}^2>0$ again implies all the other inequalities at every sample point, we have the following proposition. \begin{proposition} The $T_{GBALR}$ model described by \eqref{eq:gbalr-map} has a unique positive equilibrium $$\left(\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}}\right),$$ which is locally stable provided that \begin{equation*} k\sqrt{c}<\frac{10172\sqrt{2}}{5737}. \end{equation*} \end{proposition} Furthermore, we obtain the following result. \begin{proposition} The stability region of the $T_{GBALR}$ model is strictly larger than that of $T_{GBAL}$. \end{proposition} \begin{proof} It suffices to prove that $$\frac{2\sqrt{6}(226\,l-441)}{512\,l-1017}<\frac{10172\sqrt{2}}{5737},$$ which is equivalent to $$10172\sqrt{2}(1017-512\,l) - 5737\times 2\sqrt{6}(441-226\,l)>0,$$ which holds for all $0<l\leq 1$ since the left-hand side is linear in $l$ and positive at both $l=0$ and $l=1$.
\end{proof} \begin{table}[htbp] \centering \caption{Stability Condition of $T_{GBALR}$ at Selected Sample Points} \label{tab:gbalr-sample} \begin{tabular}{|l|c|c|c|} \hline sample point of $(k,l,c)$ & $CD_{GBALR}^1>0$ & $CD_{GBALR}^2>0$ & $CD_{GBALR}^3<0$ \\ \hline (453/256, 61/128, 1/2) & true& true& true \\ \hline (1007/256, 61/128, 1/2) & true& false& true \\ \hline (7183/256, 61/128, 1/2)& true& false& false\\ \hline (6675/128, 61/128, 1/2)& true& false& true\\ \hline (10587/32, 61/128, 1/2)& true& false& true\\ \hline (9755/16, 61/128, 1/2)& true& false& true\\ \hline (453/256, 251/256, 1/2)& true& true& true\\ \hline (1567/256, 251/256, 1/2)& true& false& true\\ \hline (225/8, 251/256, 1/2)& true& false& false\\ \hline (12807/256, 251/256, 1/2)& true& false& true\\ \hline (91267/64, 251/256, 1/2)& true& false& true\\ \hline (89603/32, 251/256, 1/2)& true& false& true\\ \hline \end{tabular} \begin{tabular}{|l|c|c|c|} \hline sample point of $(k,l,c)$ & $CD_{GBALR}^4<0$ & $CD_{GBALR}^5<0$ & $CD_{GBALR}^6>0$ \\ \hline (453/256, 61/128, 1/2) & true& true& true \\ \hline (1007/256, 61/128, 1/2) & true& true& true \\ \hline (7183/256, 61/128, 1/2)& true& true& true\\ \hline (6675/128, 61/128, 1/2)& true& true& true\\ \hline (10587/32, 61/128, 1/2)& false& true& true\\ \hline (9755/16, 61/128, 1/2)& false& true& false\\ \hline (453/256, 251/256, 1/2)& true& true& true\\ \hline (1567/256, 251/256, 1/2)& true& true& true\\ \hline (225/8, 251/256, 1/2)& true& true& true\\ \hline (12807/256, 251/256, 1/2)& true& false& true\\ \hline (91267/64, 251/256, 1/2)& false& true& true\\ \hline (89603/32, 251/256, 1/2)& false& true& false\\ \hline \end{tabular} \end{table} \section{Concluding Remarks} \begin{figure}[htbp] \centering \includegraphics[width=7cm]{fig/regions.png} \caption{The stability regions of the models considered in the paper. The unique equilibrium of $T_{GB}$ is locally stable if and only if the parameters take values from the red region. 
The unique positive equilibrium of $T_{GBA}$ is locally stable if and only if the parameters take values from the red and yellow regions. By analogy, similar conclusions can be obtained for the $T_{GBAL}$ and $T_{GBALR}$ models.} \label{fig:regions} \end{figure} \bibliographystyle{abbrv}
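The five-firm stability analysis above can be cross-checked by evaluating the spectral radius of $J_{GBALR}(E^2_{GBALR})$ directly; a sketch (assuming NumPy is available; the matrix entries are copied from the text):

```python
import numpy as np

def jacobian(k, l, c):
    # J_GBALR at the positive equilibrium, entries as displayed in the text
    s = np.sqrt(2 * c)
    return np.array([
        [1 - 31 * k * s / 56, -3 * k * s / 56, -3 * k * s / 56, -3 * k * s / 56],
        [-75 / 784,           9 / 784,         -75 / 784,       -75 / 784],
        [0.0,                 0.0,             1 - 25 * l / 28,  0.0],
        [-5 / 56,             -5 / 56,         -5 / 56,          13 / 168],
    ])

def spectral_radius(k, l, c):
    # local stability of the equilibrium requires all |eigenvalues| < 1
    return float(np.max(np.abs(np.linalg.eigvals(jacobian(k, l, c)))))

bound = 10172 * np.sqrt(2) / 5737      # stability bound for k*sqrt(c), ~2.507
assert spectral_radius(1.0, 0.5, 1.0) < 1.0   # k*sqrt(c) = 1 < bound: stable
assert spectral_radius(3.0, 0.5, 1.0) > 1.0   # k*sqrt(c) = 3 > bound: unstable
```

Sample points on either side of the bound behave as the proposition predicts.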
Abstract: We perform a detailed theoretical study of the edge fracture instability, which commonly destabilises the fluid-air interface during strong shear flows of entangled polymeric fluids, leading to unreliable rheological measurements. By means of direct nonlinear simulations, we map out phase diagrams showing the degree of edge fracture in the plane of the surface tension of the fluid-air interface and the imposed shear rate, within the Giesekus and Johnson-Segalman models, for different values of the nonlinear constitutive parameters that determine the dependencies on shear rate of the shear and normal stresses. The threshold for the onset of edge fracture is shown to be relatively robust against variations in the wetting angle where the fluid-air interface meets the hard walls of the flow cell, whereas the nonlinear dynamics depend strongly on wetting angle. We perform a linear stability calculation to derive an exact analytical expression for the onset of edge fracture, expressed in terms of the shear-rate derivative of the second normal stress difference, the shear-rate derivative of the shear stress (sometimes called the tangent viscosity), the jump in shear stress across the interface between the fluid and the outside air, the surface tension of that interface, and the rheometer gap size. Full agreement between our analytical calculation and nonlinear simulations is demonstrated. We also elucidate in detail the mechanism of edge fracture, and finally suggest a new way in which it might be mitigated in experimental practice. Some of the results in this paper were first announced in an earlier letter. The present manuscript provides additional simulation results, calculational details of the linear stability analysis, and more detailed discussion of the significance and limitations of our findings.
\section{Introduction} At the meeting of the American Mathematical Society in Hayward, California, in April 1977, Olga Taussky-Todd \cite{TausskyTodd} asked whether one could characterize the values of the group determinant when the entries are all integers. For a prime $p,$ a complete description was obtained for $\mathbb Z_{p}$ and $\mathbb Z_{2p}$, the cyclic groups of order $p$ and $2p$, in \cite{Newman1} and \cite{Laquer}, and for $D_{2p}$ and $D_{4p}$, the dihedral groups of order $2p$ and $4p$, in \cite{dihedral}. The values for $Q_{4n}$, the dicyclic group of order $4n$, were explored in \cite{dicyclic} with a near complete description for $Q_{4p}$. In general, though, this quickly becomes a hard problem, with only partial results known even for $\mathbb Z_{p^2}$ once $p\geq 7$ (see \cite{Newman2} and \cite{Mike}). The remaining groups of order less than 15 were tackled in \cite{smallgps} and $\mathbb Z_{15}$ in \cite{bishnu1}. The integer group determinants have been determined for all five abelian groups of order 16 ($\mathbb Z_2 \times \mathbb Z_8$, $\mathbb Z_{16}$, $\mathbb Z_2^4$, $\mathbb Z_4^2$, $\mathbb Z_2^2 \times\mathbb Z_4$ in \cite{Yamaguchi1,Yamaguchi2,Yamaguchi3,Yamaguchi4,Yamaguchi5}), and for three of the non-abelian groups ($D_{16}$, $\mathbb Z_2\times D_8$, $\mathbb Z_2 \times Q_8$ in \cite{dihedral,ZnxH}). Here we determine the integer group determinants for $Q_{16}$, the dicyclic or generalized quaternion group of order 16. $$ Q_{16}=\langle X,Y \; | \; X^8=1,\; Y^2=X^4,\; XY=YX^{-1}\rangle. $$ This leaves five unresolved non-abelian groups of order 16. \begin{theorem} The even integer group determinants for $Q_{16}$ are exactly the multiples of $2^{10}$. The odd integer group determinants are all the integers $n\equiv 1$ mod 8 plus those $n\equiv 5$ mod 8 of the form $n=mp^2$ where $m\equiv 5$ mod 8 and $p\equiv 7$ mod $8$ is prime.
\end{theorem} We shall think here of the group determinant as being defined on elements of the group ring $\mathbb Z [G]$ $$ \mathcal{D}_G\left( \sum_{g\in G} a_g g \right)=\det\left( a_{gh^{-1}}\right) .$$ \begin{comment} We observe the multiplicative property \begin{equation} \label{mult} \mathcal{D}_G(xy)= \mathcal{D}_G(x)\mathcal{D}_G(y), \end{equation} using that $$ x=\sum_{g \in G} a_g g,\;\;\; y=\sum_{g \in G} b_g g \; \Rightarrow \; xy=\sum_{g\in G} \left(\sum_{hk=g}a_hb_k\right) g. $$ \end{comment} Frobenius \cite{Frob} observed that the group determinant can be factored using the groups representations (see for example \cite{Conrad} or \cite{book}) and an explicit expression for a dicyclic group determinant was given in \cite{smallgps}. For $Q_{16}$, arranging the 16 coefficients into two polynomials of degree 7 $$ f(x)=\sum_{j=0}^7 a_j x^j,\;\; g(x)=\sum_{j=0}^7 b_jx^j, $$ and writing the primitive 8th root of unity $\omega:=e^{2\pi i/8}=\frac{\sqrt{2}}{2}(1+i)$, this becomes \begin{equation} \label{form}\mathcal{D}_G\left( \sum_{j=0}^7 a_j X^j + \sum_{j=0}^7 b_j YX^j\right) =ABC^2D^2 \end{equation} with integers $A,B,C,D$ from \begin{align*} A=& f(1)^2- g(1)^2\\ B=& f(-1)^2-g(-1)^2\\ C=& |f(i)|^2-|g(i)|^2 \\ D=& \left(|f(\omega)|^2+|g(\omega)|^2\right)\left(|f(\omega^3)|^2+|g(\omega^3)|^2\right). \end{align*} From \cite[Lemma 5.2]{dicyclic} we know that the even values must be multiples of $2^{10}$. The odd values must be 1 mod 4 (plainly $f(1)$ and $g(1)$ must be of opposite parity and $A\equiv B\equiv \pm 1$ mod 4 with $(CD)^2\equiv 1$ mod 4). \section{Achieving the values $n\not \equiv 5$ mod 8} We can achieve all the multiples of $2^{10}$. 
Writing $h(x):=(x+1)(x^2+1)(x^4+1),$ we achieve the $2^{10}(-3+4m)$ from $$ f(x) = (1-m)h(x),\quad g(x)=1+x^2+x^3+x^4-mh(x), $$ the $2^{10}(-1+4m)$ from $$ f(x)= 1+x+x^4+x^5-mh(x),\;\;\;\; g(x)= 1+x-x^3-x^7-mh(x), $$ the $2^{11}(-1+2m)$ from $$ f(x)= 1+x+x^2+x^3+x^4+x^5-mh(x),\;\;\quad g(x)=1+x^4-mh(x), $$ and the $2^{12}m$ from $$ f(x)= 1+x+x^4+x^5-x^6-x^7-mh(x),\;\; g(x)= 1+x-x^3+x^4+x^5-x^7+mh(x). $$ We can achieve all the $n\equiv 1$ mod 8; the $1+16m$ from $$ f(x)=1+mh(x),\;\; g(x)=mh(x), $$ and the $-7+16m$ from $$f(x)= 1-x+x^2+x^3+x^7- mh(x),\;\; g(x)= 1+x^3+x^4+x^7-mh(x). $$ \section{ The form of the $n\equiv 5$ mod 8} This leaves the $n\equiv 5$ mod 8. Since $(CD)^2\equiv 1$ mod 8 we must have $AB\equiv 5$ mod 8. Switching $f$ and $g$ as necessary we assume that $f(1),f(-1)$ are odd and $g(1),g(-1)$ even. Replacing $x$ by $-x$ if needed we can assume that $g(1)^2\equiv 4$ mod 8 and $g(-1)^2\equiv 0$ mod 8. We write $$ F(x)=f(x)f(x^{-1})= \sum_{j=0}^7 c_j (x+x^{-1})^j, \quad G(x)=g(x)g(x^{-1})= \sum_{j=0}^7 d_j (x+x^{-1})^j, $$ with the $c_j,d_j$ in $\mathbb Z$. From $F(1),F(-1)\equiv 1$ mod 8 we have $$ c_0+2c_1+4c_2 \equiv 1 \text{ mod }8, \quad c_0-2c_1+4c_2 \equiv 1 \text{ mod }8, $$ and $c_0$ is odd and $c_1$ even. From $G(1)\equiv 4$, $G(-1)\equiv 0$ mod 8 we have $$ d_0+2d_1+4d_2 \equiv 4 \text{ mod 8}, \quad d_0-2d_1+4d_2 \equiv 0 \text{ mod } 8, $$ and $d_0$ is even and $d_1$ is odd. Since $\omega+\omega^{-1}=\sqrt{2}$ we get \begin{align*} F(\omega) & = (c_0+2c_2+4c_4+\ldots ) + \sqrt{2}(c_1+2c_3+4c_5+\cdots),\\ G(\omega) & = (d_0+2d_2+4d_4+\ldots ) + \sqrt{2}(d_1+2d_3+4d_5+\cdots), \end{align*} and $$|f(\omega)|^2+|g(\omega)|^2= F(\omega)+G(\omega) = X+ \sqrt{2} Y>0, \;\; \quad X, Y \text{odd, } $$ with $ |f(\omega^3)|^2+|g(\omega^3)|^2=F(\omega^3)+G(\omega^3) = X- \sqrt{2} Y>0$. Hence the positive integer $D=X^2-2Y^2\equiv -1$ mod 8. Notice that primes 3 and 5 mod 8 do not split in $\mathbb Z[\sqrt{2}]$ so only their squares can occur in $D$.
Hence $D$ must contain at least one prime $p\equiv 7$ mod 8, giving the claimed form of the values 5 mod 8. \section{Achieving the specified values 5 mod 8} Suppose that $p\equiv 7$ mod 8 and $m\equiv 5$ mod 8. We need to achieve $mp^2$. Since $p\equiv 7$ mod 8 we know that $\left(\frac{2}{p}\right)=1$ and $p$ splits in $\mathbb Z[\sqrt{2}].$ Since $\mathbb Z[\sqrt{2}]$ is a UFD, a generator for the prime factor gives a solution to $$ X^2-2Y^2=p, \;\; X,Y\in \mathbb N. $$ Plainly $X,Y$ must both be odd and $X+\sqrt{2}Y$ and $X-\sqrt{2}Y$ both positive. Since $(X+\sqrt{2}Y)(3+2\sqrt{2})=(3X+4Y)+\sqrt{2}(2X+3Y)$ there will be $X,Y$ with $X\equiv 1$ mod 4 and with $X\equiv -1$ mod 4. Cohn \cite{Cohn} showed that $a+b\sqrt{2}$ in $\mathbb Z[\sqrt{2}]$ is a sum of four squares in $\mathbb Z[\sqrt{2}]$ if and only if $2\mid b$. Hence we can write $$ 2(X+\sqrt{2}Y)= \sum_{j=1}^4 (\alpha_j + \beta_j\sqrt{2})^2, \;\;\alpha_j,\beta_j\in \mathbb Z. $$ That is, $$ 2X=\sum_{j=1}^4 \alpha_j^2+ 2\sum_{j=1}^4 \beta_j^2,\;\;\quad Y=\sum_{j=1}^4\alpha_j\beta_j.$$ Since $Y$ is odd we must have at least one pair, $\alpha_1$, $\beta_1$ say, both odd. Since $2X$ is even we must have two or four of the $\alpha_i$ odd. Suppose that $\alpha_1$, $\alpha_2$ are odd and $\alpha_3,\alpha_4$ have the same parity. We get \begin{align*} X+\sqrt{2}Y & = \left( \frac{\alpha_1+\alpha_2}{2} + \frac{\sqrt{2}}{2}(\beta_1+\beta_2)\right)^2+ \left( \frac{\alpha_1-\alpha_2}{2} + \frac{\sqrt{2}}{2}(\beta_1-\beta_2)\right)^2 \\ & \quad + \left( \frac{\alpha_3+\alpha_4}{2} + \frac{\sqrt{2}}{2}(\beta_3+\beta_4)\right)^2+ \left( \frac{\alpha_3-\alpha_4}{2} + \frac{\sqrt{2}}{2}(\beta_3-\beta_4)\right)^2.
\end{align*} Writing $$ f(\omega)=a_0+a_1\omega+a_2\omega^2+a_3\omega^3=a_0+ \frac{\sqrt{2}}{2}(1+i)a_1+a_2i+ \frac{\sqrt{2}}{2}(-1+i)a_3,$$ we have $$ \abs{f(\omega)}^2 =\left(a_0+ \frac{\sqrt{2}}{2}(a_1-a_3)\right)^2 + \left(a_2+ \frac{\sqrt{2}}{2}(a_1+a_3)\right)^2 $$ and can make $$ |f(\omega)|^2+|g(\omega)|^2 = X + \sqrt{2}Y $$ with the selection of integer coefficients for $f(x)=\sum_{j=0}^3a_jx^j$ and $g(x)=\sum_{j=0}^3 b_jx^j$ \begin{align*} a_0=&\frac{1}{2}(\alpha_1-\alpha_2),\quad a_1 =\beta_1,\quad a_2=\frac{1}{2}(\alpha_1+\alpha_2), \quad a_3=\beta_2, \\ b_0=& \frac{1}{2}(\alpha_3-\alpha_4),\quad b_1 =\beta_3,\quad b_2=\frac{1}{2}(\alpha_3+\alpha_4), \quad b_3=\beta_4. \end{align*} These $f(x)$, $g(x)$ will then give $D=p$ in \eqref{form}. We can also determine the parity of the coefficients. \vskip0.1in \noindent {\bf Case 1}: the $\alpha_i$ are all odd. Notice that $a_0$ and $a_2$ have opposite parity, as do $b_0$ and $b_2$. Since $Y$ is odd we must have one or three of the $\beta_i$ odd. If $\beta_1$ is odd and $\beta_2,\beta_3,\beta_4$ all even, then $2X\equiv 6$ mod 8 and $X\equiv -1$ mod 4. Then $a_0,a_1,a_2,a_3$ are either odd, odd, even, even or even, odd, odd, even and $f(x)=u(x)+2k(x)$ with $u(x)=1+x$ or $x(1+x)$. Likewise $b_0,b_1,b_2,b_3$ are odd, even, even, even or even, even, odd, even and $g(x)=v(x)+2s(x)$ with $v(x)=1$ or $x^2$. Hence if we take \begin{equation} \label{shift} f(x)=u(x)+(1-x^4)k(x)-mh(x),\quad g(x)=v(x)+(1-x^4)s(x)-mh(x), \end{equation} we get $A=3-16m$, $B=-1$, $C=1$, $D=p$ and we achieve $(16m-3)p^2$ in \eqref{form}. If three $\beta_i$ are odd then $2X\equiv 10$ mod 8 and $X\equiv 1$ mod 4. We assume $\beta_1,\beta_2,\beta_3$ are odd and $\beta_4$ even. Hence $a_0,a_1,a_2,a_3$ are either odd, odd, even, odd or even, odd, odd, odd and $f(x)=u(x)+2k(x)$ with $u(x)=1+x+x^3$ or $x(1+x+x^2)$ and $b_0,b_1,b_2,b_3$ are odd, odd, even, even or even, odd, odd, even and $g(x)=v(x)+2s(x)$ with $v(x)=1+x$ or $x(1+x)$. 
In this case \eqref{shift} gives $A=(5-16m)$, $B=1$, $C=-1$, $D=p$ achieving $(5-16m)p^2$. \vskip0.1in \noindent {\bf Case 2}: $\alpha_1$, $\alpha_2$ are odd, $\alpha_3$, $\alpha_4$ are even. In this case $a_0$, $a_2$ will have opposite parity and $b_0$, $b_2$ the same parity. Since $Y$ is odd we must have $\beta_1$ odd, $\beta_2$ even. Since $2X\equiv 2$ mod 4 we must have one more odd $\beta_i$, say $\beta_3$ odd and $\beta_4$ even. If $\alpha_3\equiv \alpha_4$ mod 4 then $2X\equiv 6$ mod 8 and $X\equiv -1$ mod 4. Hence $a_0,a_1,a_2,a_3$ are either odd, odd, even, even or even, odd, odd, even, that is $u(x)=1+x$ or $x(1+x)$ and $b_0,b_1,b_2,b_3$ are even, odd, even, even and $v(x)=x^2$ and again \eqref{shift} gives $(16m-3)p^2$. If $\alpha_3\not\equiv \alpha_4$ mod 4 then $2X\equiv 10$ mod 8 and $X\equiv 1$ mod 4. In this case $a_0,a_1,a_2,a_3$ are either odd, odd, even, even or even, odd, odd, even, that is $u(x)=1+x$ or $x(1+x)$ and $b_0,b_1,b_2,b_3$ are odd, odd, odd, even and $v(x)=1+x+x^2$ and again \eqref{shift} gives $(5-16m)p^2$. Hence, in either case, starting with an $X\equiv 1$ mod 4 gives the $mp^2$ with $m\equiv 5$ mod 16 and an $X\equiv -1$ mod 4 the $mp^2$ with $m\equiv -3$ mod 16. \section*{Acknowledgement} \noindent We thank Craig Spencer for directing us to Cohn's four squares theorem in $\mathbb Z[\sqrt{2}]$.
\section{Introduction and Motivation} \subsection{Starburst Regions} The study of individual luminous stars and stellar populations in nearby giant H\,{\sc ii} regions is a prerequisite to understanding the starburst phenomenon and interpreting the observations of distant starburst galaxies and those containing starburst regions, for which only integral properties can be observed \citep{lei97}. In this context, the Hubble Space Telescope (HST) has been crucial in providing us with high-resolution images of nearby, but very dense and massive, stellar clusters which are ionizing giant H\,{\sc ii} regions (\citet{hun95}, \citet{hun96} and \citet{mal96}). Our HST-based investigation of the stellar populations of the most luminous star-forming complexes in the nearby late-type (ScIII) spiral galaxy NGC 2403 was very fruitful \citep{dri99}, and underpinned our request for HST time to observe M101. A member of the M81 group, at a distance of 3.2 $\pm$ 0.4 Mpc \citep{fre88}, NGC 2403 is very rich in H\,{\sc ii} regions \citep{siv90}. Its abundance level and O/H radial gradient have been well established by \citet{mar96}; they are similar to those of M33 \citep{hen95}. In contrast with M33, which contains relatively modest giant H\,{\sc ii} regions, four of NGC 2403's H\,{\sc ii} regions are exceptionally bright, with H$\alpha$ luminosities L($H\alpha$) $\sim 0.8-1.5 \times 10^{40} $ erg/s, comparable to the most massive starburst region in the Local Group, the 30 Doradus complex. We also found direct evidence for the presence of Wolf-Rayet (WR) stars in five of the six giant H\,{\sc ii} regions investigated; 25--40 WR stars are present in the NGC 2403-I giant H\,{\sc ii} region alone. HST has also provided optical and UV spectra of individual massive stars in NGC 1569 \citep{mao01}, NGC 5398 \citep{sid06} and NGC 925 \citep{ada11}. Ground-based imagery and spectroscopy have revealed rich WR populations in M83 \citep{cro04} and NGC 5253 \citep{cro99}.
M101 (also known as NGC 5457 and the Pinwheel Galaxy) is the logical galaxy in which to extend this work. As the nearest giant grand design ScI spiral galaxy, it is brimming with H\,{\sc ii} regions, massive stars and at least 3,000 luminous star clusters \citep{bar06}. About 6500 WR stars are estimated to exist in the Milky Way \citep{sha99}; simplistically scaling up to the size and luminosity of M101 suggests a population of 10,000--20,000 WR stars and even more Red Supergiants (RSGs). The star formation rate (SFR) in M101 \citep{Lee2009} is probably a few times that of the Milky Way, and M101 is 50\% or more larger than our Galaxy. This, again, suggests a very substantial population of M101 WR stars. In addition, a rich treasury of (mostly) continuum imagery with HST was already in hand: over 135 ksec of HST exposures. As described below, we used this database extensively to perform the image subtractions needed to isolate the strong emission-line WR stars and very red Red Supergiants from the other stellar populations. \subsection{Wolf-Rayet and Red Supergiant Stars} Wolf-Rayet \citep{cro07} and Red Supergiant stars (\citet{lev10}; \citet{mey11}) are the massive stars that are easiest to identify in imaging surveys of galaxies because of their strong emission lines and extreme colors, respectively. They provide important constraints on the age of a starburst \citep{gaz12} and on the mode of star formation \citep{cro13}. Single stars with initial masses (M$_{i}>$20M$_{\odot}$) are predicted to advance to the WR phase at approximately solar metallicity. WR stars possess strong stellar winds which produce a unique, emission--line spectrum displaying broad He\,{\sc ii}\,$\lambda$4686 for hot nitrogen--rich (WN) stars, or C\,{\sc iii}\,$\lambda$4650+C\,{\sc iv}\,$\lambda$5808 for (initially more massive) carbon--rich (WC) subtypes.
WNh stars are a unique subclass of WR stars as these are luminous WN stars that are still burning hydrogen on the main sequence \citep{dek98}, hence they are fundamentally different from their helium burning \textit{classical} cousins. Since the hydrogen--rich envelope has been removed from classical WN stars it follows that they are probably the progenitors of at least a subset of H--poor Type Ib SN. Similarly, the removal of both the hydrogen and helium envelopes from WC stars should correspond to the absence of both these elements in the spectra of Type Ic SN. However, the WR-Type Ibc SN question remains unresolved as, to date, \textit{no direct detection of a Type Ib or Ic SN progenitor has been obtained.} \citet{eld13} have recently claimed that 12 SNIbc progenitors are invisible down to absolute B, V and R magnitudes of $-4$ to $-5$. In contrast, RSG are predicted to arise from the evolution of less massive (8--20 M$_{\odot}$) stars and are therefore expected to appear later in the life of a starburst. Evolutionary models predict that single massive stars with M$_{i}\sim$8--20M$_{\odot}$ end their lives during the Red Supergiant (RSG) phase as H--rich Type II core--collapse supernovae (ccSNe). HST broad--band pre-SN imaging has been able to confirm the RSG--Type II SN connection, particularly for SN 2003gd \citep{sma09}. However, the highest mass RSG progenitor to date is only $\sim$16M$_{\odot}$ \citep{sma09}, making these limits uncertain. Models indicate that in instantaneous starbursts of low metallicity, these two populations are well separated in time, since only the most extreme stars (M$_{i}$ $\geq$ 50 M$_{\odot}$) can shed enough mass to reach the WR stage. In regions of high metallicity, however, the simultaneous presence of WR and RSG stars can be expected for a short period of time, since lower mass stars ($\sim 25$ M$_{\odot}$) can also become WR after having spent some time as RSG \citep{mae94}.
WR and RSG are observed to coexist in the massive Galactic cluster Westerlund 1 \citep{cla05}. One of our key goals is to directly test this prediction in a single galaxy: M101. \citet{ken03} have shown that in M101, over the galactocentric range 6-41 kpc, oxygen abundances are well fitted by an exponential distribution from approximately 1.3 (O/H) solar in the center to 1/15 (O/H) solar in the outermost regions. Equivalently, log O/H +12 = 8.8 in the center of M101, and 7.5 in that galaxy's outer regions. Observing across the entire range of galactocentric distances in M101 (from 0 to 50 kpc) to measure how the absolute numbers and WR/RSG ratio changes across the galaxy is an equally important goal of our study. \subsection{Star Clusters} An early HST--based investigation by \citet{dri93} targeted the WR population of M33 using narrow--band $\lambda$4686 imaging surveys. NGC 604 is the largest giant H\,{\sc ii} region in the nearby star-forming galaxy M33 which lies at a distance of only d$\sim$0.8 Mpc \citep{sco09}. Ground--based imaging with seeing $\sim$1.2\arcsec ~revealed that NGC 604 has a moderate WR population \citep{dri91}. However this was significantly increased via high spatial resolution HST narrow--band imaging \citep{dri93}, identifying the fainter WR population which corresponds to the lowest mass WR stars. At the core of each of the two most luminous giant H II regions in NGC 2403 lies a luminous, compact object \citep{dri99}. The discovery that very dense, massive star clusters form at the cores of all types of starbursts led to the suggestion that globular clusters were once located at the cores of massive starbursts (Meurer 1995; Whitmore \& Schweizer 1995; Ho \& Filippenko 1996a). 
The cases of HD 97950 (the ionizing core of NGC 3603 \citep{dri95}), R136 \citep{mof94} and \citep{cro10}, NGC 2363 \citep{dri00}, NGC 2403-I and NGC 2403-II \citep{dri99}, M31 \citep{neu12} and M33 \citep{neu11} show that massive compact stellar clusters also form in more normal galaxies as well. How much more common are they in a very massive, actively star-forming galaxy like M101? A striking feature in the NGC 2403 clusters is that RSG stars are mainly present over a more extended halo, while the young blue stars and most WR stars are in or close to a compact core. Stars more massive than $\sim 25$ M$_{\odot}$ are not expected to go through a RSG phase before becoming WR stars. For $M_{i}$ $\leq$ 20 M$_{\odot}$, stars evolve to RSG and explode as Type II supernovae without entering the WR phase. The timescale of these evolutionary paths is longer as one considers lower masses, hence the absence of RSG and presence of WR stars in the cores indicate that the population is dominated by very young and very massive stars. The presence of RSG in the halos signifies that we have an older mix of stars of various masses with M $\gtrapprox$ 8--15 M$_{\odot}$. The relative age spread and the spatial exclusion between RSG and WR stars are most obvious for the largest H\,{\sc ii} regions. Although of different ages, the proximity there of the WR and RSG stars suggests a triggering link between the two populations, which is a key part of this research program. WR stars and RSG are observed to co-exist in the Milky Way's most massive compact cluster Westerlund 1 \citep{cla05}. The luminosity, inferred mass and compact nature of Westerlund 1 are comparable with those of Super Star Clusters - previously identified only in external galaxies (see e.g. \citet{bas12}, \citet{lar11},\citet{whi05} and \citet{whi11}). 
\subsection{Supernova Environments} Another key goal of this and related studies is obtaining narrow--band imaging of several nearby galaxies to produce a catalogue of $\sim$10$^4$ WR stars. When a Type Ibc SN and/or gamma ray burst \citep{geo12} eventually occurs in one of these galaxies, our catalogue should reveal the WR progenitor, confirming one of the strongest predictions of stellar evolutionary theory. We have obtained ground-based narrow-band imaging of several nearby star-forming galaxies (\citet{bib12}, \citet{bib10} and \citet{had07}) and confirmed a subset of the WR candidates with multi-object spectroscopy. Given the average lifetime of a WR star of $\sim$ 0.3\,Myr \citep{cro03}, we would expect one of the WR stars identified to produce a Type Ibc ccSN within the next few decades. Until then, we are able to compare the distribution of WR stars in their host galaxy with that of different types of SN to assess whether they represent a common population, and to check whether the WC/WN ratio varies across galaxies as predicted by theory. Different types of ccSN are located in different regions of their host galaxies. For example, Type II ccSN follow the distribution of the host galaxy light whereas Type Ic SN are preferentially located in the brightest regions, similar to long Gamma-Ray Bursts \citep{fru06}. Furthermore, Type Ib and Ic SN were found to have different spatial distributions relative to the distribution of the host galaxy light, strongly suggesting that they have different progenitors \citep{kel08}. If WN and WC stars are the progenitors of Type Ib and Ic SN, respectively, they should follow the same distribution as the corresponding supernovae. Indeed, \citet{lel10} applied this approach to spectroscopically confirmed WR stars in M83 \citep{had05} and found that WN and WC stars are located in different regions of their host galaxy. 
Moreover, the distributions of WN and WC stars are most consistent with those of Type Ib and Ic SNe, respectively. Given that this paper concentrates on only one region in M101, we postpone the discussion of our candidates' distribution until the next paper in this series. The plan of this paper is as follows. In Section 2 we describe the narrow--band imaging technique and our HST observations. The data reductions, including image processing, photometry and detection limits, are presented in Section 3. Source selection from our images is described in Section 4 and we address the issue of contamination by variable stars in Section 5. Our methodology for locating Red Supergiants is presented in Section 6; we also compare the distributions of the WR and RSG candidates and star clusters in this section. We briefly summarize our results in Section 7. \section{Techniques and Observations} The need for targeted surveys that uniquely pick out WR stars is highlighted by \citet{sma09} and by \citet{eld13}, who discussed Type Ibc SN for which broad--band pre-SN imaging exists. Not one progenitor has been identified from these dozen SNe. This is because short exposure times did not allow the images to go deep enough to detect the continuum of a WR star. Had proper narrow--band surveys been available we would almost certainly, by now, have been able to provide strong evidence for the WR--SN Ibc connection. A powerful technique to detect individual Wolf-Rayet stars in crowded fields, such as the ones in M101, consists of subtracting a continuum image, normalized in both PSF and intensity, from an image obtained with a narrow-band filter centered on the He\,{\sc ii} $\lambda$4686 emission line. One reason for this is that WR stars are much brighter in filters sensitive to their strong, broad emission lines, particularly He\,{\sc ii} $\lambda$4686, than in their continua, by up to 3 magnitudes \citep{mjo98}. 
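Schematically, the subtraction step can be sketched as follows. This is a minimal sketch assuming the two frames are already registered and PSF-matched; the array values and the function name are illustrative, and the scale factor stands in for the intensity normalization described above:

```python
import numpy as np

def continuum_subtract(narrow, broad, scale):
    """Subtract a scaled broad-band (continuum) image from the narrow-band image.

    `narrow` and `broad` are 2-D arrays already registered and PSF-matched;
    `scale` compensates for the much wider broad-band bandpass, chosen so
    that pure-continuum stars vanish in the difference image.
    """
    return np.asarray(narrow, dtype=float) - scale * np.asarray(broad, dtype=float)

# A line emitter survives the subtraction; a pure-continuum star does not.
narrow = np.array([[1.0, 5.0], [1.0, 1.0]])   # excess flux at pixel (0, 1)
broad = np.array([[30.0, 30.0], [30.0, 30.0]])
diff = continuum_subtract(narrow, broad, scale=1.0 / 30.0)
```

Only sources with genuine He\,{\sc ii} emission remain positive in the difference image, which is what makes the method robust in crowded fields.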
Hence, WR stars can be easily identified from specific narrow--band images but are difficult to detect in broad--band images. Moreover, WR stars detected in broad--band images alone cannot be distinguished from other blue supergiants. This narrow-band/broad-band technique has been successfully used on WFPC2 F469N images of giant H\,{\sc ii} regions in galaxies such as M33 \citep{dri93} and NGC~2403 \citep{dri99}, detecting both isolated, individual WR stars and WR stars in unresolved clusters in which WR stars are only a very small fraction of the members. M101 lies at a distance of 6.4\,Mpc \citep{sha11}, almost twice as far as NGC~2403 \citep{fre88}, so that $\sim$ 4$\times$ longer exposures are needed to reach similar magnitudes. In Cycle 17 we obtained HST/Wide Field Camera 3 (WFC3) pointings of 2 orbits per M101 field, under program ID 11635 (PI: Shara), with a total exposure time of 6106~seconds per field. This permitted us to image to a similar depth in M101 as we achieved in NGC~2403. We note that the systemic redshift of M101 (+372 km/s) shifts the center of the He\,{\sc ii} $\lambda$4686 emission line to $\lambda$4692~\AA. This has virtually no effect on the detectability of WR stars in this galaxy, since the F469N filter transmission curve is fortuitously centered at 4693 \AA, and the filter sensitivity is nearly constant from 4680 to 4710 \AA, with a FWHM of 50 \AA. The filter thus captures essentially all of He\,{\sc ii} $\lambda$4686 but excludes most of N\,{\sc iii} $\lambda$4640, with only the red half of C\,{\sc iii} $\lambda$4650 included. We used 18 pointings to cover the large majority of M101. Some gaps between CCD chips are inevitable. The fields covered are shown in Figure \ref{m101_wfc3_view}. The very different orientation of one of the pointings was necessary to provide a guide star for the observations, albeit resulting in overlap with another pointing. The coverage of our WFC3 images was selected based on the availability of deep archival continuum imaging. 
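The $\sim$4$\times$ exposure scaling quoted above follows from the inverse-square dimming of flux with distance. A sketch of the arithmetic, taking the NGC 2403 distance as $\sim$3.2 Mpc (half of M101's, consistent with ``almost twice as far'') and assuming the source-photon-limited regime, where SNR scales as the square root of collected source photons:

```python
d_m101, d_n2403 = 6.4, 3.2   # distances in Mpc (3.2 Mpc assumed for NGC 2403)

# Flux scales as 1/d^2, so M101 point sources of fixed luminosity are 4x fainter:
flux_ratio = (d_n2403 / d_m101) ** 2      # 0.25

# To collect the same number of source photons (same SNR, photon-limited),
# the exposure time must scale as d^2:
exposure_ratio = (d_m101 / d_n2403) ** 2  # 4.0
```

Doubling the distance therefore costs a factor of four in exposure time to reach the same absolute magnitude at comparable signal-to-noise.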
We used F435W, F555W and F814W ACS/WFC images, taken under program ID 9490 (PI: Kuntz) and ID 10918 (PI: Freedman), to represent continuum images, so that only additional narrow-band F469N images were needed. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{fig1.eps} \caption{Digitized Sky Survey image of M101 with the 18 HST/WFC3 pointings overlaid. North is up and East to the left of the image. The field shown is approximately 20 arcmin$^{2}$ and each WFC3 field highlighted is 2.7$\times$2.7 arcmin.} \label{m101_wfc3_view} \end{figure} \section{Data Reductions} \label{reduction} Each WFC3 pointing was treated individually to ensure the best alignment with the ACS/WFC data and to make the size of each dataset more manageable. The corresponding ACS frames were drizzled with the WFC3 F469N pointing using the \textsc{multidrizzle} task within \textsc{iraf}\footnotemark \footnotetext{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation.} to produce a coordinate system that was consistent between each different instrument and filter. Often small shifts and rotations were required to achieve the best alignment; these were calculated using \textsc{geomap}. To achieve the best match, the WFC3 data were drizzled to a spatial scale of 0.15\arcsec, corresponding to 4.65 pc at the 6.4 Mpc distance of M101. While slightly degraded from the 0.1\arcsec optimum sampling offered by HST, this was necessary to allow us to produce the best continuum subtractions possible. In the following we describe the methods applied to all 18 pointings, and present the results of one of those pointings (M101-I). Later papers in this series, describing the remaining 17 fields, use exactly the same methodology. 
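Two distance-dependent conversions used in this paper can be checked directly: the physical pixel scale of the drizzled frames, and the distance modulus that underlies the absolute completeness limits derived in Section \ref{detection_limits}. A sketch of the arithmetic (standard small-angle conversion and distance modulus assumed):

```python
import math

D_MPC = 6.4                              # adopted Cepheid distance to M101
ARCSEC_TO_RAD = math.pi / (180 * 3600)   # one arcsecond in radians

# Drizzled pixel scale of 0.15 arcsec at 6.4 Mpc:
scale_pc = 0.15 * ARCSEC_TO_RAD * D_MPC * 1e6   # ~4.65 pc per pixel

# Distance modulus, mu = 5 log10(d / 10 pc):
mu = 5 * math.log10(D_MPC * 1e6 / 10)           # ~29.03 mag

# Check against the F469N completeness limit quoted in the text,
# m_F469N = 24.3 mag with A(F469N) = 1.53 mag:
M_f469n = 24.3 - mu - 1.53                      # ~ -6.26 mag
```

Both quoted values (4.65 pc and M$_{F469N}$\,=\,$-$6.26 mag) follow from the single adopted distance of 6.4 Mpc.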
\subsection{Photometry} Once the broad-- and narrow--band images had been aligned, photometry was performed on each filter separately using the standalone \textsc{daophot} code \citep{ste87}. A point-spread function (PSF), based on isolated, point-like stars within the field, was built and applied to all the other stars detected. Individual zero-points from the HST literature were applied for each filter to transform the observed magnitudes into ST magnitudes and Vega magnitudes. For this study we find the Vega magnitude system to be more useful, since (under the Vega system) the F435W, F555W, and F814W filters correspond to the Johnson B, V and I filters. This enabled us to use literature color cuts. Henceforth, any magnitudes listed are Vega magnitudes unless otherwise stated. Typical photometric errors for the narrow-band F469N images were $\pm$0.07\,mag for bright sources (m$_{F469N}$\,=\,21\,mag) and $\pm$0.5\,mag for the fainter sources (m$_{F469N}$\,=\,25.5\,mag); the distribution of the photometric errors relative to the source brightness for the F469N images is shown in Figure \ref{f469_errors}. For the ACS broad-band images the errors were slightly lower for sources of similar magnitudes, with $\pm$0.03\,mag (m$_{F435W}$\,=\,21\,mag) and $\pm$0.10\,mag (m$_{F435W}$\,=\,25.5\,mag). For the faintest sources in the ACS image with m$_{F435W}$\,=\,28\,mag the photometric error is typically $\pm$0.5\,mag, as shown in Figure \ref{f435_errors} for the F435W image, which is consistent with the other broad-band images. 
\begin{figure} \centering \includegraphics[width=0.7\columnwidth, angle=-90]{fig2.eps} \caption{Photometric errors as a function of zero-point corrected, apparent (Vega) magnitude for all sources found in the WFC3/F469N image of M101-I.} \label{f469_errors} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\columnwidth, angle=-90]{fig3.eps} \caption{Photometric errors as a function of zero-point corrected, apparent (Vega) magnitude for all sources found in the ACS/WFC F435W image of M101-I.} \label{f435_errors} \end{figure} \subsection{Detection Limits} \label{detection_limits} In order to assess the completeness of our M101 survey we must determine to what depth our images probe. Following the method of \citet{bib10} we fit a polynomial to the bright end of the magnitude distribution; the 100\% detection limit is defined as the point at which the fit deviates from the observed data. Figure \ref{f469_mag_dist} shows the distribution of sources detected in our WFC3/F469N data, indicating that our 100\% detection limit is m$_{F469N}$\,=\,24.3\,mag. If we adopt the extinction from \citet{Lee2009} of A(H$\alpha$)\,=\,1.06\,mag, corresponding to A(F469N)\,=\,1.53\,mag following the extinction law from \citet{Cardelli1989}, and adopt the Cepheid distance of 6.4\,Mpc \citep{sha11}, our 100\% completeness detection limit corresponds to M$_{F469N}$\,=\,--6.26\,mag. The magnitude distribution of sources in the ACS/F435W data is shown in Figure \ref{f435_mag_dist}, which shows that we sample $\sim$2 magnitudes fainter in the continuum, with a 100\% detection limit of m$_{F435W}$\,=\,26.6\,mag. Again adopting the extinction law of \citet{Cardelli1989}, A(F435W)\,=\,1.74\,mag yields a 100\% completeness detection limit of M$_{F435W}$\,=\,--4.17\,mag. \begin{figure} \centering \includegraphics[width=0.7\columnwidth, angle=-90]{fig4.eps} \caption{The magnitude distribution of photometric sources identified in the WFC3/F469N image of M101-I using 0.2\,magnitude bins. 
A 100\% detection limit of m$_{F469N}$\,=\,24.3\,mag is derived from this plot using the solid line, which represents a third-degree polynomial fit to the brightest sources.} \label{f469_mag_dist} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\columnwidth, angle=-90]{fig5.eps} \caption{The magnitude distribution of photometric sources identified in the ACS/WFC F435W image of M101-I using 0.2\,magnitude bins. A 100\% detection limit of m$_{F435W}$\,=\,26.6\,mag is derived from this plot using the solid line, which represents a third-degree polynomial fit to the brightest sources.} \label{f435_mag_dist} \end{figure} \section{Source Selection} The F469N filter, centered at $\lambda$4693\,\AA, includes the He\,{\sc ii} $\lambda$4686 emission line and partially includes the red wing of the C\,{\sc iii} $\lambda$4650 emission lines from carbon--rich WR stars; however, the N\,{\sc iii} $\lambda$4640 line from nitrogen--rich (WN) stars will not be detected. To identify WR candidates we searched for sources which were brighter in the narrow-band image than in the continuum image, in this case the F435W image centered at $\lambda$4297\,\AA. We note that we also tried creating a continuum image using a combination of the F435W and F555W images; however, this did not improve our subtraction and hence we used the F435W and F555W images individually to check the accuracy of our candidates. Output files from the photometric analysis were merged to match sources in terms of x and y coordinates. For each source m$_{F469N}$--m$_{F435W}$ was determined, where m$_{F469N}$--m$_{F435W}$$\leq$0 indicates an excess at $\lambda$4693\,\AA. Only sources where the excess was $\geq$3$\sigma$ were considered to be WR candidates. In addition, sources that were only identified in the F469N image, and not in the F435W or F555W images, were also flagged as WR candidates, since it is likely that these stars are faint WR stars with little or no detectable continuum. 
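The photometric selection just described reduces to a simple per-source test. A minimal sketch, in which the function name is illustrative and the combined error is assumed to be the quadrature sum of the two filter errors:

```python
import math

def is_wr_candidate(m469, e469, m435, e435):
    """Flag a source as a WR candidate from its narrow/broad-band photometry.

    A source qualifies if its F469N excess (i.e. it is brighter in F469N,
    m_F435W - m_F469N > 0) exceeds 3 sigma, with the errors combined in
    quadrature. Sources undetected in the broad bands (m435 is None) are
    flagged directly, as in the text.
    """
    if m435 is None:                 # F469N-only detection
        return True
    excess = m435 - m469             # positive means brighter in F469N
    sigma = math.hypot(e469, e435)
    return excess >= 3 * sigma

# First row of the WR photometry table: a 1.11 mag excess with small errors.
assert is_wr_candidate(22.11, 0.07, 23.22, 0.01)
```

A source with no significant excess relative to its combined photometric error fails the test and is excluded before the blinking stage.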
Figure \ref{m101-i-excess} shows the photometric properties of each WR candidate, with the 100\% detection limit of m$_{F435W}$\,=\,26.6\,mag adopted as the continuum magnitude for those candidates only detected in the F469N image. An efficient way of identifying bona fide WR candidates is via the ``blinking'' method \citep{mof83}, which compares the F469N, F435W and continuum subtracted images in sequence. However, the broad--band F435W filter bandpass is almost 30$\times$ the width of the narrow-band F469N filter, hence the F435W image was scaled to create a narrow-band continuum. Since most stars should have m$_{F469N}$--m$_{F435W}$$=$0, we determined the scale factor which allowed most stars to disappear from the continuum subtracted image. We note that this could affect the detection of WR stars with a low He\,{\sc ii} excess. The ``blinking'' technique was applied to all of the 3$\sigma$ photometric WR candidates to confirm the He\,{\sc ii} $\lambda$4686 excess and to remove any spurious sources such as cosmic rays, features at the edge of the CCD, and poor photometry in dense, unresolved regions. In total, after the photometry and blinking processes, 91 WR candidates were identified in just one of our F469N fields (M101-I). Of these, 54 candidates ($\sim$60\%) are detected only in the F469N image. Examples of typical candidates are shown in Figure \ref{blinks}, which shows WR candidates in the F435W, F469N and F469N--F435W$_{\rm scaled}$ images. \begin{figure} \centering \includegraphics[width=0.7\columnwidth, angle=-90]{fig6.eps} \caption{Excess, m$_{F435W}$ -- m$_{F469N}$, versus F469N magnitude for WR candidates in M101-I. Open squares indicate WR candidates with a 3$\sigma$ detection in both F469N and F435W images, which also have an excess of $>$3$\sigma$. Open triangles indicate sources for which there was no detection in the F435W image, so excesses represent lower limits assuming m$_{F435W}$\,=\,26.6\,mag -- our 100\% detection limit. 
} \label{m101-i-excess} \end{figure} \begin{figure} \centering \subfigure[]{\includegraphics[width=0.301\columnwidth]{fig7a.eps}} \subfigure[]{\includegraphics[width=0.3\columnwidth]{fig7b.eps}} \caption{Postage stamp images produced during blinking showing the F435W continuum (bottom), F469N narrow-band (middle) and continuum subtracted image (top) for each WR candidate. Here we show an example of a WR candidate a) detected in the continuum (source \#1026), and b) not detected in the continuum image (source \#639).} \label{blinks} \end{figure} \section{Contamination by Variable Stars} Since the narrow-band images were obtained in 2010, and the continuum images in 2002, it is possible that stellar variability over the 8-year baseline leads to contamination of our WR candidates by Cepheids and other variable stars. \citet{sha11} identified Cepheids in two fields to determine the distance to M101. They showed that Cepheids have a (V--I) color $\geq$0.5\,mag (their Figure 12), which we can apply to our WR candidates. Of the 37 WR candidates detected in both the F469N and F435W images, and identified to have a $\geq$3$\sigma$ excess in m$_{F469N}$--m$_{F435W}$, 15 are provisionally eliminated ($\sim$40\%) by their red (V--I)\,$>$\,0.5\,mag colors. As already noted in Section \ref{detection_limits}, reddening of 0.5\,mag of individual M101 stars, including WR stars, is expected. Hence some of these 15 stars with (V--I)\,$\geq$\,0.5\,mag colors may be reddened WR stars. However, to be conservative in our estimates we eliminate them for now. Two of the 22 surviving candidates were not detected in the F814W image, and two other candidates have (V--I)\,$<$\,$-$1\,mag; this is unexpected. However, inspection of the image reveals that these two sources lie in very crowded regions and hence their photometry is likely to be more unreliable than for the other candidates. 
The (V--I) colors for the 18 candidates which have a $\geq$3$\sigma$ excess and $-$1$<$(V--I)$<$0.5\,mag are shown in Figure \ref{v-i}, though we emphasize that 22 candidates are not eliminated by the above color test. Unfortunately, for the WR candidates detected only in F469N, but not in F555W, we cannot calculate a V--I color. However, if we ``blink'' the F469N image with the F814W image we can identify the sources that are also detected in the F814W images. These stars are too red to be WR stars. Of our initial 54 candidates that were not detected in either F435W or F555W, only one is detected in the F814W image. It is eliminated as a likely red star, reducing the number of WR candidates detected only in F469N, but not in F555W, to 53. In the M101-I region our final survey tally is 22 + 53 = 75 candidate WR stars. Their photometry is presented in Table \ref{photometry}. Blue variable stars could also be a source of contamination in our WR candidate list. We rule out B[e] stars since they do not exhibit any emission lines that lie within the F469N filter bandpass. It is possible that we have identified a few Luminous Blue Variables (LBVs) during outburst, since the continuum of an LBV can vary by up to $\sim$2\,mag. We note that LBVs are rare, with only four confirmed in M33 \citep{cla12}. We expect few LBVs to contaminate our WR candidate list, but an independent check, which follows, is prudent. The HST archive has a rich assortment of M101 images, hence we used ACS/F555W images from 2006 (Proposal ID: 10918, PI Freedman), mid-way between the epochs of the ACS F435W (2002) and WFC3 F469N (2010) imaging, to investigate the impact of variable stars on our survey. The 2006 ACS/F555W imaging covers only a part of the M101-I frame covered by the 2010 narrow-band data; 12 of our WR candidates lie within this region. The 2006 data were reduced and analyzed using the same method as described in Section \ref{reduction}, again blinking in all available filters. 
12 WR candidates were independently identified using the 2006 imaging, 11 of which are consistent with the analysis using the F555W imaging from 2002. The two remaining candidates, one from each epoch, are candidates that have been identified from the F469N emission and do not have a counterpart in the broad-band imaging. Both candidates are bona fide WR candidates and we conclude that they have most likely been missed through human error in the blinking process. The consistency and lack of variability of the WR candidates found in the two epochs again supports the view that variability is not a significant issue for our survey. Finally we note that, after the analyses reported here were completed, we obtained Gemini-North spectra (which will be described in the next paper of this series) of over a hundred WR candidates in multiple fields; a high fraction do, indeed, turn out to be bona fide WR stars. \begin{figure} \centering \includegraphics[width=0.7\columnwidth, angle=-90]{fig8.eps} \caption[]{Here we present the I (F814W) magnitude versus (V--I) color of the WR candidates in M101 which are detected at $>$3$\sigma$ in both the F469N and F435W filter images. Following \citet{sha11} we use a color cut of (V--I)\,$>$\,0.5\,mag to rule out contamination by Cepheids. This resulted in the rejection of 15 WR candidates.} \label{v-i} \end{figure} \section{Identifying Red Supergiants} The archival ACS images also allow us to identify RSG candidates from their (B--V) and (V--I) colors. Based on the colors of RSGs in the LMC we apply color cuts of (B--V)\,$\geq$\,1.2\,mag and (V--I)\,$\geq$\,1.8\,mag to our field of M101 (B. Davies, priv. communication). To ensure that we are not contaminated by foreground red giant stars we also insist that the luminosity of each RSG candidate is $\geq$10$^{4.5}$\,L$_{\odot}$, equivalent to an apparent magnitude of m$_{F814W}$\,$\leq$\,22\,mag. In total we identify 164 RSG candidates in our single M101 field; their photometry is presented in Table \ref{rsg_phot}. 
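The RSG selection reduces to three cuts on the ACS Vega-system photometry. A sketch, using the thresholds given above; the function name is illustrative, and the m$_{F814W}$ cut is the luminosity proxy quoted in the text rather than an independent bolometric calculation:

```python
def is_rsg_candidate(b, v, i):
    """Apply the RSG color/magnitude cuts used for M101 (Vega magnitudes).

    (B-V) >= 1.2 mag and (V-I) >= 1.8 mag select red supergiant colors
    (LMC-based), and m_F814W <= 22 mag (the stated proxy for L >= 10^4.5
    Lsun at 6.4 Mpc) rejects foreground red giants.
    """
    return (b - v) >= 1.2 and (v - i) >= 1.8 and i <= 22.0

# First entry of the RSG photometry table passes all three cuts:
assert is_rsg_candidate(25.40, 24.02, 21.62)
```

A star red enough in only one color, or fainter than the m$_{F814W}$ limit, is excluded, which is how foreground red giants are screened out.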
In their study of RSGs in M31, \citet{mas09} suggest a more stringent B--V cut of $\geq$1.6\,mag to remove contamination from foreground stars. However, as they note, adopting this cut removes $\sim$25\% of RSGs already spectroscopically confirmed by \citet{mas98} from their candidate list (their Figure 3). If B--V\,$\geq$\,1.6\,mag is applied to the LMC RSG sample then $\sim$30\% of candidates would be missed, which is consistent with the 35\% reduction in RSG candidates we find if we apply the B--V\,$\geq$\,1.6\,mag cut to our M101 sample. The V--I versus B--V colors for our RSG candidates and the V--I versus I magnitudes for our M101 RSGs are presented in Figures \ref{rsg_a} and \ref{rsg_b}, respectively. These figures depict the candidates which would have been deleted by the more stringent B--V cut as solid squares and the ``less stringent'' candidates as open squares. Inspection of the open points in Figure \ref{rsg_b} reveals that the photometry of $\sim$7 RSG candidates appears to be inconsistent with the remaining candidates (with m$_{F814W}$\,$\geq$\,20.3\,mag or m$_{F555W}$--m$_{F814W}$\,$\geq$\,3\,mag). However, we note that the $\sim$3\,mag spread in m$_{F814W}$ and $\sim$2\,mag spread in m$_{F555W}$--m$_{F814W}$ are consistent with those found for the confirmed RSGs in the LMC, and as such we do not remove these RSGs from the candidate list. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.5\columnwidth, angle=-90]{fig9a.eps} \label{rsg_a}} \subfigure[]{\includegraphics[width=0.5\columnwidth, angle=-90]{fig9b.eps} \label{rsg_b}} \caption{Photometry of Red Supergiant candidates in a single field of M101 showing a) the V--I versus B--V colors and b) V--I versus I-band magnitude for all candidates. 
The solid squares show sources which are identified as RSGs using a (B--V)\,$\geq$\,1.2\,mag cut, while the open squares use the more stringent constraint of (B--V)\,$\geq$\,1.6\,mag from \citet{mas09}.} \end{figure} \begin{figure} \centering \subfigure[]{\includegraphics[width=0.4\columnwidth, angle=-90]{fig10a.eps}} \subfigure[]{\includegraphics[width=0.35\columnwidth, angle=35]{fig10b.eps}} \caption{a) Broad band image of the main region of M101-I containing multiple clusters. The contours plotted correspond to the 99\% (blue) and 95\% (black) brightness percentiles. Wolf-Rayet stars are shown as red triangles and Red Supergiants as red squares -- assuming a cut of B--V\,$\geq$\,1.6\,mag from \citet{mas09} -- and b) an archival WFPC2 H$\alpha$ image of the same region with clusters identified from \citet{Che05} plotted as red circles.} \label{contours} \end{figure} In Figure~\ref{contours} we focus on the largest star-forming complex NGC 5462 in the field M101-I, which includes multiple bright H\,{\sc ii} regions from \citet{hod90}: H1159, 1169, 1170 and 1176. This region has also been studied by \citet{Che05} using HST images and for reference we show an archival HST/WFPC2 H$\alpha$ image of this region, marking the clusters identified by these authors. RSG and WR candidates are plotted as red squares and red triangles, respectively. The brightest 1\% of all pixels in the ACS field are colored blue, while the next brightest 4\% of all pixels are colored black. 7 WR and 36 RSG candidates are ``isolated'', i.e. not surrounded by black pixels. 6 WR and 30 RSG are largely or entirely surrounded by black pixels. Finally, 12 WR and 5 RSG are surrounded by blue pixels. The corresponding ratios of WR to RSG candidate numbers are 0.19, 0.20 and 2.4. The former two are statistically indistinguishable, but the latter ratio suggests strong clustering of WR stars in the core of the star-forming complex NGC 5462. 
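The three WR/RSG ratios follow directly from the candidate counts. A quick check (the dictionary keys are illustrative labels for the three surface-brightness environments):

```python
counts = {               # (WR, RSG) candidate counts by local surface brightness
    "isolated":  (7, 36),   # not surrounded by black (95th-percentile) pixels
    "black":     (6, 30),   # surrounded by black pixels
    "blue":      (12, 5),   # surrounded by blue (99th-percentile) pixels
}
ratios = {env: wr / rsg for env, (wr, rsg) in counts.items()}
# isolated: ~0.19, black: 0.20, blue: 2.4
```

The order-of-magnitude jump in the ratio for the brightest pixels is what drives the suggestion that WR stars cluster in the complex's core, pending the larger samples of later papers.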
This suggestion can be made indisputable only with a much larger sample of stars. In later papers in this series we will greatly strengthen these small number statistics with thousands of M101 WR and RSG stars. \begin{table} \footnotesize \centering \caption[]{Photometry of M101 WR candidates. The RA and DEC of each candidate is taken from the calibration of the F469N/WFC3 image. Narrow- and broad-band magnitudes are listed for each candidate, unless the object was not detected in that filter. Errors listed are the 1$\sigma$ errors determined by the \textsc{daophot} routine.} \begin{tabular}{@{\hspace{1mm}}c@{\hspace{3mm}}c@{\hspace{3mm}}c@{\hspace{2mm}}c@{\hspace{3mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c} \hline \hline RA & DEC & F469N & err & F435W & err & F435W-F469N & err & F555W & err & F814W & err \\ \hline 14:03:51.802 & +54:21:39.55 & 22.11 & 0.07 & 23.22 & 0.01 & 1.11 & 0.07 & 23.61 & 0.03 & 23.91 & 0.05 \\ 14:03:51.378 & +54:21:35.21 & 22.53 & 0.12 & 23.49 & 0.03 & 0.96 & 0.12 & 23.10 & 0.04 & 22.57 & 0.04 \\ 14:03:51.722 & +54:21:50.05 & 23.21 & 0.13 & 24.60 & 0.04 & 1.39 & 0.13 & 24.71 & 0.04 & 24.54 & 0.05 \\ 14:03:52.347 & +54:21:54.57 & 22.25 & 0.13 & 24.37 & 0.04 & 2.12 & 0.14 & 24.44 & 0.06 & 25.04 & 0.08 \\ 14:03:52.755 & +54:21:55.90 & 23.37 & 0.09 & 25.27 & 0.05 & 1.90 & 0.10 & 25.48 & 0.08 & 25.84 & 0.11 \\ 14:03:54.324 & +54:21:55.99 & 23.34 & 0.12 & 25.27 & 0.05 & 1.93 & 0.10 & 25.48 & 0.08 & 25.84 & 0.11 \\ 14:03:53.820 & +54:22:03.57 & 23.94 & 0.14 & 25.32 & 0.05 & 1.38 & 0.15 & 25.07 & 0.06 & 24.68 & 0.07 \\ 14:03:53.690 & +54:22:08.78 & 22.43 & 0.10 & 23.65 & 0.02 & 1.22 & 0.10 & 23.84 & 0.03 & 23.86 & 0.04 \\ 14:03:53.863 & +54:22:09.11 & 21.58 & 0.12 & 23.91 & 0.04 & 2.33 & 0.13 & 24.18 & 0.06 & 24.74 & 0.07 \\ 14:03:54.030 & +54:22:09.25 & 23.39 & 0.12 & 25.97 & 0.17 & 2.58 & 0.21 & 25.91 & 0.16 & 26.38 & 0.52 \\ 14:03:53.344 & +54:21:50.08 & 23.73 & 0.11 & 
25.43 & 0.05 & 1.70 & 0.12 & 25.96 & 0.08 & -- & -- \\ 14:03:53.172 & +54:21:50.63 & 23.19 & 0.12 & 25.43 & 0.05 & 2.24 & 0.12 & 25.96 & 0.08 & -- & -- \\ 14:03:52.987 & +54:22:00.44 & 21.67 & 0.10 & 23.15 & 0.09 & 1.48 & 0.13 & 23.35 & 0.10 & 23.60 & 0.11 \\ 14:03:53.072 & +54:22:01.25 & 22.15 & 0.14 & 24.35 & 0.05 & 2.20 & 0.14 & 24.47 & 0.04 & 25.19 & 0.08 \\ 14:03:53.676 & +54:22:02.08 & 22.23 & 0.14 & 24.20 & 0.03 & 1.97 & 0.14 & 24.10 & 0.03 & 25.28 & 0.07 \\ 14:03:53.294 & +54:21:55.20 & 22.97 & 0.13 & 23.95 & 0.04 & 0.98 & 0.13 & 24.17 & 0.04 & 24.19 & 0.04 \\ 14:03:54.942 & +54:22:03.39 & 23.51 & 0.17 & 24.42 & 0.04 & 0.91 & 0.17 & 24.53 & 0.05 & 24.70 & 0.07 \\ 14:03:53.180 & +54:22:04.81 & 22.49 & 0.17 & 24.21 & 0.06 & 1.72 & 0.18 & 24.29 & 0.08 & 25.49 & 0.20 \\ 14:03:54.810 & +54:22:08.67 & 22.95 & 0.10 & 24.21 & 0.06 & 1.26 & 0.18 & 24.29 & 0.08 & 24.73 & 0.13 \\ 14:03:54.551 & +54:22:13.94 & 22.97 & 0.15 & 25.00 & 0.05 & 2.03 & 0.16 & 25.03 & 0.07 & 25.81 & 0.10 \\ 14:03:36.863 & +54:23:00.19 & 22.96 & 0.14 & 24.47 & 0.04 & 1.51 & 0.15 & 24.79 & 0.05 & 24.36 & 0.04 \\ 14:03:53.453 & +54:21:37.31 & 23.31 & 0.17 & 24.17 & 0.03 & 0.86 & 0.17 & 24.48 & 0.02 & 24.66 & 0.03 \\ 14:03:52.091 & +54:21:27.92 & 24.08 & 0.17 & -- & -- & -- & -- & -- & -- & -- & -- \\ 14:03:44.772 & +54:21:38.30 & 23.93 & 0.17 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:47.409 & +54:21:40.92 & 23.99 & 0.18 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:53.946 & +54:21:46.79 & 24.21 & 0.14 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:43.399 & +54:22:10.12 & 24.41 & 0.20 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:46.325 & +54:22:24.73 & 23.88 & 0.11 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:49.713 & +54:23:06.55 & 23.75 & 0.12 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:45.544 & +54:23:10.90 & 23.78 & 0.15 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:48.384 & +54:23:21.04 & 23.94 & 0.10 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:45.830 & +54:23:23.92 
& 24.05 & 0.12 & -- & -- & -- & -- & -- & -- & -- & --\\ \end{tabular} \label{photometry} \end{table} \begin{table} \centering {Table 1: (continued)} \\ \footnotesize \begin{tabular}{@{\hspace{1mm}}c@{\hspace{3mm}}c@{\hspace{3mm}}c@{\hspace{2mm}}c@{\hspace{3mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c@{\hspace{2mm}}c} \hline \hline RA & DEC & F469N & err & F435W & err & F435W-F469N & err & F555W & err & F814W & err \\ \hline 14:03:43.572 & +54:20:43.93 & 24.15 & 0.17 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:42.956 & +54:20:56.05 & 23.94 & 0.17 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:43.450 & +54:21:03.90 & 24.33 & 0.12 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:42.025 & +54:21:38.77 & 23.38 & 0.13 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:52.247 & +54:21:50.81 & 23.89 & 0.13 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:53.618 & +54:21:51.54 & 23.68 & 0.11 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:47.278 & +54:21:52.04 & 23.67 & 0.18 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:51.110 & +54:22:05.84 & 24.00 & 0.12 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:53.577 & +54:22:09.21 & 23.93 & 0.18 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:43.080 & +54:22:17.27 & 23.99 & 0.11 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:46.935 & +54:22:18.13 & 24.33 & 0.16 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:48.954 & +54:22:18.41 & 23.85 & 0.13 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:52.630 & +54:22:19.51 & 23.73 & 0.15 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:50.926 & +54:22:31.61 & 23.86 & 0.15 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:46.177 & +54:22:33.89 & 23.34 & 0.11 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:36.468 & +54:22:40.98 & 24.46 & 0.19 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:51.363 & +54:22:42.89 & 24.27 & 0.23 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:39.703 & +54:23:07.76 & 23.89 & 0.14 & -- & -- & -- & -- & -- 
& -- & -- & --\\ 14:03:43.267 & +54:23:10.18 & 23.64 & 0.11 & -- & -- & -- & -- & -- & --& -- & -- \\ 14:03:48.790 & +54:23:30.75 & 23.90 & 0.10 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:46.573 & +54:20:58.59 & 24.09 & 0.19 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:40.353 & +54:21:10.67 & 24.31 & 0.30 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:51.245 & +54:21:17.95 & 23.77 & 0.14 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:47.905 & +54:21:19.20 & 24.00 & 0.18 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:45.569 & +54:22:03.20 & 24.02 & 0.16 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:53.393 & +54:22:22.78 & 23.48 & 0.17 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:35.495 & +54:22:50.76 & 23.89 & 0.18 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:43.473 & +54:23:19.11 & 24.89 & 0.31 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:40.107 & +54:21:14.66 & 23.85 & 0.26 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:44.015 & +54:21:48.57 & 24.30 & 0.20 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:40.020 & +54:21:49.87 & 24.62 & 0.48 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:49.533 & +54:21:57.22 & 24.18 & 0.35 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:46.586 & +54:22:44.52 & 23.78 & 0.13 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:43.833 & +54:23:10.16 & 24.00 & 0.15 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:48.099 & +54:23:39.66 & 24.67 & 0.26 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:45.029 & +54:21:01.14 & 23.57 & 0.20 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:52.031 & +54:21:51.80 & 23.04 & 0.12 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:52.886 & +54:21:55.63 & 23.44 & 0.14 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:50.001 & +54:23:19.95 & 24.07 & 0.30 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:40.677 & +54:22:19.23 & 24.65 & 0.20 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:43.142 & +54:22:34.54 & 24.39 & 0.19 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:43.161 & +54:22:33.92 
& 24.35 & 0.18 & -- & -- & -- & -- & -- & -- & -- & --\\ 14:03:38.669 & +54:22:18.74 & 24.12 & 0.16 & -- & -- & -- & -- & -- & -- & -- & --\\ \end{tabular} \end{table} \begin{table}[!] \footnotesize \centering \caption[]{HST/ACS photometry of Red Supergiant candidates in one pointing of M101. In total we identify 164 RSG candidates using color and magnitude cuts provided by B. Davies (priv. communication). Note that all magnitudes presented use the Vega magnitude system, for which zero points were provided by Josh Sokol from the ACS instrument team. Errors listed represent 1$\sigma$ errors which are calculated in \textsc{daophot}.} \begin{tabular}{ccclclcl} \\ \hline\hline RA & DEC & F435W & err & F555W & err & F814W & err \\ \hline 14:03:53.731 & +54:21:53.65 & 25.40 & 0.05 & 24.02 & 0.03 & 21.62 & 0.05 \\ 14:03:51.361 & +54:21:13.24 & 25.40 & 0.05 & 23.95 &0.04 & 21.61 & 0.05\\ 14:03:52.891 & +54:21:45.49 & 24.94 & 0.04 & 23.48 &0.02 & 21.62 & 0.03\\ 14:03:51.615 & +54:21:35.75 & 24.93 & 0.08 & 23.58 &0.03 & 21.60 & 0.03\\ 14:03:52.696 & +54:21:38.56 & 25.63 & 0.08 & 24.32 & 0.04 & 21.73 & 0.05\\ 14:03:52.978 & +54:21:45.75 & 25.35 & 0.04 & 23.89 &0.03 & 21.74 & 0.04\\ 14:03:55.206 & +54:21:54.40 & 25.65 & 0.04 & 24.12 & 0.03 & 21.74 & 0.04\\ 14:03:36.509 & +54:22:49.55 & 25.30 & 0.05 & 23.91 & 0.04 & 21.75 & 0.05\\ 14:03:51.397 & +54:22:46.20 & 25.33 & 0.05 & 23.98 &0.03 & 21.78 & 0.06\\ 14:03:53.121 & +54:21:27.84 & 25.14 & 0.07 & 23.84 &0.03 & 21.79 & 0.04\\ 14:03:35.420 & +54:21:11.50 & 26.33 & 0.08 & 25.08 &0.07 & 21.79 & 0.08\\ 14:03:53.275 & +54:21:21.52 & 26.43 & 0.07 & 24.91 &0.05 & 21.79 & 0.06\\ 14:03:52.172 & +54:21:47.34 & 25.19 & 0.07 & 23.68 &0.04 & 21.81 & 0.06\\ 14:03:36.375 & +54:20:48.33 & 25.23 & 0.04 & 23.96 & 0.03 & 21.82 & 0.04\\ 14:03:54.280 & +54:21:32.75 & 25.32 & 0.07 & 23.89 & 0.04 & 21.82 & 0.05\\ 14:03:53.141 & +54:21:47.40 & 25.25 & 0.07 & 23.84 &0.03 & 21.85 & 0.04\\ 14:03:53.387 & +54:21:27.42 & 25.03 & 0.10 & 23.73 &0.05 & 21.86 
& 0.06\\ 14:03:56.414 & +54:22:32.07 & 25.43 & 0.08 & 24.03 &0.05 & 21.88 & 0.06\\ 14:03:51.467 & +54:21:14.10 & 25.38 & 0.05 & 23.96 &0.05 & 21.89 & 0.06\\ 14:03:42.430 & +54:23:36.30 & 25.15 & 0.03 & 23.61 &0.02 & 21.57 & 0.04\\ 14:03:52.154 & +54:21:39.26 & 25.25 & 0.07 & 23.67 & 0.04 & 21.52 & 0.05\\ 14:03:51.011 & +54:21:08.24 & 25.94 & 0.08 &24.46 & 0.04 & 21.70 & 0.05\\ 14:03:54.974 & +54:21:53.05 & 25.53 & 0.07 & 24.14 &0.05 & 21.49 & 0.05\\ 14:03:53.114 & +54:21:47.13 & 25.06 & 0.04 &23.46 &0.03 & 21.42 & 0.03\\ 14:03:52.144 & +54:21:38.98 & 25.07 & 0.07 & 23.49 &0.03 & 21.42 & 0.04\\ 14:03:34.066 & +54:22:47.59 & 25.25 & 0.03 & 23.97 & 0.03 & 21.41 & 0.04\\ 14:03:52.364 & +54:21:34.34 & 24.93 & 0.07 &23.45 & 0.02 & 21.39 & 0.03\\ 14:03:52.121 & +54:21:41.35 & 24.65 & 0.04 &23.37 & 0.03 & 21.40 & 0.04\\ 14:03:41.571 & +54:23:52.62 & 24.89 & 0.04 & 23.37 & 0.03 & 21.40 & 0.05\\ 14:03:36.366 & +54:23:07.19 & 25.37 & 0.07 & 24.15 & 0.06 & 21.38 & 0.09\\ 14:03:51.947 & +54:21:47.81 & 25.02 & 0.12 &23.60 & 0.05 & 21.37 & 0.06\\ 14:03:51.641 & +54:21:37.61 & 24.62 & 0.10 &23.17 & 0.02 & 21.08 & 0.04\\ 14:03:54.354 & +54:21:33.96 & 24.45 & 0.06 &23.23 & 0.03 & 21.30 & 0.04\\ 14:03:54.335 & +54:21:33.92 & 25.35 & 0.07 & 23.98 & 0.04 & 21.30 & 0.05\\ 14:03:45.593 & +54:22:35.95 & 25.09 & 0.11 & 23.63 & 0.11 & 21.27 & 0.16\\ 14:03:52.214 & +54:21:28.58 & 25.27 & 0.04 & 23.96 & 0.06 & 21.28 & 0.07\\ 14:03:50.700 & +54:21:17.50 & 25.13 & 0.06 & 23.63 & 0.06 & 21.26 & 0.07\\ 14:03:51.731 & +54:21:37.81 & 25.49 & 0.04 & 23.94 & 0.02 & 21.15 & 0.02\\ \end{tabular} \label{rsg_phot} \end{table} \begin{table}[!] 
\centering {Table 2: (continued)} \\ \footnotesize \begin{tabular}{ccclclcl} \hline\hline RA & DEC & F435W & err & F555W & err & F814W & err \\ \hline 14:03:51.696 & +54:21:18.03 & 24.66 & 0.05 & 23.15 & 0.06 & 21.13 & 0.09\\ 14:03:52.707 & +54:21:48.57 & 24.40 & 0.06 & 22.87 & 0.05 & 21.03 & 0.06\\ 14:03:51.869 & +54:21:33.31 & 24.61 & 0.04 & 23.34 & 0.12 & 21.05 & 0.13\\ 14:03:54.266 & +54:22:09.33 & 24.16 & 0.06 & 22.91 & 0.04 & 21.06 & 0.06\\ 14:03:52.993 & +54:21:54.36 & 24.06 & 0.10 & 22.85 & 0.10 & 20.99 & 0.10\\ 14:03:36.424 & +54:21:20.63 & 24.25& 0.03 &22.87 & 0.03 & 20.99 & 0.04\\ 14:03:53.214 & +54:21:58.09 & 24.32 & 0.07 &22.84 &0.05 & 20.70 & 0.06\\ 14:03:53.119 & +54:21:40.82 & 24.67 & 0.03 &23.13 &0.02 & 20.72 & 0.03\\ 14:03:51.525 & +54:21:21.18 & 24.27 & 0.02 &22.80 &0.02 & 20.68 & 0.03\\ 14:03:51.344 & +54:21:00.17 & 24.25 & 0.09 &22.85 & 0.06 & 20.51 & 0.07\\ 14:03:52.993 & +54:21:54.36 & 24.06 & 0.10 &22.85 &0.10 & 20.44 & 0.10\\ 14:03:35.731 & +54:22:49.13 & 23.59 & 0.03 & 22.20 & 0.03 & 19.60 & 0.04\\ 14:03:53.537 & +54:21:59.31 & 23.27 & 0.08 &21.91 &0.03 & 20.01 & 0.04\\ 14:03:53.403 & +54:21:59.38 & 23.48 & 0.05 & 22.12 &0.03 & 20.23 & 0.05\\ 14:03:54.224 & +54:21:51.88 & 23.61 & 0.06 & 22.23 &0.03 & 20.24 & 0.04\\ 14:03:52.916 & +54:21:50.23 & 23.63 & 0.03 & 22.28 & 0.03 & 20.29 & 0.05\\ 14:03:56.477 & +54:22:11.98 & 23.84 & 0.03 & 22.02 &0.04 & 19.41 & 0.05\\ 14:03:56.795 & +54:21:54.97 & 23.90 & 0.03 & 22.02 &0.03 & 20.01 & 0.04\\ 14:03:41.464 & +54:23:35.21 & 24.46 & 0.05 & 22.75 & 0.03 & 20.37 & 0.04\\ 14:03:52.410 & +54:21:49.64 & 24.34 & 0.09 & 22.60 & 0.03 & 20.43 & 0.04\\ 14:03:44.033 & +54:23:05.94 & 25.49 & 0.05 & 23.84 & 0.03 & 20.45 & 0.04\\ 14:03:52.476 & +54:21:34.12 & 24.68 & 0.03 & 22.80 & 0.02 & 20.50 & 0.03\\ 14:03:52.475 & +54:21:33.35 & 24.57 & 0.04 & 22.90 & 0.04 & 20.61 & 0.04\\ 14:03:54.025 & +54:22:01.54 & 25.00 & 0.06 & 23.09 & 0.04 & 20.62 & 0.05\\ 14:03:52.267 & +54:21:50.93 & 24.64 & 0.07 & 22.94 & 0.03 & 20.69 
& 0.05\\ 14:03:53.067 & +54:22:03.94 & 24.36 & 0.04 &22.53 &0.03 & 20.72 & 0.04\\ 14:03:55.499 & +54:21:35.99 & 24.88 & 0.04 &23.07 &0.03 & 20.78 & 0.04\\ 14:03:53.845 & +54:21:54.61 & 24.53 & 0.05 &22.84 &0.03 & 20.79 & 0.04\\ 14:03:54.357 & +54:21:34.98 & 24.48 & 0.05 &22.76 &0.04 & 20.79 & 0.06\\ 14:03:51.656 & +54:22:05.41 & 25.01 & 0.05 &23.33 &0.03 & 20.81 & 0.05\\ 14:03:53.157 & +54:21:54.75 & 25.15 & 0.10 & 23.31 & 0.09 & 20.85 & 0.13\\ 14:03:53.882 & +54:21:50.27 & 24.90 & 0.03 &23.06 &0.02 & 20.87 & 0.03\\ 14:03:52.740 & +54:21:43.79 & 24.57 & 0.03 &22.83 &0.03 & 20.90 & 0.03\\ 14:03:52.598 & +54:22:08.88 & 24.68 & 0.04 &22.94 &0.03 & 20.91 & 0.03\\ 14:03:51.297 & +54:21:19.64 & 24.76 & 0.05 &22.96 &0.04 & 20.92 & 0.05\\ 14:03:36.903 & +54:22:09.95 & 25.33 & 0.04 & 23.49 & 0.04 & 20.93 & 0.05\\ 14:03:50.700 & +54:21:17.50 & 25.13 & 0.06 & 23.37 & 0.04 & 20.96 & 0.05\\ 14:03:51.813 & +54:21:34.43 & 24.72 & 0.04 & 22.95 & 0.03 & 20.98 & 0.04\\ 14:03:52.838 & +54:21:36.65 & 24.76 & 0.05 &23.02 &0.03 & 20.99 & 0.04\\ 14:03:53.105 & +54:21:36.27 & 24.93 & 0.03 &23.08 &0.02 & 21.00 & 0.02\\ 14:03:54.367 & +54:21:46.68 & 25.23 & 0.04 &23.42 &0.02 & 21.02 & 0.03\\ 14:03:50.964 & +54:21:07.60 & 25.37 & 0.06 & 23.52 & 0.03 & 21.02 & 0.04\\ \end{tabular} \end{table} \begin{table} \centering {Table 2: (continued)} \\ \footnotesize \begin{tabular}{ccclclcl} \hline\hline RA & DEC & F435W & err & F555W & err & F814W & err \\ \hline 14:03:56.166 & +54:21:59.42 & 24.90 & 0.04 & 23.17 &0.04 & 21.03 & 0.05\\ 14:03:51.522 & +54:21:36.49 & 25.64 & 0.05 & 23.81 & 0.04 & 21.08 & 0.04\\ 14:03:54.866 & +54:21:54.63 & 25.00 & 0.05 &23.16 &0.03 & 21.09 & 0.04\\ 14:03:52.377 & +54:21:29.64 & 25.22 & 0.03 &23.37 &0.04 & 21.11 & 0.05\\ 14:03:53.444 & +54:21:14.05 & 24.97 & 0.04 & 23.17 & 0.03 & 21.12 & 0.05\\ 14:03:51.733 & +54:21:35.50 & 25.18 & 0.07 &23.35 &0.09 & 21.13 & 0.12\\ 14:03:52.200 & +54:21:36.24 & 25.33 & 0.07 & 23.07 & 0.03& 21.18 & 0.04\\ 14:03:52.195 & +54:21:36.42 & 
24.74 & 0.03 & 23.07 & 0.03 & 21.18 & 0.04\\ 14:03:52.744 & +54:21:20.43 & 25.61 & 0.05 &23.84 &0.04 & 21.18 & 0.05\\ 14:03:51.808 & +54:21:35.15 & 25.24 & 0.03 &23.42 &0.02 & 21.21 & 0.03\\ 14:03:51.182 & +54:21:57.21 & 26.08 & 0.07 &24.27 &0.03 & 21.23 & 0.05\\ 14:03:54.198 & +54:22:00.43 & 24.92 & 0.03 &23.29 &0.02 & 21.24 & 0.03\\ 14:03:53.621 & +54:21:47.58 & 25.09 & 0.04 & 23.36 & 0.03 & 21.24 & 0.04\\ 14:03:54.963 & +54:21:55.03 & 25.34 & 0.04 &23.60 &0.03 & 21.26 & 0.04\\ 14:03:54.378 & +54:22:13.81 & 25.54 & 0.05 &23.61 &0.03 & 21.26 & 0.04\\ 14:03:50.700 & +54:21:17.50 & 25.13 & 0.06 &23.37 &0.04 & 21.26 & 0.06\\ 14:03:54.288 & +54:21:52.26 & 25.00 & 0.06 &23.32 &0.02 & 21.27 & 0.03\\ 14:03:51.808 & +54:21:43.68 & 25.08 & 0.06 &23.31 &0.02 & 21.29 & 0.02\\ 14:03:42.360 & +54:23:48.05 & 25.04 & 0.08 & 23.26 & 0.06 & 21.30 & 0.08\\ 14:03:44.006 & +54:23:36.89 & 25.37 & 0.10 &23.62 &0.12 & 21.32 & 0.18\\ 14:03:55.025 & +54:22:27.55 & 25.15 & 0.06 &23.46 &0.03 & 21.33 & 0.04\\ 14:03:51.405 & +54:21:41.58 & 24.98 & 0.03 &23.28 &0.01 & 21.34 & 0.02\\ 14:03:52.443 & +54:21:23.97 & 25.73 & 0.06 &23.77 &0.04 & 21.34 & 0.05\\ 14:03:54.388 & +54:21:34.39 & 25.89 & 0.07 &23.30 &0.03 & 21.35 & 0.04\\ 14:03:52.154 & +54:21:37.76 & 25.54 & 0.05 & 23.78 & 0.03 & 21.36 & 0.04\\ 14:03:38.802 & +54:21:47.99 & 25.63 & 0.06 &23.88 &0.03 & 21.37 & 0.04\\ 14:03:56.427 & +54:22:04.82 & 25.48 & 0.05 &23.60 &0.03 & 21.41 & 0.05\\ 14:03:54.025 & +54:21:42.82 & 25.19 & 0.04 &23.59 &0.03 & 21.43 & 0.04\\ 14:03:50.921 & +54:21:22.55 & 25.37 & 0.05 &23.53 &0.03 & 21.44 & 0.04\\ 14:03:52.789 & +54:21:44.94 & 25.20 & 0.03 &23.44 &0.01 & 21.45 & 0.02\\ 14:03:53.838 & +54:21:38.62 & 26.29 & 0.08 & 24.27 & 0.03 & 21.46 & 0.03\\ 14:03:52.447 & +54:21:49.52 & 24.99 & 0.06 &23.31 &0.05 & 21.46 & 0.06\\ 14:03:54.564 & +54:21:19.97 & 25.25 & 0.05 &23.50 &0.02 & 21.48 & 0.07\\ 14:03:53.208 & +54:21:50.69 & 25.27 & 0.05 &23.46 &0.03 & 21.50 & 0.05\\ 14:03:54.040 & +54:21:57.44 & 25.67 & 0.06 
&23.78 &0.03 & 21.53 & 0.04\\ 14:03:53.151 & +54:21:33.80 & 25.82 & 0.05 &24.16 &0.05 & 21.53 & 0.06\\ 14:03:54.772 & +54:22:01.28 & 25.45 & 0.05 & 23.76 & 0.04 & 21.54 & 0.05\\ 14:03:53.027 & +54:21:34.49 & 25.34 & 0.05 &23.56 &0.03 & 21.55 & 0.04\\ 14:03:53.238 & +54:21:51.47 & 25.13 & 0.07 &23.50 &0.03 & 21.59 & 0.04\\ 14:03:51.968 & +54:21:33.02 & 25.84 & 0.07 &24.06 &0.03 & 21.60 & 0.04\\ 14:03:52.200 & +54:21:36.24 & 25.33 & 0.07 &23.58 &0.07 & 21.60 & 0.09\\ 14:03:35.520 & +54:22:17.92 & 25.22 & 0.07 &23.61 &0.02 & 21.62 & 0.04\\ 14:03:53.763 & +54:21:53.60 & 25.77 & 0.07 &24.15 &0.04 & 21.62 & 0.06\\ \end{tabular} \end{table} \begin{table} \centering {Table 2: (continued)} \\ \footnotesize \begin{tabular}{ccclclcl} \hline\hline RA & DEC & F435W & err & F555W & err & F814W & err \\ \hline 14:03:54.566 & +54:21:45.77 & 25.26 & 0.04 &23.57 &0.02 & 21.63 & 0.03\\ 14:03:56.658 & +54:22:22.04 & 25.63 & 0.05 &23.82 &0.03 & 21.64 & 0.04\\ 14:03:52.822 & +54:21:44.51 & 25.45 & 0.04 &23.84 &0.03 & 21.64 & 0.04\\ 14:03:54.379 & +54:21:48.41 & 25.84 & 0.06 &24.04 &0.02 & 21.65 & 0.03\\ 14:03:49.694 & +54:21:13.17 & 25.89 & 0.07 & 24.15 & 0.04 & 21.65 & 0.05\\ 14:03:54.057 & +54:22:02.87 & 25.09 & 0.05 & 23.48 & 0.03 & 21.66 & 0.04\\ 14:03:54.617 & +54:22:56.56 & 25.32 &0.04 & 23.69 & 0.02 &21.66 & 0.03\\ 14:03:37.847 & +54:22:02.26 & 25.47 & 0.05 & 23.81 & 0.03 & 21.66 & 0.04\\ 14:03:43.282 & +54:22:26.49 & 25.72 & 0.05 & 24.01 & 0.02 & 21.67 & 0.03\\ 14:03:51.967 & +54:22:53.10 & 25.58& 0.07 & 23.98 & 0.05 & 21.67 & 0.06\\ 14:03:52.185 & +54:21:18.89 & 25.65& 0.07 & 24.04 & 0.05 & 21.67 & 0.06\\ 14:03:53.076 & +54:21:35.38 & 25.73& 0.06 &23.86 &0.04 &21.68 & 0.04\\ 14:03:53.446 & +54:22:02.87 & 25.87& 0.16 &23.79 &0.04 &21.68 & 0.05\\ 14:03:52.282 & +54:21:58.84 & 26.00& 0.08 &24.05 &0.04 &21.68 & 0.05\\ 14:03:53.326 & +54:21:52.79 & 25.32& 0.08 &23.65 &0.06 &21.69 & 0.08\\ 14:03:51.768 & +54:21:23.80 & 25.75 & 0.07 & 23.90 & 0.04 & 21.70 & 0.05\\ 14:03:51.756 &
+54:21:23.63 & 25.76 & 0.10 & 24.01 & 0.06 & 21.70 &0.06\\ 14:03:56.264 & +54:21:53.28 & 25.64 & 0.11 &24.04 &0.04 &21.70 &0.05\\ 14:03:50.450 & +54:21:54.07 & 25.53 & 0.06 &23.75 &0.03 &21.70 &0.05\\ 14:03:54.868 & +54:21:59.07 & 25.89 & 0.09 &24.07 &0.03 &21.72 &0.04\\ 14:03:53.418 & +54:21:51.24 & 25.52 & 0.06 &23.77 &0.03 &21.73 &0.04\\ 14:03:55.619 & +54:21:56.82 & 25.72 & 0.06 &23.94 &0.03 &21.75 &0.04\\ 14:03:51.115 & +54:21:27.39 & 25.83 & 0.06 & 24.12 & 0.03 & 21.76 &0.04\\ 14:03:53.779 & +54:21:59.47 & 25.50 & 0.05 &23.63 & 0.04 &21.76 &0.05\\ 14:03:39.781 & +54:21:07.18 & 25.76 & 0.07 & 23.91 & 0.03 &21.76 & 0.05\\ 14:03:54.274 & +54:21:40.79 & 25.83 & 0.05 &24.21 & 0.04 &21.80 &0.05\\ 14:03:51.490 & +54:21:21.41 & 25.64 & 0.05 &24.02 & 0.03 &21.82 &0.04\\ 14:03:43.061 & +54:22:28.83 & 25.60 & 0.06 & 23.94 & 0.04 & 21.82 & 0.05\\ 14:03:52.248 & +54:21:32.04 & 25.50 & 0.05 & 23.87 & 0.02 &21.83 &0.03\\ 14:03:37.785 & +54:22:13.83 & 25.93 & 0.04 &24.15 &0.03 &21.83 &0.04\\ 14:03:40.408 & +54:23:25.01 & 25.58 & 0.06 &23.91 &0.03 &21.83 &0.05\\ 14:03:52.086 & +54:21:37.68 & 25.66 & 0.06 &23.92 &0.02 &21.84 &0.03\\ 14:03:52.948 & +54:21:42.57 & 25.51 & 0.06 &23.88 &0.03 &21.84 &0.04\\ 14:03:51.309 & +54:21:20.38 & 25.57 & 0.07 &23.95 &0.06 & 21.84 & 0.07\\ 14:03:42.129 & +54:23:49.29 & 25.39 & 0.08 & 23.75 & 0.05 &21.86 &0.06\\ 14:03:53.636 & +54:21:38.60 & 25.82 & 0.04 &24.17 &0.02 &21.87 &0.03\\ 14:03:53.212 & +54:21:59.41 & 25.44 & 0.06 &23.82 & 0.04 &21.87 &0.05\\ 14:03:54.313 & +54:22:04.51 & 25.85 & 0.15 &24.05 & 0.04 &21.88 &0.05\\ 14:03:53.181 & +54:21:45.97 & 25.76 & 0.05 &24.08 & 0.04 &21.89 &0.05\\ \end{tabular} \end{table} \section{Summary and Conclusions} We describe the motivation for, and data collected by HST to search for the progenitors of type Ib/c supernovae in the nearby giant spiral galaxy M101. The analysis methodology and early results of a search for WR and RSG stars in one HST WFC3 pointing of M101 are reported. 
In total, 75 WR and 164 RSG candidates are identified. There is a suggestion of clustering of WR candidates in the central core of the largest star-forming complex in the field. Thousands of WR and RSG candidates, and hundreds of spectroscopically confirmed WR stars, will be reported in future papers in this series. \acknowledgments This research is based on NASA/ESA Hubble Space Telescope observations obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy Inc. under NASA contract NAS5-26555. JLB and MMS acknowledge the interest and generous support of Hilary and Ethel Lipsitz. AFJM and LD are grateful to NSERC (Canada) and FQRNT (Quebec) for financial assistance. We thank Or Graur for suggestions on displaying the contours shown in Figure \ref{contours}.
\section{Introduction} \label{sec_intro} The Universe contains prominent structures up to $\sim 100\,{\rm Mpc}$, only reaching homogeneity on much larger scales \citep[e.g.][]{Peebles1980,Davis1985}. The properties of galaxies and other objects, which form and evolve in the cosmic web, are expected to be affected by their large-scale environments. Thus, astronomical observations, which are always made in limited volumes in the Universe, can be affected by the cosmic variance (CV) caused by spatial variations of the statistical properties of cosmic objects, such as galaxies, due to the presence of large-scale structure. Because of CV, statistics obtained from a sample that covers a specific volume in the Universe may differ from those expected for the Universe as a whole. Erroneous inferences would then be made if such biased observational data were used to constrain models. Cosmic variance is a well-known problem \citep[e.g.][]{Somerville2004,Jha2007,Driver2010,Moster2011,Marra2013,Keenan2013,Wojtak2014,Whitbourn2014,Whitbourn2016}, and various attempts have been made to deal with it. One way is to analyze different (sub-)samples, e.g. obtained from jackknife resampling of the observational data, and to use the variations among them to obtain some handle on the CV. However, this can only provide information about the variance within the total sample itself, not that of the total sample relative to a fair sample of the Universe. Another way is to use the spatial distribution of bright galaxies (a density-defining population), which can be observed in a large volume, to quantify the CV expected in sub-volumes \citep[e.g.][]{Driver2010}, or to re-scale (or correct) the number density of faint galaxies observed in a smaller volume, as was done by \citet{Baldry2012} in their estimate of galaxy stellar mass functions in the GAMA sample.
However, this method relies on the assumption that galaxies of different luminosities/masses have similar spatial distributions, which may not be true. The same problem also exists in the maximum likelihood method \citep[e.g.][]{G.Efstathiou1988}, where the galaxy luminosity function is explicitly assumed to be independent of environment. Yet another way is to estimate the CV expected from a given sample using simple, analytic models for the clustering properties of galaxies on large scales. Along this line, \citet{Somerville2004} tested the effects of the CV on different scales, and proposed the use of either the two-point correlation function of galaxies, or the combination of the linear density field with halo bias models \citep[e.g.][]{MoWhite1996,Sheth2001}, to predict the CV of different surveys. Similarly, \citet{Moster2011} carried out an investigation of the CV expected in observations of the galaxy populations at different redshifts, using the linear density field predicted by the ${\rm \Lambda CDM}$ model combined with a bias model that takes into account the dependence of the galaxy distribution on galaxy mass and redshift. Unfortunately, such an approach does not take into account observational selection effects. More importantly, this approach only gives a statistical estimate of the CV but does not measure the deviation of a specific sample from a fair sample. Finally, one can also use a large number of mock galaxy samples, either obtained directly from hydrodynamic simulations, or from N-body simulation-based semi-analytic (SAM) and empirical models, to quantify how the sample-to-sample variation of the statistical measure in question depends on sample volume. However, this needs a large set of simulations for each model, analyzed in a way that takes into account the observational selection effects in the data, which in practice is costly and time-consuming.
Furthermore, like the approach based on galaxy clustering statistics, this approach can only provide a statistical statement of the expected CV, but does not provide a way to correct the variance of a specific sample. Can one develop a systematic method to study the cosmic variance, and to quantify and correct biases that are present in observational data? The answer is yes, and the key is to use constrained simulations. Indeed, if one can accurately reconstruct the initial conditions for the formation of the structures in which the observed galaxy population resides, one can then carry out simulations with such initial conditions in a sufficiently large box that contains the constrained volume, so that the large box can be used as a fair sample, while the constrained region can be used to model the observational data. By comparing the statistics obtained from the mock samples with those obtained from the whole box, one can quantify and correct the CV in the observational data. In the past few years, the ELUCID collaboration has embarked on the development of a method to accurately reconstruct the initial conditions responsible for the density field in the low-$z$ Universe \citep{Wang2014}. As demonstrated by various tests \citep{Wang2014,Wang2016}, the reconstruction method is much more accurate than other methods that have been developed, and works reliably even in highly non-linear regimes. The initial conditions in a $500\, h^{-1}{\rm {Mpc}}$ box that contains the main part of the SDSS volume have already been obtained, and a high-resolution $N$-body simulation, run with $3072^3$ particles, has been carried out with these initial conditions in the current $\Lambda$CDM cosmology \citep{Wang2016}. In the present paper, we use the dark matter halo merger trees constructed from the ELUCID simulation to populate simulated halos with model galaxies predicted by the empirical galaxy formation model developed by~\citet[][hereafter L14, L15]{Lu2014,Lu2015}.
The model galaxies in the constrained volume are then used to construct mock catalogs that contain the same CV as the real SDSS sample. We compare galaxy stellar mass functions (GSMF) estimated from the mock catalogs with that obtained from the total simulation box to quantify the CV within the SDSS volume. Finally, we propose a method based on the conditional stellar mass or luminosity distribution in dark matter halos to correct for the CV in the observed GSMF. As we will see, the CV can be very severe at the low-mass end of the GSMF obtained from commonly adopted methods, and the low-mass end slope of the true GSMF in the low-$z$ Universe may be significantly steeper than those published in the literature. The structure of the paper is as follows. In \S\ref{sec_mergingtrees} we describe methods to graft Monte Carlo halo merger trees onto the simulated merger trees, so as to extend all trees down to a mass resolution sufficient for our purpose. In \S\ref{sec_populating} we populate simulated halos with galaxies using an empirical model of galaxy formation, and construct a number of mock catalogs that mimic the SDSS survey in both spatial distribution and physical properties. In \S\ref{sec_CVinGSMF} we examine in detail the cosmic variance in the estimates of the GSMF, and show how commonly adopted methods to measure the galaxy luminosity function (GLF) and GSMF fail to account for the CV. We also propose and test a new method to correct for CV in the GLF and GSMF, and apply it to the real SDSS data to obtain the CV-corrected GLF and GSMF. Finally, a brief summary of our main results is presented in \S\ref{sec_summary}.
Throughout the paper, we define the GSMF as $\Phi(M_*)=\mathrm{d}N/\mathrm{d}V/\mathrm{d}\log M_*$, which is the number of galaxies per unit volume per unit stellar mass in logarithmic space, and define the GLF in $\rm X$-band as $\Phi(M_{\rm X})={\rm d} N/{\rm d}V/{\rm d}(M_{\rm X}-5\log h)$, which is the number of galaxies per unit volume per unit magnitude. The magnitude $M_{\rm X}$ is $k$-corrected to redshift $0.1$ without evolution correction, unless specified otherwise. \section{Merger trees of dark matter halos from the ELUCID simulation} \label{sec_mergingtrees} \subsection{The simulation} We use the ELUCID simulation carried out by \citet{Wang2016} to model the dark matter halo population, their formation histories, and spatial distribution. This is an $N$-body simulation that uses L-GADGET, a memory optimized version of GADGET-2~\citep{Springel2005}, to follow the evolution of $3072^3$ dark matter particles (each with a mass of $3.088 \times 10^8 \, h^{-1}{\rm M_\odot}$) in a periodic cubic box with side length of $500\, h^{-1}{\rm {Mpc}}$ in co-moving units. The cosmology used is the one based on WMAP5~\citep{Dunkley2009,Komatsu2009}: a flat Universe with $\Omega_K=0$; a matter density parameter $\Omega_{\rm m,0}=0.258$; a cosmological constant $\Omega_{\Lambda,0}=0.742$; a baryon density parameter $\Omega_{\rm B,0}=0.044$; a Hubble constant $H_0=100h\ \mathrm{km\ s^{-1}\ \,{\rm Mpc}^{-1}}$ with $h=0.72$; and a Gaussian initial density field with power spectrum $P(k)\propto k^n$, with $n=0.96$ and with the amplitude specified by $\sigma_8=0.80$. The simulation is run from redshift $z=100$ to $z=0$, with outputs recorded at 100 snapshots between $z=18.4$ and $z=0.0$. 
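As a concrete illustration of the GSMF definition above, $\Phi(M_*)$ can be estimated from a volume-limited catalog by direct binning in $\log M_*$. The following Python sketch uses an invented toy catalog and a cubic volume; it is only a minimal illustration of the definition, not the estimator applied to the SDSS data later in the paper.

```python
import numpy as np

def gsmf(log_mstar, volume, bin_edges):
    """Phi(M_*) = dN / dV / dlog M_*: number density per dex of stellar mass.

    log_mstar : log10 of stellar masses (any consistent mass unit)
    volume    : comoving volume of the sample [(Mpc/h)^3]
    bin_edges : bin edges in log10 M_*
    """
    counts, edges = np.histogram(log_mstar, bins=bin_edges)
    return counts / volume / np.diff(edges)  # [h^3 Mpc^-3 dex^-1]

# Toy catalog: 10^5 galaxies with log-uniform masses in a (500 Mpc/h)^3 box
rng = np.random.default_rng(0)
log_m = rng.uniform(9.0, 12.0, size=100_000)
edges = np.linspace(9.0, 12.0, 13)
phi = gsmf(log_m, volume=500.0**3, bin_edges=edges)
```

By construction, integrating $\Phi$ over $\log M_*$ and multiplying by the volume recovers the total galaxy count, which provides a quick sanity check of any binned estimate.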
The initial conditions (phases of Fourier modes) of the density field are those obtained from the reconstruction based on the halo-domain method of~\citet{Wang2009} and the Hamiltonian Markov Chain Monte Carlo (HMC) method~\citep{Wang2013,Wang2014}, constrained by the distributions of dark matter halos represented by galaxy groups and clusters selected from the SDSS redshift survey \citep{Yang2007,Yang2012}. As shown in \citet{Wang2016} with the use of mock catalogs, more than $95\%$ of the groups with masses above $10^{14}\, h^{-1}{\rm M_\odot}$ can be matched with simulated halos of similar masses, with a distance error tolerance of $\sim 4\, h^{-1}{\rm {Mpc}}$, and massive structures such as the Coma cluster and the Sloan Great Wall are well reproduced in the reconstruction. Thus, the use of the constrained simulation from ELUCID allows us not only to model accurately the large-scale environments within which observed galaxies reside, but also to recover, at least partially, the formation histories of the massive structures seen in the local Universe. \subsection{The construction of halo merger trees} \label{ssec_trees} \begin{figure*} \centering \includegraphics[width=12cm]{fig1.pdf} \caption{ Conditional progenitor mass functions (mean fraction of mass in progenitors per unit logarithmic bin of $M_{\rm prog}/M_{z=0}$) of dark matter halos from different kinds of merger trees, for $z=0$ halos of different masses (each column) and for progenitors at different redshifts (each row). Black solid: halo merger trees obtained from the ELUCID simulation, repaired with Monte-Carlo-based trees. Blue solid: P08~\citep{Parkinson2007} Monte Carlo trees generated with the WMAP5 cosmology. Purple dashed: Millennium~\citep{Springel2005} FOF halo merger trees. Green dashed: P08 Monte Carlo trees with the Millennium cosmology.
The two vertical solid lines show the 20-particle mass resolution of halos in the ELUCID and Millennium simulations, respectively.} \label{fig:CHMF} \end{figure*} Halos and their sub-halos with more than 20 particles are identified with the friends-of-friends (FOF) and SUBFIND algorithms \citep{Springel2005b}. To be safe, we only use halos identified in the simulation with masses $M_{\rm h} \geq M_{\rm th} = 10^{10}\, h^{-1}{\rm M_\odot}$. However, this mass resolution is not sufficient to resolve lower-mass halos in which star formation may still be significant, particularly at high $z$. In order to trace the star formation histories in halos to high redshifts, we need to reach a halo mass of about $10^9\, h^{-1}{\rm M_\odot}$, below which star formation is expected to be unimportant due to photo-ionization heating \citep[e.g.][]{Babul1992,Thoul1996}. Here we adopt a Monte Carlo method to extend the merger trees of the simulated halos down to a mass limit of $10^{9}\, h^{-1}{\rm M_\odot}$. \citet{Jiang2014} have tested the performance of several different methods of generating Monte Carlo halo merger trees, and found that the method of \citet[][hereafter P08]{Parkinson2007} consistently provides the best match to the halo merger trees obtained from $N$-body simulations. We therefore adopt the P08 method. We join the P08 Monte Carlo trees to the halo merger trees obtained from the simulation through the following steps: \begin{enumerate}[fullwidth,itemindent=1em,label=(\roman*)] \item For each simulated halo merger tree $T$, we eliminate halos that have masses below $M_{\rm th} = 10^{10} \, h^{-1}{\rm M_\odot}$ but have no progenitors more massive than $M_{\rm th}$. The purpose of the second condition is to preserve halos which once had masses larger than $M_{\rm th}$ but have become less massive later due to stripping and/or mass loss.
\item For each halo $H$ that is not eliminated in $T$, we generate a Monte Carlo tree $t$ (down to $10^9\, h^{-1}{\rm M_\odot}$), rooted at a halo $h$ that has the same mass and the same redshift as $H$, and eliminate all halos more massive than $10^{10}\, h^{-1}{\rm M_\odot}$ in $t$. \item We add $t$ to $H$. The procedure is repeated for all halos with masses above $10^{10}\, h^{-1}{\rm M_\odot}$ in all trees in the ELUCID simulation, so that all such halos have merger trees extended to $10^9\, h^{-1}{\rm M_\odot}$. \item For halos with masses below $10^{10}\, h^{-1}{\rm M_\odot}$ at $z=0$, their merger trees are entirely generated with the Monte Carlo method. Note that these halos are not identified from the simulation, but can be used to model galaxies in such low-mass halos when needed. \end{enumerate} With these steps, we obtain `repaired' halo merger trees that have a mass resolution of $10^9\, h^{-1}{\rm M_\odot}$, with halos more massive than $10^{10}\, h^{-1}{\rm M_\odot}$ sampled entirely from the simulation, and the less massive ones modeled by Monte Carlo trees. Fig.~\ref{fig:CHMF} shows the conditional progenitor mass functions of dark matter halos, defined as the fraction of mass in progenitors per logarithmic mass bin, for merger trees rooted at $z=0$ halos of different masses, and for progenitors at different redshifts. Our results, obtained by combining the simulated trees above the mass resolution $M_{\rm th}$ with the Monte Carlo merger trees generated with the P08 model below the mass limit, are shown by the black solid lines, and compared with the merger trees generated entirely with the P08 model. Overall, the progenitor mass distributions we obtain match well those obtained from the Monte Carlo method, indicating that our merger trees are reliable.
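The pruning and grafting steps above can be sketched in Python. The `Halo` container and the Monte Carlo generator passed in as `p08_tree` are hypothetical stand-ins (a real implementation would wrap the P08 algorithm); the sketch only illustrates the logic of the elimination and grafting rules, not the production tree-builder.

```python
# Sketch of the tree-repair procedure; M_TH and M_RES follow the text.
M_TH = 1e10   # simulation mass threshold [h^-1 M_sun]
M_RES = 1e9   # target resolution of the repaired trees

class Halo:
    def __init__(self, mass, z, progenitors=None):
        self.mass = mass
        self.z = z
        self.progenitors = progenitors or []

def max_mass_in_subtree(halo):
    """Largest mass anywhere in the subtree rooted at `halo`."""
    return max([halo.mass] + [max_mass_in_subtree(p) for p in halo.progenitors])

def repair_tree(halo, p08_tree):
    # step (i): drop sub-threshold branches that never exceeded M_TH
    halo.progenitors = [p for p in halo.progenitors
                        if max_mass_in_subtree(p) >= M_TH]
    for p in halo.progenitors:
        repair_tree(p, p08_tree)
    # steps (ii)+(iii): graft a Monte Carlo tree rooted at (mass, z), keeping
    # only its halos below M_TH so the simulated part is not duplicated
    grafted = [h for h in p08_tree(halo.mass, halo.z, M_RES) if h.mass < M_TH]
    halo.progenitors.extend(grafted)
    return halo
```

For simplicity the stand-in generator returns a flat list of progenitor halos; preserving the full branching structure of the Monte Carlo tree changes the bookkeeping but not the elimination rules.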
Since galaxies form and evolve in dark matter halos, our `repaired' halo merger trees from the ELUCID simulation provide the basis to link galaxy properties to dark matter halos, and can be used in combination with halo-based methods of galaxy formation, such as abundance matching, semi-analytic and other empirical models, to populate halos with galaxies. The method can, in principle, be applied to simulated halos with any mass resolution and with any cosmology, to extend halo merger trees to a sufficiently low mass, as long as reliable Monte Carlo trees can be generated. We note that our merger trees do not include higher-order sub-halos, i.e. sub-halos within sub-halos. In the next section, we apply the empirical model, developed in L14 and L15, to follow galaxy formation and evolution in dark matter halos, based on our repaired halo merger trees. \section{Populating halos with galaxies} \label{sec_populating} In this section, we describe the empirical method of L14 and L15 \citep{Lu2014,Lu2015}, which we use to populate galaxies in the halo merger trees described in the previous section. Briefly, we assign a central galaxy to each distinct halo and give it an appropriate star formation rate (SFR) according to the empirical model. We then evolve all galaxies in the current snapshot to the next, following the accretion of galaxies by dark matter halos and the mergers of galaxies. The stellar masses of both central and satellite galaxies are obtained by integrating their star formation histories. Finally, observable quantities, such as luminosity and apparent magnitude, are obtained from a stellar population synthesis model. \subsection{The empirical model of galaxy formation} \begin{figure} \includegraphics[width=\columnwidth]{fig2.pdf} \caption{ Galaxy stellar mass functions at redshift $z=0$. Black solid line: model galaxies based on our repaired trees. Purple solid line: from~\citealp{Lu2014}, based on Monte Carlo halo merger trees.
Green dots with error bars: the observational results used by L14 to calibrate the model.} \label{fig:gsmf} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{fig3.pdf} \caption{Conditional stellar mass functions in halos of different masses $M_{\rm h}$ (as indicated in each panel) at redshift $z=0$. Black lines: model galaxies based on our repaired trees, for central galaxies (dashed) and satellite galaxies (solid). The error bars indicate the standard deviations among 100 bootstrap resamplings. Purple lines: from~\citealp{Lu2014}, based on Monte Carlo halo merger trees, for central galaxies (dashed) and satellite galaxies (solid). Green markers with error bars: the observational results of~\citet{Yang2008}, for central galaxies (circles) and satellite galaxies (triangles). } \label{fig:cgsmf} \end{figure} \begin{figure*} \centering \includegraphics[width=15cm]{fig4.pdf} \caption{Spatial distribution of SDSS (left panel) and mock (right panel) galaxies. Selections are made for all galaxies with $r\le 17.6$ in the redshift range $[0.01,\ 0.12]$ in the Sloan NGC region. Only galaxies in a $4^\circ$ declination slice are plotted.} \label{fig:sky_sdss} \end{figure*} In the model of L14 and L15, the SFR of a central galaxy is assumed to depend on the halo mass $M_{\rm halo}$ and redshift $z$ as \begin{equation} \begin{split} {\rm SFR}&(M_{\rm halo},z) =\\ &\varepsilon \frac{f_{\rm B} M_{\rm halo}}{\tau _0} (1+z)^{\kappa}(1+X)^{\alpha} \left(\frac{X+R}{X+1}\right)^{\beta} \left(\frac{X}{X+R}\right)^{\gamma} \end{split} \end{equation} where $\tau_0 = 1/(10 H_0)$ and $\kappa = 3/2$; $f_{\rm B}=\Omega_{\rm B,0}/\Omega_{\rm m,0}$ is the cosmic baryon fraction, and $\varepsilon$ and $\beta$ are time-independent model parameters.
The parameters $\alpha$ and $\gamma$ are assumed to be time-dependent, given by \begin{equation} \alpha = \alpha_0(1+z)^{\alpha'} \end{equation} and \begin{equation} \gamma = \left\{ \begin{aligned} &\gamma_a& \mathrm{if}\ z < z_c\\ &(\gamma_a-\gamma_b)\left(\frac{z+1}{z_c+1}\right)^{\gamma'} + \gamma_b & \mathrm{if}\ z \geq z_c \end{aligned} \right. \end{equation} where $\alpha_0$, $\alpha'$, $\gamma_a$, $\gamma_b$, $\gamma'$ and $z_c$ are time-independent model parameters. In L14 and L15, both $\alpha$ and $\gamma$ are chosen to be time-dependent to make the model compatible with the observed galaxy stellar mass functions (GSMFs) at different redshifts and with the composite conditional luminosity function of cluster galaxies at redshift $z=0$~\citep[see also][]{Lim2017}. All the model parameters are determined by fitting the model predictions to a set of observational data (see the original papers for details). Here we adopt the parameters listed in L14 (denoted ``Model III SMF+CGLF'' therein), which are based on a cosmology consistent with the WMAP5 cosmology~\citep{Dunkley2009,Komatsu2009} used here. Once a dark matter halo hosting a galaxy is accreted by a bigger halo, its central galaxy is assumed to become a satellite galaxy, and thus to experience satellite-specific processes, such as tidal stripping and ram-pressure stripping, which may reduce or quench its star formation. L14 modelled the SFR in satellites as \begin{equation} {\rm SFR}(M_*,z) = {\rm SFR}(t_{\rm accr}) \exp\left[ - \frac{t-t_{\rm accr}} {\tau(M_*)}\right] \end{equation} with \begin{equation} \tau(M_*) = \tau_{*,0}\exp(-M_*/M_{*,c})\,. \end{equation} Here $M_*$ is the current stellar mass of the satellite galaxy, $\tau_{*,0}$ and $M_{*,c}$ are time-independent model parameters, and $t_{\rm accr}$ is the cosmic time at which the host halo of the galaxy is accreted.
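The central-galaxy SFR model above can be sketched in a few lines. This is a minimal illustration only, not the calibrated model: the parameter values below are placeholders, and since $X$ and $R$ are not defined in this section, the code treats $X$ as a scaled halo mass $M_{\rm halo}/M_c$ and $R$ as a constant, both of which are assumptions.

```python
import numpy as np

# Hedged sketch of the L14/L15 central-galaxy SFR model above.
# Parameter values are placeholders, NOT the fitted 'Model III SMF+CGLF'
# values; X = M_halo / M_c and constant R are assumed definitions.

F_B = 0.17      # cosmic baryon fraction f_B (assumed value)
TAU0 = 1.38     # tau_0 = 1/(10 H_0) in Gyr, assuming H_0 = 70 km/s/Mpc
KAPPA = 1.5     # kappa = 3/2

def alpha_z(z, alpha0, alpha_p):
    """alpha(z) = alpha_0 (1 + z)^alpha', as in the equation above."""
    return alpha0 * (1.0 + z)**alpha_p

def gamma_z(z, gamma_a, gamma_b, gamma_p, z_c):
    """Piecewise gamma(z), as in the equation above."""
    if z < z_c:
        return gamma_a
    return (gamma_a - gamma_b) * ((z + 1.0) / (z_c + 1.0))**gamma_p + gamma_b

def sfr_central(m_halo, z, eps, m_c, alpha, beta, gamma, R):
    X = m_halo / m_c                       # assumed definition of X
    return (eps * F_B * m_halo / TAU0 * (1.0 + z)**KAPPA
            * (1.0 + X)**alpha
            * ((X + R) / (X + 1.0))**beta
            * (X / (X + R))**gamma)
```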
After accretion, the satellite halo and the galaxies it hosts are expected to experience dynamical friction, which causes them to move towards the inner part of the new host halo. The satellites may then merge with the central galaxy located near the center. We follow L14 and use an empirical model to determine the time when the merger occurs: \begin{equation} \Delta t = 0.216\frac{Y^{1.3}}{\ln(1+Y)}\exp(1.9\eta)\frac{r_{\rm halo}}{v_{\rm halo}}, \end{equation} where $Y=M_{\rm cen}/M_{\rm sat}$ is the mass ratio between the central halo and the satellite halo at the time when the accretion occurs, and $r_{\rm halo}$ and $v_{\rm halo}$ are the virial radius and virial velocity of the central halo \citep[e.g.][]{Boylan-Kolchin2008}. The parameter $\eta$ describes the specific orbital angular momentum, and is assumed to follow a probability distribution $P(\eta) = \eta^{1.2}(1-\eta)^{1.2}$ \citep[e.g.][]{Zentner2005}. After the merger, a fraction $f_{\rm TS}$ of the stellar mass of the satellite is added to the central galaxy, with $f_{\rm TS}$ a model parameter. The ingredients given above can be used to predict the stellar masses and SFRs of both central and satellite galaxies. In order to make predictions for galaxy luminosities in different bands, we also need the metallicities of stars. We use the mean metallicity--stellar mass relation given by \citet{Gallazzi2005} to assign metallicities to galaxies according to their masses. A simple stellar population synthesis model, based on \citet{Bruzual2003} with a Chabrier initial mass function \citep{Chabrier2003}, is adopted to obtain the mass-to-light ratios of the formed stars and the mass loss due to stellar evolution. We note that the L14 model, which is based on Monte Carlo merger trees, does not take into account some special events that exist in numerical simulations.
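The merger-time prescription above is easy to sample, since $P(\eta)\propto \eta^{1.2}(1-\eta)^{1.2}$ is a Beta distribution with both shape parameters equal to 2.2. A minimal sketch (the returned time is in the units implied by $r_{\rm halo}/v_{\rm halo}$; the input values are illustrative):

```python
import numpy as np

# Sketch of the dynamical-friction merger-time model above. Note that
# P(eta) ~ eta^1.2 (1 - eta)^1.2 is a Beta(2.2, 2.2) distribution,
# so eta can be drawn directly with numpy.

def merger_time(m_cen, m_sat, r_halo, v_halo, rng):
    """Delta t in the time units implied by r_halo / v_halo."""
    Y = m_cen / m_sat                  # central-to-satellite halo mass ratio
    eta = rng.beta(2.2, 2.2)           # specific orbital angular momentum
    return (0.216 * Y**1.3 / np.log(1.0 + Y)
            * np.exp(1.9 * eta) * r_halo / v_halo)

rng = np.random.default_rng(0)
dt = merger_time(1e13, 1e11, 0.3, 200.0, rng)  # illustrative r, v values
```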
In simulated merger trees, some sub-halos were main halos at early times, were later accreted into other systems as satellites, and were eventually ejected to become main halos again. For such cases, we treat the galaxy in the sub-halo as a satellite galaxy even after the sub-halo is ejected, while the ejected sub-halos themselves are treated as new main halos after ejection. This implementation does not make much physical sense, but it best mimics the Monte Carlo merger trees, in which sub-halos are never ejected and all halos at a given time are treated equally, regardless of whether or not they have passed through a bigger halo. Such an implementation is necessary, as the model parameters given by L14 are calibrated using Monte Carlo merger trees. Fig.~\ref{fig:gsmf} shows the galaxy stellar mass function (GSMF) of model galaxies at redshift $z = 0$ (black solid line), in comparison with the result of L14 (purple solid line). As one can see, the L14 result is well reproduced over a wide range of stellar mass, which demonstrates that our implementation of the L14 model with the ELUCID halo merger trees is reliable, as far as the overall GSMF is concerned. For reference, we also include the observational data points (green dots with error bars) that were used in L14 to constrain their model parameters. As a more demanding test, we compare in Fig.\,\ref{fig:cgsmf} the conditional galaxy stellar mass functions (CGSMFs) in halos of different masses at redshift $z=0$ obtained from the ELUCID halo merger trees with those given by L14. Here again we see good agreement between the two. Since the CGSMF gives the average number of galaxies of a given stellar mass in a halo of a given mass, a good match in the CGSMFs also implies that the spatial clustering of galaxies as a function of stellar mass is reproduced.
\subsection{Galaxy occupation in dark matter halos} To use our model galaxies to construct mock catalogs, we need to assign spatial positions and peculiar velocities to the galaxies in each halo of the simulation, according to the halo occupation distributions (HODs) obtained from the empirical model described above. Here we adopt a sub-halo abundance matching method that links galaxies in a halo to the sub-halos in it. As shown in \citet{Wang2016}, the sub-halo population can be identified reliably in the ELUCID simulation down to sub-halo masses of $\sim 10^{10}\, h^{-1}{\rm M_\odot}$. The abundance matching goes as follows. For a given halo, we first rank galaxies in descending order of stellar mass and sub-halos in descending order of halo mass. Here the mass of a sub-halo is that at the time when the sub-halo was first accreted into its host. Note that both the sub-halos identified directly from the simulation and those added using Monte Carlo merger trees (see~\S\ref{ssec_trees}) are used. For sub-halos identified in the simulation, their positions and velocities are those given by SUBFIND. For the Monte Carlo sub-halos that are joined to the simulated halos, on the other hand, we assign random positions according to the NFW profile \citep{Navarro1997}, with concentration parameters given by \citet{Zhao2009}, and their velocities are drawn from a Gaussian distribution with a dispersion appropriate for the assumed density profile. For sub-halos rooted in a $z=0$ Monte Carlo halo, the galaxies they host are usually too faint to be relevant; they are included only for completeness and are not actually used in constructing the mock catalog. Finally, the position and velocity of each sub-halo are assigned to the galaxy with the same rank. Galaxies that do not have sub-halo counterparts are assigned positions and velocities randomly according to the NFW profile.
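The per-halo rank matching described above can be condensed into a short sketch. All input arrays are hypothetical; galaxies ranked by stellar mass inherit the phase-space coordinates of the sub-halo with the same rank in infall mass:

```python
import numpy as np

# Minimal sketch of the rank-order abundance matching within one halo.
# Rows left as NaN mark galaxies without sub-halo counterparts, which
# would then receive random positions drawn from the NFW profile.

def match_by_rank(gal_mstar, sub_minfall, sub_pos, sub_vel):
    gal_order = np.argsort(gal_mstar)[::-1]    # descending stellar mass
    sub_order = np.argsort(sub_minfall)[::-1]  # descending infall mass
    n = min(len(gal_order), len(sub_order))
    pos = np.full((len(gal_mstar), 3), np.nan)
    vel = np.full((len(gal_mstar), 3), np.nan)
    pos[gal_order[:n]] = sub_pos[sub_order[:n]]
    vel[gal_order[:n]] = sub_vel[sub_order[:n]]
    return pos, vel
```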
This method can be used to construct volume-limited samples within the entire simulation box down to stellar masses of $\sim 10^8\, h^{-1}{\rm M_\odot}$, with full phase-space information obtained from the simulated sub-halos. This is sufficient for most of our purposes. \subsection{The SDSS mock catalog} With full information about the luminosities and phase-space coordinates of individual galaxies, it is straightforward to make mock catalogs by using galaxies in the constrained volume and applying the same selection criteria as in the observation. To each model galaxy in the simulation box, we assign a cosmological redshift, $z_{\rm cos}$, according to its distance to a virtual observer, and the observed redshift, $z_{\rm obs}$, is given by $z_{\rm cos}$ together with its line-of-sight (los) peculiar velocity, $v_{\rm los}$: \begin{equation} z_{\rm obs} = z_{\rm cos} + (1+z_{\rm cos})\frac{v_{\rm los}}{c}\,, \end{equation} with $c$ the speed of light. Here the location of the virtual observer and the coordinate system are determined by the orientation of the SDSS volume in the simulation box. SDSS apparent magnitudes in $u$, $g$, $r$, $i$, and $z$ are assigned to each galaxy according to its luminosities in the corresponding bands. For our SDSS mock sample, we select all galaxies in the SDSS Northern-Galactic-Cap (NGC) region with redshifts $0.01 < z < 0.12$ and with magnitude $r \leq 17.6$. Fig.~\ref{fig:sky_sdss} shows the real SDSS galaxies (left) and the mock galaxies (right) in the same slice of the SDSS sky coverage. It is clear that the distribution of the mock galaxies is very similar to that of the real galaxies. The large-scale structures in the local Universe, such as the Sloan Great Wall at redshift $\approx 0.08$, are well reproduced. Thus, the mock catalog can be used to investigate both the properties of the galaxy population in the cosmic web and the large-scale clustering of galaxies.
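The observed-redshift assignment in the equation above amounts to a one-line function:

```python
# Sketch of the observed-redshift assignment above.
C_KMS = 299792.458  # speed of light in km/s

def observed_redshift(z_cos, v_los):
    """z_obs from the cosmological redshift and the line-of-sight
    peculiar velocity (in km/s), as in the equation above."""
    return z_cos + (1.0 + z_cos) * v_los / C_KMS
```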
In particular, since all galaxies above our mass resolution limit, which is about $10^{8}\, h^{-1}{\rm M_\odot}$, are modeled in the entire simulation box, a comparison of the statistical properties between the SDSS mock catalog and the whole simulation box carries information about the cosmic variance (CV) of the SDSS sample. \section{Cosmic variance in galaxy stellar mass functions} \label{sec_CVinGSMF} The realistic model catalogs described above have many applications, such as studying the relationships between galaxies and the mass density field, and investigating the galaxy population in different components of the cosmic web. Here we use them to analyze and quantify the CV in the measurements of the galaxy stellar mass function (GSMF) and luminosity function (GLF). We first use model galaxies in the whole simulation box to quantify the CV as a function of sample volume and galaxy mass. We then use the SDSS mock catalog to examine the CV in the SDSS, and to investigate the abilities of different estimators of the GSMF/GLF to account for the CV. We propose and test a new method that can best correct for the CV. Finally, we apply our method to the SDSS catalog to obtain a GLF and a GSMF that are free of the CV. \subsection {Cosmic variance as a function of sample volume and galaxy mass} \begin{figure} \includegraphics[width=\columnwidth]{fig5.pdf} \caption{GSMFs at $z=0$ in sub-boxes of the $500 \, h^{-1}{\rm {Mpc}}$ ELUCID simulation box. For each sub-box size $L_{\rm box}\le 100\, h^{-1}{\rm {Mpc}}$, 100 non-overlapping sub-boxes are randomly chosen in the simulation box, while for $L_{\rm box} = 250 \, h^{-1}{\rm {Mpc}}$, all 8 sub-boxes are used. The GSMFs of individual sub-boxes are shown by the green curves in each panel. The average over the sub-boxes of the same size is given by the red line in each panel.
Error bars covering the $96\%$ ($2\sigma$) range among different sub-boxes are also plotted.} \label{fig:cosV_gsmf_boxes} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig6.pdf} \caption{ Upper panel: Cosmic variance $\sigma_{\rm CV}$ as a function of the characteristic sample size $L_{\rm s}$ and stellar mass $M_*$, as indicated in the panel. Solid lines are $\sigma_{\rm CV}$ estimated from the mock sample, while dashed lines are from the fitting formula. Lower panel: Ratio of $\sigma_{\rm CV}$ between the mock sample and the model prediction. The black solid line indicates a ratio of $1.0$. } \label{fig:cosV_gsmf_box_mock} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{fig7.pdf} \caption{ The covariance, ${\rm Cov}(M_{*,1},\ M_{*,2};\ L_{\rm s})$, of the GSMF between two stellar masses, $M_{*,1}$ and $M_{*,2}$, as a function of the characteristic sample size $L_{s}$. Symbols show results obtained from the mock sample. Different $M_{*,1}$ are represented by different symbols: $M_{*,1} [\, h^{-1}{\rm M_\odot}] = 10^{9.1},\ 10^{10.3},\ 10^{11.5}$, from bottom to top, scaled by $0.01,\ 1,\ 100$, respectively, for clarity. Different $M_{*,2}$ with the same $M_{*,1}$ are re-scaled by $0.3,\ 1,\ 1.2$, for $M_{*,2} [\, h^{-1}{\rm M_\odot}] = 10^{9.1},\ 10^{10.3},\ 10^{11.5}$, respectively. The solid curves are model predictions. } \label{fig:cov_gsmf_box_mock} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{fig8.pdf} \caption{ The model of cosmic variance compared with SDSS data. Upper panel: Cosmic variance $\sigma_{\rm CV}$ of the GSMF as a function of the characteristic sample size $L_{\rm s}$, for galaxies of different stellar masses, $M_*$, as shown by different colors. Solid lines are $\sigma_{\rm CV}$ estimated from the SDSS sample, while dashed lines are predictions of the fitting model. Lower panel: The ratio of $\sigma_{\rm CV}$ between the SDSS sample and the model.
The black horizontal line indicates a ratio of $1.0$. } \label{fig:cosV_gsmf_box_SDSS} \end{figure} To quantify the effects of CV, we partition the whole $500^3\ h^{-3}\mathrm{Mpc}^3$ simulation box into non-overlapping sub-boxes of a given size $L_{\rm s}$. For each sub-box $i$ we calculate the galaxy number density $n_{g,i}(M_*; L_s)$. Fig.~\ref{fig:cosV_gsmf_boxes} shows the GSMFs obtained for sub-boxes with sizes $L_{\rm s} = 25$, $50$, $100$, and $250 \, h^{-1}{\rm {Mpc}}$ (100 sub-boxes for each of the three smaller sizes, and all 8 sub-boxes for $L_{\rm s}=250 \, h^{-1}{\rm {Mpc}}$). The results of individual sub-boxes are shown by the green lines, while the average and the $2\sigma$ ($96\%$) variance among the GSMFs are shown by the red curve and the error bars, respectively. As expected, the scatter among the sub-boxes decreases as the sub-box size increases. For instance, the scatter for $L_{\rm s}=50\, h^{-1}{\rm {Mpc}}$ is $\approx 0.3\ {\rm dex}$ over almost the entire stellar mass range, while it is smaller than $10\%$ for $L_{\rm s}=250 \, h^{-1}{\rm {Mpc}}$. Theoretically, the galaxy number density $n_g$ is related to the mass density $\rho_m$ by a stochastic bias relation: \begin{equation} \delta_g = b \delta_m +\epsilon\,, \end{equation} where $\delta_g=(n_g/{\overline n}_g)-1$ and $\delta_m=(\rho_m/{\overline\rho}_m)-1$, with ${\overline n}_g$ and ${\overline\rho}_m$ being the mean number density of galaxies and the mean mass density of the Universe. The coefficient $b$ is the bias parameter, which characterizes the deterministic part of the bias relation, and $\epsilon$ is the stochastic part. If the galaxy number density field is a Poisson sampling of the mass density field, then the total variance in the galaxy density can be written as \begin{equation} \sigma_t^2 = \sigma^2_{\rm CV} + \sigma_{\rm P}^2, \label{eq_sigma_t} \end{equation} where $\sigma_{\rm P}=N^{-1/2}$ is due to Poisson fluctuations, with $N$ the number of galaxies counted in the sub-box.
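The decomposition in equation (\ref{eq_sigma_t}) suggests a direct estimator of the cosmic variance from sub-box counts: measure the total fractional variance among the sub-boxes and subtract the Poisson term. A minimal sketch, assuming equal-volume sub-boxes:

```python
import numpy as np

# Sketch of the variance decomposition: sigma_CV^2 = sigma_t^2 - sigma_P^2,
# with the total fractional variance measured among sub-box counts and
# the Poisson part estimated from the mean count (sigma_P = N^{-1/2}).

def cosmic_variance_from_counts(counts):
    """sigma_CV from galaxy counts in equal-volume sub-boxes."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    sigma_t2 = counts.var() / mean**2   # total fractional variance
    sigma_p2 = 1.0 / mean               # Poisson part
    return np.sqrt(max(sigma_t2 - sigma_p2, 0.0))
```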
Assuming linear bias, the deterministic part, which we refer to as the cosmic variance (CV), can be written as \begin{equation} \sigma_{\rm CV}^2 (M_*; L_s) =b^2(M_*) \sigma_m^2 (L_s), \label{eq_fit_sigma_cv} \end{equation} where $L_s$ is the characteristic size of the sample, and $\sigma_m(L_s)$ is the rms of the mass fluctuation on the scale of $L_s$. Motivated by this, we model $\sigma_{\rm CV}$ using the GSMF obtained from the simulated galaxies. The number densities $n_{g,i}$ of all sub-boxes are combined to give the mean value, ${\overline n}_g(M_*; L_s)$, and the variance, $\sigma_t^2(M_*;\ L_s)$. We use ${\overline n}_g$ to estimate the expected Poisson variance, $\sigma_{\rm P}^2$, and use equation (\ref{eq_sigma_t}) to estimate $\sigma_{\rm CV}^2(M_*; L_s)$ by subtracting the Poisson part from the total variance. Equation (\ref{eq_fit_sigma_cv}) is then used to fit the dependence of the CV on stellar mass and sub-box size. We find that the $L_s$-dependence can be well described by \begin{equation} \log \sigma_m (x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3\,, \end{equation} where $x = \log (L_{s}/\, h^{-1}{\rm {Mpc}})$, with $p_0=1.53$, $p_1=-2.02$, $p_2=0.92$, and $p_3=-0.25$, while the $M_*$-dependence is well described by \begin{equation} \log b(y)= q_0 + q_1 y + q_2 y^2 + q_3 y^3\,, \end{equation} where $y = \log (M_*/\, h^{-1}{\rm M_\odot})$, with $q_0=-16.04$, $q_1= 5.79$, $q_2=-0.68$, and $q_3=0.026$. Fig.~\ref{fig:cosV_gsmf_box_mock} shows the comparison between $\sigma_{\rm CV}$ obtained directly from the simulated galaxy sample and the model prediction, as functions of $L_s$ for galaxies of different $M_*$, represented by different lines. The fitting formulae work well over the range from $10^8 \, h^{-1}{\rm M_\odot}$ to $10^{11.6} \, h^{-1}{\rm M_\odot}$ in $M_*$, and from $10 \, h^{-1}{\rm {Mpc}}$ to $125\, h^{-1}{\rm {Mpc}}$ in $L_s$. The above prescription also provides a model for the covariance matrix of the cosmic variance.
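As a concrete illustration, the fitted model can be evaluated with the polynomial coefficients quoted above; this sketch simply composes $b(M_*)$ and $\sigma_m(L_s)$:

```python
import numpy as np

# Evaluation of the fitted sigma_CV(M*, L_s) model, using the polynomial
# coefficients quoted in the text.

P = (1.53, -2.02, 0.92, -0.25)    # log10 sigma_m(x), x = log10(L_s / Mpc/h)
Q = (-16.04, 5.79, -0.68, 0.026)  # log10 b(y),       y = log10(M* / Msun/h)

def sigma_cv_model(mstar, l_s):
    """sigma_CV = b(M*) * sigma_m(L_s), as in the equations above."""
    x = np.log10(l_s)
    y = np.log10(mstar)
    log_sigma_m = sum(p * x**i for i, p in enumerate(P))
    log_b = sum(q * y**i for i, q in enumerate(Q))
    return 10.0**(log_b + log_sigma_m)
```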
Consider the covariance matrix, $C$, of the densities between galaxies of masses $M_{*,1}$ and $M_{*,2}$. The bias model described above gives \begin{equation} C (M_{*,1}, M_{*,2}; L_s) = b(M_{*,1}) b(M_{*,2}) \sigma_m^2(L_s). \end{equation} Fig.~\ref{fig:cov_gsmf_box_mock} shows the ratio between the measured $C$ and the model predictions as a function of $L_s$ for a number of $(M_{*,1}, M_{*,2})$ pairs. Overall the model matches the measurements well. Some discrepancies can be seen for massive galaxies and small $L_s$, where the model prediction is slightly lower than that measured from the simulation data. We can compare the CV model calibrated above with that obtained from the SDSS data. To this end, we estimate the total variance, the Poisson variance, and the cosmic variance using sub-boxes of a given $L_{s}$ that are fully contained in the SDSS volume, within which the sample is complete for a given $M_*$. In order to estimate the variance among sub-boxes reliably, we only present cases where at least 10 sub-boxes are available. The results, plotted in Fig.~\ref{fig:cosV_gsmf_box_SDSS}, show that the SDSS measurements follow the model predictions for $10 < L_{s} < 75\, h^{-1}{\rm {Mpc}}$ and $ M_* > 10^9\, h^{-1}{\rm M_\odot} $. Note that we did not fit $\sigma_{\rm CV}$ for $M_* > 10^{11.6} \, h^{-1}{\rm M_\odot}$, but the extrapolation seems to match the SDSS measurements well even at such stellar masses. For $M_* < 10^9 \, h^{-1}{\rm M_\odot}$, the variance obtained from the SDSS becomes significantly lower than the model prediction. As we will see below, this deviation is caused by the fact that the local volume, within which such galaxies can be observed, does not sample the galaxy population fairly. To summarize, the simple model presented above provides a useful way to estimate the level of CV expected in the measurements of the GSMF.
This variance, which is produced by the fluctuations of the cosmic density field, should be combined with the Poisson variance from number counting to estimate the total uncertainty in the GSMF. This is particularly important when the galaxy population is observed in a small volume, so that the cosmic variance is larger than the counting error. In real applications, other types of uncertainties, such as errors in photometry, redshifts, and stellar mass estimates, should also be modeled properly along with the CV described here. \subsection{Cosmic variances in the SDSS volume} \label{ssec_cv_sdss_region} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig9.pdf} \caption{ Galaxy and halo number densities as functions of redshift $z$ in the Sloan sky coverage, from $z=0.01$ to $0.12$. Red lines are for simulated halos (Solid: halos with $M_{\rm h}>10^{12}\, h^{-1}{\rm M_\odot}$, offset by $\times 60.0$; Dashed: $10^{8} \leq M_{\rm h} \leq 10^{12}\, h^{-1}{\rm M_\odot}$). Blue lines are for mock galaxies based on the empirical model (Solid: $M_* > 10^{10.5}\, h^{-1}{\rm M_\odot}$, offset by $\times 22.0$; Dashed: $10^{9.5} \leq M_* \leq 10^{10.5} \, h^{-1}{\rm M_\odot}$, offset by $\times 4.8$). Purple lines are for SDSS galaxies (Solid: galaxies with $M_r < -20.5$, offset by $\times 8.0$; Dashed: $-20.5 \leq M_r \leq -19.5$, offset by $\times 4.4$). Horizontal lines are the mean number densities in the corresponding volumes. } \label{fig:num_den_4} \end{figure} \begin{figure*} \centering \includegraphics[width=14cm]{fig10.pdf} \caption{The galaxy stellar mass functions (GSMFs), $\Phi(M_*)$, estimated using the V-max method from SDSS magnitude-limited mock samples with different magnitude limits, $r_{\rm lim}$, as indicated in the left panel. The right panel shows the absolute values of the GSMFs, with the benchmark shown by the black dashed line. The black solid line shows the result for $r_{\rm lim} =17.6$, the magnitude limit of the SDSS survey.
The left panel shows the ratios of the GSMFs of the magnitude-limited samples to the benchmark. In each panel, the vertical dashed lines indicate the stellar masses corresponding to the break at $z = 0.03$.} \label{fig:mag_lim} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig11.pdf} \caption{The GSMFs estimated from the SDSS mock catalog with different methods designed to account for the cosmic variance. The upper panel shows the GSMFs and the lower panel shows the ratio of each GSMF to the benchmark. Black solid line: the benchmark GSMF obtained from the SDSS volume-limited mock sample. Black dashed line: the GSMF obtained with the V-max method, with error bars calculated using 100 bootstrap samples. Purple line: the GSMF obtained from the density scaling method of \citet[][]{Baldry2012}. Green line: the GSMF obtained from the maximum likelihood method assuming a triple-Schechter function form.} \label{fig:cor_gsmf_twomethod} \end{figure} In this subsection we examine in detail the CV in the SDSS using the mock samples constructed for the SDSS. Here we only consider galaxies and model galaxies in the SDSS Northern-Galactic-Cap (NGC) region (hereafter the SDSS sky coverage) with redshift $0.01 \leq z \leq 0.12$ (hereafter the SDSS volume). We construct four different types of samples: \begin{enumerate}[fullwidth,itemindent=1em,label=(\roman*)] \item SDSS sample: SDSS DR7 observed galaxies in the SDSS volume, with $r$-band magnitude selection $r \leq 17.6$. \item SDSS mock sample: model galaxies in the SDSS volume, with $r$-band magnitude selection $r \leq 17.6$. \item SDSS magnitude-limited mock samples: model galaxies in the SDSS volume that are brighter than a given magnitude limit. \item SDSS volume-limited mock sample: all model galaxies in the SDSS volume. This sample serves as the benchmark for the GSMF, since it is almost free of cosmic variance when compared with the entire simulation box.
\end{enumerate} To investigate potential cosmic variance in the SDSS volume, we first examine the galaxy number density, $n_{g}$, as a function of redshift $z$ in the SDSS volume-limited mock sample. Model galaxies in a given stellar mass bin are binned in redshift intervals of size $\delta z = 0.005$, and the galaxy count in each bin is used to estimate the galaxy number density. The results are shown in Fig.~\ref{fig:num_den_4}. The redshift distribution of model galaxies in the SDSS volume shows two peaks, one around $z_1 \approx 0.03$, due to the presence of the large-scale structure known as the CfA Great Wall~\citep[][]{Geller1989}, and the other around $z_2 \approx 0.075$, due to the presence of the Sloan Great Wall \citep[][]{GottIII2005}. Below $z \approx 0.03$, the number densities show a sharp decline as $z$ decreases, and the effect is stronger for massive galaxies, indicating the presence of a local void~\citep[see also, for example][]{Whitbourn2014,Whitbourn2016}. For comparison, we also show the redshift distribution of SDSS galaxies, obtained by using sub-samples complete to given absolute magnitude limits. We see that the observed distribution follows well that in the mock sample, indicating that our mock sample can be used to study the CV in the SDSS sample. For reference, we also plot the number densities of simulated dark matter halos in the SDSS volume versus redshift. Here again we see structures similar to those seen in the galaxy distribution. In particular, there is a marked decline of the halo density at $z< 0.03$, and the decline is more prominent for more massive halos. The presence of the local low-density region shown above can have a strong impact on the statistical properties of the galaxy population derived from the SDSS, especially for faint galaxies, which can be observed only within the local volume in a magnitude-limited sample.
Indeed, the measurement of the GSMF, which describes the number density of galaxies as a function of galaxy mass, can be biased if the local low-density region is not properly accounted for. As an illustration, Fig.~\ref{fig:mag_lim} shows the GSMFs derived from the SDSS magnitude-limited mock samples with different $r$-band magnitude limits, using the standard $V_{\rm max}$ method. For reference, we also plot the GSMF obtained from the SDSS volume-limited mock sample (the thick dashed line), which matches well the `global' GSMF obtained from the whole $500^3\ h^{-3}\mathrm{Mpc}^3$ simulation box. As one can see, the GSMF can be significantly underestimated if the magnitude limit is shallow (corresponding to a low value of the $r$-band magnitude limit, $r_{\rm lim}$). Only a sample as deep as $r_{\rm lim}=20$ can provide an unbiased estimate of the GSMF down to $M_*\sim 10^8 \, h^{-1}{\rm M_\odot}$. For the SDSS limit, $r_{\rm lim}=17.6$, the measurement starts to deviate from the global GSMF at $M_* \approx 10^{9} \, h^{-1}{\rm M_\odot}$, and the difference between them reaches a factor of about 5 at around $10^8 \, h^{-1}{\rm M_\odot}$. The underestimate of the GSMF at the low-mass end is produced by the presence of the low-density region at $z<0.03$ in the SDSS volume. To show this more clearly, we define a `break' mass, $M_{0.03}(r_{\rm lim})$, such that galaxies with stellar mass $M_* = M_{0.03}$ are complete to $z = 0.03$ for the given magnitude limit, $r_{\rm lim}$. Here we have used the mean mass-to-light ratio, obtained from the mock sample, to convert stellar mass to absolute magnitude. As one can see, for each $r_{\rm lim}$, the GSMF obtained from the sample starts to deviate from the global benchmark at $M_{0.03}(r_{\rm lim})$, shown by the vertical line, and is substantially lower at $M_*< M_{0.03}$.
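The `break' mass can be illustrated with a rough back-of-the-envelope calculation. The distance modulus at $z=0.03$, the solar $r$-band absolute magnitude, and the mass-to-light ratio used below are assumed round numbers, not the mock-calibrated mass-to-light relation used in the paper:

```python
# Hedged sketch of M_0.03(r_lim): the stellar mass complete to z = 0.03
# for a given apparent-magnitude limit. All constants are assumed values.

DMOD_003 = 35.6    # distance modulus at z = 0.03 (assumed cosmology)
MSUN_R = 4.65      # solar absolute magnitude in the r band (assumed)

def break_mass(r_lim, log_ml=0.2):
    """Stellar mass (Msun) complete to z = 0.03 for apparent limit r_lim."""
    abs_mag = r_lim - DMOD_003            # faintest observable M_r at z = 0.03
    log_lum = 0.4 * (MSUN_R - abs_mag)    # log10 L/Lsun
    return 10.0**(log_lum + log_ml)       # M* = (M*/L) x L
```

Under these assumptions, `break_mass(17.6)` comes out of order $10^{9}$--$10^{9.5}\,{\rm M_\odot}$, comparable to the mass at which the SDSS measurement starts to deviate from the benchmark, and deeper limits push the break mass down.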
All these results demonstrate that the faint end of the GSMF can be significantly under-estimated in the SDSS due to the presence of the local low-density region at $z<0.03$. \subsection{The correction of cosmic variance} \subsubsection{Conventional methods} The results described above indicate that CV is a serious issue in the measurements of the GSMF, even for a sample as large as the SDSS. Corrections have to be made in order to obtain an unbiased result that represents the true GSMF in the low-$z$ Universe. In the literature, some estimators other than the standard V-max method have been proposed, such as the maximum likelihood method \citep[e.g.][]{G.Efstathiou1988,Blanton2001,Cole2011,Whitbourn2016} and scaling with bright galaxies \citep[e.g.][]{Baldry2012}. These methods were designed, at least in part, to correct for the effects of large-scale structure in the measurements of the GSMF from an observational sample. Here we test their performance using our SDSS mock samples. In the maximum likelihood method, one starts with an assumed functional form, either parametric or non-parametric, for the GSMF, and then uses a maximum likelihood approach to match the model prediction with the data, thereby obtaining the parameters that specify the functional form of the GSMF. In our analysis here, we choose a triple-Schechter function to model the GSMF, \begin{equation} \Phi(M_*) \mathrm{d}\log M_* = \sum_{i=1}^3 \Phi_{*,i} \left(\frac{M_*} {\mu_i}\right)^{\alpha_i+1} e^{-M_*/\mu_i} {\rm d}\log M_*\,, \label{eq_triple_schechter} \end{equation} where $\Phi_{*,i}$, $\mu_i$, and $\alpha_{i}$ are the amplitude, the characteristic mass, and the faint-end slope of the $i$-th Schechter component, respectively. This function is assumed to be defined over the domain $[ M_{*,\rm min},\ M_{*, \rm max} ]$.
For a galaxy `$i$' with stellar mass $M_i$ at redshift $z_i$ in the sample, the probability for it to be observed at this redshift is \begin{equation} {\cal L}_i = \frac{\Phi(M_i)}{\int_{M_{i,\rm min}}^{M_{*,\rm max}} \Phi(M_*)\mathrm{d}\log M_*}\,, \end{equation} where $M_{i,\rm min}$ is the minimum stellar mass observable at $z_i$ for the given magnitude limit. The total likelihood ${\cal L}$ that the GSMF takes the assumed form $\Phi$ is then given by \begin{equation} {\cal L } = \prod_{i=1}^{N} {\cal L}_i\,, \end{equation} where $N$ is the number of galaxies in the sample. The model parameters are adjusted so as to maximize the likelihood ${\cal L}$. In our application to the SDSS mock sample, we fit the GSMF obtained from the V-max method with the triple-Schechter function and use the resulting parameters as the initial input of the maximization process. Since the bright end is free of cosmic variance, we fix the three parameters characterizing the Schechter component at the brightest end, leaving the remaining six parameters to be constrained by the maximum likelihood process. As the maximum likelihood method does not provide information about the overall amplitude of $\Phi(M_*)$, the bright end is also used to fix the amplitude of $\Phi(M_*)$. The GSMF estimated in this way from the SDSS mock sample is plotted in Fig.~\ref{fig:cor_gsmf_twomethod} as the green line, in comparison with that estimated by the V-max method (dashed line) and the benchmark GSMF (black line). It is clear that the maximum likelihood method works better than the V-max method, but it still underestimates the GSMF at the low-mass end. The underlying assumption of the maximum likelihood method is that the relative distribution of galaxies with respect to $M_*$ is everywhere the same. This is in general not true, given that galaxy clustering depends on $M_*$, and it explains the failure of this method in correcting for the CV.
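The per-galaxy likelihood above, with the triple-Schechter form of equation (\ref{eq_triple_schechter}), can be sketched as follows; the parameter values used in testing are purely illustrative, not the fitted ones:

```python
import numpy as np

# Sketch of the maximum-likelihood estimator with a triple-Schechter form.

def triple_schechter(m, phis, mus, alphas):
    """Phi(M*) per dlog M*, summed over the three Schechter components."""
    m = np.asarray(m, dtype=float)
    return sum(p * (m / mu)**(a + 1.0) * np.exp(-m / mu)
               for p, mu, a in zip(phis, mus, alphas))

def log_likelihood(masses, m_min, m_max, phis, mus, alphas, n_grid=400):
    """Sum over galaxies of log L_i; m_min[i] is the lowest stellar mass
    observable at the redshift of galaxy i (the per-galaxy limit)."""
    total = 0.0
    for m_i, m_lo in zip(masses, m_min):
        grid = np.logspace(np.log10(m_lo), np.log10(m_max), n_grid)
        phi = triple_schechter(grid, phis, mus, alphas)
        # trapezoid rule for the normalization integral in d(log M*)
        norm = np.sum(0.5 * (phi[1:] + phi[:-1]) * np.diff(np.log10(grid)))
        total += np.log(triple_schechter(m_i, phis, mus, alphas) / norm)
    return total
```

In a real fit, `total` would be handed to a numerical maximizer, with the brightest-end component held fixed as described above.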
In an attempt to control the cosmic variance in the GAMA survey, \citet{Baldry2012} proposed to use the number density of brighter galaxies, estimated in a larger volume, to scale the number density of fainter galaxies that are observed only in a smaller volume. This method will be referred to as the ``density scaling'' method. Our implementation of this method is as follows. \begin{enumerate}[fullwidth,itemindent=1em,label=(\roman*)] \item Choose a `cosmic-variance-free (CVF)' sample, including only bright galaxies that have $z_{\rm max}$ larger than $0.12$. In our SDSS mock sample, this corresponds to selecting galaxies with $M_* >3\times10^{10} \, h^{-1}{\rm M_\odot}$. This sample is used as the density tracer at different redshifts, to scale the density at the fainter end. \item Compute the cumulative number density of the CVF sample, $n_{\rm CVF}(<z)$, as a function of redshift $z$. In practice, the cumulative number density is calculated in the redshift range $[0.01,\ z]$. \item Compute the GSMF, $\Phi_{\rm Vmax}(M_*)$, using the V-max method. \item For each stellar mass bin of $\Phi_{\rm Vmax}(M_*)$, find the largest redshift, $z_{\rm max}(M_*)$, below which galaxies in this bin can be observed in the sample. \item Obtain the corrected GSMF, $\Phi_{\rm sc}$, by scaling the V-max estimate with a correction factor: \begin{equation} \Phi_{\rm sc}(M_*) = \Phi_{\rm Vmax}(M_*) \frac{n_{\rm CVF}(<0.12)}{n_{\rm CVF}[<z_{\rm max}(M_*) ]}\,, \end{equation} where $n_{\rm CVF}(<0.12)$ is the number density of the CVF sample in the full redshift range, $[0.01, 0.12]$, and $n_{\rm CVF}[<z_{\rm max}(M_*)]$ is that in the redshift range $[0.01, z_{\rm max}(M_*)]$. \end{enumerate} The GSMF estimated in this way from the SDSS mock sample is plotted in Fig.\,\ref{fig:cor_gsmf_twomethod} as the purple line. This method appears to work better than both the V-max method and the maximum likelihood method at the low-mass end, but the underestimate is still substantial.
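Steps (i)--(v) can be condensed into a short function. The inputs are hypothetical arrays, with the cumulative tracer density tabulated on a redshift grid:

```python
import numpy as np

# Sketch of the density scaling correction (step v above): each V-max bin
# is scaled by the ratio of the cumulative tracer ('CVF') density over the
# full redshift range to that within the bin's observable range.

def density_scaled_gsmf(phi_vmax, z_max_bins, z_grid, n_cvf_cum, z_full=0.12):
    """Phi_sc(M*) = Phi_Vmax(M*) * n_CVF(<z_full) / n_CVF(<z_max(M*))."""
    n_full = np.interp(z_full, z_grid, n_cvf_cum)       # n_CVF(<0.12)
    n_local = np.interp(z_max_bins, z_grid, n_cvf_cum)  # n_CVF(<z_max)
    return np.asarray(phi_vmax) * n_full / n_local
```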
Furthermore, this method leads to a dip around $M_*=10^{9.8}\, h^{-1}{\rm M_\odot}$, because of the density enhancement associated with the CfA Great Wall. The failure of this scaling method has an origin similar to that of the maximum likelihood method. The underlying assumption here is that the bright galaxies can serve as a tracer of the cosmic density field, and that the distributions of bright and faint galaxies are both related to the underlying density field by a similar bias factor. In general, this assumption is not valid. \subsubsection{Methods based on the joint distribution of galaxies and environment} \label{sssec_mock_test_correction} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig12.pdf} \caption{ The conditional galaxy stellar mass functions (CGSMFs) for halos of different masses, $M_{\rm h}$, as indicated in the figure. Solid lines represent the CGSMFs estimated from the SDSS mock sample. Dotted lines are estimated from the SDSS volume-limited mock sample. } \label{fig:mockcsmfs} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig13.pdf} \caption{ The GSMFs obtained by applying different methods to the SDSS mock sample. Upper panel shows the GSMFs while the lower panel shows the ratio of each GSMF to the benchmark. Black solid line shows the benchmark GSMF directly calculated from the SDSS volume-limited mock sample. Black dashed line is the GSMF obtained by the method based on the CGSMFs described in this paper. Black dotted line shows the GSMF derived by using the V-max method. Green line shows the GSMF obtained by combining the CGSMFs directly calculated from the SDSS mock sample (which is incomplete for faint galaxies in massive groups). 
The purple solid and dashed lines are $\Phi_1$ and $\Phi_2$, the contributions of halos with masses $M_{\rm h} \geq 10^{12.5}\, h^{-1}{\rm M_\odot}$ and $M_{\rm h} < 10^{12.5}\, h^{-1}{\rm M_\odot}$, respectively (see~\S\ref{sssec_mock_test_correction} for details). Error bars are calculated from 100 bootstrap samples. } \label{fig:mock_syn_csmfs} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig14.pdf} \caption{ The galaxy luminosity function (GLF) estimated from the SDSS catalog by using our method, in comparison to the results in the literature. The upper panel shows the GLFs, while the lower panel shows the ratio of each GLF to that obtained with the V-max method. The black solid line is the GLF obtained by our method. The gray solid line at the faint end (first two data points) is obtained by linear extrapolation. The black dashed line is the GLF given by the V-max method. The gray shaded band indicates the cosmic variance of the SDSS sample expected from Eq.~\ref{eq_fit_sigma_cv}. The purple line is from~\citet[][]{Loveday2012} for the GAMA survey. The green line is from~\citet[][]{Whitbourn2016}, using the SDSS `cmodel' magnitude. } \label{fig:corrcted_glf} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig15.pdf} \caption{ The galaxy stellar mass function (GSMF) obtained from the SDSS sample in this paper, in comparison with results published earlier. The upper panel shows the GSMFs, while the lower panel shows the ratio of each GSMF to that given by the V-max method. The black solid line is the GSMF obtained by our method. The black dashed line is the GSMF given by the V-max method for the SDSS sample. The gray shaded band indicates the cosmic variance of the SDSS sample expected from Eq.~\ref{eq_fit_sigma_cv}. The gray solid piece at the faint end indicates the slight change when the extrapolation of the GLF is used. Green line: GSMF from \citet[][]{Li2009}. Blue line: GSMF from the GAMA survey \citep{Baldry2012}.
Purple line: GSMF from \citet{Bernardi2013}, where the stellar masses of bright galaxies are estimated from S\'{e}rsic-Exponential fitting. The red dashed line at the top of the upper panel has a slope $\alpha = -1.5$ (see Eq.~\ref{eq_triple_schechter} for the definition of the faint-end slope). } \label{fig:corrcted_gsmf} \end{figure} \begin{table}[b!] \begin{threeparttable} \centering \caption{Corrected galaxy luminosity and stellar mass function} \label{tab:gsmf} \begin{tabular}{c c c c c} \toprule[1.5pt] $M_{\rm r}-5\log h$ & $\log \Phi(M_{\rm r})$ & $\log M_*$ & \multicolumn{2}{c}{$\log \Phi(M_*)$} \\ & $[h^3 {\rm Mpc}^{-3} {\rm mag}^{-1}]$ & $[h^{-1}{\rm M_\odot}]$ & \multicolumn{2}{c}{$[h^3 {\rm Mpc}^{-3} {\rm dex}^{-1}]$}\\ \hline $-15.2$ & $-1.179^{+0.043}_{-0.047}$ & $8.1$ & $-1.005$ & $-0.941$ \\ $-15.6$ & $-1.296^{+0.027}_{-0.028}$ & $8.3$ & $-1.111$ & $-1.111$ \\ $-16.0$ & $-1.395^{+0.022}_{-0.023}$ & $8.5$ & \multicolumn{2}{c}{$-1.184^{+0.011}_{-0.022}$} \\ $-16.4$ & $-1.500^{+0.022}_{-0.023}$ & $8.7$ & \multicolumn{2}{c}{$-1.272^{+0.003}_{-0.008}$} \\ $-16.8$ & $-1.536^{+0.015}_{-0.016}$ & $8.9$ & \multicolumn{2}{c}{$-1.345^{+0.007}_{-0.018}$} \\ $-17.2$ & $-1.571^{+0.011}_{-0.011}$ & $9.1$ & \multicolumn{2}{c}{$-1.417^{+0.001}_{-0.015}$} \\ $-17.6$ & $-1.659^{+0.009}_{-0.009}$ & $9.3$ & \multicolumn{2}{c}{$-1.518^{+0.026}_{-0.009}$} \\ $-18.0$ & $-1.730^{+0.006}_{-0.007}$ & $9.5$ & \multicolumn{2}{c}{$-1.576^{+0.002}_{-0.003}$} \\ $-18.4$ & $-1.802^{+0.006}_{-0.006}$ & $9.7$ & \multicolumn{2}{c}{$-1.624^{+0.000}_{-0.005}$} \\ $-18.8$ & $-1.864^{+0.005}_{-0.005}$ & $9.9$ & \multicolumn{2}{c}{$-1.678^{+0.026}_{-0.013}$} \\ $-19.2$ & $-1.919^{+0.004}_{-0.004}$ & $10.1$ & \multicolumn{2}{c}{$-1.688^{+0.004}_{-0.008}$} \\ $-19.6$ & $-1.960^{+0.003}_{-0.003}$ & $10.3$ & \multicolumn{2}{c}{$-1.770^{+0.000}_{-0.005}$} \\ $-20.0$ & $-2.019^{+0.002}_{-0.002}$ & $10.5$ & \multicolumn{2}{c}{$-1.903^{+0.022}_{-0.030}$} \\ $-20.4$ & $-2.117^{+0.002}_{-0.002}$ & $10.7$ &
\multicolumn{2}{c}{$-2.163^{+0.010}_{-0.007}$} \\ $-20.8$ & $-2.244^{+0.002}_{-0.002}$ & $10.9$ & \multicolumn{2}{c}{$-2.562^{+0.058}_{-0.016}$} \\ $-21.2$ & $-2.449^{+0.003}_{-0.003}$ & $11.1$ & \multicolumn{2}{c}{$-3.053^{+0.049}_{-0.012}$} \\ $-21.6$ & $-2.743^{+0.004}_{-0.004}$ & $11.3$ & \multicolumn{2}{c}{$-3.770^{+0.131}_{-0.040}$} \\ $-22.0$ & $-3.166^{+0.006}_{-0.006}$ & & & \\ $-22.4$ & $-3.770^{+0.011}_{-0.011}$ & & & \\ $-22.8$ & $-4.501^{+0.023}_{-0.024}$ & & & \\ \bottomrule[1.5pt] \end{tabular} \begin{tablenotes} \item[$*$] Galaxies brighter than $M_{\rm r}-5\log h = -15$ are not sufficient to determine the GSMF down to $M_*=10^{8.1}$ and $10^{8.3}\, h^{-1}{\rm M_\odot}$; an extrapolation of the GLF is used to fill this gap. For these two bins, the left column of $\Phi(M_*)$ is without extrapolation, while the right column is with extrapolation. \end{tablenotes} \end{threeparttable} \end{table} Since galaxies form and reside in the cosmic density field, the number density of galaxies is expected to depend on their local environment. Suppose the local environment is specified by a quantity or a set of quantities ${\cal E}$. The joint distribution of galaxy mass and ${\cal E}$ obtained from a given sample, `$S$', can be written as \begin{equation} \Phi_S(M_*, {\cal E}) =\Phi_S (M_*\vert {\cal E}) P_S({\cal E}), \end{equation} where $\Phi_S(M_*\vert {\cal E})$ is the conditional distribution of galaxy mass in a given environment estimated from sample `$S$', and $P_S({\cal E})$ is the probability distribution function of the environmental quantity given by the sample. If galaxy formation and evolution is a local process, so that $\Phi (M_*\vert {\cal E})$ is independent of the galaxy sample, then the CV in the stellar mass function derived from the sample can all be attributed to the difference between $P_S({\cal E})$ and the global distribution function, $P({\cal E})$, expected from a large sample where the distribution of ${\cal E}$ is sampled without bias.
An unbiased estimate of the GSMF $\Phi(M_*)$ is then \begin{equation} \label{PhiMstar} \Phi(M_*)=\int \Phi_S (M_*\vert {\cal E}) P({\cal E}) d{\cal E}\,. \end{equation} Thus, the unbiased GSMF is obtained from the conditional distribution function, $\Phi_S(M_*\vert {\cal E})$, derived from the sample `$S$', and the unbiased distribution, $P({\cal E})$, of the environment variable. The environmental quantity has to be chosen properly so that it can be estimated from observation, while the unbiased distribution function, $P({\cal E})$, can, in principle, be obtained from large cosmological simulations. Here we analyze a method which uses the masses of dark matter halos as the environmental quantity. In this case, ${\cal E}$ is represented by the halo mass, $M_{\rm h}$, $\Phi_S (M_*\vert M_{\rm h})$ is the conditional galaxy stellar mass function (CGSMF), and $P({\cal E})=n(M_{\rm h})$ is the halo mass function estimated directly from the constrained simulation~\citep{Yang2003}. The advantage here is that unbiased estimates are only needed for the conditional functions, $\Phi (M_*\vert M_{\rm h})$. The disadvantage is that the method is model dependent through $n(M_{\rm h})$, and that one has to identify galaxy systems to represent dark matter halos. Fig.~\ref{fig:mockcsmfs} shows the conditional stellar mass functions of galaxies in halos of different masses, estimated from the SDSS mock sample, in comparison with the benchmarks obtained from the total SDSS volume-limited sample. As one can see, for a given halo mass, the CGSMF obtained from the SDSS mock sample matches the benchmark well only at the massive end. This happens because of the absence of massive halos at small distances in the local under-dense region, so that their faint member galaxies are not observed in the magnitude-limited sample.
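With ${\cal E}=M_{\rm h}$, Eq.~(\ref{PhiMstar}) reduces to a sum over halo-mass bins. A minimal sketch, with illustrative array shapes and units:

```python
import numpy as np

def total_gsmf(cgsmf, n_halo, dlog_mh):
    """Discretized Eq. (PhiMstar) with E = M_h:
        Phi(M*) = sum_j Phi(M* | M_h,j) n(M_h,j) dlogM_h

    cgsmf  : 2D array, shape (n_star_bins, n_halo_bins); mean number of
             galaxies per halo, per dex in M*, for each halo-mass bin
    n_halo : 1D array, halo mass function [halos per volume per dex in M_h]
    dlog_mh: width of the halo-mass bins in dex
    """
    return np.asarray(cgsmf) @ (np.asarray(n_halo) * dlog_mh)
```

The same contraction applies to $\Phi_1$ below, where the sum runs only over halos with $M_{\rm h}\geq M_1$.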
The total GSMF, obtained using Eq.~(\ref{PhiMstar}), is shown in Fig.~\ref{fig:mock_syn_csmfs} by the green line, in comparison to the benchmark of the total GSMF, represented by the black solid line, and to the GSMF obtained by the traditional V-max method, represented by the black dotted line. Here the benchmark CGSMFs (dotted lines in Fig.~\ref{fig:mockcsmfs}) are used for halos with $M_{\rm h} < 10^{12} \, h^{-1}{\rm M_\odot}$, while the CGSMFs estimated from the magnitude-limited sample (solid lines in Fig.~\ref{fig:mockcsmfs}) are used for more massive halos. This is to mimic the fact that the total CGSMF for less massive halos can be obtained by other means (see Eq.~\ref{eq:phi2}), while the low-stellar-mass end of the CGSMFs for massive halos cannot be obtained directly from the SDSS spectroscopic sample. Here again the stellar mass function at the low-mass end is underestimated, although the method works substantially better than the V-max method. The reason is clear from Fig.~\ref{fig:mockcsmfs}. The stellar mass function at the low-mass end is only sampled by low-mass halos because of the absence of massive halos in the nearby volume, while the low-mass end of the benchmark stellar mass function is actually affected by the low-mass ends of the conditional stellar mass functions of massive halos. These results demonstrate an important point: if the shape of the CGSMF depends significantly on halo mass, then one needs to estimate all the conditional functions reliably down to a given stellar mass limit in order to obtain an unbiased estimate of the total stellar mass function down to the same mass limit. The SDSS redshift sample is clearly insufficient to achieve this goal at the low-mass end.
In a recent paper, \citet{Lan2016} showed that the conditional functions of galaxies can be estimated down to $M_r\sim -14$ (corresponding to a stellar mass of about $10^{8}{\rm M_\odot}$) for halos with mass $M_{\rm h}>10^{12}{\rm M_\odot}$ by cross-correlating galaxy groups (halos) selected from the SDSS spectroscopic sample with the SDSS photometric data. Thus, if we can estimate the contribution by halos with lower masses down to a similar magnitude limit, then the total function can be obtained. Here we test the feasibility of such an approach using the SDSS mock sample. First, we obtain the CGSMFs down to a stellar mass of $10^8\, h^{-1}{\rm M_\odot}$ for halos with $M_{\rm h} \geq M_1=10^{12.5}\, h^{-1}{\rm M_\odot}$ directly from the total simulation volume. This step is to mimic the fact that such CGSMFs can be obtained, as in \citet{Lan2016}, from observational data. The GSMF contributed by such halos is \begin{equation} \Phi_1 (M_*) =\int_{M_1}^\infty \Phi (M_*\vert M_{\rm h}) n(M_{\rm h}) d M_{\rm h}\,. \label{eq:phi1} \end{equation} To maximally reduce possible uncertainties introduced by this procedure, we estimate $\Phi_1$ for $M_{\rm h} \geq M_1$ directly with a modified V-max method at the high-stellar-mass end. Specifically, each galaxy is assigned a weight, $n_{\rm halo,u}/n_{\rm halo}(V_{\rm max})$, the ratio between the number density of $M_{\rm h}\ge M_1$ halos in the Universe and that in $V_{\rm max}$. In practice, the weighted V-max has little impact on the results, as the effect of cosmic variance for high-mass galaxies is small; the procedure is included only to maintain consistency. Eq.~(\ref{eq:phi1}) is then used only at the low-stellar-mass end, where the V-max method fails because of incompleteness. The result for $\Phi_1$ obtained in this way is shown by the purple solid curve in Fig.~\ref{fig:mock_syn_csmfs}.
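The weighted V-max estimator described above can be sketched as follows; the per-galaxy weight is $(1/V_{\rm max})\,n_{\rm halo,u}/n_{\rm halo}(V_{\rm max})$, and all inputs are hypothetical arrays:

```python
import numpy as np

def phi1_weighted_vmax(logm_star, v_max, n_halo_u, n_halo_vmax, bins):
    """Modified V-max GSMF for galaxies in M_h >= M_1 halos: each galaxy
    is weighted by 1/V_max times n_halo,u / n_halo(V_max), the ratio of
    the universal number density of such halos to that within its V_max.

    logm_star   : log10 stellar masses of the galaxies
    v_max       : maximum survey volume of each galaxy
    n_halo_u    : universal number density of M_h >= M_1 halos
    n_halo_vmax : number density of such halos within each galaxy's V_max
    """
    w = n_halo_u / (np.asarray(v_max) * np.asarray(n_halo_vmax))
    hist, edges = np.histogram(logm_star, bins=bins, weights=w)
    return hist / np.diff(edges), edges
```

For a galaxy whose $V_{\rm max}$ samples a halo-deficient region ($n_{\rm halo}(V_{\rm max}) < n_{\rm halo,u}$), the weight is boosted accordingly.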
To estimate the contribution by halos with $M_{\rm h} < M_1$ in a way that can be applied to real observations, we first eliminate all galaxies that are contained in halos with $M_{\rm h} \geq M_1$. For the rest of the galaxies, we estimate the function by a modified version of the V-max method, \begin{equation} \Phi_2(M_*) = \sum { \frac{1}{V_{\rm max}} } \frac{1}{1+b\delta(V_{\rm max})} \,, \label{eq:phi2} \end{equation} where the summation is over individual galaxies, $b = 0.6$ is the bias factor, which is considered to be constant for low-mass halos \citep[e.g.][]{Sheth2001}, and $\delta(V_{\rm max}) = { {\overline\rho} (V_{\rm max}) / \rho_{\rm u} } - 1$ is the mean overdensity within $V_{\rm max}$, with $\rho_{\rm u}$ the universal mean mass density and ${\overline\rho}(V_{\rm max})$ the mean mass density within $V_{\rm max}$. The function $\Phi_2$ so estimated is shown as the purple dashed curve in Fig.~\ref{fig:mock_syn_csmfs}. Note that small groups can only be seen in the very local region, so the CGSMF estimated for halos in a small mass bin can be very noisy. Our method avoids this uncertainty by calculating the total contribution of all halos less massive than $M_1$. The total GSMF, $\Phi=\Phi_1 + \Phi_2$, is shown by the black dashed line in Fig.~\ref{fig:mock_syn_csmfs}; it is very close to the benchmark, indicating that our method can indeed take care of the bias produced by the local under-dense region. We have checked that the result depends only weakly on the choice of the value of $M_1$. \subsection{Applications to observational data} In this subsection, we apply the method described above to the real SDSS sample. We first estimate the galaxy luminosity function (GLF) using the procedure based on the conditional distributions of galaxy luminosity in dark matter halos, as described in \S\ref{sssec_mock_test_correction}.
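The modified V-max estimator of Eq.~(\ref{eq:phi2}), used both in the mock test above and in the application here, is a direct transcription; the inputs are hypothetical arrays:

```python
import numpy as np

def phi2_bias_corrected(logm_star, v_max, delta_vmax, bins, b=0.6):
    """Eq. (phi2): Phi_2(M*) = sum_i [1/V_max,i] / (1 + b*delta_i), where
    delta_i is the mean mass overdensity within V_max,i and b ~ 0.6 is
    the (assumed constant) bias factor of low-mass halos."""
    w = 1.0 / (np.asarray(v_max) * (1.0 + b * np.asarray(delta_vmax)))
    hist, edges = np.histogram(logm_star, bins=bins, weights=w)
    return hist / np.diff(edges), edges
```

A galaxy sitting in an under-dense $V_{\rm max}$ ($\delta<0$) is up-weighted, compensating for the deficit of structure in the local volume.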
Here the CLF, $\Phi_1(M_r)$, for faint galaxies with magnitude $M_r-5\log h > -17.2$ in halos more massive than $10^{12.5}\, h^{-1}{\rm M_\odot}$ is obtained from \citet{Lan2016}, while the CLF for brighter galaxies in these halos is estimated directly from the SDSS sample using the V-max method and the group catalog of \citet[][]{Yang2012}~\citep[see also][]{Yang2007}. For halos with masses below $10^{12.5}\, h^{-1}{\rm M_\odot}$, the CLF, $\Phi_2(M_r)$, is obtained from the SDSS sample using the modified V-max method as described by Eq.\,(\ref{eq:phi2}). The total galaxy luminosity function (GLF) is then obtained as $\Phi(M_r)=\Phi_1(M_r)+\Phi_2(M_r)$. Fig.~\ref{fig:corrcted_glf} shows the GLF so obtained as the solid black line, in comparison with that obtained from the traditional V-max method. At the faint end, $M_r-5\log h \approx -15$, the GLF is about twice as high as that given by the V-max method, indicating that cosmic variance can have a large impact on the estimate of the GLF at the faint end. To show this more clearly, we plot the cosmic variance expected from Eq.~\ref{eq_fit_sigma_cv} as the shaded band in Fig.~\ref{fig:corrcted_glf}, where the stellar mass is obtained from luminosity by using the mean mass-to-light ratio. The expected cosmic variance is quite large at the faint end, indicating that cosmic variance is an important issue in estimating the faint end of the GLF. The GLF in the local Universe has been estimated by many authors using various samples \citep[e.g.][]{Blanton2003,Yang2009,Loveday2012,Jones2006,Driver2012,Whitbourn2016}. For comparison, we plot the GLFs obtained by \citet{Loveday2012} from the GAMA survey and by \citet{Whitbourn2016}, who applied a maximum likelihood method to the SDSS data to account for the cosmic variance in their estimates. The result of \citet{Whitbourn2016} matches ours over a wide range of luminosity, but still seems to underestimate the GLF at the faint end.
The result of \citet{Loveday2012} shows a large discrepancy with ours, possibly due to the cosmic variance associated with the small sky coverage, $144\deg^2$, of the GAMA sample used. Since many of the faint galaxies in the SDSS photometric data do not have reliable stellar mass estimates, conditional galaxy stellar mass functions are not available at the low-mass end. Because of this, we cannot estimate the GSMF down to the low-mass end directly from the data with the method above. As an alternative, we use the $M_r$--$M_*$ relation obtained from the SDSS spectroscopic sample to convert the GLF obtained above into an estimate of the GSMF. We do this through the following steps. (i) Construct a large volume-limited Monte-Carlo sample of galaxies with the absolute magnitude distribution given by the GLF. (ii) Bin these galaxies according to their absolute magnitudes. (iii) For each Monte-Carlo galaxy, randomly choose a galaxy in the real SDSS spectroscopic sample in the same absolute magnitude bin, and assign the stellar mass of the real galaxy to the Monte-Carlo galaxy. (iv) Compute the GSMF of this volume-limited Monte-Carlo sample. The GSMF obtained directly from the GLF in this way is shown in Fig.~\ref{fig:corrcted_gsmf} by the black solid curve. Since the GLF is estimated only down to $M_r-5\log h \approx -15$, the first two data points at the low-mass end of the GSMF may be underestimated, as galaxies fainter than $M_r-5\log h = -15$ may contribute to these two stellar mass bins. To test this, we extrapolate the faint end of the GLF to $M_r-5\log h = -14.2$, which is sufficient to include all galaxies with stellar masses down to $10^8\, h^{-1}{\rm M_\odot}$. This extrapolation is shown by the gray extension of the black solid curve in Fig.~\ref{fig:corrcted_glf}. The GSMF obtained from the extended GLF is shown by the gray line in Fig.~\ref{fig:corrcted_gsmf}. As one can see, the extension of the GLF only slightly increases the GSMF at the lowest masses.
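Steps (i)--(iv) above can be sketched as follows; the exact-match magnitude binning via `np.isclose` and the fixed seed are illustrative simplifications:

```python
import numpy as np

def glf_to_gsmf(mr_grid, phi_mr, mr_obs, logm_obs, mbins,
                n_mc=100000, seed=42):
    """Monte-Carlo conversion of a GLF into a GSMF:
    (i) draw absolute magnitudes from the GLF (used as a pdf over bins),
    (ii)-(iii) assign each mock galaxy the stellar mass of a randomly
    chosen real galaxy in the same magnitude bin,
    (iv) histogram the mock stellar masses (arbitrary normalization)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(phi_mr, dtype=float)
    mr_mc = rng.choice(np.asarray(mr_grid, dtype=float),
                       size=n_mc, p=p / p.sum())         # step (i)
    mr_obs = np.asarray(mr_obs, dtype=float)
    logm_obs = np.asarray(logm_obs, dtype=float)
    logm_mc = np.empty(n_mc)
    for mr in np.unique(mr_mc):                          # steps (ii)-(iii)
        sel = mr_mc == mr
        pool = logm_obs[np.isclose(mr_obs, mr)]          # real galaxies in bin
        logm_mc[sel] = rng.choice(pool, size=sel.sum())
    hist, edges = np.histogram(logm_mc, bins=mbins)      # step (iv)
    return hist / (n_mc * np.diff(edges)), edges
```

The output inherits the scatter of the observed $M_r$--$M_*$ relation within each magnitude bin, rather than applying a single mean mass-to-light ratio.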
The GSMF estimated in this way is compared with that estimated with the conventional V-max method. The gray shaded band shows the expected cosmic variance given by Eq.~\ref{eq_fit_sigma_cv} for the SDSS sample. The effect of CV is quite large at the low-stellar-mass end. The difference between our result and that obtained from the V-max method is even larger, indicating again that the local SDSS region is an unusually under-dense region. The GSMF in the low-$z$ Universe has been estimated in numerous earlier investigations using different samples and methods \citep[e.g.][]{Li2009,Yang2009,Baldry2012,Bernardi2013,He2013,DSouza2015}. Several of the earlier results are plotted in Fig.~\ref{fig:corrcted_gsmf} for comparison. The result of \citet{Li2009}, who measured the GSMF of the SDSS sample directly from the stellar masses estimated by \citet{Blanton2007} with a Chabrier IMF~\citep[][]{Chabrier2003} and corrections for dust, matches our V-max result well, and also misses the steepening of the GSMF at $M_*< 10^{9.5} \, h^{-1}{\rm M_\odot}$. Our measurement at $M_*>10^{10.5}\, h^{-1}{\rm M_\odot}$ is significantly lower than that from \citet{Bernardi2013}, because they included the light in the outer parts of massive galaxies that may be missed in the SDSS NYU-VAGC used here (see also \citealt{He2013} for a discussion of this effect). Such corrections do not affect the GSMF at the low-mass end. The overall shape of our GSMF is similar to that of \citet{Baldry2012} obtained from the GAMA sample, but the amplitude of their function is about $50\%$ lower. GAMA has a small sky coverage, $144\deg^2$, although it is deeper, reaching $r\approx 19.8$. According to our tests with mock samples of a similar sky coverage and depth, the cosmic variance in the GSMF estimated from such a sample can be very large. The lower amplitude given by the GAMA sample may be produced by such cosmic variance.
In conclusion, when CV is carefully taken into account, the low-stellar-mass end slope of the GSMF in the low-$z$ Universe, which is about $-1.5$ as indicated by the red dashed line in Fig.~\ref{fig:corrcted_gsmf}, is significantly steeper than the values published in earlier studies. In particular, there is a significant upturn at $M_*< 10^{9.5} \, h^{-1}{\rm M_\odot}$ in the GSMF that is missed in many of the earlier measurements. For reference, we list the GLF and GSMF estimated with our method in Table~\ref{tab:gsmf}. \section{Summary and discussion} \label{sec_summary} In this paper, we use the ELUCID simulation, a constrained $N$-body simulation of the Sloan Digital Sky Survey (SDSS) volume, to study the galaxy distribution in the low-$z$ Universe. Our main results can be summarized as follows: \begin{enumerate}[fullwidth,itemindent=1em,label=(\roman*)] \item Dark matter halos are selected from different snapshots of the simulation, and halo merger trees are constructed from the simulated halos down to a halo mass of $\sim 10^{10}\, h^{-1}{\rm M_\odot}$. A method is developed to extend all the simulated halo merger trees to a mass resolution of $10^9\, h^{-1}{\rm M_\odot}$, which is needed to model galaxies down to a stellar mass of $10^8\, h^{-1}{\rm M_\odot}$. \item The merger trees are used to populate simulated dark matter halos with galaxies according to an empirical model of galaxy formation developed by \citet{Lu2014a,Lu2015a}. The model galaxies follow the real galaxies in the SDSS volume both in spatial distribution and in intrinsic properties. The catalog of the model galaxies therefore provides a unique way to study galaxy formation and evolution in the cosmic web in the low-$z$ Universe. \item Mock catalogs in the SDSS sky coverage are constructed, which can be used to investigate the distribution of galaxies as measured from the real SDSS data and its relation to the global distribution expected from a fair sample of galaxies in the low-$z$ Universe.
These mock catalogs can thus be used to quantify the cosmic variance in the statistical properties of the low-$z$ galaxy population estimated from a survey like the SDSS. \item As an example, we use the mock catalogs so constructed to quantify the cosmic variance in the galaxy stellar mass function (GSMF). Useful fitting formulae are obtained to describe the cosmic variance and covariance matrix of the GSMF as functions of stellar mass and sample volume. \item We find that the GSMF estimated from the SDSS magnitude-limited sample can be significantly affected by the presence of the under-dense region at $z<0.03$, so that the low-mass end of the function can be substantially underestimated. \item We test several existing methods that are designed to deal with the effects of cosmic variance on the estimate of the GSMF, and find that none of them is able to fully account for these effects. \item We propose and test a method based on the conditional stellar mass functions in dark matter halos, which is found to provide an unbiased estimate of the global GSMF. \item We apply the method to the SDSS data and find that the GSMF has a significant upturn at $M_*< 10^{9.5} \, h^{-1}{\rm M_\odot}$, which is missed in many earlier measurements of the local GSMF. \end{enumerate} Our results on the GSMF have important implications for galaxy formation and evolution. The presence of an upturn in the GSMF at $M_*<10^{9.5}\, h^{-1}{\rm M_\odot}$ suggests that there is a characteristic mass scale, $\sim 10^{9.5}\, h^{-1}{\rm M_\odot}$, corresponding to a halo mass of $\sim 10^{11}\, h^{-1}{\rm M_\odot}$ \citep[e.g.][]{Lim2017a}, below which star formation may be affected by processes that are different from those in galaxies of higher masses. The stellar mass function of galaxies at low-$z$ has been widely used to calibrate numerical simulations and semi-analytic models of galaxy formation.
The improved estimate of the GSMF presented here will clearly provide more accurate constraints on theoretical models. The mock catalogs constructed here have other applications. For example, they can be used to analyze the cosmic variance in the measurements of other statistical properties of the galaxy population, such as the correlation functions \citep{Zehavi2005, WangYu2007} and peculiar velocities \citep[e.g.][]{Jing1998,Loveday2018} of galaxies of different luminosities/masses. Because of the presence of local large-scale structures, such as the under-dense region at $z<0.03$, the measurements for faint galaxies can be affected. A comparison between the results obtained from the mock sample and those from the benchmark sample can then be used to quantify the effects of cosmic variance. Another application is to HI samples of galaxies. Current HI surveys, such as HIPASS \citep{Meyer2004} and ALFALFA \citep{Giovanelli2005}, are shallow, typically reaching $z\sim 0.05$, and so the HI-mass functions and correlation functions estimated from these surveys can be significantly affected by the cosmic variance in the nearby Universe \citep[e.g.][]{Guo2017}. The same method described here can be used to construct mock catalogs for HI galaxies and to quantify the cosmic variance in these measurements. We will come back to some of these problems in the future. \section*{Acknowledgements} This work is supported by the National Key R\&D Program of China (grant Nos. 2018YFA0404503, 2018YFA0404502), the National Key Basic Research Program of China (grant Nos. 2015CB857002, 2015CB857004), and the National Science Foundation of China (grant Nos. 11233005, 11621303, 11522324, 11421303, 11503065, 11673015, 11733004, 11320101002). HJM acknowledges the support from NSF AST-1517528.
\section{Introduction} Many pieces of evidence support the existence of dark matter in the universe. One of the most attractive candidates is weakly interacting massive particles (WIMPs). Three methods can be used to search for WIMPs: direct, indirect, and collider experiments. Direct detection aims to catch the signal of nuclear recoils caused by WIMPs. Many direct dark matter search experiments have been performed worldwide. Among the variations of direct detection experiments, directional dark matter searches are said to provide ``smoking gun'' evidence of the galactic dark matter\ \cite{TANIMORI2004241}. Several directional dark matter search experiments utilizing low-pressure gaseous time projection chambers (TPCs), such as DRIFT\ \cite{BATTAT201765}, MIMAC\ \cite{Santos:2013oua}, and NEWAGE\ \cite{Nakamura:PTEP2015}, are being performed. As is the case for other direct dark matter search experiments, background reduction is one of the most important R\&D items for the gaseous TPC detectors. Large-volume detectors are also required because the expected event rate is very low. However, in conventional TPCs, in which electrons are drifted by the electric field, the electron diffusion during the drift is large compared to the track length; thus, constructing a large TPC (e.g., on the 1\ $\mathrm{m^{3}}$ scale) is very challenging. A negative-ion TPC (NITPC) is a gaseous TPC filled with a negative-ion gas, i.e., a gas with a large electron affinity. This type of TPC, in which negative ions are drifted instead of electrons, has been studied for high-position-resolution applications because the ion diffusion is smaller than that of electrons\ \cite{MARTOFF2000355}. The DRIFT group pioneered the development of NITPCs for direct dark matter searches and succeeded in operating a large-volume TPC. They also demonstrated the use of the ``minority carrier,'' which makes z-fiducialization possible\ \cite{Battat:2014van}\ \cite{PHAN2015}.
After a nuclear recoil and the ionization of gas molecules, several species of negative ions, called the main and minority charges, are created in NITPCs. These ion species have different drift velocities; hence, the absolute z-position of the event can be determined from the arrival times. This feature enables detectors to reject radioactive-decay events originating from the drift and readout planes. The first NITPC, using $\mathrm{CS_2}$ with a few percent $\mathrm{O_{2}}$ admixture, was demonstrated in \cite{BATTAT201765}. $\mathrm{SF_6}$ was also recently identified as a good negative-ion gas candidate for NITPCs \cite{PHAN2015}. $\mathrm{SF_6}$ is a safer gas than the $\mathrm{CS_{2} + O_{2}}$ mixture, being non-flammable and non-toxic. NITPC is a relatively new technology; thus, no method has yet been established to simulate micro-pattern gaseous detectors (MPGDs) in negative-ion gas with Garfield++ \cite{Garfield:url}, an MPGD simulation toolkit. The NITPC simulation method and its results, together with the related experimental results, are described herein to establish an MPGD simulation method for negative-ion gas. \section{Measurements} \subsection{Measurement Setup} Several fundamental properties of the MPGD performance in negative-ion gas were measured as comparison targets for the simulation. Fig.~\ref{fig:exp_setup} schematically shows the setup used to evaluate the performance of GEMs\ \cite{SAULI1997531} in $\mathrm{SF_6}$. Two or three GEMs (100\ $\mathrm{\mu m}$ thick, liquid crystal polymer, made by Scienergy) were used; the only difference between these setups was the number of GEMs. The drift mesh, made of stainless steel, was set on the top, and GEM1 (three-GEM case) or GEM2 (double-GEM case) was located 10\ mm below the drift mesh. The transfer gap, which is the distance between the GEMs, was 3.5\ mm, while the induction gap, which is the distance between GEM3 and the readout plane, was 2\ mm.
A one-dimensional readout with 24\ strips at a pitch of 400\ $\mu$m was used as the readout electrode. The signals from these strips were grouped into one channel, decoupled with a 1\ nF capacitor, and read by a charge-sensitive amplifier, Cremat CR-110 (gain:\ 1.4\ V/pC, $\tau$\ =\ 140\ $\mu$s). The waveforms were recorded using a USB oscilloscope, UDS-5206S. Pure $\mathrm{SF_6}$ gas was used at various low pressures of 60--120\ Torr. The gas gains were measured with different applied voltages and pressures. \begin{figure}[] \centering \includegraphics[width=8cm]{exp_setup_schemEN.pdf} \caption{Gain measurement setup} \label{fig:exp_setup} \end{figure} \subsection{Measurement result} Fig.~\ref{fig:GEM_gain} shows the measurement results. Gas gains of up to 6,000 were obtained with $\mathrm{SF_6}$ at 100\ Torr in the double-GEM measurement (Fig.~\ref{fig:doubleGEM_gain}). Gas gains of up to 10,000 were obtained at 60--120\ Torr in the triple-GEM measurement (Fig.~\ref{fig:tripleGEM_gain}). Figs.~\ref{fig:dep_results}a--c show the gas gain dependence on the drift, transfer, and induction fields, respectively. No dependence on the drift electric field was observed, as shown in Fig.~\ref{fig:drift_dep_result}. Some dependence on the transfer and induction electric fields was observed, as shown in Fig.~\ref{fig:transfer_dep_result} and Fig.~\ref{fig:induction_dep}. In both cases, the gains increased in the lower-electric-field region and reached plateaus in the higher-electric-field region. In addition, the induction-dependence measurement showed a slight gain rise after the plateau, indicating that some electron multiplication may occur in the induction gap. The gas gains relevant to a single GEM were calculated from the double- and triple-GEM results by assuming that the total gain was the product of two or three GEM gains. Fig.~\ref{fig:gain_per_single} shows the result. The single-GEM gains derived from the double- and triple-GEM measurements lay on the same gain curve.
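The per-GEM gains quoted above follow from assuming the total gain is the product of $N$ identical single-GEM gains, i.e., $G_1=G_{\rm tot}^{1/N}$; a trivial helper illustrates this:

```python
import numpy as np

def gain_per_gem(total_gain, n_gem):
    """Effective single-GEM gain, assuming the total gain is the product
    of N identical per-GEM gains: G_1 = G_total**(1/N)."""
    return np.asarray(total_gain, dtype=float) ** (1.0 / n_gem)
```

For example, a double-GEM gain of 6,000 corresponds to a single-GEM gain of about 77, and a triple-GEM gain of 10,000 to about 22, consistent with both configurations lying on one gain curve.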
This result indicates that there is no significant charge loss in the transfer regions between the GEMs. \begin{figure} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[keepaspectratio, scale=0.4]{double_gain.pdf} \subcaption{Double-GEM gas gain at 100\ Torr in pure $\mathrm{SF_{6}}$} \label{fig:doubleGEM_gain} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[keepaspectratio, scale=0.4]{triple_gasgain.pdf} \subcaption{Triple-GEM gas gain in pure $\mathrm{SF_{6}}$} \label{fig:tripleGEM_gain} \end{minipage} \caption{Double- and triple-GEM gas gain curves} \label{fig:GEM_gain} \end{figure} \begin{figure} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[keepaspectratio, scale=0.4]{drift_gain.pdf} \subcaption{Gas gain dependence on $E_{drift}$. Measurement conditions: $\mathrm{SF_{6}}$ at 100\ Torr, $E_{transfer}=2.9\ \mathrm{kV/cm}$, and $E_{induction}=2.5\ \mathrm{kV/cm}$} \label{fig:drift_dep_result} \end{minipage} \begin{minipage}[b]{0.48\hsize} \centering \includegraphics[keepaspectratio, scale=0.4]{transfer_gain.pdf} \subcaption{Gas gain dependence on $E_{transfer}$. Measurement conditions: $\mathrm{SF_{6}}$ at 100\ Torr, $E_{drift}=1.0\ \mathrm{kV/cm}$, and $E_{induction}=2.5\ \mathrm{kV/cm}$} \label{fig:transfer_dep_result} \end{minipage}\\ \centering \begin{minipage}[b]{0.48\linewidth} \includegraphics[keepaspectratio, scale=0.4]{induction_gain.pdf} \subcaption{Gas gain dependence on $E_{induction}$.
Measurement conditions: $\mathrm{SF_{6}}$ at 100\ Torr, $E_{drift}=1.0\ \mathrm{kV/cm}$, and $E_{transfer}=2.9\ \mathrm{kV/cm}$}% \label{fig:induction_dep} \end{minipage} \caption{Gas gain dependence on the electric fields in $\mathrm{SF_{6}}$ at 100\ Torr} \label{fig:dep_results} \end{figure} \begin{figure}[] \centering \includegraphics[width=.6\textwidth]{gain_per_single2.pdf} \caption{Gas gain per single GEM at 100\ Torr $\mathrm{SF_{6}}$} \label{fig:gain_per_single} \end{figure} \section{MPGD simulation} An MPGD simulation study for negative-ion gas is indispensable for optimizing the detector design and operation. Garfield++\ \cite{Garfield:url} and Magboltz\ \cite{Magboltz:url} version 9.01 were used herein. Garfield++ is a simulation toolkit for gaseous detector studies written in C++. It imports the detector geometry and electric field from the output of external finite-element-method software and uses Monte Carlo microscopic and macroscopic methods for the charge propagation. Magboltz calculates the transport parameters of electrons in a given gas, electric field, and magnetic field from cross-section data. These parameters were passed to Garfield++ and used for the gas avalanche calculation. \subsection{Simulation method} The cross-section data for the reactions between electrons and negative-ion gases, such as $\mathrm{SF_6}$, are implemented in Magboltz. However, no code existed for negative-ion transport and electron detachment. A negative-ion transport process and two electron detachment models were therefore introduced and implemented in a Garfield++ customization code. The whole process is as follows. \begin{enumerate} \item Negative ions drift in the electric field and reach the MPGD region. \item The electron is detached from the negative-ion molecule by the high electric field\ (electron detachment). \item An avalanche starts from the detached electron.
\item The electron-ion pairs create the signal. \end{enumerate} As a simulation setup, a single 100\ $\mathrm{\mu m}$-thick GEM with a 140\ $\mathrm{\mu}$m hole pitch and a 70\ $\mathrm{\mu}$m hole diameter was used. The gas was pure $\mathrm{SF_{6}}$ at 100\ Torr, and only the main charge, $\mathrm{SF_{6}^{-}}$, was considered in the drift process\ (i). The electron detachment cross-section of $\mathrm{SF_{6}^{-}}$ was used for the electron detachment process\ (ii), and two models, namely the cross-section and threshold models, were introduced to simulate the detachment. The electron~/~$\mathrm{SF_{6}}$ cross-sections available in Magboltz were used for the avalanche process\ (iii). \subsection{Cross-section model} The electron detachment probability in the cross-section model was calculated from the cross-sections of the relevant chemical processes. Two possible detachment processes were considered: the direct one (Eq.(\ref{eq:sf6_direct_reaction1}) or (\ref{eq:sf6_direct_reaction2})) and the indirect one (Eq.(\ref{eq:sf6_indirect_reaction1})+(\ref{eq:sf6_indirect_reaction3})~or~(\ref{eq:sf6_indirect_reaction2})+(\ref{eq:sf6_indirect_reaction3})). The cross-sections of these reactions were reported in \cite{YWANG}. The cross-sections of Eqs.(\ref{eq:sf6_direct_reaction1}) and (\ref{eq:sf6_direct_reaction2}) rise at around 100\ eV (Fig.\ref{fig:detach_direct}), while that of Eq.(\ref{eq:sf6_indirect_reaction3}) rises at around 10\ eV, i.e., at a ten times lower energy. Fig.\ref{fig:detach_indirect} indicates that the reactions of Eqs.(\ref{eq:sf6_indirect_reaction1}) and (\ref{eq:sf6_indirect_reaction2}) have larger cross-sections below 10\ eV than those of Eqs.(\ref{eq:sf6_direct_reaction1}) and (\ref{eq:sf6_direct_reaction2}).
In addition, $\mathrm{SF_{6}^{-}}$ has a larger mass than $\mathrm{F^{-}}$, so $\mathrm{SF_{6}^{-}}$ is harder to accelerate to an energy sufficient for the direct reactions. Therefore, the detachment rate can be assumed to be determined by Eq.(\ref{eq:sf6_indirect_reaction3}), so that the cross-section of Eq.(\ref{eq:sf6_indirect_reaction3}) can be used for the detachment calculation. \begin{figure} \centering \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[keepaspectratio, scale=0.5]{detach_cross_direct.pdf} \subcaption{Direct detachment, Eq.(\ref{eq:sf6_direct_reaction1}) (green square), Eq.(\ref{eq:sf6_direct_reaction2}) (blue triangle), and Eq.(\ref{eq:sf6_indirect_reaction3}) (red circle)} \label{fig:detach_direct} \end{minipage} \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[keepaspectratio, scale=0.5]{detach_cross_indirect.pdf} \subcaption{Fluorine ion ($\mathrm{F^{-}}$) creation reactions, Eq.(\ref{eq:sf6_indirect_reaction1}) (green triangle) and Eq.(\ref{eq:sf6_indirect_reaction2}) (blue triangle)} \label{fig:detach_indirect} \end{minipage} \caption{Direct electron detachment and fluorine ion ($\mathrm{F^{-}}$) creation reactions \cite{YWANG}} \label{fig:detachment_reaction_crosssection} \end{figure} \begin{eqnarray} \label{eq:sf6_direct_reaction1} SF_{6}^{-} + SF_{6} & \rightarrow & e^{-} + SF_{6} + SF_{6} \\ \label{eq:sf6_direct_reaction2} SF_{5}^{-} + SF_{6} & \rightarrow & e^{-} + SF_{5} + SF_{6} \end{eqnarray} \begin{eqnarray} \label{eq:sf6_indirect_reaction1} SF_{6}^{-} + SF_{6} & \rightarrow & F^{-} + SF_{5} +SF_{6} \\ \label{eq:sf6_indirect_reaction2} SF_{5}^{-} + SF_{6} & \rightarrow & F^{-} + SF_{4} +SF_{6} \\ \label{eq:sf6_indirect_reaction3} F^{-} + SF_{6} & \rightarrow & e^{-} + F +SF_{6} \end{eqnarray} In the Eq.(\ref{eq:sf6_indirect_reaction1}) reaction, the $\mathrm{SF_{6}^{-}}$ energy can be calculated using the $\mathrm{SF_{6}^{-}}$ mobility in $\mathrm{SF_{6}}$\ \cite{Benhenni2012}, and the fluorine ion
$\mathrm{F^{-}}$ is expected to have an energy of up to approximately 2\ eV, according to a simple kinematic calculation. The $\mathrm{F^{-}/SF_{6}}$ collision has a cross-section of $\mathrm{\sigma=10^{-15}-10^{-14}\ cm^2}$\ \cite{Benhenni2012} in $\mathrm{SF_{6}}$ at 100\ Torr in the corresponding energy range, and the number density is $\mathrm{n = 3.5\times 10^{18}\ cm^{-3}}$. The mean free path of this reaction is thus of the order of $\mathrm{10^{-6}\ m}$. The energy that the $\mathrm{F^{-}}$ ion obtains from the electric field can be calculated along its path: in the GEM hole, where the typical electric field is approximately 50\ kV/cm and above, the energy gained over a mean free path of the order of $\mathrm{10^{-6}\ m}$ is about 10\ eV, which is close to the threshold of the Eq.(\ref{eq:sf6_indirect_reaction3}) reaction. Given the $\mathrm{F^{-}}$ energy, the mean free path $\mathrm{\lambda}$ of electron detachment through the indirect reaction was then obtained from the cross-section shown in Fig.\ref{fig:detach_direct}\ \cite{YWANG}. The probability $p$ of electron detachment from $\mathrm{F^{-}}$ within 1\ $\mathrm{\mu m}$, the step size used in this simulation, was calculated as in Eq.(\ref{eq:detach_prob}): \begin{eqnarray} p = 1 - \exp(- 1\ \mathrm{\mu m} / \lambda) \label{eq:detach_prob} \end{eqnarray} These detachment probabilities are shown as the blue triangles in Fig.\ref{fig:detach_prob}; the interpolated probability values were used in the simulation. \subsection{Threshold model} The other model, the threshold model, is a more phenomenological one: the negative ion releases its electron when the electric field exceeds a threshold value. In this simulation, the threshold was set to 45\ kV/cm, the field at which avalanches are first observed in the corresponding experiments.
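The two detachment models can be sketched numerically. The following minimal Python sketch reproduces the mean-free-path estimate and the per-step detachment probability of the cross-section model, together with the threshold model; the cross-section value and step size are illustrative assumptions (the actual $\sigma$ is energy dependent and interpolated from \cite{YWANG} in the simulation):

```python
import math

# Values from the text: SF6 number density at 100 Torr; the cross-section
# below is an assumed representative value (it is energy dependent).
N_DENSITY_CM3 = 3.5e18   # SF6 molecules / cm^3 at 100 Torr
SIGMA_CM2 = 1.0e-15      # assumed F-/SF6 detachment cross-section, cm^2
STEP_UM = 1.0            # simulation step, micrometers

def detach_prob_cross_section(sigma_cm2=SIGMA_CM2, n_cm3=N_DENSITY_CM3,
                              step_um=STEP_UM):
    """Cross-section model: p = 1 - exp(-step/lambda), lambda = 1/(n*sigma)."""
    mfp_cm = 1.0 / (n_cm3 * sigma_cm2)           # mean free path in cm
    return 1.0 - math.exp(-(step_um * 1e-4) / mfp_cm)

def detach_prob_threshold(e_field_kv_cm, threshold_kv_cm=45.0):
    """Threshold model: the ion releases its electron once E exceeds ~45 kV/cm."""
    return 1.0 if e_field_kv_cm >= threshold_kv_cm else 0.0

mfp_m = 1e-2 / (N_DENSITY_CM3 * SIGMA_CM2)       # mean free path in meters
print(f"mean free path ~ {mfp_m:.1e} m")          # of the order of 1e-6 m
print(f"p(detach per 1 um step) = {detach_prob_cross_section():.3f}")
print(f"p(threshold, E=50 kV/cm) = {detach_prob_threshold(50.0)}")
```

With these assumed numbers the mean free path comes out at a few $10^{-6}$ m, consistent with the estimate in the text.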
This threshold value was obtained from the electric field map of the GEM geometry at the voltage where the gas avalanche starts in the experiments. The red line in Fig.\ref{fig:detach_prob} shows the resulting probability function. \subsection{Simulation result} \begin{figure}[] \centering \includegraphics[width=10cm]{deta_prob_comp2.pdf} \caption{Detachment probability dependence on the electric field. The blue markers denote the cross-section model, while the red line denotes the threshold-model probability function.} \label{fig:detach_prob} \end{figure} \begin{figure}[] \centering \includegraphics[width=12cm]{final_gaincurve.pdf} \caption{Gas gain curve (pure\ $\mathrm{SF_6}$\ at 100\ Torr). The cross marks denote the measurement result, the red squares the threshold model, and the blue triangles the cross-section model.} \label{fig:final_result} \end{figure} The gas avalanche simulations were performed with the detachment models described above. Fig.\ref{fig:final_result} shows the simulation results. The blue triangles and the red squares show the results of the cross-section and threshold models, respectively. The measurement result and its error are also depicted as the asterisks and the light blue band, respectively. As a primary outcome of this study, the absolute gas gain was reproduced by the simulation to within a factor of 2, which can be regarded as a success of this first study. The main difference between the cross-section and threshold models was the slope of the gain curve: the cross-section model gave a larger gain than the threshold model at low electric fields but a smaller one at higher fields. This behavior is explained by the detachment probability difference shown in Fig.\ref{fig:detach_prob}: at low electric fields the cross-section model has the larger detachment probability, while at higher fields the threshold model does.
A comparison of the experimental and simulation results indicated that the threshold model reproduced the slope of the measured gain curve better than the cross-section model. Since the cross-section model is closer to a first-principles method, tuning its cross-sections would be an important step toward generalizing the outcome of this study. Studies of other MPGDs in negative-ion gas, such as MicroMegas and the $\mathrm{\mu}$-PIC, are also important future work, as is a comparison of the gas gain dependences on the transfer and induction electric fields. Studies of other quantities, such as the energy resolution, signal formation, and charge collection, would further help to validate this method. Optimization work for dark matter search experiments is also planned. \section{Conclusion} In this work, the first MPGD simulation study with $\mathrm{SF_{6}}$, a negative-ion gas, was performed with Garfield++ and Magboltz. Two detachment models, the cross-section and threshold models, were introduced to describe the electron detachment process. With both models, the absolute gas gain was reproduced by the simulation to within a factor of 2. In terms of the slope of the gain curve, the threshold model reproduced the experimental results better than the cross-section model. \section*{Acknowledgement} This work was supported by KAKENHI Grants-in-Aid (15K21747, 26104005, 16H02189 and 19H05806) and Grants-in-Aid for JSPS Fellows (17J03537 and 19J20376). \section*{References} \bibliographystyle{iopart-num}
\section{Introduction} The problem of designing robust deterministic Model Predictive Control (MPC) schemes has nowadays many solutions, see for example \cite{MagniBook07,rawbook,LimonEtAl-NMPC09}. However, the proposed approaches are in general computationally very demanding, since they either require the solution of difficult on-line min-max optimization problems, e.g., \cite{MagniDeNScatAll03}, or the off-line computation of polytopic robust positive invariant sets, see \cite{mayne2005robust}. In addition, they are conservative, mainly because they (implicitly or explicitly) rely on worst-case approaches. Moreover, in case the uncertainties/disturbances are characterized as stochastic processes and constraints must be reformulated in a probabilistic framework \cite{Yaesh2003351,Hu2012477}, worst-case deterministic methods do not take advantage of the available knowledge of the process noise, such as its probability density function, and cannot even guarantee recursive feasibility in case of possibly unbounded disturbances.\\ Starting from the pioneering works \cite{Schwarm99,Li02}, these reasons have motivated the development of MPC algorithms for systems affected by stochastic noise and subject to probabilistic state and/or input constraints. Mainly two classes of algorithms have been developed so far. The first relies on the randomized, or scenario-based, approach, see e.g., \cite{Batina,Calaf,BlackmoreOnoBektassovWilliams-TR10}, a very general methodology that allows one to consider linear or nonlinear systems affected by noise with general distributions and possibly unbounded, non-convex support.
As a main drawback, randomized methods are still computationally very demanding for practical implementation, and their feasibility and convergence properties are difficult to prove.\\ The second approach, referred to in \cite{Cogill} as the probabilistic approximation method, is based on the point-wise reformulation of probabilistic, or expectation, constraints in deterministic terms to be included in the MPC formulation. Interesting intermediate methods have been proposed in \cite{BernardiniBemporad-TAC12}, where a finite number of disturbance realizations is assumed, and in \cite{Korda}, where time-averaged constraints are considered. Within the wide class of probabilistic approximation algorithms, a further distinction can be based on the noise support, which can be either bounded, e.g., as in \cite{Korda,KouvaritakisCannonRakovicCheng-Automatica10,CannonChengKouvaritakisRakovic-Automatica12}, or unbounded, see for instance \cite{Bitmead,Primbs,Hokaymen,Cogill,CannonKouvaritakisWu-Automatica09,Ono-ACC12}. While for bounded disturbances recursive feasibility and convergence can be established, the more general case of unbounded noise poses more difficulties, and specific reformulations of these properties have been adopted: for example, in \cite{CannonKouvaritakisWu-Automatica09} the concept of invariance with probability $p$ is used, while in \cite{Ono-ACC12} the notion of probabilistic resolvability is introduced.
Also, linear systems with a known state have generally been considered, with the notable exceptions of \cite{Bitmead,Hokaymen,CannonChengKouvaritakisRakovic-Automatica12}, where output feedback methods have been proposed.\\ Finally, it must be remarked that some of the mentioned approaches have been successfully applied in many settings, such as building temperature regulation \cite{OldewurtelParisioJonesMorariEtAl-ACC10} and automotive applications \cite{GrayBorrelli-ITSC13,BlackmoreOnoBektassovWilliams-TR10,BichiRipaccioliDiCairanoBernardiniBemporadKolmanovsky-CDC10}.\\ In this paper, an output feedback MPC algorithm for linear discrete-time systems affected by a possibly unbounded additive noise is proposed. In case the noise distribution is unknown, the chance constraints on the input and state variables are reformulated by means of the Chebyshev - Cantelli inequality \cite{Cantelli}, as originally proposed in \cite{Locatelli} for the design of decentralized controllers and in \cite{Pala} in the context of MPC. Later, this approach has also been considered in \cite{Cogill,GrayBorrelli-ITSC13}, and used to develop preliminary versions of the algorithm proposed here in \cite{FGMS13_CDC,FGMS14_IFAC}. With respect to \cite{FGMS13_CDC,FGMS14_IFAC}, in this paper we present our control approach in a consistent and detailed fashion. In particular, we also address the case when the noise distribution is known (i.e., Gaussian). We further discuss implementation aspects, proposing two novel and theoretically well-founded approximated schemes together with full implementation details. With a proper choice of the design parameters, the computational load of the algorithm can be made similar to that of a standard stabilizing MPC algorithm.
Importantly, the computation of robust positively invariant sets is not required and, in view of its simplicity and lightweight computational load, the proposed approach can be applied to medium/large-scale problems. The recursive feasibility of the proposed algorithm is guaranteed by a switching MPC strategy which does not require any relaxation technique, and the convergence of the state to a suitable neighborhood of the origin is proved.\\ The paper is organized as follows. In Section \ref{sec:problem_statement} we first introduce the main control problem, and then we define and suitably reformulate the probabilistic constraints. In Section \ref{MPC} we formulate the stochastic MPC optimization problem and give the general theoretical results. Section \ref{sec:num_implementation} is devoted to implementation issues, while in Section \ref{sec:example} two examples are discussed in detail: the first is analytic and aims at comparing the conservativeness of the algorithm with that of the well-known tube-based approach \cite{mayne2005robust}, while the second is numerical and allows for a comparison of the different algorithm implementations. Finally, in Section \ref{sec:conclusions} we draw some conclusions. For clarity of exposition, the proofs of the main theoretical results are postponed to the Appendix.\\ \textbf{Notation}. The symbols $\succ$ and $\succeq$ (respectively, $\prec$ and $\preceq$) are used to denote positive definite and positive semi-definite (respectively, negative definite and negative semi-definite) matrices. The point-to-set distance from $\zeta$ to $\mathcal{Z}$ is $\mathrm{dist}(\zeta,\mathcal{Z}):=\inf\{\|\zeta-z\|,z\in\mathcal{Z}\}$.
% \section{Problem statement} \label{sec:problem_statement} \subsection{Stochastic system and probabilistic constraints} % Consider the following discrete-time linear system \begin{equation} \left\{\begin{array}{l} x_{t+1}=Ax_t+Bu_t+Fw_t \quad t\geq 0\\ y_{t}=Cx_t+v_t \end{array}\right . \label{eq:model}\end{equation} where $x_t\in \mathbb{R}^n$ is the state, $u_t\in\mathbb{R}^m$ is the input, $y_t\in\mathbb{R}^p$ is the measured output and $w_t\in\mathbb{R}^{n_w}, v_t\in\mathbb{R}^{p}$ are two independent, zero-mean, white noises with covariance matrices $W\succeq 0$ and $V\succ0$, respectively, and a-priori unbounded support. The pair $(A,C)$ is assumed to be observable, and the pairs $(A,B)$ and $(A,\tilde{F})$ are reachable, where matrix $\tilde{F}$ satisfies $\tilde{F}\tilde{F}^T=FWF^T$.\\ % Polytopic constraints on the state and input variables of system \eqref{eq:model} are imposed in a probabilistic way, i.e., it is required that, for all $t\geq 0$ \begin{align}\label{eq:prob_constraint_state} \mathbb{P}\{b_r^Tx_{t}\geq x^{ max}_r\}&\leq p_r^x \quad r=1,\dots, n_r\\ \label{eq:prob_constraint_input} \mathbb{P}\{c_s^Tu_{t}\geq u^{ max}_s\}&\leq p_s^u \quad s=1,\dots, n_s \end{align} where $\mathbb{P}(\phi)$ denotes the probability of $\phi$, $b_r$, $c_s$ are constant vectors, $x_r^{ max}$, $u_s^{ max}$ are bounds for the state and control variables, and $p^x_r, p^u_s$ are design parameters. It is also assumed that the set of relations $b_r^Tx\leq x^{ max}_r$, $r=1,\dots, n_r$ (respectively, $c_s^Tu\leq u^{ max}_s$, $s=1,\dots, n_s$), defines a convex set $\mathbb{X}$ (respectively, $\mathbb{U}$) containing the origin in its interior. 
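Chance constraints of this form can be checked empirically by Monte Carlo simulation of the dynamics. A minimal numpy sketch, with zero input and illustrative matrices that are assumptions for the sketch (not values from the text), estimates the violation probability $\mathbb{P}\{b_r^Tx_{t}\geq x^{max}_r\}$ at a fixed time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state instance of x_{t+1} = A x_t + B u_t + F w_t
# (all numerical values are assumptions for this sketch).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
F = np.array([[0.05],
              [0.10]])
b_r = np.array([1.0, 0.0])   # constraint direction b_r
x_max = 1.0                  # bound x_r^max
T, n_runs = 50, 5000

# Empirical estimate of P{b_r^T x_T >= x_max} with zero input (u_t = 0).
violations = 0
for _ in range(n_runs):
    x = np.zeros(2)
    for _ in range(T):
        w = rng.normal()              # zero-mean white noise w_t
        x = A @ x + F[:, 0] * w
    violations += b_r @ x >= x_max
p_hat = violations / n_runs
print(f"empirical violation probability: {p_hat:.4f}")
```

For these assumed matrices the constraint direction has a steady-state standard deviation of roughly 0.2, so violations of the bound 1.0 are essentially never observed; tightening `x_max` toward the standard deviation makes the empirical probability nonzero.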
\subsection{Regulator structure} For system~\eqref{eq:model}, we want to design a standard regulation scheme made by the state observer \begin{equation}\label{eq:observer1} \hat{x}_{t+1}=A\hat{x}_{t}+Bu_t+L_{t}(y_t-C\hat{x}_{t}) \end{equation} coupled with the feedback control law \begin{equation}\label{eq:fb_control_law_ideal} u_{t}=\bar{u}_{t}-K_{t}(\hat{x}_t-\bar{x}_{t}) \end{equation} where $\bar{x}$ is the state of the nominal model \begin{equation}\label{eq:mean_value_evolution_freeIC} \bar{x}_{t+1}=A\bar{x}_{t}+B\bar{u}_{t} \end{equation} In \eqref{eq:observer1}, \eqref{eq:fb_control_law_ideal}, the feedforward term $\bar{u}_{t}$ and the gains $L_{t}$, $K_{t}$ are design parameters to be selected to guarantee convergence properties and the fulfillment of the probabilistic constraints \eqref{eq:prob_constraint_state}, \eqref{eq:prob_constraint_input}.\\ Letting \begin{subequations} \label{eq:errors} \begin{align} e_{t}&=x_{t}-\hat{x}_{t}\label{eq:obs_error}\\ \varepsilon_{t}&=\hat{x}_{t}-\bar{x}_{t}\label{est_error} \end{align} \end{subequations} from \eqref{eq:errors} we obtain that \begin{equation}\label{eq:error2} \delta x_{t}=x_{t}-\bar{x}_{t}= e_{t}+\varepsilon_{t} \end{equation} Define also the vector $\sigma_{t}=\begin{bmatrix}e_{t}^T& \varepsilon_{t}^T\end{bmatrix}^T$ whose dynamics, according to \eqref{eq:model}-\eqref{eq:errors}, is described by \begin{equation}\label{eq:error_matrix} \begin{array}{ll} \sigma_{t+1}=&\Phi_{t} \sigma_{t}+\Psi_{t} \begin{bmatrix}w_{t}\\v_{t}\end{bmatrix}\end{array} \end{equation} where $$\Phi_{t}= \begin{bmatrix}A-L_{t}C&0\\L_{t}C&A-BK_{t}\end{bmatrix},\,\Psi_{t}=\begin{bmatrix}F&-L_{t}\\0&L_{t}\end{bmatrix}$$ In the following it is assumed that the initialization is such that $\mathbb{E}\left\{\sigma_{0}\right\}=0$; since the noises $v$ and $w$ are zero-mean, the enlarged state $\sigma_{t}$ of system \eqref{eq:error_matrix} is then zero-mean, so that $\bar{x}_{t}=\mathbb{E}\{x_{t}\}$.
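A minimal numpy sketch of the augmented error dynamics above builds $\Phi_t$ and $\Psi_t$ and propagates the covariance of $\sigma_t$ one step; the dimensions, gains, and noise covariances are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative dimensions: n = 2 states, one input, p = 1 output (assumed).
n, p = 2, 1
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
F = np.eye(n)                        # here n_w = n
K = np.array([[0.5, 0.2]])           # assumed feedback gain K_t
L = np.array([[0.4], [0.1]])         # assumed observer gain L_t

# Augmented matrices for sigma_t = [e_t; eps_t], as defined in the text.
Phi = np.block([[A - L @ C,        np.zeros((n, n))],
                [L @ C,            A - B @ K      ]])
Psi = np.block([[F,                -L],
                [np.zeros((n, n)),  L]])

# One step of the covariance propagation Sigma+ = Phi Sigma Phi' + Psi Omega Psi'.
W = 0.01 * np.eye(n)                 # process noise covariance
V = 0.01 * np.eye(p)                 # measurement noise covariance
Omega = np.block([[W, np.zeros((n, p))],
                  [np.zeros((p, n)), V]])
Sigma = np.zeros((2 * n, 2 * n))     # E{sigma_0} = 0, zero initial covariance
Sigma_next = Phi @ Sigma @ Phi.T + Psi @ Omega @ Psi.T
print(Sigma_next.shape)              # (4, 4)
```

Iterating the last line in a loop converges to the steady-state covariance, since the (block-triangular) $\Phi$ assumed here is Schur stable.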
Then, denoting by $\Sigma_{t}=\mathbb{E}\left\{ \sigma_{t}\sigma_{t}^T \right\}$ and by $\Omega=\mathrm{diag}(W,V)$ the covariance matrices of $\sigma_{t}$ and $[w_{t}^T\, v_{t}^T]^T$ respectively, the evolution of $\Sigma_{t}$ is governed by \begin{align}\label{eq:variance_evolution_error} &\Sigma_{t+1}=\Phi_{t}\Sigma_{t}\Phi_{t}^T+\Psi_{t}\Omega\Psi_{t}^T \end{align} By definition, also the variable $\delta x_{t}$ defined by \eqref{eq:error2} is zero mean and its covariance matrix $X_{t}$ can be derived from $\Sigma_{t}$ as follows \begin{align}\label{eq:variance_evolution_state} X_{t}=\mathbb{E}\left\{\delta x_{t} \delta x_{t}^T\right\} =\begin{bmatrix}I&I\end{bmatrix}\Sigma_{t} \begin{bmatrix}I\\I\end{bmatrix} \end{align} Finally, letting $\delta u_{t}=u_{t}-\bar{u}_{t}=-K_{t}(\hat{x}_{t}-\bar{x}_{t})$, one has $\mathbb{E}\left\{\delta u_{t}\right\}=0$ and also the covariance matrix $U_{t}=\mathbb{E}\left\{\delta u_{t} \delta u_{t}^T\right\}$ can be obtained from $\Sigma_{t}$ as follows \begin{align}\label{eq:variance_evolution_input} U_{t}=&\mathbb{E}\left\{K_{t}\varepsilon_{t} \varepsilon_{t}^TK_{t}^T\right\}=\begin{bmatrix}0&K_{t}\end{bmatrix}\Sigma_{t} \begin{bmatrix}0\\K_{t}^T\end{bmatrix} \end{align} \subsection{Reformulation of the probabilistic constraints} To set up a suitable control algorithm for the design of $\bar{u}_{t}$, $L_{t}$, $K_{t}$, the probabilistic constraints \eqref{eq:prob_constraint_state} and \eqref{eq:prob_constraint_input} are now reformulated as deterministic ones at the price of suitable tightening. To this end, consider, in general, a random variable $z$ with mean value $\bar{z}=\mathbb{E}\{z\}$, variance $Z=\mathbb{E}\{(z-\bar{z})(z-\bar{z})^T\}$, and the chance-constraint \begin{equation}\mathbb{P}\{h^T z\geq z^{ max}\}\leq p\label{eq:prob_constraint_general}\end{equation} The following result, based on the Chebyshev - Cantelli inequality \cite{Cantelli}, has been proven in~\cite{Pala}. 
\begin{proposition} \label{prop:Cantelli} Letting $f(p)=\sqrt{(1-p)/{p}}$, constraint \eqref{eq:prob_constraint_general} is verified if \begin{equation} h^T\bar{z}\leq z^{ max}-\sqrt{h^T Z h}\,f(p) \label{eq:Cantelli_propGen} \end{equation} \end{proposition} Note that this result can be proved without introducing any specific assumption on the distribution of~$z$. If, on the other hand, $z$ can be assumed to be normally distributed, less conservative constraints can be obtained, as stated in the following result. \begin{proposition} \label{prop:gen_distr} Assume that $z$ is normally distributed. Then, constraint \eqref{eq:prob_constraint_general} is verified if \eqref{eq:Cantelli_propGen} holds with $f(p)=\mathcal{N}^{-1}(1-p)$, where $\mathcal{N}$ is the cumulative distribution function of a Gaussian variable with zero mean and unitary variance. \end{proposition} In Propositions \ref{prop:Cantelli} and \ref{prop:gen_distr}, the function $f(p)$ represents the level of constraint tightening on the mean value of $z$ needed to meet the probabilistic constraint \eqref{eq:prob_constraint_general}. In case of unknown distribution (Proposition \ref{prop:Cantelli}), the values of $f(p)$ are significantly larger than in the Gaussian case (e.g., more than twice as large for $p$ in the range $(0.1, 0.4)$). Similar results can be derived for other distributions (e.g., uniform).\\ In view of Propositions \ref{prop:Cantelli} and \ref{prop:gen_distr}, the chance-constraints \eqref{eq:prob_constraint_state}-\eqref{eq:prob_constraint_input} are verified provided that the following (deterministic) inequalities are satisfied.
\begin{subequations} \begin{align} b_r^T\bar{x}_{t}&\leq x_r^{max}-\sqrt{b_r^T X_{t} b_r}f(p^x_r)\label{eq:Cantelli_ineqs_state}\\ c_s^T\bar{u}_{t}&\leq u_s^{max}-\sqrt{c_s^T U_{t} c_s}f(p_s^u)\label{eq:Cantelli_ineqs_input} \end{align} \label{eq:Cantelli_ineqs} \end{subequations} If the support of the noise terms $w_k$ and $v_k$ is unbounded, the definition of state and control constraints in probabilistic terms is the only way to state feasible control problems. In case of bounded noises, the comparison between the probabilistic and the deterministic frameworks in terms of conservativeness is discussed in the example in Section \ref{app:example_constrs}. % % \section{MPC algorithm: formulation and properties} \label{MPC} To formally state the MPC algorithm for the computation of the regulator parameters $\bar{u}_{t}$, $L_{t}$, $K_{t}$, the following notation will be adopted: given a variable $z$ or a matrix $Z$, at any time step $t$ we will denote by $z_{t+k}$ and $Z_{t+k}$, $k\geq 0$, their generic values in the future, while $z_{t+k|t}$ and $Z_{t+k|t}$ will represent their specific values computed based on the information (e.g., measurements) available at time $t$.\\ The main ingredients of the optimization problem are now introduced. \subsection{Cost function} Assume we are at time $t$ and denote by $\bar{u}_{t,\dots, t+N-1}=\{\bar{u}_t, \dots, \bar{u}_{t+N-1}\}$ the nominal input sequence over a future prediction horizon of length $N$.
Moreover, define by $K_{t,\dots, t+N-1}=\{K_t, \dots, K_{t+N-1}\}$, $L_{t,\dots, t+N-1}=\{L_t, \dots, L_{t+N-1}\}$ the sequences of the future control and observer gains, and recall that the covariance $\Sigma_{t+k}=\mathbb{E}\left\{\sigma_{t+k}\sigma_{t+k}^T\right\}$ evolves, starting from $\Sigma_{t}$, according to \eqref{eq:variance_evolution_error}.\\ The cost function to be minimized is the sum of two components: the first one ($J_{m}$) accounts for the expected values of the future nominal inputs and states, while the second one ($J_{v}$) is related to the variances of the future errors $e$, $\varepsilon$, and of the future inputs. Specifically, the overall performance index is \begin{align}\label{eq:JTOT} J=J_m(\bar{x}_t,\bar{u}_{t,\dots, t+N-1})+J_v(\Sigma_t,K_{t,\dots, t+N-1},L_{t,\dots, t+N-1}) \end{align} where \begin{align} &J_m=\sum_{i=t}^{t+N-1} \| \bar{x}_{i}\|_{Q}^2+\| \bar{u}_{i}\|_{R}^2+\| \bar{x}_{t+N}\|_{S}^2\label{eq:mean_cost_function}\\ &J_v=\mathbb{E}\left\{\sum\limits_{i=t}^{t+N-1} \| x_i-\hat{x}_i \|_{Q_L}^2+\| x_{t+N}-\hat{x}_{t+N} \|_{S_{L}}^2 \right\}+\nonumber\\ &\mathbb{E}\left\{\sum\limits_{i=t}^{t+N-1} \| \hat{x}_i-\bar{x}_i \|_Q^2+\| u_i-\bar{u}_i \|_R^2+\| \hat{x}_{t+N}-\bar{x}_{t+N} \|_{S}^2 \right\} \label{eq:variance_cost_function1}\end{align} where the positive definite and symmetric weights $Q$, $Q_L$, $S$, and $S_L$ must satisfy the following inequality \begin{equation}\label{eq:Lyap_S} Q_{T}-S_{T}+\Phi^TS_{T}\Phi\preceq 0 \end{equation} where $$\Phi=\begin{bmatrix}A-\bar{L}C&0\\\bar{L}C&A-B\bar{K}\end{bmatrix}$$ $Q_{T}=\mathrm{diag}(Q_{L},Q+\bar{K}^TR\bar{K})$, $S_{T}=\mathrm{diag}(S_{L},S)$, and $\bar{K}$, $\bar{L}$ must be chosen to guarantee that $\Phi$ is asymptotically stable.\\ By means of standard computations, it is possible to write the cost \eqref{eq:variance_cost_function1} as follows \begin{equation}\label{eq:variance_cost_function} J_v=\sum_{i=t}^{t+N-1} \mathrm{tr}(Q_{T} \Sigma_{i})+ \mathrm{tr} (S_{T}
\Sigma_{t+N}) \end{equation} From \eqref{eq:JTOT}-\eqref{eq:variance_cost_function1}, it is apparent that the goal is twofold: to drive the mean $\bar{x}$ to zero by acting on the nominal input sequence $\bar{u}_{t,\dots, t+N-1}$ and to minimize the covariance $\Sigma$ by acting on the gains $K_{t,\dots, t+N-1}$ and $L_{t,\dots, t+N-1}$. Moreover, the pair $(\bar{x}_{t},\Sigma_t)$ must be considered as an additional argument of the MPC optimization, as discussed later, to guarantee recursive feasibility. \subsection{Terminal constraints} As usual in stabilizing MPC, see e.g. \cite{Mayne00}, some terminal constraints must be included in the problem formulation. In our setup, the mean $\bar{x}_{t+N}$ and the covariance $\Sigma_{t+N}$ at the end of the prediction horizon must satisfy \begin{align} \bar{x}_{t+N}&\in\bar{\mathbb{X}}_F\label{eq:term_constraint_mean}\\ \Sigma_{t+N}&\preceq \bar{\Sigma}\label{eq:term_constraint_variance} \end{align} where $\bar{\mathbb{X}}_F$ is a positively invariant set (see \cite{Gilbert}) such that \begin{align}\label{eq:inv_terminal} (A-B\bar{K})\bar{x}&\in\bar{\mathbb{X}}_F \quad &\forall \bar{x}\in \bar{\mathbb{X}}_F \end{align} while $\bar{\Sigma}$ is the steady-state solution of the Lyapunov equation \eqref{eq:variance_evolution_error}, i.e., \begin{align} \bar{\Sigma}=& \Phi \bar{\Sigma} \Phi^T+\Psi\bar{\Omega}\Psi^T \label{eq:Riccati_1} \end{align} where $\Psi=\begin{bmatrix}F&-\bar{L}\\0 &\bar{L} \end{bmatrix}$ and $\bar{\Omega}=\mathrm{diag}(\bar{W},\bar{V})$ is built by considering (arbitrary) noise covariances $\bar{W}\succeq W$ and $\bar{V}\succeq V$. In addition, and consistently with \eqref{eq:Cantelli_ineqs}, the following coupling conditions must be verified.
\begin{subequations} \label{eq:linear_constraint_finalc} \begin{align} b_r^T\bar{x}&\leq x_r^{max}-\sqrt{b_r^T \bar{X} b_r}f(p^x_r) \label{eq:linear_constraint_state_finalc}\\ -c_s^T\bar{K}\bar{x}&\leq u_s^{max}-\sqrt{c_s^T \bar{U} c_s}f(p_s^u) \label{eq:linear_constraint_input_finalc} \end{align} \end{subequations} for all $r=1,\dots, n_r$, $s=1,\dots, n_s$, and for all $\bar{x}\in\bar{\mathbb{X}}_F$, where \begin{subequations} \label{eq:bar_def} \begin{align} \bar{X}&=\begin{bmatrix}I&I\end{bmatrix}\bar{\Sigma}\begin{bmatrix}I\\I\end{bmatrix}\label{eq:Xbar_def}\\ \bar{U}&=\begin{bmatrix}0&\bar{K}\end{bmatrix}\bar{\Sigma} \begin{bmatrix}0\\\bar{K}^T\end{bmatrix}\label{eq:Ubar_def} \end{align}\end{subequations} % It is worth remarking that the choice of $\bar{\Omega}$ is subject to a tradeoff. In fact, large variances $\bar{W}$ and $\bar{V}$ result in large $\bar{\Sigma}$ (and, in view of \eqref{eq:bar_def}, large $\bar{X}$ and $\bar{U}$). This enlarges the terminal constraint \eqref{eq:term_constraint_variance} but, on the other hand, reduces the size of the terminal set $\bar{\mathbb{X}}_F$ compatible with \eqref{eq:linear_constraint_finalc}. % \subsection{Statement of the stochastic MPC (S-MPC) problem} % The formulation of the main S-MPC problem requires a preliminary discussion concerning the initialization. In principle, and in order to use the most recent information available on the state, at any time instant it would be natural to set the current value of the nominal state $\bar{x}_{t|t}$ to $\hat x_{t}$ and the covariance $\Sigma_{t|t}$ to $\mathrm{diag}(\Sigma_{11,t|t-1},0)$, where $\Sigma_{11,t|t-1}$ is the covariance of the state prediction error $e$ obtained using the observer~\eqref{eq:observer1}. However, since we do not exclude the possibility of unbounded disturbances, in some cases this choice could lead to infeasible optimization problems.
On the other hand, and in view of the terminal constraints \eqref{eq:term_constraint_mean}, \eqref{eq:term_constraint_variance}, it is easy to see that recursive feasibility is guaranteed provided that $\bar{x}$ is updated according to the prediction equation \eqref{eq:mean_value_evolution_freeIC}, which corresponds to the variance update given by \eqref{eq:variance_evolution_error}. These considerations motivate the choice of accounting for the initial conditions $(\bar{x}_t,\Sigma_t)$ as free variables, to be selected by the control algorithm according to the following alternative strategies.\\ \textbf{Strategy 1} Reset of the initial state: $\bar{x}_{t|t}=\hat x_{t}$, $\Sigma_{t|t}=\mathrm{diag}(\Sigma_{11,t|t-1},0)$.\\ \textbf{Strategy 2} Prediction: $\bar{x}_{t|t}=\bar{x}_{t|t-1}$, $\Sigma_{t|t}=\Sigma_{t|t-1}$.\\ % % The S-MPC problem can now be stated.\\\\ \textbf{S-MPC problem: }at any time instant $t$ solve $$\min_{\bar{x}_{t},\Sigma_{t},\bar{u}_{t, \dots, t+N-1}, K_{t,\dots, t+N-1},L_{t,\dots, t+N-1}} J$$ where $J$ is defined in \eqref{eq:JTOT}, \eqref{eq:mean_cost_function}, \eqref{eq:variance_cost_function1}, subject to \begin{itemize} \item[-] the dynamics~\eqref{eq:mean_value_evolution_freeIC} and~\eqref{eq:variance_evolution_error}; \item[-] constraints~\eqref{eq:Cantelli_ineqs} for all $k=0,\dots,N-1$; \item[-] the initialization constraint, corresponding to the choice between Strategies 1 and 2, i.e., \begin{equation}\label{eq:reset_constraint} (\bar{x}_{t},\Sigma_{t})\in\{(\hat x_t,\mathrm{diag}(\Sigma_{11,t|t-1},0)),(\bar{x}_{t|t-1},\Sigma_{t|t-1})\} \end{equation} \item[-] the terminal constraints~\eqref{eq:term_constraint_mean}, \eqref{eq:term_constraint_variance}.\hfill$\square$ \end{itemize} Denoting by $\bar{u}_{t,\dots, t+N-1|t}=\{\bar{u}_{t|t},\dots, \bar{u}_{t+N-1|t}\}$, $K_{t,\dots, t+N-1|t}=\{K_{t|t},\dots,$\break $K_{t+N-1|t}\}$, $L_{t,\dots, t+N-1|t}=$ $\{L_{t|t},\dots, L_{t+N-1|t}\}$, and ($\bar{x}_{t|t},\Sigma_{t|t}$) the
optimal solution of the S-MPC problem, the feedback control law actually used is then given by~\eqref{eq:fb_control_law_ideal} with $\bar{u}_{t}=\bar{u}_{t|t}$, $K_{t}=K_{t|t}$, and the state observer evolves as in~\eqref{eq:observer1} with $L_{t}=L_{t|t}$.\\ We define the S-MPC problem feasibility set as \begin{center} $\Xi:=\{(\bar{x}_0,\Sigma_0):\exists \bar{u}_{0,\dots, N-1},K_{0,\dots, N-1},L_{0,\dots, N-1}$ such that~\eqref{eq:mean_value_evolution_freeIC},~\eqref{eq:variance_evolution_error}, and \eqref{eq:Cantelli_ineqs} hold for all $k=0,\dots,N-1$ and \eqref{eq:term_constraint_mean}, \eqref{eq:term_constraint_variance} are verified\}\end{center} Some comments are in order. \setlength{\leftmargini}{0.5em} \begin{itemize} \item[-] At the initial time $t=0$, the algorithm must be initialized by setting $\bar{x}_{0|0}=\hat{x}_{0}$ and $\Sigma_{0|0}=\mathrm{diag}(\Sigma_{11,0},0)$. In view of this, feasibility at time $t=0$ amounts to $(\hat{x}_0,\Sigma_{0|0})\in\Xi$. \item[-] The binary choice between Strategies 1 and 2 requires the solution of two optimization problems at each time instant. However, the following sequential procedure can be adopted to reduce the average overall computational burden: the optimization problem corresponding to Strategy 1 is solved first and, if it is infeasible, Strategy 2 must be solved and adopted. If, on the contrary, it is feasible, it is possible to compare the resulting value of the optimal cost function with the value of the cost obtained using the sequences $\{\bar{u}_{t|t-1},\dots, \bar{u}_{t+N-2|t-1}, -\bar{K}\bar{x}_{t+N-1|t}\}$, $\{K_{t|t-1},\dots, K_{t+N-2|t-1}, \bar{K}\}$, $\{L_{t|t-1},\dots,$ $L_{t+N-2|t-1}, \bar{L}\}$. If the optimal cost with Strategy 1 is lower, Strategy 1 can be adopted without solving the MPC problem for Strategy 2. This does not guarantee optimality, but the convergence properties of the method stated in the result below are recovered and the computational effort is reduced.
\end{itemize} % We are now in a position to state the main result concerning the convergence properties of the algorithm. \begin{thm}\label{thm:main} If, at $t=0$, the S-MPC problem admits a solution, the optimization problem is recursively feasible and the state and input probabilistic constraints \eqref{eq:prob_constraint_state} and \eqref{eq:prob_constraint_input} are satisfied for all $t\geq 0$. Furthermore, if there exists $\rho\in(0,1)$ such that the noise variance $\Omega$ verifies \begin{align} \frac{(N+\frac{\beta}{\alpha})}{\alpha}\mathrm{tr}(S_T\Psi \Omega \Psi^T)&<\min(\rho\bar{\sigma}^2,\rho\lambda_{min}(\bar{\Sigma}))\label{eq:cons_conds_on_W} \end{align} % where $\bar{\sigma}$ is the maximum radius of a ball, centered at the origin, included in $\bar{\mathbb{X}}_F$, and \begin{subequations} \begin{align} \alpha&=\min\{\lambda_{min}(Q),\mathrm{tr}\{Q^{-1}+Q_L^{-1}\}^{-1}\}\\ \beta&=\max\{\lambda_{max}(S),\mathrm{tr}\{S_T\}\} \end{align} \label{eq:lambda_def} \end{subequations} then, as $t\rightarrow +\infty$\\ % \begin{align}\mathrm{dist}(\|\bar{x}_t\|^2+\mathrm{tr}\{\Sigma_{t|t}\},[0,\frac{1}{\alpha}(N+\frac{\beta}{\alpha})\,\mathrm{tr}(S_T\Psi \Omega \Psi^T)])\rightarrow 0\label{eq:thm_stat}\end{align}\hfill$\square$ \end{thm} Note that, as expected, as $\Omega$ becomes smaller, the asymptotic values of $\|\bar{x}_t\|$ and $\mathrm{tr}\{\Sigma_{t|t}\}$ also tend to zero. % \section{Implementation issues} \label{sec:num_implementation} The main difficulty in the solution of the S-MPC problem is due to the nonlinear constraints \eqref{eq:Cantelli_ineqs} and to the nonlinear dependence of the covariance evolution, see~\eqref{eq:variance_evolution_error}, on $K_{t,\dots, t+N-1},L_{t,\dots, t+N-1}$. This second problem can be avoided in the state-feedback case, see \cite{FGMS13_CDC}, where a reformulation based on linear matrix inequalities (LMIs) can be readily obtained.
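Before describing the two solutions, the covariance recursion underlying \eqref{eq:variance_evolution_error} and its steady state \eqref{eq:Riccati_1} can be illustrated by a minimal fixed-point sketch. The scalar values below are hypothetical stand-ins for the closed-loop matrix $\Phi$ and the noise contribution $\Psi\bar{\Omega}\Psi^T$:

```python
# Fixed-point computation of the steady-state Lyapunov equation
# Sigma = Phi Sigma Phi^T + Psi Omega Psi^T, illustrated in the scalar
# case (phi, q are hypothetical numbers; |phi| < 1 is required).
def steady_state_variance(phi, q, tol=1e-12, max_iter=10_000):
    assert abs(phi) < 1, "Phi must be asymptotically stable"
    sigma = 0.0
    for _ in range(max_iter):
        nxt = phi * sigma * phi + q   # scalar Lyapunov recursion
        if abs(nxt - sigma) < tol:
            return nxt
        sigma = nxt
    return sigma

phi, q = 0.8, 0.01                    # hypothetical closed-loop gain and noise term
sigma_bar = steady_state_variance(phi, q)
# closed form in the scalar case: q / (1 - phi^2)
assert abs(sigma_bar - q / (1 - phi**2)) < 1e-9
```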
In the output feedback case considered here, two possible solutions are described in the following.\\ Also, in Section \ref{sec:inputs} we briefly describe some possible solutions for coping with the presence of additional deterministic constraints on the input variables $u_t$. \subsection{Approximation of S-MPC allowing for a solution with LMIs} A solution based on an approximation of S-MPC involving only linear constraints is now presented. First define $A^D=\sqrt{2}A,B^D=\sqrt{2}B,C^D=\sqrt{2}C$, and $V^D=2V$ and let the auxiliary gain matrices $\bar{K}$ and $\bar{L}$ be selected according to the following assumption. \begin{assumption} \label{ass:KandL} The gains $\bar{K}$ and $\bar{L}$ are computed as the steady-state gains of the LQG regulator for the system $(A^D,B^D,C^D)$, with state and control weights $Q$ and $R$, and noise covariances $\bar{W}\succeq W$ and $\bar{V}\succeq V^D$. \end{assumption} Note that, if a gain matrix $\bar{K}$ (respectively $\bar{L}$) is stabilizing for $(A^D-B^D\bar{K})=\sqrt{2}(A-B\bar{K})$ (respectively $(A^D-\bar{L}C^D)=\sqrt{2}(A-\bar{L}C)$), it is also stabilizing for $(A-B\bar{K})$ (respectively $(A-\bar{L}C)$), i.e., for the original system. The following preliminary result can be stated.
\begin{lemma} \label{lemma:bound_var_main} Define $A^D_{L_t}=A^D-L_tC^D$, $A^D_{K_t}=A^D-B^DK_t$, the block diagonal matrix $\Sigma^D_t=\mathrm{diag}(\Sigma_{11,t}^D,\Sigma_{22,t}^D)$, $\Sigma_{11,t}^D\in \mathbb{R}^{n\times n}$, $\Sigma_{22,t}^D\in \mathbb{R}^{n\times n}$ and the update equations \begin{subequations} \begin{align} \Sigma_{11,t+1}^D=&A^D_{L_t}\Sigma_{11,t}^D(A^D_{L_t})^T+FWF^T+L_t V^D L_t^T\label{eq:sigmaD1}\\ \Sigma_{22,t+1}^D=&A^D_{K_t}\Sigma_{22,t}^D(A^D_{K_t})^T+L_tC^D\Sigma_{11,t}^DC^{D\ T}L_t^T\nonumber\\ &+L_t V^D L_t^T\label{eq:sigmaD2} \end{align} \label{eq:SigmaDupdate} \end{subequations} Then\\ I) $\Sigma^D_t\succeq \Sigma_t$ implies that $\Sigma^D_{t+1}=\mathrm{diag}(\Sigma_{11,t+1}^D,\Sigma_{22,t+1}^D)\succeq \Sigma_{t+1}$.\\ II) The following inequalities can be rewritten as LMIs \begin{subequations} \begin{align} \Sigma_{11,t+1}^D\succeq &A^D_{L_t}\Sigma_{11,t}^D(A^D_{L_t})^T+FWF^T+L_t V^D L_t^T\label{eq:sigmaD1LMI}\\ \Sigma_{22,t+1}^D \succeq & A^D_{K_t}\Sigma_{22,t}^D(A^D_{K_t})^T+L_tC^D\Sigma_{11,t}^DC^{D\ T}L_t^T\nonumber\\ &+L_t V^D L_t^T\label{eq:sigmaD2LMI} \end{align} \label{eq:SigmaDupdateLMI} \end{subequations} \end{lemma} Based on Lemma~\ref{lemma:bound_var_main}-II, we can reformulate the original problem so that the covariance matrix $\Sigma^{D}$ is used instead of $\Sigma$. Accordingly, the update equation~\eqref{eq:variance_evolution_error} is replaced by \eqref{eq:SigmaDupdate} and the S-MPC problem is recast as an LMI problem (see Appendix \ref{app:LMI}).\\ The inequalities \eqref{eq:Cantelli_ineqs} have a nonlinear dependence on the covariance matrices $X_{t}$ and $U_{t}$.
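The decoupled recursions \eqref{eq:sigmaD1}-\eqref{eq:sigmaD2} are easy to iterate once the gains are fixed. The following is a scalar sketch with constant gains; all numerical values are hypothetical and chosen only so that the doubled closed-loop scalars are stable:

```python
import math

# Scalar sketch of the decoupled covariance recursions (eq:SigmaDupdate):
#   S11' = (aD_L)^2 S11 + F^2 W + L^2 VD
#   S22' = (aD_K)^2 S22 + (L cD)^2 S11 + L^2 VD
# with aD_L = sqrt(2)(a - L c), aD_K = sqrt(2)(a - b K), cD = sqrt(2) c.
a, b, c, F = 0.9, 1.0, 1.0, 1.0       # hypothetical system scalars
L, K = 0.5, 0.5                       # constant observer/control gains
W, VD = 0.01, 0.0002                  # VD = 2 V
aD_L = math.sqrt(2) * (a - L * c)     # must satisfy |aD_L| < 1
aD_K = math.sqrt(2) * (a - b * K)
cD = math.sqrt(2) * c

s11, s22 = 0.0, 0.0
for _ in range(300):
    s11, s22 = (aD_L**2 * s11 + F**2 * W + L**2 * VD,
                aD_K**2 * s22 + (L * cD)**2 * s11 + L**2 * VD)

# steady states of the two recursions, computed in closed form
s11_bar = (F**2 * W + L**2 * VD) / (1 - aD_L**2)
s22_bar = ((L * cD)**2 * s11_bar + L**2 * VD) / (1 - aD_K**2)
assert abs(s11 - s11_bar) < 1e-9 and abs(s22 - s22_bar) < 1e-9
```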
It is possible to prove that \eqref{eq:Cantelli_ineqs} are satisfied if \begin{subequations}\label{eq:Cantelli_ineqs_lin} \begin{align} b_r^T\bar{x}_{t}&\leq (1-0.5\alpha_x)x_r^{ max}-\frac{b_r^T X_{t}b_r}{2\alpha_x x_r^{ max}} f(p_r^x)^2 \label{eq:linear_constraint_state}\\ c_s^T\bar{u}_{t}&\leq (1-0.5\alpha_u)u_s^{ max}-\frac{c_s^TU_{t}c_s}{2\alpha_u u_s^{ max}} f(p_s^u)^2\label{eq:linear_constraint_input} \end{align}\end{subequations} with $r=1,\dots, n_r$ and $s=1,\dots, n_s$, where $\alpha_x\in(0, 1]$ and $\alpha_u\in(0, 1]$ are free design parameters. Also, note that $X_{t}\preceq\begin{bmatrix} I&I \end{bmatrix}\Sigma_{t}^D\begin{bmatrix} I\\I \end{bmatrix}= \Sigma_{11,t}^D+\Sigma_{22,t}^D$ and that $U_{t}\preceq\begin{bmatrix} 0&K_{t} \end{bmatrix}\Sigma_{t}^D\begin{bmatrix} 0\\K_{t}^T \end{bmatrix}=K_{t}\Sigma_{22,t}^DK_{t}^T$ so that, defining $X_{t}^D=\Sigma_{11,t}^D+ \Sigma_{22,t}^D$ and $U_{t}^D=K_{t}\Sigma_{22,t}^DK_{t}^T$,~\eqref{eq:Cantelli_ineqs_lin} can be written as follows \begin{subequations}\label{eq:Cantelli_ineqs_linL} \begin{align} b_r^T\bar{x}_{t}&\leq (1-0.5\alpha_x)x_r^{ max}-\frac{b_r^TX_{t}^D b_r}{2\alpha_x x_r^{ max}} f(p_r^x)^2\label{eq:linear_constraint_stateL}\\ c_s^T\bar{u}_{t}&\leq (1-0.5\alpha_u)u_s^{ max}-\frac{c_s^TU_{t}^D c_s}{2\alpha_u u_s^{ max}} f(p_s^u)^2\label{eq:linear_constraint_inputL} \end{align}\end{subequations} Note that the reformulation of~\eqref{eq:Cantelli_ineqs} into \eqref{eq:Cantelli_ineqs_linL} has been performed at the price of additional constraint tightening. For example, on the right-hand side of \eqref{eq:linear_constraint_stateL}, $x_r^{max}$ is replaced by $(1-0.5\alpha_x)x_r^{max}$, which significantly reduces the size of the constraint set.
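The validity of the tightening step behind \eqref{eq:Cantelli_ineqs_lin} rests on the arithmetic-geometric mean inequality: $0.5\,\alpha_x x^{max} + b^T X b\, f(p)^2/(2\alpha_x x^{max}) \geq \sqrt{b^T X b}\, f(p)$, so the linearized right-hand side never exceeds the original one. A quick randomized spot-check (all sampled values are arbitrary):

```python
import math, random

# Spot-check: by AM-GM,
#   0.5*alpha*x_max + v*f^2/(2*alpha*x_max) >= sqrt(v)*f,
# hence the linearized right-hand side is always at most the original one,
# and the linearized constraint implies the original (Cantelli) constraint.
random.seed(0)
for _ in range(10_000):
    x_max = random.uniform(0.1, 10.0)
    v = random.uniform(0.0, 10.0)      # stands for b^T X b
    fp = random.uniform(0.0, 10.0)     # stands for f(p)
    alpha = random.uniform(0.01, 1.0)
    rhs_original = x_max - math.sqrt(v) * fp
    rhs_linear = (1 - 0.5 * alpha) * x_max - v * fp**2 / (2 * alpha * x_max)
    assert rhs_linear <= rhs_original + 1e-9
```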
Note that the parameter $\alpha_x$ cannot be reduced at will, since it also appears in the denominator of the second additive term.\\ In view of Assumption~\ref{ass:KandL} and resorting to the separation principle, it is possible to show \cite{glad2000control} that the solution $\bar{\Sigma}^D$ to the steady-state equation \begin{align} \bar{\Sigma}^D=& \Phi^D \bar{\Sigma}^D (\Phi^D)^T+\Psi\bar{\Omega}\Psi^T \label{eq:Riccati_1L} \end{align} is block diagonal, i.e., $\bar{\Sigma}^D=\mathrm{diag}(\bar{\Sigma}^D_{11},\bar{\Sigma}_{22}^D)$, where $$\Phi^D=\begin{bmatrix}A^D-\bar{L}C^D&0\\\bar{L}C^D&A^D-B^D\bar{K}\end{bmatrix}$$ The terminal constraint \eqref{eq:term_constraint_variance} must be transformed into $\Sigma_{t+N}^D\preceq \bar{\Sigma}^D$, which corresponds to setting \begin{equation} \begin{array}{lcllcl} \Sigma_{11,t+N}^D&\preceq \bar{\Sigma}^D_{11},\quad& \Sigma_{22,t+N}^D&\preceq \bar{\Sigma}^D_{22} \end{array} \label{eq:term_constraints_varianceL} \end{equation} Defining $\bar{X}^D=\bar{\Sigma}^D_{11}+\bar{\Sigma}^D_{22}$ and $\bar{U}^D=\bar{K}\bar{\Sigma}^D_{22}\bar{K}^T$, the terminal set condition \eqref{eq:linear_constraint_finalc} must now be reformulated as \begin{subequations} \begin{align} b_r^T\bar{x}&\leq (1-0.5\alpha_x)x_r^{ max}-\frac{b_r^T\bar{X}^D b_r}{2\alpha_x x_r^{ max}} f(p_r^x)^2 \label{eq:linear_constraint_state_finalL}\\ -c_s^T\bar{K}\bar{x}&\leq (1-0.5\alpha_u)u_s^{ max}-\frac{c_s^T \bar{U}^Dc_s}{2\alpha_u u_s^{ max}} f(p_s^u)^2 \label{eq:linear_constraint_input_finalL} \end{align} \end{subequations} for all $r=1,\dots, n_r$, $s=1,\dots, n_s$, and for all $\bar{x}\in\bar{\mathbb{X}}_F$.\\ Also $J_v$ must be reformulated.
Indeed \begin{equation} \begin{array}{ll} J_v\leq J_v^D&=\sum\limits_{i=t}^{t+N-1} \mathrm{tr}\left\{Q_L\Sigma_{11,i}^D+Q\Sigma_{22,i}^D+RK_i\Sigma_{22,i}^DK_i^T\right\}\\ &+\mathrm{tr}\left\{S_L\Sigma_{11,t+N}^D+S\Sigma_{22,t+N}^D\right\} \end{array} \label{eq:variance_cost_functionL} \end{equation} where the terminal weights $S$ and $S_L$ must now satisfy the following Lyapunov-like inequalities \begin{equation} \begin{array}{l} (\bar{A}^D_K)^T S \bar{A}^D_K-S+Q+\bar{K}^TR\bar{K}\preceq 0\\ (\bar{A}^D_L)^T S_L \bar{A}^D_L-S_L+Q_L+(C^D)^T\bar{L}^T S \bar{L}C^D\preceq 0 \end{array} \label{eq:Lyap_S_L} \end{equation} where $\bar{A}^D_K=A^D-B^D\bar{K}$ and $\bar{A}^D_L=A^D-\bar{L}C^D$. It is now possible to formally state the S-MPCl problem.\\\\ \textbf{S-MPCl problem: }at any time instant $t$ solve $$\min_{\bar{x}_{t},\Sigma^D_{11,t},\Sigma^D_{22,t},\bar{u}_{t, \dots, t+N-1},K_{t,\dots, t+N-1},L_{t,\dots, t+N-1}} J$$ where $J$ is defined in \eqref{eq:JTOT}, \eqref{eq:mean_cost_function}, \eqref{eq:variance_cost_functionL}, subject to \begin{itemize} \item[-] the dynamics~\eqref{eq:mean_value_evolution_freeIC} and~\eqref{eq:SigmaDupdate}; \item[-] the linear constraints~\eqref{eq:Cantelli_ineqs_linL} for all $k=0,\dots,N-1$; \item[-] the initialization constraint, corresponding to the choice between Strategies 1 and 2, i.e., $(\bar{x}_{t},\Sigma^D_{11,t},\Sigma_{22,t}^D)\in\{(\hat x_t,\Sigma^D_{11,t|t-1},0),(\bar{x}_{t|t-1},\Sigma^D_{11,t|t-1},\Sigma_{22,t|t-1}^D)\}$ \item[-] the terminal constraints~\eqref{eq:term_constraint_mean},~\eqref{eq:term_constraints_varianceL}. \end{itemize} \hfill$\square$\\ The following corollary follows from Theorem~\ref{thm:main}. \begin{corollary}\label{cor:LMI_soluz} If, at time $t=0$, the S-MPCl problem admits a solution, the optimization problem is recursively feasible and the state and input probabilistic constraints \eqref{eq:prob_constraint_state} and \eqref{eq:prob_constraint_input} are satisfied for all $t\geq 0$. 
Furthermore, if there exists $\rho\in(0,1)$ such that the noise variance $\Omega^D=\mathrm{diag}(W,V^D)$ verifies \begin{align} \frac{(N+\frac{\beta}{\alpha})}{\alpha}\mathrm{tr}(S_T\Psi \Omega^D \Psi^T)&<\min(\rho\bar{\sigma}^2,\rho\lambda_{min}(\bar{\Sigma}^D))\label{eq:cons_conds_on_WL} \end{align} then, as $t\rightarrow +\infty$ $$\mathrm{dist}(\|\bar{x}_t\|^2+\mathrm{tr}\{\Sigma_{t|t}^D\},[0,\frac{1}{\alpha}(N+\frac{\beta}{\alpha})\,\mathrm{tr}(S_T\Psi \Omega^D \Psi^T)])\rightarrow 0$$ \hfill$\square$ \end{corollary} \subsection{Approximation of S-MPC with constant gains} \label{sec:num_implementation_constant} The solution presented in this section is very simple and consists of setting $L_t=\bar{L}$ and $K_t=\bar{K}$ for all $t\geq 0$. In this case, the value of $\Sigma_{t+k}$ (and therefore of $X_{t+k}$ and $U_{t+k}$) can be directly computed for any $k>0$ by means of~\eqref{eq:variance_evolution_error} as soon as $\Sigma_t$ is given. As a byproduct, the nonlinearity in the constraints \eqref{eq:Cantelli_ineqs} does not cause implementation problems. Therefore, this solution has a twofold advantage: first, it is simple and requires an extremely lightweight implementation; second, it allows for the use of the less conservative nonlinear constraint formulations.
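With constant gains, the covariance trajectory over the prediction horizon is a known function of $\Sigma_t$ alone and can be precomputed before the optimization. A scalar sketch (the values of $\phi$ and $q$ are hypothetical stand-ins for $\Phi$ and $\Psi\Omega\Psi^T$):

```python
# Precompute the covariance trajectory Sigma_{t}, ..., Sigma_{t+N} from
# Sigma_t via Sigma_{k+1} = Phi Sigma_k Phi^T + Psi Omega Psi^T
# (scalar sketch; phi and q are hypothetical numbers).
def covariance_trajectory(sigma0, phi, q, N):
    out = [sigma0]
    for _ in range(N):
        out.append(phi**2 * out[-1] + q)
    return out

phi, q = 0.8, 0.01
traj = covariance_trajectory(0.5, phi, q, 9)
assert len(traj) == 10
# starting above the steady state q/(1-phi^2), the trajectory decreases
# monotonically toward it without ever crossing below
assert all(traj[i + 1] < traj[i] for i in range(9))
assert traj[-1] > q / (1 - phi**2)
```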
In this simplified framework, the following S-MPCc problem can be stated.\\\\ \textbf{S-MPCc problem: }at any time instant $t$ solve $$\min_{\bar{x}_{t},\Sigma_{t},\bar{u}_{t, \dots, t+N-1}} J$$ where $J$ is defined in \eqref{eq:JTOT}, \eqref{eq:mean_cost_function}, \eqref{eq:variance_cost_function1}, subject to \begin{itemize} \item[-] the dynamics~\eqref{eq:mean_value_evolution_freeIC}, with $K_t=\bar{K}$, and \begin{equation} \Sigma_{t+1}=\Phi\Sigma_{t}\Phi^T+\Psi\Omega\Psi^T \end{equation} \item[-] the constraints~\eqref{eq:Cantelli_ineqs} for all $k=0,\dots,N-1$; \item[-] the initialization constraint \eqref{eq:reset_constraint}; \item[-] the terminal constraints~\eqref{eq:term_constraint_mean}, \eqref{eq:term_constraint_variance}. \end{itemize} \hfill$\square$\\ An additional remark is in order. The term $J_v$ in~\eqref{eq:variance_cost_function} does not depend only on the control and observer gain sequences $K_{t,\dots, t+N-1}$, $L_{t,\dots, t+N-1}$, but also on the initial condition $\Sigma_t$. Therefore, it is not possible to discard it in this simplified formulation.\\ The following corollary can be derived from Theorem~\ref{thm:main}. \begin{corollary}\label{cor:const_gains} If, at time $t=0$, the S-MPCc problem admits a solution, the optimization problem is recursively feasible and the state and input probabilistic constraints \eqref{eq:prob_constraint_state} and \eqref{eq:prob_constraint_input} are satisfied for all $t\geq 0$. Furthermore, if there exists $\rho\in(0,1)$ such that the noise variance $\Omega$ verifies~\eqref{eq:cons_conds_on_W}, then, as $t\rightarrow +\infty$ $$\mathrm{dist}(\|\bar{x}_t\|^2+\mathrm{tr}\{\Sigma_{t|t}\},[0,\frac{1}{\alpha}(N+\frac{\beta}{\alpha})\,\mathrm{tr}(S_T\Psi \Omega \Psi^T)])\rightarrow 0$$ \hfill$\square$ \end{corollary} \subsection{Boundedness of the input variables} \label{sec:inputs} The S-MPC scheme described in the previous sections does not guarantee the satisfaction of hard constraints on the input variables.
However, in practice the input variables may be required to satisfy hard bounds of the form \begin{align} Hu_t\leq \mathbf{1} \label{eq:input_constr} \end{align} where $H\in\mathbb{R}^{n_H\times m}$ is a design matrix and $\mathbf{1}$ is a vector of dimension $n_H$ whose entries are equal to $1$. Three possible approaches are proposed to account for these inequalities.\\ 1) Inequalities \eqref{eq:input_constr} can be stated as additional probabilistic constraints~\eqref{eq:prob_constraint_input} with small violation probabilities $p_s^u$. This solution, although not guaranteeing satisfaction of~\eqref{eq:input_constr} with probability 1, is simple and easy to apply.\\ 2) If the S-MPCc scheme is used, define the gain matrix $\bar{K}$ in such a way that $A-B\bar{K}$ is asymptotically stable and, at the same time, $H\bar{K}=0$. From \eqref{eq:fb_control_law_ideal}, it follows that $Hu_t=H\bar{u}_t+H\bar{K}(\hat{x}_t-\bar{x}_t)=H\bar{u}_t$. Therefore, to verify \eqref{eq:input_constr} it is sufficient to include in the problem formulation the deterministic constraint $H\bar{u}_t\leq \mathbf{1}$.\\ 3) In an S-MPCc scheme, if probabilistic constraints on the input variables are absent, we can replace \eqref{eq:fb_control_law_ideal} with $u_t=\bar{u}_t$ and set $H\bar{u}_t\leq \mathbf{1}$ in the S-MPC optimization problem to verify \eqref{eq:input_constr}. If we also define $\hat{u}_t=\bar{u}_t-\bar{K}(\hat{x}_t-\bar{x}_t)$ as the input to equation \eqref{eq:observer1}, the dynamics of the variable $\sigma_t$ are given by \eqref{eq:error_matrix} with $$\Phi_t=\begin{bmatrix}A-\bar{L}C&B\bar{K}\\ \bar{L}C&A-B\bar{K}\end{bmatrix}$$ and the arguments proceed similarly to those developed in the paper. It is worth mentioning, however, that the matrix $\Phi_t$ must be asymptotically stable, which requires asymptotic stability of $A$.
\section{Examples} \label{sec:example} In this section, a comparison between the proposed method and the well-known robust tube-based MPC is first discussed. Then, the approximations described in Section \ref{sec:num_implementation} are discussed with reference to a numerical example. \subsection{Simple analytic example: comparison between the probabilistic and the deterministic robust MPC} \label{app:example_constrs} Consider the scalar system $$x_{t+1}=a x_t+u_t+w_t$$ where $0<a<1$, $w\in[-w_{max},w_{max}]$, $w_{max}>0$, and the measurable state is constrained as follows \begin{align}x_t\leq x_{max}\label{eq:ex_det_constr} \end{align} The limitations imposed by the deterministic robust MPC algorithm developed in \cite{mayne2005robust} and by the probabilistic (state-feedback) method described in this paper are now compared. For both algorithms, the control law $u_t=\bar{u}_t$ is considered, where $\bar{u}$ is the input of the nominal/average system $\bar{x}_{t+1}=a\bar{x}_{t}+\bar{u}_{t}$ with state $\bar{x}$. Note that this control law is equivalent to \eqref{eq:fb_control_law_ideal}, where for simplicity it has been set $K_{t}=0$.\\ In the probabilistic approach, we allow the constraint \eqref{eq:ex_det_constr} to be violated with probability $p$, i.e., \begin{align}\mathbb{P}\{x\geq x_{ max}\}\leq p\label{eq:ex_prob_constr} \end{align} To verify \eqref{eq:ex_det_constr} and \eqref{eq:ex_prob_constr}, respectively, the tightened constraint $\bar{x}_k\leq x_{max}-\Delta x$ must be fulfilled in both approaches where, in the case of \cite{mayne2005robust}, $\Delta x =\Delta x_{RPI}=\sum_{i=0}^{+\infty}a^iw_{max}=\frac{1}{1-a} w_{max}$ while, having defined $w$ as a stochastic process with zero mean and variance $W$, in the probabilistic framework $\Delta x=\Delta x_{S}(p) =\sqrt{X(1-p)/p}$, and $X$ is the steady-state variance satisfying the algebraic equation $X=a^2X+W$, i.e., $X=W/(1-a^2)$.
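The comparison between the two tightening terms admits a quick numerical sanity check. Below, $a$, $w_{max}$ and $b$ are hypothetical values, with $W=w_{max}^2/b$; the crossover probability at which the two tightenings coincide is computed in closed form:

```python
import math

# Compare Delta_x_RPI = w_max/(1-a) (robust) with
# Delta_x_S(p) = sqrt(X (1-p)/p), X = W/(1-a^2) (probabilistic).
a, w_max, b = 0.9, 1.0, 3.0           # hypothetical numbers
W = w_max**2 / b
X = W / (1 - a**2)
dx_rpi = w_max / (1 - a)

def dx_s(p):
    return math.sqrt(X * (1 - p) / p)

# the two tightenings coincide at p* = (1-a)^2 / (b(1-a^2) + (1-a)^2)
p_star = (1 - a)**2 / (b * (1 - a**2) + (1 - a)**2)
assert abs(dx_s(p_star) - dx_rpi) < 1e-6
# for p above p* the probabilistic tightening is less conservative
assert dx_s(2 * p_star) < dx_rpi < dx_s(0.5 * p_star)
```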
Notably, $W$ takes different values depending upon the noise distribution.\\ It follows that the deterministic tightened constraint is more conservative provided that $\Delta x_{S}(p)<\Delta x_{RPI}$, i.e. \begin{equation}p>\frac{(1-a)^2}{b(1-a^2)+(1-a)^2}\label{eq:ex_pbound}\end{equation} Consider now the distributions depicted in Figure \ref{fig:distrs} with $W=w_{max}^2/b$, where\\ Case A) $b=3$ for the uniform distribution;\\ Case B) $b=18$ for the triangular distribution;\\ Case C) $b=25$ for the truncated Gaussian distribution.\\ \begin{figure} \centering \includegraphics[width=0.6\linewidth]{distrs} \caption{Distributions: uniform (case A, solid line), triangular (case B, dashed line), truncated Gaussian (case C, dotted line).} \label{fig:distrs} \end{figure} Setting, for example, $a=0.9$, condition \eqref{eq:ex_pbound} is verified for $p>0.0172$ in case A), $p>0.0029$ in case B), and $p>0.0021$ in case C). Note that, although formally truncated, the distribution in case C) can be well approximated by a non-truncated Gaussian distribution: if this information were available, one could use $\Delta x_S(p)=\sqrt{X}\,\mathcal{N}^{-1}(1-p)$ for constraint tightening, and in this case $\Delta x_{S}(p)<\Delta x_{RPI}$ would be verified with $p>1-\mathcal{N}\left(\sqrt{\frac{(1-a^2)b}{(1-a)^2}}\right)\simeq 0$. \subsection{Simulation example} The example shown in this section is inspired by \cite{mayne2005robust}. We take $$A=\begin{bmatrix}1&1\\0&1\end{bmatrix}, B=\begin{bmatrix}0.5\\1\end{bmatrix}$$ $F=I_2$, and $C=I_2$. We assume that Gaussian noise affects the system, with $W=0.01I_2$ and $V=10^{-4}I_2$. The chance constraints are $\mathbb{P}\{x_{2}\geq 2\}\leq 0.1$, $\mathbb{P}\{u\geq 1\}\leq 0.1$, and $\mathbb{P}\{-u\geq 1\}\leq 0.1$.
In \eqref{eq:JTOT}, \eqref{eq:mean_cost_function}, and \eqref{eq:variance_cost_function} we set $Q_L=Q=I_2$, $R=0.01$, and $N=9$.\\ In Figure \ref{fig:sets} we compare the feasible sets obtained with the different methods presented in Section \ref{sec:num_implementation}, with different assumptions concerning the noise (namely S-MPCc (1), S-MPCc (2), S-MPCl (1), S-MPCl (2), where (1) denotes the case of Gaussian distribution and (2) denotes the case when the distribution is unknown). As can be seen, in view of the linearization of the constraints (see the discussion after \eqref{eq:Cantelli_ineqs_linL}), the S-MPCl algorithm turns out to be more conservative than S-MPCc. On the other hand, concerning the dimension of the obtained feasibility set, in this case the use of the Chebyshev--Cantelli inequality does not entail a dramatic degradation in terms of conservativeness.\\ \begin{figure} \centering \includegraphics[width=1\linewidth]{sets} \caption{Plots of the feasibility sets for S-MPCc (1), S-MPCc (2), S-MPCl (1), S-MPCl (2)} \label{fig:sets} \end{figure} A 200-run Monte Carlo simulation campaign has been carried out to test the probabilistic properties of the algorithm, with initial conditions $(5,-1.5)$. The fact that the control and estimation gains are free variables improves the transient behaviour of the state responses in the case of S-MPCl and reduces the variance of the dynamic state response (at the price of a more reactive input response), with respect to the case when S-MPCc is used. For example, the maximum variance of $x_1(k)$ (resp. of $x_2(k)$) is about $0.33$ (resp. $0.036$) in the case of S-MPCc (1) and (2), while it is about $0.25$ (resp. $0.035$) in the case of S-MPCl (1) and (2). On the other hand, the maximum variance of $u(k)$ is about $0.006$ in the case of S-MPCc, while it is $0.008$ in the case of S-MPCl.
\section{Conclusions} \label{sec:conclusions} The main properties of the proposed probabilistic MPC algorithm lie in its simplicity and in its lightweight computational load, both in the off-line design phase and in the online implementation. This allows for the application of the S-MPC scheme to medium/large-scale problems, for general systems affected by disturbances with general distributions.\\ Future work will focus on the use of the proposed scheme in challenging control problems, such as the control of micro-grids in the presence of stochastic renewable energy sources. The application of the algorithm to linear time-varying systems is envisaged, while its extension to distributed implementations is currently underway. \section*{Acknowledgements} We are indebted to Bruno Picasso for fruitful discussions and suggestions.
\section{Introduction} Fix $n\in \mathbb{N}$. The input of the {\em Sparsest Cut Problem} consists of two $n$ by $n$ symmetric matrices with nonnegative entries $C=(C_{ij}),D=(D_{ij})\in M_n([0,\infty))$, which are commonly called capacities and demands, respectively. The goal is to design a polynomial-time algorithm to evaluate the quantity \begin{equation}\label{eq:def opt} \mathsf{OPT}(C,D)\stackrel{\mathrm{def}}{=} \min_{\emptyset\subsetneq A\subsetneq \{1,\ldots,n\}}\frac{\sum_{(i,j)\in A\times (\{1,\ldots,n\}\setminus A)}C_{ij}}{\sum_{(i,j)\in A\times (\{1,\ldots,n\}\setminus A)}D_{ij}}. \end{equation} In view of the extensive literature on the Sparsest Cut Problem, it would be needlessly repetitive to recount here the rich and multifaceted impact of this optimization problem on computer science and mathematics; see instead the articles~\cite{AKRR90,LR99}, the surveys~\cite{Shm95,Lin02,Chawla08,Nao10}, Chapter~10 of the monograph~\cite{DL97}, Chapter~15 of the monograph~\cite{Mat02}, Chapter~1 of the monograph~\cite{Ost13}, and the references therein. It suffices to say that by tuning the choice of matrices $C,D$ to the problem at hand, the minimization in~\eqref{eq:def opt} finds a partition of the ``universe'' $\{1,\ldots,n\}$ into two parts, namely the sets $A$ and $\{1,\ldots,n\}\setminus A$, whose appropriately weighted interface is as small as possible, thus allowing for inductive solutions of various algorithmic tasks, a procedure known as {\em divide and conquer}. (Not all of the uses of the Sparsest Cut Problem fit into this framework. A recent algorithmic application of a different nature can be found in~\cite{MMV14}.) It is $NP$-hard to compute $\mathsf{OPT}(C,D)$~\cite{SM90}. By~\cite{CK09-hardness} there exists $\varepsilon_0>0$ such that it is even $NP$-hard to compute $\mathsf{OPT}(C,D)$ within a multiplicative factor of less than $1+\varepsilon_0$.
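As a concrete illustration of the quantity in~\eqref{eq:def opt}, the following sketch evaluates $\mathsf{OPT}(C,D)$ by brute-force enumeration of all nontrivial cuts (exponential in $n$, so for illustration only; the hardness results above explain why no efficient exact algorithm is expected):

```python
from itertools import combinations

# Brute-force evaluation of OPT(C, D): minimize the ratio of the
# capacity to the demand crossing the cut (A, complement of A),
# over all nonempty proper subsets A of {0, ..., n-1}.
def sparsest_cut(C, D):
    n = len(C)
    best = float("inf")
    for k in range(1, n):
        for A in combinations(range(n), k):
            Aset = set(A)
            comp = [j for j in range(n) if j not in Aset]
            cap = sum(C[i][j] for i in Aset for j in comp)
            dem = sum(D[i][j] for i in Aset for j in comp)
            if dem > 0:
                best = min(best, cap / dem)
    return best

# Tiny example: a 3-cycle with unit capacities and all-pairs unit demands;
# every cut isolates one vertex (capacity 2, demand 2), so OPT = 1.
C = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
D = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
assert sparsest_cut(C, D) == 1.0
```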
If one assumes Khot's Unique Games Conjecture~\cite{Kho02,Kho10,Tre12} then by~\cite{CKKRS06,KV15} there does not exist a polynomial-time algorithm that can compute $\mathsf{OPT}(C,D)$ within any universal constant factor. By the above hardness results, a much more realistic goal would be to design a polynomial-time algorithm that takes as input the capacity and demand matrices $C,D\in M_n([0,\infty))$ and outputs a number $\mathsf{ALG}(C,D)$ that is guaranteed to satisfy $$\mathsf{ALG}(C,D)\le \mathsf{OPT}(C,D)\le \rho(n)\mathsf{ALG}(C,D),$$ with (hopefully) the quantity $\rho(n)$ growing to $\infty$ slowly as $n\to \infty$. Determining the best possible asymptotic behaviour of $\rho(n)$ (assuming $P\neq NP$) is an open problem of major importance. In~\cite{LLR95,AR98} an algorithm was designed, based on linear programming (through the connection to multicommodity flows) and Bourgain's embedding theorem~\cite{Bou85}, which yields $\rho(n)=O(\log n)$. An algorithm based on semidefinite programming (to be described precisely below) was proposed by Goemans and Linial in the mid-1990s. To the best of our knowledge this idea first appeared in the literature in~\cite[page~158]{Goe97}, where it was speculated that it might yield a constant factor approximation for Sparsest Cut (see also~\cite{Lin02,Lin-open}). In what follows, we denote the approximation ratio of the Goemans--Linial algorithm on inputs of size at most $n$ by $\rho_{\mathsf{GL}}(n)$. The hope that $\rho_{\mathsf{GL}}(n)=O(1)$ was dashed in the remarkable work~\cite{KV15}, where the lower bound $\rho_{\mathsf{GL}}(n)\gtrsim \sqrt[6]{\log\log n}$ was proven.\footnote{Here, and in what follows, we use the following (standard) asymptotic notation. Given $a,b>0$, the notations $a\lesssim b$ and $b\gtrsim a$ mean that $a\le \mathsf{K}b$ for some universal constant $\mathsf{K}>0$. The notation $a\asymp b$ stands for $(a\lesssim b) \wedge (b\lesssim a)$. 
Thus $a\lesssim b$ and $a\gtrsim b$ are the same as $a=O(b)$ and $a=\Omega(b)$, respectively, and $a\asymp b$ is the same as $a=\Theta(b)$.} An improved analysis of the ideas of~\cite{KV15} was conducted in~\cite{KR09}, yielding the estimate $\rho_{\mathsf{GL}}(n)\gtrsim \log\log n$. An entirely different approach based on the geometry of the Heisenberg group was introduced in~\cite{LN06}. In combination with the important works~\cite{CK10,CheegerKleinerMetricDiff} it gives a different proof that $\lim_{n\to \infty}\rho_{\mathsf{GL}}(n)=\infty$. In~\cite{CKN09,CKN} the previously best-known bound $\rho_{\mathsf{GL}}(n)\gtrsim (\log n)^\delta$ was obtained for an effective (but small) positive universal constant $\delta$. Despite these lower bounds, the Goemans--Linial algorithm yields an approximation ratio of $o(\log n)$, so it is asymptotically more accurate than the linear program of~\cite{LLR95,AR98}. Specifically, in~\cite{CGR08} it was shown that $\rho_{\mathsf{GL}}(n)\lesssim (\log n)^{\frac34}$. This was improved in~\cite{ALN08} to $\rho_{\mathsf{GL}}(n)\lesssim (\log n)^{\frac12+o(1)}$. See Section~\ref{sec:previous} below for additional background on the results quoted above. No other polynomial-time algorithm for the Sparsest Cut problem is known (or conjectured) to have an approximation ratio that is asymptotically better than that of the Goemans--Linial algorithm. However, despite major scrutiny by researchers in approximation algorithms, the asymptotic behavior of $\rho_{\mathsf{GL}}(n)$ as $n\to \infty$ remained unknown. Theorem~\ref{thm:main GL lower intro} below resolves this question up to lower-order factors. \begin{thm}\label{thm:main GL lower intro} The approximation ratio of the Goemans--Linial algorithm satisfies $\rho_{\mathsf{GL}}(n)\gtrsim \sqrt{\log n}$. \end{thm} \subsection{The SDP relaxation} The Goemans--Linial algorithm is simple to describe.
It takes as input the symmetric matrices $C,D\in M_n([0,\infty))$ and proceeds to compute the following quantity. \begin{equation*}\label{eq:def sdp} \mathsf{SDP}(C,D)\stackrel{\mathrm{def}}{=} \inf_{(v_1,\ldots,v_n)\in \mathsf{NEG}_n} \frac{\sum_{i=1}^n\sum_{j=1}^nC_{ij}\|v_i-v_j\|_{2}^2}{\sum_{i=1}^n\sum_{j=1}^nD_{ij}\|v_i-v_j\|_{2}^2}, \end{equation*} where \begin{align*} \mathsf{NEG}_n&\stackrel{\mathrm{def}}{=} \Big\{(v_1,\ldots,v_n)\in (\mathbb{R}^n)^n:\\ &\qquad\qquad \|v_i-v_j\|_2^2\le \|v_i-v_k\|_2^2+\|v_k-v_j\|_2^2\\ &\qquad\qquad\mathrm{for\ all\ } i,j,k\in \{1,\ldots,n\}\Big\}. \end{align*} Thus $\mathsf{NEG}_n$ is the set of $n$-tuples $(v_1,\ldots,v_n)$ of vectors in $\mathbb{R}^n$ such that $(\{v_1,\ldots,v_n\},\upnu_n)$ is a semi-metric space, where $\upnu_n:\mathbb{R}^n\times \mathbb{R}^n\to [0,\infty)$ is defined by $\upnu_n(x,y)=\sum_{j=1}^n (x_j-y_j)^2=\|x-y\|_2^2$ for every $x=(x_1,\ldots,x_n),y=(y_1,\ldots,y_n)\in \mathbb{R}^n$. A semi-metric space $(X,d_X)$ is said~\cite{DL97} to be of {\em negative type} if $(X,\sqrt{d_X})$ embeds isometrically into a Hilbert space. So, $\mathsf{NEG}_n$ can be described as the set of all (ordered) negative type semi-metrics of size $n$. It is simple to check that the evaluation of the quantity $\mathsf{SDP}(C,D)$ can be cast as a semidefinite program (SDP), so it can be achieved (up to $o(1)$ precision) in polynomial time~\cite{GLS93}. One has $\mathsf{SDP}(C,D)\le \mathsf{OPT}(C,D)$ for all symmetric matrices $C,D\in M_n([0,\infty))$. See e.g.~\cite[Section~15.9]{Mat02-book} or~\cite[Section~4.3]{Nao10} for an explanation of the above assertions about $\mathsf{SDP}(C,D)$, as well as additional background and motivation. The pertinent question is therefore to evaluate the asymptotic behavior as $n\to \infty$ of the sequence $$ \rho_{\mathsf{GL}}(n)\stackrel{\mathrm{def}}{=} \sup_{\substack{C,D\in M_n([0,\infty))\\ C,D\ \mathrm{symmetric}}} \frac{\mathsf{OPT}(C,D)}{\mathsf{SDP}(C,D)}.
$$ This is the quantity $\rho_{\mathsf{GL}}(n)$ appearing in Theorem~\ref{thm:main GL lower intro}, also known as the {\em integrality gap} of the Goemans--Linial semidefinite programming relaxation for the Sparsest Cut Problem. \subsection{Bi-Lipschitz embeddings}\label{sec:embed intro} A duality argument of Rabinovich (see~\cite[Lemma~4.5]{Nao10} or~\cite[Section~1]{CKN09}) establishes that $\rho_{\mathsf{GL}}(n)$ is equal to the largest possible $L_1$-distortion of an $n$-point semi-metric of negative type. If $d:\{1,\ldots,n\}^2\to [0,\infty)$ is a semi-metric, its $L_1$ distortion, denoted $c_1(\{1,\ldots,n\},d)$, is the smallest $D\in [1,\infty)$ for which there are integrable functions\footnote{If one wishes to use finite-dimensional vectors rather than functions then by~\cite{Wit86} there exist $v_1,\ldots, v_n\in \mathbb{R}^{n(n-1)/2}$ such that $\int_0^1|f_i(t)-f_j(t)|\ud t=\|v_i-v_j\|_1=\sum_{k=1}^{n(n-1)/2} |v_{ik}-v_{jk}|$ for every $i,j\in \{1,\ldots,n\}$.} $f_1,\ldots,f_n:[0,1]\to \mathbb{R}$ such that $\int_0^1|f_i(t)-f_j(t)|\ud t\le d(i,j)\le D\int_0^1|f_i(t)-f_j(t)|\ud t$ for every $i,j\in \{1,\ldots,n\}$. Rabinovich's duality argument proves that $\rho_{\mathsf{GL}}(n)$ is equal to the maximum of $c_1(\{1,\ldots,n\},d)$ over all possible semi-metrics $d$ of negative type on $\{1,\ldots,n\}$. Hence, Theorem~\ref{thm:main GL lower intro} is equivalent to the assertion that for every $n\in \mathbb{N}$ there exists a metric of negative type $d:\{1,\ldots,n\}^2\to [0,\infty)$ for which $c_1(\{1,\ldots,n\},d)\gtrsim \sqrt{\log n}$. \subsection{A poorly-embeddable metric} The $5$-dimensional discrete Heisenberg group, denoted $\H_\mathbb{Z}^5$, is the following group of $4$ by $4$ invertible matrices, equipped with the usual matrix multiplication.
\begin{equation}\label{eq:def H5} \H_\mathbb{Z}^5\stackrel{\mathrm{def}}{=} \left\{\begin{pmatrix} 1 & \mathsf{a} &\mathsf{b}& \mathsf{e}\\ 0 & 1&0 & \mathsf{c}\\ 0 & 0 & 1 & \mathsf{d} \\ 0 & 0 & 0& 1 \end{pmatrix}: \mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e}\in \mathbb{Z}\right\}\subset \mathrm{GL}_4(\mathbb{R}). \end{equation} This group is generated by the symmetric set $$S\stackrel{\mathrm{def}}{=} \{X_1,X_1^{-1},X_2,X_2^{-1},Y_1,Y_1^{-1},Y_2,Y_2^{-1}\},$$ where \begin{align}\label{eq:def generators} \begin{split} X_1&\stackrel{\mathrm{def}}{=} \begin{pmatrix} 1 & 1 &0 & 0\\ 0 & 1&0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0& 1 \end{pmatrix},\qquad X_2\stackrel{\mathrm{def}}{=} \begin{pmatrix} 1 & 0 & 1 & 0\\ 0 & 1&0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0& 1 \end{pmatrix},\\ Y_1&\stackrel{\mathrm{def}}{=} \begin{pmatrix} 1 & 0 &0 & 0\\ 0 & 1&0 & 1\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0& 1 \end{pmatrix},\qquad Y_2\stackrel{\mathrm{def}}{=} \begin{pmatrix} 1 & 0 &0& 0\\ 0 & 1&0 & 0\\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0& 1 \end{pmatrix}. \end{split} \end{align} For notational convenience we shall identify the matrix in~\eqref{eq:def H5} with the vector $(\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e})\in \mathbb{Z}^5$. This yields an identification of $\H_\mathbb{Z}^5$ with the $5$-dimensional integer grid $\mathbb{Z}^5$. We view $\mathbb{Z}^5$ as a (noncommutative) group equipped with the product that is inherited from matrix multiplication through the above identification, i.e., for every $(\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e}),(\alpha,\beta,\gamma,\delta,\upepsilon)\in \mathbb{Z}^5$ we set \begin{multline}\label{eq:Z5 product} (\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e})(\alpha,\beta,\gamma,\delta,\upepsilon)\\\stackrel{\mathrm{def}}{=} (\mathsf{a}+\alpha,\mathsf{b}+\beta,\mathsf{c}+\gamma,\mathsf{d}+\delta,\mathsf{e}+\upepsilon+\mathsf{a}\gamma+\mathsf{b}\delta). 
\end{multline} Note that under the above identification the identity element of $\H_\mathbb{Z}^5$ is the zero vector $\mathbf{0}\in \mathbb{Z}^5$, the inverse of an element $h=(\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e})\in \mathbb{Z}^5$ is $h^{-1}=(-\mathsf{a},-\mathsf{b},-\mathsf{c},-\mathsf{d},-\mathsf{e}+\mathsf{a}\mathsf{c}+\mathsf{b}\mathsf{d})$, and the generators $X_1,X_2,Y_1,Y_2$ in~\eqref{eq:def generators} are the first four standard basis elements of $\mathbb{R}^5$. Let $Z$ denote the fifth standard basis element of $\mathbb{R}^5$, i.e., $Z=(0,0,0,0,1)$. We then have the relations $[X_1,Y_1]=[X_2,Y_2]=Z$ and $[X_1,X_2]=[X_1,Y_2]=[X_1,Z]=[Y_1,X_2]=[Y_1,Y_2]=[Y_1,Z]=[X_2,Z]=[Y_2,Z]=\mathbf{0}$, where we recall the standard commutator notation $[g,h]=ghg^{-1}h^{-1}$ for every two group elements $g,h\in \H_\mathbb{Z}^5$. In other words, any two elements from $\{X_1,X_2,Y_1,Y_2,Z\}$ other than $X_1,Y_1$ or $X_2,Y_2$ commute, and the commutators of $X_1,Y_1$ and $X_2,Y_2$ are both equal to $Z$. In particular, $Z$ commutes with all of the members of the generating set $S$, and therefore $Z$ is in the {\em center} of $\H_\mathbb{Z}^5$. It is worthwhile to mention that these commutation relations could be used to define the group $\H_\mathbb{Z}^5$ abstractly using generators and relations, but this fact will not be needed in what follows. This group structure induces a graph $\mathcal{X}_S(\H_\mathbb{Z}^{5})$ on $\mathbb{Z}^5$, called the \emph{Cayley graph} of $\H^{5}_\mathbb{Z}$. The edges of this graph are defined to be the unordered pairs of the form $\{h,hs\}$, where $h\in \mathbb{Z}^5$ and $s\in S$. 
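These coordinate identities are easy to check mechanically. The following self-contained Python sketch (all function names are ours, introduced only for illustration) verifies that the product rule~\eqref{eq:Z5 product} agrees with $4\times 4$ matrix multiplication, and confirms the inverse formula and the commutator relations just listed:

```python
# Sketch verifying the coordinate group law, the inverse formula, and the
# commutator relations stated in the text (helper names are ours).
import itertools

def to_matrix(g):
    a, b, c, d, e = g
    return [[1, a, b, e], [0, 1, 0, c], [0, 0, 1, d], [0, 0, 0, 1]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def from_matrix(M):
    return (M[0][1], M[0][2], M[1][3], M[2][3], M[0][3])

def mul(g, h):
    # (a,b,c,d,e)(alpha,beta,gamma,delta,eps)
    #   = (a+alpha, b+beta, c+gamma, d+delta, e+eps+a*gamma+b*delta)
    a, b, c, d, e = g
    A, B, C, D, E = h
    return (a + A, b + B, c + C, d + D, e + E + a * C + b * D)

def inv(g):
    a, b, c, d, e = g
    return (-a, -b, -c, -d, -e + a * c + b * d)

def comm(g, h):  # [g, h] = g h g^{-1} h^{-1}
    return mul(mul(g, h), mul(inv(g), inv(h)))

X1, X2 = (1, 0, 0, 0, 0), (0, 1, 0, 0, 0)
Y1, Y2 = (0, 0, 1, 0, 0), (0, 0, 0, 1, 0)
Z, O = (0, 0, 0, 0, 1), (0, 0, 0, 0, 0)

# The coordinate product agrees with matrix multiplication.
for g in itertools.product(range(-1, 2), repeat=5):
    for h in [(1, 0, 0, 0, 0), (0, 1, 2, -1, 3), (-2, 1, 0, 2, -1)]:
        assert mul(g, h) == from_matrix(matmul(to_matrix(g), to_matrix(h)))

# Inverse formula; commutators [X1,Y1] = [X2,Y2] = Z; all other pairs
# from {X1, X2, Y1, Y2, Z} commute.
g = (3, -1, 4, 1, -5)
assert mul(g, inv(g)) == mul(inv(g), g) == O
assert comm(X1, Y1) == comm(X2, Y2) == Z
others = [(X1, X2), (X1, Y2), (X1, Z), (Y1, X2),
          (Y1, Y2), (Y1, Z), (X2, Z), (Y2, Z)]
assert all(comm(g, h) == O for g, h in others)
```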
This is an $8$-regular connected graph, and by the group law~\eqref{eq:Z5 product}, the neighbors of each vertex $(\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e})\in \mathbb{Z}^5$ are $(\mathsf{a}\pm 1,\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e}), (\mathsf{a},\mathsf{b}\pm 1,\mathsf{c},\mathsf{d},\mathsf{e}), (\mathsf{a},\mathsf{b},\mathsf{c}\pm 1,\mathsf{d},\mathsf{e}\pm \mathsf{a}), (\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d}\pm 1,\mathsf{e}\pm \mathsf{b}).$ The shortest-path metric on $\mathbb{Z}^5$ that is induced by this graph structure will be denoted below by $d_W:\mathbb{Z}^5\times \mathbb{Z}^5\to \mathbb{N}\cup\{0\}$. This metric is also known as the left-invariant \emph{word metric} on the Heisenberg group $\H_\mathbb{Z}^5$. For every $R\in [0,\infty)$ denote the (closed) ball of radius $R$ centered at the identity element by $\mathcal{B}_R=\{h\in \mathbb{Z}^5:\ d_W(h,\mathbf{0})\le R\}$. It is well-known (see e.g.~\cite{Bas72}) that $|\mathcal{B}_R|\asymp R^6$ and $d_W(\mathbf{0},Z^R)\asymp\sqrt{R}$ for every $R\in \mathbb{N}$. Our main result is the following theorem. \begin{thm}\label{thm:distortion R} For all $R\ge 2$ we have $c_1(\mathcal{B}_{R},d_W)\asymp \sqrt{\log R}$. \end{thm} The new content of Theorem~\ref{thm:distortion R} is the bound $c_1(\mathcal{B}_R,d_W)\gtrsim \sqrt{\log R}$. The matching upper bound $c_1(\mathcal{B}_R,d_W)\lesssim \sqrt{\log R}$ has several proofs in the literature; see e.g.~the discussion immediately following Corollary~1.3 in~\cite{LafforgueNaor} or Section~\ref{sec:embed intro} below. The previous best known estimate~\cite{CKN} was that there exists a universal constant $\updelta>0$ such that $c_1(\mathcal{B}_R,d_W)\ge (\log R)^\updelta$. By~\cite[Theorem~2.2]{LN06} the metric $d_W$ is bi-Lipschitz equivalent to a metric on $\H^5_\mathbb{Z}$ that is of negative type. 
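The word metric can also be explored computationally. A minimal breadth-first-search sketch (function names are ours) illustrates the anomalous square-root behavior of the central direction: $d_W(\mathbf{0},Z)=4$, realized by the commutator $X_1Y_1X_1^{-1}Y_1^{-1}$, while $d_W(\mathbf{0},Z^2)=6$, realized by $X_1^2Y_1X_1^{-2}Y_1^{-1}$ rather than by a product of two commutators:

```python
# Breadth-first search computing the word metric d_W on the Cayley graph
# of the discrete Heisenberg group (function names are ours).
from collections import deque

def mul(g, h):
    a, b, c, d, e = g
    A, B, C, D, E = h
    return (a + A, b + B, c + C, d + D, e + E + a * C + b * D)

GEN = [(1, 0, 0, 0, 0), (-1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, -1, 0, 0, 0),
       (0, 0, 1, 0, 0), (0, 0, -1, 0, 0), (0, 0, 0, 1, 0), (0, 0, 0, -1, 0)]

def ball_distances(radius):
    """d_W(0, h) for every h in the closed ball of the given radius."""
    origin = (0, 0, 0, 0, 0)
    dist = {origin: 0}
    queue = deque([origin])
    while queue:
        g = queue.popleft()
        if dist[g] == radius:
            continue
        for s in GEN:
            h = mul(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return dist

dist = ball_distances(6)
assert dist[(0, 0, 0, 0, 1)] == 4   # Z has word length 4
assert dist[(0, 0, 0, 0, 2)] == 6   # Z^2 has word length 6, not 8
```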
We remark that~\cite{LN06} makes this assertion for a different metric on a larger continuous group that contains $\H_\mathbb{Z}^5$ as a discrete co-compact subgroup, but by a simple general result (e.g.~\cite[Theorem~8.3.19]{BBI01}) the word metric $d_W$ is bi-Lipschitz equivalent to the metric considered in~\cite{LN06}. Since $|\mathcal{B}_R|\asymp R^6$, we have $\sqrt{\log |\mathcal{B}_R|}\asymp\sqrt{\log R}$, so Theorem~\ref{thm:distortion R} implies Theorem~\ref{thm:main GL lower intro} through the duality result of Rabinovich that was recalled in Section~\ref{sec:embed intro}. The following precise theorem about $L_1$ embeddings that need not be bi-Lipschitz implies Theorem~\ref{thm:distortion R} by considering the special case of the modulus $\upomega(t)=t/D$ for $D\ge 1$ and $t\in [0,\infty)$. \begin{thm}\label{thm:integral criterion} There exists a universal constant $c\in (0,1)$ with the following property. Fix $R\ge 2$ and a nondecreasing function $\upomega:[0,\infty)\to [0,\infty)$. Then there exists $\upphi:\mathcal{B}_{R}\to L_1$ for which every distinct $x,y\in \mathcal{B}_{R}$ satisfy \begin{equation}\label{eq:compression omega on ball} \upomega\big(d_W(x,y)\big)\lesssim \|\upphi(x)-\upphi(y)\|_1\le d_W(x,y), \end{equation} {\bf if and only if} $\upomega(t)\lesssim t$ for all $t\in [1,\infty)$ and \begin{equation}\label{eq:integral criterion} \int_{1}^{c R} \frac{\upomega(s)^2}{s^3}\ud s\lesssim 1. \end{equation} \end{thm} The fact that the integrability requirement~\eqref{eq:integral criterion} implies the existence of the desired embedding $\upphi$ is due to~\cite[Corollary~5]{Tes08}. The new content of Theorem~\ref{thm:integral criterion} is that the existence of the embedding $\upphi$ implies~\eqref{eq:integral criterion}.
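To spell out the deduction of Theorem~\ref{thm:distortion R} from Theorem~\ref{thm:integral criterion}, it is a one-line computation (with constant factors suppressed, as elsewhere in the text):

```latex
% Substituting the modulus \upomega(t)=t/D into the criterion:
\int_{1}^{cR}\frac{\upomega(s)^2}{s^3}\,\ud s
  = \frac{1}{D^2}\int_{1}^{cR}\frac{\ud s}{s}
  = \frac{\log(cR)}{D^2}.
% Condition \eqref{eq:integral criterion} therefore forces
%   D \gtrsim \sqrt{\log(cR)} \asymp \sqrt{\log R},
% i.e., any embedding of (\mathcal{B}_R, d_W) into L_1 with
% distortion D must satisfy D \gtrsim \sqrt{\log R}.
```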
By letting $R\to \infty$ in Theorem~\ref{thm:integral criterion} we see that there exists $\upphi:\mathbb{Z}^5\to L_1$ that satisfies \begin{equation}\label{eq:compression assumption} \forall\, x,y\in \mathbb{Z}^5,\quad \upomega\big(d_W(x,y)\big)\lesssim \|\upphi(x)-\upphi(y)\|_1\le d_W(x,y), \end{equation} {\bf if and only if} \begin{equation}\label{eq:1 to infty integral} \int_{1}^\infty \frac{\upomega(s)^2}{s^3}\ud s\lesssim 1. \end{equation} In~\cite{CKN} it was shown that if $\upphi:\mathbb{Z}^5\to L_1$ satisfies~\eqref{eq:compression assumption}, then there must exist arbitrarily large $t\ge 2$ for which $\upomega(t)\lesssim t/(\log t)^\updelta$, where $\updelta>0$ is a universal constant. The criterion~\eqref{eq:1 to infty integral} yields this conclusion with $\updelta=\frac12$, which is the largest possible constant for which it can hold, thus answering positively a question that was asked in~\cite[Remark~1.7]{CKN}. In fact, it provides an even better conclusion, because~\eqref{eq:1 to infty integral} implies that, say, there must exist arbitrarily large $t\ge 4$ for which $$\upomega(t)\lesssim \frac{t}{\sqrt{(\log t)\log\log t}}.$$ (The precise criterion is the integrability condition~\eqref{eq:1 to infty integral}.) Finally, by considering $\upomega(t)=t^{1-\varepsilon}/D$ for $\varepsilon\in (0,1)$ and $D\ge 1$, we obtain the following notable corollary. \begin{cor}[$L_1$ distortion of snowflakes]\label{coro:snoflake} For every $\varepsilon\in (0,1)$ we have $c_1\big(\mathbb{Z}^5,d_W^{1-\varepsilon}\big)\asymp \frac{1}{\sqrt{\varepsilon}}$. \end{cor} The fact that for every $O(1)$-doubling metric space $(X,d)$ we have $c_1(X,d^{1-\varepsilon})\lesssim 1/\sqrt{\varepsilon}$ follows from an argument of~\cite{LMN05} (see also~\cite[Theorem~5.2]{NS11}). Corollary~\ref{coro:snoflake} shows that this is sharp.
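The computation behind Corollary~\ref{coro:snoflake}, and behind the two-regime bound that follows from Theorem~\ref{thm:integral criterion}, is equally short:

```latex
% With \upomega(t) = t^{1-\varepsilon}/D, for R \ge 2 and \varepsilon\in(0,1):
\int_{1}^{cR}\frac{\upomega(s)^2}{s^3}\,\ud s
  = \frac{1}{D^2}\int_{1}^{cR} s^{-1-2\varepsilon}\,\ud s
  = \frac{1-(cR)^{-2\varepsilon}}{2\varepsilon D^2}.
% Since 1 - e^{-x} \asymp \min\{x,1\} for x \ge 0, here with
% x = 2\varepsilon\log(cR), requirement \eqref{eq:integral criterion}
% amounts to
%   D^2 \gtrsim \frac{\min\{2\varepsilon\log(cR),\,1\}}{\varepsilon}
%       \asymp \min\Big\{\log R,\ \frac{1}{\varepsilon}\Big\},
% i.e., D \gtrsim \min\{\sqrt{\log R},\, 1/\sqrt{\varepsilon}\};
% letting R \to \infty gives the lower bound of Corollary~\ref{coro:snoflake}.
```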
More generally, it follows from Theorem~\ref{thm:integral criterion} that for every $R\ge 2$ and $\varepsilon\in (0,1)$ we have $$ c_1\big(\mathcal{B}_{R},d_W^{1-\varepsilon}\big)\asymp\min\left\{\frac{1}{\sqrt{\varepsilon}},\sqrt{\log R}\right\}. $$ \subsection{Vertical-versus-horizontal isoperimetry}\label{sec:isoper intro} Our new non-embeddability results are all consequences of an independently interesting isoperimetric-type inequality which we shall now describe. Roughly speaking, this inequality subtly quantifies the fact that for any $n\in \mathbb{Z}$ and any $h\in \H_\mathbb{Z}^5$, there are many paths in the Cayley graph $\mathcal{X}_S(\H_\mathbb{Z}^{5})$ of length roughly $\sqrt{|n|}$ that connect $h$ to $h Z^n$. Consequently, if a finite subset $\Omega\subset \mathbb{Z}^5$ has a small edge boundary in the Cayley graph, then the number of pairs $(x,y)\in \mathbb{Z}^5\times \mathbb{Z}^5$ for which $|\{x,y\}\cap \Omega|=1$ yet $x$ and $y$ differ only in their fifth (vertical) coordinate must also be small. It turns out that the proper interpretation of the term ``small'' in this context is not at all obvious, and it should be measured in a certain multi-scale fashion. Formally, we consider the following quantities. \begin{defn}[Discrete boundaries]\label{def:discrete perimeters} { For $\Omega\subset \mathbb{Z}^{5}$, the {\bf horizontal boundary} of $\Omega$ is defined by \begin{equation}\label{eq:def horizontal boundary} \partial_{\mathsf{h}}\Omega\stackrel{\mathrm{def}}{=} \big\{(x,y)\in \Omega\times \left(\mathbb{Z}^{5}\setminus \Omega\right): x^{-1}y\in S\big\}. \end{equation} Given also $t\in \mathbb{N}$, the $t${\bf-vertical boundary} of $\Omega$ is defined by \begin{equation}\label{eq:def vertical t boundary} \partial^t_{\mathsf{v}} \Omega\stackrel{\mathrm{def}}{=} \Big\{(x,y)\in \Omega\times \left(\mathbb{Z}^{5}\setminus \Omega\right): x^{-1}y\in \left\{Z^t,Z^{-t}\right\}\Big\}.
\end{equation} The {\bf horizontal perimeter} of $\Omega$ is defined to be the cardinality $|\partial_{\mathsf{h}}\Omega|$ of its horizontal boundary. The {\bf vertical perimeter} of $\Omega$ is defined to be the quantity \begin{equation}\label{eq:def discrete vertical perimeter} |\partial_{\mathsf{v}}\Omega|\stackrel{\mathrm{def}}{=} \bigg(\sum_{t=1}^\infty \frac{|\partial^t_{\mathsf{v}}\Omega|^2}{t^2}\bigg)^{\frac12}. \end{equation} } \end{defn} The horizontal perimeter of $\Omega$ is nothing more than the size of its {\em edge boundary} in the Cayley graph $\mathcal{X}_S(\H_\mathbb{Z}^{5})$. The vertical perimeter of $\Omega$ is a more subtle concept that does not have such a simple combinatorial description. The definition~\eqref{eq:def discrete vertical perimeter} was first published in~\cite[Section~4]{LafforgueNaor}, where the isoperimetric-type conjecture that we resolve here as Theorem~\ref{thm:isoperimetric discrete} below also appeared for the first time. These were formulated by the first named author and had been circulating for several years before~\cite{LafforgueNaor} appeared, intended as a possible route towards the algorithmic application that we indeed succeeded in obtaining here. That ``vertical smallness'' should be measured through the quantity $|\partial_{\mathsf{v}}\Omega|$, i.e., the $\ell_2$ norm of the sequence $\{|\partial_\mathsf{v}^t\Omega|/t\}_{t=1}^\infty$, was arrived at through trial and error, inspired by functional inequalities that were obtained in~\cite{AusNaoTes,LafforgueNaor}, as explained in~\cite[Section~4]{LafforgueNaor}. \begin{thm}\label{thm:isoperimetric discrete} Every $\Omega\subset \mathbb{Z}^{5}$ satisfies $|\partial_{\mathsf{v}}\Omega|\lesssim |\partial_{\mathsf{h}}\Omega|$. \end{thm} The significance of Theorem~\ref{thm:isoperimetric discrete} can only be fully appreciated through an examination of the geometric and analytic reasons for its validity.
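To make Definition~\ref{def:discrete perimeters} concrete, here is a small stdlib Python sketch (function names are ours) that evaluates both perimeters for a finite set. For the singleton $\{\mathbf{0}\}$ one has $|\partial_{\mathsf{h}}\Omega|=|S|=8$ and $|\partial_{\mathsf{v}}^t\Omega|=2$ for every $t$, so $|\partial_{\mathsf{v}}\Omega|=(\sum_t 4/t^2)^{1/2}=2\pi/\sqrt6$:

```python
# Direct implementation of the discrete horizontal and vertical perimeters
# of a finite subset of Z^5 (helper names are ours).
import math

def mul(g, h):
    a, b, c, d, e = g
    A, B, C, D, E = h
    return (a + A, b + B, c + C, d + D, e + E + a * C + b * D)

S = [(1, 0, 0, 0, 0), (-1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, -1, 0, 0, 0),
     (0, 0, 1, 0, 0), (0, 0, -1, 0, 0), (0, 0, 0, 1, 0), (0, 0, 0, -1, 0)]

def horizontal_perimeter(omega):
    return sum(1 for x in omega for s in S if mul(x, s) not in omega)

def vertical_boundary(omega, t):
    return sum(1 for x in omega for sign in (t, -t)
               if mul(x, (0, 0, 0, 0, sign)) not in omega)

def vertical_perimeter(omega, tmax=100000):
    # Truncation of the infinite sum in the definition; for a finite
    # set the tail beyond tmax is O(1/tmax).
    return math.sqrt(sum(vertical_boundary(omega, t) ** 2 / t ** 2
                         for t in range(1, tmax + 1)))

singleton = {(0, 0, 0, 0, 0)}
assert horizontal_perimeter(singleton) == 8
assert abs(vertical_perimeter(singleton) - 2 * math.pi / math.sqrt(6)) < 1e-3
```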
To facilitate this, we shall include in this extended abstract an extensive overview of the ideas of the proof of Theorem~\ref{thm:isoperimetric discrete}; see Section~\ref{sec:overview} below. Before doing so, we shall now demonstrate the utility of Theorem~\ref{thm:isoperimetric discrete} by using it to deduce Theorem~\ref{thm:integral criterion}. As explained above, by doing so we shall conclude the proof of all of our new results (modulo Theorem~\ref{thm:isoperimetric discrete}), including the lower bound on the integrality gap for the Goemans--Linial SDP. \subsection{From isoperimetry to non-embeddability} An equivalent formulation of Theorem~\ref{thm:isoperimetric discrete} is that every finitely supported function $\upphi:\mathbb{Z}^5\to L_1$ satisfies the following Poincar\'e-type inequality. \begin{multline}\label{eq:discrete global intro} \left(\sum_{t=1}^\infty \frac{1}{t^2}\bigg(\sum_{h\in \mathbb{Z}^5} \big\|\upphi\big(hZ^t\big)-\upphi(h)\big\|_1\bigg)^2\right)^{\frac12}\\\lesssim \sum_{h\in \mathbb{Z}^5}\sum_{\sigma\in S}\big\|\upphi(h\sigma)-\upphi(h)\big\|_1. \end{multline} Indeed, Theorem~\ref{thm:isoperimetric discrete} is nothing more than the special case $\upphi=\mathbf 1_\Omega$ of~\eqref{eq:discrete global intro}. Conversely, the fact that~\eqref{eq:discrete global intro} follows from Theorem~\ref{thm:isoperimetric discrete} is a straightforward application of the cut-cone representation of $L_1$ metrics (see e.g.~\cite[Proposition~4.2.2]{DL97} or~\cite[Corollary~3.2]{Nao10}), though our proof will yield the (seemingly) stronger statement~\eqref{eq:discrete global intro} directly.
Next, Section~3.2 of~\cite{LafforgueNaor} shows that~\eqref{eq:discrete global intro} formally implies its local counterpart, which asserts that there exists a universal constant $\alpha\ge 1$ such that for every $n\in \mathbb{N}$ and every $\upphi:\mathbb{Z}^5\to L_1$ we have \begin{multline}\label{eq:discrete local intro} \left(\sum_{t=1}^{n^2} \frac{1}{t^2}\bigg(\sum_{h\in \mathcal{B}_{n}} \big\|\upphi\big(hZ^t\big)-\upphi(h)\big\|_1\bigg)^2\right)^{\frac12}\\\lesssim \sum_{h\in \mathcal{B}_{\alpha n}}\sum_{\sigma\in S}\big\|\upphi(h\sigma)-\upphi(h)\big\|_1. \end{multline} To deduce Theorem~\ref{thm:integral criterion}, suppose that $R\ge 2$, that $\upomega:[0,\infty)\to [0,\infty)$ is nondecreasing and that the mapping $\upphi:\mathcal{B}_{R}\to L_1$ satisfies~\eqref{eq:compression omega on ball}. For notational convenience, fix two universal constants $\beta\in (0,1)$ and $\gamma\in (1,\infty)$ such that $\beta\sqrt{t}\le d_W(Z^t,\mathbf{0})\le \gamma\sqrt{t}$ for every $t\in \mathbb{N}$. Note that~\eqref{eq:compression omega on ball} implies in particular that $\upomega(R)\lesssim R$, so for every $c\in (0,1)$ the left hand side of~\eqref{eq:integral criterion} is at most a universal constant multiple of $R^2$. Hence, it suffices to prove Theorem~\ref{thm:integral criterion} when $R\ge 1+\max\{\alpha,\gamma\}$, where $\alpha$ is the universal constant in~\eqref{eq:discrete local intro}. Denote $n=\lfloor \min\{R/(1+\gamma),(R-1)/\alpha\}\rfloor\in \mathbb{N}$. If $t\in \{1,\ldots,n^2\}$ and $h\in \mathcal{B}_n$ then $d_W(hZ^t,\mathbf{0})\le n+\gamma \sqrt{t}\le (1+\gamma)n\le R$, and therefore we may apply~\eqref{eq:compression omega on ball} with $x=hZ^t$ and $y=h$ to deduce that $\|\upphi(hZ^t)-\upphi(h)\|_1\gtrsim \upomega(d_W(Z^t,\mathbf{0}))\ge \upomega(\beta\sqrt{t})$.
Consequently, \begin{align}\label{eq:pass to int} &\sum_{t=1}^{n^2} \frac{1}{t^2}\bigg(\sum_{h\in \mathcal{B}_{n}} \big\|\upphi\big(hZ^t\big)-\upphi(h)\big\|_1\bigg)^2\gtrsim \sum_{t=1}^{n^2} \frac{|\mathcal{B}_n|^2\upomega\big(\beta\sqrt{t}\big)^2}{t^2}\nonumber \\ \nonumber&\gtrsim n^{12}\sum_{t=1}^{n^2} \int_t^{t+1}\frac{\upomega\big(\beta\sqrt{u/2}\big)^2}{u^2}\ud u=\beta^2n^{12}\int_{\frac{\beta}{\sqrt{2}}}^{\frac{\beta\sqrt{n^2+1}}{\sqrt{2}}}\frac{\upomega(s)^2}{s^3}\ud s\\&\ge \frac{\beta^2(R/2)^{12}}{\max\{(1+\gamma)^{12},\alpha^{12}\}} \int_1^{\frac{\beta R }{2\max\{1+\gamma,\alpha\}}}\frac{\upomega(s)^2}{s^3}\ud s, \end{align} where the second inequality in~\eqref{eq:pass to int} uses the fact that $\upomega$ is non-decreasing, the penultimate step of~\eqref{eq:pass to int} uses the change of variable $s=\beta\sqrt{u/2}$, and for the final step of~\eqref{eq:pass to int} recall that $\beta<1$ and the definition of $n$. At the same time, by our choice of $n$ we have $h\sigma\in \mathcal{B}_{\alpha n+1}\subset \mathcal{B}_R$ for every $h\in \mathcal{B}_{\alpha n}$ and $\sigma\in S$, and so by~\eqref{eq:compression omega on ball} we have $\|\upphi(h\sigma)-\upphi(h)\|_1\le d_W(h\sigma,h)=1$. The right hand side of~\eqref{eq:discrete local intro} is therefore at most a universal constant multiple of $|\mathcal{B}_{\alpha n}|\cdot|S|\lesssim (\alpha n)^6\lesssim R^6$. By contrasting~\eqref{eq:pass to int} with~\eqref{eq:discrete local intro} we obtain that the desired estimate~\eqref{eq:integral criterion} indeed holds true. \subsection{Overview of the proof of Theorem~\ref{thm:isoperimetric discrete}}\label{sec:overview} Our proof of~\eqref{eq:discrete global intro}, and hence also of Theorem~\ref{thm:isoperimetric discrete}, is carried out in a continuous setting that is equivalent to its discrete counterpart. Such a passage from continuous to discrete is commonplace, and in the present setting this was carried out in~\cite{AusNaoTes,LafforgueNaor}.
The idea is to consider a continuous group that contains $\H_\mathbb{Z}^5$ and to deduce the discrete inequality~\eqref{eq:discrete global intro} from its (appropriately formulated) continuous counterpart via a partition of unity argument. There is an obvious way to embed $\H_\mathbb{Z}^5$ in a continuous group, namely by considering the same group of matrices as in~\eqref{eq:def H5}, but with the entries $\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e}$ now allowed to be arbitrary real numbers instead of integers. This is indeed a viable route and the ensuing discussion could be carried out by considering the resulting continuous matrix group. Nevertheless, it is notationally advantageous to work with a different (standard) realization of $\H_\mathbb{Z}^5$ which is isomorphic to the one that we considered thus far. We shall now introduce the relevant notation. Fix an orthonormal basis $\{X_1,X_2,Y_1,Y_2,Z\}$ of $\mathbb{R}^{5}$. If $h=\alpha_1X_1+\alpha_2X_2+\beta_1Y_1+\beta_2Y_2+\gamma Z\in \mathbb{R}^{5}$ then denote $x_i(h)=\alpha_i$, $y_i(h)=\beta_i$ for $i\in \{1,2\}$ and $z(h)=\gamma$, i.e., $x_1,x_2,y_1,y_2,z:\mathbb{R}^{5}\to \mathbb{R}$ are the coordinate functions corresponding to the above basis. The continuous Heisenberg group $\H^{5}$ is defined to be $\mathbb{R}^{5}$, equipped with the following group law. \begin{multline}\label{eq:def heisenberg algebra product} u v\stackrel{\mathrm{def}}{=} u+v\\+\frac{x_1(u)y_1(v)-y_1(u)x_1(v)+x_2(u)y_2(v)-y_2(u)x_2(v)}{2} Z. \end{multline} The identity element of $\H^{5}$ is $\mathbf{0}\in \mathbb{R}^{5}$ and the inverse of $h\in \mathbb{R}^{5}$ under the group law~\eqref{eq:def heisenberg algebra product} is equal to $-h$. By directly computing Jacobians, one checks that the Lebesgue measure on $\mathbb{R}^{5}$ is invariant under the group operation given in~\eqref{eq:def heisenberg algebra product}, i.e., it is a Haar measure of $\H^{5}$.
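As a quick sanity check of the group law~\eqref{eq:def heisenberg algebra product}, the following sketch verifies (in exact rational arithmetic; function names are ours) that it is associative, that $\mathbf{0}$ is the identity, and that the inverse of $h$ is $-h$:

```python
# Exact-arithmetic check of the symmetrized product on R^5; coordinates
# are ordered (x1, x2, y1, y2, z). Helper names are ours.
from fractions import Fraction as F

def mul(u, v):
    x1, x2, y1, y2, z = u
    X1, X2, Y1, Y2, Z = v
    return (x1 + X1, x2 + X2, y1 + Y1, y2 + Y2,
            z + Z + (x1 * Y1 - y1 * X1 + x2 * Y2 - y2 * X2) / 2)

u = (F(1), F(2), F(-1), F(3), F(5))
v = (F(2), F(-1), F(1), F(1), F(0))
w = (F(-3), F(1), F(2), F(-2), F(4))
zero = (F(0),) * 5
neg = lambda h: tuple(-t for t in h)

assert mul(mul(u, v), w) == mul(u, mul(v, w))      # associativity
assert mul(u, zero) == mul(zero, u) == u           # 0 is the identity
assert mul(u, neg(u)) == mul(neg(u), u) == zero    # inverse of h is -h
```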
In what follows, in order to avoid confusing multiplication by scalars with the group law of $\H^{5}$, for every $h\in \H^{5}$ and $t\in \mathbb{R}$ we shall use the exponential notation $h^t=(th_1,\ldots,th_{5})$; this agrees with the group law when $t\in \mathbb{Z}$. (This convention is not strictly necessary, but without it the ensuing discussion could become somewhat notationally confusing.) The subgroup of $\H^{5}$ that is generated by $\{X_1,X_2,Y_1,Y_2\}$ is the discrete Heisenberg group of dimension $5$, denoted $\H_\mathbb{Z}^{5}$. The apparent inconsistency with~\eqref{eq:def H5} is not an actual issue because it is straightforward to check that the two groups in question are in fact isomorphic. The linear span of $\{X_1,Y_1,Z\}$ is a subgroup of $\H^5$ which is denoted $\H^3$ (the $3$-dimensional Heisenberg group). There is a canonical left-invariant metric on $\H^{5}$, commonly called the {\em Carnot--Carath\'eodory metric}, which we denote by $d$. We refer to \cite{CDPT07} for a precise definition of this metric. For the purpose of the present discussion it suffices to know that $d$ possesses the following properties. Firstly, for every $g,h\in \H^{5}$ and $\uptheta\in \mathbb{R}$ we have $d(\mathfrak{s}_\uptheta (g),\mathfrak{s}_\uptheta (h))=|\uptheta| d(g,h)$. Here, $\mathfrak{s}_\uptheta$ denotes the {\em Heisenberg scaling} by $\uptheta$, given by the formula $$\mathfrak{s}_\uptheta (\alpha_1,\alpha_2,\beta_1,\beta_2,\gamma)=(\uptheta\alpha_1,\uptheta \alpha_2,\uptheta \beta_1,\uptheta \beta_2,\uptheta^2\gamma).$$ Secondly, the restriction of $d$ to the subgroup $\H_\mathbb{Z}^{5}$ is bi-Lipschitz equivalent to the word metric induced by its generating set $\{X_1^{\pm 1}, X_2^{\pm 1},Y_1^{\pm 1},Y_2^{\pm 1}\}$. Thirdly, there exists $C\in (1,\infty)$ such that every $h\in \H^5$ satisfies \begin{equation}\label{eq:cc normalization} d(h,\mathbf{0})\le |x_1(h)|+|x_2(h)|+|y_1(h)|+|y_2(h)|+4\sqrt{|z(h)|}\le \frac{C}{2}d(h,\mathbf{0}).
\end{equation} Given $r\in (0,\infty)$ we shall denote by $B_r\subset \H^{5}$ the open ball in the metric $d$ of radius $r$ centered at the identity element, i.e., $B_r=\{h\in \H^{5}:\ d(\mathbf{0},h)< r\}$. For $\Omega\subset \H^5$ the Lipschitz constant of a mapping $f:\Omega\to \mathbb{R}$ relative to the metric $d$ will be denoted by $\|f\|_{\Lip(\Omega)}$. For $s\in (0,\infty)$, the notation $\mathcal{H}^s$ will be used exclusively to denote the $s$-dimensional Hausdorff measure that is induced by the metric $d$ (see e.g.~\cite{Mat95}). One checks that $\mathcal{H}^{6}$ is proportional to the Lebesgue measure on $\mathbb{R}^{5}$ and that the restriction of $\mathcal{H}^4$ to the subgroup $\H^3$ is proportional to the Lebesgue measure on $\H^3$ (under the canonical identification of $\H^3$ with $\mathbb{R}^3$). For two measurable subsets $E,U\subset \H^{5}$ define the {\em normalized vertical perimeter} of $E$ in $U$ to be the function $\overline{\mathsf{v}}_U(E):\mathbb{R}\to [0,\infty]$ given by setting for every $s\in \mathbb{R}$, \begin{align}\label{eq:def normalized vertical} \nonumber\overline{\mathsf{v}}_U(E)(s)&\stackrel{\mathrm{def}}{=} \frac{1}{2^{s}}\mathcal{H}^{6}\Big( \big(E\mathop{\triangle} \big(E Z^{2^{2s}}\big)\big)\cap U\Big)\\&=\frac{1}{2^{s}}\int_{U} \Big|1_E(u)-1_{E}\big(uZ^{-2^{2s}}\big)\Big|\ud \mathcal{H}^6(u), \end{align} where $A\mathop{\triangle} B\stackrel{\mathrm{def}}{=} (A\setminus B)\cup (B\setminus A)$ is the symmetric difference. We also denote $\overline{\mathsf{v}}(E)\stackrel{\mathrm{def}}{=}\overline{\mathsf{v}}_{\H^5}(E)$. The isoperimetric-type inequality of Theorem~\ref{thm:v vs h cont intro} below implies Theorem~\ref{thm:isoperimetric discrete}.
See \cite{NY-full} for an explanation of this (standard) deduction; the argument is a straightforward use of the co-area formula (see e.g.~\cite{Amb01,Mag11}) to pass from sets to functions, followed by the partition of unity argument of~\cite[Section~3.3]{LafforgueNaor} to pass from the continuous setting to the desired discrete inequality~\eqref{eq:discrete global intro}. \begin{thm}\label{thm:v vs h cont intro} $ \big\|\overline{\mathsf{v}}(E)\big\|_{L_2(\mathbb{R})}\lesssim \mathcal{H}^{5}(\partial E) $ for all open $E\subset \H^{5}$. \end{thm} We shall now explain the overall strategy and main ideas of our proof of Theorem~\ref{thm:v vs h cont intro}. Complete technical details are included in \cite{NY-full}. A key new ingredient appears in Section~\ref{sec:into lip graph} below, which is the {\em only} place in our proof where we use the fact that we are dealing with $\H^5$ rather than $\H^3$. In fact, the analogue of Theorem~\ref{thm:v vs h cont intro} for $\H^3$ (i.e., with $\mathcal{H}^5(\partial E)$ replaced by $\mathcal{H}^3(\partial E)$ and $\overline{\mathsf{v}}(E)(\cdot)$ defined in the same way as in~\eqref{eq:def normalized vertical} but with $\mathcal{H}^6$ replaced by the restriction of $\mathcal{H}^4$ to $\H^3$) is {\em false} (see Section~\ref{sec:3d} below). The crux of the matter is the special case of Theorem~\ref{thm:v vs h cont intro} where the boundary of $E$ is (a piece of) an {\em intrinsic Lipschitz graph}. Such sets were introduced by Franchi, Serapioni, and Serra Cassano~\cite{FSSC06}. These sets can be quite complicated, and in particular they {\em are not the same} as graphs of functions (in the usual sense) that are Lipschitz with respect to the Carnot--Carath\'eodory metric. Our proof of this special case relies crucially on an $L_2$-variant of~\eqref{eq:discrete global intro} for $\H^3$ that was proven in~\cite{AusNaoTes} using representation theory and in~\cite{LafforgueNaor} using Littlewood--Paley theory. 
In essence, our argument ``lifts'' a certain $L_2$ inequality in lower dimensions to a formally stronger endpoint $L_1$ (or isoperimetric-type) inequality in higher dimensions. Once the special case is established, we prove Theorem~\ref{thm:v vs h cont intro} in its full generality by decomposing an open set $E$ into parts whose boundaries are close to pieces of intrinsic Lipschitz graphs and applying the special case to each part of this decomposition. We deduce the desired estimate by summing up all the inequalities thus obtained. Such a ``corona decomposition'' is an important and widely-used tool in harmonic analysis on $\mathbb{R}^n$ that was formulated by David and Semmes in~\cite{DavidSemmesSingular}. For the present purpose we need to devise an ``intrinsic version'' of a corona decomposition on the Heisenberg group. This step uses a different ``coercive quantity'' to control local overlaps, but for the most part it follows the lines of the well-understood methodology of David and Semmes, as described in the monographs~\cite{DavidSemmesSingular,DSAnalysis}. \subsubsection{Intrinsic Lipschitz graphs}\label{sec:into lip graph} Set $V\stackrel{\mathrm{def}}{=}\{h\in \H^5:\ x_2(h)=0\}$. 
For $f:V\to \mathbb{R}$ define \begin{multline}\label{eq:lip graph abcd} \Gamma_f\stackrel{\mathrm{def}}{=} \Big\{vX_2^{f(v)}:\ v\in V\Big\}\\\stackrel{\eqref{eq:def heisenberg algebra product}}{=} \Big\{\Big(\mathsf{a},f(\mathsf{a},\mathsf{c},\mathsf{d},\mathsf{e}),\mathsf{c},\mathsf{d},\mathsf{e}-\frac12 \mathsf{d} f(\mathsf{a},\mathsf{c},\mathsf{d},\mathsf{e}) \Big):\ \mathsf{a},\mathsf{c},\mathsf{d},\mathsf{e}\in \mathbb{R}\Big\}, \end{multline} where~\eqref{eq:lip graph abcd} uses the identification of $\mathsf{a} X_1+\mathsf{b} X_2+\mathsf{c} Y_1+\mathsf{d} Y_2+\mathsf{e} Z\in \H^5$ with $(\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e})\in \mathbb{R}^5$ and the identification of $(\mathsf{a},0,\mathsf{c},\mathsf{d},\mathsf{e})\in V$ with $(\mathsf{a},\mathsf{c},\mathsf{d},\mathsf{e})\in \mathbb{R}^4$ (thus we think of the domain of $f$ as equal to $\mathbb{R}^4$). The set $\Gamma_f$ is a typical {\em intrinsic graph} in $\H^5$. See \cite{NY-full} for a discussion of the general case, which is equivalent to this case via a symmetry of $\H^5$ (so the ensuing discussion has no loss of generality). Suppose that $\lambda\in (0,\infty)$. We say that $\Gamma_f$ is an {\em intrinsic $\lambda$-Lipschitz graph} over the vertical hyperplane $V$ if \begin{equation}\label{eq:lambda lip condition} \forall\, w_1,w_2\in \Gamma_f,\qquad |x_2(w_1)-x_2(w_2)|\le \lambda d(w_1,w_2). \end{equation} Due to~\eqref{eq:lip graph abcd} the condition~\eqref{eq:lambda lip condition} amounts to a point-wise inequality for $f$ that is somewhat complicated, and in particular it does not imply that $f$ must be Lipschitz with respect to the restriction of the Carnot--Carath\'eodory metric to the hyperplane $V$, as explained in~\cite[Remark~3.13]{FSSCDifferentiability}. Denote by $\Gamma_f^+=\{vX_2^t:\ v\in V\ \wedge \ t>f(v)\}$ the half-space that is bounded by the intrinsic graph $\Gamma_f$. Suppose that $\Gamma_f$ is an intrinsic $\lambda$-Lipschitz graph with $\lambda\in (0,1)$. 
We claim that \begin{equation}\label{eq:lip graph ineq intro} \forall\, r\in (0,\infty),\qquad \big\|\overline{\mathsf{v}}_{B_r}\big(\Gamma^+_f\big)\big\|_{L_2(\mathbb{R})}\lesssim \frac{r^5}{1-\lambda}. \end{equation} When, say, $\lambda\in (0,\frac12)$, the estimate~\eqref{eq:lip graph ineq intro} is in essence the special case of Theorem~\ref{thm:v vs h cont intro} for pieces of Lipschitz graphs. This is so because, due to the isoperimetric inequality for the Heisenberg group~\cite{Pan82}, the right-hand side of~\eqref{eq:lip graph ineq intro} is at most a universal constant multiple of $\mathcal{H}^5(\partial(B_r\cap \Gamma_f^+))$ whenever $\mathcal{H}^6(B_r\cap \Gamma_f^+)\gtrsim r^6$, i.e., provided that $\Gamma_f^+$ occupies a constant fraction of the volume of the ball $B_r$. The estimate~\eqref{eq:lip graph ineq intro} will be used below only in such a non-degenerate situation. The advantage of working in $\H^5$ rather than $\H^3$ is that $V\subset \H^5$ can be sliced into copies of $\H^3$. We will bound $\|\overline{\mathsf{v}}_{B_r}(\Gamma^+_f)\|_{L_2(\mathbb{R})}$ by decomposing $\Gamma_f^+$ into a corresponding family of slices. Write \begin{equation}\label{eq:def h(u)} \forall\, u\in \H^5,\quad h_u\stackrel{\mathrm{def}}{=} X_1^{x_1(u)}+Y_1^{y_1(u)}+Z^{z(u)+\frac12 x_2(u)y_2(u)}\in \H^3. \end{equation} Recalling~\eqref{eq:def heisenberg algebra product}, one computes directly that $u=Y_2^{y_2(u)}h_uX_2^{x_2(u)}$. Let $C\in (1,\infty)$ be the universal constant in~\eqref{eq:cc normalization}. A straightforward computation using~\eqref{eq:cc normalization} shows that $d(h_u,\mathbf{0})\le Cd(u,\mathbf{0})$. Also, \eqref{eq:cc normalization} implies that $|y_2(u)|\le Cd(u,\mathbf{0})$. These simple observations demonstrate that \begin{equation}\label{eq:ball product set} \forall\, u\in \H^5,\qquad \mathbf 1_{B_r}(u)\le \mathbf 1_{[-Cr,Cr]}\big(y_2(u)\big)\mathbf 1_{\H^3\cap B_{Cr}}(h_u). 
\end{equation} For every $\chi\in \mathbb{R}$ define $f_\chi:\H^3\to \mathbb{R}$ by $f_\chi(h)=f(Y_2^\chi h)$ (recall that $\H^3$ is the span of $\{X_1,Y_1,Z\}$, so $Y_2^\chi h\in V$ is in the domain of $f$). Under this notation $u\in \Gamma^+_f$ if and only if $x_2(u)>f_{y_2(u)}(h_u)$. Also, for every $\alpha\in \mathbb{R}$ we have $uZ^\alpha\in \Gamma^+_f$ if and only if $x_2(u)>f_{y_2(u)}(h_u Z^\alpha)$, since $h_{uZ^{\alpha}}=h_u Z^\alpha$ by~\eqref{eq:def h(u)}. Due to~\eqref{eq:def normalized vertical} and~\eqref{eq:ball product set}, these observations imply that for every $s\in \mathbb{R}$ we have \begin{align}\label{eq:estimate on product instead of ball} \nonumber &\overline{\mathsf{v}}_{B_r}\big(\Gamma^+_f\big)(s) \\&\nonumber \le \frac{1}{2^s}\int_{\H^5} \Big|\mathbf 1_{\{x_2(u)>f_{y_2(u)}(h_u)\}}-\mathbf 1_{\{x_2(u)>f_{y_2(u)}(h_uZ^{-2^{2s}})\}}\Big|\nonumber\\&\qquad\qquad \times\mathbf 1_{[-Cr,Cr]}\big(y_2(u)\big)\mathbf 1_{\H^3\cap B_{Cr}}(h_u)\ud \mathcal{H}^6(u). \end{align} Recall that $\mathcal{H}^6$ is proportional to the Lebesgue measure on $\H^5$. 
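The coordinate identities just used (the graph formula~\eqref{eq:lip graph abcd}, the factorization $u=Y_2^{y_2(u)}h_uX_2^{x_2(u)}$, and $h_{uZ^\alpha}=h_uZ^\alpha$) can be checked mechanically. The following sketch is not taken from the paper: it assumes exponential coordinates in which $(\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e})$ stands for $\mathsf{a} X_1+\mathsf{b} X_2+\mathsf{c} Y_1+\mathsf{d} Y_2+\mathsf{e} Z$, together with a Baker--Campbell--Hausdorff product for the relations $[X_1,Y_1]=[X_2,Y_2]=Z$, normalized so as to reproduce~\eqref{eq:lip graph abcd}; the test function and sample points are arbitrary illustrative choices.

```python
# Sketch (assumptions as in the lead-in): H^5 in exponential coordinates
# (a, b, c, d, e), with a BCH group law for [X1, Y1] = [X2, Y2] = Z whose
# 1/2-normalization matches the displayed graph coordinates.

def mul(u, v):
    """Product u * v in H^5."""
    a1, b1, c1, d1, e1 = u
    a2, b2, c2, d2, e2 = v
    return (a1 + a2, b1 + b2, c1 + c2, d1 + d2,
            e1 + e2 + ((a1 * c2 - c1 * a2) + (b1 * d2 - d1 * b2)) / 2)

def h(u):
    """The H^3-component h_u = (x1(u), y1(u), z(u) + x2(u)*y2(u)/2)."""
    a, b, c, d, e = u
    return (a, c, e + b * d / 2)

def embed(p):
    """Include H^3 = span{X1, Y1, Z} back into H^5."""
    a, c, e = p
    return (a, 0.0, c, 0.0, e)

# 1) Intrinsic graph formula: v * X2^{f(v)} has the stated coordinates.
f = lambda a, c, d, e: a * d - 3.0 * c + e   # arbitrary test function
a, c, d, e = 1.0, 2.0, 3.0, 4.0
t = f(a, c, d, e)
assert mul((a, 0.0, c, d, e), (0.0, t, 0.0, 0.0, 0.0)) == (a, t, c, d, e - d * t / 2)

# 2) Slicing factorization u = Y2^{y2(u)} * h_u * X2^{x2(u)}.
u = (1.0, 2.0, -3.0, 4.0, 5.0)
Y2, X2 = (0.0, 0.0, 0.0, u[3], 0.0), (0.0, u[1], 0.0, 0.0, 0.0)
assert mul(mul(Y2, embed(h(u))), X2) == u

# 3) Vertical translation acts slice-wise: h_{u Z^alpha} = h_u * Z^alpha.
alpha = 7.0
assert h(mul(u, (0.0, 0.0, 0.0, 0.0, alpha))) == (h(u)[0], h(u)[1], h(u)[2] + alpha)
```

Under these conventions the three assertions reproduce, respectively, \eqref{eq:lip graph abcd}, the factorization following~\eqref{eq:def h(u)}, and the identity $h_{uZ^\alpha}=h_uZ^\alpha$ invoked above.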
Hence, if we continue to canonically identify $\mathsf{a} X_1+\mathsf{b} X_2+\mathsf{c} Y_1+\mathsf{d} Y_2+\mathsf{e} Z\in \H^5$ with $(\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e})\in \mathbb{R}^5$ and $\mathsf{a} X_1+\mathsf{c} Y_1+\mathsf{e} Z\in \H^3$ with $(\mathsf{a},\mathsf{c},\mathsf{e})\in \mathbb{R}^3$, then, recalling~\eqref{eq:def h(u)}, the integral in the right-hand side of~\eqref{eq:estimate on product instead of ball} is proportional to \begin{align*} &\int_{\mathbb{R}^5} \Big|\mathbf 1_{\big\{\mathsf{b}>f_{\mathsf{d}}\big(\mathsf{a},\mathsf{c},\mathsf{e}+\frac12\mathsf{b} \mathsf{d}\big)\big\}}-\mathbf 1_{\big\{\mathsf{b}>f_{\mathsf{d}}\big(\mathsf{a},\mathsf{c},\mathsf{e}+\frac12\mathsf{b} \mathsf{d}-2^{2s}\big)\big\}}\Big| \\&\qquad\qquad\times \mathbf 1_{[-Cr,Cr]}(\mathsf{d})\mathbf 1_{\H^3\cap B_{Cr}}\Big(\mathsf{a},\mathsf{c},\mathsf{e}+\frac12\mathsf{b} \mathsf{d}\Big)\ud(\mathsf{a},\mathsf{b},\mathsf{c},\mathsf{d},\mathsf{e})\\ &= \int_{\mathbb{R}^5} \Big|\mathbf 1_{\{\mathsf{b}>f_{\mathsf{d}}(\alpha,\gamma,\upepsilon)\}}-\mathbf 1_{\{\mathsf{b}>f_{\mathsf{d}}(\alpha,\gamma,\upepsilon-2^{2s})\}}\Big|\\&\qquad\qquad\times \mathbf 1_{[-Cr,Cr]}(\mathsf{d})\mathbf 1_{\H^3\cap B_{Cr}}(\alpha,\gamma,\upepsilon)\ud(\alpha,\mathsf{b},\gamma,\mathsf{d},\upepsilon), \end{align*} where for each fixed $\mathsf{b},\mathsf{d}\in \mathbb{R}$ we made the change of variable $(\alpha,\gamma,\upepsilon)=(\mathsf{a},\mathsf{c},\mathsf{e}+\mathsf{b} \mathsf{d}/2)$, a shear whose Jacobian equals $1$.
Since the restriction of the Hausdorff measure $\mathcal{H}^4$ to $\H^3$ is proportional to the Lebesgue measure on $\H^3\cong \mathbb{R}^3$, we conclude from the above considerations that for every $s\in \mathbb{R}$ we have \begin{align}\label{eq:v bar bound at s} \nonumber &\overline{\mathsf{v}}_{B_r}\big(\Gamma^+_f\big)(s)\\\nonumber &\lesssim \frac{1}{2^s} \int_{-Cr}^{Cr}\int_{\H^3\cap B_{Cr}}\\&\qquad \bigg(\int_{-\infty}^\infty \Big|\mathbf 1_{\{\xi>f_{\chi}(h)\}}-\mathbf 1_{\{\xi>f_{\chi}(hZ^{-2^{2s}})\}}\Big|\ud \xi\bigg)\ud \mathcal{H}^4(h)\ud \chi\nonumber\\ &= \frac{1}{2^s} \int_{-Cr}^{Cr}\int_{\H^3\cap B_{Cr}} \Big|f_\chi(h)-f_\chi\big(hZ^{-2^{2s}}\big)\Big|\ud \mathcal{H}^4(h)\ud \chi. \end{align} Next, fix $h_1,h_2\in \H^3$ and $\chi\in \mathbb{R}$. Denote $w_1\stackrel{\mathrm{def}}{=} Y_2^\chi h_1X_2^{f_\chi(h_1)}$ and $w_2\stackrel{\mathrm{def}}{=} Y_2^\chi h_2X_2^{f_\chi(h_2)}$. By design we have $w_1,w_2\in \Gamma_f$ and therefore we may apply~\eqref{eq:lambda lip condition} to deduce that \begin{align}\label{eq:to deduce lip sections} \nonumber &|f_\chi(h_1)-f_\chi(h_2)|=|x_2(w_1)-x_2(w_2)|\le \lambda d(w_1,w_2)\\ \nonumber&=\lambda d\Big(Y_2^\chi h_1X_2^{f_\chi(h_1)},Y_2^\chi h_2X_2^{f_\chi (h_2)}\Big) \\ \nonumber&=\lambda d\Big(\mathbf{0},h_1^{-1}h_2X_2^{f_\chi(h_2)-f_\chi(h_1)}\Big)\nonumber \\&\le \lambda\Big(Cd(h_1,h_2)+|f_\chi(h_1)-f_\chi(h_2)|\Big), \end{align} where the first inequality in~\eqref{eq:to deduce lip sections} uses~\eqref{eq:lambda lip condition}, the penultimate step of~\eqref{eq:to deduce lip sections} uses the left-invariance of the metric $d$ and the fact that $X_2$ commutes with all of the elements of $\H^3$, and the final step of~\eqref{eq:to deduce lip sections} uses~\eqref{eq:cc normalization} (twice).
Since $\lambda\in(0,1)$, the term $\lambda|f_\chi(h_1)-f_\chi(h_2)|$ in~\eqref{eq:to deduce lip sections} can be absorbed into the left-hand side, showing that $|f_\chi(h_1)-f_\chi(h_2)|\lesssim d(h_1,h_2)/(1-\lambda)$, i.e., for every fixed $\chi\in \mathbb{R}$ the function $f_\chi$ is Lipschitz on $\H^3$ with $\|f_\chi\|_{\Lip(\H^3)}\lesssim 1/(1-\lambda)$. In~\cite[Theorem~7.5]{AusNaoTes} the following inequality was proved for a Lipschitz function $\psi:\H^3\to \mathbb{R}$ and $\rho\in (0,\infty)$ as a consequence of a continuous $L_2$-variant of~\eqref{eq:discrete global intro}. Due to its quadratic nature, this variant can be proved using a decomposition into irreducible representations (i.e., a spectral argument). \begin{equation}\label{eq:psi ineq} \int_0^{\rho^2}\int_{B_\rho\cap \H^3} \left|\psi(h)-\psi\big(hZ^{-t}\big)\right|^2\ud\mathcal{H}^4(h)\frac{\ud t}{t^2}\lesssim \rho^4\|\psi\|_{\Lip(\H^3)}^2. \end{equation} Consequently, \begin{align} &\label{eq:use slice lip}\frac{r^5}{(1-\lambda)^2}\\ \nonumber&\gtrsim \int_{-Cr}^{Cr}\int_0^{(Cr)^2}\int_{B_{Cr}\cap \H^3} \left|f_\chi(h)-f_\chi\big(hZ^{-t}\big)\right|^2\ud\mathcal{H}^4(h)\frac{\ud t}{t^2}\ud \chi \\&= \int_{-\infty}^{\log_2(Cr)}\frac{2\log 2}{2^{2s}}\nonumber\\&\quad \times\int_{-Cr}^{Cr}\int_{B_{Cr}\cap \H^3} \left|f_\chi(h)-f_\chi\big(hZ^{-2^{2s}}\big)\right|^2\ud\mathcal{H}^4(h)\ud \chi\ud s \label{eq:change of variable}\\ &\label{eq:use CS on slice} \gtrsim \int_{-\infty}^{\log_2(Cr)} \frac{1}{r^5}\\ \nonumber &\quad\times \bigg(\frac{1}{2^s}\int_{-Cr}^{Cr}\int_{B_{Cr}\cap \H^3} \left|f_\chi(h)-f_\chi\big(hZ^{-2^{2s}}\big)\right|\ud\mathcal{H}^4(h)\ud \chi\bigg)^2 \ud s \\ &\gtrsim \frac{1}{r^5} \int_{-\infty}^{\log_2(Cr)}\overline{\mathsf{v}}_{B_r}\big(\Gamma^+_f\big)(s)^2 \ud s.\label{eq:use bound on v bar} \end{align} In~\eqref{eq:use slice lip} we applied~\eqref{eq:psi ineq} with $\psi=f_\chi$ for each $\chi\in [-Cr,Cr]$, while using $\|f_\chi\|_{\Lip(\H^3)}\lesssim 1/(1-\lambda)$. In~\eqref{eq:change of variable} we made the change of variable $t=2^{2s}$.
In~\eqref{eq:use CS on slice} we used the Cauchy--Schwarz inequality while noting that $\mathcal{H}^4(B_{Cr}\cap \H^3)\asymp r^4$, so that the domain of integration has measure $\asymp r\cdot r^4=r^5$. Finally, \eqref{eq:use bound on v bar} follows from an application of~\eqref{eq:v bar bound at s}. Now, \begin{align}\label{eq:got desired graph} &\nonumber \big\|\overline{\mathsf{v}}_{B_r}\big(\Gamma^+_f\big)\big\|_{L_2(\mathbb{R})}^2\\ \nonumber&=\int_{-\infty}^{\log_2(Cr)}\overline{\mathsf{v}}_{B_r}\big(\Gamma^+_f\big)(s)^2 \ud s+\int_{\log_2(Cr)}^\infty\overline{\mathsf{v}}_{B_r}\big(\Gamma^+_f\big)(s)^2 \ud s\\ &\lesssim \frac{r^{10}}{(1-\lambda)^2}+\int_{\log_2(Cr)}^\infty\frac{\mathcal{H}^6(B_r)^2}{2^{2s}} \ud s \nonumber\\&\asymp\frac{r^{10}}{(1-\lambda)^2}+\int_{\log_2(Cr)}^\infty \frac{r^{12}}{2^{2s}}\ud s\asymp \frac{r^{10}}{(1-\lambda)^2}, \end{align} where we estimated the second integral using the trivial bound $\overline{\mathsf{v}}_{B_r}(E)(s)\le \mathcal{H}^6(B_r)/2^s\asymp r^6/2^s$. By taking square roots of both sides of~\eqref{eq:got desired graph} we obtain the desired estimate~\eqref{eq:lip graph ineq intro}. It is important to stress that this proof does not work for intrinsic graphs in $\H^3$ because it relies on slicing $\H^5$ into copies of $\H^3$. There is no analogue of~\eqref{eq:psi ineq} for $1$-dimensional vertical slices of $\H^3$. \subsubsection{An intrinsic corona decomposition} In Section~\ref{sec:into lip graph} we presented the complete details of the proof of a crucial new ingredient that underlies the validity of Theorem~\ref{thm:v vs h cont intro}. This ingredient is the {\em only step} that relies on a property of $\H^5$ that is not shared by $\H^3$. We believe that it is important to fully explain this key ingredient within this extended abstract, but this means that we must defer the details of the formal derivation of Theorem~\ref{thm:v vs h cont intro} from its special case~\eqref{eq:lip graph ineq intro} to the full version~\cite{NY-full}.
The complete derivation requires additional terminology and notation, but the main idea is to produce ``intrinsic corona decompositions'' in the Heisenberg group. Corona decompositions are an established tool in analysis for reducing the study of certain singular integrals on $\mathbb{R}^n$ to the case of Lipschitz graphs, starting with seminal works of David~\cite{Dav84,Dav91} and Jones~\cite{Jon89,Jon90} on the Cauchy integral and culminating with the David--Semmes theory of quantitative rectifiability~\cite{DavidSemmesSingular,DSAnalysis}. Our adaptation of this technique is mostly technical, but it will also involve a conceptually new ingredient, namely the use of quantitative monotonicity for this purpose. We will now outline the remainder of the proof of Theorem~\ref{thm:v vs h cont intro}. Our arguments hold for Heisenberg groups of any dimension (including $\H^3$), but we avoid introducing new notation by continuing to work with $\H^5$ for now. The first step is to show that in order to establish Theorem~\ref{thm:v vs h cont intro} it suffices to prove that for every $r\in (0,\infty)$ and every $E\subset \H^5$, we have $\|\overline{\mathsf{v}}_{B_r}(E)\|_{L_2(\mathbb{R})}\lesssim r^5$ under the additional assumption that the sets $E$, $\H^5\setminus E$, and $\partial E$ are $r$-locally Ahlfors-regular, i.e., $\mathcal{H}^6(uB_\rho \cap E)\asymp \rho^6\asymp \mathcal{H}^6(vB_\rho \setminus E)$ and $\mathcal{H}^5(wB_\rho\cap \partial E)\asymp \rho^5$ for all $\rho\in (0,r)$ and $(u,v,w)\in E\times (\H^5\setminus E)\times \partial E$. We prove this by first applying a Heisenberg scaling and an approximation argument to reduce Theorem~\ref{thm:v vs h cont intro} to the case that $E$ is a ``cellular set,'' i.e., it is a union of parallelepipeds of the form $h[-\frac12,\frac12]^5$ as $h$ ranges over a subset of the discrete Heisenberg group $\H_\mathbb{Z}^5\subset \H^5$. Any such set is Ahlfors-regular on sufficiently small balls. 
We next argue that $E$ can be decomposed into sets that satisfy the desired local Ahlfors-regularity. The full construction of this decomposition appears in~\cite{NY-full}, but we remark briefly that it amounts to the following natural ``greedy'' iterative procedure. If one of the sets $E$, $\H^5\setminus E$, $\partial E$ is not locally Ahlfors-regular, then there is a smallest ball $B$ on which the density of $E$, $\H^5\setminus E$, or $\partial E$ is either too low or too high. By replacing $E$ by either $E\cup B$ or $E\setminus B$, we cut off a piece of $\partial E$ and decrease $\mathcal{H}^5(\partial E)$. Since $B$ was the smallest ball where Ahlfors-regularity fails, $E, \H^5\setminus E, \partial E$ are Ahlfors-regular on balls smaller than $B$. Repeating this process eventually reduces $E$ to the empty set, and we arrive at the conclusion of Theorem~\ref{thm:v vs h cont intro} for the initial set $E$ by proving (the local version of) the theorem for each piece of this decomposition, then summing the resulting inequalities. We will therefore suppose from now on that $E$, $\H^5\setminus E$ and $\partial E$ are all locally Ahlfors-regular. The next step is the heart of the matter: approximating $\partial E$ by intrinsic Lipschitz graphs so that we can use the fact that Theorem~\ref{thm:v vs h cont intro} holds for (pieces of) such graphs. The natural way to do this is to construct (an appropriate Heisenberg version of) a corona decomposition in the sense of~\cite{DavidSemmesSingular,DSAnalysis}. Such a decomposition covers $\partial E$ by two types of sets, called \emph{stopping-time regions} and \emph{bad cubes}. Stopping-time regions correspond to parts of $\partial E$ that are close to intrinsic Lipschitz graphs, and bad cubes correspond to parts of $\partial E$, like sharp corners, that are not. The multiplicity of this cover depends on the shape of $\partial E$ at different scales.
For example, $\partial E$ might look smooth on a large neighborhood of a point $x$, jagged at a medium scale, then smooth again at a small scale. If so, then $x$ is contained in a large stopping-time region, a medium-sized bad cube, and a second small stopping-time region. A cover like this is a corona decomposition if it satisfies a {\em Carleson packing condition} (see~\cite{NY-full}) that bounds its average multiplicity on any ball. We construct our cover following the well-established methods of~\cite{DavidSemmesSingular,DSAnalysis}. We start by constructing a sequence of nested partitions of $\partial E$ into pieces called \emph{cubes}; this is a standard construction due to Christ~\cite{ChristTb} and David~\cite{DavidWavelets} and only uses the Ahlfors regularity of $\partial E$. These partitions are analogues of the standard tilings of $\mathbb{R}^n$ into dyadic cubes. Next, we classify the cubes into \emph{good cubes}, which are close to a piece of a hyperplane, and \emph{bad cubes}, which are not. In order to produce a corona decomposition, there cannot be too many bad cubes, i.e., they must satisfy a Carleson packing condition. In~\cite{DavidSemmesSingular,DSAnalysis}, this condition follows from \emph{quantitative rectifiability}; the surface in question is assumed to satisfy a condition that bounds the sum of its (appropriately normalized) local deviations from hyperplanes. These local deviations are higher-dimensional versions of Jones' $\beta$-numbers~\cite{Jon89,Jon90}, and the quantitative rectifiability assumption leads to the desired packing condition. In the present setting, the packing condition follows instead from {\em quantitative non-monotonicity}. 
The concept of quantitative non-monotonicity of a set $E\subset \H^5$ (see~\cite{NY-full}) was first defined in~\cite{CKN09,CKN}, where the kinematic formula for the Heisenberg group was used to show that the total non-monotonicity of all of the cubes is at most a constant multiple of $\mathcal{H}^5(\partial E)$. This means that there cannot be many cubes that have large non-monotonicity. By a result of~\cite{CKN09,CKN}, if a set has small non-monotonicity, then its boundary is close to a hyperplane. Consequently, most cubes are close to hyperplanes and are therefore good. (The result in~\cite{CKN09,CKN} is stronger than what we need for this proof; it provides power-type bounds on how closely a nearly-monotone surface approximates a hyperplane. For our purposes, it is enough to have \emph{some} bound (not necessarily power-type) on the shape of nearly-monotone surfaces, and we can deduce the bound that we need by applying a quick compactness argument to a result from~\cite{CheegerKleinerMetricDiff} that states that if a set is {\em precisely monotone} (i.e., every line intersects its boundary in at most one point), then it is a half-space.) Next, we partition the good cubes into stopping-time regions by using an iterative construction that corrects overpartitioning that may have occurred when the Christ cubes were constructed. If $Q$ is a largest good cube that has not yet been treated and if $P$ is its approximating half-space, we find all of the descendants of $Q$ with approximating half-spaces that are sufficiently close to $P$. If we glue these half-spaces together using a partition of unity, the result is an intrinsic Lipschitz half-space that approximates all of these descendants. By repeating this procedure for each untreated cube, we obtain a collection of stopping-time regions.
These regions satisfy a Carleson packing condition because if a point $x\in \partial E$ is contained in many different stopping-time regions, then either $x$ is contained in many different bad cubes, or $x$ is contained in good cubes whose approximating hyperplanes point in many different directions. In either case, these cubes generate non-monotonicity, so there can only be a few points with large multiplicity. The construction above leads to the proof of Theorem~\ref{thm:v vs h cont intro} as follows. The vertical perimeter of $\partial E$ comes from three sources: the bad cubes, the approximating Lipschitz graphs, and the error incurred by approximating a stopping-time region by an intrinsic Lipschitz graph. By the Carleson packing condition, there are few bad cubes, and they contribute vertical perimeter on the order of $\mathcal{H}^5(\partial E)$. By the result of Section~\ref{sec:into lip graph}, the intrinsic Lipschitz graphs also contribute vertical perimeter on the order of $\mathcal{H}^5(\partial E)$. Finally, the vertical perimeter of the difference between a stopping-time region and an intrinsic Lipschitz graph is bounded by the size of the stopping-time region. The stopping-time regions also satisfy a Carleson packing condition, so these errors also contribute vertical perimeter on the order of $\mathcal{H}^5(\partial E)$. Summing these contributions, we obtain the desired bound. \subsection{Historical overview and directions for further research}\label{sec:previous} Among the well-established deep and multifaceted connections between theoretical computer science and pure mathematics, the Sparsest Cut Problem stands out for its profound and often unexpected impact on a variety of areas. Indeed, previous research on this question came hand-in-hand with the development of remarkable mathematical and algorithmic ideas that spurred many further works of importance in their own right. 
Because the present work belongs to this tradition, we will try to put it into context by elaborating further on the history of these investigations and describing directions for further research and open problems. Some of these directions will appear in forthcoming work. The first polynomial-time algorithm for Sparsest Cut with approximation ratio $O(\log n)$ was obtained in the important work~\cite{LR99}, which studied the notable special case of {\em Sparsest Cut with Uniform Demands} (see Section~\ref{sec:uniform} below). This work introduced a linear programming relaxation and developed influential techniques for its analysis, and it has led to a myriad of algorithmic applications. The seminal contributions~\cite{LLR95,AR98} obtained the upper bound $\rho_{\mathsf{GL}}(n)\lesssim \log n$ in full generality by incorporating a classical embedding theorem of Bourgain~\cite{Bou85}, thus heralding the transformative use of metric embeddings in algorithm design. The matching lower bound on the integrality gap of this linear program was proven in~\cite{LR99,LLR95}. This showed for the first time that Bourgain's embedding theorem is asymptotically sharp and was the first demonstration of the power of expander graphs in the study of metric embeddings. An $O(\sqrt{\log n})$ upper bound for the approximation ratio of the Goemans--Linial algorithm in the case of uniform demands was obtained in the important work~\cite{ARV09}. This work relied on a clever use of the concentration of measure phenomenon and introduced influential techniques such as a ``chaining argument'' for metrics of negative type and the use of expander flows. The work~\cite{ARV09} also had direct impact on results in pure mathematics, including combinatorics and metric geometry; see e.g.~the ``edge replacement theorem'' and the estimates on the observable diameter of doubling metric measure spaces in~\cite{NRS05}.
The best-known upper bound $\rho_{\mathsf{GL}}(n)\lesssim (\log n)^{\frac12+o(1)}$ of~\cite{ALN08} built on the (then very recent) development of two techniques: the chaining argument of~\cite{ARV09} (through its refined analysis in~\cite{Lee05}) and the {\em measured descent} embedding method of~\cite{KLMN05} (through its statement as a gluing technique for Lipschitz maps in~\cite{Lee05}). Another important input to~\cite{ALN08} was a re-weighting argument of~\cite{CGR08} that allowed for the construction of an appropriate ``random zero set'' from the argument of~\cite{ARV09,Lee05} (see~\cite{Nao10,Nao14} for more on this notion and its significance). The impossibility result~\cite{KV15} that refuted the Goemans--Linial conjecture relied on a striking link to complexity theory through the Unique Games Conjecture (UGC), as well as an interesting use of discrete harmonic analysis (through~\cite{Bou02}) in this context; see also~\cite{KR09} for an incorporation of a different tool from discrete harmonic analysis (namely~\cite{KKL88}, following~\cite{KN06}) for the same purpose, as well as~\cite{CKKRS06,CK07-hardness} for computational hardness. The best impossibility result currently known~\cite{KM13} for Sparsest Cut with Uniform Demands relies on the development of new pseudorandom generators. The idea of using the geometry of the Heisenberg group to bound $\rho_{\mathsf{GL}}(n)$ from below originated in~\cite{LN06}, where the relevant metric of negative type was constructed through a complex-analytic argument, and initial (qualitative) impossibility results were presented through the use of Pansu's differentiation theorem~\cite{Pan89} and the Radon--Nikod\'ym Property from functional analysis (see e.g.~\cite{BL00}). In~\cite{CK10}, it was shown that the Heisenberg group indeed provides a proof that $\lim_{n\to \infty} \rho_{\mathsf{GL}}(n)=\infty$.
This proof introduced a remarkable new notion of differentiation for $L_1$-valued mappings, which led to the use of tools from geometric measure theory~\cite{FSSCRectifiability,FSSC03} to study the problem. A different proof that $\H^3$ fails to admit a bi-Lipschitz embedding into $L_1$ was found in~\cite{CheegerKleinerMetricDiff}, where a classical notion of metric differentiation~\cite{Kir94} was used in conjunction with the novel idea to consider monotonicity of sets in this context, combined with a sub-Riemannian-geometric argument that yielded a classification of monotone subsets of $\H^3$. The main result of~\cite{CKN} finds a quantitative lower estimate for the scale at which this differentiation argument can be applied, leading to a lower bound of $(\log n)^{\Omega(1)}$ on $\rho_{\mathsf{GL}}(n)$. This result relies on a mixture of the methods of~\cite{CK10} and~\cite{CheegerKleinerMetricDiff} and requires overcoming obstacles that are not present in the original qualitative investigation. In particular, \cite{CKN} introduced the quantitative measures of non-monotonicity that we use in the present work to find crucial bounds in the construction of an intrinsic corona decomposition. The quantitative differentiation bound of~\cite{CKN} remains the best bound currently known, and it would be very interesting to discover the sharp behavior in this more subtle question. The desire to avoid the (often difficult) need to obtain sharp bounds for quantitative differentiation motivated the investigations in~\cite{AusNaoTes,LafforgueNaor}. In particular, \cite{AusNaoTes} devised a method to prove sharp (up to lower order factors) nonembeddability statements for the Heisenberg group based on a cohomological argument and a quantitative ergodic theorem. 
For Hilbert-space valued mappings, \cite{AusNaoTes} used a cohomological argument in combination with representation theory to prove the following quadratic inequality for every finitely supported function $\upphi:\H_\mathbb{Z}^5\to L_2$. \begin{multline}\label{eq:discrete global quadratic} \bigg(\sum_{t=1}^\infty \frac{1}{t^2}\sum_{h\in \mathbb{Z}^5} \big\|\upphi\big(hZ^t\big)-\upphi(h)\big\|_2^2\bigg)^{\frac12}\\\lesssim \bigg(\sum_{h\in \mathbb{Z}^5}\sum_{\sigma\in S}\big\|\upphi(h\sigma)-\upphi(h)\big\|_2^2\bigg)^{\frac12}. \end{multline} In~\cite{LafforgueNaor} a different approach based on Littlewood--Paley theory was devised, leading to the following generalization of~\eqref{eq:discrete global quadratic} that holds true for every $p\in (1,2]$ and every finitely supported $\upphi:\H^5_\mathbb{Z}\to L_p$. \begin{multline}\label{eq:discrete global p} \left(\sum_{t=1}^\infty \frac{1}{t^2}\bigg(\sum_{h\in \mathbb{Z}^5} \big\|\upphi\big(hZ^t\big)-\upphi(h)\big\|_p^p\bigg)^\frac{2}{p}\right)^{\frac12}\\\le C(p)\bigg(\sum_{h\in \mathbb{Z}^5}\sum_{\sigma\in S}\big\|\upphi(h\sigma)-\upphi(h)\big\|_p^p\bigg)^{\frac{1}{p}}, \end{multline} for some $C(p)\in (0,\infty)$. See~\cite{LafforgueNaor} for a strengthening of~\eqref{eq:discrete global p} that holds for general uniformly convex targets (using the recently established vector-valued Littlewood--Paley--Stein theory for the Poisson semigroup~\cite{MTX06}). These functional inequalities yield sharp non-embeddability estimates for balls in $\H^5_\mathbb{Z}$, but the method of~\cite{LafforgueNaor} inherently yields a constant $C(p)$ in~\eqref{eq:discrete global p} that satisfies $\lim_{p\to 1} C(p)=\infty$. The estimate~\eqref{eq:discrete local intro} that we prove here for $L_1$-valued mappings is an endpoint estimate corresponding to~\eqref{eq:discrete global p}, showing that the best possible $C(p)$ actually remains bounded as $p\to 1$. This confirms a conjecture of~\cite{LafforgueNaor} and is crucial for the results that we obtain here.
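As a consistency check (immediate from the statements), the endpoint $p=2$ of~\eqref{eq:discrete global p} recovers~\eqref{eq:discrete global quadratic}: when $p=2$ the inner exponent $\frac{2}{p}$ equals $1$ and the two nested sums merge, so~\eqref{eq:discrete global p} reads

```latex
\bigg(\sum_{t=1}^\infty \frac{1}{t^2}\sum_{h\in \mathbb{Z}^5}
\big\|\upphi\big(hZ^t\big)-\upphi(h)\big\|_2^2\bigg)^{\frac12}
\le C(2)\bigg(\sum_{h\in \mathbb{Z}^5}\sum_{\sigma\in S}
\big\|\upphi(h\sigma)-\upphi(h)\big\|_2^2\bigg)^{\frac12},
```

which is~\eqref{eq:discrete global quadratic} with implicit constant $C(2)$; the point of the endpoint estimate discussed above is that such an inequality persists, suitably interpreted, as $p\to 1$.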
As explained in Section~\ref{sec:into lip graph}, our proof of~\eqref{eq:lip graph ineq intro} uses the $\H^3$-analogue of~\eqref{eq:discrete global quadratic}. It should be mentioned at this juncture that the proofs of~\eqref{eq:discrete global quadratic} and~\eqref{eq:discrete global p} in~\cite{AusNaoTes,LafforgueNaor} were oblivious to the dimension of the underlying Heisenberg group.\footnote{Thus far in this extended abstract we recalled the definitions of $\H^5$ and $\H^3$ but not of higher-dimensional Heisenberg groups (since they are not needed for any of the applications that we obtain here). Nevertheless, it is obvious how to generalize either the matrix group or the group modelled on $\mathbb{R}^5$ that we considered above to obtain the Heisenberg group $\H^{2k+1}$ for any $k\in \mathbb{N}$.} An unexpected aspect of the present work is that the underlying dimension does play a role at the endpoint $p=1$, with the analogue of~\eqref{eq:discrete local intro} (or Theorem~\ref{thm:isoperimetric discrete}) for $\H^3$ being in fact {\em incorrect}; see Section~\ref{sec:3d} below. In the full version \cite{NY-full} of this paper we shall establish the $\H^{2k+1}$-analogue of Theorem~\ref{thm:isoperimetric discrete} for every $k\in \{2,3,\ldots\}$, in which case the implicit constant depends on $k$, and we shall also obtain the sharp asymptotic behavior as $k\to \infty$. As we recalled above, past progress on the Sparsest Cut Problem came hand-in-hand with meaningful mathematical developments. The present work is a culmination of a long-term project that is rooted in mathematical phenomena that are interesting not just for their relevance to approximation algorithms but also for their connections to the broader mathematical world. In the ensuing subsections we shall describe some further results and questions related to this general direction.
\subsubsection{The $3$-dimensional Heisenberg group}\label{sec:3d} The investigation of the possible validity of an appropriate analogue of Theorem~\ref{thm:isoperimetric discrete} with $\H_\mathbb{Z}^5$ replaced by $\H_\mathbb{Z}^3$ remains an intriguing mystery and a subject of ongoing research that will be published elsewhere. This ongoing work shows that Theorem~\ref{thm:isoperimetric discrete} fails for $\H_\mathbb{Z}^3$, but that there exists $p\in (2,\infty)$ such that for every $\Omega\subset \H_\mathbb{Z}^3$ we have \begin{equation}\label{eq:p version} \bigg(\sum_{t=1}^\infty \frac{|\partial^t_{\mathsf{v}}\Omega|^p}{t^{1+\frac{p}{2}}}\bigg)^{\frac{1}{p}}\lesssim |\partial_{\mathsf{h}}\Omega|. \end{equation} A simple argument shows that $\sup_{s\in \mathbb{N}} |\partial^s_{\mathsf{v}}\Omega|/\sqrt{s}\le \gamma |\partial_{\mathsf{h}}\Omega|$ for some universal constant $\gamma>0$. Hence, for every $t\in \mathbb{N}$ we have \begin{align*} \frac{|\partial^t_{\mathsf{v}}\Omega|^p}{t^{1+\frac{p}{2}}}\le\frac{|\partial^t_{\mathsf{v}}\Omega|^2}{t^2}\sup_{s\in \mathbb{N}} \left(\frac{|\partial^s_{\mathsf{v}}\Omega|}{\sqrt{s}}\right)^{p-2} \le \frac{|\partial^t_{\mathsf{v}}\Omega|^2}{t^2}(\gamma|\partial_{\mathsf{h}}\Omega|)^{p-2}. \end{align*}This implies that the left hand side of~\eqref{eq:p version} is bounded from above by a universal constant multiple of $|\partial_{\mathsf{v}}\Omega|^{2/p}|\partial_{\mathsf{h}}\Omega|^{1-2/p}$. Therefore~\eqref{eq:p version} is weaker than the estimate $|\partial_{\mathsf{v}}\Omega|\lesssim |\partial_{\mathsf{h}}\Omega|$ of Theorem~\ref{thm:isoperimetric discrete}. It would be interesting to determine the infimum over those $p$ for which~\eqref{eq:p version} holds true for every $\Omega\subset \H_\mathbb{Z}^3$, with our ongoing work showing that it is at least $4$. 
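For completeness, the summation behind the bound stated above is explicit (here, as in the discussion above, $|\partial_{\mathsf{v}}\Omega|$ denotes the discrete vertical perimeter $(\sum_{t=1}^\infty |\partial^t_{\mathsf{v}}\Omega|^2/t^2)^{1/2}$, the quantity appearing in Theorem~\ref{thm:isoperimetric discrete}):

```latex
\sum_{t=1}^\infty \frac{|\partial^t_{\mathsf{v}}\Omega|^p}{t^{1+\frac{p}{2}}}
\le (\gamma|\partial_{\mathsf{h}}\Omega|)^{p-2}
\sum_{t=1}^\infty \frac{|\partial^t_{\mathsf{v}}\Omega|^2}{t^2}
=(\gamma|\partial_{\mathsf{h}}\Omega|)^{p-2}\,|\partial_{\mathsf{v}}\Omega|^2,
```

and taking $p$-th roots bounds the left-hand side of~\eqref{eq:p version} by $\gamma^{1-\frac{2}{p}}\,|\partial_{\mathsf{v}}\Omega|^{\frac{2}{p}}|\partial_{\mathsf{h}}\Omega|^{1-\frac{2}{p}}$, as asserted.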
In fact, this work shows that for every $R\ge 2$ the $L_1$ distortion of the ball of radius $R$ in $\H_\mathbb{Z}^3$ is at most a constant multiple of $\sqrt[4]{\log R}$ --- asymptotically \emph{less} than the distortion of the ball of the same radius in $\H_\mathbb{Z}^5$. It would be interesting to determine the correct asymptotics of this distortion, with the best-known lower bound remaining that of~\cite{CKN}, i.e., a constant multiple of $(\log R)^\delta$ for some universal constant $\delta>0$. It should be stressed, however, that the algorithmic application of Theorem~\ref{thm:isoperimetric discrete} that is obtained here uses it as stated for $\H_\mathbb{Z}^5$, and understanding the case of $\H_\mathbb{Z}^3$ would not yield any further improvement. So, while the above questions are geometrically and analytically interesting in their own right, they are not needed for applications that we currently have in mind. \subsubsection{Metric embeddings}\label{sec:embed intro} Theorem~\ref{thm:distortion R} also yields a sharp result for the general problem of finding the asymptotically largest-possible $L_1$ distortion of a finite doubling metric space with $n$ points. A metric space $(X,d_X)$ is said to be $K$-doubling for some $K\in \mathbb{N}$ if every ball in $X$ (centered anywhere and of any radius) can be covered by $K$ balls of half its radius. By~\cite{KLMN05}, \begin{equation}\label{eq:descent} c_1(X,d_X)\lesssim \sqrt{(\log K)\log |X|}. \end{equation} As noted in~\cite{GKL03}, the dependence on $|X|$ in~\eqref{eq:descent}, but with a worse dependence on $K$, follows by combining results of~\cite{Ass83} and~\cite{Rao99} (the dependence on $K$ that follows from~\cite{Ass83,Rao99} was improved significantly in~\cite{GKL03}). The metric space $(\mathbb{Z}^5,d_W)$ is $O(1)$-doubling because $|\mathcal{B}_{R}|\asymp R^6$ for every $R\ge 1$.
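The polynomial volume growth underlying this doubling property is easy to observe computationally. The following illustration is not from the paper: it runs a breadth-first search in the $3$-dimensional discrete Heisenberg group, in the assumed standard presentation as $\mathbb{Z}^3$ with product $(a,b,c)(a',b',c')=(a+a',b+b',c+c'+ab')$ and symmetric generating set $\{x^{\pm 1},y^{\pm 1}\}$; word-balls there grow like $R^4$, the lower-dimensional analogue of $|\mathcal{B}_R|\asymp R^6$ for $\H_\mathbb{Z}^5$.

```python
# Illustration (assumed model): the discrete Heisenberg group H^3_Z as Z^3
# with (a,b,c)(a',b',c') = (a+a', b+b', c+c'+a*b'), word metric generated by
# x = (1,0,0), y = (0,1,0) and their inverses.
from collections import deque

def mul(u, v):
    a1, b1, c1 = u
    a2, b2, c2 = v
    return (a1 + a2, b1 + b2, c1 + c2 + a1 * b2)

GENS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]

def ball_sizes(radius):
    """Return [|B_0|, |B_1|, ..., |B_radius|] for the word metric on H^3_Z."""
    dist = {(0, 0, 0): 0}
    queue = deque([(0, 0, 0)])
    while queue:
        u = queue.popleft()
        if dist[u] == radius:
            continue  # do not expand beyond the requested radius
        for g in GENS:
            v = mul(u, g)
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    sizes = [0] * (radius + 1)
    for r in dist.values():
        sizes[r] += 1
    for r in range(1, radius + 1):
        sizes[r] += sizes[r - 1]  # cumulative: |B_r| counts d(u, 0) <= r
    return sizes

sizes = ball_sizes(8)
print(sizes)  # |B_1| = 5, |B_2| = 17, and |B_R| grows like R^4
```

The degree-$4$ growth (rather than the naive $3$) reflects the fact that the central direction is reached quadratically slowly, which is the same anisotropy that makes $\H_\mathbb{Z}^3$ only $O(1)$-doubling rather than a lattice in an abelian group.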
Theorem~\ref{thm:distortion R} shows that~\eqref{eq:descent} is sharp when $K=O(1)$, thus improving over the previously best-known construction~\cite{LS11} of arbitrarily large $O(1)$-doubling finite metric spaces $\{(X_i,d_i)\}_{i=1}^\infty$ for which $c_1(X_i,d_i)\gtrsim \sqrt{(\log |X_i|)/\log\log |X_i|}$. Probably~\eqref{eq:descent} is sharp for every $K\le |X|$; conceivably this could be proven by incorporating Theorem~\ref{thm:distortion R} into the argument of~\cite{JLM11}, but we shall not pursue this here. Theorem~\ref{thm:distortion R} establishes for the first time the existence of a metric space that simultaneously has several useful geometric properties and poor (indeed, worst possible) embeddability into $L_1$. By virtue of being $O(1)$-doubling, the metric space $(\mathbb{Z}^5,d_W)$ also has Markov type $2$ due to~\cite{DLP13} (which improves over~\cite{NPSS06}, where the conclusion that it has Markov type $p$ for every $p<2$ was obtained). For more on the bi-Lipschitz invariant Markov type and its applications, see~\cite{Bal92,Nao12}. The property of having Markov type $2$ is shared by the construction of~\cite{LS11}, which is also $O(1)$-doubling, but $(\mathbb{Z}^5,d_W)$ has additional features that the example of~\cite{LS11} fails to have. For one, it is a group; for another, by~\cite{Li14,Li16} we know that $(\mathbb{Z}^5,d_W)$ has Markov convexity $4$ (and no less). (See~\cite{LNP09,MN13} for background on the bi-Lipschitz invariant Markov convexity and its consequences.) By~\cite[Section~3]{MN13} the example of~\cite{LS11} does not have Markov convexity $p$ for any finite $p$. No examples of arbitrarily large finite metric spaces $\{(X_i,d_i)\}_{i=1}^\infty$ with bounded Markov convexity (and Markov convexity constants uniformly bounded) such that $c_1(X_i,d_i)\gtrsim\sqrt{\log |X_i|}$ were previously known to exist. 
Analogous statements are known to be impossible for Banach spaces~\cite{MW78}, so it is natural in the context of the Ribe program (see the surveys~\cite{Nao12,Bal13} for more on this research program) to ask whether there is a potential metric version of~\cite{MW78}; the above discussion shows that there is not. \subsubsection{The Sparsest Cut Problem with Uniform Demands}\label{sec:uniform} An important special case of the Sparsest Cut Problem is when the demand matrix $D$ is the matrix $\mathbf 1_{\{1,\ldots,n\}\times\{1,\ldots,n\}}\in M_n(\mathbb{R})$ all of whose entries equal $1$ and the capacity matrix $C$ lies in $M_n(\{0,1\})$, i.e., all its entries are either $0$ or $1$. This is known as the {\em Sparsest Cut Problem with Uniform Demands}. In this case $C$ can also be described as the adjacency matrix of a graph $G$ whose vertex set is $\{1,\ldots,n\}$ and whose edge set consists of those unordered pairs $\{i,j\}\subset \{1,\ldots,n\}$ for which $C_{ij}=1$. With this interpretation, given $A\subset \{1,\ldots,n\}$ the numerator in~\eqref{eq:def opt} equals twice the number of edges that are incident to $A$ in $G$. And, since $D=\mathbf 1_{\{1,\ldots,n\}\times\{1,\ldots,n\}}$, the denominator in~\eqref{eq:def opt} is equal to $2|A|(n-|A|)\asymp n\min\{|A|,|\{1,\ldots,n\}\setminus A|\}$. So, the Sparsest Cut Problem with Uniform Demands asks for an algorithm that takes as input a finite graph and outputs a quantity which is bounded above and below by universal constant multiples of its {\em conductance}~\cite{JS89} divided by $n$. The Goemans--Linial integrality gap corresponding to this special case is $$ \uprho_{\mathsf{GL}}^{\mathrm{unif}}(n)\stackrel{\mathrm{def}}{=} \sup_{\substack{C\in M_n(\{0,1\})\\ C\ \mathrm{symmetric}}} \frac{\mathsf{OPT}(C,\mathbf 1_{\{1,\ldots,n\}\times\{1,\ldots,n\}})}{\mathsf{SDP}(C,\mathbf 1_{\{1,\ldots,n\}\times \{1,\ldots,n\}})}. 
$$ The Goemans--Linial algorithm furnishes the best-known approximation ratio also in the case of uniform demands. By the important work~\cite{ARV09} we have $\uprho_{\mathsf{GL}}^{\mathrm{unif}}(n)\lesssim \sqrt{\log n}$, improving over the previous bound $\uprho_{\mathsf{GL}}^{\mathrm{unif}}(n)\lesssim \log n$ of~\cite{LR99}. As explained in~\cite{CKN09}, the present approach based on (fixed dimensional) Heisenberg groups cannot yield a lower bound on $\uprho_{\mathsf{GL}}^{\mathrm{unif}}(n)$ that tends to $\infty$ with $n$. The best-known lower bound~\cite{KM13} is $\uprho_{\mathsf{GL}}^{\mathrm{unif}}(n)\ge \exp(c\sqrt{\log\log n})$ for some universal constant $c>0$, improving over the previous bound $\uprho_{\mathsf{GL}}^{\mathrm{unif}}(n)\gtrsim \log\log n$ of~\cite{DKSV06}. Determining the asymptotic behavior of $\uprho_{\mathsf{GL}}^{\mathrm{unif}}(n)$ remains an intriguing open problem. \bibliographystyle{ACM-Reference-Format}
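As a toy illustration of the quantity just discussed (not of the Goemans--Linial relaxation itself), the uniform sparsest cut value of a small graph can be computed by brute force: as explained above, for uniform demands the ratio in~\eqref{eq:def opt} reduces to $\mathrm{cut}(A)/\big(|A|(n-|A|)\big)$ up to a factor of $n$. The following sketch enumerates all cuts.

```python
# Brute-force uniform sparsest cut of a small graph: an illustrative sketch
# of the quantity min over nonempty proper A of cut(A) / (|A| * (n - |A|)).
from itertools import combinations

def uniform_sparsest_cut(n, edges):
    """n: number of vertices (labelled 0..n-1); edges: list of pairs."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for A in combinations(range(n), k):
            A = set(A)
            cut = sum(1 for (u, v) in edges if (u in A) != (v in A))
            best = min(best, cut / (len(A) * (n - len(A))))
    return best

# 4-cycle: the best cut severs 2 edges with |A| = 2, giving 2/(2*2) = 0.5.
print(uniform_sparsest_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 0.5
```

This exhaustive search is of course exponential in $n$; the point of the approximation algorithms discussed above is precisely to avoid it.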
\section{Introduction} X-ray emission from the hot coronae of photospherically cool stars arises from magnetically confined and heated plasma with temperatures in excess of $10^{6}$\,K (see the review by G\"udel 2004). X-ray observations of cool stars with convective envelopes have long since shown that X-ray activity increases with rotation rate and have led to the paradigm that a magnetic dynamo process produces and maintains the required magnetic fields, in analogy to processes observed to occur in the Sun (e.g. Pallavicini et al. 1981; Mangeney \& Praderie 1984). The influence of a regenerative dynamo is supported by observations of coronal activity in stars at a range of masses. These demonstrate, in accordance with expectations from dynamo models, that rotation is not the only important parameter. The best correlations are found between increasing magnetic activity and the inverse of the Rossby number (e.g. Noyes et al. 1984; Dobson \& Radick 1989; Pizzolato et al. 2003), where Rossby number is $P/\tau_c$, the ratio of stellar rotation period, $P$, to the convective turnover time, $\tau_c$. According to stellar models, $\tau_c$ increases towards lower masses (e.g. Gilliland 1986; Kim \& Demarque 1996) and hence, for a given rotation period, a K-dwarf has a smaller Rossby number and is more magnetically active than a G-dwarf, where magnetic activity is expressed in a form normalised by the bolometric luminosity (e.g. the ratio of coronal X-ray to bolometric luminosity, $L_x/L_{\rm bol}$). Many young, cool stars have not lost significant angular momentum and rotate much more rapidly than the Sun, leading to coronal X-ray luminosities higher by several orders of magnitude. At fast rotation rates, magnetic activity (as measured in the corona and chromosphere) appears to ``saturate'' (e.g. Vilhu \& Walter 1987; Stauffer et al. 1994).
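The Rossby number bookkeeping used throughout this paper is simple enough to state as code; the following sketch combines a rotation period with a convective turnover time, using the values listed for the first star in Table~1 below as a check.

```python
import math

# Rossby number N_R = P / tau_c; magnetic activity correlates with 1/N_R.
# Example values: P = 0.865 d and log10(tau_c/d) = 1.80, as listed for the
# first star in Table 1.
def log_rossby(period_days, log_tau_c_days):
    return math.log10(period_days) - log_tau_c_days

print(round(log_rossby(0.865, 1.80), 2))  # -1.86, matching Table 1
```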
Saturated coronal activity is manifested as a plateau of $L_x/L_{\rm bol} \simeq 10^{-3}$ in G-dwarfs with rotation periods less than about 3 days. In lower mass stars the plateau in coronal activity is at a similar level but the rotation period at which it first occurs is longer. A unified picture emerges by plotting X-ray activity against Rossby number (St{\c e}pie\'n 1994; Randich et al. 1996; Patten \& Simon 1996; Pizzolato et al. 2003), such that coronal saturation occurs at Rossby numbers of about 0.1 in stars of spectral types G, K and M. This unification in terms of a parameter associated with dynamo efficiency suggests that coronal saturation reflects a saturation of the dynamo itself (Vilhu \& Walter 1987), although other explanations, such as a redistribution of radiative losses to other wavelengths (Doyle 1996) or changes in magnetic topology with fast rotation (Solanki, Motamen \& Keppens 1997; Jardine \& Unruh 1999; Ryan, Neukirch \& Jardine 2005), must also be considered. At rotation rates $\sim 5$ times faster than required for saturation it appears that coronal activity turns down again -- a phenomenon dubbed ``super-saturation'' (Prosser et al. 1996). Examples of super-saturated G- and K-stars, with $L_x/L_{\rm bol} \simeq 10^{-3.5}$, have been found in a number of young open clusters (Stauffer et al. 1994, 1997; Patten \& Simon 1996; Randich 1998; Jeffries et al. 2006) and among the fast rotating components of contact W~UMa binaries (Cruddace \& Dupree 1984; St{\c e}pie\'n, Schmitt \& Voges 2001), and super-saturation has been suggested as the reason for lower X-ray activity in the fastest rotating, very young pre-main-sequence stars (Stassun et al. 2004; Preibisch et al. 2005; Dahm et al. 2007). Possible explanations for coronal super-saturation include negative feedback in the dynamo (Kitchatinov, R\"udiger \& K\"uker 1994), decreasing coverage by active regions (St{\c e}pie\'n et al. 2001), reorganisation of the coronal magnetic field (Solanki et al.
1997) or centrifugal stripping of the corona (Jardine 2004). An important test of these ideas is to look at the coronal properties of fast rotators across a wide range of masses. In particular, it is vital to gauge whether saturation and super-saturation occur at fixed values of rotation period or Rossby number. This would illuminate which physical mechanisms are responsible. One of the principal gaps in our knowledge is the behaviour of coronal emission for ultra-fast rotating M-dwarfs. These have larger convection zones (as a fraction of the star), longer convective turnover times and hence smaller Rossby numbers at a given rotation period than G- and K-dwarfs. Of course M-dwarfs also have much lower bolometric luminosities than G- or K-dwarfs and so, at a given magnetic activity level, they are harder to observe in the young open clusters where the majority of ultra-fast rotators are found. The most comprehensive work so far was by James et al. (2000) using X-ray observations for a small, inhomogeneous sample of fast rotating low-mass stars from the field and open clusters. They found that, like G- and K-type stars, M-dwarfs with periods below $\sim 6$ days and Rossby numbers below 0.1 showed saturated levels of X-ray emission with $L_x/L_{\rm bol} \simeq 10^{-3}$. They also claimed tentative evidence for super-saturation in the fastest rotating M-dwarfs, with periods of 0.2--0.3 days. In this paper we re-visit the question of saturation and super-saturation of coronal emission in fast-rotating M-dwarfs. We analyse new, deep {\it XMM-Newton} observations of a large, homogeneous sample of rapidly rotating M-dwarfs identified in the open cluster NGC~2547 by Irwin et al. (2008). NGC~2547 has an age of 35--38\,Myr (Jeffries \& Oliveira 2005; Naylor \& Jeffries 2006) and a rich population of low-mass stars (Jeffries et al. 2004). 
However, the cluster is at $\sim 400$\,pc, and while previous X-ray observations with {\it ROSAT} (Jeffries \& Tolley 1998) and {\it XMM-Newton} (Jeffries et al. 2006) demonstrated an X-ray active low-mass population, they were insufficiently sensitive to probe the coronal activity of its M-dwarfs in any detail. In section~2 we describe the new {\it XMM-Newton} observations of NGC~2547 and the identification of X-ray sources with rapidly rotating M-dwarfs from the Irwin et al. (2008) catalogue. In section~3 we combine X-ray and optical data, estimate coronal activity levels and examine the evidence for coronal saturation and super-saturation using a homogeneous sample of fast rotating M-dwarfs several times larger than considered by James et al. (2000). In section 4 we compare our results to those compiled in the literature for G- and K-stars and for other samples of M-dwarfs. In section 5 we discuss our results in the context of competing models for saturation/super-saturation and our conclusions are summarised in section 6. \section{XMM-Newton Observations of NGC~2547} \label{observations} NGC 2547 was observed with {\it XMM-Newton} between UT 22:30:03 on 12 November 2007 and UT 09:30:03 on 14 November 2007 using the European Photon Imaging Camera (EPIC) instrument, for a nominal exposure time of 125.8\,ks (Observation ID 0501790101). The two EPIC-MOS cameras and the EPIC-PN camera were operated in full frame mode (Turner et al. 2001; Str\"uder et al. 2001), using the medium filter to reject optical light. The nominal pointing position of the observation was RA\,$=08$h\,10m\,06.1s, Dec\,$=-49$d\,15m\,42.9s (J2000.0). \subsection{Source Detection} Version 7.1 of the {\it XMM-Newton} Science Analysis System was used for the initial data reduction and source detection. Data from the three cameras were individually screened for high background periods and these time intervals were excluded from all subsequent analysis.
Observation intervals were excluded where the total count rate for events with energies $>10$\,keV exceeded 0.35\,s$^{-1}$ and 1.0\,s$^{-1}$ for the MOS and PN detectors respectively. The remaining useful exposure times were 107.3\,ks, 104.8\,ks and 87.4\,ks for the MOS1, MOS2 and PN cameras respectively, which can be compared with the equivalent exposure times of 29.0\,ks, 29.4\,ks and 13.6\,ks in the less sensitive observation analysed by Jeffries et al. (2006). Images were created using the {\sc evselect} task and a spatial sampling of 2 arcseconds per pixel. The event lists were filtered to exclude anomalous pixel patterns and edge effects by including only those events with ``pattern''\,$\leq 12$ for the MOS detectors and $\leq 4$ for the PN detectors. The contrast between background and source events was also increased by retaining only those events in channels corresponding to energies of 0.3--3\,keV. The {\sc edetect\_chain} task was used to find sources with a combined maximum log likelihood value greater than 10 (approximately equivalent to $-\ln p$, where $p$ is the probability that the ``source'' is due to a background fluctuation), in all three instruments combined, over the 0.3--3\,keV energy range. We expect 1--2 spurious X-ray detections at this level of significance, though they would be highly unlikely to correlate with NGC~2547 members, so will not hamper any analysis in this paper. The individual images from each instrument were source-searched first to confirm there were no systematic differences in the astrometry of the brightest sources. Count rates in each detector were determined using vignetting-corrected exposure maps created within the same task. In addition, count rates were determined for each source in the 0.3--1.0\,keV and 1.0--3.0\,keV bands separately, in order to form a hardness ratio. A total of 323 significant X-ray sources were found.
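The conversion from the detection-likelihood threshold to an expected number of spurious sources can be sketched as follows; the number of independent detection cells used here is an assumed, purely illustrative value, not one quoted in this paper.

```python
import math

# The maximum log likelihood ML is approximately -ln p, where p is the
# probability that a "source" is a background fluctuation, so p = exp(-ML).
def expected_spurious(ml_threshold, n_cells):
    # n_cells: assumed number of independent detection cells (illustrative)
    return n_cells * math.exp(-ml_threshold)

# With ~3e4 cells, an ML > 10 threshold admits of order one spurious source.
print(expected_spurious(10.0, 3.0e4))
```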
Some of these only have count rates measured in a subset of the three instruments because they fell in gaps between detectors, on hot pixels or lay outside the field of view. In addition we decided only to retain count rates for analysis if they had a signal-to-noise ratio greater than 3, which resulted in the removal of three sources from our list. To check the EPIC astrometric solution, we cross-correlated all the brightest X-ray sources (those detected with a maximum log likelihood greater than 100) against a list of photometrically selected NGC~2547 members compiled by Naylor et al. (2002 -- their Table~6), which is based on D'Antona \& Mazzitelli (1997) isochrones and $BVI$ photometry, and which incorporates bright cluster members from Clari\'a (1982). There were 98 correlations found within 6 arcseconds of the nominal X-ray position and as discussed in Jeffries et al. (2006), where a similar procedure was followed, these correlations are very likely to be the genuine optical counterparts of the X-ray sources. The mean offset between the X-ray and optical positions was 0.11 arcseconds in RA and 0.59 arcseconds in Dec. As the optical catalogues have an absolute accuracy better than 0.2 arcseconds and an internal precision of about 0.05\,arcseconds, the X-ray positions were corrected for these offsets. The remaining dispersion in the offsets indicates an additional 1 arcsecond uncertainty (in addition to the quoted astrometric uncertainty from the source searching routines) in the X-ray positions, concurring with the current astrometric calibration assessment of the {\it XMM-Newton} science operations centre (Guainazzi 2010). \subsection{Cross-correlation with the Irwin catalogue} The purpose of this paper is to study the properties of fast-rotating M-dwarfs, so an investigation of the full X-ray source population is deferred to a later paper.
Here, we discuss cross-correlations between the astrometrically corrected X-ray source list and the catalogue of cool stars with known rotation periods in NGC 2547, given by Irwin et al. (2008). The Irwin et al. (2008) catalogue contains precise positions tied to the 2MASS reference frame, rotation periods and photometry in the $V$ and $I$ bands. We found significant systematic differences (of up to 0.2 mag) between the photometry of Irwin et al. (2008) and that found in Naylor et al. (2002) for stars common to both. As the accuracy of the Naylor et al. photometry has support from an independent study by Lyra et al. (2006), we transformed the Irwin photometry using the following best-fit relationships between the Naylor et al. and Irwin et al. photometry: \begin{equation} V = 1.029\, V_{\rm Irwin} - 0.428 \ \ \ \ {\rm rms}=0.08\,{\rm mag}, \end{equation} \begin{equation} V-I = 1.177\, (V-I)_{\rm Irwin} - 0.305 \ \ \ {\rm rms}=0.10\,{\rm mag}, \end{equation} where the calibrations transform the photometry onto the Johnson-Cousins calibration used by Naylor et al. and are valid for $14<V<21$ and $1.2<(V-I)<3.4$. We take the rms deviation from these relationships as the photometric uncertainty in subsequent analysis. A maximum correlation radius between X-ray and optical sources of 5 arcseconds resulted in 68 correlations from the 97 Irwin et al. (2008) objects that are within the EPIC field of view (there were no correlations between 5 arcseconds and our nominal 6 arcsecond acceptance threshold). The missing objects were predominantly the optically faintest (see section 3.2). Experiments involving offsetting the X-ray source positions by 30 arcseconds in random directions suggest that fewer than 1 of these correlations would be expected by chance. The details of the correlations along with count rates, hardness ratios and correlation separations appear in Table~1 (available fully in electronic form only). 
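The two calibration equations above are straightforward to apply; the sketch below (with an invented input star) simply encodes them, together with the quoted rms scatters.

```python
# Transform Irwin et al. (2008) photometry onto the Naylor et al. (2002)
# Johnson-Cousins system using the two best-fit relations in the text;
# the relations are valid for 14 < V < 21 and 1.2 < V-I < 3.4.
def transform_irwin(v_irwin, vi_irwin):
    v = 1.029 * v_irwin - 0.428        # rms scatter 0.08 mag
    vi = 1.177 * vi_irwin - 0.305      # rms scatter 0.10 mag
    return v, vi

# Invented example star:
v, vi = transform_irwin(18.0, 2.0)
```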
\subsection{Fluxes and coronal activity} To assess magnetic activity levels, X-ray count rates were converted into fluxes using a single conversion factor for each instrument. The use of a single conversion factor is necessitated because few of the X-ray sources that correlate with the Irwin et al. (2008) sample have sufficient counts ($>500$) to justify fitting a complex spectral model. The median source is detected in the EPIC-PN camera with about 150 counts and a signal-to-noise ratio of only 10. Instead we designed a spectral model that agrees with the mean hardness ratio (HR) in the EPIC-PN camera, defined as $(H-S)/(H+S)$ where $S$ is the 0.3--1.0\,keV count rate and $H$ is the 1.0--3.0\,keV count rate. Our starting point was the analysis of an earlier, much shorter {\em XMM-Newton} observation of NGC~2547 (Jeffries et al. 2006). This showed that single-temperature thermal plasma models were insufficiently complex to fit the spectra of active stars in NGC~2547. A two-temperature ``{\sc mekal}'' model (Mewe, Kaastra \& Liedahl 1995) provided a satisfactory description, with $T_1 \simeq 0.6$\,keV, $T_2 \simeq 1.5$\,keV and an emission measure ratio (hot/cool) of about 0.7, which was chosen to approximately match the mean HR. This crude approximation to the differential emission measure (DEM) of the coronal plasma reasonably matches detailed work on the coronal DEM of several nearby, rapidly-rotating low-mass stars, which show a DEM maximum at around $10^{7}$\,K (Garc\'ia-Alvarez et al. 2008). It was also established in Jeffries et al. (2006) that the coronal plasma was best fitted with a sub-solar metallicity ($Z \simeq 0.3$), which seems to be a common feature for very active stars, including fast-rotating K- and M-dwarfs (e.g. Briggs \& Pye 2003; Garc\'ia-Alvarez et al. 2008).
Adopting the same model we have used the software package {\sc xspec} and instrument response files appropriate for the EPIC-PN camera at the time of the observations, to calculate a conversion factor from 0.3--3.0\,keV count rates to an {\em unabsorbed} 0.3--3.0\,keV flux. We assumed an interstellar absorbing column density of neutral hydrogen $N_H = 3\times 10^{20}$\,cm$^{-2}$ (see section 2.5). The derived flux conversion factor is $1.68\times 10^{-12}$\,erg\,cm$^{-2}$ per count and the derived HR is -0.42, which closely matches the mean HR in our sample (see section~\ref{results}). Analogous conversion factors for the EPIC-MOS cameras were derived by dividing the EPIC-PN conversion factor by the weighted mean ratio of the observed MOS and PN count rates. This gave conversion factors of $6.22\times10^{-12}$\,erg\,cm$^{-2}$ per count for both of the EPIC-MOS cameras. The unabsorbed 0.3--3\,keV X-ray flux for each detected Irwin et al. (2008) star is found from a weighted average of fluxes from each detector. Uncertainties in these fluxes arise from the count rate errors, but we added a further 10 per cent systematic error in quadrature to each detector count rate to account for uncertainties in the instrument response and in the point spread function modelling in the {\sc edetect\_chain} task (e.g. Saxton 2003; Guainazzi 2010). From the average fluxes the coronal activity indicator $L_x/L_{\rm bol}$ was calculated, using the corrected $V$ magnitudes, an extinction $A_V=0.19$, a reddening $E(V-I)=0.077$ (see Clari\'a 1982; Naylor et al. 2002) and the relationship between intrinsic $V-I$ and bolometric correction described by Naylor et al. (2002). For the red stars in our sample with $V-I> 1.5$, these bolometric corrections are based on the empirical measurements of Leggett et al. (1996). The bolometric corrections and $L_x/L_{\rm bol}$ values are reported in Table~1. \begin{table*} \caption{X-ray detections of sources in the Irwin et al.
(2008) catalogue of NGC~2547 members with rotation periods. The Table is available electronically and contains 69 rows. Only two rows are shown here as a guide to form and content. Columns are as follows: (1) Identification from Irwin et al. (2008), (2-3) RA and Dec (J2000.0, from Irwin et al. 2008), (4-5) $V$ and $V-I$ magnitudes (modified from the Irwin et al. 2008 values -- see section 2.2), (6) rotation period (from Irwin et al. 2008), (7) $V$-band bolometric correction (see section 2.3), (8) $\log$ bolometric luminosity (assuming a distance of 400\,pc), (9) $\log$ convective turnover time (see equation~3), (10) $\log$ Rossby Number, (11-12) stellar mass and radius (estimated from the Siess et al. 2000 models), (13) Keplerian co-rotation radius as a multiple of the stellar radius, (14-15) RA and Dec of the X-ray source, (16) the maximum log likelihood of the detection, (17) separation from the optical counterpart, (18-19) Total PN count rate (0.3--3\,keV), (20-21) PN count rate (0.3--1\,keV, the ``$S$'' band), (22-23) PN count rate (1--3\,keV, the ``$H$'' band), (24-25) PN hardness ratio (defined as $(H-S)/(H+S)$), (26-27) Total MOS1 count rate (0.3--3\,keV), (28-29) Total MOS2 count rate (0.3--3\,keV), (30-31) X-ray flux (0.3--3\,keV), (32-33) $\log$ X-ray to bolometric flux ratio, (34-35) cross identification with Naylor et al.
(2002) where available.} \begin{flushleft} \begin{tabular}{lccc@{\hspace*{2mm}}ccc@{\hspace*{2mm}}c@{\hspace*{2mm}}c@{\hspace*{2mm}}c@{\hspace*{2mm}}c@{\hspace*{2mm}}c@{\hspace*{2mm}}c} \hline Name & RA & Dec & $V$ & $V-I$ & P & BC &$\log L_{\rm bol}/L_{\odot}$& $\log \tau_c$ & $\log N_R$ & $M/M_{\odot}$ & $R/R_{\odot}$ & $R_{\rm Kepler}/R$ \\ &\multicolumn{2}{c}{(J2000)}& & & (d) & & & (d) & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) &(8) & (9) &(10) & (11) &(12) &(13) \\ \hline N2547-1-5-417& 8 09 25.91& -49 09 58.4& 18.69& 2.80& 0.865& -2.28& -1.40& 1.80& -1.86& 0.37& 0.47& 5.76\\ N2547-1-5-1123& 8 09 17.71& -49 08 34.5& 19.57& 3.16& 0.413& -2.76& -1.56& 1.88& -2.26& 0.28& 0.42& 3.62\\ \hline \end{tabular} \vspace*{5mm} \begin{tabular}{cccccccccccc} \hline X-ray RA & X-ray Dec & ML & Sep & PN0 & PN0\_err & PN1 & PN1\_err & PN2 & PN2\_err & HR & HR\_err \\ \multicolumn{2}{c}{(deg, J2000.0)} & & (arcsec)& \multicolumn{2}{c}{(s$^{-1}$, 0.3--3\,keV)} & \multicolumn{2}{c}{(s$^{-1}$, 0.3--1\,keV)} & \multicolumn{2}{c}{(s$^{-1}$, 1--3\,keV)}& & \\ (14) & (15) & (16) & (17) & (18) & (19) & (20) & (21) & (22) & (23) & (24) & (25) \\ \hline 122.35782& -49.1664& 421.1& 0.62& 7.23e-03& 5.23e-04& 5.00e-03& 4.21e-04& 2.23e-03& 3.11e-04& -0.38& 0.07\\ 122.32372& -49.1431& 36.4& 0.57& 1.84e-03& 3.76e-04& 1.33e-03& 2.99e-04& 5.04e-04& 2.28e-04& -0.45& 0.20\\ \hline \end{tabular} \vspace*{5mm} \begin{tabular}{cccccc@{\hspace*{2mm}}c@{\hspace*{2mm}}cll} \hline MOS1 & MOS1\_err & MOS2 & MOS2\_err & Flux & Flux\_err & $\log L_x/L_{\rm bol}$ & $\Delta \log L_x/L_{\rm bol}$ & \multicolumn{2}{c}{Naylor ID}\\ \multicolumn{2}{c}{(s$^{-1}$, 0.3--3\,keV)} & \multicolumn{2}{c}{(s$^{-1}$, 0.3--3\,keV)} & \multicolumn{2}{c}{(erg\,cm$^{-2}$\,s$^{-1}$, 0.3--3\,keV)} & & & &\\ (26) & (27) & (28) & (29) & (30) & (31) & (32) & (33) & (34) & (35) \\ \hline 1.82e-03 & 2.41e-04 & 1.80e-03& 2.32e-04& 9.59e-15& 9.87e-16& -2.92& 0.08 & 14 & 1171\\ 4.85e-04 &1.62e-04& 5.47e-04& 1.82e-04& 2.80e-15&
6.05e-16& -3.30& 0.12 & 4 & 2832\\ \hline \end{tabular} \end{flushleft} \label{xraydetect} \end{table*} \begin{table*} \caption{The properties of stars from the Irwin et al. (2008) catalogue that have known rotation periods but were not detected within the {\it XMM-Newton} field. The Table is available electronically and contains 28 rows. Only two rows are shown here as a guide to form and content. The first 13 columns contain the same properties as listed in Table~\ref{xraydetect}; following these, column 14 lists a flag denoting from which {\it XMM-Newton} instrument the X-ray count rate upper limit was derived: ``P'' for the PN, ``M1'' for MOS1, ``M2'' for MOS2 and ``M12'' for the average from MOS1 and MOS2 (see text for details). Column 15 lists the 3-sigma count rate upper limit in that instrument, column 16 lists the corresponding flux upper limit (0.3--3\,keV) and column 17 lists the $\log L_x/L_{\rm bol}$ upper limit. Columns 18 and 19 list the cross identification with the Naylor et al. 
(2002) catalogue where available.} \begin{flushleft} \begin{tabular}{lccc@{\hspace*{2mm}}ccc@{\hspace*{2mm}}c@{\hspace*{2mm}}c@{\hspace*{2mm}}c@{\hspace*{2mm}}c@{\hspace*{2mm}}c@{\hspace*{2mm}}c} \hline Name & RA & Dec & $V$ & $V-I$ & P & BC &$\log L_{\rm bol}/L_{\odot}$& $\log \tau_c$ & $\log N_R$ & $M/M_{\odot}$ & $R/R_{\odot}$ & $R_{\rm Kepler}/R$ \\ &\multicolumn{2}{c}{(J2000)}& & & (d) & & & (d) & & & & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) &(8) & (9) &(10) & (11) &(12) &(13) \\ \hline N2547-1-5-3742 & 8 09 25.73 & -49 03 15.7& 19.16& 2.92& 0.807& -2.43& -1.53& 1.86& -1.96& 0.30& 0.43 & 5.64\\ N2547-1-6-2487& 8 10 04.56 & -49 06 04.2& 14.26& 1.00& 5.488& -0.27& -0.43& 1.31& -0.57& 0.88& 0.85 & 14.76\\ \hline \end{tabular} \vspace*{5mm} \begin{tabular}{ccccll} \hline Instrument & Upper Limit & Flux & $\log L_x/L_{\rm bol}$ & \multicolumn{2}{c}{Naylor ID}\\ & (s$^{-1}$) & (erg\,cm$^{-2}$\,s$^{-1}$, 0.3--3\,keV)& & & \\ (14) & (15) & (16) & (17) & (18) & (19)\\ \hline P & $<$5.20e-03 & $<$8.73e-15 & $<-2.84$ & 4 & 1258\\ P & $<$1.16e-03 & $<$1.96e-15 & $<-4.59$ & 3 & 760\\ \hline \end{tabular} \end{flushleft} \label{xraynodetect} \end{table*} \subsection{Flux upper limits} There are 29 objects in the Irwin et al. (2008) catalogue within the {\it XMM-Newton} field of view (for only a subset of the detectors in some cases) but not found as sources by the {\sc edetect\_chain} task. We inspected the X-ray images at the positions of these sources and found one example (N2547-1-6-5108) which is a reasonably bright X-ray source that appears only in the MOS2 image and was missed by the automated source searching. We evaluated the X-ray count rates for this object using a 20 arcsecond radius aperture (see below) and a local estimate of the background. This source has been added to Table~1. There were no X-ray sources apparent at the positions of the other 28 objects and X-ray flux upper limits were derived as follows.
In the {\sc edetect\_chain} task, we generated images for each detector consisting of smooth models of the X-ray background, to which were added {\it models} of the significantly detected sources calculated from their count rates and the model point spread function used to detect and parameterise them. These images were ``noise-free'' estimates of the expected X-ray background for any given position. The total expected background was summed within circles of radius 20 arcseconds surrounding each of the undetected Irwin et al. (2008) objects. The number of observed X-ray counts in those areas in the original X-ray images was also summed, consisting of both source and background counts. A 3-sigma upper limit to the number of source counts was calculated using the Bayesian approach formulated by Kraft, Burrows \& Nousek (1991). The choice of a 20 arcsec radius follows the work of Carrera et al. (2007), who showed that ``aperture photometry'' using this radius gave count rates that closely matched those found in {\sc edetect\_chain}. The upper limits to the X-ray count {\it rates} were determined by dividing by the average exposure time within the same circular area. Using this technique we found that, where objects were covered by both PN and MOS data, the PN data were at least a factor of two more sensitive. Rather than attempt to combine the results from the three instruments we have either (a) taken the upper limit from the PN, where PN data are present (23 objects), (b) taken the upper limit from one MOS detector where an object was only covered by that detector (3 objects) or (c) taken the average upper limit from both MOS detectors (2 objects) and divided by $\sqrt{2}$, as in these cases the two upper limits were very similar. The count rate upper limits were converted to upper limits in X-ray flux and upper limits to $L_x/L_{\rm bol}$ using the procedures described in the previous subsection.
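The upper-limit calculation of Kraft, Burrows \& Nousek (1991) can be sketched numerically. For a uniform prior and a known background $B$, the posterior for the source counts $s$ given $N$ observed counts is proportional to $e^{-(s+B)}(s+B)^N$, and both its normalisation and its cumulative distribution reduce to Poisson tail sums. The implementation below is an illustrative sketch of that calculation, not the code actually used for Table~2; the confidence level is set to 99.73 per cent as one convention for a ``3-sigma'' limit.

```python
import math

def poisson_sf_le(n, mu):
    """P(Poisson(mu) <= n), i.e. the regularized incomplete gamma Q(n+1, mu)."""
    term, total = math.exp(-mu), 0.0
    for k in range(n + 1):
        total += term
        term *= mu / (k + 1)
    return total

def kbn_upper_limit(n_obs, bkg, cl=0.9973):
    """Bayesian upper limit on source counts, uniform prior, known background,
    after Kraft, Burrows & Nousek (1991). Posterior ~ exp(-(s+B)) * (s+B)^N."""
    norm = poisson_sf_le(n_obs, bkg)          # posterior normalization Q(N+1, B)
    def cdf(x):                               # posterior P(source counts <= x)
        return (norm - poisson_sf_le(n_obs, bkg + x)) / norm
    lo, hi = 0.0, 10.0 * (n_obs + 10)         # bracket, then bisect
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For zero observed counts and zero background the posterior is simply $e^{-s}$, so the limit reduces to $-\ln(1-\mathrm{CL})\approx 5.9$ counts, a useful sanity check.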
The details of the upper limit measurements are presented separately in Table~2 (available in electronic form only). \subsection{Uncertainties in X-ray fluxes} The uncertainties in the $\log L_{\rm x}/L_{\rm bol}$ values quoted in Table~1 incorporate the statistical count rate uncertainties and the systematic instrument response uncertainties mentioned in section~2.3. In addition we have included (in quadrature) uncertainties in the bolometric flux due to the photometric uncertainties (or variability) implied by equations~1 and~2, which amount to about $\pm 0.07$~dex. The final quoted uncertainties range from 0.08~dex to 0.21~dex, with a mean of 0.10~dex. Other contributing uncertainties have also been considered. The X-ray flux conversion factors are rather insensitive to the details of the spectral model. For instance doubling the temperature of the hot component increases the conversion factor by just 3 per cent and the HR increases to -0.39; varying the metal abundance in the range $0.1<Z<1.0$ changes the conversion factor by only $\pm 5$ per cent; doubling the ratio of hot-to-cool plasma emission measures increases the conversion factor by 3 per cent and increases the HR to -0.32. The value assumed for $N_H$ has a little more effect. The value of $N_H = 3\times10^{20}$\,cm$^{-2}$ is derived from the cluster reddening of $E(B-V)=0.06\pm 0.02$ (Clari\'a 1982) and the relationship between reddening and $N_H$ found by Bohlin, Savage \& Drake (1978), and could be uncertain by factors of two. Altering $N_H$ from our assumed value to either $1.5\times 10^{20}$\,cm$^{-2}$ or $6\times 10^{20}$\,cm$^{-2}$ would lead to conversion factors about 5 per cent smaller or 10 per cent larger respectively and HR would change to either -0.44 or -0.35 respectively. All these uncertainties are small compared with those we have already considered and they are neglected. 
The final source of uncertainty is difficult to quantify, but is probably dominant when considering a single epoch of X-ray data, namely the coronal variability of active stars. Active, low-mass stars show frequent X-ray flaring behaviour on timescales of minutes and hours, resulting in an upward bias in an X-ray flux estimated, as here, over the course of more than a day. The more active the low-mass star, the more frequently it exhibits large X-ray flares (Audard et al. 2000). Rather than try to correct for this, and as the comparison samples in section~4 have not had any flare exclusion applied, we continue to use the time-averaged X-ray flux to represent coronal activity. An idea of the uncertainties can be gained by looking at similar estimates from more than one epoch. For example, Gagn\'e, Caillault \& Stauffer (1995) find that young Pleiades low-mass stars show differences of more than a factor of two in their X-ray fluxes only 25 per cent of the time on timescales of 1--10 years. Marino et al. (2003) find that the median level of X-ray flux variability of K3-M dwarfs in the Pleiades is about 0.2\,dex on timescales of months. Jeffries et al. (2006) looked at the variability of G- to M-dwarfs in NGC~2547 itself on timescales of 7~years, finding a median absolute deviation from equal X-ray luminosity of about 0.1\,dex in G- and K dwarfs, with a hint that the M-dwarfs are slightly more variable. Only about 20 per cent of sources varied by more than a factor of two. Our conclusion is that we should assume an additional uncertainty of about 0.1--0.2\,dex in our single-epoch measurements, but caution that the distribution is probably non-Gaussian in the sense that a sample will probably contain a small number of objects that are upwardly biased by more than a factor of two by large flares. 
\section{Coronal activity in the M-dwarfs of NGC 2547} \label{results} \subsection{Hardness ratios} \begin{figure*} \centering \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=80mm]{plothrvi.ps} \end{minipage} \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=80mm]{plothrlxlbol.ps} \end{minipage} \caption{Hardness ratios for the NGC~2547 sample (defined as $(H-S)/(H+S)$ where $H$ is the 1.0--3.0\,keV count rate and $S$ is the 0.3--1.0\,keV count rate in the EPIC-PN camera) as a function of (a) $V-I$ colour and (b) the X-ray to bolometric flux ratio. } \label{hardness} \end{figure*} In Figure~1 we show the dependence of the HR, measured by the PN camera, on the intrinsic colour of the star and on the magnetic activity level, expressed as $L_x/L_{\rm bol}$. There is no significant dependence of HR on colour, and hence the approximation of a uniform flux conversion factor with spectral type is shown to be reasonable. The weighted mean HR is $-0.42\pm 0.02$, with a standard deviation of 0.16. The standard deviation is about twice that expected from the statistical errors, hinting at some genuine HR variation. Note that this assumes that the HR uncertainties are symmetric and Gaussian, even though the X-ray data are Poissonian in nature and HR is strictly limited to $-1 \leq$\,HR\,$\leq 1$ (e.g. see Park et al. 2006). The majority of sources are detected with sufficiently large numbers of counts in both energy ranges and with sufficiently precise count rates to make such an assumption reasonable. The second plot shows there is some evidence that HR increases slightly with activity level. A best-fit linear model is HR$=-0.09(\pm 0.12) + 0.11(\pm 0.04) \log L_x/L_{\rm bol}$, but the residuals suggest there may still be star-to-star variation at a given activity level. Note though that time variability of the X-ray activity or HR values has not been considered (see section 2.5) and this might plausibly account for some of this star-to-star variation. 
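The hardness ratio defined in the caption of Figure~1, together with the symmetric Gaussian error propagation assumed in the text, can be written compactly as follows. This is an illustrative sketch: as noted above, the first-order propagation is only appropriate for sources detected with ample counts in both bands.

```python
import math

def hardness_ratio(hard, soft, sig_hard, sig_soft):
    """HR = (H - S)/(H + S) from hard- and soft-band count rates,
    with first-order Gaussian error propagation.  Only valid when
    both rates are well above zero (HR is bounded by [-1, 1])."""
    total = hard + soft
    hr = (hard - soft) / total
    # dHR/dH = 2S/(H+S)^2, dHR/dS = -2H/(H+S)^2
    sig = 2.0 * math.sqrt((soft * sig_hard)**2
                          + (hard * sig_soft)**2) / total**2
    return hr, sig
```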
A significant increase in average coronal temperatures (and hence HR) with activity level is well established in field stars (e.g. Telleschi et al. 2005). The relationship between HR and $L_x/L_{\rm bol}$ found here, and the likelihood and size of an intrinsic scatter about it, are indistinguishable from those found for the somewhat higher mass G- and K-stars in NGC~2547 by Jeffries et al. (2006). The mean HR agrees well (by design) with the HR predicted by the spectral model used to calculate the X-ray fluxes. The probable intrinsic variation in HR at a given $L_{x}/L_{\rm bol}$ corresponds to variations of about a factor of two in the emission measure ratio of the hot and cool plasmas in the two-temperature coronal model. Such variations lead to very small uncertainties in $L_{x}/L_{\rm bol}$ of order a few per cent (see section 2.5) and so we have not attempted to correct for them. \subsection{X-ray activity} \begin{figure*} \centering \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=80mm]{plotvvi.ps} \end{minipage} \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=80mm]{plotlxlbolvi.ps} \end{minipage} \caption{(a) A colour-magnitude diagram in NGC~2547 showing: low-mass stars with rotation periods (from Irwin et al. 2008) that have significant X-ray detections (solid circles); stars with rotation periods that are in the {\it XMM-Newton} field of view but have only upper limits to their X-ray activity (open triangles); additional photometric candidate members of NGC~2547 (from Naylor et al. 2002) that are in the {\it XMM-Newton} field of view (open diamonds); and those additional photometric candidates that are detected in X-rays (filled diamonds). (b) X-ray activity, expressed as $\log L_{\rm x}/L_{\rm bol}$, as a function of colour. We compare the X-ray activity (and 3-sigma upper limits to X-ray activity) for stars in NGC~2547 with known rotation periods (from Irwin et al. 
2008), to the X-ray activity of detected photometric members of NGC~2547 (from Naylor et al. 2002) that have no known rotation period. Almost all photometric candidate members of NGC~2547 are X-ray detected for $V-I <2.8$ (see text), so we argue that the members with known rotation periods represent a reasonably unbiased sample with regard to X-ray activity and that the spread of activity at a given colour is less than an order of magnitude for $V-I >1.2$. } \label{lxlbol} \end{figure*} Figure~2a shows a $V$ versus $V-I$ colour-magnitude diagram for the stars of the Irwin et al. (2008) sample (with rotation periods) which lie within the {\it XMM-Newton} field of view. As a comparison we show photometric members of NGC~2547 (from Naylor et al. 2002) that also lie within the X-ray field of view and indicate those which have an X-ray counterpart among the 323 significant X-ray sources reported in section~2.1. Figure~2a shows that the Irwin et al. (2008) sample is only about 50 per cent complete. A significant fraction of photometric cluster candidates at all colours have no detected rotation periods. Most of these non-periodic candidates are X-ray detected and very likely to be genuine NGC~2547 members (see Jeffries et al. 2006 for a detailed discussion). Only for stars with $V-I<1.5$ and $V-I>2.8$ are there significant numbers of candidates without X-ray detections. For the former, these are likely to be contaminating giants among the photometrically selected members, but the latter are more likely to be genuine cluster members that become too faint for detection at very low luminosities (see below), as indeed are many of the Irwin et al. (2008) objects at similar colours. Figure~2b plots X-ray activity, expressed as $\log L_{\rm x}/L_{\rm bol}$, versus colour. There is a gradually rising mean activity level as we move from K- through to M-dwarfs. 
The upper envelope is ill defined due to three very high points that seem well separated from the rest of the distribution. A time-series analysis of these three stars reveals that each was affected by a large flare during the course of the {\it XMM-Newton} observation. The lower envelope is well defined for $1< V-I <3$ and probably represents a true floor to the level of X-ray activity in cool stars at the age of NGC~2547. Although our NGC~2547 sample is biased by having detected rotation periods, Irwin et al. (2008) argue that there is not a population of more slowly rotating low-mass stars, and Fig.~2b also shows that photometrically selected members of NGC~2547 {\it without} rotation periods, and which also have an {\it XMM-Newton} counterpart, share a similar minimum (and median) level of X-ray activity as a function of colour. Neither is this minimum level a product of the sensitivity of the X-ray observations, because almost all photometric candidates with $V-I < 2.8$ and which lie in the {\it XMM-Newton} field of view have been detected (see Fig.~2a). The important point that emerges from these arguments is that the range of X-ray activity levels in NGC~2547 at a given colour is quite small (less than an order of magnitude) for young K- and M-dwarfs with $V-I<2.8$. For cooler stars with $V-I>2.8$, the X-ray observations become progressively incomplete and we can say very little about the range of X-ray activity here. \subsection{Activity versus rotation period and Rossby number} \subsubsection{The dependence of activity on rotation period} \begin{figure} \centering \includegraphics[width=80mm]{plotlxlbolper.ps} \caption{X-ray activity as a function of rotation period for periodic objects in Irwin et al. (2008) that are in the {\it XMM-Newton} field of view. The sample has been split according to mass (estimated from the luminosity, see text); detections are denoted with symbols as shown in the plot and upper limits are shown with downward pointing triangles. 
} \label{lxlbolper} \end{figure} Figure~3 shows the dependence of activity ($L_{\rm x}/L_{\rm bol}$) on rotation period, considering here only those stars in NGC~2547 with rotation periods given by Irwin et al. (2008). For the purposes of later discussion we have divided the stars up according to their estimated masses: $0.55<M/M_{\odot}<0.95$ (roughly corresponding to K-stars, blue circles); $0.35<M/M_{\odot}<0.55$ (corresponding to M0--M3 stars, red crosses); and $M<0.35\,M_{\odot}$ (corresponding to stars cooler than M3, black open circles). The masses were estimated from the luminosities of the stars and a 35\,Myr isochrone from the evolutionary models of Siess, Dufour \& Forestini (2000). The bolometric luminosities of our sample were calculated using the corrected, intrinsic $V$ magnitudes, the bolometric corrections described in section~2.3 and an assumed distance to NGC~2547 of 400\,pc (e.g. Mayne \& Naylor 2008). The choice of the mass division at $0.55\,M_{\odot}$ is to isolate K-dwarfs from the cooler M-dwarfs and at $0.35\,M_{\odot}$ to mark the approximate transition to a fully convective star (Siess et al. 2000). More importantly, the latter division marks the lowest mass at which our sample can be considered complete, in the sense that X-ray upper limits begin to occur in stars of lower mass. The lowest luminosity stars in the sample have masses of $\simeq 0.1\,M_{\odot}$. Figure~3 shows that X-rays have been detected from K- and M-dwarfs with periods in the range 0.2--10\,days. Most of the upper limits occur for stars with short periods. Given the possibility of a correlation between X-ray activity and rotation, this at first seems surprising. The reason is that most of the lowest luminosity (and hence lowest mass) objects in the Irwin et al. (2008) catalogue have short rotation periods. 
As the ratio $L_{\rm x}/L_{\rm bol}$ is close to constant in this sample, many short-period objects have low bolometric {\it and} X-ray luminosities and are thus harder to detect. There is a very shallow decline in X-ray activity towards longer periods. As we will show when considering the Rossby numbers for these stars, the reason that the longer period stars do not show much lower X-ray activity is that most of them still have small Rossby numbers and are in the regime where saturated activity levels are expected. There is some evidence that the very shortest period stars might have lower activity corresponding to ``super-saturation''. This is primarily based on a group of three very low-mass stars with $P<0.4$\,d and only upper limits to their X-ray activity and a similar group of three K-stars ($M>0.55\,M_{\odot}$) where $L_x/L_{\rm bol} \simeq 10^{-3.5}$. Any correlations, trends or threshold periods may be confused or blurred by the inclusion of a range of spectral types/convection zone depths within the sample plotted in Fig.~3. It was exactly this issue which led previous authors to consider the use of the Rossby number as a proxy for magnetic dynamo efficiency (e.g. Noyes et al. 1984; Dobson \& Radick 1989; Pizzolato et al. 2003). \subsubsection{Rossby numbers} \begin{figure} \centering \includegraphics[width=80mm]{plotmtc.ps} \caption{Convective turnover time as a function of stellar mass. The open circles show the turnover times adopted for the NGC~2547 sample, calculated according to $\tau_c \propto L_{\rm bol}^{-1/2}$ (see text), compared with empirical calibrations from Noyes et al. (1984), Pizzolato et al. (2003) and Kiraga \& St{\c e}pie\'n (2007) and a scaled theoretical 200\,Myr isochrone from Kim \& Demarque (1996). The masses for the NGC~2547 stars were estimated from their luminosities (see text). } \label{tauc} \end{figure} The use of Rossby number raises a problem when dealing with M-dwarfs. 
The widely used semi-empirical formula of Noyes et al. (1984) predicts convective turnover time, $\log \tau_c$, as a function of $B-V$ colour. This relationship is poorly defined for $B-V>1$ and has no constraining data in the M-dwarf regime. Theoretically, little work has been done on turnover times at very low masses. Gilliland (1986) calculated that the turnover time at the base of the convection zone increases with decreasing $T_{\rm eff}$ and mass along the main sequence. These models are limited to stars $\geq 0.5\,M_{\sun}$, corresponding to $T_{\rm eff} \geq 3500$\,K on the main sequence. $\log \tau_c$ is roughly linearly dependent on $\log T_{\rm eff}$ and increases from about 1.2 (when $\tau_c$ is expressed in days) at $T_{\rm eff}=5800$\,K to about 1.85 at 3500\,K. Similar calculations, with similar results (except for arbitrary scaling factors) have been presented more recently by Kim \& Demarque (1996), Ventura et al. (1998) and Landin, Mendes \& Vaz (2010). Ventura et al. (1998) attempted to extend the calculation into the fully convective region, predicting that the convective turnover time would continue to increase. A complication here is that the M-dwarfs of NGC~2547 are not on the main sequence. Gilliland (1986), Kim \& Demarque (1996) and Landin et al. (2010) predict that $\tau_c$ is about 50 per cent larger for stars of $0.5\,M_{\odot}$ at an age of $\sim 30$\,Myr on the pre-main-sequence (PMS). An alternative approach has been to empirically determine $\tau_c$ by demanding that activity indicators (chromospheric or coronal) satisfy a single scaling law with Rossby number, irrespective of stellar mass (e.g. Noyes et al. 1984; St{\c e}pie\'n 1994). The most recent work has focused on $L_{\rm x}/L_{\rm bol}$ as an activity indicator. Using slow- and fast-rotating stars, Pizzolato et al. 
(2003) showed that $\tau_c$ needs to increase rapidly with decreasing mass in order to simultaneously explain $L_{x}/L_{\rm bol}$ in G-, K- and M-dwarfs, and they find $\log \tau_c > 2$ for $M<0.5\,M_{\odot}$. Similar work by Kiraga \& St{\c e}pie\'n (2007) concentrated on slowly rotating M-dwarfs finding their empirical $\log \tau_c$ increased from 1.48 at $M\simeq 0.6\,M_{\odot}$ to 1.98 at $M \simeq 0.2\,M_{\odot}$. An interesting insight was provided by Pizzolato et al. (2003), who noted that the mass dependence of the turnover time is closely reproduced by assuming that $\tau_c \propto L_{\rm bol}^{-1/2}$. In what follows we adopt this scaling and hence a Rossby number given by \begin{equation} \log N_R = \log P - 1.1 + 0.5 \log L_{\rm bol}/L_{\odot}\, . \end{equation} The turnover time has been anchored such that $\log \tau_c = 1.1$ for a solar luminosity star, following the convention adopted by Noyes et al. (1984) and Pizzolato et al. (2003). In Fig.~\ref{tauc} we compare the $\tau_c$ values calculated for our sample as a function of mass, with the theoretical and empirical relationships discussed above. Overall, our $\tau_c$ values lie a little below (but not significantly) the Pizzolato et al. (2003) calibration, and marginally above the values determined by Kiraga \& St{\c e}pie\'n (2007) and a theoretical 200\,Myr isochrone from Kim \& Demarque (1996).\footnote{The $\tau_c$ values from Kim \& Demarque (1996) were multiplied by 0.31 to anchor them at $\log \tau_c = 1.1$ for a solar mass star.} From the discussion by these latter authors and their Fig.~3, it is clear that any discrepancy between theory and data could be explained by the younger age of NGC~2547. Stars with $M<1.0\,M_{\odot}$ are still approaching the main sequence at 30--40\,Myr, their $\tau_c$ values are still falling and one would expect them to have larger $\tau_c$ than for a 200\,Myr isochrone or indeed M-dwarf field stars. 
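Equation~3 can be implemented directly. The sketch below encodes the adopted scaling $\tau_c \propto L_{\rm bol}^{-1/2}$, anchored at $\log \tau_c = 1.1$ for a solar-luminosity star; the function name is illustrative.

```python
import math

def rossby_number(period_days, lbol_lsun):
    """Rossby number N_R = P / tau_c with the turnover time scaled as
    tau_c propto L_bol^(-1/2), anchored so that log tau_c = 1.1
    (tau_c in days) at L_bol = L_sun.  Equivalent to
    log N_R = log P - 1.1 + 0.5 log(L_bol/L_sun)."""
    log_tau_c = 1.1 - 0.5 * math.log10(lbol_lsun)
    return period_days / 10**log_tau_c
```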
Unfortunately, Kim \& Demarque (1996) did not provide $\tau_c$ isochrones at these younger ages. The semi-empirical relationship between $\tau_c$ and mass proposed by Noyes et al. (1984) is also shown. As discussed above, this has little or no constraining data below about $0.7\,M_{\odot}$ and our adopted turnover times in this regime are much larger. We conclude that our adopted $\tau_c$ values and hence Rossby numbers are quite consistent with previous work, although systematic uncertainties at the level of $\sim 0.2$\,dex must be present when comparing stars at $M<0.5\,M_{\odot}$ with solar-mass stars. Tables 1~and~2 include our calculated convective turnover times and Rossby numbers. \subsubsection{The dependence of activity on Rossby number} \begin{figure} \centering \includegraphics[width=80mm]{plotlxlbolrossby.ps} \caption{X-ray activity as a function of Rossby number. The sample is divided by mass and shown with symbols as in Fig.~\ref{lxlbolper}. The dashed lines indicate divisions between unsaturated, saturated and super-saturated coronal activity defined in the literature for G- and K-stars (see section 3.3.3). } \label{lxlbolrossby} \end{figure} Figure~\ref{lxlbolrossby} shows the dependence of $L_{\rm x}/L_{\rm bol}$ on Rossby number. Dashed loci indicate the approximate divisions between the unsaturated, saturated and super-saturated regimes found from data for G- and K-stars (see Randich 1998; Pizzolato et al. 2003 and section~4). There is some evidence that the K-stars in our sample ($M>0.55\,M_{\odot}$, blue circles) are following the pattern seen in other young clusters. Most of the K-dwarfs appear to have a saturated level of magnetic activity, although we do not clearly see any fall in activity levels at the largest Rossby numbers in our sample. 
There are three K-dwarfs with $L_{\rm x}/L_{\rm bol} \simeq 10^{-3.5}$ at $\log N_R <-1.8$, which may be examples of super-saturated stars, and whilst there are also some with similar activity levels at higher Rossby numbers, it is clear that the K-dwarfs with the lowest Rossby numbers are less active than their equivalents in the M-dwarf subsample. The 5 K-dwarfs with $\log N_R < -1.7$ have a mean $\log L_{\rm x}/L_{\rm bol}= -3.46 \pm 0.11$ (standard deviation), while the 7 M-dwarfs (with $M>0.35\,M_{\odot}$) at similar Rossby numbers have a mean $\log L_{\rm x}/L_{\rm bol}= -3.02 \pm 0.21$. In the M-dwarf data for NGC~2547 ($M<0.55\,M_{\odot}$, red crosses and open circles) there is no decline in activity at large Rossby numbers. This is probably due to a lack of targets with long enough periods to populate the unsaturated regime ($\log N_R > -0.8$, Stauffer et al. 1997). However, there is no doubt that an unsaturated coronal activity regime does exist for field M-dwarfs with long rotation periods (see Kiraga \& St{\c e}pie\'n 2007 and section~4). The lack of obvious super-saturation among the M-dwarfs is more significant. The phenomenon is claimed to begin in G- and K-dwarfs with $\log N_R\leq -1.8$ (Patten \& Simon 1996; Stauffer et al. 1997; Randich 1998) and there are plenty of stars in our sample with much smaller Rossby numbers than this, mostly among the lowest mass M-dwarfs. A complication here is that the cited papers used relationships in Noyes et al. (1984) to calculate $\tau_c$. However, Fig.~\ref{tauc} shows that Rossby numbers estimated for G- and K-dwarfs ($M>0.55\,M_{\odot}$) using the Noyes et al. (1984) formulae would be $\leq 0.2$\,dex larger than those calculated in this paper for a star of similar mass, so the conclusion is robust. It is possible that super-saturation begins in the M-dwarfs of NGC~2547 at very low Rossby numbers, say $\log N_R <-2.3$, but this is only indicated by three upper limits. James et al. 
(2000) also claimed tentative evidence of super-saturation in their collected sample of field and cluster M-dwarfs. Their claim was based on two stars: VXR47 and VXR60b in IC~2391, which have $(V-I)_0=1.7$ and 2.06 respectively and according to their luminosities would both be classed as late K-dwarfs ($M>0.55\,M_{\odot}$) with $\log N_R \simeq -2$ in our classification scheme. Our conclusion from the NGC~2547 sample is that if super-saturation does occur in M-dwarfs, it occurs at Rossby numbers that are (in the lowest mass stars) at least a factor of three lower than in super-saturated G- and K-dwarfs. \section{Combination with other datasets} The NGC~2547 data suggest that super-saturation does not occur at a single critical Rossby number irrespective of spectral type (see Fig.~5). Instead it seems more plausible that super-saturation may occur below some critical period ($\leq 0.4$~d, see Fig.~3). This interpretation is hampered by relatively few targets with very short rotation periods and also few targets of higher mass, where the very different convective turnover times would result in quite different Rossby numbers at the same rotation period. To increase our statistics and to make a better comparison with a larger sample of higher mass stars, we combined our dataset with published data for stars with known rotation periods in young open clusters and the field. The main sources of the comparison data are catalogues of X-ray activity in field M-dwarfs by Kiraga \& St{\c e}pie\'n (2007) and in field and cluster dwarfs by Pizzolato et al. (2003). We have also used the recent catalogue of rotation periods in Hartman et al. (2010, see below) to add many new stars from the young Pleiades cluster to the rotation-activity relationship. 
This large sample covers a range of ages: NGC~2547 at 35\,Myr (Jeffries \& Oliveira 2005); IC~2391 and IC~2602 at $\simeq 50$\,Myr (Barrado y Navascues, Stauffer \& Jayawardhana 2004); Alpha Per at $\simeq 90$\,Myr (Stauffer et al. 1999); Pleiades at $\simeq 125$\,Myr (Stauffer, Schultz \& Kirkpatrick 1998); Hyades at $\simeq 625$\,Myr (Perryman et al. 1998); and field stars that have ages from tens of Myr to many Gyr. There are important issues to address regarding the completeness of comparison samples. For NGC~2547, we started with stars of known rotation period and determined their X-ray activity level or upper limits to their X-ray activity if they could not be detected. Almost all stars were detected down to about $0.35\,M_{\odot}$ (spectral type M3) and there were a mixture of detections and upper limits at lower masses. The samples of stars with known X-ray activity and rotation period found in the literature have been generated in a different way. Generally, field star samples have been compiled by matching objects with known rotation periods to X-ray sources in the {\it ROSAT} all-sky survey (RASS, Voges et al. 1999). In clusters, the data are mostly from {\it ROSAT} pointed observations, from which detected X-ray sources were matched with known cluster members. The problem is, especially when searching for evidence of super-saturation, that we must be sure that the X-ray observations were sensitive enough to have detected examples of super-saturated stars (say with $L_x/L_{\rm bol} \leq 10^{-3.5}$). This difficulty is exacerbated for active M-dwarfs, because they frequently flare, so there is a possibility that what has been reported in the literature is an X-ray bright tail, disguising a hidden population of undetected, super-saturated M-dwarfs. 
\begin{figure*} \centering \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=80mm]{plotlxlbolkper.ps} \includegraphics[width=80mm]{plotlxlbolmper.ps} \includegraphics[width=80mm]{plotlxlbolcper.ps} \end{minipage} \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=80mm]{plotlxlbolkrossby.ps} \includegraphics[width=80mm]{plotlxlbolmrossby.ps} \includegraphics[width=80mm]{plotlxlbolcrossby.ps} \end{minipage} \caption{X-ray activity as a function of rotation period (left column) or as a function of Rossby number (right column) for low-mass stars in NGC~2547 and for literature compilations of cluster and field dwarfs as explained in the text. The plots are separated according to the estimated masses of the stars (top: K-stars with $0.55<M/M_{\odot}<0.95$; middle: M-stars with $0.35<M/M_{\odot}<0.55$; bottom: M-stars with $M<0.35\,M_{\odot}$, the symbols are those used in Fig.~3). The loci in each plot are by-eye trends identified in the K-stars and then repeated in the subsequent plots to highlight differences with stellar mass. Cluster M-dwarfs from Pizzolato et al. (2003) may be subject to an upward bias (see text) and are marked with asterisks. } \label{comblxlbol} \end{figure*} For field stars at a given $V$ magnitude, and for a given X-ray flux detection threshold, the corresponding $L_x/L_{\rm bol}$ detection threshold becomes smaller in cooler stars because the magnitude of the $V$-band bolometric correction increases. The catalogue of M-dwarfs with rotation periods compiled by Kiraga \& St{\c e}pie\'n (2007) was taken from optical samples with $V<12.5$ and correlated with {\it ROSAT} data, mainly from the RASS. The flux sensitivity of RASS varies with ecliptic latitude, but the minimum detectable count rate over most of the sky is $\simeq 0.015$ counts per second. For a typical coronal spectrum, and neglecting interstellar absorption for nearby stars, this equates to a flux sensitivity of $10^{-13}$ erg\,cm$^{-2}$\,s$^{-1}$ (e.g. H\"unsch et al. 
1999). If we consider an M0 dwarf with $V=12.5$, this flux detection limit corresponds to $L_{x}/L_{\rm bol} \simeq 1.3 \times 10^{-4}$ and is even lower for cooler M-dwarfs. Hence the Kiraga \& St{\c e}pie\'n field M-dwarf sample was easily capable of identifying super-saturation. Similarly, the faintest field M-dwarfs in the Pizzolato et al. (2003) compilation have $V \sim 11$ and would easily be detected at super-saturated activity levels in the RASS. There are thus no completeness problems for the comparison samples of field dwarfs. In a cluster, all stars are at the same distance, so a given X-ray flux detection limit corresponds to a $L_{x}$ threshold, and cooler stars with smaller $L_{\rm bol}$ are harder to detect at a given $L_x/L_{\rm bol}$. In most clusters observed by {\it ROSAT}, and included in the Pizzolato et al. compilation, the sensitivity was sufficient to detect all G- and K-dwarfs, but only a fraction of M-dwarf members were detected (e.g. Stauffer et al. 1994; Micela et al. 1999 - the Pleiades; Randich et al. 1995 - IC~2602; Randich et al. 1996 - Alpha Per). Furthermore in some clusters (e.g. IC 2391 -- Patten \& Simon 1996), members were {\it identified} on the basis of their X-ray detection and their rotation periods were determined afterwards. In these circumstances it is likely that the M-dwarfs identified in the cluster observations will have mean activity levels biased upwards by selection effects and there would be little chance of identifying super-saturation even if it were present. The cluster M-dwarfs from Pizzolato et al. (2003) are therefore clearly identified in what follows. In the case of the Pleiades, these difficulties were circumvented by replacing those Pleiades stars in Pizzolato et al. (2003) with a sample constructed by cross-correlating the Hartman et al. (2010) catalogue of rotation periods for G-, K- and M-dwarfs with the {\it ROSAT} source catalogues (including upper limits) of Micela et al. 
(1999) and Stauffer et al. (1994) (in that order of precedence where sources were detected in both). In this way we constructed a Pleiades catalogue of 111 stars detected by {\it ROSAT}, with rotation periods $0.26\leq P \leq 9.04$\,d and masses (see below) of $0.33<M/M_{\odot}<0.95$, along with 18 stars with $0.35\leq P \leq 9.41$\,d and $0.18<M/M_{\odot}<0.95$ that have only X-ray upper limits. The Hartman et al. (2010) catalogue has a relatively bright magnitude limit and hence few very low-mass stars. Nevertheless there were 38 detections and 10 upper limits for objects classed here as M-dwarfs ($M<0.55\,M_{\odot}$). Masses were calculated for the field and cluster dwarfs using bolometric luminosities listed in the source papers and the Z=0.02 Siess et al. (2000) models with (i) a 1\,Gyr age for the field stars and stars in the Hyades, (ii) a 100\,Myr age for stars in the Alpha Per and Pleiades clusters, (iii) a 50\,Myr age for stars in the IC~2391 and IC~2602 clusters. Rossby numbers were calculated using equation~3. $L_{x}/L_{\rm bol}$ values were taken from Pizzolato et al. (2003) and Kiraga \& St{\c e}pie\'n (2007), or from Stauffer et al. (1994) and Micela et al. (1999) for the Pleiades. The X-ray fluxes in these papers are quoted in the 0.1--2.4\,keV {\it ROSAT} passband. The spectral model adopted for NGC~2547 in section 3.3 predicts that a flux in the 0.1--2.4\,keV band would only be 6 per cent higher than the 0.3--3\,keV fluxes reported in Table~1. This difference is small enough to be neglected in our comparisons. The combined data are shown in Fig.~\ref{comblxlbol}, where correlations between activity and period and activity and Rossby number are investigated. The stars have been split into three mass ranges (the same subsets as in section 3.3) to see whether period or Rossby number is the best parameter with which to determine X-ray activity levels across a broad range of masses and convection zone depths. 
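The sensitivity argument used above, converting an X-ray flux limit into an $L_x/L_{\rm bol}$ detection threshold via the apparent bolometric magnitude, can be sketched as follows. The zero-point flux is the IAU value for $m_{\rm bol}=0$; the bolometric correction of $-1.21$ in the usage example is our illustrative assumption for an M0 dwarf, not a value quoted in this paper.

```python
import math

# Flux (erg cm^-2 s^-1) of a source with apparent m_bol = 0
# (IAU bolometric magnitude zero point)
F_BOL_ZERO = 2.518e-5

def lxlbol_threshold(v_mag, bc_v, fx_limit=1e-13):
    """Lx/Lbol detection threshold implied by an X-ray flux limit,
    for a star of apparent magnitude V with bolometric correction
    BC_V (interstellar absorption neglected)."""
    m_bol = v_mag + bc_v
    f_bol = F_BOL_ZERO * 10**(-0.4 * m_bol)
    return fx_limit / f_bol
```

For example, `lxlbol_threshold(12.5, -1.21)` reproduces a threshold of order $10^{-4}$, consistent with the value quoted above for an M0 dwarf at the RASS flux limit.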
The numbers of stars taken from each of the clusters and the field in each of the three mass ranges are: NGC~2547 (45 for $<0.35\,M_{\odot}$, 25 for $0.35<M/M_{\odot}<0.55$, 26 for $0.55<M/M_{\odot}<0.95$); IC~2391/2602 (0, 3, 24); Alpha Per (0, 2, 29); Pleiades (10, 38, 71); Hyades (0, 1, 8); Field (8, 27, 44). The additional data, particularly more fast-rotating K-dwarfs and slow-rotating M-dwarfs, clarify a number of issues. \begin{enumerate} \item The left hand panels of Fig.~\ref{comblxlbol} show that both slowly rotating K- and M-dwarfs have a decreasing level of coronal activity at long periods. However, rotation period alone appears to be a poor indicator of X-ray activity for slowly rotating stars. K-dwarfs and M-dwarfs follow different rotation-activity relationships at long periods -- compare the M-dwarfs with the (red) solid locus which approximately indicates the relationship followed by the K-dwarfs. The rotation period at which saturation sets in may be longer for lower mass stars. \item The addition of further fast-rotating K-dwarfs, mainly from other young clusters, clearly demonstrates a super-saturation effect at $P<0.3$\,d. The evidence for super-saturation in M-dwarfs is not convincing: there are two upper limits (from NGC~2547 and also present in Fig.~\ref{lxlbolper}) that may suggest super-saturation at $P\simeq 0.2$\,d in the lowest mass stars. However, the few additional short-period M-dwarfs from the comparison samples show no indication of super-saturation down to periods of 0.25\,d. \item The Rossby number (or at least the Rossby number found from the convective turnover times that we have calculated) unifies the low-activity side of the rotation-activity correlation (in the right hand panels of Fig.~\ref{comblxlbol}). 
The Rossby number is capable of predicting the level of X-ray activity with a modest scatter and applies to a wide range of masses and convection zone depths, including some stars which are fully or nearly-fully convective. Coronal saturation occurs, within the limitations of the small number statistics for the lowest mass stars, at a similar range of Rossby numbers ($-1.8 < \log N_R < -0.8$) for all low-mass stars and at a similar value of $L_{x}/L_{\rm bol}$. \item For $\log N_R < -1.8$ the Rossby number does less well in predicting what happens to the coronal activity. Whilst there is evidence that some G- and K-dwarfs super-saturate at $\log N_R < -1.8$, there is a large scatter. There is no evidence for declining activity at low Rossby numbers among the M-dwarfs, unless the two upper limits at $\log N_R < -2.5$ in NGC~2547 mark the beginning of super-saturation at much lower Rossby numbers. Comparing the left and right hand panels of Fig.~\ref{comblxlbol} it may be that, for fast-rotating stars, rotation period is the better parameter to determine when and if super-saturation occurs. \item None of these conclusions are affected by the inclusion or otherwise of cluster M-dwarfs from the Pizzolato et al. (2003) sample, which may be subject to an upward selection bias. \item There are 3 K-dwarfs (1 detection -- HII\,1095, and 2 upper limits -- HII\,370, HII\,793) in the Pleiades sample, with $P$ in the range 0.87--0.94\,d, which have anomalously low X-ray activity. We do not believe these are examples of coronal super-saturation; more likely they are stars where the period found by Hartman et al. (2010) is a 1-day alias of a true longer period. 
\end{enumerate} \section{Discussion} A summary of the findings of sections 3 and~4 would be that X-ray activity rises with decreasing Rossby number, reaches a saturation plateau at a critical Rossby number ($\log N_R \simeq -0.8$) that is approximately independent of stellar mass, and then becomes super-saturated at a smaller Rossby number in a way that {\it is} dependent on stellar mass -- either in the sense that super-saturation does not occur at all in low-mass M-dwarfs or, if it does, it occurs at much lower Rossby numbers than for K-dwarfs. Rossby number is the parameter of choice to describe the occurrence of saturation and the decline of activity at slower rotation rates, but rotation period may be a better parameter for predicting the occurrence of super-saturation at $P \sim 0.3$\,d in K-dwarfs and perhaps $P \sim 0.2$\,d in lower mass stars. There are a number of ideas that could explain the progression from the unsaturated to saturated to super-saturated coronal regimes: (i) Feedback effects in the dynamo itself may manifest themselves at high rotation rates. The increasing magnetic field could suppress differential rotation and the capacity to regenerate poloidal magnetic field (e.g. Robinson \& Durney 1982; Kitchatinov \& R\"udiger 1993; Rempel 2006). (ii) The photosphere may become entirely filled with equipartition fields (e.g. Reiners, Basri \& Browning 2009). This leads to saturation of magnetic activity indicators in the corona and chromosphere. At faster rotation rates it is possible that the magnetic field emerges in a more restricted way (e.g. constrained to higher latitudes -- Solanki et al. 1997; St{\c e}pie\'n et al. 2001), reducing the filling factor again and leading to super-saturation. (iii) At high rotation rates, centrifugal forces could cause a rise in pressure at the summits of magnetic loops, which then break open or become radiatively unstable, reducing the emitting coronal volume and emission measure (Jardine \& Unruh 1999). 
The results from NGC~2547 and from Fig.~\ref{comblxlbol} favour an explanation for saturation in terms of saturation of the dynamo or magnetic filling factor, rather than a simple dependence on rotation rate. The coronal activity of low- and very-low mass M-dwarfs in the right hand panels of Fig.~\ref{comblxlbol} is well described by the same relationship between activity and Rossby number as the higher mass K-dwarfs, at least for $\log N_{R}> -1.8$. The Rossby number at which coronal activity saturates is also broadly similar across a wide range of masses. These facts suggest that X-ray activity levels depend primarily on Rossby number, which is a key parameter describing the efficiency of a magnetic dynamo (Durney \& Robinson 1982; Robinson \& Durney 1982). Supporting evidence for an explanation involving saturation of magnetic flux generation comes from direct measurements of magnetic flux in fast rotating M-dwarfs (Saar 1991; Reiners et al. 2009) and from chromospheric magnetic activity indicators, which also show saturation at Rossby numbers of $\simeq 0.1$ (Cardini \& Cassatella 2007; Marsden, Carter \& Donati 2009). A caveat to this conclusion is that the convective turnover times and hence Rossby numbers of the lowest mass stars in our sample are uncertain. In fact the semi-empirical scaling of $\tau_c \propto L_{\rm bol}^{-1/2}$ was {\it designed} to minimise the scatter at large Rossby numbers (Pizzolato et al. 2003). Clearly, better theoretical calculations of $\tau_c$ are desirable for $M<0.5\,M_{\odot}$. The phenomenon of super-saturation does not seem to be well described by Rossby numbers calculated using similar $\tau_c$ estimates. Some G- and K-dwarfs show super-saturation at $\log N_R \simeq -1.8$, but using the same $\tau_c$ values that tidy up the low-activity side of Fig.~\ref{comblxlbol} implies that M-dwarfs do not super-saturate, unless they do so at $\log N_R \simeq -2.5$. 
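The mass dependence implied by the semi-empirical scaling $\tau_c \propto L_{\rm bol}^{-1/2}$ can be sketched numerically. The snippet below is illustrative only: the solar normalisation \verb|TAU_SUN| is an assumed value for the sketch, not the calibration defined by equation~3 or by Pizzolato et al. (2003).

```python
import math

# Semi-empirical scaling tau_c ∝ L_bol^{-1/2} (Pizzolato et al. 2003).
# TAU_SUN is an ASSUMED solar turnover time used only for illustration.
TAU_SUN = 12.0  # days (hypothetical normalisation)

def convective_turnover(l_bol):
    """Turnover time in days, for luminosity in solar units."""
    return TAU_SUN * l_bol ** -0.5

def log_rossby(period_days, l_bol):
    """log10 of the Rossby number N_R = P / tau_c."""
    return math.log10(period_days / convective_turnover(l_bol))

# At fixed rotation period, a fainter (lower-mass) star has a longer
# turnover time and hence a smaller Rossby number:
print(log_rossby(1.0, 1.0))    # solar-luminosity star at P = 1 d
print(log_rossby(1.0, 0.01))   # L = 0.01 L_sun M-dwarf at P = 1 d
```

This is why a slowly rotating M-dwarf can sit at the same Rossby number as a much faster-rotating K-dwarf, and why the choice of $\tau_c$ normalisation matters so much at the low-mass end.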
This suggests that super-saturation may not be intrinsic to the dynamo mechanism, a point of view supported by the lack of super-saturation in the chromospheric emission from very rapidly rotating G- and K-dwarfs (Marsden, Carter \& Donati 2009). It is worth noting that the lack of super-saturation in M-dwarfs is probably not related to any fundamental change in dynamo action, such as a switch from an interface dynamo to a distributed dynamo as the convection zone deepens. There are sufficient M-dwarfs in Fig.~\ref{comblxlbol} with $M>0.35\,M_{\odot}$, which should still have radiative cores (Siess et al. 2000), to demonstrate that they also show no signs of super-saturation at the Rossby numbers of super-saturated G- and K-dwarfs. \begin{figure*} \centering \begin{minipage}[t]{0.95\textwidth} \includegraphics[width=80mm]{plotlxlbolallrossby.ps} \includegraphics[width=80mm]{plotlxlbolallrk.ps} \end{minipage} \caption{X-ray activity as a function of Rossby number (left) and Keplerian co-rotation radius (right, expressed as a fraction of the stellar radius). The sample is divided by mass as in Fig.~\ref{lxlbolper}. The shaded regions indicate the approximate regimes in which we hypothesise that the coronal activity is controlled by magnetic dynamo efficiency (at large Rossby number in the left hand plot) or by centrifugal effects (for small Keplerian co-rotation radii in the right hand plot). } \label{rkplot} \end{figure*} St{\c e}pie\'n et al. (2001) put forward a hypothesis that a latitudinal dependence of the heating flux at the base of the convection zone is caused by the polar dependence of the local gravity in rapid rotators. This could result in strong poleward updrafts in the convection zone that sweep magnetic flux tubes to higher latitudes, leaving an equatorial band that is free from magnetically active regions, hence reducing the filling factor of magnetically active regions in the photosphere, chromosphere and corona. 
In this model super-saturation occurs when the ratio of centrifugal acceleration at the surface of the radiative core to the local gravitational acceleration reaches some critical value $\gamma$, i.e. \begin{equation} G\, M_c\, R_c^{-2} = \gamma\, 4\pi^2\, R_c\, P^{-2}\, , \end{equation} where $M_c$ and $R_c$ are the mass and radius of the radiative core. Leaving aside the issue of what happens in fully convective stars, we can make the approximation that $M_c R_c^{-3}$ is approximately proportional to the central density, so that the period $P_{\rm ss}$ at which super-saturation would be evident depends on central density as $P_{\rm ss} \propto \rho_c^{-1/2}$ (assuming that the convection zone rotates as a solid body). The central density at a given mass is strongly time-dependent on the PMS. At 35\,Myr a star of $0.3\,M_{\odot}$ has $\rho_c = 3000$\,kg\,m$^{-3}$, while a $0.9\,M_{\odot}$ star has $\rho_c = 6900$\,kg\,m$^{-3}$ (Siess et al. 2000). At 100\,Myr however, the core of the $0.3\,M_{\odot}$ star is nearly three times denser, while the $0.9\,M_{\odot}$ star is almost unchanged. Thanks to the inverse square-root dependence on density, however, one would expect super-saturation to occur at quite similar periods in objects with a range of masses, and certainly with a variation that is much smaller than if super-saturation occurred at a fixed Rossby number. However, there is no evidence of super-saturation in the chromospheric activity of G- and K-dwarfs which {\em are} coronally super-saturated (Marsden et al. 2009), and this argues that a simple restriction of the filling factor due to a polar concentration of the magnetic field is not the solution. Jardine \& Unruh (1999) have shown that dynamo saturation or complete filling by active regions may not be necessary to explain the observed plateau in X-ray activity and its subsequent decline at very fast rotation rates. 
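As a check on the numbers quoted above, the $P_{\rm ss} \propto \rho_c^{-1/2}$ scaling can be evaluated directly with the Siess et al. (2000) central densities given in the text. The 100\,Myr density of the $0.3\,M_{\odot}$ star is approximated below as exactly three times its 35\,Myr value, a stand-in for ``nearly three times denser''.

```python
# Period at which super-saturation sets in under the Stepien et al. (2001)
# picture: P_ss ∝ rho_c^{-1/2} (solid-body rotation, equation 1).
# Central densities in kg m^-3, keyed by (mass/M_sun, age/Myr); values
# are those quoted in the text from Siess et al. (2000).
rho_c = {
    (0.3, 35):  3000.0,
    (0.9, 35):  6900.0,
    (0.3, 100): 3 * 3000.0,   # "nearly three times denser" (approximation)
    (0.9, 100): 6900.0,       # "almost unchanged"
}

def pss_ratio(mass_a, mass_b, age):
    """P_ss(mass_a) / P_ss(mass_b) at the given age in Myr."""
    return (rho_c[(mass_b, age)] / rho_c[(mass_a, age)]) ** 0.5

print(pss_ratio(0.3, 0.9, 35))   # ~1.5: M-dwarf super-saturates at longer P
print(pss_ratio(0.3, 0.9, 100))  # ~0.9: the ordering nearly reverses
```

The ratio stays within a factor of $\sim$1.5 of unity at both ages, illustrating the point that this model predicts super-saturation at quite similar periods across the mass range, unlike a fixed-Rossby-number criterion.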
In their model, centrifugal forces act to strip the outer coronal volume, either because the plasma pressure exceeds what can be contained by closed magnetic loops (see also Ryan et al. 2005) or because the coronal plasma becomes radiatively unstable beyond the Keplerian co-rotation radius (Collier Cameron 1988). The reduced coronal volume is initially balanced by a rising coronal density, causing a saturation plateau, but at extreme rotation rates, as more of the corona is forced open, the X-ray emission measure falls (see also Jardine 2004). We might expect centrifugal effects to become significant in the most rapidly rotating stars and, whilst there will clearly be a correlation with Rossby number, there will be an important difference in mass dependence. The key dimensionless parameter in the centrifugal stripping model is $\alpha_c$, the co-rotation radius expressed as a multiple of the stellar radius \begin{equation} \alpha_c = (G M_{\ast} P^2/ 4 \pi^2 R_{\ast}^{3})^{1/3} \propto M_{\ast}^{1/3} R_{\ast}^{-1} P^{2/3}\, , \end{equation} where $P$ is the rotation period. Thus coronal activity should saturate at some small value of $\alpha_c$ and then super-saturate at an even smaller $\alpha_c$. In the samples considered here there is a two order of magnitude range in $P$ but a much smaller range in $M_{\ast}^{1/3} R_{\ast}^{-1}$. Hence we expect to see saturation and super-saturation occur at short periods corresponding to some critical values of $\alpha_c$. However, in stars with lower masses and smaller radii, these critical $\alpha_c$ values will be reached at {\it shorter} periods, such that $P_{\rm ss} \propto M^{-1/2} R^{3/2}$, where the factor $M^{-1/2} R^{3/2}$ in NGC~2547 varies from 0.9 (in solar units) for the most massive stars in our sample to 0.5 in the lowest mass stars. Hence super-saturation in the M-dwarfs would occur at shorter periods than for K-dwarfs by a factor approaching 2. 
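A short calculation makes the mass dependence of equation (2) concrete. The stellar masses and radii below are illustrative assumed values, not fits to the NGC~2547 sample.

```python
import math

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8   # solar mass (kg) and radius (m)
DAY = 86400.0                      # seconds per day

def alpha_c(mass_msun, radius_rsun, period_days):
    """Keplerian co-rotation radius in units of the stellar radius,
    alpha_c = (G M P^2 / 4 pi^2 R^3)^(1/3)."""
    m = mass_msun * M_SUN
    r = radius_rsun * R_SUN
    p = period_days * DAY
    return (G * m * p**2 / (4 * math.pi**2 * r**3)) ** (1.0 / 3.0)

# Assumed parameters: a fast-rotating K-dwarf at P = 0.3 d already sits
# near the alpha_c <~ 2.5 super-saturation regime ...
print(alpha_c(0.9, 0.9, 0.3))   # ~2.0
# ... while a smaller M-dwarf at the same period does not
# (it needs a shorter period, since P_ss ∝ M^{-1/2} R^{3/2}):
print(alpha_c(0.3, 0.4, 0.3))   # ~3.2
```

Under these assumed parameters the K-dwarf reaches $\alpha_c \approx 2$ at $P = 0.3$\,d while the M-dwarf is still at $\alpha_c \approx 3$, consistent with the argument that M-dwarfs must spin faster before centrifugal stripping can bite.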
To test these ideas Fig.~\ref{rkplot} shows $L_{x}/L_{\rm bol}$ as a function of both $\log N_R$ and $\alpha_c$, with the stars grouped into mass subsets in a similar way to sections~3 and~4. The stellar radii and masses for the comparison samples were estimated from their luminosities and the Siess et al. (2000) models as described in sections~3 and~4. The $\alpha_c$ parameter, like the rotation period, is a poor predictor of what happens to X-ray activity in the slowly-rotating and low-activity regimes. Saturated levels of coronal activity are reached for $\alpha_c = 10$--30, dependent on the mass of the star. We interpret this to mean that centrifugal forces have a negligible effect on coronal structures in this regime. For large values of $\alpha_c$ and concomitantly large values of $\log N_R$ we suppose that coronal activity is determined by the efficiency of the magnetic dynamo, hence explaining the reasonably small scatter within the shaded area of the left hand panel of Fig.~\ref{rkplot}. On the other hand, super-saturation seems to be achieved when $\alpha_c \la 2.5$ and the modest scatter within the shaded area of the right hand panel of Fig.~\ref{rkplot}, compared with that for $\log N_R < -1.8$ in the left hand panel, suggests that centrifugal effects may start to control the level of X-ray emission somewhere between this value and $\alpha_c \sim 10$. In this model it now becomes clear that the reason we have no clear evidence for super-saturation in M-dwarfs is that they are not spinning fast enough for their coronae to be affected by centrifugal forces. There are only two very low-mass M-dwarfs in our sample (from NGC~2547) with $\alpha_c < 2.5$ and they do have upper limits to their activity that could be indicative of super-saturation. If the corona is limited at some small multiple of the co-rotation radius, this implies that X-ray emitting coronal structures exist up to this extent above the stellar surface. 
The arguments for and against the presence of such extended coronal structures are summarised by G\"udel (2004). Perhaps the best supporting evidence comes from the ``slingshot prominences'' observed to occur around many rapidly rotating stars, which often form at or outside the co-rotation radius (e.g. Collier Cameron \& Robinson 1989; Jeffries 1993; Barnes et al. 1998) and may signal the action of the centrifugal stripping process. \section{Summary} We have determined the level of X-ray activity (in terms of $L_{x}/L_{\rm bol}$) for a sample of low-mass K- and M-dwarfs with rotation periods of 0.2--10\,d, in the young ($\simeq 35$\,Myr) open cluster NGC~2547. A deep {\it XMM-Newton} observation is able to detect X-rays from almost all stars with $M\geq 0.35\,M_{\odot}$ and provides detections or upper limits for many more with lower masses. Most targets exhibit saturated levels of X-ray activity ($L_{x}/L_{\rm bol}\simeq 10^{-3}$), but a few of the most rapidly rotating show evidence of lower, ``super-saturated'' activity. The evidence for super-saturation in M-dwarfs with $M<0.55\,M_{\odot}$ is weak, limited to three objects with short periods and very small Rossby numbers, which have 3-sigma upper limits to their coronal activity of $L_{x}/L_{\rm bol}\leq 10^{-3.3}$ to $10^{-3.4}$. These data were combined with X-ray measurements of fast-rotating low-mass stars in the field and from other young clusters, and the results were considered in terms of stellar rotation period and in terms of the Rossby number, which is thought to be diagnostic of magnetic dynamo efficiency. The main result is that while coronal saturation appears to occur below a threshold Rossby number, $\log N_R \la -0.8$, independent of stellar mass, there is evidence that super-saturation does not occur at a fixed Rossby number. 
Super-saturation in the lowest mass M-dwarfs occurs, if at all, at Rossby numbers that are at least three times smaller than those for which super-saturation is observed to occur in G- and K-dwarfs; there is no evidence for super-saturation in M-dwarfs with $\log N_R > -2.3$. Instead it appears that a rotation period $<0.3$\,d is a more accurate predictor of when super-saturation commences and there are only a few M-dwarfs rotating at these periods in our sample. A caveat to this result is that our calculated Rossby numbers rely on rather uncertain semi-empirical values of the convective turnover time. However, these turnover times do unify the slowly-rotating side of the rotation-activity relation in the sense that the use of Rossby numbers significantly reduces the scatter in this relationship and predicts a common threshold Rossby number for coronal saturation. These phenomena can be interpreted within a framework where coronal saturation occurs due to saturation of the dynamo itself at a critical Rossby number, or perhaps due to complete filling of the surface by magnetically active regions. Coronal super-saturation is probably not an intrinsic property of the dynamo, but instead associated with topological changes in the coronal magnetic field with fast rotation rates. The observations favour the centrifugal stripping scenario of Jardine \& Unruh (1999) and Jardine (2004), in which the reduction of the available coronal volume due to instabilities induced by centrifugal forces leads to a drop in X-ray emission as the Keplerian co-rotation radius moves in towards the surface of the star. If centrifugal stripping is the correct explanation for super-saturation then M-dwarfs should super-saturate at shorter periods than K-dwarfs, by factors of up to $\sim 2$. The data analysed here are sparse but consistent with this idea. Determining the X-ray emission from a small sample of rapidly rotating ($P<0.25$\,d) M-dwarfs would resolve this issue. 
The centrifugal stripping idea may also explain the positive correlation between X-ray activity and rotation period seen in very young PMS stars by Stassun et al. (2004) and Preibisch et al. (2005). It could be that the fastest rotating young PMS stars are centrifugally stripped whilst those of slower rotation are merely saturated. Investigating this in detail faces formidable observational and theoretical difficulties, not least the estimation of the intrinsic, non-accretion related X-ray emission, individual absorption column densities for stars and the determination of PMS stellar radii, masses and convective turnover times. \section*{Acknowledgements} Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. RJJ acknowledges receipt of a Science and Technology Facilities Council postgraduate studentship. \nocite{barnes98} \nocite{jeffries93} \nocite{collier89b} \nocite{jeffries98n2547} \nocite{ryan05} \nocite{collier88} \nocite{robinson82} \nocite{rempel06} \nocite{reinersbsat09} \nocite{pallavicini81a} \nocite{mangeney84} \nocite{noyes84} \nocite{dobson89} \nocite{pizzolato03} \nocite{gudel04} \nocite{gilliland86} \nocite{kim96} \nocite{vilhu87sat} \nocite{stauffer94} \nocite{stepien94} \nocite{randich96alphaper} \nocite{patten96} \nocite{kiraga07} \nocite{doyle96} \nocite{solanki97} \nocite{jardine99} \nocite{stauffer97ic23912602} \nocite{kitchatinov94} \nocite{stepien01} \nocite{jardine04} \nocite{jeffries06} \nocite{james00} \nocite{irwin08} \nocite{jeffries05} \nocite{naylor06} \nocite{jeffries04} \nocite{struder01} \nocite{turner01} \nocite{dantona97} \nocite{claria82} \nocite{guainazzi10} \nocite{lyra06} \nocite{mekal95} \nocite{garcia08} \nocite{briggs03} \nocite{saxton03} \nocite{bohlin78} \nocite{leggett96} \nocite{carrera07} \nocite{telleschi05} \nocite{siess00} \nocite{mayne08} \nocite{ventura98} \nocite{randich95ic2602} \nocite{voges99} \nocite{huensch99} 
\nocite{randichsupersat98} \nocite{durney82} \nocite{saar91} \nocite{marsden09} \nocite{cardini07} \nocite{prosser96} \nocite{marino03a} \nocite{gagnepleiades95} \nocite{cruddace84} \nocite{audard00} \nocite{hartman10} \nocite{naylor02} \nocite{kraft91} \nocite{park06} \nocite{stassun04b} \nocite{preibisch05b} \nocite{dahm07} \nocite{landin10} \nocite{perryman98} \nocite{stauffer99} \nocite{stauffer98} \nocite{barrado04} \bibliographystyle{mn2e}
\section{\hspace{5mm}Figures: Numerical simulations} \vspace*{\fill} \bild{simu1yx} {Nearly ideal {\it transversal\/} quantized {\sc Hall} resistance (without spin degeneracy) down to filling factor $\nu=1$. (Computer simulation based on a phenomenological formula averaging over $N>50$ replicas within a charge carrier density variance $\sigma<0.1\,\%$ around $n_{2D}\approx 2.0 \cdot 10^{15}/{\rm m}^2$.)} {16} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{simu1xx} {Nearly ideal {\it longitudinal\/} quantized {\sc Hall} resistance (without spin degeneracy) down to filling factor $\nu=1$. (Computer simulation based on a phenomenological formula averaging over $N>50$ replicas within a charge carrier density variance $\sigma<0.1\,\%$ around $n_{2D}\approx 2.0 \cdot 10^{15}/{\rm m}^2$.)} {16} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{simu2yx} {Nearly ideal {\it transversal\/} quantized {\sc Hall} resistance {\it including\/} spin degeneracy down to filling factor $\nu=2$. (Computer simulation based on a phenomenological formula averaging over $N>50$ replicas within a charge carrier density variance $\sigma<0.1\,\%$ around $n_{2D}\approx 2.0 \cdot 10^{15}/{\rm m}^2$.)} {16} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{simu2xx} {Nearly ideal {\it longitudinal\/} quantized {\sc Hall} resistance {\it including\/} spin degeneracy down to filling factor $\nu=2$. (Computer simulation based on a phenomenological formula averaging over $N>50$ replicas within a charge carrier density variance $\sigma<0.1\,\%$ around $n_{2D}\approx 2.0 \cdot 10^{15}/{\rm m}^2$.)} {16} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{simu3yx} {{\it Transversal\/} quantized {\sc Hall} resistance (without spin degeneracy) down to filling factor $\nu=1$. (Computer simulation based on a phenomenological formula {\it geometrically\/} averaging over $N=1000$ replicas within a charge carrier density variance $\sigma=1\,\%$ around $n_{2D}\approx 2.0 \cdot 10^{15}/{\rm m}^2$. 
Inhomogeneities of the external magnetic field may be modelled in an analogous fashion.)} {16} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{simu3xx} {{\it Longitudinal\/} quantized {\sc Hall} resistance (without spin degeneracy) down to filling factor $\nu=1$. (Computer simulation based on a phenomenological formula {\it geometrically\/} averaging over $N=1000$ replicas within a charge carrier density variance $\sigma=1\,\%$ around $n_{2D}\approx 2.0 \cdot 10^{15}/{\rm m}^2$. Inhomogeneities of the external magnetic field may be modelled in an analogous fashion.)} {16} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{simu4yx} {{\it Transversal\/} quantized {\sc Hall} resistance (without spin degeneracy) down to filling factor $\nu=1$. (Computer simulation based on a phenomenological formula {\it geometrically\/} averaging over $N=1000$ replicas within a charge carrier density variance $\sigma=3\,\%$ around $n_{2D}\approx 2.0 \cdot 10^{15}/{\rm m}^2$. Inhomogeneities of the external magnetic field may be modelled in an analogous fashion.)} {16} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{simu4xx} {{\it Longitudinal\/} quantized {\sc Hall} resistance (without spin degeneracy) down to filling factor $\nu=1$. (Computer simulation based on a phenomenological formula {\it geometrically\/} averaging over $N=1000$ replicas within a charge carrier density variance $\sigma=3\,\%$ around $n_{2D}\approx 2.0 \cdot 10^{15}/{\rm m}^2$. Inhomogeneities of the external magnetic field may be modelled in an analogous fashion.)} {16} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{simu5yx} {{\it Transversal\/} quantized {\sc Hall} resistance (without spin degeneracy) down to filling factor $\nu=1$. (Computer simulation based on a phenomenological formula {\it geometrically\/} averaging over $N=1000$ replicas within a charge carrier density variance $\sigma=6\,\%$ around $n_{2D}\approx 2.0 \cdot 10^{15}/{\rm m}^2$. 
Inhomogeneities of the external magnetic field may be modelled in an analogous fashion.)} {16} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{simu5xx} {{\it Longitudinal\/} quantized {\sc Hall} resistance (without spin degeneracy) down to filling factor $\nu=1$. (Computer simulation based on a phenomenological formula {\it geometrically\/} averaging over $N=1000$ replicas within a charge carrier density variance $\sigma=6\,\%$ around $n_{2D}\approx 2.0 \cdot 10^{15}/{\rm m}^2$. Inhomogeneities of the external magnetic field may be modelled in an analogous fashion.)} {16} \vspace*{\fill} \newpage \section{\hspace{5mm}Figures: Sample geometry and topology} \vspace*{\fill} \bild{hbar}{{\sc Hall}-bar layout}{15} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{hbarpho}{Photograph of the {\sc Hall}-bar}{12} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{hpauw}{{\sc van\,der\,Pauw}-type sample layout}{6} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{hpauwpho}{Photograph of the (broken) {\sc van\,der\,Pauw}-type \lq\lq{\bf centi}\rq\rq\ sample}{12} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{hcorb}{{\sc van\,der\,Pauw}-{\sc Corbino}-hybrid sample layout}{6} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{hcorbpho}{Photograph of the {\sc van\,der\,Pauw}-{\sc Corbino}-hybrid sample}{12} \vspace*{\fill} \newpage \section{\hspace{5mm}Figures: Experimental results} \vspace*{\fill} \bild{x1centi1}{Measurement 1 on the {\sc van\,der\,Pauw}-type \lq\lq${\bf centi}$\rq\rq\ sample}{14} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{x1centi2}{Measurement 2 on the {\sc van\,der\,Pauw}-type \lq\lq${\bf centi}$\rq\rq\ sample}{14} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{x1centi3}{Measurement 3 on the {\sc van\,der\,Pauw}-type \lq\lq${\bf centi}$\rq\rq\ sample}{14} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{x1centi4}{Measurement 4 on the {\sc van\,der\,Pauw}-type \lq\lq${\bf centi}$\rq\rq\ sample}{14} \vspace*{\fill} 
\eject\noindent \vspace*{\fill} \bild{x1centi5}{Measurement 5 on the {\sc van\,der\,Pauw}-type \lq\lq${\bf centi}$\rq\rq\ sample}{14} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{x2centi}{Measurement 6 on the {\sc van\,der\,Pauw}-type \lq\lq${\bf centi}$\rq\rq\ sample}{14} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{x4micro}{Measurement on a {\sc Hall}-bar-type \lq\lq${\bf micro}$\rq\rq\ sample}{14} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{x4milli}{Measurement on a {\sc van\,der\,Pauw}-type \lq\lq${\bf milli}$\rq\rq\ sample}{14} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{x4centi}{Measurement on a {\sc van\,der\,Pauw}-type \lq\lq${\bf centi}$\rq\rq\ sample}{14} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{x4scalin}{Scaling in real space: Comparison of different sample sizes}{14} \vspace*{\fill} \eject\noindent \vspace*{\fill} \bild{x5corb}{Measurement on a {\sc van\,der\,Pauw}-{\sc Corbino}-hybrid sample}{14} \vspace*{\fill} \newpage \section{\hspace{5mm}References} \begin{itemize} \item[{[1]}] {\sc K.\ von\,Klitzing}, {\sc G.\ Dorda}, and {\sc M.\ Pepper}, {\it New method for high-accuracy determination of the fine-structure constant based on quantized Hall resistance\/}, Phys.\ Rev.\ Lett.\ {\bf 45}, 494-497 (1980) \item[{[2]}] {\sc P.A.M.\ Dirac}, {\it Quantized singularities in the electromagnetic field\/}, Proc.\ Roy.\ Soc.\ {\bf A133}, 60-72 (1931) \item[{[3]}] {\sc R.E.\ Prange} and {\sc S.M.\ Girvin}, {\it The Quantum Hall Effect\/}, Springer-Verlag, Berlin 1987 \item[{[4]}] {\sc M.\ Stone} ed., {\it Quantum Hall Effect\/}, World Scientific, Singapore 1992 \item[{[5]}] {\sc M.\ Janssen}, {\sc O.\ Viehweger}, {\sc U.\ Fastenrath}, and {\sc J.\ Hajdu}, {\it Introduction to the Theory of the Integer Quantum Hall Effect\/}, Verlag Chemie, Weinheim 1994 \item[{[6]}] {\sc T.\ Chakraborty} and {\sc P.\ Pietil\"ainen}, {\it The Quantum Hall Effects\/}, Springer-Verlag, Berlin 1995 \item[{[7]}] {\sc K.\ Efetov}, {\it 
Supersymmetry in Disorder and Chaos\/}, Cambridge University Press 1997 \item[{[8]}] {\sc F.D.M.\ Haldane} and {\sc L.\ Chen}, {\it Magnetic flux of \lq\lq vortices\rq\rq\ on the two-dimensional Hall surface\/}, Phys.\ Rev.\ Lett.\ {\bf 53}, 2591 (1984) \item[{[9]}] {\sc J.K.\ Jain}, {\it Theory of the fractional quantum Hall effect\/}, Phys.\ Rev.\ {\bf B41}, 7653-7665 (1990) \item[{[10]}] {\sc S.-C.\ Zhang}, {\sc T.H.\ Hansson}, and {\sc S.\ Kivelson}, {\it Effective-field-theory model for the fractional quantum Hall effect\/}, Phys.\ Rev.\ Lett.\ {\bf 62}, 82-85 (1989) \item[{[11]}] {\sc S.-C.\ Zhang}, {\it The Chern-Simons-Landau-Ginzburg theory of the fractional quantum Hall effect\/}, Int.\ J.\ Mod.\ Phys.\ {\bf B6}, 25-58 (1992) \item[{[12]}] {\sc S.\ Kivelson}, {\sc D.-H.\ Lee}, and {\sc S.-C.\ Zhang}, {\it Global phase diagram in the quantum Hall effect\/}, Phys.\ Rev.\ {\bf B46}, 2223-2238 (1992) \item[{[13]}] {\sc Y.\ Aharonov} and {\sc D.\ Bohm}, {\it Significance of electromagnetic potentials in the quantum theory\/}, Phys.\ Rev.\ {\bf 115}, 485-491 (1959) \item[{[14]}] {\sc Y.\ Aharonov} and {\sc A.\ Casher}, {\it Topological quantum effects for neutral particles\/}, Phys.\ Rev.\ Lett.\ {\bf 53}, 319-321 (1984) \item[{[15]}] {\sc X.G.\ Wen} and {\sc A.\ Zee}, {\it On the possibility of a statistics-changing phase transition\/}, J.\ Phys.\ France {\bf 50}, 1623-1629 (1989) \item[{[16]}] {\sc J.\ Fr\"ohlich} and {\sc A.\ Zee}, {\it Large scale physics of the quantum Hall fluid\/}, Nucl.\ Phys.\ {\bf B364}, 517-540 (1991) \item[{[17]}] {\sc A.P.\ Balachandran}, {\it Chern-Simons dynamics and the quantum Hall effect\/}, Pre\-print Syracuse SU-4228-492 (1991). Published in a volume in honor of Professor R.\ Vijayaraghavan. 
\item[{[18]}] {\sc F.\ Ghaboussi}, {\it Quantum theory of the Hall effect\/}, Int.\ J.\ Theor.\ Phys.\ {\bf 36}, 923-934 (1997) \item[{[19]}] {\sc H.\ Levine}, {\sc S.B.\ Libby}, and {\sc A.M.M.\ Pruisken}, {\it Theory of the quantized Hall effect I\/}, Nucl.\ Phys.\ {\bf B240} (FS12), 30-48 (1984); {\it II\/}, Nucl.\ Phys.\ {\bf B240} (FS12), 49-70 (1984); {\it III\/}, Nucl.\ Phys.\ {\bf B240} (FS12), 71-90 (1984) \item[{[20]}] {\sc A.M.M.\ Pruisken}, {\it On localization in the theory of the quantized Hall effect: a two dimensional realization of the theta vacuum\/}, Nucl.\ Phys.\ {\bf B235} (FS11), 277-298 (1984) \item[{[21]}] {\sc D.E.\ Khmel'nitzkii}, {\it Quantization of Hall conductivity\/}, JETP Lett.\ {\bf 38}, 552-556 (1983) \item[{[22]}] {\sc J.L.\ Cardy} and {\sc E.\ Rabinovici}, {\it Phase structure of Z(P) models in the presence of a theta parameter\/}, Nucl.\ Phys.\ {\bf B205} (FS5) 1-16 (1982) \item[{[23]}] {\sc J.L.\ Cardy} and {\sc E.\ Rabinovici}, {\it Duality and the theta parameter in abelian lattice models\/}, Nucl.\ Phys.\ {\bf B205} (FS5) 17-26 (1982) \item[{[24]}] {\sc A.M.\ Chang} and {\sc D.C.\ Tsui}, {\it Experimental observation of a striking similarity between quantum Hall transport coefficients\/}, Solid State Comm.\ {\bf 56}, 153-154 (1985) \item[{[25]}] {\sc G.D.\ Mahan}, {\it Many-Particle Physics\/}, Plenum Press, New York 1986 \item[{[26]}] {\sc L.V.\ Keldysh}, {\sc D.A.\ Kirzhnitz}, and {\sc A.A.\ Maradudin}, {\it The dielectric function of condensed systems\/}, North-Holland, Amsterdam 1989 \item[{[27]}] {\sc N.\ Schopohl}, private communication. \item[{[28]}] {\sc I.D.\ Vagner} and {\sc M.\ Pepper}, {\it Similarity between quantum Hall transport coefficients\/}, Phys.\ Rev.\ {\bf B37}, 7147-7148 (1988) \item[{[29]}] {\sc K.\ von\,Klitzing}, private communication. 
\item[{[30]}] {\sc M.A.\ Hermann} and {\sc H.\ Sitter}, {\it Molecular Beam Epitaxy\/}, Springer-Verlag, Berlin (1989) \item[{[31]}] {\sc L.J.\ van\,der\,Pauw}, {\it A method of measuring specific resistivity and Hall effects of discs of arbitrary shape\/}, Philips Research Reports {\bf 13}, 1-9 (1958) \item[{[32]}] {\sc M.A.\ Hermann} and {\sc H.\ Sitter}, {\it Molecular Beam Epitaxy\/}, Springer-Verlag, Berlin (1989) \item[{[33]}] {\sc H.P.\ Wei}, {\sc D.C.\ Tsui}, {\sc M.A.\ Paalanen}, and {\sc A.M.M.\ Pruisken}, {\it Experiments on delocalization and universality in the integral quantum Hall effect\/}, Phys.\ Rev.\ Lett.\ {\bf 61}, 1294-1296 (1988) \item[{[34]}] {\sc H.P.\ Wei}, {\sc S.Y.\ Lin}, {\sc D.C.\ Tsui}, and {\sc A.M.M.\ Pruisken}, {\it Effect of long-range potential fluctuations on scaling in the integer quantum Hall effect\/}, Phys.\ Rev.\ {\bf 45}, 3926-3928 (1992) \item[{[35]}] {\sc G.P.\ Carver}, {\it A Corbino disk apparatus to measure Hall mobilities in amorphous semiconductors\/}, Rev.\ Scient.\ Instr.\ {\bf 43}, 1257-1263 (1972) \item[{[36]}] {\sc I.M.\ Templeton}, {\it A simple contactless method for evaluating the low-temperature parameters of a two-dimensional electron gas\/}, J.\ Appl.\ Phys.\ {\bf 62}, 4005-4007 (1987) \item[{[37]}] {\sc A.P.\ Balachandran}, {\sc L.\ Chandar}, and {\sc B.\ Sathiapalan}, {\it Duality and the fractional quantum Hall effect\/}, Nucl.\ Phys.\ {\bf B443}, 465-500 (1995) \item[{[38]}] {\sc A.P.\ Balachandran}, {\sc L.\ Chandar}, and {\sc B.\ Sathiapalan}, {\it Chern-Simons duality and the fractional quantum Hall effect\/}, Int.\ J.\ Mod.\ Phys.\ {\bf A11}, 3587-3608 (1996) \item[{[39]}] {\sc P.F.\ Fontein}, {\sc J.M.\ Lagemaat}, {\sc J.\ Wolter}, and {\sc J.P.\ Andre}, {\it Magnetic field modulation - a method for measuring the Hall conductance with a Corbino disc\/}, Semiconductor Science and Technology {\bf 3}, 915-918 (1988) \item[{[40]}] {\sc B.\ Jeanneret}, {\sc B.D.\ Hall}, {\sc H.-J.\ Buhlmann}, {\sc R.\ 
Houdre}, {\sc M.\ Ilegems}, {\sc B.\ Jeckelmann}, and {\sc U.\ Feller}, {\it Observation of the integer quantum Hall effect by magnetic coupling to a Corbino ring\/}, Phys.\ Rev.\ {\bf B51}, 9752-9756 (1995) \item[{[41]}] {\sc B.\ Jeanneret}, {\sc B.D.\ Hall}, {\sc B.\ Jeckelmann}, {\sc U.\ Feller}, {\sc H.-J.\ Buhlmann}, and {\sc M.\ Ilegems}, {\it AC measurements of edgeless currents in a Corbino ring in the quantum Hall regime\/}, Solid State Comm.\ {\bf 102}, 287-290 (1997) \item[{[42]}] {\sc C.L.\ Petersen} and {\sc O.P.\ Hansen}, {\it Two-dimensional electron gases in the quantum Hall regime: analysis of the circulating current in contactless Corbino geometry\/}, Solid State Comm.\ {\bf 98}, 947-950 (1996) \item[{[43]}] {\sc C.L.\ Petersen} and {\sc O.P.\ Hansen}, {\it The diagonal and off-diagonal AC conductivity of two-dimensional electron gases with contactless Corbino geometry in the quantum Hall regime\/}, J.\ Appl.\ Phys.\ {\bf 80}, 4479-4483 (1996) \item[{[44]}] {\sc R.G.\ Mani}, {\it Steady-state bulk current at high magnetic fields in Corbino-type\linebreak GaAs/AlGaAs heterostructure devices\/}, Europhys.\ Lett.\ {\bf 36}, 203-208 (1996) \item[{[45]}] {\sc H.\ Wolf}, {\sc G.\ Hein}, {\sc L.\ Bliek}, {\sc G.\ Weimann}, and {\sc W.\ Schlapp}, {\it Quantum Hall effect in devices with an inner boundary\/}, Semiconductor Science and Technology {\bf 5}, 1046-50 (1990) \item[{[46]}] {\sc R.J.\ Haug}, {\sc A.D.\ Wieck}, and {\sc K.\ von\,Klitzing}, {\it Magnetotransport properties of Hall-bar with focused-ion-beam written in-plane-gate\/}, Physica {\bf B184}, 192-196 (1993) \item[{[47]}] {\sc R.D.\ Tscheuschner} and {\sc A.D.\ Wieck}, {\it Quantum ballistic transport in in-plane-gate transistors showing onset of a novel ferromagnetic phase transition\/}, Superlattices and Microstructures {\bf 20}, 616-622 (1996) \item[{[48]}] {\sc A.S.\ Sachrajda}, {\sc Y.\ Feng}, {\sc R.P.\ Taylor}, {\sc R.\ Newbury}, {\sc P.T.\ Coleridge}, {\sc J.P.\ McCaffrey}, {\it The 
topological transition from a Corbino to Hall bar geometry\/}, Superlattices and Microstructures {\bf 20}, 651-656 (1996) \item[{[49]}] {\sc U.\ Klass}, {\sc W.\ Dietsche}, {\sc K.\ von\,Klitzing}, and {\sc K.\ Ploog}, {\it Fountain-pressure imaging of the dissipation in quantum-Hall experiments\/}, Physica {\bf B169}, 363-367 (1991) \end{itemize} \newpage \section{\hspace{5mm}Tables} \begin{table}[h] \vspace{4cm} \begin{center} \begin{tabular}{|r|r|r|r|} \hline $\nu$ & \lq\lq{\bf micro}\rq\rq\ & \lq\lq{\bf milli}\rq\rq\ & \lq\lq{\bf centi}\rq\rq\ \\ \hline \hline {\bf 2} & 100 $\%$ & 121 $\%$ & 179 $\%$ \\ \hline {\bf 3} & 100 $\%$ & 150 $\%$ & 50 $\%$ \\ \hline {\bf 4} & 100 $\%$ & 117 $\%$ & 150 $\%$ \\ \hline {\bf 6} & 100 $\%$ & 117 $\%$ & 100 $\%$ \\ \hline {\bf 8} & 100 $\%$ & 117 $\%$ & 50 $\%$ \\ \hline {\bf 10} & 100 $\%$ & 100 $\%$ & 0 $\%$ \\ \hline {\bf 12} & 100 $\%$ & 100 $\%$ & 0 $\%$ \\ \hline \end{tabular}\normalsize \end{center} \vspace{0.5cm} \caption{Relative plateau width in Fig.\,26} \end{table} \eject\noindent% \begin{table}[h] \vspace{4cm} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline meas.\ no.\ & time & $I_{in}$ & $U_{out}$ & $U_{out}$ range \\ \hline \hline 1 & 00:00 & AC & BD & 3 mV \\ & & AB & CD & 0.3 mV \\ \hline 2 & 00:20 & BD & CA & 3 mV \\ & & BC & AD & 0.3 mV \\ \hline 3 & 00:37 & CA & DB & 3 mV \\ & & CD & AB & 0.3 mV \\ \hline 4 & 00:45 & DB & AC & 3 mV \\ & & DA & BC & 0.3 mV \\ \hline 5 & 00:60 & AC & BD & 3 mV \\ & & AB & CD & 0.3 mV \\ \hline 6 & 01:31 & AC & BD & 3 mV \\ & & AB & CD & 0.3 mV \\ \hline \end{tabular}\normalsize \end{center} \vspace{0.5cm} \caption{Measurements depicted in Fig.\,17 - Fig.\,22} \end{table} \eject\noindent% \begin{table}[h] \vspace{5cm} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline curve & $I_{in}$ & $U_{out}$ & $U_{out}$ range & zero line & overall characteristics \\ \hline\hline \boldmath$\alpha$ & AC & DB & 3 mV & low & standard transversal curve \\ \hline \boldmath$\beta$ & ac
& db & 3 mV & low & standard transversal curve \\ \hline \boldmath$\gamma$ & AB & CD & 0.3 mV & low & longitudinal curve with neg.\ slope \\ \hline \boldmath$\delta$ & ab & cd & 0.3 mV & low & longitudinal curve with pos.\ slope \\ \hline \boldmath$\epsilon$ & AD & bd & 3 mV & mid & longitudinal curve with pos.\ slope \\ \hline \boldmath$\zeta$ & AC & bd & 0.3 mV & mid & curve with second derivative content \\ \hline \boldmath$\eta$ & AB & bd & 3 mV & mid & longitudinal curve with neg.\ slope \\ \hline \boldmath$\vartheta$ & AC & bB & 3 mV & low & hybrid curve \\ \hline \end{tabular} \end{center} \vspace{0.5cm} \caption{Evaluation of Fig.\,27} \end{table} \eject \section{\hspace{5mm}Introduction and theoretical perspective} A remarkable fact is that the inverse {\sc von\,Klitzing} constant [1] \begin{equation} \frac{1}{R_{vK}} = \frac{e^2}{h} = \frac{e}{h/e} \end{equation} is nothing but a ratio between an elementary electric and an elementary magnetic quantity, namely the elementary electronic charge and the {\sc London} magnetic flux quantum.% \footnote{% \lq\lq {\sc London} magnetic flux quantum $h/e$\rq\rq\ as opposed to the \lq\lq BCS magnetic flux quantum $h/2e$\rq\rq\ reflecting the fact that the electrons, from which the superconducting ground state is built, are paired.} This is not unlike the expression for the vacuum impedance \begin{equation} Z_0=\sqrt{\frac{\mu_0}{\varepsilon_0}}, \end{equation} the fundamental quantity of r.f.\ technology. In fact, the fundamental constant of quantum electrodynamics, the {\sc Sommerfeld} fine structure constant, is given by the ratio (in MKSA units) \begin{equation} \alpha = \frac{Z_0}{2R_{vK}} = \frac{\sqrt{\mu_0/\varepsilon_0}}{2\,h/e^2} = \frac{e^2}{4\pi\varepsilon_0\,\hbar c} = \frac{e^2\mu_0c}{4\pi\hbar}. 
\end{equation} More strikingly, the ratio $g/e$ between the charge of a hypothetical {\sc Dirac} magnetic monopole [2] and an electron charge is supposed to be \begin{equation} \frac{g}{e} = \frac{Z_0}{2\alpha} = R_{vK}, \end{equation} since the {\sc Dirac} quantization condition for a configuration consisting of an elementary electric charge $e$ and an elementary magnetic charge $g$ reads (in MKSA) \begin{equation} \frac{1}{\hbar c} \cdot \frac{g}{\sqrt{4\pi\mu_0}} \cdot \frac{e}{\sqrt{4\pi\varepsilon_0}} = \frac{1}{2}. \end{equation} Notice that, in terms of {\sc Gau\ss}ian quantities, we have to write instead \begin{equation} \alpha = \frac{e^2}{\hbar c}, \phantom{xxxxx} \frac{g}{e} = \frac{1}{2\alpha}, \phantom{xxxxx} \frac{g \cdot e}{\hbar c} = \frac{1}{2}. \end{equation} \par \lq\lq The {\sc von\,Klitzing} resistance is a universal ratio between a magnetic and an electric quantity.\rq\rq\ This statement suggests that the quantum {\sc Hall} effect is a truly fundamental phenomenon of quantum electrodynamics, contrary to the popular belief prevalent in semiconductor physics [3, 4, 5, 6, 7].% \footnote{% For example, {\sc Haldane} and {\sc Chen} strongly argue against this view [8]. They consider the material independence of the QHE strong evidence {\it against\/} electrodynamic effects. They argue that in a physically realistic situation the effect would otherwise depend on a material-dependent \lq\lq effective fine structure constant\rq\rq\ $\alpha'=(\mu/\varepsilon)^{1/2}/2R_{vK}$. However, quantum physical quantization rules always manifest themselves in terms of bare (microscopic) quantities. One prominent example is the quantization of circulation in neutral superfluids, which depends only on the bare mass (of the helium atoms, for example) in spite of the strong renormalization effects.
(Thanks are extended to Professor {\sc Nils Schopohl} for reminding us of this point.)} In particular, it implies that the transversal conductivity plateaus appearing at integer (resp.\ odd rational) multiples of $e^2/h$ reflect a macroscopic quantum state exhibiting a certain range of rigidity against a variation of external parameters such as the strength of the magnetic field or the density of charge carriers. However, unlike BCS superconductivity (including high-$T_c$ superconductivity), here we do not encounter a Bose-like condensate made up from paired electric charges, i.e.\ the electrons (or holes), but, obviously, a Bose-like condensate made up from flux-charge composites% \footnote{% This interpretation is based on the composite boson model by {\sc Zhang}, {\sc Hansson}, and {\sc Kivelson} for the fractional effect, but it works for the integer one as well. (For a theoretical framework providing a {\it unified\/} description of the integer and fractional quantum {\sc Hall} effects the reader is referred to {\sc Jain}'s work [9].)} [9, 10, 11, 12]. Let us briefly recall the essence of the argument. \par The state characterized by a filling factor $\nu=1$ may be regarded as an assembly of bound states, each made up from a point-like electric charge $e$ and an infinitely thin magnetic solenoid carrying a flux quantum $h/e$. The cumulative {\sc Aharonov}-{\sc Bohm}-{\sc Aharonov}-{\sc Casher} (ABAC) [13, 14] phase for adiabatically looping one bound state around another is equal to \begin{eqnarray} && \mbox{{\it q.m.\,phase shift\/}}\,(\mbox{{\rm charge around vortex}}) \cdot \mbox{{\it q.m.\,phase shift\/}}\,(\mbox{{\rm vortex around charge}}) \nonumber\\ && \phantom{123} = \exp\,\left\{\frac{i}{\hbar}\, e\,(h/e) \right\} \cdot \exp\,\left\{\frac{i}{\hbar}\, (h/e)\,e \right\} = \exp\,\left\{\frac{i}{\hbar}\,2\,e\,(h/e) \right\} = \exp\, i 4\pi = 1, \end{eqnarray} where, as usual, we have set $\hbar=h/2\pi$.
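Both the constant ratios above and the cumulative ABAC phase can be sanity-checked numerically. The following sketch (our own; the CODATA values and variable names are assumptions of the illustration, not part of the argument) evaluates $R_{vK}=h/e^2$, $Z_0=\sqrt{\mu_0/\varepsilon_0}$, $\alpha=Z_0/2R_{vK}$ and the phase factor $\exp\{(i/\hbar)\,2\,e\,(h/e)\}$:

```python
import cmath
import math

# CODATA values in SI units (an assumption of this sketch)
e    = 1.602176634e-19    # elementary charge [C]
h    = 6.62607015e-34     # Planck constant [J s]
mu0  = 1.25663706212e-6   # vacuum permeability [N A^-2]
eps0 = 8.8541878128e-12   # vacuum permittivity [F m^-1]
hbar = h / (2 * math.pi)

R_vK  = h / e**2                # von Klitzing resistance
Z0    = math.sqrt(mu0 / eps0)   # vacuum impedance
alpha = Z0 / (2 * R_vK)         # Sommerfeld fine structure constant

# cumulative ABAC phase for looping one charge-flux bound state around another
abac = cmath.exp(1j * 2 * e * (h / e) / hbar)   # exp(i * 4 * pi)

print(R_vK)       # ~ 25812.807 ohm
print(Z0)         # ~ 376.730 ohm
print(1 / alpha)  # ~ 137.036
print(abac)       # ~ (1+0j)
```

As expected, the inverse fine structure constant comes out at the familiar 137.036, and the cumulative phase is unity.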
\par A simple exchange of two bound states, interpreted as an exchange of indistinguishable particles in the sense of quantum mechanics, is topologically equivalent to one half of the above operation, such that the statistics parameter becomes \begin{equation} \exp\,i\theta = \exp\,\left\{\frac{i}{\hbar}\,e\,(h/e) \right\} = \exp\, i 2\pi = 1, \end{equation} where we assumed that the constituents, the electric as well as the magnetic ones, are all bosons. It follows immediately that the composites are bosons, at least in a long-distance limit. \par However, if we try to incorporate these objects into an action principle, i.e.\ into a Lagrangian framework, things change dramatically: composites of bosonic constituents become fermions, while composites built from fermionic electric charges (as electrons are) and bosonic flux lines transmute to bosonic charge-flux composites. Let us explain how this happens. The Lagrangian function for an assembly of electrically charged point-like particles in an external electromagnetic potential is given by \begin{equation} L = L_{kin} - V({\bf x}_\alpha) + \sum_\alpha {\dot {\bf x}}_\alpha \cdot {\bf A}({\bf x}_\alpha) = L_{kin} - \sum_\alpha \frac{dx_\mu^\alpha}{dt} A^\mu, \end{equation} with $x_\mu$ and $A_\mu$ being the space-time coordinate and the electromagnetic 4-potential, respectively. The attachment of a flux line $\Phi$ to an electric charge $Q$ may be viewed as an additional constraint, say \begin{equation} Q \propto \Phi, \end{equation} such that for the elementary quanta the relation \begin{equation} e = \frac{1}{R_{vK}} \cdot \frac{h}{e} \end{equation} is fulfilled. In an oversimplified language of mathematical physics, where we set $\hbar=h/2\pi=1$ as well as $e=1$, the celebrated {\sc von\,Klitzing} resistance is simply $2\pi$, and the constraint has to be rewritten as \begin{equation} Q = \frac{1}{2\pi} \cdot \Phi.
\end{equation} This is a poor man's form% \footnote{freely adapted from a joke by {\sc P.W.\ Anderson} on the renormalization group analysis of the {\sc Kondo} problem.} of the {\sc Chern}-{\sc Simons} relation [10, 11, 12]. Expressed in terms of the associated densities, e.g.\ for one pointlike composite at the origin, \begin{eqnarray} \varrho_{2D} (x,y) &=& \delta(x)\delta(y) \\ B_z (x,y) &=& 2\pi \delta(x)\delta(y) \end{eqnarray} we have \begin{eqnarray} \int \varrho_{2D} \, d^2 x &=& \frac{1}{2\pi} \, \int B_z d^2 x \nonumber \\ &=& \frac{1}{2\pi} \, \int {\bf rot}\,{\bf A}\, d^2 x \;=\;\frac{1}{2\pi} \, \oint {\bf A} d{\bf l}. \end{eqnarray} Using a relativistic notation, which is of course the natural choice in the context of a problem involving classical electrodynamics, we rewrite this as \begin{eqnarray} j^0 &=& \frac{1}{2\pi} F^{12} \nonumber \\ &=& \frac{1}{2\pi} (\partial^1 A^2 - \partial^2 A^1) \nonumber \\ &=& \frac{1}{4\pi} \, \varepsilon_{0\sigma\tau} \, (\partial^\sigma A^\tau - \partial^\tau A^\sigma), \end{eqnarray} which, taking {\sc Lorentz} invariance into account, may be generalized to \begin{equation} j^\varrho = \frac{1}{4\pi} \, \varepsilon_{\varrho\sigma\tau} \, (\partial^\sigma A^\tau - \partial^\tau A^\sigma). \end{equation} Inserting this constraint into the Lagrangian we finally get \begin{eqnarray} L &=\;& L_0 - \sum_\alpha \frac{dx^\alpha_\mu}{dt} A^\mu + \frac{1}{4\pi} \int d^2 x \, \varepsilon^{\mu\nu\varrho} A_\mu \partial_\nu A_\varrho \nonumber\\ &=: & L_0 - \int d^2 x \, j_\mu A^\mu + \frac{1}{4\pi} \int d^2 x \, \varepsilon^{\mu\nu\varrho} A_\mu \partial_\nu A_\varrho \nonumber\\ &=: & L_0 - \left(\, \int d^2 x \, j_\mu^{particles} - \frac{1}{4\pi} \int d^2 x \, \varepsilon^{\mu\nu\varrho} \partial_\nu A_\varrho \,\right) \,A^\mu \nonumber\\ &=: & L_0 - \int d^2 x \, (j_\mu^{particles} - j_\mu^{field}) \,A^\mu \nonumber\\ &=: & L_0 - \int d^2 x \, j_\mu^{total}\,A^\mu.
\end{eqnarray} What now seems to come as a surprise is that the quantum mechanical statistics parameter $\theta$ is exactly one fourth of the denominator of the topological constraint term, i.e.\ $\pi$. This can be verified with the help of functional integral techniques: Integrating out the electromagnetic vector potential $A_\mu$ we get an effective non-local action bilinear in the currents $j_\mu^{particles}$. Calculations show that a two-particle-exchange trajectory gives rise to the correct phase factor. The naive picture is consolidated if we redefine the true electric charge of the charge-flux composite as \begin{equation} Q_{true} = \int d^2 x\, j_0^{total} = \int d^2 x \left(j_0^{particles} - \frac{1}{4\pi} \varepsilon_{0\nu\lambda} F^{\nu\lambda} \right) = \frac{1}{2} \, Q, \end{equation} yielding the correct statistics phase factor even in the ABAC-inspired picture.% \footnote{% This point is missed in most popular treatments of two-dimensional statistics (e.g.\ anyons). Readers are often confused about the very origin of statistics transmutation. As shown, the flux-line pierced electron picture has to be supplemented by a renormalization of the effective electric charge of the composite [15].} Thus, if the picture is true, a prerequisite for building the macroscopic Bose-condensed QHE state is the validity of the {\sc Chern}-{\sc Simons} dynamics, a fact emphasized by {\sc Fr\"ohlich}, {\sc Balachandran}, and others [16, 17, 18]. Recently, {\sc Ghaboussi} claimed the fundamental validity of the {\sc Chern}-{\sc Simons} Lagrangian for the {\it integral\/} quantum {\sc Hall} effect [18]. However, the latter is a postulate, at best comparable to the {\sc London} theory of superconductivity. The fundamental problem is to find a microscopic justification of this.
\par As early as 1984 {\sc Levine}, {\sc Libby}, and {\sc Pruisken} [19] as well as {\sc Pruisken} himself [20] described the integral quantum {\sc Hall} effect in the language of a $\sigma$ model with a topological term, in which the longitudinal and transversal components of the conductivity tensor, $\sigma_{xx}$ and $\sigma_{xy}$, respectively, play the role of coupling constants. In an appropriate quantum field theoretical treatment, these are subject to renormalization, expressed in terms of a two-parameter scaling analysis as shown in the pioneering work by {\sc Khmel'nitzkii} [21]. {\sc Hall} conductivity plateaus correspond to vanishing {\sc Callan}-{\sc Symanzik} $\beta$-functions, i.e.\ those points at which the quantized $\sigma$ model exhibits its conformal invariance.% \footnote{% The foliated phase structure of quantum field theories with topological terms has been known for some time in the high-energy physics community, see e.g.\ [22, 23]. In particular, Figs.\ 1 and 2 of Ref.\ 23 anticipate the phase structure of the full quantum {\sc Hall} problem ten years before it became clear [12]. One of us (R.D.T.) is indebted to {\sc R.L.\ Stuller} for this remark.} In spite of its sophistication and beauty, we think that even this model, like many other related approaches, is built on presuppositions which already contain the expected result. Up to now, we have no microscopic theory of the IQHE in which the exact quantization appears as a result, not as a hidden assumption. Moreover, the debate whether the integral quantum {\sc Hall} effect is a direct consequence of the fundamental laws of a dimensionally reduced quantum electrodynamics or genuinely tied to certain subtleties of semiconductor physics still seems to be open.
\par A theory of the quantum {\sc Hall} effect should not only explain the exact quantization of the transversal conductivity but also describe the exact shape of the curves, which reduce to step and delta functions only in the limit of zero temperature. Clearly, {\sc Landau} levels are broadened by impurity scattering, but this effect alone does not explain the shape of the longitudinal and transversal resistivity curves. For the many different approaches to the problem the reader is referred to [3, 4, 5, 6, 7]. \par Some time ago {\sc Chang} and {\sc Tsui} [24] observed that the derivative of the finite-tem\-pe\-ra\-ture quantum {\sc Hall} resistance $\varrho_{yx}$ with respect to the two-dimensional carrier density $n_{2D}$ exhibits a remarkable similarity to the longitudinal resistivity $\varrho_{xx}$, to the extent that one is almost directly proportional to the other, i.e.\ \begin{equation} \frac{d\varrho_{yx}}{dn_{2D}} \approx - a \cdot \varrho_{xx}. \end{equation} Two significant deviations from this behaviour should be mentioned: Firstly, the relation no longer holds in the classical regime $B\rightarrow 0$ and, secondly, the spikes appearing in the derivative of the transversal resistivity are smeared out in the longitudinal one. \par What is the reason for this apparently fundamental relation between the two quantities? {\sc Chang} and {\sc Tsui} speculate about a {\sc Kramers}-{\sc Kronig} type relation based on causality [25]. The presence of a natural frequency scale $\omega_c$, the cyclotron frequency, and the suggestive association of the longitudinal and transversal resistivities as parts of a generalized complex resistivity describing a general type of dielectric response phenomenon in the sense of {\sc Keldysh} {\it et al.\/} [26] should give rise to this kind of dispersion relation.
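What a causality-based dispersion relation of this type means in practice can be illustrated with a toy numerical sketch (our own construction on a model damped oscillator, not the magnetotransport problem itself): for a causal response function, the real part of its Fourier transform is recoverable from the imaginary part alone via a discrete Hilbert transform:

```python
import numpy as np

# Toy causal response: damped oscillation, zero for t < 0 (model assumption)
N, dt, w0, gamma = 4096, 0.01, 5.0, 1.0
t = np.arange(N) * dt
resp = np.exp(-gamma * t) * np.sin(w0 * t)   # decays to ~0 before wraparound

H = np.fft.fft(resp)                          # frequency-domain response

# Keep only Im(H); its inverse transform is the odd part of the signal
odd = np.fft.ifft(1j * H.imag).real           # (resp[n] - resp[-n]) / 2
rec = np.zeros(N)
rec[1:N // 2] = 2.0 * odd[1:N // 2]           # causality: resp[n] = 2*odd[n], n > 0

re_rec = np.fft.fft(rec).real                 # Re(H) reconstructed from Im(H)
err = np.max(np.abs(re_rec - H.real))         # tiny: only the truncated tail
```

The reconstruction error stems solely from the exponentially small tail of the model response, which is the essence of any Kramers-Kronig type statement: the two quadratures of a causal response are not independent.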
We cannot expect, however, that the standard {\sc Kubo} formula treatment provides a background for this speculation, as claimed by {\sc Chang} and {\sc Tsui} in their 1985 paper [24], since it certainly does not contain all the information necessary for a thorough treatment of the electromagnetic response problem defined by the quantum {\sc Hall} setup [27]. \par The relation discovered by {\sc Chang} and {\sc Tsui} can only be directly verified in gated systems where we can continuously control the two-dimensional charge carrier density $n_{2D}$. Therefore, in the context of this paper, it is interesting to re-express the derivative relation in terms of the resistivities and the magnetic field alone. If one assumes that the filling factor \begin{equation} \nu = \frac{n_{2D}h}{eB} \end{equation} is the relevant variable, we may write \begin{equation} \frac{d\varrho_{yx}}{dn_{2D}} = \frac{d\varrho_{yx}}{d\nu} \frac{d\nu}{dn_{2D}} = \frac{d\varrho_{yx}}{dB} \frac{dB}{d\nu} \frac{d\nu}{dn_{2D}} = - \frac{1}{n_{2D}} \frac{d\varrho_{yx}}{dB} \cdot B. \end{equation} Combining both equations we obtain \begin{equation} \frac{d\varrho_{yx}}{dB} \approx a \cdot \frac{n_{2D}}{B} \cdot \varrho_{xx}. \end{equation} Within a simple scaling model {\sc Vagner} and {\sc Pepper} discuss some generalizations of this formula [28]. Assumptions on the nature of the impurity scattering, on the spatial variations of the transversal resistivities, on the strength of the applied magnetic field etc.\ restrict the possible values of the exponents of the general, still phenomenological, formula \begin{equation} \frac{d\varrho_{yx}}{dB} \approx a \cdot n_{2D} \cdot B^s \cdot \varrho_{xx}^t. \end{equation} In the cases accessible in our experiments we have $t=1$ with $s$ being a small positive or negative number of order $10^{-2}$ for a negative or positive slope of $\varrho_{xx}(B)$ at $B=0$, respectively.
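The chain-rule manipulation above is easy to verify numerically. In this sketch (our own; the classical Hall resistivity stands in for an arbitrary smooth function of $\nu$ alone) central differences confirm $d\varrho_{yx}/dn_{2D}=-(B/n_{2D})\,d\varrho_{yx}/dB$:

```python
import numpy as np

e, h = 1.602176634e-19, 6.62607015e-34   # SI values

def rho_yx(n2d, B):
    """Any smooth function of nu = n2d*h/(e*B) alone; here the classical
    Hall resistivity (h/e^2)/nu = B/(e*n2d) serves as an illustration."""
    nu = n2d * h / (e * B)
    return (h / e**2) / nu

n, B, eps = 3.1e15, 5.0, 1e-6            # n_2D in m^-2, B in tesla
d_dn = (rho_yx(n * (1 + eps), B) - rho_yx(n * (1 - eps), B)) / (2 * eps * n)
d_dB = (rho_yx(n, B * (1 + eps)) - rho_yx(n, B * (1 - eps))) / (2 * eps * B)

# the relation derived in the text
print(d_dn, -(B / n) * d_dB)             # the two numbers agree
```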
\par We will use these phenomenological formulas as a basis of numerical simulations where we average over a finite number of replicas with a certain distribution of values simulating inhomogeneities of the two-dimensional charge carrier density $n_{2D}$ and external magnetic field $B$. \par Our strategy is to push quantum {\sc Hall} experiments to the extremes -- in a truly literal sense! If this effect is really a macroscopic quantum effect, then it should be as robust as superconductivity, where it is possible to create situations where the macroscopic quantum state is extended over a region of many kilometers. Furthermore, such a state should exhibit a robustness which allows a \lq\lq quick 'n' dirty\rq\rq\ preparation. Nevertheless, it should still exhibit features uniquely associated with the topology of boundary conditions. This paper is intended to be a first step toward a realization of this strategy, following a (more or less) crazy suggestion by one of us (A.D.W.) to Professor {\sc von\,Klitzing} some time ago, namely to investigate the quantized {\sc Hall} effects on huge samples [29]. \par To summarize, there are three main reasons to do so: \begin{enumerate} \item To study the general limits of {\it macroscopic\/} quantum coherence attributed to the quantum {\sc Hall} phenomenon. \item To study the scaling laws in the {\it infrared\/} (using the terminology of quantum field theory), i.e.\ in the large scale regime. \item To study the influence of inhomogeneities of the charge carrier density, the mobility, and the external magnetic field on the {\it \lq\lq spectral smearing\rq\rq\/} of the QHE signal. \end{enumerate} The latter topic seems to be of great importance in quality control management of III-V (GaAs) molecular beam epitaxy (MBE); for a review on this topic see [30].
If it is possible to interpret the quantum {\sc Hall} curve of a huge sample in an appropriate way, we will have a technique to measure the quality, say the {\it electrical\/} homogeneity, of a full wafer in a purely electronic, non-destructive way. \par The remaining part of the paper is organized as follows: In the next section we present some computer simulations based on a simple phenomenological model. This enables us to get a feeling for the influence of inhomogeneities on the shape of the quantum {\sc Hall} curve. In what follows we briefly describe the experimental set-up including the preparation of the samples. Finally, we review the experimental results, try to interpret them, and make some suggestions towards future research. \section{\hspace{5mm}Computer simulations} The classical formula for the {\sc Hall} resistance in case of a {\sc Hall} bar geometry is given by \begin{equation} R^{cl}_H = \frac{B}{en_{2D}} = \left. \frac{h}{\left( {\displaystyle \frac{hn_{2D}}{eB}} \right) e^2 } \right. . \end{equation} The quantum analog reads \begin{equation} R^{qu}_H = \frac{h}{\nu e^2}. \end{equation} Consequently, an idealized quantum {\sc Hall} curve, in which the {\sc Hall} resistance $R^{ideal}_H(B)$ is understood as a function of the external magnetic field $B$, is given by the assignment \begin{equation} \nu \longmapsto \nu(B) = \left\{ \begin{array}{lllll} {\rm int} \left( {\displaystyle \frac{hn_{2D}}{eB} + \frac{1}{2} } \right) &\phantom{1234} &\mbox{{\rm if}} &\phantom{1234} &{\displaystyle \frac{hn_{2D}}{eB}>\frac{1}{2} } \\ {\displaystyle \frac{hn_{2D}}{eB} } &\phantom{1234} &\mbox{{\rm else}} &\phantom{1234} & \end{array} \right. \end{equation} The \lq\lq else\rq\rq\ condition guarantees that below $\nu=1$ the curve is classical, as is to be expected if the fractional effect is absent. The case including spin degeneracy is modelled in an analogous way (Figures 1-4).
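The assignment above translates directly into a few lines of code. The following sketch (our own naming; SI constants assumed) reproduces the idealized staircase including the classical branch below $\nu=1$:

```python
import numpy as np

e, h = 1.602176634e-19, 6.62607015e-34   # SI values

def nu_of_B(B, n2d):
    """Idealized filling factor: int(x + 1/2) for x > 1/2, else the classical x."""
    x = h * n2d / (e * B)
    return np.where(x > 0.5, np.floor(x + 0.5), x)

def r_hall_ideal(B, n2d):
    """Idealized quantum Hall curve R_H(B) = h / (nu(B) e^2)."""
    return h / (nu_of_B(B, n2d) * e**2)

n2d = 3.1e15                     # 3.1e11 cm^-2 in m^-2
B = np.linspace(0.5, 30.0, 1000)
R = r_hall_ideal(B, n2d)         # staircase merging into the classical B/(e*n2d)
```

On a plateau the value is exactly $h/(\nu e^2)$; at high field, where the filling factor drops below $1/2$, the curve joins the classical straight line.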
\par Averaging may be done additively (arithmetic mean), multiplicatively (geometric\linebreak mean), or reciprocally additively (harmonic mean) by scattering the parameters for the two-di\-men\-si\-o\-nal charge carrier density and the external magnetic field around selected fixed values. For a geometry different from the {\sc Hall} bar, an additional multiplicative factor $\gamma$ has to be included, \begin{equation} R^{cl}_H = \frac{\gamma B}{en_{2D}} = \left. \frac{h}{\left( {\displaystyle \frac{hn_{2D}}{e\gamma B}} \right) e^2 } \right. \end{equation} in such a way that the quantization (which must not depend on geometry) stays intact: \begin{equation} R^{qu}_H = \frac{h}{\nu e^2}. \end{equation} Geometry factors may vary as well and can be absorbed in a redefinition of the two-dimensional carrier density and magnetic field, respectively, introducing new effective quantities $n_{2D}^{ef\!f}$ and $B^{ef\!f}$. This may be useful in comparing totally different kinds of specimens. \par The mathematically inequivalent averaging methods correspond to different quantum {\sc Hall}-sample-network models such as arrangements in series, parallel, or combinations thereof. However, in cases of a sufficiently narrow distribution of values ($\leq 6\,\%$) it does not matter which method we prefer, since then the results of all averaging methods nearly coincide. \par Including in this algorithm the proposal by {\sc Vagner} and {\sc Pepper} [28], we get a fairly good simulation of realistic {\sc Hall} curves which explicitly exhibit the smoothing of the steps and an extremely realistic behaviour of the longitudinal resistance. This can be seen in Figures 5-10. Clearly, our averaging model does not include microscopic mechanisms such as quantum interference. However, our diagrams can be used as a reference in the study of experimental curves to gain knowledge about the large scale behaviour.
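A minimal version of the replica-averaging step might look as follows (our own sketch; the 2\,\% spread, the field value and the number of replicas are illustrative assumptions, not values taken from the experiment). For such a narrow distribution the three means indeed nearly coincide:

```python
import numpy as np

e, h = 1.602176634e-19, 6.62607015e-34

def r_hall_ideal(B, n2d):
    """Idealized quantum Hall resistance h/(nu e^2), nu = int(x+1/2) for x > 1/2."""
    x = h * n2d / (e * B)
    nu = np.where(x > 0.5, np.floor(x + 0.5), x)
    return h / (nu * e**2)

rng = np.random.default_rng(0)
B = 5.0                                           # tesla, one point of the curve
n_mean = 3.1e15                                   # m^-2
n_repl = rng.normal(n_mean, 0.02 * n_mean, 1000)  # 2% inhomogeneity, 1000 replicas

R = r_hall_ideal(B, n_repl)
arith = R.mean()                                  # samples in series
harm  = 1.0 / (1.0 / R).mean()                    # samples in parallel
geom  = np.exp(np.log(R).mean())                  # intermediate network model
```

Scanning $B$ over a full range with such replica averages produces the smoothed staircases referred to in the text.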
\section{\hspace{5mm}Experiments with topologically trivial samples} We grow a GaAs/Al$_{0.33}$Ga$_{0.67}$As modulation-doped heterostructure by molecular beam epitaxy on a semi-insulating GaAs (100) substrate. It consists of a 2 $\mu$m nominally undoped GaAs buffer layer and a 23 nm undoped Al$_{0.33}$Ga$_{0.67}$As spacer layer, followed by 50 nm of Si-doped Al$_{0.33}$Ga$_{0.67}$As and a 10 nm GaAs cap. The two-dimensional electron gas is localized in a sheet within the GaAs buffer right at the interface to the spacer. \par The measured values for the electron sheet density and the electron mobility are \begin{itemize} \item at room temperature 3.36\,$\times$\,10$^{11}$\,cm$^{-2}$ and 6\,477 cm$^2\,$V$^{-1}$s$^{-1}$ \item at T=77 K in the dark 3.33\,$\times$\,10$^{11}$\,cm$^{-2}$ and 84\,600 cm$^2\,$V$^{-1}$s$^{-1}$ \item at T=5 K in the dark 3.1\,$\times$\,10$^{11}$\,cm$^{-2}$ and 336\,000 cm$^2\,$V$^{-1}$s$^{-1}$ \end{itemize} \par After the growth process the wafer ($\#$7235) is cut into parts. One sample is mesa-etched into a standard {\sc Hall}-bar geometry with a width of 150 $\mu$m and a distance of 200 $\mu$m between ohmic contacts, which were made with an AuGe/Ni alloy. Thus in this type of specimen (called \lq\lq {\bf micro}\rq\rq) we have an electrically active area of some 0.1 mm$^2$ depending on the patching. The other ones are square-shaped {\sc van\,der\,Pauw}-type [31] specimens of 3\,$\times$\,3 mm$^2$ = 9 mm$^2$ (called \lq\lq {\bf milli}\rq\rq) and of 1.5\,cm\,$\times$\,1.5\,cm = 225 mm$^2$ (called \lq\lq {\bf centi}\rq\rq), respectively. Roughly speaking, this collection of samples enables us to study real-space scaling experimentally over four orders of magnitude, which is a lot! (The next two orders of magnitude would require specimens of about 100\,000 mm$^2$ corresponding to full 300 mm wafers, which are not available yet.)
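From the 5 K sheet density quoted above one can estimate (our own back-of-the-envelope arithmetic, not a statement from the measurements) where the integer plateaus should be centred, using $B_\nu = n_{2D}h/(e\nu)$:

```python
e, h = 1.602176634e-19, 6.62607015e-34   # SI values

n2d = 3.1e11 * 1e4          # 3.1e11 cm^-2 converted to m^-2
B1 = n2d * h / e            # field at filling factor nu = 1, roughly 12.8 T

for nu in (2, 4, 6, 8):
    print(nu, B1 / nu)      # expected plateau-centre fields in tesla
```

For this density the $\nu=2$ plateau is expected near 6.4 T, well within the reach of a standard laboratory magnet.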
The larger specimens were contacted \lq\lq quick 'n' dirty\rq\rq\ by alloying-in some indium at the corners and the inner edges, respectively. This was done under a nitrogen-hydrogen atmosphere. The samples were mounted on a chip carrier and the measurements were done in a standard way using a home-made% \footnote{by one of us (A.D.W.)} metal cryostat used in experimental lab courses. The arrangements of the contacts and the sample geometry are shown in Figures 11-16. \par Experimental results are shown in Figures 17-27 and Tables 1-3. We observed an interesting aging effect, a drop of the signal for the longitudinal resistivity which disappeared after an experimental rest (Figures 17-22). The effect could be reproduced and is probably attributable to the thermodynamics of the set-up, which eventually caused one sample to break (see Figure 14). \par As a main result of our investigations, the scaling behaviour can be read off from Figures 23-25 and, finally, from Figure 26 and Table 1. \par In the literature, scaling is mostly discussed not by considering the renormalization of the sample size in real space but, rather, in terms of its low temperature behaviour. Essentially, the dependence of conductance on temperature is equivalent to its dependence on the sample size. In a very interesting experiment {\sc H.-P.\ Wei} {\it et al.\/} performed such a scaling analysis for the quantum {\sc Hall} problem [33, 34]. Specifically, they observed the behaviour of the derivative of the transversal {\sc Hall} resistance with respect to the magnetic field, which diverges algebraically as $T \rightarrow 0$. For the critical exponent which characterizes this divergence they obtain from measurements between 4.2 and 0.1 K a universal value of 0.42. \par In any finite-temperature experiment an effective sample size is determined by the {\sc Thouless} length, which is a measure of the mean free path for inelastic scattering.
But the temperature-size analogy rests on certain assumptions which may be questioned in the very large scale limit, so that it certainly makes sense to perform a real-space experiment in the laboratory. By rescaling the magnetic field the curves in Figure 26 are normalized in such a way that they can be compared directly. In Table 1 we list some characteristic properties of the curves, from which the reader may find an appropriate phenomenological formula. In terms of our numerical simulations the samples correspond to a family of replicas of an idealized reference sample with a variation of parameters (i.e.\ the plateau width) within a range of a few percent, showing again the robustness of the effect. \par Two qualitative observations should be underlined: Firstly, whereas the higher plateaux ($\nu>4$) are smeared out, the lower ones seem to become more pronounced and stable for very large samples; secondly, the lifting of the spin degeneracy is worse for huge specimens. That is probably due to the fact that polarized domains have a characteristic maximum size. \par Of course, our samples are not ideal. But, in general, it is very difficult to distinguish between genuine effects and effects due to additional imperfections introduced by the specific production process, which may be afflicted with one or another shortcoming in the MBE growth. According to common wisdom the quantum {\sc Hall} effect is a localization-delocalization phenomenon due to a mild form of disorder, and therefore it is almost impossible to define an idealized reference sample. Nevertheless, it would be useful to do this empirically by repeating these experiments again and again and logging them.
\section{\hspace{5mm}Measurements on topologically non-trivial samples} Like the setup proposed by {\sc van\,der\,Pauw} [31], the technique utilizing a {\sc Corbino} disk (i.e.\ a ring-shaped sample) can be used to measure {\sc Hall} mobilities in different types of semiconductors, see e.g.\ [35], or the low-temperature parameters of a two-dimensional electron gas [36]. \par In the study of the {\it quantized\/} {\sc Hall} effect the {\sc Corbino} technique enables us to study situations in which the samples are contactless (with respect to the injected current) and hence the currents are edgeless. This is interesting since it provides us with information on the physics in the bulk. The connection between 2+1 dimensional {\sc Chern}-{\sc Simons} quantum field theory and 1+1 dimensional conformal field theory indicates, from first principles of quantum physics and gauge theory alone, that the edge and the bulk pictures should not be seen as competing but rather as complementary and, hence, compatible approaches, e.g.\ [37, 38]. Thus the analysis of the {\sc Corbino} topology definitely completes our understanding of a two-dimensional quantum-electrodynamical response phenomenon. \par In the edgeless case the current is injected either inductively, by modulating the external magnetic field with an a.c.\ driven solenoid [39, 40, 41], or by a capacitive coupling [42, 43]. The independent injection of different currents into the different connected components of the topologically disconnected boundary in a ring geometry was also studied [44]. {\sc Wolf} {\it et al.\/} produced window-shaped quantum {\sc Hall} effect samples with contacts both on the inner and on the outer edges. This allowed them to study potentials appearing on contacts inside a sample but still lying on an edge and hence not decoupling from the two-dimensional electron gas [45].
Their experimental results indicate that under quantum {\sc Hall} effect conditions there is no electron transfer between the inner and the outer edge. \par In our experiments we performed conventional quantum {\sc Hall} measurements on a very large {\sc van\,der\,Pauw}-{\sc Corbino} hybrid geometry/topology. The sample is a square-shaped device of 1.5 cm $\times$ 1.5 cm with a centered hole of 7 mm diameter. The latter was milled out by putting the varnished specimen in a spinner and applying a pen-like rod of wood with sandpaper glued to its bottom. (Admittedly a rather unconventional method.) Contacts were soldered in exactly the same way as in the case of the samples with trivial topology. Experimental results are shown in Figure 27 and commented on in Table 3. \par Whereas it is interesting in itself that even in the topologically non-trivial case everything works fine with this \lq\lq quick 'n' dirty\rq\rq\ preparation, we should point the reader to two interesting additional observations: Firstly, the behavior of the slopes in the longitudinal resistivity depends on whether it is measured inside or outside (curves \mbox{\boldmath$\gamma$} and \mbox{\boldmath$\delta$}); secondly, a second-derivative content sets in if the current is injected on a different boundary component than the one on which the voltage is measured (curve \mbox{\boldmath$\zeta$}). However, these structures are not always as pronounced and vary. \section{\hspace{5mm}Conclusion} From the viewpoint of a professional technician our experiments may look a little bit sportive if not amateurish. However, from the viewpoint of a theoretician who is interested in first principles our investigations definitely have the flavour of {\it realized\/} gedankenexperiments intimately touching the first principles underlying our field of interest, namely the mesoscopic realization of a dimensionally reduced 2+1 dimensional quantum electrodynamics.
\par The main result of our work is: Yes, the integral quantum {\sc Hall} effect is indeed a {\it macroscopic\/}, extremely robust, quantum phenomenon. \par How far can we go? Clearly, one should repeat all measurements on still larger samples (up to 300 mm wafers, if they are available, placed in high-energy accelerator detector magnets), at lower temperatures (down to the mK range) and on samples of higher genera in the language of analytic function theory (i.e.\ with more holes). Of course, it would be useful to study the scaling laws in more detail experimentally, although the essence is already captured in Figure 26 and Table 1. \par If we add techniques like focused ion beam lithography, in particular in-plane gated set-ups [46, 47], we could probably observe the topological transition from a {\sc Corbino} disk to a {\sc Hall} bar or from a {\sc Corbino} to a {\sc van\,der\,Pauw} geometry. In other words, we could perform experiments within a unique topology-changing scenario in mesoscopic physics, as was recently done with the help of different preparation methods [48]. Last but not least we should mention the famous fountain pressure experiments by {\sc Klass} {\it et al.\/} which could give us additional relevant information about the physics in these huge samples [49]. \section{\hspace{5mm}Acknowledgements} The authors would like to thank all the members of Angewandte Festk\"orperphysik for experimental support. {\sc Eva Leschinksy} (Witten) gratefully acknowledges the kind hospitality of the Bochum group. One of us (R.D.T.) is indebted to {\sc Sabine Gargosch} and {\sc Martin Versen} (Bochum), {\sc Farhad Ghaboussi} (Konstanz), Pro\-fes\-sor {\sc Nils Scho\-pohl} (Bochum/T\"ubingen), {\sc Hermann He\ss\-ling} and {\sc Larry Stuller} (both at DESY Hamburg) for inspiring discussions. \newpage
\subsection{Room specific prompt creation} \label{ap: prompt} When building the room type codebook, we create prompts by filling the following template. \begin{align*} \text{A [room type] with [obj 1], ... and [obj n]}. \end{align*} where [room type] is a room label annotated in the Matterport3D dataset~\cite{Matterport3D} with manual spelling completion, such as mapping ``l" to ``living room" and ``u" to ``utility room". [obj 1] .. [obj n] are high-frequency object words that co-occur with specific room labels in the training instructions of the REVERIE dataset. A frequency above 0.8 is considered a high frequency. This threshold ensures diversity and limits the number of candidates. For instance, we create the prompt ``A dining room with table and chairs" for room type ``dining room", ``a bathroom with towel and mirror" for room type ``bathroom". \end{multicols} \end{appendices} \subsection{Global Visual and Cross Model Attention} \noindent\textbf{Language Encoder.} We use a pretrained multi-layer transformer encoder~\cite{transformer} to encode the natural language instruction $\mathcal{X}$. Following the convention, we feed the sum of the token embedding, position embedding and token type embedding into the transformer encoder, and the output denoted as $\mathcal{T}$ is taken as language features. \noindent\textbf{Global Node Self-Attention.} To enable each node to perceive global environment information that can further be used to learn room-to-room correlations in the environment layout, we conduct a graph-aware self-attention (GASA)~\cite{vlnduet} over node embeddings $\mathcal{H}_t$ of graph $\mathcal{G}_t$. For simplicity, we use the same symbol $\mathcal{H}_t$ to denote the encoded graph node embeddings. \noindent\textbf{Cross Graph Language Encoder.} We use a cross-modal transformer~\cite{lu2019vilbert} to model both the global and local graph-language relation.
We name the global and local graph-language co-attention models as global branch and local branch, respectively. For the global branch, we perform co-attention of node embeddings $\mathcal{H}_t$ over language features $\mathcal{T}$, while only the current node and its neighbouring navigable nodes are used to compute the co-attention in the local branch. We feed the outputs of the global branch to the \text{Layout Learner} for layout prediction. In addition, the local branch outputs a predicted action distribution and a score for object grounding, while the global branch only generates an action distribution. The action predictions from the global and local branches are fused to make the final decision (Fig.~\ref{fig: layout}). \begin{align*} \mathcal{\tilde{H}}_t^{(glo)} &= \text{Cross-Attn}(\mathcal{H}_t, \mathcal{T}, \mathcal{T}) \tag{1} \\ \mathcal{\tilde{H}}_t^{(loc)} &= \text{Cross-Attn}(\{\mathcal{H}_t(\mathcal{A}_t), \mathcal{H}_t(a_{t,0})\}, \mathcal{T}, \mathcal{T}) \tag{2} \end{align*} where \textit{$\text{Cross-Attn}(query, key, value)$} is a multi-layer transformer decoder, and $\mathcal{A}_t$ stands for neighbouring navigable nodes of the current node $a_{t,0}$. $\mathcal{H}_t(\cdot)$ represents extracting corresponding rows from $\mathcal{H}_t$ by node indices. \section{Methodology} \input{sec3-0-method} \input{sec3-1-baseline} \input{sec3-3-method} \input{sec3-4-method} \input{sec3-5-method} \input{sec3-6-method} \input{sec4-exps} \input{sec5-results} \section{Conclusion} In this work, to enhance the environmental understanding of an autonomous agent in the Embodied Referring Expression Grounding task, we have proposed a Layout Learner and Goal Dreamer. These two modules effectively introduce visual common sense, in our case via an image generation model, into the decision process. Extensive experiments and case studies show the effectiveness of our designed modules. We hope our work inspires further studies that include visual commonsense resources in autonomous agent design. \section{Acknowledgements} This research received funding from the Flanders AI Impuls Programme - FLAIR and from the European Research Council Advanced Grant 788506. \section{Introduction} In recent years, embodied AI has matured. In particular, many works~\cite{vlnduet,hao2020towards,majumdar2020improving,reinforced,song2022one,vlnce_topo,georgakis2022cm2} have shown promising results in Vision-and-Language Navigation (VLN)~\cite{room2room,vlnce}. In VLN, an agent is required to reach a destination following a fine-grained natural language instruction that provides detailed step-by-step information along the path, for example ``Walk forward and take a right turn. Enter the bedroom and stop at the bedside table".
However, in real-world applications and human-machine interactions, it is tedious for people to give such detailed step-by-step instructions. Instead, a high-level instruction only describing the destination, such as ``Go to the bedroom and clean the picture on the wall.", is more usual. In this paper, we target such high-level instruction-guided tasks. Specifically, we focus on the Embodied Referring Expression Grounding task~\cite{reverie,zhu2021soon}. In this task, an agent receives a high-level instruction referring to a remote object, and it needs to explore the environment and localize the target object. When given a high-level instruction, we humans tend to imagine what the scene of the destination looks like. Moreover, we can efficiently navigate to the target room, even in previously unseen environments, by exploiting commonsense knowledge about the layout of the environment. However, for an autonomous agent, generalization to unseen environments still remains challenging. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{imgs/tessar.jpeg} \caption{The agent is required to navigate and find the mentioned object in the environment. Based on acquired commonsense knowledge, the agent correctly identifies the current and surrounding room types. Based on the imagination of the destination, it correctly chooses the unexplored right yellow dot as the next step to go.} \label{fig: tessar} \end{figure} Inspired by how humans make decisions when receiving high-level instructions in unseen environments, as shown in Fig.~\ref{fig: tessar}, we design an agent that can identify the room type of current and neighboring navigable areas based on a room type codebook and previous states. On top of that, it learns to combine this information with goal imagination to jointly infer the most probable moving direction. Thus, we propose two modules named \textbf{Layout Learner} and \textbf{Goal Dreamer} to achieve this goal. 
In our model, the agent stores trajectory information by building a real-time topological map, where nodes represent either visited or unexplored but partially visible areas. The constructed topological map can be seen as a long-term scene memory. At each time step, the node representation is updated by moving the agent to the current node and receiving new observations. The \textbf{Layout Learner} module learns to infer the layout distribution of the environment with a room-type codebook constructed using a large-scale pre-trained text-to-image model, GLIDE~\cite{glide} in our case. The codebook design helps the model to leverage high-level visual commonsense knowledge of room types and boosts performance in layout learning. This prediction is updated at each time step allowing the agent to correct its prediction when more areas are explored. In the \textbf{Goal Dreamer} module, we encourage the agent to imagine the destination beforehand by generating a set of images with the text-to-image model GLIDE. The use of this imagination prior helps accurate action prediction. The cross-attention between topological node representations and imagination features is conducted, and its output is used to help the agent make a decision at each time step. In summary, the contributions of our paper are threefold: \begin{itemize} \item {We propose a \textbf{Layout Learner} which leverages the visual commonsense knowledge from a room-type codebook generated using the GLIDE model. It not only helps the agent to implicitly learn the environment layout distribution but also to better generalize to unseen environments.} \item {The novel \textbf{Goal Dreamer} module equips the agent with the ability to make action decisions based on the imagination of the unseen destination. 
This module further boosts the action prediction accuracy.} \item {Analyzing different codebook room types shows that visual descriptors of the room concept generalize better than textual descriptions and classification heads. This indicates that, at least in this embodied AI task, visual features are more informative.} \end{itemize} \section{Related work} \subsubsection{Embodied Referring Expression Grounding.} In the Embodied Referring Expression Grounding task~\cite{reverie,zhu2021soon}, many prior works focus on adapting multimodal pre-trained networks to the reinforcement learning pipeline of navigation~\cite{reverie,hop} or introducing pretraining strategies for good generalization~\cite{Qiao2022HOP,Hong_2021_CVPR}. Some recent breakthroughs come from including on-the-fly construction of a topological map and a trajectory memory as done in VLN-DUET~\cite{vlnduet}. Previous models only consider the observed history when predicting the next step. Different from them, we design a novel model to imagine future destinations while constructing the topological map. \subsubsection{Map-based Navigation.} In general language-guided navigation tasks, online map construction gains increasing attention (e.g.,~\cite{chaplot2020object,chaplot2020learning,irshad2022sasra}). A metric map contains full semantic details in the observed space and precise information about navigable areas. Recent works focus on improving subgoal identification~\cite{min2021film,blukis2022persistent,song2022one} and path-language alignment~\cite{wang2022find}. However, online metric map construction is inefficient during large-scale training, and its quality suffers from sensor noise in real-world applications. Other studies focus on topological maps~\cite{nrns,vlnce_topo,vlnduet}, which provide a sparser map representation and good backtracking properties. We use topological maps as the agent's memory.
Our agent learns layout-aware topological node embeddings that are driven by the prediction of room type as the auxiliary task, pushing it to include commonsense knowledge of typical layouts in the representation. \subsubsection{Visual Common Sense Knowledge.} Generally speaking, visual common sense refers to knowledge that frequently appears in a day-to-day visual world. It can take the form of a hand-crafted knowledge graph such as Wikidata~\cite{vrandevcic2014wikidata} and ConceptNet~\cite{liu2004conceptnet}, or it can be extracted from a language model~\cite{acl2022-holm}. However, the knowledge captured in these resources is usually abstract and hard to align with objects mentioned in free language. Moreover, if, for instance, you would like to know what a living room looks like, then several images of different living rooms will form a more vivid description than its word definition. Existing work~\cite{xiaomi@concept} tries to boost the agent's performance using an external knowledge graph in the Grounding Remote Referring Expressions task. Inspired by the recent use of prompts ~\cite{petroni2019lama,brown2020gpt3} to extract knowledge from large-scale pre-trained language models (PLM)~\cite{devlin2019bert,brown2020gpt3,kojima2022large}, we consider pre-trained text-to-image models~\cite{glide,ramesh2022dalle2} as our visual commonsense resources. Fine-tuning a pre-trained vision-language model has been used in multimodal tasks~\cite{lu2019vilbert,su2019vl}. However, considering the explicit usage of prompted images as visual common sense for downstream tasks is novel. Pathdreamer~\cite{koh2021pathdreamer} proposes a model that predicts future observations given a sequence of previous panoramas along the path. It is applied in a VLN setting requiring detailed instructions for path sequence scoring. 
Our work studies the role of general visual commonsense knowledge and focuses on room-level imagination and destination imagination when dealing with high-level instructions. The experiments show that, on the one hand, including visual commonsense knowledge substantially improves task performance. On the other hand, visual common sense performs better than text labels on both environmental layout prediction and destination estimation. \subsection{Overview} \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\textwidth]{imgs/model.png} \end{center} \caption{The model architecture of our Layout-aware Dreamer (LAD). Our model predicts the room type of all nodes of the topological graph; for simplicity, we only show the predictions of several nodes here. The center part is the baseline model, which takes the topological graph and instruction as inputs and dynamically fuses the local and global branch action decisions to predict the next action. The dashed boxes show our proposed Layout Learner and Goal Dreamer.} \label{fig: layout} \end{figure*} \textbf{Task Setup.} In the Embodied Referring Expression Grounding task~\cite{reverie,zhu2021soon}, an agent is spawned at an initial position in an indoor environment. The environment is represented as an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ stands for navigable nodes and $\mathcal{E}$ denotes connectivity edges. The agent is required to correctly localize a remote target object described by a high-level language instruction. Specifically, at the start position of each episode, the agent receives a concise high-level natural language instruction $\mathcal{X}=<w_1, w_2,..., w_L>$, where $L$ is the length of the instruction, and $w_i$ represents the $i$th word token. The panoramic view $\mathcal{V}_t=\{v_{t,i}\}_{i=1}^{36}$ of the agent location at time step $t$ is represented by $36$ images which are the observations of the agent with 12 heading angles and 3 elevations.
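The 36-view discretization can be written out directly. The concrete angle spacing and the elevation-major ordering of $v_{t,i}$ below are illustrative assumptions, since the text does not fix an index layout.

```python
# 36 panoramic views: 12 heading angles (30 degrees apart) at each of
# 3 elevation angles; the angles (-30, 0, +30) and the elevation-major
# ordering are assumptions for illustration, not fixed by the paper.
views = [(heading * 30.0, elevation)
         for elevation in (-30.0, 0.0, 30.0)
         for heading in range(12)]
```

Each entry is a (heading, elevation) pair in degrees, giving one view index per panoramic image.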
At each time step $t$, the agent has access to the state information $S_t$ of its current location consisting of panoramic view $\mathcal{V}_t$, and $M$ neighboring navigable nodes $A_{t} = [a_{t,1}, ..., a_{t,M}]$ with only a single view for each of these, namely the view observable from the current node. These single views of neighboring nodes form $\mathcal{N}_t=[v_{t,1}, ..., v_{t,M}]$. Then the agent is required to take a sequence of actions $<a_0, ... a_t, ...>$ to reach the goal location and ground the object specified by the instruction in the observations. The possible actions $a_{t}$ at each time step $t$ are either selecting one of the navigable nodes $A_t$ or stopping at the current location denoted by $a_{t,0}$. \subsection{Base architecture} Our architecture is built on the basis of the VLN-DUET~\cite{vlnduet} model, which is the previous state-of-the-art model on the REVERIE dataset. In the following paragraphs, we briefly describe several important components of this base architecture, including the topological graph construction and the global and local cross-attention modules, with minimal changes. For more details, we refer the reader to~\citet{vlnduet}. \subsubsection{Topological Graph.} The base model gradually builds a topological graph $\mathcal{G}_t = \{\boldsymbol v,\boldsymbol e \mid \boldsymbol v \subseteq \mathcal{V}, \boldsymbol e \subseteq \mathcal{E} \}$ to represent the environment at time step $t$. The graph contains three types of nodes: (1) visited nodes; (2) navigable nodes; and (3) the current node. In Fig.~\ref{fig: layout}, they are denoted by a blue circle, yellow circle, and double blue circle, respectively. Both visited nodes and the current node have been explored and the agent has access to their panoramic views. The navigable nodes are unexplored and can be partially observed from other visited nodes.
When navigating to a node $a_{t,0}$ at time step $t$, the agent extracts panoramic image features $\mathcal{R}_t$ from its panoramic view $\mathcal{V}_t$ and object features $\mathcal{O}_t$ from the provided object bounding boxes. The model then uses a multi-layer transformer with self-attention to model the relation between the image features $\mathcal{R}_t$ and the object features $\mathcal{O}_t$. The fused features $\hat{\mathcal{R}}_t$ and $\hat{\mathcal{O}}_t$ are treated as local visual features of node $a_{t,0}$. While exploring the environment, the agent updates the node visual representations as follows: (1) For the current node, the node representation is updated by concatenating and average pooling the local features $\hat{\mathcal{R}}_t$ and $\hat{\mathcal{O}}_t$. (2) As unvisited nodes could be partially observed from different directions of multiple visited nodes, the average of all image features of the partial views is taken as its representation. (3) The features of visited nodes remain unchanged. The final representation of nodes is the sum of location embedding, step embedding and visual embedding. The location embedding of a node is formed by the concatenation of Euclidean distance, heading and elevation angles relative to the current node. The step embedding embeds the last visited time step of each visited node, and time zero is set for unvisited nodes. \noindent\textbf{Language Encoder.} We use a multi-layer transformer encoder~\cite{transformer} to encode the natural language instruction $\mathcal{X}$. Following the convention, we feed the sum of the token embedding, position embedding and token type embedding into the transformer encoder, and the output denoted as $\mathcal{T}$ is taken as language features.
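The three node-update rules above can be sketched as follows; the function and variable names are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def update_nodes(node_feats, partial_views, cur_id, R_hat, O_hat):
    """Sketch of the node-representation update rules (1)-(3).
    node_feats:    dict node_id -> feature (d,) of already-explored nodes
    partial_views: dict node_id -> list of partial-view features (d,)
    R_hat, O_hat:  fused image / object features of the current node."""
    # (1) current node: concatenate local features, then average-pool
    node_feats[cur_id] = np.concatenate([R_hat, O_hat], axis=0).mean(axis=0)
    # (2) unvisited nodes: average of all partial views seen so far
    for nid, views in partial_views.items():
        if nid not in node_feats:  # (3) visited nodes stay unchanged
            node_feats[nid] = np.mean(views, axis=0)
    return node_feats

d = 4  # toy feature dimension
feats = update_nodes({}, {"n1": [np.ones(d), 3 * np.ones(d)]}, "cur",
                     R_hat=np.zeros((5, d)), O_hat=np.zeros((2, d)))
```

Location and step embeddings would then be added on top of these visual features, as described above.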
\noindent\textbf{Global Node Self-Attention.} Different from the VLN-DUET model, to enable each node to perceive global environment information without being influenced by the language information, we conduct an additional graph-aware self-attention (GASA)~\cite{vlnduet} over node embeddings $\mathcal{H}_t$ of graph $\mathcal{G}_t$ before interacting with word embeddings. For simplicity, we use the same symbol $\mathcal{H}_t$ to denote the encoded graph node embeddings. \noindent\textbf{Cross Graph Encoder.} Following the work of~\citet{devlin2019bert}, we use a multimodal transformer~\cite{lu2019vilbert} to model both the global and local graph-language relation. We name the global and local graph-language cross-attention models (Global Cross GA and Local GA) as global branch and local branch, respectively. For the global branch, we perform cross-attention of node embeddings $\mathcal{H}_t$ over language features $\mathcal{T}$, while only the current node and its neighboring navigable nodes are used to compute the cross-attention in the local branch. We feed the outputs of the global branch to the \text{Layout Learner} for layout prediction. In addition, both the global and local branch outputs are further used to make the navigation decision, as shown in Fig.~\ref{fig: layout}. \begin{align*} \mathcal{\tilde{H}}_t^{(glo)} &= \text{Cross-Attn}(\mathcal{H}_t, \mathcal{T}, \mathcal{T}) \tag{1} \\ \mathcal{\tilde{H}}_t^{(loc)} &= \text{Cross-Attn}(\{\mathcal{H}_t(\mathcal{A}_t), \mathcal{H}_t(a_{t,0})\}, \mathcal{T}, \mathcal{T}) \tag{2} \end{align*} where $\text{Cross-Attn}(query, key, value)$ is a multi-layer transformer decoder, and $\mathcal{A}_t$ stands for neighbouring navigable nodes of the current node $a_{t,0}$. $\mathcal{H}_t(\cdot)$ represents extracting corresponding rows from $\mathcal{H}_t$ by node indices.
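A minimal single-head, single-layer stand-in for Cross-Attn in Eqs. (1) and (2) can illustrate the two branches; the actual model is a multi-layer transformer decoder, and in the full model the two branches have their own parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attn(query, key, value):
    # single-head scaled dot-product attention; a toy stand-in for the
    # multi-layer transformer decoder Cross-Attn(query, key, value)
    scores = query @ key.T / np.sqrt(query.shape[-1])
    return softmax(scores) @ value

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))    # embeddings of 5 graph nodes
T = rng.normal(size=(7, 8))    # language features of 7 instruction tokens
H_glo = cross_attn(H, T, T)            # Eq. (1): all nodes attend to T
local = [0, 2, 3]                      # current node + navigable neighbours
H_loc = cross_attn(H[local], T, T)     # Eq. (2): local branch
```

Because attention rows are computed independently, the toy local branch coincides with the corresponding rows of the global branch; the learned branches differ through their separate parameters.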
\subsection{Layout Learner} This module aims to learn both the implicit environment layout distribution and visual commonsense knowledge of the room type, which is achieved through an auxiliary layout prediction task with our room type codebook. This auxiliary task is not used directly at inference time. The main purpose of having it during training is learning representations to capture this information, which in turn improves global action prediction. \noindent\textbf{Building Room Type Codebook.} We fetch room type labels from the Matterport point-wise semantic annotations, which contain 30 distinct room types. We then select the large-scale pre-trained text-to-image generation model GLIDE~\cite{glide} as a visual commonsense resource. To better fit the embodied grounding task, we create a prompt $P_{room}$ to elicit visual commonsense knowledge based not only on the room type label but also on high-frequency objects from referring expressions in the training set. Specifically, when building the room type codebook, we create prompts by filling in the following template. \begin{align*} \text{A [room type] with [obj 1], ... and [obj n]}. \end{align*} where [room type] is a room label annotated in the Matterport3D dataset~\cite{Matterport3D} with manual spelling completion, such as mapping ``l" to ``living room" and ``u" to ``utility room". [obj 1] .. [obj n] are high-frequency object words that co-occur with specific room labels in the training instructions of the REVERIE dataset. A frequency above 0.8 is considered a high frequency. This threshold ensures diversity and limits the number of candidates. For instance, we create the prompt ``A dining room with table and chairs" for room type ``dining room", ``a bathroom with towel and mirror" for room type ``bathroom".
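The template above can be reconstructed in a few lines; the label map and co-occurrence frequencies below are toy stand-ins for the real Matterport3D / REVERIE statistics.

```python
# Hypothetical reconstruction of the prompt template; LABEL_MAP and the
# frequency values are toy stand-ins, not the dataset statistics.
LABEL_MAP = {"l": "living room", "u": "utility room", "d": "dining room"}

def make_prompt(room_type, obj_freqs, threshold=0.8):
    room = LABEL_MAP.get(room_type, room_type)  # manual spelling completion
    objs = [o for o, f in obj_freqs.items() if f > threshold]
    if len(objs) > 1:
        obj_str = ", ".join(objs[:-1]) + " and " + objs[-1]
    else:
        obj_str = objs[0] if objs else ""
    return f"A {room} with {obj_str}."

prompt = make_prompt("d", {"table": 0.95, "chairs": 0.9, "vase": 0.3})
# -> "A dining room with table and chairs."
```

Only objects whose co-occurrence frequency exceeds the 0.8 threshold survive into the prompt, matching the selection rule described above.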
For each room type, we generate a hundred images and select $S$ representative ones in the pre-trained CLIP feature space (i.e., the image closest to each cluster center after applying the K-Means clustering algorithm). An example is shown in Fig.~\ref{fig: glide room}. Our selection strategy guarantees the diversity of the generated images, i.e., rooms from various perspectives and lighting conditions. The visual features of the selected images for different room types form the room type codebook $E_{room}\in \mathbb{R}^{K \times S \times 765}$, where $K$ is the total number of room types and $S$ represents the number of images for each room type. This codebook represents a commonsense knowledge base with visual descriptions of what a room should look like. \noindent\textbf{Environment Layout Prediction.} Layout information is critical for indoor navigation, especially when the agent only receives high-level instructions describing the goal positions, such as ``Go to the kitchen and pick up the mug beside the sink". This module equips the agent with both the capability of learning room-to-room correlations in the environment layout and a generalized room type representation. With the help of the visual commonsense knowledge of the rooms in the room type codebook, we perform layout prediction. We compute the similarity score between node representations $\mathcal{\tilde{H}}^{(glo)}_t$ and image features $E_{room}$ in the room type codebook and further use this score to predict the room type of each node in the graph $\mathcal{G}_t$. The predicted room type logits are supervised with ground truth labels $\mathcal{C}_t$.
\begin{align*} \hat{\mathcal{C}}_{t}^{i} &= \sum_{j=1}^{S}{\mathcal{\tilde{H}}^{(glo)}_t E_{room (i,j)}} \tag{3} \\ \mathcal{L}_t^{\text{(LP)}} &=\text{CrossEntropy}(\hat{\mathcal{C}}_t,\mathcal{C}_t) \tag{4} \end{align*} where $S$ is the number of images in the room type codebook for each room type, and $\hat{\mathcal{C}}_t^i$ represents the predicted score of the $i$th room type. We use $\hat{\mathcal{C}}_t$ to denote the predicted score distribution of a node, thus $\hat{\mathcal{C}}_t = [\hat{\mathcal{C}}_t^0,\cdots,\hat{\mathcal{C}}_t^{K}]$. \subsection{Goal Dreamer} A navigation agent without a global map can be short-sighted. We design a long-horizon value function to guide the agent toward the imagined destination. For each instruction, we prompt five images from GLIDE~\cite{glide} as the imagination of the destination. Three examples are shown in Fig.~\ref{fig: glide des}. Imagination features $E^{(im)}$ are extracted from the pre-trained CLIP vision encoder~\cite{clip}. Then at each time step $t$, we attend the topological global node embeddings $\mathcal{\tilde{H}}_t^{(glo)}$ to $E^{(im)}$ through a cross-attention layer~\cite{transformer}. \begin{align*} \mathcal{\hat{H}}_t^{(glo)} = \text{Cross-Attn}(\mathcal{\tilde{H}}_t^{(glo)}, E^{(im)}, E^{(im)}) \tag{5} \end{align*} The hidden state $\mathcal{\hat{H}}_t^{(glo)}$ learned by the Goal Dreamer is projected by a linear feed-forward network (FFN)\footnote{FFNs in this paper are independent without parameter sharing.} to predict the probability distribution of the next action step over all navigable but not visited nodes. \begin{align*} Pr_{t}^{(im)} = \text{Softmax}(\text{FFN}(\mathcal{\hat{H}}_t^{(glo)}))\tag{6} \end{align*} We supervise this distribution $Pr_{t}^{(im)}$ in the warmup stage of the training (see next Section) with the ground truth next action $\mathcal{A}_{gt}$.
\begin{align*} \mathcal{L}_t^{(D)} &= \text{CrossEntropy}(Pr_{t}^{(im)},\mathcal{A}_{gt}) \tag{7} \end{align*} Optimizing $Pr_{t}^{(im)}$ guides the learning of latent features $\mathcal{\hat{H}}_t^{(glo)}$. $\mathcal{\hat{H}}_t^{(glo)}$ will be fused with global logits in the final decision process as described in the following section. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{imgs/room_glide_example.jpeg} \caption{Prompted examples of the room codebook. } \label{fig: glide room} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.95\linewidth]{imgs/ins2img.png} \caption{Images of the destination generated by GLIDE based on the given instruction.} \label{fig: glide des} \end{figure} \subsection{Decision Maker} \noindent\textbf{Action Prediction}. We follow the work of~\citet{vlnduet} to predict the next action to be performed in both the global and local branches and dynamically fuse their results to enable the agent to backtrack to previous unvisited nodes. \begin{align*} Pr_{t}^{\text{(floc)}} = \text{DynamicFuse}(\mathcal{\tilde{H}}_t^{(loc)}, \mathcal{\tilde{H}}_t^{(glo)}) \tag{8} \end{align*} The proposed goal dreamer module equips the agent with the capability of learning guidance towards the target goal, hence we further fuse the goal dreamer's latent features $\mathcal{\hat{H}}_t^{(glo)}$ with global results weighted by a learnable $\lambda_t$. The $\lambda_t$ is node-specific; thus we apply a feed-forward network (FFN) to predict these weights conditioned on node representations. 
\begin{align*} \lambda_t &= \text{FFN}([\mathcal{\tilde{H}}_t^{(glo)};\mathcal{\hat{H}}_t^{(glo)}]) \tag{9} \end{align*} The fused action distribution is formulated as: \begin{align*} Pr_{t}^{\text{(fgd)}} &= (1-\lambda_t) * \text{FFN}(\mathcal{\tilde{H}}_t^{(glo)}) \\ &+ \lambda_t * \text{FFN}(\mathcal{\hat{H}}_t^{(glo)}) \tag{10} \end{align*} The objective supervising the whole decision procedure with the ground truth node $\mathcal{A}_{gt}$ at the next time step is: \begin{align*} Pr_t^{\text{(DSAP)}} &= Pr_{t}^{\text{(floc)}} + Pr_{t}^{\text{(fgd)}} \\ \mathcal{L}_t^{\text{(DSAP)}} &= \text{CrossEntropy}(Pr_t^{\text{(DSAP)}} ,\mathcal{A}_{gt}) \tag{11} \end{align*} where $Pr_t^{\text{(DSAP)}}$ is the estimated single-step action prediction distribution (DSAP) over all nodes. In this way, not only the global-local fusion proposed in previous work but also our goal-dreaming branch contributes to predicting the next action. \noindent\textbf{Object Grounding}. We simply treat object grounding as a classification task and use an FFN to generate a score for each object in $\mathcal{O}_t$ of the current node. We then supervise this score with the annotated ground truth object $\mathcal{O}_{gt}$. \begin{align*} \hat{\mathcal{O}}_t &= \text{FFN}(\mathcal{O}_t) \\ \mathcal{L}_t^{\text{(OG)}} &= \text{CrossEntropy}(\hat{\mathcal{O}}_t, \mathcal{O}_{gt}) \tag{12} \end{align*} \subsection{Training and Inference} \label{sec: training} \textbf{Warmup stage.} Previous studies~\cite{history,vlnduet,episodic,hao2020towards} have shown that warming up the model with auxiliary supervised or self-supervised learning tasks can significantly boost the performance of a transformer-based VLN agent.
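The $\lambda$-weighted fusion of Eqs. (9)-(10) can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions: the FFNs are realized as single weight matrices, and a sigmoid keeps each node-specific $\lambda_t$ in $[0,1]$; the networks in the actual model may be deeper.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D logit vector
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_goal_dreamer(h_glo, h_dream, W_lam, W_glo, W_dream):
    """Sketch of Eqs. (9)-(10): a node-specific weight lambda_t, predicted
    from the concatenated global and dreamer features, blends the two
    per-node logits before a softmax over candidate nodes.
    h_glo, h_dream: (n_nodes, d) node features; W_* are illustrative
    single-layer FFN weights (the paper's FFNs may be deeper)."""
    lam = sigmoid(np.concatenate([h_glo, h_dream], axis=-1) @ W_lam)  # (n_nodes, 1)
    logits = (1.0 - lam) * (h_glo @ W_glo) + lam * (h_dream @ W_dream)
    return softmax(logits.squeeze(-1))  # action distribution over nodes
```

The returned vector sums to one and plays the role of $Pr_t^{\text{(fgd)}}$ in Eq. (10); names and shapes are illustrative, not the released implementation.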
We warm up our model with five auxiliary tasks, including three tasks common in vision-and-language navigation: masked language modeling (MLM)~\cite{devlin2019bert}, masked region classification (MRC)~\cite{lu2019vilbert}, and object grounding (OG)~\cite{hop} if object annotations exist; and two new tasks, namely layout prediction (LP) and single action prediction with the dreamer (DSAP), explained in the sections \text{Layout Learner} and \text{Goal Dreamer}, respectively. In the \text{LP} task, our agent predicts the room type of each node in the topological graph at each time step, aiming to model the room-to-room transitions of the environment; its objective $\mathcal{L}_t^{\text{(LP)}}$ is given in Eq. 4. To encourage the agent to conduct goal-oriented exploration, in the \text{DSAP} task we use the output of the goal dreamer to predict the next action, as formulated in $\mathcal{L}_t^{(D)}$ (Eq. 7) and $\mathcal{L}_t^{\text{(DSAP)}}$ (Eq. 11). The training objective of the warmup stage is as follows: \begin{align*} \mathcal{L}^{\text{WP}}_t = &\mathcal{L}_t^{\text{(MLM)}} +\mathcal{L}_t^{\text{(MRC)}} + \mathcal{L}_t^{\text{(OG)}} + \\ & \mathcal{L}_t^{\text{(LP)}} + \mathcal{L}_t^{\text{(D)}} + \mathcal{L}_t^{\text{(DSAP)}} \tag{13} \end{align*} \noindent\textbf{Imitation Learning and Inference.} We use the imitation learning method DAgger~\cite{dagger} to further train the agent. During training, we use the connectivity graph $\mathcal{G}$ of the environment to select the navigable node with the shortest distance from the current node to the destination as the next target node. We then use this target node to supervise the trajectory sampled from the current policy at each iteration.
The training objective here is: \begin{align*} \mathcal{L}^{\text{IL}}_t = & \mathcal{L}_t^{\text{(OG)}} + \mathcal{L}_t^{\text{(LP)}} + \mathcal{L}_t^{\text{(DSAP)}} \tag{14} \end{align*} During inference, our agent builds the topological map on the fly and selects the action with the largest probability. If the agent decides to backtrack to a previously observed but unexplored node, the classical Dijkstra algorithm~\cite{dijkstra1959note} is used to plan the shortest path from the current node to the target node. The agent stops either when a stop action is predicted at the current location or when it exceeds the maximum number of action steps. When the agent stops, it selects the object with the maximum object prediction score. \section{Experiments} \subsection{Datasets} Because these navigation tasks are characterized by realistic high-level instructions, we conduct experiments and evaluate our agent on the embodied goal-oriented benchmarks REVERIE~\cite{reverie} and SOON~\cite{song2022one}. REVERIE dataset: The dataset is split into four sets: the training set, validation seen set, validation unseen set, and test set. The environments in the validation unseen and test sets do not appear in the training set, while all environments in validation seen have been explored or partially observed during training. The average length of instructions is $18$ words. The dataset also provides object bounding boxes for each panorama, and the length of ground truth paths ranges from $4$ to $7$ steps. SOON dataset: This dataset has a similar data split as REVERIE. The main difference is that it provides an additional validation split with seen instructions, which contains the same instructions in the same houses but with different starting positions. Instructions in SOON contain $47$ words on average, and the ground truth paths range from $2$ to $21$ steps, with $9.5$ steps on average.
The SOON dataset does not provide bounding boxes for object grounding, so we use an existing object detector~\cite{anderson2018bottom} to generate candidate bounding boxes. \subsection{Evaluation Metrics} \noindent\textbf{Navigation Metrics.} Following previous work~\cite{vlnduet,evaluation}, we evaluate the navigation performance of our agent using standard metrics, including Trajectory Length (TL), the average path length in meters; Success Rate (SR), the ratio of paths where the agent's final location is less than $3$ meters away from the target location; Oracle SR (OSR), which counts a trajectory as successful if it ever passes by the target location; and SR weighted by inverse Path Length (SPL). \noindent\textbf{Object Grounding Metrics.} We follow the work of~\citet{reverie} using Remote Grounding Success (RGS), the ratio of successfully executed instructions, and RGS weighted by inverse Path Length (RGSPL). \subsection{Implementation Details} The model is trained for 100k iterations with a batch size of 32 for single action prediction and 50k iterations with a batch size of 8 for imitation learning with DAgger~\cite{dagger}. We optimize both phases with the AdamW~\cite{adamw} optimizer with learning rates of 5e-5 and 1e-5, respectively. We include two fixed models for preprocessing the data, i.e., GLIDE~\cite{glide} for generating the room codebook and the imagined destinations, and CLIP~\cite{clip} for image feature extraction. The whole training procedure takes two days on a single NVIDIA P100 GPU.
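The navigation metrics above can be made concrete with a small sketch. The function below is an illustrative implementation of TL, SR, and SPL under the standard definitions (SPL weights each success by the ratio of shortest-path length to actual path length), not the exact evaluation code of the benchmarks; all names are illustrative.

```python
import numpy as np

def navigation_metrics(path_lengths, final_dists, shortest_lengths, thresh=3.0):
    """Per-episode arrays:
    path_lengths     -- length (in meters) of each executed trajectory
    final_dists      -- distance from the agent's final location to the goal
    shortest_lengths -- shortest-path length from start to goal
    Returns (TL, SR, SPL): mean trajectory length, success rate
    (final distance < thresh meters), and path-length-weighted success."""
    path_lengths = np.asarray(path_lengths, dtype=float)
    final_dists = np.asarray(final_dists, dtype=float)
    shortest_lengths = np.asarray(shortest_lengths, dtype=float)
    success = final_dists < thresh
    tl = path_lengths.mean()
    sr = success.mean()
    # each success contributes shortest / max(actual, shortest) <= 1
    spl = (success * shortest_lengths / np.maximum(path_lengths, shortest_lengths)).mean()
    return tl, sr, spl
```

OSR and RGS follow the same pattern with different per-episode success predicates.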
\section{Results} \begin{table*}[t] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l|cccc|cc||cccc|cc||cccc|cc} \hline \multirow{3}*{Methods} & \multicolumn{6}{c}{Val-seen} & \multicolumn{6}{c}{Val-unseen} & \multicolumn{6}{c}{Test-unseen} \\ ~ & \multicolumn{4}{c}{Navigation} & \multicolumn{2}{c}{Grounding} &\multicolumn{4}{c}{Navigation} & \multicolumn{2}{c}{Grounding} &\multicolumn{4}{c}{Navigation} & \multicolumn{2}{c}{Grounding} \\ \hline ~ & TL $\downarrow$ & OSR$\uparrow$ & SR$\uparrow$ & SPL$\uparrow$ & RGS$\uparrow$ & RGSPL$\uparrow$ & TL $\downarrow$ & OSR$\uparrow$ & SR$\uparrow$ & SPL$\uparrow$ & RGS$\uparrow$ & RGSPL$\uparrow$ & TL $\downarrow$ & OSR$\uparrow$ & SR$\uparrow$ & SPL$\uparrow$ & RGS$\uparrow$ & RGSPL$\uparrow$ \\ \hline RCM~\cite{reinforced} & 10.70 & 29.44 & 23.33 & 21.82 & 16.23 & 15.36 & 11.98 & 14.23 & 9.29 & 6.97 & 4.89 & 3.89 & 10.60 & 11.68 & 7.84 & 6.67 & 3.67 & 3.14 \\ SelfMonitor~\cite{ma2019self} & 7.54 & 42.29 & 41.25 & 39.61 & 30.07 & 28.98 & 9.07 & 11.28 & 8.15 & 6.44 & 4.54 & 3.61 & 9.23 & 8.39 & 5.80 & 4.53 & 3.10 & 2.39 \\ REVERIE~\cite{reverie} & 16.35 & 55.17 & 50.53 & 45.50 & 31.97 & 29.66 & 45.28 & 28.20 & 14.40 & 7.19 & 7.84 & 4.67 & 39.05 & 30.63 & 19.88 & 11.61 & 11.28 & 6.08 \\ CKR~\cite{gao2021room} & 12.16 & 61.91 & 57.27 & 53.57 & 39.07 & - & 26.26 & 31.44 & 19.14 & 11.84 & 11.45& - & 22.46 & 30.40 & 22.00 & 14.25 & 11.60 & - \\ SIA~\cite{hop} & 13.61 & 65.85 & 61.91 & 57.08 & 45.96 & 42.65 & 41.53 & 44.67 & 31.53 & 16.28 & 22.41 & 11.56 & 48.61 & 44.56 & 30.8 & 14.85 & 19.02 & 9.20 \\ VLN-DUET~\cite{vlnduet} & 13.86 & \textbf{73.86} & \textbf{71.75} & \textbf{63.94} & \textbf{57.41} & \textbf{51.14} & 22.11 & 51.07 & 46.98 & 33.73 & 32.15 & 22.60 & 21.30 & 56.91 & 52.51 & 36.06 & 31.88 & 22.06 \\ \hline \textbf{LAD (Ours)} & 16.74 & 71.68 & 69.22 & 57.44 & 52.92 & 43.46 & 26.39 & \textbf{63.96} & \textbf{57.00} & \textbf{37.92} & \textbf{37.80} & \textbf{24.59} & 25.87 & \textbf{62.02} & \textbf{56.53} 
& \textbf{37.8} & \textbf{35.31} & \textbf{23.38} \\ \hline \end{tabular} } \caption{Results obtained on the REVERIE dataset as compared to other existing models, including the current state-of-the-art model VLN-DUET. } \label{tab: main table} \end{table*} \subsection{Comparisons to the State of the Art} \noindent\textbf{Results on REVERIE.} In Table~\ref{tab: main table}, we compare our model with prior works in four categories: (1) imitation learning + reinforcement learning models: RCM~\cite{reinforced}, SIA~\cite{hop}; (2) supervised models: SelfMonitor~\cite{ma2019self}, REVERIE~\cite{reverie}; (3) imitation learning with an external knowledge graph: CKR~\cite{gao2021room}; and (4) imitation learning with topological memory: VLN-DUET~\cite{vlnduet}. Our model outperforms the above models by a large margin in the challenging unseen environments. Notably, our model surpasses the previous state of the art \text{VLN-DUET} by approximately $10\%$ (\text{SR}) and $5\%$ (\text{RGS}) on the val-unseen split. On the test split, our model beats VLN-DUET with improvements of $4.02\%$ in \text{SR} and $3.43\%$ in \text{RGS}. These results demonstrate that the proposed \text{LAD} generalizes better to unseen environments, which is critical for real applications. \noindent\textbf{Results on SOON.} Table~\ref{tab: soon} presents the comparison of our proposed LAD with other models, including the state-of-the-art VLN-DUET model. The LAD model significantly outperforms VLN-DUET across all evaluation metrics on the challenging test unseen split. In particular, it improves \text{SR} and \text{SPL} by $6.15\%$ and $6.4\%$, respectively. This result clearly shows the effectiveness of the proposed \text{Layout Learner} and \text{Goal Dreamer} modules. \subsection{Ablation Studies} We verify the effectiveness of our key contributions via an ablation study on the REVERIE dataset.
\begin{figure*}[t] \centering \includegraphics[width=\textwidth]{imgs/cropped_rt.jpeg} \caption{Room type beliefs are updated while exploring. A blue double circle denotes the current location; a yellow circle denotes an unexplored but visible node; a blue circle denotes a visited node; the red line is the agent's trajectory. Each column contains bird-view and egocentric views of the current agent state.} \label{fig: room change} \end{figure*} \begin{table}[H] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{c|c|ccccc} \hline Split & Methods & TL $\downarrow$ & OSR $\uparrow$ & SR $\uparrow$ & SPL $\uparrow$ & RGSPL $\uparrow$ \\ \hline \multirow{3}*{\makecell[c]{Val \\ Unseen}} & GBE~\cite{zhu2021soon} & 28.96 & 28.54 & 19.52 & 13.34 & 1.16 \\ ~ & VLN-DUET~\cite{vlnduet} & 36.20 & \textbf{50.91} & 36.28 & 22.58 & 3.75 \\ ~ & LAD & 32.32 & 50.59 & \textbf{40.24} & \textbf{29.44} & \textbf{4.20} \\ \hline \multirow{3}*{\makecell[c]{Test \\ Unseen}} & GBE~\cite{zhu2021soon} & 27.88 & 21.45 & 12.90 & 9.23 & 0.45 \\ ~ & VLN-DUET~\cite{vlnduet} & 41.83 & 43.00 & 33.44 & 21.42 & 4.17 \\ ~ & LAD & 35.71 & \textbf{45.80} & \textbf{39.59} &\textbf{27.82} & \textbf{7.08} \\ \hline \end{tabular} } \caption{Results of our LAD model on the SOON dataset compared with the results of other state-of-the-art models.} \label{tab: soon} \end{table} \noindent\textbf{Are the Layout Learner and Goal Dreamer helpful? } We first verify the contribution of the \text{Layout Learner} and \text{Goal Dreamer}. For a fair comparison, we re-implement VLN-DUET~\cite{vlnduet} but replace the visual representations with CLIP features, and report the results in row 1 of Table~\ref{tab: abla_table}. The performance boost of this re-implementation over the VLN-DUET results in Table~\ref{tab: main table} indicates that CLIP features are more suitable for vision-language tasks than the original ImageNet ViT features.
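As a reference point for the codebook comparisons discussed in this section, the room-type scoring of Eq. (3) can be sketched as follows, assuming the $S$ CLIP image embeddings per room type are precomputed; array shapes and names are illustrative, not the released implementation.

```python
import numpy as np

def room_type_scores(h_node, codebook):
    """Sketch of Eq. (3): `codebook` has shape (K, S, d), i.e. S CLIP image
    embeddings of dimension d per room type, and `h_node` is a node feature
    of size d. The score of room type i sums the node/image similarities
    over its S images; a softmax yields the distribution that Eq. (4)
    supervises with a cross-entropy loss."""
    scores = np.einsum('ksd,d->k', codebook, h_node)  # (K,) summed similarities
    e = np.exp(scores - scores.max())                 # stable softmax
    return e / e.sum()
```

Swapping the image embeddings for CLIP text-prompt embeddings turns this visual codebook into the textual codebook compared in the ablation.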
Comparing the results in row $2$ with row $1$, it is clear that integrating the \text{Layout Learner} into the baseline model improves its performance across all evaluation metrics, which verifies our assumption that layout information is vital in high-level instruction following tasks. One might notice that in row $3$ the \text{Goal Dreamer} module boosts the performance in \text{SR}, \text{OSR}, \text{SPL}, and \text{RGSPL}, but slightly harms the performance in \text{RGS}. A lower \text{RGS} but higher \text{RGSPL} shows that the model with the \text{Goal Dreamer} takes fewer steps to reach the goal, meaning that it conducts more effective goal-oriented exploration, which supports our assumption. \begin{table}[t] \resizebox{\linewidth}{!}{ \begin{tabular}{ccc|ccccc} \hline Baseline & Layout Learner & Goal Dreamer & OSR$\uparrow$ & SR$\uparrow$ & SPL$\uparrow$ & RGS$\uparrow$ & RGSPL$\uparrow$ \\ \hline \checkmark& & & 58.68 & 52.34 & 34.45 & 35.02 & 22.87 \\ \checkmark & \checkmark & & 63.90 & 56.04 & 37.66 & 37.06 & 24.58 \\ \checkmark & & \checkmark & 61.03 & 53.45 & 37.41 & 34.34 & 24.03 \\ \checkmark & \checkmark & \checkmark & 63.96 & 57.00 & 37.92 & 37.80 & 24.59 \\ \hline \end{tabular} } \caption{Comparisons of baseline model and baseline with our proposed modules (Layout Learner and Goal Dreamer).} \label{tab: abla_table} \end{table} \noindent\textbf{Visual or textual common sense?} In this work, we use several images to represent a commonsense concept. In this experiment, we study whether visual descriptors of room types lead to better generalization than directly using classification labels or textual descriptions when training the agent. In the first row of Table~\ref{tb: codeboook}, we show the results of directly replacing the visual codebook module with a room label classification head. This causes a $3\%$ drop in both navigation and grounding success rates.
This indicates that a single room type classification head is insufficient for learning good latent features of room concepts. We further compare the results of using a visual codebook with using a textual codebook. Since we use text prompts to generate multiple room images as our visual room codebook, encoded with the CLIP~\cite{clip} visual encoder, for a fair comparison we encode the same text prompts as a textual codebook using the CLIP text encoder. We then replace the visual codebook in our model with the textual one and re-train the whole model. As shown in Table \ref{tb: codeboook}, the textual codebook causes a $5.62\%$ drop in navigation success rate (SR) and a $5.58\%$ drop in remote grounding success rate (RGS). This indicates that visual descriptors of commonsense room concepts are informative and easier to follow for an autonomous agent. \begin{table}[t] \resizebox{\linewidth}{!}{ \begin{tabular}{ccc|ccccc} \hline FFN & Text & Visual & OSR$\uparrow$ & SR$\uparrow$ & SPL$\uparrow$ & RGS$\uparrow$ & RGSPL$\uparrow$ \\ \hline \checkmark & & &58.76 &53.73 & 35.22 &34.68 & 23.58 \\ &\checkmark & & 56.06 & 51.38 & 35.57 & 32.38 & 22.05 \\ & & \checkmark & 63.96 & 57.00 & 37.92 & 37.80 & 24.59 \\ \hline \end{tabular} } \caption{Codebook type comparison: visual room codebook versus textual room codebook and direct classification head.} \label{tb: codeboook} \end{table} \noindent\textbf{Could the room type prediction be corrected while exploring more?} In this section, we study a predicted trajectory. As shown in Fig.~\ref{fig: room change}, the incorrect room type prediction of node $\alpha$ is corrected after exploration of the room. At time step $t$, the observation contains only chairs, so the room type of node $\alpha$ is predicted to be an office. When entering the room at time step $t+1$, the table and television indicate that this room is more likely to be a living room.
After obtaining another view from a different viewpoint, the room type of node $\alpha$ is correctly recognized as a bedroom. Since the instruction states to find the pillow inside the bedroom, the agent can correctly track its progress with the help of room type recognition and successfully execute the instruction. This indicates that the ability to correct former beliefs benefits the layout understanding of the environment and further has a positive influence on the action decision process. We further discuss the room type correction ability quantitatively. Fig.~\ref{fig: room acc} shows the room type recognition accuracy w.r.t. the time step $t$ on the validation unseen set of the REVERIE dataset. It shows that room type recognition accuracy increases with increased exploration of the environment. We also observe that the overall accuracy of room type recognition is still not satisfactory. We see two main reasons: first, the room types defined in MatterPort3D are ambiguous; for instance, there is no well-defined difference between a family room and a living room. Second, many rooms do not have a clear boundary in the visual input (no doors), so it is hard to distinguish connected rooms from the observations. These ambiguities call for softer labels during learning, which is also a reason why using images as a commonsense resource performs better than using textual descriptors or a linear classification head, as seen in Table~\ref{tb: codeboook}. \section{Limitations and future work} In this paper, we describe our findings from including room type prediction and destination imagination in the Embodied Referring Expression Grounding task, but several limitations still require further study. \noindent\textbf{Imagination is not dynamic}, as it is only conditioned on the given instruction.
Incorporating observations and dynamically modifying the imagination with a trainable generation module could help fully exploit the knowledge gained during exploration. This knowledge could guide the imagination model to generate destination images in a style similar to the environment. It is also possible to follow the idea of PathDreamer~\cite{koh2021pathdreamer} and Dreamer~\cite{hafner2019dream,hafner2020dreamer2}, which generate a sequence of hidden future states based on the history to enhance reinforcement learning models. \noindent\textbf{Constant number of generated visual features.} Due to the long generation time and storage consumption, we only generate five images as the goal imaginations. It is possible to increase diversity by generating more images. A better sampling strategy for the visual room codebook construction and destination imagination could then be designed, such as randomly picking a set of images from the generated pool. Since we have observed overfitting in the later stage of training, including randomness in this way could further improve the generalization of the model. \begin{figure} \centering \includegraphics[width=\linewidth]{imgs/unseen_rt_acc.png} \caption{Room recognition accuracy on the validation unseen set of the REVERIE dataset.} \label{fig: room acc} \end{figure}
\section{Introduction} The origin of the cosmological baryon asymmetry is one of the prime open questions in particle physics as well as in cosmology. Among various mechanisms of baryogenesis, leptogenesis~\cite{FukugitaYanagida} is one of the most attractive ideas because of its simplicity and its connection to neutrino physics. In particular, thermal leptogenesis requires only the thermal excitation of heavy right-handed Majorana neutrinos, which generate tiny neutrino masses via the seesaw mechanism~\cite{Type1seesaw}, and provides several implications for the light neutrino mass spectrum~\cite{Buchmulleretal}. The size of the CP asymmetry in right-handed neutrino decay is, roughly speaking, proportional to the mass of the right-handed neutrino. Thus, for a lighter right-handed neutrino the CP violation becomes insufficiently small. This is why leptogenesis at a low energy scale has been regarded as difficult in the conventional Type I seesaw mechanism~\cite{LowerBound,Davidson:2002qv}. On the other hand, in supersymmetric models with conserved R-parity (imposed to avoid rapid proton decay), the stable lightest supersymmetric particle (LSP) becomes a dark matter candidate, but thermal leptogenesis faces the ``gravitino problem'': the overproduction of gravitinos spoils the success of Big Bang Nucleosynthesis (BBN)~\cite{GravitinoProblem}. In order not to overproduce gravitinos, the reheating temperature after inflation must be kept low, so low that right-handed neutrinos can hardly be thermally produced~\cite{GravitinoProblem2}. In the framework of gravity mediated supersymmetry (SUSY) breaking, a few solutions have been proposed, e.g., a gravitino LSP with R-parity violation~\cite{Buchmuller:2007ui}, a very light axino LSP~\cite{Asaka:2000ew}, and strongly degenerate right-handed neutrino masses~\cite{ResonantLeptogenesis}.
Recently, a new class of two Higgs doublet models (THDM)~\cite{2hdm} has been considered in Refs.~\cite{Ma,Nandi,Ma:2006km,Davidson:2009ha,Logan:2010ag, HabaHirotsu}. The motivation is as follows. As mentioned above, the seesaw mechanism naturally realizes tiny masses of active neutrinos through heavy particles coupled to left-handed neutrinos. However, since those heavy particles are almost decoupled in the low-energy effective theory, few signatures are expected in collider experiments. This has motivated the possibility of lowering the seesaw scale to a TeV~\cite{TeVseesaw,Haba:2009sd}, where effects of TeV-scale right-handed neutrinos might be observed in collider experiments such as the Large Hadron Collider (LHC) and the International Linear Collider (ILC). However, a fine-tuning must then be introduced in order to obtain both tiny neutrino masses and a detectable left-right neutrino mixing through which experimental evidence can be discovered. Other right-handed neutrino production processes in extended models, e.g., via $Z'$ exchange~\cite{Zprime} or Higgs/Higgsino decay~\cite{CerdenoSeto}, have also been pointed out. Here, let us recall that the Dirac masses of fermions are proportional to their Yukawa couplings as well as to the vacuum expectation value (VEV) of the relevant Higgs field. Hence, the smallness of a mass might be due not to a small Yukawa coupling but to a small VEV of the Higgs field. Such a situation is indeed realized in some THDMs. For example, in the Type-II THDM with a large $\tan\beta$, the mass hierarchy between up-type and down-type quarks can be explained by the ratio of Higgs VEVs: when $\tan\beta \sim 40$, the Yukawa couplings of the top and bottom quarks are of the same order, close to unity~\cite{Hashimoto:2004xp}. Similarly, the smallness of the neutrino masses compared to those of the quarks and charged leptons may originate from an extra Higgs doublet with a tiny VEV.
The idea is that neutrino masses are much smaller than those of the other fermions because they originate from the different VEV of a different Higgs doublet, so that extremely tiny neutrino Yukawa couplings are not needed. Let us call this kind of model~\cite{Ma,Nandi,Ma:2006km,Davidson:2009ha,Logan:2010ag,HabaHirotsu} a neutrinophilic Higgs doublet model. In particular, in the models of Refs.~\cite{Ma, HabaHirotsu}, tiny Majorana neutrino masses are obtained through a TeV scale Type-I seesaw mechanism without requiring tiny Yukawa couplings. Notice that the neutrino Yukawa couplings in neutrinophilic Higgs doublet models do not need to be small. This fact has significant implications for leptogenesis. The CP asymmetry of right-handed neutrino decay is proportional to the neutrino Yukawa couplings squared, so we can obtain a large CP violation for such large neutrino Yukawa couplings. This opens a new possibility of low scale thermal leptogenesis. In this paper, we will show that the CP asymmetry is enhanced and thermal leptogenesis operates successfully in multi-Higgs models with a neutrinophilic Higgs doublet field, where the tiny VEV of the neutrinophilic Higgs field is compensated by correspondingly larger neutrino Yukawa couplings, so that a TeV-scale seesaw works well. We will show that thermal leptogenesis succeeds at a low energy scale while avoiding the enhancement of lepton number violating washout effects. We will also point out that, in a supersymmetric model, thermal leptogenesis with gravity mediated SUSY breaking works well without confronting the gravitino problem. \section{Neutrinophilic Higgs doublet models} \subsection{Minimal neutrinophilic THDM } \label{subsec:Minimal} Let us present the two Higgs doublet model which we call the neutrinophilic THDM, originally suggested in Ref.~\cite{Ma}. In this model, one additional Higgs doublet $\Phi_{\nu}$, which gives only neutrino Dirac masses, is introduced besides the SM Higgs doublet $\Phi$, together with a discrete $Z_2$-parity.
The $Z_2$-parity charges (and also the lepton numbers) are assigned as in the following table. \begin{table}[h] \centering \begin{center} \begin{tabular}{|l|c|c|} \hline fields & $Z_{2}$-parity & lepton number \\ \hline\hline SM Higgs doublet, $\Phi$ & $+$ & 0 \\ \hline new Higgs doublet, $\Phi_{\nu}$ & $-$ & 0 \\ \hline right-handed neutrinos, $N$ & $-$ & $1$ \\ \hline others & $+$ & $\pm 1$: leptons, $0$: quarks \\ \hline \end{tabular} \end{center} \end{table} Under the discrete symmetry, the Yukawa interactions are given by \begin{eqnarray} {\mathcal L}_{yukawa}=y^{u}\bar{Q}_L \Phi U_{R} +y^d \bar{Q}_{L}\tilde{\Phi}D_{R}+y^{l}\bar{L}\Phi E_{R} +y^{\nu}\bar{L}\Phi_{\nu}N+ \frac{1}{2}M \bar{N^{c}}N +{\rm h.c.}, \label{Yukawa:nuTHDM} \end{eqnarray} where $\tilde{\Phi}=i\sigma_{2}\Phi^{\ast}$, and we omit the generation indices. $\Phi_\nu$ couples only to $N$ because of the $Z_2$-parity, so that flavor changing neutral currents (FCNCs) are suppressed. The quark and charged lepton sectors are the same as in the Type-I THDM, but notice that this neutrinophilic THDM is quite different from the conventional Type-I, II, X, and Y THDMs~\cite{2hdm}. The Higgs potential of the neutrinophilic THDM is given by \begin{align} V^\text{THDM} & = m_\Phi^2 \Phi^\dag \Phi + m_{\Phi_\nu}^2 \Phi_\nu^\dag \Phi_\nu -m_3^2\left(\Phi^\dag \Phi_\nu+\Phi_\nu^\dag \Phi\right) +\frac{\lambda_1}2(\Phi^\dag \Phi)^2 +\frac{\lambda_2}2(\Phi_\nu^\dag \Phi_\nu)^2\nonumber \\ &\qquad+\lambda_3(\Phi^\dag \Phi)(\Phi_\nu^\dag \Phi_\nu) +\lambda_4(\Phi^\dag \Phi_\nu)(\Phi_\nu^\dag \Phi) +\frac{\lambda_5}2\left[(\Phi^\dag \Phi_\nu)^2 +(\Phi_\nu^\dag \Phi)^2\right]. \label{Eq:HiggsPot} \end{align} The $Z_2$-symmetry is softly broken by the $m_3^2$ term.
Taking a parameter set, \begin{equation} m_\Phi^2 < 0, ~~~ m_{\Phi_\nu}^2 > 0, ~~~ |m_{3}^2| \ll m_{\Phi_\nu}^2, \end{equation} we can obtain a hierarchy between the VEVs of the Higgs doublets, \begin{equation} v^2 \simeq \frac{-m_\Phi^2}{\lambda_1}, ~~~ v_{\nu} \simeq \frac{-m_{3}^2 v}{ m_{\Phi_\nu}^2 + (\lambda_3 + \lambda_4 + \lambda_5 ) v^2} , \end{equation} where we have decomposed the SM Higgs doublet $\Phi$ and the extra Higgs doublet $\Phi_{\nu}$ as \begin{eqnarray} \Phi = \left( \begin{array}{c} v+ \frac{1}{\sqrt{2}}\phi^{0}\\ \phi^{-} \end{array} \right) ,\;\; \Phi_{\nu}= \left( \begin{array}{c} v_{\nu}+\frac{1}{\sqrt{2}}\phi^{0}_{\nu} \\ \phi^{-}_{\nu} \end{array} \right). \end{eqnarray} When we take the parameter values $m_\Phi \sim 100$ GeV, $m_{\Phi_\nu} \sim 1$ TeV, and $|m_{3}^2| \sim 10$ GeV$^2$, we obtain $v_\nu \sim 1$ MeV. The smallness of $|m_{3}^2|$ is guaranteed by the ``softly-broken'' $Z_2$-symmetry. In the very large $\tan \beta=v/v_{\nu}\ (\gg 1)$ limit we are interested in, the five physical Higgs boson states and their masses are respectively given by \begin{eqnarray} H^\pm \simeq \ [\phi_\nu^\pm] , && ~~~ m^2_{H^\pm} \simeq m_{\Phi_\nu}^2 + \lambda_3 v^2 , \\ A \simeq {\rm Im} [\phi_{\nu}^0] , && ~~~ m^2_A \simeq m_{\Phi_\nu}^2 + (\lambda_3 + \lambda_4 - \lambda_5) v^2 , \\ h \simeq {\rm Re} [\phi^0] , && ~~~ m^2_{h} \simeq 2 \lambda_1 v^2 , \\ H \simeq {\rm Re} [\phi_\nu^0] , && ~~~ m^2_{H} \simeq m_{\Phi_\nu}^2 + (\lambda_3 + \lambda_4+\lambda_5) v^2 , \end{eqnarray} where negligible ${\cal O}(v_{\nu}^2)$ and ${\cal O}(m_3^2)$ corrections are omitted. Notice that $\tan\beta$ is extremely large, so that the SM-like Higgs $h$ almost entirely originates from $\Phi$, while the other physical Higgs particles, $H^\pm, H, A$, almost entirely originate from $\Phi_\nu$. Since $\Phi_\nu$ has Yukawa couplings only with neutrinos and lepton doublets, distinctive phenomenology can be expected that is not observed in other THDMs.
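As a quick consistency check of the benchmark values quoted above, the VEV formula indeed gives a tiny $v_\nu$:
\begin{equation*}
v_\nu \simeq \frac{|m_3^2|\, v}{m_{\Phi_\nu}^2}
\sim \frac{(10\ {\rm GeV}^2)(100\ {\rm GeV})}{(10^3\ {\rm GeV})^2}
= 10^{-3}\ {\rm GeV} = 1\ {\rm MeV},
\end{equation*}
where the $(\lambda_3+\lambda_4+\lambda_5)v^2$ piece of the denominator, of order $10^4$ GeV$^2$ for couplings of order unity, is a percent-level correction and has been dropped.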
For example, lepton flavor violation (LFV) processes and oblique corrections are estimated in Ref.~\cite{Ma}, and charged Higgs processes in collider experiments are discussed in Refs.~\cite{Davidson:2009ha, Logan:2010ag}~\footnote{ These works deal with the Dirac neutrino version of the neutrinophilic THDM, but the charged Higgs phenomenology is partly similar.}. The neutrino masses including one-loop radiative corrections~\cite{Ma:2006km} are estimated as \begin{equation} (m_\nu)_{ij} = \sum_k\frac{y^{\nu}_{ik} v_\nu y^{\nu T}{}_{kj} v_\nu}{M_k} + \sum_k {y^\nu_{ik} y^{\nu T}{}_{kj} M_{k} \over 16 \pi^{2}} \left[ {m_R^{2} \over m_R^{2}-M_{k}^{2}} \ln {m_R^{2} \over M_{k}^{2}} - {m_I^{2} \over m_I^{2}-M_{k}^{2}} \ln {m_I^{2} \over M_{k}^{2}} \right], \end{equation} where $m_R$ and $m_I$ are the masses of ${\rm Re} [\phi_\nu^{0}]$ and ${\rm Im} [\phi_\nu^{0}]$, respectively. It is easy to see that the tree level contribution gives ${\cal O} (0.1)$ eV neutrino masses for $M_k \sim 1$ TeV, $v_\nu \sim 1$ MeV and $y^{\nu} = {\cal O}(1)$. The one-loop contribution is induced for a nonvanishing $\lambda_5$. When $m_R^{2} - m_I^{2} = 2 \lambda_5 v^{2} \ll m_0^{2} = (m_R^{2} + m_I^{2})/2$, \begin{equation} ({m}_\nu)_{ij} = {\lambda_5 v^{2} \over 8 \pi^{2}} \sum_k {y^\nu_{ik} y^\nu_{jk} M_{k} \over m_0^{2} - M_{k}^{2}} \left[ 1 - {M_{k}^{2} \over m_0^{2}-M_{k}^{2}} \ln {m_0^{2} \over M_{k}^{2}} \right], \end{equation} and in the limiting cases this becomes \begin{eqnarray} &&({m}_\nu)_{ij} = {\lambda_5 v^{2} \over 8 \pi^{2}} \sum_k {y^\nu_{ik} y^\nu_{jk} \over M_{k}} \left[ \ln {M_{k}^{2} \over m_0^{2}} - 1 \right], \;\; (M_{k}^{2} \gg m_0^{2}), \\ && ({m}_\nu)_{ij} = {\lambda_5 v^{2} \over 8 \pi^{2} m_0^{2}} \sum_k y^\nu_{ik} y^\nu_{jk} M_{k}, \;\; (m_0^{2} \gg M_{k}^{2}), \\ && ({m}_\nu)_{ij} \simeq {\lambda_5 v^{2} \over 16 \pi^{2}} \sum_k {y^\nu_{ik} y^\nu_{jk} \over M_{k}}, \;\; (m_0^{2} \simeq M_{k}^{2}). 
\end{eqnarray} Thus, when the masses of the Higgs bosons (except for $h$) and of the right-handed neutrinos are ${\mathcal O}(1)$ TeV, a light neutrino mass scale of order ${\mathcal O}(0.1)$ eV is induced with $\lambda_5 \sim 10^{-4}$. Whether the tree-level contribution dominates over the loop-level one is thus determined by the magnitude of $\lambda_5$ (and of $m_A$, $m_H$), which enter the one-loop diagram. \subsection{A UV theory of neutrinophilic THDM } \label{subsec:HabaHirotsu} Here let us present the model of Ref.~\cite{HabaHirotsu} as an example of a UV completion of the neutrinophilic THDM. This model is constructed by introducing one gauge singlet scalar field $S$, which carries a lepton number, and a $Z_3$-symmetry with the charges shown in the following table. \begin{table}[h] \centering \begin{center} \begin{tabular}{|l|c|c|} \hline fields & $Z_{3}$-charge & lepton number \\ \hline\hline SM Higgs doublet, $\Phi$ & 1 & 0 \\ \hline new Higgs doublet, $\Phi_{\nu}$ & $\omega^{2}$ & 0 \\ \hline new Higgs singlet, $S$ & $\omega$ & $-2$ \\ \hline right-handed neutrinos, $N$ & $\omega$ & 1 \\ \hline others & 1 & $\pm 1$: leptons, $0$: quarks \\ \hline \end{tabular} \end{center} \end{table} Under the discrete symmetry, the Yukawa interactions are given by \begin{eqnarray} {\mathcal L}_{yukawa}=y^{u}\bar{Q}_{L}\Phi U_{R} +y^d \bar{Q}_{L}\tilde{\Phi}D_{R}+y^{l}\bar{L}\Phi E_{R} +y^{\nu}\bar{L}\Phi_{\nu}N+\frac{1}{2}y^{N}S\bar{N^{c}}N +{\rm h.c.} . \label{22} \end{eqnarray} The Higgs potential can be written as \begin{eqnarray} V=&m_\Phi^{2}|\Phi|^{2}+m_{\Phi_{\nu}}^{2}|\Phi_{\nu}|^{2}-m_S^{2}|S|^{2} -\lambda S^{3}-\kappa S\Phi^{\dagger}\Phi_{\nu}\nonumber\\ &+\frac{\lambda_{1}}{2}|\Phi|^{4}+\frac{\lambda_{2}}{2}|\Phi_{\nu}|^{4}+\lambda_{3}|\Phi|^{2}|\Phi_{\nu}|^{2}+\lambda_{4}|\Phi^{\dagger}\Phi_{\nu}|^{2}\nonumber\\ &+\lambda_{S}|S|^4+ \lambda_{\Phi}|S|^{2}|\Phi|^{2}+\lambda_{\Phi_{\nu}}|S|^{2}|\Phi_{\nu}|^{2} + {\rm h.c.} . 
\label{Potential:HabaHirotsu} \end{eqnarray} The $Z_3$-symmetry forbids the dimension-four operators $(\Phi^\dagger \Phi_{\nu})^{2}$, $\Phi^\dagger \Phi_{\nu}|\Phi|^2$, $\Phi^\dagger \Phi_{\nu}|\Phi_{\nu}|^2$, $S^4$, $S^2|S|^2$, $S^2|\Phi|^2$, $S^2|\Phi_{\nu}|^2$, and the dimension-two and -three operators $\Phi^\dagger \Phi_{\nu}$, $S|\Phi|^{2}$, $S|\Phi_\nu|^{2}$. Although small soft breaking terms such as $m_3'^2\Phi^\dagger \Phi_{\nu}$ might be introduced to avoid the domain wall problem, we omit them here for simplicity. It has been shown that, with $\kappa \sim 1$ MeV, the desirable hierarchy of VEVs \begin{eqnarray} && v_s \equiv \langle S \rangle \sim 1 \;\hbox{TeV},\;\;\; v \sim 100 \;\hbox{GeV}, \;\;\; v_{\nu} \sim 1 \;\hbox{MeV}, \end{eqnarray} and the neutrino mass \begin{equation} m_\nu \simeq \frac{y^{\nu2} v_{\nu}^{2}} {M_N } , \end{equation} with the Majorana mass of the right-handed neutrino $M_N = y^N v_s$, can be realized~\cite{HabaHirotsu}. This is the so-called Type-I seesaw mechanism at the TeV scale, when the coefficients $y^\nu$ and $y^N$ are assumed to be of order one. The masses of the scalar and pseudo-scalar states composed mostly of $S$ are given by \begin{eqnarray} m^{2}_{H_{S}} &=& m_3^{2}+2\lambda_{S} v_s^2 , \;\;\;\;\;\; m^{2}_{A_{S}} = 9 \lambda v_s , \end{eqnarray} in the potential Eq.~(\ref{Potential:HabaHirotsu}) without CP violation. For the parameter region with $v_s \gg 1$ TeV, both the scalar and the pseudo-scalar are heavier than the other particles. After integrating out $S$, thanks to the $Z_3$-symmetry, the model reduces effectively to the neutrinophilic THDM with an approximate $Z_2$-symmetry, $\Phi \to \Phi, \Phi_\nu \to -\Phi_\nu$. Compared with the neutrinophilic THDM, the value of $m_3^2$, which is a soft $Z_2$-symmetry breaking term, is expected to be $\kappa v_s$. $\lambda_{5}$ is induced by integrating out $S$, and is estimated as ${\mathcal O}(\kappa^2/m_S^2)\sim 10^{-12}$. Thus, the neutrinophilic THDM has an approximate $Z_2$-symmetry. 
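As a rough numerical cross-check of these scales, consider a sketch with assumed benchmark values (not from a fit): $\kappa\sim 1$ MeV, $m_S\sim 1$ TeV, $y^{\nu}\sim 1$, $M_k\sim 1$ TeV, and the degenerate-limit one-loop formula quoted above.

```python
# Rough numerical check with assumed benchmark values.
import math

kappa = 1e-3            # GeV (~1 MeV, as in the text)
m_S = 1e3               # GeV (~1 TeV singlet mass scale)
lam5 = kappa**2 / m_S**2
print(lam5)             # ~1e-12, the induced lambda_5

v, y, M = 174.0, 1.0, 1e3   # GeV; O(1) Yukawa; TeV-scale M_k
# one-loop mass in the degenerate limit m_0^2 ~ M_k^2
m_loop = lam5 * y**2 * v**2 / (16 * math.pi**2 * M)  # GeV
print(m_loop * 1e9)     # in eV: ~2e-4 eV
```

The induced $\lambda_5$ reproduces the quoted $10^{-12}$, and the corresponding one-loop neutrino mass comes out at the $10^{-4}$ eV level.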
As for the neutrino mass induced from the one-loop diagram\footnote{ We would like to thank J. Kubo and H. Sugiyama for bringing this topic to our attention. }, the UV theory induces a small $\lambda_5\sim 10^{-12}$ due to the $Z_3$-symmetry, so that the radiatively induced neutrino mass from the one-loop diagram is estimated as $\lambda_5 v^2/(4\pi)^2M \sim 10^{-4}$ eV. This is negligible compared with the light neutrino mass induced from the tree-level Type-I seesaw mechanism. The tree-level neutrino mass is \begin{equation} m_\nu^{tree} \sim {y_\nu^2 v_\nu^2 \over M}\sim {y_\nu^2 \kappa^2v^2 \over v_s^2M}, \end{equation} where we have used $v_\nu \sim {\kappa v \over v_s}$. On the other hand, the one-loop induced neutrino mass is estimated as \begin{equation} m_\nu^{loop} \sim {\lambda_5 y_\nu^2 \over 16\pi^2}{v^2 \over M} \sim {y_\nu^2 \over 16\pi^2}{\kappa^2 v^2 \over M^2 M}. \end{equation} Putting $M\sim v_s$, \begin{equation} {m_\nu^{loop} \over m_\nu^{tree}}\sim {1 \over 16\pi^2}, \end{equation} which shows that the loop-induced neutrino mass is always smaller than the tree-level one if the UV theory is the model of Ref.~\cite{HabaHirotsu}. \subsection{Supersymmetric extension of the neutrinophilic Higgs doublet model} \label{subsec:Super} Now let us show the supersymmetric extension of the neutrinophilic Higgs doublet model. The extension is straightforward: one enlarges the Higgs sector to a four Higgs doublet model. The superpotential is given by \begin{eqnarray} {\mathcal W}&=&y^{u}\bar{Q}_{L}H_u U_{R} +y^d \bar{Q}_{L}{H_d}D_{R}+y^{l}\bar{L}H_d E_{R} +y^{\nu}\bar{L}H_{\nu}N+M N^{2} \nonumber \\ && +\mu H_uH_d + \mu' H_\nu H_{\nu'} +\rho H_u H_{\nu'} + \rho' H_\nu H_d, \end{eqnarray} where $H_u$ ($H_d$) is the Higgs doublet which gives masses to the up- (down-) sector. $H_\nu$ gives the neutrino Dirac mass, and $H_{\nu'}$ does not contribute to fermion masses. Under the $Z_2$-parity, $H_u, H_d$ are even, while $H_\nu, H_{\nu'}$ are odd. The $Z_2$-parity is softly broken by the $\rho$ and $\rho'$ terms. 
We assume that $|\mu|, |\mu'| \gg |\rho|, |\rho'|$, and that SUSY breaking soft squared masses can trigger suitable electroweak symmetry breaking. The Higgs potential is given by \begin{eqnarray} V &=& (|\mu|^2 +|\rho|^2) H_u^\dag H_u + (|\mu|^2+|\rho'|^2) H_d^\dag H_d + (|\mu'|^2 +|\rho'|^2) H_{\nu}^\dag H_{\nu} + (|\mu'|^2+|\rho|^2) H_{\nu'}^\dag H_{\nu'} \nonumber \\ && + \frac{g_1^2}{2} \left( H_u^\dag \frac{1}{2} H_u - H_d^\dag\frac{1}{2} H_d + H_{\nu}^\dag \frac{1}{2} H_{\nu} - H_{\nu'}^\dag \frac{1}{2}H_{\nu'} \right)^2 \nonumber \\ && + \sum_a \frac{g_2^2}{2} \left( H_u^\dag \frac{\tau^a}{2} H_u + H_d^\dag\frac{\tau^a}{2} H_d + H_{\nu}^\dag \frac{\tau^a}{2} H_{\nu} + H_{\nu'}^\dag \frac{\tau^a}{2}H_{\nu'} \right)^2 \nonumber \\ && + m_{H_u}^2 H_u^\dag H_u + m_{H_d}^2 H_d^\dag H_d + m_{H_\nu}^2 H_{\nu}^\dag H_{\nu}+ m_{H_{\nu'}}^2 H_{\nu'}^\dag H_{\nu'} \nonumber \\ && + B \mu H_u \cdot H_d + B' \mu' H_{\nu}\cdot H_{\nu'} + \hat{B} \rho H_u \cdot H_{\nu'} + \hat{B}' \rho' H_{\nu}\cdot H_{d}\nonumber\\ && + \mu^* \rho H_d^\dag H_{\nu'}+\mu^* \rho' H_u^\dag H_{\nu}+ \mu'^* \rho' H_{\nu'}^\dag H_{d}+\mu'^* \rho H_\nu^\dag H_{u} + {\rm h.c.} , \end{eqnarray} where $\tau^a$ and the dot represent the generators of $SU(2)$ and its antisymmetric product, respectively. We assume Max.[$|\hat{B}\rho|, |\hat{B}'\rho'|, |\mu\rho|, |\mu'\rho|,|\mu\rho'|,|\mu'\rho'|$] $\sim {\mathcal O}(10)$ GeV$^2$, which triggers the VEV hierarchy between the SM Higgs doublet and the neutrinophilic Higgs doublets. Notice that quarks and charged leptons acquire small non-holomorphic Yukawa couplings with $H_\nu$ through one-loop diagrams associated with the small mass parameters $\hat{B}\rho, \hat{B}'\rho', \mu\rho, \mu'\rho, \mu\rho', \mu'\rho'$. This situation is quite different from the non-SUSY model, where these couplings are extremely suppressed by a factor of $v_\nu/v$. 
As for gauge coupling unification, we must introduce extra particles, but in any case the supersymmetric extension of the neutrinophilic Higgs doublet model can be easily constructed as shown above. \section{Leptogenesis} \subsection{A brief overview of thermal leptogenesis} In the seesaw model, the smallness of the neutrino masses can be naturally explained by the small mixing between the left-handed neutrinos and the heavy right-handed Majorana neutrinos $N_i$. The basic part of the Lagrangian of the SM with right-handed neutrinos is described as \begin{eqnarray} {\cal L}_{N}^{\rm SM}=-y^{\nu}_{ij} \overline{l_{L,i}} \Phi N_j -\frac{1}{2} \sum_{i} M_i \overline{ N^c_i} N_i + h.c. , \label{SMnuYukawa} \end{eqnarray} where $i,j=1,2,3$ denote the generation indices, $y^{\nu}$ is the neutrino Yukawa coupling, $l_L$ and $\Phi$ are the lepton and the Higgs doublets, respectively, and $M_i$ is the lepton-number-violating mass term of the right-handed neutrino $N_i$ (we are working in the basis of the right-handed neutrino mass eigenstates). With these Yukawa couplings, the mass of the left-handed neutrino is expressed by the well-known formula \begin{equation} m_{ij} = \sum_k \frac{y^{\nu}_{ik}v y^{\nu}{}^T_{kj}v}{M_k}. \end{equation} The decay rate of the lightest right-handed neutrino is given by \begin{eqnarray} \Gamma_{N_1} = \sum_j\frac{y^{\nu}_{1j}{}^\dagger y^{\nu}_{j1}}{8\pi}M_1 = \frac{(y^{\nu}{}^\dagger y^{\nu})_{11}}{8\pi}M_1. 
\end{eqnarray} Comparing with the Friedmann equation for a spatially flat spacetime \begin{equation} H^2 = \frac{1}{3 M_P^2}\rho , \label{FriedmannEq} \end{equation} with the energy density of radiation \begin{equation} \rho = \frac{\pi^2}{30}g_*T^4 , \end{equation} where $g_*$ denotes the effective number of relativistic degrees of freedom and $M_P \simeq 2.4 \times 10^{18}$ GeV is the reduced Planck mass, the out of equilibrium decay condition $\Gamma_{N_1} < \left.H\right|_{T=M_1}$ is rewritten as \begin{eqnarray} \tilde{m}_1 \equiv (y^{\nu}{}^\dagger y^{\nu})_{11} \frac{v^2}{M_1} < \frac{8\pi v^2}{M_1^2} \left.H\right|_{T=M_1} \equiv m_* \end{eqnarray} with $ m_* \simeq 1\times 10^{-3}$ eV and $v=174$ GeV. In the case of a hierarchical mass spectrum for the right-handed neutrinos, the lepton asymmetry of the Universe is generated dominantly by the CP-violating out of equilibrium decays of the lightest heavy neutrino, $N_1 \rightarrow l_L \Phi^*$ and $ N_1 \rightarrow \overline{l_L} \Phi $. Its CP asymmetry is given by~\cite{FandG} \begin{eqnarray} \varepsilon &\equiv& \frac{\Gamma(N_1\rightarrow \Phi+\bar{l}_j)-\Gamma(N_1\rightarrow \Phi^*+l_j)} {\Gamma(N_1\rightarrow \Phi+\bar{l}_j)+\Gamma(N_1\rightarrow \Phi^*+l_j)} \nonumber \\ &\simeq& -\frac{3}{8\pi}\frac{1}{(y^{\nu} y^{\nu}{}^{\dagger})_{11}}\sum_{i=2,3} \textrm{Im}(y^{\nu}y^{\nu}{}^{\dagger})^2_{1i} \frac{M_1}{M_i}, \qquad \textrm{for} \quad M_i \gg M_1 . \end{eqnarray} Through the relations of the seesaw mechanism, this can be roughly estimated as \begin{eqnarray} \varepsilon \simeq \frac{3}{8\pi}\frac{M_1 m_3}{v^2} \sin\delta \simeq 10^{-6}\left(\frac{M_1}{10^{10}\textrm{GeV}}\right) \left(\frac{m_3}{0.05 \textrm{eV}}\right) \sin\delta, \label{epsilon} \end{eqnarray} where $m_3$ is the heaviest light neutrino mass, normalized by $0.05$ eV, the value preferred to account for atmospheric neutrino oscillation data~\cite{atm}. 
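The two benchmark numbers above, $m_*\simeq 10^{-3}$ eV and $\varepsilon\sim 10^{-6}$, can be checked in a few lines (a sketch with the values of $v$, $g_*$, $M_P$, $M_1$ quoted above and $\sin\delta=1$):

```python
import math

v, M_P, g_star = 174.0, 2.4e18, 100.0   # GeV, reduced Planck mass, dof
M1 = 1e10                               # GeV (m_* is independent of M1)
# Hubble rate at T = M_1: H = sqrt(pi^2 g_*/90) T^2 / M_P
H = math.sqrt(math.pi**2 * g_star / 90) * M1**2 / M_P
m_star = 8 * math.pi * v**2 / M1**2 * H     # GeV
print(m_star * 1e9)                         # in eV: ~1e-3

# CP asymmetry estimate, Eq. (epsilon), with sin(delta) = 1:
m3 = 0.05e-9                                # GeV (0.05 eV)
eps = 3 / (8 * math.pi) * M1 * m3 / v**2
print(eps)                                  # ~2e-6
```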
Using the above $\varepsilon$, the resultant baryon asymmetry generated via thermal leptogenesis is expressed as \begin{equation} \frac{n_b}{s} \simeq C \kappa \frac{\varepsilon}{g_*} , \label{b-sRatio} \end{equation} where $\left. g_*\right|_{T=M_1} \sim 100$, the so-called dilution (or efficiency) factor $ \kappa \leq {\cal O}(0.1) $ denotes the dilution by wash-out processes, and the coefficient \begin{equation} C = \frac{8 N_f + 4 N_H}{22 N_f + 13 N_H} , \label{C} \end{equation} with $N_f$ and $N_H$ being the numbers of fermion generations and Higgs doublets~\cite{C}, is the conversion factor from lepton to baryon asymmetry by the sphaleron~\cite{KRS}. In order to obtain the observed baryon asymmetry of our Universe, $n_b/s \simeq 10^{-10}$~\cite{WMAP}, the inequality $\varepsilon \gtrsim 10^{-7}$ is required. This can be rewritten as $M_1 \gtrsim 10^9$ GeV, which is the so-called Davidson-Ibarra bound for models with a hierarchical right-handed neutrino mass spectrum~\cite{LowerBound,Davidson:2002qv}. \subsection{Leptogenesis in the neutrinophilic THDM} Now we consider leptogenesis in the neutrinophilic THDM with the extra Higgs doublet $\Phi_{\nu}$ described in Sec.~\ref{subsec:Minimal}. The relevant interaction part of the Lagrangian Eq.~(\ref{Yukawa:nuTHDM}) is expressed as \begin{eqnarray} {\cal L}_{N}=-y^{\nu}_{ij} \overline{l_{L,i}} \Phi_{\nu} N_j -\frac{1}{2} \sum_{i} M_i \overline{ N^c_i} N_i + h.c. . \label{Yukawa:nuTHDM(2)} \end{eqnarray} The usual Higgs doublet $\Phi$ in Eq.~(\ref{SMnuYukawa}) is replaced by the new Higgs doublet $\Phi_{\nu}$. Again, we are working in the basis of the right-handed neutrino mass eigenstates. Then, with these Yukawa couplings, the mass of the left-handed neutrino is given by \begin{equation} m_{ij} = \sum_k \frac{y^{\nu}_{ik}v_{\nu} y^{\nu}{}^T_{kj}v_{\nu}}{M_k}. \end{equation} Thus, for a smaller VEV $v_{\nu}$, a larger $y^{\nu}$ is required. 
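Two quick estimates illustrate the statements above (a sketch with assumed benchmark values: SM field content $N_f=3$, $N_H=1$, the optimistic $\kappa=0.1$, $g_*=100$, and a neutrino mass entry $m_\nu\sim 0.05$ eV):

```python
import math

# (1) CP asymmetry needed for n_b/s ~ 1e-10, via n_b/s ~ C kappa eps/g_*:
C = (8*3 + 4*1) / (22*3 + 13*1)
eps_needed = 1e-10 * 100.0 / (C * 0.1)
print(C, eps_needed)        # C = 28/79 ~ 0.354, eps ~ 3e-7

# (2) Yukawa coupling giving m_nu ~ 0.05 eV at M_k = 100 GeV,
#     for several values of v_nu (smaller VEV -> larger Yukawa):
m_nu, M = 0.05e-9, 100.0    # GeV
for v_nu in (174.0, 1.0, 1e-3):
    print(v_nu, math.sqrt(m_nu * M) / v_nu)
```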
The Boltzmann equation for the lightest right-handed neutrino $N_1$, denoted by $N$ here, is given by \begin{eqnarray} \dot{n}_N+3Hn_N &=& -\gamma (N\rightarrow L\Phi_{\nu}) - \gamma (N\rightarrow \bar{L}\Phi_{\nu}^*) \qquad\textrm{:decay}\nonumber\\ && +\gamma (L\Phi_{\nu}\rightarrow N) + \gamma (\bar{L}\Phi_{\nu}^* \rightarrow N) \qquad \textrm{:inverse decay}\nonumber\\ && -\gamma (N L \rightarrow A \Phi_{\nu})-\gamma ( N \Phi_{\nu} \rightarrow L A) -\gamma (N \bar{L} \rightarrow A \Phi_{\nu}^* )-\gamma ( N \Phi_{\nu}^* \rightarrow \bar{L} A) \nonumber\\ && +\textrm{inverse processes} \qquad \qquad \qquad \qquad : \textrm{s-channel scattering} \nonumber\\ && -\gamma (N L \rightarrow A \Phi_{\nu})-\gamma ( N \Phi_{\nu} \rightarrow L A) -\gamma ( N A \rightarrow L \Phi_{\nu}) \nonumber\\ && -\gamma (N \bar{L} \rightarrow A \Phi_{\nu}^* )-\gamma ( N \Phi_{\nu}^* \rightarrow \bar{L} A) -\gamma ( N A \rightarrow \bar{L} \Phi_{\nu}^*) \nonumber\\ && +\textrm{inverse processes} \qquad \qquad \qquad \qquad : \textrm{t-channel scattering} \nonumber\\ && -\gamma (N N \rightarrow {\rm Final}) + \gamma ({\rm Final} \rightarrow N N) : {\rm annihilation} \nonumber\\ &=& -\Gamma_D (n_N-n_N^{eq})-\Gamma_{scat} (n_N-n_N^{eq}) -\langle\sigma v(\rightarrow \Phi, \Phi_{\nu})\rangle (n_N^2-n_N^{eq}{}^2) \label{Boltzman:N} \end{eqnarray} where $\Phi, \Phi_{\nu}$ and $A$ denote the Higgs bosons, the neutrinophilic Higgs bosons and the gauge bosons, respectively. Notice that the usual $\Delta L =1$ lepton number violating scattering processes involving the top quark are absent in this model, because $\Phi_{\nu}$ has only neutrino Yukawa couplings. Although the annihilation processes $(N N \rightarrow {\rm Final})$ are included in Eq.~(\ref{Boltzman:N}), in practice they are not relevant, because the coupling $y^{\nu}_{i1}$ must be very small, as will be shown later, to satisfy the out of equilibrium decay condition. 
The Boltzmann equation for the lepton asymmetry $L \equiv l-\bar{l}$ is given by \begin{eqnarray} && \dot{n}_L+3H n_L \nonumber \\ &=& \gamma(N\rightarrow l\Phi_{\nu}) - \gamma( \bar{N} \rightarrow \bar{l}\Phi_{\nu}^*) \nonumber \\ && -\{ \gamma(l\Phi_{\nu}\rightarrow N) - \gamma( \bar{l}\Phi_{\nu}^* \rightarrow \bar{N}) \} \qquad\textrm{:decay and inverse decay} \nonumber \\ && -\gamma ( l A \rightarrow N \Phi_{\nu} )+\gamma ( \bar{l} A \rightarrow \bar{N} \Phi_{\nu}^* ) -\gamma (N l \rightarrow A \Phi_{\nu}) \nonumber \\ && +\gamma ( \bar{N} \bar{l} \rightarrow A \Phi_{\nu}^*) \quad\textrm{:s-channel $\Delta L=1$ scattering} \nonumber\\ && -\gamma (N l \rightarrow A \Phi_{\nu})+\gamma (\bar{N} \bar{l} \rightarrow A \Phi_{\nu}^*)-\gamma (l A \rightarrow N \Phi_{\nu})+\gamma (\bar{l} A \rightarrow \bar{N} \Phi_{\nu}^*) \nonumber\\ && -\gamma ( l \Phi_{\nu} \rightarrow N A)+\gamma ( \bar{l} \Phi_{\nu}^* \rightarrow \bar{N} A) \quad\textrm{:t-channel $\Delta L=1$ scattering} \nonumber \\ && +\gamma( \bar{l}\bar{l} \rightarrow \Phi_{\nu}^*\Phi_{\nu}^*)-\gamma(ll\rightarrow \Phi_{\nu}\Phi_{\nu}) \nonumber \\ && +2 \{ \gamma(\bar{l}\Phi_{\nu}^*\rightarrow l\Phi_{\nu})- \gamma(l\Phi_{\nu}\rightarrow \bar{l}\Phi_{\nu}^*) \} \quad\textrm{:t and s-channel $\Delta L=2$ scattering} \nonumber \\ &=& \varepsilon\Gamma_D(n_N-n_N^{eq}) - \Gamma_W n_L \end{eqnarray} where \begin{equation} \Gamma_W = \frac{1}{2}\frac{n_N^{eq}}{n_{\gamma}^{eq}}\Gamma_N + \frac{n_N}{n_N^{eq}}\Gamma_{\Delta L=1,t} + 2\Gamma_{\Delta L=1,s}+ 2\Gamma_{\Delta L=2} \end{equation} is the wash-out rate. 
The condition of out of equilibrium decay is given as \begin{eqnarray} \tilde{m}_1 \equiv (y^{\nu}{}^\dagger y^{\nu})_{11}\frac{v_{\nu}^2}{M_1} < \frac{8\pi v_{\nu}^2}{M_1^2} \left.H\right|_{T=M_1} \equiv m_* \left(\frac{v_{\nu}}{v}\right)^2 . \end{eqnarray} Notice that for $v_{\nu} \ll v$ the upper bound on $\tilde{m}_1$ becomes more stringent, which implies that the lightest left-handed neutrino mass is almost vanishing, $m_1 \simeq 0$. Alternatively, the condition can be expressed as \begin{equation} ( y^{\nu}{}^{\dagger} y^{\nu})_{11} < 8 \pi \sqrt{ \frac{\pi^2 g_*}{90} }\frac{M_1}{M_P} . \label{OoEqDecay} \end{equation} Hence, for a TeV scale $M_1$, the value of $ (y^{\nu}{}^{\dagger} y^{\nu} )_{11}$ must be very small, which can be realized by taking all $y^{\nu}_{i1}$ to be small. Under such neutrino Yukawa couplings, $y^{\nu}_{i1} \ll y^{\nu}_{i2}, y^{\nu}_{i3}$, and a hierarchical right-handed neutrino mass spectrum, the CP asymmetry, \begin{eqnarray} \varepsilon &\simeq & -\frac{3}{8\pi}\frac{1}{(y^{\nu}{}^{\dagger}y^{\nu})_{11}} \left(\textrm{Im}(y^{\nu}{}^{\dagger}y^{\nu})^2_{12} \frac{M_1}{M_2} + \textrm{Im}(y^{\nu}{}^{\dagger}y^{\nu})^2_{13} \frac{M_1}{M_3} \right) \nonumber \\ & \simeq & -\frac{3}{8\pi}\frac{m_{\nu} M_1}{v_{\nu}^2} \sin\theta \nonumber \\ & \simeq & -\frac{3}{16\pi} 10^{-6} \left(\frac{0.1 {\rm GeV}}{v_{\nu}}\right)^2 \left(\frac{M_1}{100 {\rm GeV}}\right) \left(\frac{m_{\nu}}{0.05 {\rm eV}}\right) \sin\theta , \label{CPasym} \end{eqnarray} is significantly enhanced due to the large Yukawa couplings $y^{\nu}_{i2}$ and $y^{\nu}_{i3}$ as well as the tiny Higgs VEV $v_{\nu}$. 
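The benchmark normalization in the last line of Eq.~(\ref{CPasym}) can be checked against the general expression in its second line (a sketch; $\sin\theta=1$ assumed, magnitudes only):

```python
import math

# Benchmark point: v_nu = 0.1 GeV, M_1 = 100 GeV, m_nu = 0.05 eV.
v_nu, M1, m_nu = 0.1, 100.0, 0.05e-9     # GeV
eps_general = 3/(8*math.pi) * m_nu * M1 / v_nu**2
eps_bench = 3/(16*math.pi) * 1e-6 * (0.1/v_nu)**2 * (M1/100.0) * (m_nu/0.05e-9)
print(eps_general, eps_bench)            # both ~6e-8, i.e. the two forms agree
```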
The thermally averaged interaction rate of $\Delta L =2$ scatterings is expressed as \begin{eqnarray} \Gamma^{(\Delta L =2)} = \frac{1}{n_{\gamma}} \frac{T}{32 \pi (2 \pi )^4} \int ds \sqrt{s} K_1\left(\frac{\sqrt{s}}{T} \right) \int \frac{d \cos\theta}{2}\sum \overline{ |{\cal M}|^2} \end{eqnarray} with \begin{eqnarray} \sum \overline{|{\cal M}|^2} &=& 2 \overline{|{\cal M}_{\rm t}|^2} + 2 \overline{|{\cal M}_{\rm s}|^2} \simeq \sum_{j,(\alpha, \beta)} 2 |y^{\nu}_{\alpha j} y^{\nu}_{\beta j}{}^{\dagger}|\frac{s}{M_{j}^2}, \quad \textrm{ for} \quad s \ll M_j^2 \ . \end{eqnarray} The decoupling condition \begin{eqnarray} \Gamma^{(\Delta L =2)} < \sqrt{\frac{\pi^2 g_*}{90}} \frac{T^2}{M_P}, \end{eqnarray} for $T < M_1$ is rewritten as \begin{eqnarray} \sum_i \left(\sum_j \frac{ y^{\nu}_{ij} y^{\nu}_{ji}{}^{\dagger} v_{\nu}^2}{M_j}\right)^2 < 32 \pi^3 \zeta(3) \sqrt{\frac{\pi^2 g_*}{90}} \frac{v_{\nu}^4}{T M_P} . \label{L2DecouplingCondition} \end{eqnarray} For lower $v_{\nu}$, the $\Delta L =2$ wash-out processes are more significant. Inequality~(\ref{L2DecouplingCondition}) thus gives a lower bound on $v_{\nu}$ to avoid too strong wash-out. We here summarize all the conditions for successful thermal leptogenesis; the result is presented in Fig.~\ref{fig:AvailableRegion}. The horizontal axis is the VEV of the neutrinophilic Higgs, $v_{\nu}$, and the vertical axis is the mass of the lightest right-handed neutrino, $M_1$. In the red-brown region, the decay of the lightest right-handed neutrino into a Higgs boson $H$ (assuming $M_H= 100$ GeV) and a lepton is kinematically forbidden. The turquoise region corresponds to inequality~(\ref{L2DecouplingCondition}), where the $\Delta L=2$ wash-out effect is too strong. The red and green lines are contours of the CP asymmetry, $\varepsilon=10^{-6}$ and $10^{-7}$ respectively, for the decay of the lightest right-handed neutrino with a hierarchical right-handed neutrino mass spectrum. 
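To see how inequality~(\ref{L2DecouplingCondition}) bounds $v_{\nu}$ from below, one can crudely replace the combination $\sum_j y^{\nu}_{ij} y^{\nu}_{ji}{}^{\dagger} v_{\nu}^2/M_j$ by a single neutrino mass entry $m_{\nu}\sim 0.05$ eV and set $T\sim M_1$; the resulting number is only indicative of the scale of the turquoise boundary:

```python
import math

# Crude single-entry estimate: m_nu^2 < rhs_coeff * v_nu^4 / (T M_P)
m_nu, M_P, g_star = 0.05e-9, 2.4e18, 100.0   # GeV
T = 100.0                                    # GeV, take T ~ M_1
zeta3 = 1.202                                # Riemann zeta(3)
rhs_coeff = 32 * math.pi**3 * zeta3 * math.sqrt(math.pi**2 * g_star / 90)
v_nu_min = (m_nu**2 * T * M_P / rhs_coeff) ** 0.25
print(v_nu_min)   # ~0.1 GeV for M_1 ~ 100 GeV
```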
Thus, in the parameter region above the line of $\varepsilon = 10^{-7}$, thermal leptogenesis easily works even with hierarchical masses of right-handed neutrinos. For the region below the line of $\varepsilon = 10^{-7}$, the resonant leptogenesis mechanism~\cite{ResonantLeptogenesis}, where the CP asymmetry is resonantly enhanced by degenerate right-handed neutrino masses, may work. Here we stress that, for $v_{\nu} \ll 100$ GeV, the required degree of mass degeneracy is considerably milder than that of the original resonant leptogenesis. \begin{figure} \centerline{\includegraphics{Lepto.eps}} \caption{ Available region for leptogenesis. The horizontal axis is the VEV of the neutrinophilic Higgs, $v_{\nu}$, and the vertical axis is the mass of the lightest right-handed neutrino, $M_1$. In the red-brown region, the decay of the lightest right-handed neutrino into a Higgs boson $\Phi_{\nu}$ and a lepton is kinematically forbidden. In the turquoise region, the $\Delta L=2$ wash-out effect is too strong. The red and green lines are contours of the CP asymmetry, $\varepsilon=10^{-6}$ and $10^{-7}$ respectively, for the decay of the lightest right-handed neutrino with a hierarchical right-handed neutrino mass spectrum. } \label{fig:AvailableRegion} \end{figure} \subsection{Constraints on a UV theory} Let us suppose that the neutrinophilic THDM is derived from the model reviewed in Sec.~\ref{subsec:HabaHirotsu} by integrating out the singlet field $S$. If $S$ is relatively light, the thermal leptogenesis discussed above could be affected, namely through the annihilation processes of $N_1$, which have been justifiably ignored in Eq.~(\ref{Boltzman:N}). The annihilation could take place more efficiently via $s$-channel exchange of the scalar $S$ in the UV theory~\cite{HabaHirotsu}. 
For example, the annihilation $N_1 N_1 \rightarrow \Phi_{\nu}\Phi_{\nu}^*$ with the amplitude \begin{eqnarray} \overline{ |{\cal M}|}^2 = \left| \frac{y^{\nu}_1 \lambda_{\Phi_{\nu}} v_s}{ s - M_{H_S}^2 -i M_{H_S} \Gamma_{H_S}}\right|^2 \frac{s-4 M_1^2}{4} , \end{eqnarray} would not be in equilibrium if \begin{eqnarray} \lambda_{\Phi_{\nu}} \lesssim 40 \frac{M_{H_S}^2}{M_1^{3/2} M_P^{1/2}} \simeq 0.1 \left(\frac{10^5 \, {\rm GeV}}{M_1}\right)^{3/2}\left(\frac{M_{H_S}}{10^7 \, {\rm GeV}}\right)^2 , \end{eqnarray} is satisfied for $M_S \gg T > M_1$. Here, $\Gamma_{H_S}$ denotes the decay width of $H_S$. Constraints on other parameters such as $\lambda_\Phi$ and $\kappa$ can be obtained similarly. \section{Supersymmetric case: reconciling thermal leptogenesis, the gravitino problem and neutralino dark matter} As we have shown in Sec.~\ref{subsec:Super}, it is possible to construct a supersymmetric model with $\Phi_{\nu}$. A discrete symmetry, called ``R-parity'', is imposed in many supersymmetric models in order to forbid rapid proton decay. Another advantage of conserved R-parity is that it guarantees the absolute stability of the LSP, which becomes a dark matter candidate. In a large parameter space of supergravity models with gravity-mediated SUSY breaking, the gravitino has a mass of ${\cal O}(100)$ GeV and decays into the LSP (presumably the lightest neutralino) at late times, after BBN. The decay products may then affect the abundances of the light elements produced during BBN. This is the so-called ``gravitino problem''~\cite{GravitinoProblem}. To avoid this problem, the upper bound on the reheating temperature after inflation \begin{equation} T_R < 10^6 - 10^7 \, {\rm GeV}, \label{ConstraintsOnTR} \end{equation} has been derived, depending on the gravitino mass~\cite{GravitinoProblem2}. 
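To see why such a low reheating temperature need not preclude leptogenesis in the neutrinophilic setup, one can evaluate the CP asymmetry of Eq.~(\ref{CPasym}) at an $M_1$ below the bound (\ref{ConstraintsOnTR}) (illustrative values; $\sin\theta=1$, magnitude only):

```python
import math

# Benchmark (assumed): v_nu ~ 1 GeV, M_1 ~ 1e5 GeV, m_nu ~ 0.05 eV.
v_nu, M1, m_nu = 1.0, 1e5, 0.05e-9   # GeV
eps = 3/(8*math.pi) * m_nu * M1 / v_nu**2
print(eps)   # ~6e-7, well above the ~1e-7 needed for n_b/s ~ 1e-10
```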
By comparing Eq.~(\ref{ConstraintsOnTR}) with the CP violation in supersymmetric models with hierarchical right-handed neutrino masses, which is about four times larger than that in the non-supersymmetric model~\cite{SUSYFandG}, \begin{eqnarray} \varepsilon \simeq -\frac{3}{2\pi}\frac{1}{(y^{\nu} y^{\nu}{}^{\dagger})_{11}}\sum_{i=2,3} \textrm{Im}(y^{\nu}y^{\nu}{}^{\dagger})^2_{1i} \frac{M_1}{M_i}, \label{SUSYepsilon} \end{eqnarray} it has been considered that thermal leptogenesis through the decay of heavy right-handed neutrinos hardly works because of the gravitino problem. As we have shown in the previous section, a sufficient CP violation $\varepsilon = {\cal O}(10^{-6})$ can be realized for $v_{\nu} = {\cal O}(1)$ GeV with hierarchical right-handed neutrino masses and $M_1$ of ${\cal O}(10^5 - 10^6)$ GeV. This implies that a reheating temperature after inflation $T_R$ of ${\cal O}(10^6)$ GeV is high enough to produce right-handed neutrinos by thermal scatterings. Thus, it is remarkable that the SUSY neutrinophilic model with $v_{\nu} = {\cal O}(1)$ GeV can realize thermal leptogenesis in gravity-mediated SUSY breaking with an unstable gravitino. In this setup, the lightest neutralino could be the LSP and the dark matter within the standard thermal freeze-out scenario. \begin{figure} \centerline{\includegraphics{LeptoSUSY.eps}} \caption{ The same as Fig.~\ref{fig:AvailableRegion} but with Eq.~(\ref{SUSYepsilon}). The additional horizontal black dashed line represents a reference value of the upper bound on the reheating temperature after inflation, $T_R = 10^6$ GeV, from gravitino overproduction. } \label{fig:SUSYAvailableRegion} \end{figure} \section{Conclusion} We have examined the possibility of thermal leptogenesis in neutrinophilic Higgs doublet models, in which a Higgs doublet with a tiny VEV gives the neutrino Dirac mass term. 
Thanks to the tiny VEV of the neutrinophilic Higgs field, the neutrino Yukawa couplings are not necessarily small; instead, they tend to be large, and the CP asymmetry in the decay of the lightest right-handed neutrino is significantly enhanced. Although the $\Delta L = 2$ wash-out effect could also be enhanced simultaneously, we have found an available parameter region where this wash-out is avoided while the CP asymmetry is kept large enough. In addition, in a supersymmetric neutrinophilic Higgs doublet model, we have pointed out that thermal leptogenesis in gravity-mediated SUSY breaking works well without confronting the gravitino problem. There, the lightest neutralino could be the LSP and the dark matter within the standard thermal freeze-out scenario. \section*{Acknowledgements} We would like to thank M.~Hirotsu for collaboration in the early stage of this work. We are grateful to S.~Matsumoto, S.~Kanemura and K.~Tsumura for useful and helpful discussions. This work is partially supported by Scientific Grants from the Ministry of Education and Science, Nos. 20540272, 22011005, 20039006 and 20025004 (N.H.), and by research grants from Hokkai-Gakuen (O.S.).
\section{Introduction} Einstein manifolds are related to many questions in geometry and physics, for instance: Riemannian functionals and their critical points, Yang-Mills theory, self-dual manifolds of dimension four, and exact solutions of the Einstein field equation. Many examples of Einstein manifolds, even Ricci-flat ones, are already at hand (see \cite{besse,Oneil,LeandroPina,Romildo}). However, finding new examples of Einstein metrics is not an easy task. A common tool to construct new examples of Einstein spaces is to consider warped product metrics (see \cite{LeandroPina,Romildo}). In \cite{besse}, a question was posed about Einstein warped products: \begin{eqnarray}\label{question} \mbox{``Does there exist a compact Einstein warped}\nonumber\\ \mbox{product with nonconstant warping function?"} \end{eqnarray} Inspired by problem (\ref{question}), several authors have explored this subject in an attempt to obtain examples of such manifolds. Kim and Kim \cite{kimkim} considered a compact Riemannian Einstein warped product with nonpositive scalar curvature. They proved that such a manifold is just a product manifold. Moreover, in \cite{BRS,Case1}, (\ref{question}) was considered without the compactness assumption. Barros, Batista and Ribeiro Jr \cite{BarrosBatistaRibeiro} also studied (\ref{question}) when the Einstein product manifold is complete and noncompact with nonpositive scalar curvature. It is worth noting that Case, Shu and Wei \cite{Case} proved that a shrinking quasi-Einstein metric has positive scalar curvature. Further, Sousa and Pina \cite{Romildo} classified some structures of Einstein warped products on semi-Riemannian manifolds; they considered, for instance, the case in which the base and the fiber are Ricci-flat semi-Riemannian manifolds. 
Furthermore, they provided a classification of noncompact Ricci-flat warped product semi-Riemannian manifolds with $1$-dimensional fiber, where the base is not necessarily a Ricci-flat manifold. More recently, Leandro and Pina \cite{LeandroPina} classified the static solutions of the vacuum Einstein field equation with cosmological constant not necessarily identically zero, when the base is invariant under the action of a translation group. In particular, they provided a necessary condition for the integrability of the system of differential equations given by the invariance of the base for the static metric. When the base of an Einstein warped product is a compact Riemannian manifold and the fiber is a Ricci-flat semi-Riemannian manifold, we get a partial answer to (\ref{question}). Furthermore, when the base is not compact, we obtain new examples of Einstein warped products. Now, we state our main results. \begin{theorem}\label{teo1} Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold (non Ricci-flat), where $M$ is a compact Riemannian manifold and $N$ is a Ricci-flat semi-Riemannian manifold. Then $\widehat{M}$ is a product manifold, i.e., $f$ is trivial. \end{theorem} It is very natural to consider the next case (see Section \ref{SB}). \begin{theorem}\label{teo2} Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold (i.e., $\widehat{R}ic=\lambda\hat{g}$; $\lambda\neq0$), where $M$ is a compact Riemannian manifold with scalar curvature $R\leq\lambda(n-m)$, and $N$ is a semi-Riemannian manifold. Then $\widehat{M}$ is a product manifold, i.e., $f$ is trivial. Moreover, if the equality holds, then $N$ is Ricci-flat. \end{theorem} Now, we consider the case in which the base is a noncompact Riemannian manifold. 
The next result was inspired mainly by Theorem \ref{teo2} and \cite{LeandroPina}, and gives the relationship between the Ricci tensor $\widehat{R}ic$ of the warped metric $\hat{g}$ and the Ricci tensor $Ric$ of the base metric $g$. \begin{theorem}\label{teo3b} Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold (i.e., $\widehat{R}ic=\lambda\hat{g}$), where $M$ is a noncompact Riemannian manifold with constant scalar curvature $\lambda=\frac{R}{n-1}$, and $N$ is a semi-Riemannian manifold. Then $M$ is Ricci-flat if and only if the scalar curvature $R$ is zero. \end{theorem} Considering a conformal structure for the base of an Einstein warped product semi-Riemannian manifold, we have the next results. The following theorem is rather technical. We consider that the base of such an Einstein warped product manifold is conformal to a pseudo-Euclidean space which is invariant under the action of an $(n-1)$-dimensional translation group, and that the fiber is a Ricci-flat space. In order for the reader to have a clearer view of the next results, we recommend a preliminary reading of Section \ref{CFSI}. \begin{theorem}\label{teo3a} Let $(\widehat{M}^{n+m}, \hat{g})=(\mathbb{R}^{n}, \bar{g})\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be a warped product semi-Riemannian manifold such that $N$ is a Ricci-flat semi-Riemannian manifold. Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space with coordinates $x =(x_{1},\ldots , x_{n})$ and $g_{ij} = \delta_{ij}\varepsilon_{i}$, $1\leq i,j\leq n$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i} = \pm1$, with at least one $\varepsilon_{i} = 1$. Consider smooth functions $\varphi(\xi)$ and $f(\xi)$, where $\xi=\displaystyle\sum_{k=1}^{n}\alpha_{k}x_{k}$, $\alpha_{k}\in\mathbb{R}$, and $\displaystyle\sum_{k=1}^{n}\varepsilon_{k}\alpha_{k}^{2}=\kappa$. 
Then $(\mathbb{R}^{n}, \bar{g})\times_{f}(N^{m},\tilde{g})$, where $\bar{g}=\frac{1}{\varphi^{2}}g$, is an Einstein warped product semi-Riemannian manifold (i.e., $\widehat{R}ic=\lambda\hat{g}$) such that $f$ and $\varphi$ are given by: \begin{eqnarray}\label{system2} \left\{ \begin{array}{lcc} (n-2)\varphi\varphi''-m\left(G\varphi\right)'=mG^{2}\\\\ \varphi\varphi''-(n-1)(\varphi')^{2}+mG\varphi'=\kappa\lambda \\\\ nG\varphi'-(G\varphi)'-mG^{2}=\kappa\lambda, \end{array} \right. \end{eqnarray} and \begin{eqnarray}\label{sera3} f=\Theta\exp\left(\int\frac{G}{\varphi}d\xi\right), \end{eqnarray} where $\Theta\in\mathbb{R}_{+}\backslash\{0\}$, $G(\xi)=\pm\sqrt{\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}}$ and $\kappa=\pm1$. Here $\bar{R}$ is the scalar curvature of $\bar{g}$. \end{theorem} The next result is a consequence of Theorem \ref{teo3a}. \begin{theorem}\label{teo3} Let $(\widehat{M}^{n+m}, \hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$, $n\geq3$ and $m\geq2$, be an Einstein warped product semi-Riemannian manifold, where $M$ is conformal to a pseudo-Euclidean space invariant under the action of an $(n-1)$-dimensional translation group with constant scalar curvature (possibly zero), and $N$ is a Ricci-flat semi-Riemannian manifold. Then $\widehat{M}$ is either \begin{enumerate} \item[(1)] a Ricci-flat semi-Riemannian manifold $(\mathbb{R}^{n},g)\times_{f}(N^{m},\tilde{g})$, such that $(\mathbb{R}^{n},g)$ is the pseudo-Euclidean space with warping function $f(\xi)=\Theta\exp{(A\xi)}$, where $\Theta>0$ and $A\neq0$ are constants, or\\ \item[(2)] conformal to $(\mathbb{R}^{n},g)\times(N^{m},\tilde{g})$, where $(\mathbb{R}^{n},g)$ is the pseudo-Euclidean space. The conformal function $\varphi$ is given by \begin{eqnarray*} \varphi(\xi)= \frac{1}{(-G\xi+C)^{2}};\quad\mbox{where}\quad G\neq0, C\in\mathbb{R}. \end{eqnarray*} \end{enumerate} Moreover, the conformal function is defined for $\xi\neq\frac{C}{G}$. 
\end{theorem} It is worth mentioning that the first item of Theorem \ref{teo3} was not considered in \cite{Romildo}. From Theorem \ref{teo3} we can construct examples of complete Einstein warped product Riemannian manifolds. \begin{corollary}\label{coro1} Let $(N^{m},\tilde{g})$ be a complete Ricci-flat Riemannian manifold and $f(\xi)=\Theta\exp{(A\xi)}$, where $\Theta>0$ and $A\neq0$ are constants. Then $(\mathbb{R}^{n},g_{can})\times_{f}(N^{m},\tilde{g})$ is a complete Ricci-flat warped product Riemannian manifold. \end{corollary} \begin{corollary}\label{coro2} Let $(N^{m},\tilde{g})$ be a complete Ricci-flat Riemannian manifold and $f(x)= \frac{1}{x_{n}}$ with $x_{n}>0$. Then $(\widehat{M},\hat{g})=(\mathbb{H}^{n},g_{can})\times_{f}(N^{m},\tilde{g})$ is a complete Riemannian Einstein warped product such that $$\widehat{R}ic=-\frac{m+n-1}{n(n-1)}\hat{g}.$$ \end{corollary} The paper is organized as follows. Section \ref{SB} is divided into two subsections, namely, {\it General formulas} and {\it A conformal structure for the warped product with Ricci-flat fiber}, where the preliminary results are provided. Further, in Section \ref{provas}, we prove our main results. \section{Preliminaries}\label{SB} Consider two semi-Riemannian manifolds $(M^{n}, g)$ and $(N^{m},\tilde{g})$, with $n\geq3$ and $m\geq2$, and let $f:M^{n}\rightarrow(0,+\infty)$ be a smooth function. The warped product $(\widehat{M}^{n+m},\hat{g})=(M^{n},g)\times_{f}(N^{m},\tilde{g})$ is the product manifold $M\times N$ endowed with the metric \begin{eqnarray*} \hat{g}=g+f^{2}\tilde{g}. \end{eqnarray*} From Corollary 43 in \cite{Oneil}, we have that (see also \cite{kimkim}) \begin{eqnarray}\label{test1} \widehat{R}ic=\lambda\hat{g}\Longleftrightarrow\left\{ \begin{array}{lcc} Ric-\frac{m}{f}\nabla^{2}f=\lambda g\\ \widetilde{R}ic=\mu\tilde{g}\\ f\Delta f+(m-1)|\nabla f|^{2}+\lambda f^{2}=\mu \end{array} ,\right. \end{eqnarray} where $\lambda$ and $\mu$ are constants.
This means that $\widehat{M}$ is an Einstein warped product if and only if (\ref{test1}) is satisfied. Here $\widehat{R}ic$, $\widetilde{R}ic$ and $Ric$ are, respectively, the Ricci tensors for $\hat{g}$, $\tilde{g}$ and $g$. Moreover, $\nabla^{2}f$, $\Delta f$ and $\nabla f$ are, respectively, the Hessian, the Laplacian and the gradient of $f$ for $g$. \subsection{General formulas}\label{GF} We derive some useful formulae from system (\ref{test1}). Contracting the first equation of (\ref{test1}) we get \begin{eqnarray}\label{01} Rf^{2}-mf\Delta f=nf^{2}\lambda, \end{eqnarray} where $R$ is the scalar curvature for $g$. From the third equation in (\ref{test1}) we have \begin{eqnarray}\label{02} mf\Delta f+m(m-1)|\nabla f|^{2}+m\lambda f^{2}=m\mu. \end{eqnarray} Then, from (\ref{01}) and (\ref{02}) we obtain \begin{eqnarray}\label{oi} |\nabla f|^{2}+\left[\frac{\lambda(m-n)+R}{m(m-1)}\right]f^{2}=\frac{\mu}{(m-1)}. \end{eqnarray} When the base is a Riemannian manifold and the fiber is a Ricci-flat semi-Riemannian manifold (i.e., $\mu=0$), from (\ref{oi}) we obtain \begin{eqnarray}\label{eqtop} |\nabla f|^{2}+\left[\frac{\lambda(m-n)+R}{m(m-1)}\right]f^{2}=0. \end{eqnarray} Then, either \begin{eqnarray*} R\leq\lambda(n-m) \end{eqnarray*} or $f$ is trivial, i.e., $\widehat{M}$ is a product manifold. \subsection{A conformal structure for the warped product with Ricci-flat fiber}\label{CFSI} In what follows, consider two semi-Riemannian manifolds $(\mathbb{R}^{n}, g)$ and $(N^{m},\tilde{g})$, and let $f:\mathbb{R}^{n}\rightarrow(0,+\infty)$ be a smooth function. The warped product $(\widehat{M}^{n+m},\hat{g})=(\mathbb{R}^{n},g)\times_{f}(N^{m},\tilde{g})$ is the product manifold $\mathbb{R}^{n}\times N$ endowed with the metric \begin{eqnarray*} \hat{g}=g+f^{2}\tilde{g}.
\end{eqnarray*} Let $(\mathbb{R}^{n}, g)$, $n\geq3$, be the standard pseudo-Euclidean space with metric $g$ and coordinates $(x_{1},\ldots,x_{n})$ with $g_{ij}=\delta_{ij}\varepsilon_{i}$, $1\leq i,j\leq n$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i} = \pm1$, with at least one $\varepsilon_{i} = 1$. Consider $(\widehat{M}^{n+m},\hat{g})=(\mathbb{R}^{n}, \bar{g})\times_{f}(N^{m},\tilde{g})$ a warped product, where $\varphi:\mathbb{R}^{n}\rightarrow\mathbb{R}\backslash\{0\}$ is a smooth function such that $\bar{g}=\frac{g}{\varphi^{2}}$. Furthermore, we consider that $\widehat{M}$ is an Einstein semi-Riemannian manifold, i.e., $$\widehat{R}ic=\lambda\hat{g},$$ where $\widehat{R}ic$ is the Ricci tensor for the metric $\hat{g}$ and $\lambda\in\mathbb{R}$. We use invariants of the group action (or of a subgroup) to reduce a partial differential equation to a system of ordinary differential equations \cite{olver}. Specifically, we consider that $(\widehat{M}^{n+m},\hat{g})=(\mathbb{R}^n,\bar{g})\times_{f}(N^{m},\tilde{g})$ is such that the base is invariant under the action of an $(n-1)$-dimensional translation group (\cite{BarbosaPinaKeti,olver,LeandroPina,Romildo,Tenenblat}). More precisely, let $(\mathbb{R}^{n}, g)$ be the standard pseudo-Euclidean space with metric $g$ and coordinates $(x_{1}, \cdots, x_{n})$, with $g_{ij} = \delta_{ij}\varepsilon_{i}$, $1\leq i, j\leq n$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i} = \pm1$, with at least one $\varepsilon_{i} = 1$. Let $\xi=\displaystyle\sum_{i}\alpha_{i}x_{i}$, $\alpha_{i}\in\mathbb{R}$, be a basic invariant for an $(n-1)$-dimensional translation group, where $\alpha=\displaystyle\sum_{i}\alpha_{i}\frac{\partial}{\partial x_{i}}$ is a timelike, lightlike or spacelike vector, i.e., $\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}=-1,0,$ or $1$, respectively.
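The practical effect of this invariance is that all differential operators reduce to ordinary derivatives in $\xi$. As an informal sanity check (not part of the original argument; the sample profile $F(s)=e^{s}$ and the chosen signature are illustrative), one can verify with SymPy that $|\nabla u|^{2}=\kappa (F')^{2}$ and $\Delta u=\kappa F''$ for $u=F(\xi)$ on the flat metric $g_{ij}=\varepsilon_{i}\delta_{ij}$:

```python
import sympy as sp

# Sketch check of the translation-invariance reduction: for u(x) = F(xi),
# xi = a1*x1 + a2*x2 + a3*x3, on the flat metric g_ij = eps_i*delta_ij,
# |grad u|^2 = kappa*F'(xi)^2 and Laplacian(u) = kappa*F''(xi),
# with kappa = sum_i eps_i*a_i^2.  F(s) = exp(s) is an illustrative profile.

x1, x2, x3 = sp.symbols('x1 x2 x3')
a1, a2, a3 = sp.symbols('a1 a2 a3')
eps = (1, 1, -1)                                   # a sample signature
xs, als = (x1, x2, x3), (a1, a2, a3)

xi = sum(a*x for a, x in zip(als, xs))
u = sp.exp(xi)                                     # F(s) = e^s, so F' = F'' = e^s

kappa = sum(e*a**2 for e, a in zip(eps, als))
grad_sq = sum(e*sp.diff(u, x)**2 for e, x in zip(eps, xs))   # |grad u|^2 for g
lap = sum(e*sp.diff(u, x, 2) for e, x in zip(eps, xs))       # Laplacian for g
```

For a diagonal metric with entries $\pm1$ the Laplace--Beltrami operator is simply $\sum_{i}\varepsilon_{i}\partial_{i}^{2}$, which is what the sketch uses.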
Then we consider non-trivial differentiable functions $\varphi(\xi)$ and $f(\xi)$ such that \begin{eqnarray*} \varphi_{x_{i}}=\varphi'\alpha_{i}\quad\mbox{and}\quad f_{x_{i}}=f'\alpha_{i}. \end{eqnarray*} Moreover, it is well known (see \cite{BarbosaPinaKeti,LeandroPina,Romildo}) that if $\bar{g}=\frac{1}{\varphi^{2}}g$, then the Ricci tensor $\bar{R}ic$ for $\bar{g}$ is given by $$\bar{R}ic=\frac{1}{\varphi^{2}}\{(n-2)\varphi\nabla^{2}\varphi + [\varphi\Delta\varphi - (n-1)|\nabla\varphi|^{2}]g\},$$ where $\nabla^{2}\varphi$, $\Delta\varphi$ and $\nabla\varphi$ are, respectively, the Hessian, the Laplacian and the gradient of $\varphi$ for the metric $g$. Hence, the scalar curvature of $\bar{g}$ is given by \begin{eqnarray}\label{scalarcurvature} \bar{R}&=&\displaystyle\sum_{k=1}^{n}\varepsilon_{k}\varphi^{2}\left(\bar{R}ic\right)_{kk}=(n-1)(2\varphi\Delta\varphi - n|\nabla\varphi|^{2})\nonumber\\ &=&(n-1)[2\varphi\varphi''-n(\varphi')^{2}]\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}. \end{eqnarray} In what follows, we denote $\kappa=\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}$. When the fiber $N$ is a Ricci-flat semi-Riemannian manifold, we already know from Theorem 1.2 in \cite{Romildo} that $\varphi(\xi)$ and $f(\xi)$ satisfy the following system of differential equations \begin{eqnarray}\label{system} \left\{ \begin{array}{lcc} (n-2)f\varphi''-mf''\varphi-2m\varphi'f'=0;\\\\ f\varphi\varphi''-(n-1)f(\varphi')^{2}+m\varphi\varphi'f'=\kappa\lambda f;\\\\ (n-2)f\varphi\varphi'f'-(m-1)\varphi^{2}(f')^{2}-ff''\varphi^{2}=\kappa\lambda f^{2}. \end{array} \right. \end{eqnarray} Note that the case $\kappa=0$ was proved in \cite{Romildo}. Therefore, we only consider the case $\kappa=\pm1$.
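The scalar curvature formula above can be cross-checked in a small concrete case. The following SymPy sketch (our own verification, with the illustrative choices $n=3$, $\alpha=(1,0,0)$ and $\varepsilon=(1,1,-1)$, so $\kappa=1$ and $\xi=x_{1}$) computes the scalar curvature of $\bar{g}=g/\varphi^{2}$ directly from the Christoffel symbols:

```python
import sympy as sp

# Sketch verification of the scalar curvature of bar_g = g/phi^2 for n = 3,
# alpha = (1,0,0), eps = (1,1,-1) (so kappa = 1 and xi = x1): the result
# should equal (n-1)*[2*phi*phi'' - n*(phi')^2].

n = 3
x1, x2, x3 = xs = sp.symbols('x1 x2 x3')
phi = sp.Function('phi')(x1)
g = sp.diag(1, 1, -1)/phi**2          # bar_g_ij
ginv = g.inv()

# Christoffel symbols Gamma^k_{ij} of bar_g
Gamma = [[[sum(ginv[k, l]*(sp.diff(g[l, i], xs[j]) + sp.diff(g[l, j], xs[i])
                           - sp.diff(g[i, j], xs[l]))/2 for l in range(n))
           for j in range(n)] for i in range(n)] for k in range(n)]

# Ricci tensor R_ij = d_k G^k_ij - d_j G^k_ik + G^k_kl G^l_ij - G^k_jl G^l_ik
def ricci(i, j):
    return sum(sp.diff(Gamma[k][i][j], xs[k]) - sp.diff(Gamma[k][i][k], xs[j])
               + sum(Gamma[k][k][l]*Gamma[l][i][j]
                     - Gamma[k][j][l]*Gamma[l][i][k] for l in range(n))
               for k in range(n))

R_scalar = sp.simplify(sum(ginv[i, j]*ricci(i, j)
                           for i in range(n) for j in range(n)))
dphi, d2phi = sp.diff(phi, x1), sp.diff(phi, x1, 2)
expected = (n - 1)*(2*phi*d2phi - n*dphi**2)       # kappa = 1
```

The agreement holds identically in $\varphi$; for instance, $\varphi=x_{1}$ gives $\bar{R}=-6=-n(n-1)$, the hyperbolic value.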
\ \section{Proof of the main results}\label{provas} \ \noindent {\bf Proof of Theorem \ref{teo1}:} In fact, from the third equation of the system (\ref{test1}) we get that \begin{eqnarray}\label{kimkimeq} div\left(f\nabla f\right)+(m-2)|\nabla f|^{2}+\lambda f^{2}=\mu. \end{eqnarray} Moreover, if $N$ is Ricci-flat, from (\ref{kimkimeq}) we obtain \begin{eqnarray}\label{kimkimeq1} div\left(f\nabla f\right)+\lambda f^{2}\leq div\left(f\nabla f\right)+(m-2)|\nabla f|^{2}+\lambda f^{2}=0. \end{eqnarray} Considering $M$ a compact Riemannian manifold and integrating (\ref{kimkimeq1}), we have \begin{eqnarray}\label{kimkimeq2} \int_{M}\lambda f^{2}dv=\int_{M}\left(div\left(f\nabla f\right)+\lambda f^{2}\right)dv\leq 0. \end{eqnarray} Therefore, from (\ref{kimkimeq2}) we can infer that \begin{eqnarray}\label{kimkimeq3} \lambda\int_{M}f^{2}dv\leq 0. \end{eqnarray} This implies that either $\lambda\leq0$ or $f$ is trivial. It is worth pointing out that quasi-Einstein metrics on compact manifolds with $\lambda\leq0$ are trivial (see Remark 6 in \cite{kimkim}). \hfill $\Box$ \ \noindent {\bf Proof of Theorem \ref{teo2}:} Let $p$ be a maximum point of $f$ on $M$. Therefore, $f(p)>0$, $(\nabla f)(p)=0$ and $(\Delta f)(p)\geq0$. By hypothesis, $R+\lambda(m-n)\leq0$; then from (\ref{oi}) we get \begin{eqnarray*} |\nabla f|^{2}\geq\frac{\mu}{m-1}. \end{eqnarray*} Whence, at $p\in M$ we obtain \begin{eqnarray*} 0=|\nabla f|^{2}(p)\geq\frac{\mu}{m-1}. \end{eqnarray*} Since $\mu$ is constant, we have that $\mu\leq0$. Moreover, from the third equation in (\ref{test1}) we have \begin{eqnarray*} \lambda f^{2}(p)\leq (f\Delta f)(p)+(m-1)|\nabla f|^{2}(p)+\lambda f^{2}(p)=\mu\leq0. \end{eqnarray*} This implies that $\lambda\leq0$. Then, from \cite{kimkim} the result follows. Now, if $R+\lambda(m-n)=0$, from (\ref{oi}) we have that \begin{eqnarray*} |\nabla f|^{2}=\frac{\mu}{m-1}. \end{eqnarray*} Then, at $p\in M$ we obtain \begin{eqnarray*} 0=|\nabla f|^{2}(p)=\frac{\mu}{m-1}.
\end{eqnarray*} Therefore, since $\mu$ is a constant, we get that $\mu=0$, i.e., $N$ is Ricci-flat. \hfill $\Box$ It is worth noting that if $M$ is a compact Riemannian manifold and the scalar curvature $R$ is constant, then $f$ is trivial (see \cite{Case}). \ \noindent {\bf Proof of Theorem \ref{teo3b}:} Considering $\lambda=\frac{R}{n-1}$ in equation (\ref{oi}), we obtain \begin{eqnarray}\label{ooi} |\nabla f|^{2}+\frac{R}{m(n-1)}f^{2}=\frac{\mu}{m-1}. \end{eqnarray} Then, taking the Laplacian, we get \begin{eqnarray}\label{3b1} \frac{1}{2}\Delta|\nabla f|^{2}+\frac{R}{m(n-1)}\left(|\nabla f|^{2}+f\Delta f\right)=0. \end{eqnarray} Moreover, considering $\lambda=\frac{R}{n-1}$ in (\ref{test1}) and contracting the first equation of the system, we have that \begin{eqnarray}\label{3b2} -\Delta f=\frac{Rf}{m(n-1)}. \end{eqnarray} From (\ref{3b2}), equation (\ref{3b1}) becomes \begin{eqnarray}\label{3b3} \frac{1}{2}\Delta|\nabla f|^{2}+\frac{R}{m(n-1)}|\nabla f|^{2}=\frac{R^{2}f^{2}}{m^{2}(n-1)^{2}}. \end{eqnarray} The first equation of (\ref{test1}) and (\ref{ooi}) allow us to infer that \begin{eqnarray*} \frac{2f}{m}Ric(\nabla f)&=&\frac{2Rf}{m(n-1)}\nabla f+2\nabla^{2}f(\nabla f)\nonumber\\ &=&\nabla\left(|\nabla f|^{2}+\frac{Rf^{2}}{m(n-1)}\right)=\nabla\left(\frac{\mu}{m-1}\right)=0. \end{eqnarray*} Since $f>0$, we get \begin{eqnarray}\label{3b4} Ric(\nabla f, \nabla f)=0. \end{eqnarray} Recall the Bochner formula \begin{eqnarray}\label{bochner} \frac{1}{2}\Delta|\nabla f|^{2}=|\nabla^{2}f|^{2}+Ric(\nabla f,\nabla f)+g(\nabla f,\nabla\Delta f). \end{eqnarray} Whence, from (\ref{3b2}), (\ref{3b4}) and (\ref{bochner}), we obtain \begin{eqnarray}\label{bochner1} \frac{1}{2}\Delta|\nabla f|^{2}+\frac{R}{m(n-1)}{|\nabla f|}^{2}=|\nabla^{2}f|^{2}. \end{eqnarray} Substituting (\ref{3b3}) in (\ref{bochner1}), we get \begin{eqnarray}\label{hessiannorm} |\nabla^{2}f|^{2}=\frac{R^{2}f^{2}}{m^{2}(n-1)^{2}}.
\end{eqnarray} From the first equation of (\ref{test1}), a straightforward computation give us \begin{eqnarray}\label{ricnorm} |Ric|^{2}=\frac{m^{2}}{f^{2}}|\nabla^{2}f|^{2}+\frac{2mR\Delta f}{(n-1)f}+\frac{nR^{2}}{(n-1)^{2}}. \end{eqnarray} Finally, from (\ref{hessiannorm}), (\ref{3b2}) and (\ref{ricnorm}) we have that \begin{eqnarray*} |Ric|^{2}=\frac{R^{2}}{n-1}. \end{eqnarray*} Then, we get the result. \hfill $\Box$ \ In what follows, we consider the conformal structure given in Section \ref{CFSI} to prove Theorem \ref{teo3a} and Theorem \ref{teo3}. \ \noindent {\bf Proof of Theorem \ref{teo3a}:} From definiton, \begin{eqnarray}\label{grad} |\bar{\nabla}f|^{2}=\displaystyle\sum_{i,j}\varphi^{2}\varepsilon_{i}\delta_{ij}f_{x_{i}}f_{x_{j}}=\left(\displaystyle\sum_{i}\varepsilon_{i}\alpha_{i}^{2}\right)\varphi^{2}(f')^{2}=\kappa\varphi^{2}(f')^{2}, \end{eqnarray} where $\bar{\nabla}f$ is the gradient of $f$ for $\bar{g}$, and $\kappa\neq0$. Then, from (\ref{eqtop}) and (\ref{grad}) we have \begin{eqnarray}\label{sera} \kappa\varphi^{2}(f')^{2}+\left[\frac{\lambda(m-n)+\bar{R}}{m(m-1)}\right]f^{2}=\frac{\mu}{m-1}. \end{eqnarray} Consider that $N$ is a Ricci-flat semi-Riemannian manifold, i.e., $\mu=0$, from (\ref{sera}) we get \begin{eqnarray}\label{sera1} \frac{f'}{f}=\frac{G(\bar{R})}{\varphi}, \end{eqnarray} where $G(\xi)=\pm\sqrt{\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}}$. Which give us (\ref{sera3}). Now, from (\ref{sera1}) we have \begin{eqnarray}\label{sera2} \frac{f''}{f}=\left(\frac{G}{\varphi}\right)'+\left(\frac{G}{\varphi}\right)^{2}=\left(\frac{G}{\varphi}\right)^{2}+\frac{G'}{\varphi}-\frac{G\varphi'}{\varphi^{2}}. \end{eqnarray} Therefore, from (\ref{system}), (\ref{sera1}) and (\ref{sera2}) we get (\ref{system2}). 
\hfill $\Box$ \ \noindent {\bf Proof of Theorem \ref{teo3}:} Considering that $\bar{R}$ is constant, from (\ref{system2}) we obtain \begin{eqnarray}\label{system12} \left\{ \begin{array}{lcc} (n-2)\varphi\varphi''-mG\varphi'=mG^{2}\\\\ \varphi\varphi''-(n-1)(\varphi')^{2}+mG\varphi'=\kappa\lambda \\\\ (n-1)G\varphi'-mG^{2}=\kappa\lambda \end{array} .\right. \end{eqnarray} The third equation in (\ref{system12}) gives us that $\varphi$ is an affine function. Moreover, since \begin{eqnarray}\label{hum} \varphi'(\xi)=\frac{\kappa\lambda+mG^{2}}{(n-1)G}, \end{eqnarray} we get $\varphi''=0$. Then, from the first and second equations in (\ref{system12}) we have, respectively, \begin{eqnarray*} -mG\varphi'=mG^{2}\quad\mbox{and}\quad -(n-1)(\varphi')^{2}+mG\varphi'=\kappa\lambda. \end{eqnarray*} This implies that \begin{eqnarray}\label{hdois} -(\varphi')^{2}=\frac{\kappa\lambda+mG^{2}}{(n-1)}. \end{eqnarray} Then, from (\ref{hum}) and (\ref{hdois}) we get \begin{eqnarray*} (\varphi')^{2}+G\varphi'=0. \end{eqnarray*} That is, $\varphi'=0$ or $\varphi'=-G$. First consider $\varphi'=0$. From (\ref{scalarcurvature}) and (\ref{system12}), it is easy to see that $\lambda=\bar{R}=0$. Then we get the first item of the theorem since, as mentioned, the case $\varphi' = 0$ was not considered in \cite{Romildo}. Now, we take $\varphi'=-G$. Integrating over $\xi$, we have \begin{eqnarray}\label{phii} \varphi(\xi)=-G\xi+C;\quad\mbox{where}\quad G\neq0, C\in\mathbb{R}. \end{eqnarray} Then, from (\ref{hum}) we obtain \begin{eqnarray}\label{htres} \frac{\kappa\lambda+mG^{2}}{(n-1)G}=-G. \end{eqnarray} Since $G^{2}=\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}$, from (\ref{htres}) we obtain \begin{eqnarray}\label{scalarcurvature1} \bar{R}=\frac{n(n-1)\lambda}{(m+n-1)}. \end{eqnarray} Considering that $\lambda\neq0$, we can see that $\bar{R}$ is a non-null constant.
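The elimination leading to (\ref{scalarcurvature1}) can be reproduced symbolically; in the following sketch (an informal check) we write (\ref{htres}) as $\kappa\lambda+mG^{2}=-(n-1)G^{2}$, substitute the expression for $G^{2}$, and solve for $\bar{R}$:

```python
import sympy as sp

# Sketch check: combining (htres), kappa*lam + m*G^2 = -(n-1)*G^2, with
# G^2 = kappa*(lam*(n - m) - Rbar)/(m*(m - 1)) should give
# Rbar = n*(n-1)*lam/(m + n - 1).

n, m, lam, Rbar = sp.symbols('n m lam Rbar')
kappa = 1                       # kappa = -1 yields the same Rbar (only kappa^2 enters)
G2 = kappa*(lam*(n - m) - Rbar)/(m*(m - 1))
sol = sp.solve(sp.Eq(kappa*lam + m*G2, -(n - 1)*G2), Rbar)
```

The key simplification is $m(m-1)-n(n-1)=(m-n)(m+n-1)$, which cancels the factor $(m-n)$ and leaves the stated value of $\bar{R}$.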
On the other hand, since $\varphi'=-G$, from (\ref{scalarcurvature}) we get \begin{eqnarray}\label{anem} \bar{R}=-n(n-1)\kappa G^{2}, \end{eqnarray} where $G^{2}=\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}$. Observe that (\ref{scalarcurvature1}) and (\ref{anem}) are equivalent. Furthermore, from (\ref{sera3}) and (\ref{phii}) we get \begin{eqnarray*} f(\xi)=\frac{\Theta}{-G\xi+C}. \end{eqnarray*} This completes the proof. \hfill $\Box$ \ \noindent {\bf Proof of Corollary \ref{coro1}:} It is a direct consequence of Theorem \ref{teo3}-(1). \hfill $\Box$ \ \noindent {\bf Proof of Corollary \ref{coro2}:} Recall that $\xi=\displaystyle\sum_{i}\alpha_{i}x_{i}$, where $\alpha_{i}\in\mathbb{R}$ (cf. Section \ref{CFSI}). Consider in Theorem \ref{teo3}-(2) that $\alpha_{n}=-\frac{1}{G}$ and $\alpha_{i}=0$ for all $i\neq n$. Moreover, taking $C=0$ and $\Theta=1$, we get \begin{eqnarray} f(\xi)=\frac{1}{x_{n}}. \end{eqnarray} Moreover, take $\mathbb{R}^{n^{\ast}}_{+}=\{(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}; x_{n}>0\}$. Then, $\left(\mathbb{R}^{n^{\ast}}_{+},g_{can}=\frac{\delta_{ij}}{x_{n}^{2}}\right)=(\mathbb{H}^{n},g_{can})$ is the hyperbolic space. We point out that $\mathbb{H}^{n}$ with this metric has constant sectional curvature equal to $-1$. Then, from (\ref{scalarcurvature1}) we obtain $\lambda=-\frac{m+n-1}{n(n-1)}$, and the result follows. \hfill $\Box$ \iffalse \noindent {\bf Proof of Theorem \ref{teo4}:} It is a straightforward computation from (\ref{system2}) that \begin{eqnarray*}\label{eqseggrau} m(m-1)G^{2}-\left[2m(n-2)\varphi'\right]G+[\lambda(m+n-2)+(n-2)(n-1)(\varphi')^{2}]=0, \end{eqnarray*} where $G=\pm\sqrt{\frac{\kappa[\lambda(n-m)-\bar{R}]}{m(m-1)}}$ and $\kappa=\pm1$. Therefore, from the second order equation we have \begin{eqnarray*}\label{G} G=\frac{m(n-2)\varphi'\pm \sqrt{\Delta}}{m(m-1)}, \end{eqnarray*} where $\Delta=[m^{2}(n-2)^{2}-m(m-1)(n-2)(n-1)](\varphi')^{2}-\lambda m(m-1)(m+n-2)$.
Observe that, by hypothesis $m=n-1$, then $\Delta=-\lambda m(m-1)(m+n-2)$. Whence, (\ref{G}) becomes \begin{eqnarray*} G=\varphi'\pm \sqrt{-\lambda\frac{(2n-3)}{(n-1)(n-2)}}. \end{eqnarray*} This implies that $\lambda<0$. Then, taking $\beta=\pm \sqrt{-\lambda\frac{(2n-3)}{(n-1)(n-2)}}$ \begin{eqnarray}\label{hquatro} G^{2}=(\varphi')^{2}+2\varphi'\beta+\beta^{2}. \end{eqnarray} Since $G^{2}=\frac{\kappa(\lambda-\bar{R})}{(n-1)(n-2)}$ and $\bar{R}=\kappa(n-1)[2\varphi\varphi''-n(\varphi')^{2}]$ from (\ref{hquatro}) we get \begin{eqnarray}\label{edoboa} \varphi\varphi''-(\varphi')^{2}+\tilde{\beta}\varphi'+\theta=0, \end{eqnarray} where $\tilde{\beta}=\pm\sqrt{\frac{-\lambda(2n-3)(n-2)}{(n-1)}}$ and $\theta=-\lambda\frac{2n+\kappa-3}{2(n-1)}$. Then, from (\ref{edoboa}) we obtain \begin{eqnarray*} \varphi(\xi) = \frac{1}{2}\xi(\sqrt{\tilde{\beta}^{2}+4\theta}+\tilde{\beta})+\ln\left(\frac{\sqrt{\tilde{\beta}^{2}+4\theta}}{\theta_{1}\exp(\xi\sqrt{\tilde{\beta}^{2}+4\theta})-\theta_{2}}\right), \end{eqnarray*} where $\tilde{\beta}^{2}+4\theta=-\lambda\frac{(2n-3)(n-4)+2\kappa}{(n-1)}$ and $\theta_{1}\neq0$. Observe that, if $n=4$ then $\kappa=1$. \hfill $\Box$ \fi \ \begin{acknowledgement} The authors would like to express their deep thanks to Professor Ernani Ribeiro Jr for valuable suggestions. \end{acknowledgement}
\section{Introduction} Entanglement, as measured by, {\em e.g.}, bipartite block entanglement entropy, is playing an increasingly important role in the study of condensed-matter or quantum many-body physics, both conceptually and quantitatively. It has been used as a very useful and in some cases indispensable way to characterize phases and phase transitions, especially for phases and quantum phase transitions in strongly correlated fermionic or spin systems (for a review, see Ref. [\onlinecite{amico:517}]). For bosonic systems, studies of entanglement entropy have mostly focused on {\em relativistic} free bosonic field theories \cite{wilczek94,calabrese-2004-0406,casini-2005-0512}, which are equivalent to coupled harmonic oscillator systems (for reviews, see Refs. [\onlinecite{adesso-2007-40}] and [\onlinecite{RevModPhys.77.513}]). In this paper we study the entanglement properties of free {\em non-relativistic} Bose gases. In addition to its intrinsic interest, our motivation also comes in part from the following consideration. Recent studies have shown that entanglement is enhanced at quantum critical points \cite{QPT_Sachdev} and in strongly correlated phases with topological order \cite{xgwen1990}, in the form of either a violation of the area law \cite{PhysRevD.34.373, area-law, eisert-2008, wolf:010404, gioev:100503, Barthel:2006, Li:2006, wilczek94, PhysRevLett.90.227902, calabrese-2004-0406, PhysRevLett.93.260602, santachiara-2006-L06002, feiguin:160409, bonesteel:140405}, or subleading corrections to the area law that diverge with block size \cite{levin:110405, kitaev:110404, fradkin:050404} (usually in a logarithmic fashion). On the other hand, there have been relatively few studies of the behavior of entanglement entropy in states with traditional long-range order \cite{vidal05, vidal06, vidal07}.
In a recent work \cite{ding:052109}, we calculated the block entanglement entropy of some exactly soluble spin models that exhibit ferromagnetic or antiferromagnetic long-range order in the ground state, and found that such conventional orders also lead to a logarithmically divergent contribution to the entropy. Bose-Einstein condensation (BEC) is perhaps the simplest example of conventional ordering. It is thus natural to study its entanglement properties. As we are going to show, a Bose-Einstein condensate (referred to as a condensate from now on) indeed makes a logarithmically divergent contribution to the entropy as well. Besides the entanglement entropy of the ground state, the entanglement properties of a system at finite temperature are also of great interest. However, the entanglement entropy is only well-defined for a pure state. For a system that is described by a mixed density matrix, the von Neumann entropy of the reduced density matrix becomes different for the two parts of the bipartite system. In such cases, there is a natural extension of the entanglement entropy that one can work with: the mutual information \cite{dreissig,Wolf2008}. We will show that a condensate, when present, makes a logarithmically divergent contribution to the mutual information. This paper is organized as follows. In Sec. II we study the ground state entanglement entropy of a generic free boson model that is translationally invariant \cite{klich06}. In Sec. III we introduce an infinite-range hopping model for bosons which is exactly solvable, and calculate the mutual information analytically. In Sec. IV we introduce a long-range hopping model for bosons in one dimension (1D) which exhibits a finite temperature BEC for a certain parameter range, and then present a numerical study of the mutual information for this model. Finally, we summarize and discuss the results of this paper in Sec. V.
\section{Zero Temperature: Entanglement Entropy of Free Bosons} Consider a general Hamiltonian of free bosons hopping on a lattice of size $L$: \begin{equation}\label{Eq:GeneralH} H = - \sum_{ij} t_{ij} \hat{a}^\dagger_i \hat{a}_j, \end{equation} where $t_{ij} > 0$ and the $\hat{a}_i(\hat{a}^\dagger_i)$'s are the bosonic annihilation (creation) operators. If the system is translationally invariant, $t_{ij} = t_{i-j}$, then the Hamiltonian can be diagonalized by Fourier transformation: \begin{equation} H = \sum_{k} \varepsilon(k) \hat{b}^\dagger_k \hat{b}_k, \end{equation} where $\hat{b}_k = \frac{1}{\sqrt{L}}\sum_j e^{-ijk} \hat{a}_j$ is the annihilation operator in $k$ space. In the most generic cases, the ground state is the $k=0$ state. At zero temperature, all particles fall into this single-particle ground state. For a system containing $N$ particles, the many-body ground state is given by: \begin{equation} \ket{\Psi_0} = \frac{1}{\sqrt{N!}} (\hat{b}^\dagger_0)^N \ket{0} = \frac{1}{\sqrt{N!}} (\frac{1}{\sqrt{L}} \sum_j \hat{a}^\dagger_j)^N \ket{0}. \end{equation} To consider its bipartite block entanglement entropy, we divide the system of size $L$ into two parts, labeled $A$ and $B$ respectively. Let the sizes of the two parts be $L_A$ and $L_B$, $L_A + L_B = L$, and define \begin{equation} \hat{a}^\dagger_A = \frac{1}{\sqrt{L_A}} \sum_{j \in A} \hat{a}^\dagger_j,\ \hat{a}^\dagger_B = \frac{1}{\sqrt{L_B}} \sum_{j \in B} \hat{a}^\dagger_j.
\end{equation} Then we can write $\ket{\Psi_0}$ as: \begin{equation} \begin{split} \ket{\Psi_0} & = \frac{L^{-N/2}}{\sqrt{N!}} (\sqrt{L_A} \hat{a}^\dagger_A + \sqrt{L_B} \hat{a}^\dagger_B )^N \ket{0} = \frac{L^{-N/2}}{\sqrt{N!}}\sum_{l=0}^{N} \frac{N!}{(N-l)!l!} (\sqrt{L_A} \hat{a}^\dagger_A)^{l} (\sqrt{L_B} \hat{a}^\dagger_B)^{N-l} \ket{0} \\ & = L^{-N/2} \sum_{l} \sqrt{\frac{N!}{(N-l)!l!}} L_A^{l/2} L_B^{(N-l)/2} \left[ \frac{1}{\sqrt{l!}} \hat{a}^{\dagger l}_A \frac{1}{\sqrt{(N-l)!}} \hat{a}^{\dagger N-l}_B \ket{0} \right] \\ & = \sum_l \sqrt{\lambda_l} \ket{l}_A \otimes \ket{N-l}_B, \end{split} \end{equation} where $\lambda_l = L^{-N} \frac{N!}{(N-l)!l!} L_A^{l} L_B^{N-l}$, $\ket{l}_A = \frac{1}{\sqrt{l!}} \hat{a}^{\dagger l}_A \ket{0}_A$, $\ket{N-l}_B = \frac{1}{\sqrt{(N-l)!}} \hat{a}_B^{^\dagger N-l}\ket{0}_B$, and $\ket{0} = \ket{0}_A \otimes \ket{0}_B$. This is an explicit Schmidt decomposition, and therefore the entanglement entropy is readily given by: \begin{equation} E = - \sum_l \lambda_l \ln \lambda_l. \end{equation} We are interested in the asymptotic behavior in two limiting cases: (i) the equal partition case; (ii) the case where $B$ is substantially larger than $A$, i.e., $L_B \gg L_A$. (i) Equal partition, $L_A = L_B = \frac{L}{2}$: \begin{equation} \lambda_l = \frac{N!}{l!(N-l)!} \frac{L_A^l L_B^{N-l}}{L^N} = \frac{N!}{l!(N-l)!\, 2^N}. \end{equation} Let $x = \frac{N}{2} - l$; then $x \in [-\frac{N}{2}, \frac{N}{2}]$, and we can denote $\lambda_l$ as $\lambda_x = \frac{N!}{2^N (\frac{N}{2} - x)! (\frac{N}{2} + x)!}$, which can be approximated by a Gaussian factor $\lambda_x \sim e^{\frac{- 2x^2}{N}}$ when $N$ is large. In the limit $N \rightarrow \infty$, the summation over $l$ (or $x$) can be approximated by an integral.
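The Gaussian approximation of the binomial weights can be checked numerically; the following sketch (our own illustration, with an arbitrarily chosen $N=2000$) compares $\lambda_{x}$ at $x=0$ with the prefactor $\sqrt{2/(\pi N)}$, and the exact entropy with the Gaussian-integral value $\frac{1}{2}[1+\ln(\pi N/2)]$:

```python
import math

# Numerical sketch (illustrative N): the equal-partition Schmidt spectrum
# lambda_l = C(N, l)/2^N, with x = N/2 - l, versus the Gaussian
# sqrt(2/(pi*N))*exp(-2*x**2/N), and the entropy versus (1/2)*(1 + log(pi*N/2)).

N = 2000
lam = [math.comb(N, l)/2**N for l in range(N + 1)]

gauss0 = math.sqrt(2/(math.pi*N))                 # Gaussian weight at x = 0
rel_err = abs(lam[N//2] - gauss0)/gauss0          # ~ 1/(4N)

E_exact = -sum(p*math.log(p) for p in lam if p > 0)
E_asym = 0.5*(1 + math.log(math.pi*N/2))
```

Both comparisons agree to within a few parts in $10^{3}$ already at this modest $N$.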
Also in this limit, the Gaussian factor is sharply peaked around $x = 0$, so the integration region can be extended from minus infinity to infinity. Using the fact that $\sum_x \lambda_x\simeq \int^{\infty}_{-\infty}\lambda(x) dx = 1$, we arrive at \begin{equation} \lambda(x) \simeq \sqrt{\frac{2}{N \pi}} e^{-\frac{2 x^2}{N}}. \end{equation} The entanglement entropy is then \begin{equation} \label{eqn:EEEP} E \simeq - \int^{\infty}_{-\infty}\lambda(x)\ln \lambda(x) dx = \frac{1}{2} \left(1 + \ln(\frac{N \pi}{2}) \right) = \frac{1}{2} \ln N + \mathcal{O}(1). \end{equation} (ii) Unequal partition, $L_B \gg L_A$: If $L_B \gg L_A$ and $L \rightarrow \infty$ with $\frac{N}{L} \rightarrow \ev{n}$ fixed, the distribution of $\lambda_l$ approaches a Poisson distribution: \begin{equation} \lambda_l = \frac{N!}{l!(N-l)!} \frac{L_A^l L_B^{N-l}}{L^N} \xrightarrow{N \rightarrow \infty} \frac{(L_A \ev{n} )^l e^{-L_A \langle n \rangle }}{l!}. \end{equation} The entropy of the Poisson distribution, which in this case is our entanglement entropy, is known to be: \begin{equation}\label{Eq:EE} \begin{split} E &= \frac{1}{2} [1 + \ln (2\pi L_A \ev{n})] - \frac{1}{12 L_A \ev{n}} + O(\frac{1}{(L_A \ev{n})^2})\\ &= \frac{1}{2}\left[1 + \ln (2 \pi N_A )\right] -\frac{1}{12 N_A} + O(\frac{1}{(N_A)^2}) = \frac{1}{2}\ln N_A + \mathcal{O}(1), \end{split} \end{equation} where $N_A = L_A \ev{n}$ is the average particle number in subsystem $A$. Therefore, we find, in both cases, that the leading term of the entanglement entropy goes as $\frac{1}{2} \ln N_A$ for $L_A \le L_B$.
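The Poisson entropy expansion in Eq. (\ref{Eq:EE}) is also easy to confirm numerically (a sketch with the arbitrarily chosen mean $N_A=500$):

```python
import math

# Numerical sketch: entropy of a Poisson distribution with mean NA versus the
# asymptotic (1/2)*[1 + log(2*pi*NA)] - 1/(12*NA) quoted in the text.

NA = 500.0
# log-weights log(lambda_l) = l*log(NA) - NA - log(l!), over a wide window
logw = [l*math.log(NA) - NA - math.lgamma(l + 1) for l in range(2000)]
E_exact = -sum(math.exp(lw)*lw for lw in logw)
E_asym = 0.5*(1 + math.log(2*math.pi*NA)) - 1/(12*NA)
```

The agreement is at the $O(1/N_A^{2})$ level, consistent with the stated remainder.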
\section{Mutual Information: Analytic Study of an Infinite-Range Hopping Model} In this section, we will study the natural generalization of entanglement entropy at finite temperature: the mutual information, which is defined as \begin{equation} E_M = \frac{1}{2} (E_A + E_B - S), \end{equation} where $E_A$ and $E_B$ are the von Neumann entropies of the reduced density matrices of subsystems A and B, respectively, and $S$ is the entropy of the whole system. Note that at finite temperature $E_A$ and $E_B$ are no longer the same due to the fact that the system is described by a mixed density matrix. We must emphasize here that our definition of mutual information differs from its usual definition \cite{minote} by a factor of 2, so that it converges to the entanglement entropy when the system approaches a pure state. \subsection{Model, spectrum and thermodynamic properties} In order to facilitate an exact solution, we consider the following infinite-range hopping model, which is obtained by setting $t_{ij}$ in Eq. (\ref{Eq:GeneralH}) to a constant properly scaled by the system size, $t_{ij} = t / L$, so that the thermodynamic limit is well-defined. The Hamiltonian is then \begin{equation} H = -\frac{t}{L} \sum_{i,j}\hat{a}_i^\dagger \hat{a}_j = - \frac{t}{L} (\sum_i \hat{a}_i^\dagger) (\sum_j \hat{a}_j). \end{equation} By substituting the Fourier transform of the $\hat{a}_j$'s defined in Sec. II, $\hat{b}_k = \frac{1}{\sqrt{L}}\sum_j e^{-ijk} \hat{a}_j$, one obtains: \begin{equation} H = -t \hat{b}^\dagger_0 \hat{b}_0. \end{equation} This model has a very simple spectrum: a single-particle ground state with energy $-t$, while all the other single-particle states are degenerate with zero energy. This particularly simple spectrum makes an exact solution possible. To study the finite temperature properties of this model, we will work with the grand canonical ensemble (GCE), in which the chemical potential $\mu$ is introduced to control the average density of the system.
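The diagonalization can also be seen at the single-particle level: the hopping matrix $t_{ij}=t/L$ is proportional to the all-ones matrix, whose spectrum is $\{L,0,\dots,0\}$, so the single-particle energies are $-t$ (once) and $0$ ($L-1$ times). A short numerical sketch (arbitrary $t$ and $L$):

```python
import numpy as np

# Single-particle spectrum of the infinite-range hopping model: the L x L
# matrix -(t/L)*ones has one eigenvalue -t (the k = 0 mode) and L - 1 zeros,
# in agreement with H = -t * b_0^dag b_0.

t, L = 1.0, 64
h = -(t/L)*np.ones((L, L))
ev = np.sort(np.linalg.eigvalsh(h))
```

The lone $-t$ eigenvector is the uniform (k = 0) mode, which is why only $\hat{b}_0$ survives in the diagonal form of $H$.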
This model exhibits BEC below a finite temperature $T_C$. To determine $T_C$, we start by considering a system of finite size $L$, whose occupation numbers are: \begin{equation} \ev{N_{k=0}} = \ev{N_0} = \frac{1}{e^{\beta (-t - \mu)} - 1},\text{ } \ev{N_{k}} = \frac{1}{e^{\beta(-\mu)} - 1}\text{ for $k \ne 0$}. \end{equation} Here $\ev{N_{0}}$ and $\ev{N_{k}}$ denote average occupation numbers for the corresponding states in $k$-space; $\beta = \frac{1}{T}$ is the inverse temperature. From this point on, when we write $\ev{N_k}$, it is understood that $k \ne 0$. The average total particle number of the system will be denoted as $\ev{N}$. To identify $T_C$, note that in the thermodynamic limit, when $T \rightarrow T_C + 0^+$, we have $\mu \rightarrow E_{k=0} = -t$ and $\frac{\langle N_0 \rangle}{\langle N \rangle} \rightarrow 0$. Therefore, $\ev{N_{k}} = \frac{1}{e^{\beta_c t} - 1} \simeq \frac{ \langle N \rangle }{L} = \ev{n}$, where $\ev{n}$ is the average particle density. So we obtain: \begin{equation} T_C = \frac{t}{\ln (1 + 1/\ev{n})}. \label{Eq:inf_Tc} \end{equation} Above $T_C$, $\ev{n} = \frac{L - 1}{L} \frac{1}{e^{-\beta \mu} - 1} + \frac{1}{L} \frac{1}{e^{\beta (-t - \mu)} - 1}$, from which in the large $L$ limit we can derive that \begin{equation} \mu = -T \ln\left(1+\frac{1}{\ev{n}}\right). \end{equation} $\mu$ has a finite-size correction which is negligible above $T_C$, but becomes important below $T_C$. We also know that the partition function of the system in the GCE takes the following form: \begin{equation} Z = (\frac{1}{1 - e^{\beta \mu}})^{L-1} \frac{1}{1 - e^{\beta(t+\mu)}}, \end{equation} from which it is easy to show that the entropy in the GCE takes the following form: \begin{equation}\label{Eq:thermal entropy} \begin{split} S &= - \frac{\partial \Omega}{\partial T} = \ln Z - \frac{1}{T} \frac{\partial}{\partial \beta} \ln Z\\ &= (1 + \ev{N_0}) \ln(1 + \ev{N_0}) - \ev{N_0} \ln \ev{N_0} + (L - 1) \left[(1 + \ev{N_k}) \ln (1 + \ev{N_k}) - \ev{N_k} \ln \ev{N_k} \right].
\end{split} \end{equation} Anticipating later relevance, we are particularly interested in the behavior of finite size systems near $T_C$. For a finite system, the chemical potential $\mu$ is no longer strictly equal to the ground state energy below $T_C$, but picks up a finite-size correction $\delta \mu$ determined by the following condition: \begin{equation} \frac{1}{e^{-\beta \delta \mu} - 1} = \ev{N_0}, \end{equation} from which we can easily derive that \begin{equation} \delta \mu = - T \ln\left(1 + \frac{1}{\ev{N_0}}\right). \end{equation} Considering $T = T_C = \frac{t}{\ln(1 + 1/\langle n \rangle)}$ and making use of the following fact \begin{equation} \ev{N_0}= \ev{N} - \sum_{k \ne 0} \ev{N_k} = \ev{N} - \frac{L - 1}{e^{\beta (t - \delta \mu)} - 1}, \end{equation} we obtain \begin{equation} \ev{N_0} = \ev{N} - \frac{L - 1}{e^{\beta_c (t - \delta \mu)} - 1} = \ev{N} - \frac{L - 1}{(1 + L / \ev{N}) (1 + 1 / \ev{N_0}) - 1}. \end{equation} This equation can be solved to give $\ev{N_0}$ as a function of the system size $L$ at a given density $\ev{n} = \ev{N} / L$, at $T=T_C$: \begin{equation}\label{Eq:N_0@Tc} \ev{N_0} = \sqrt{L} \sqrt{\left(\frac{\ev{N}}{L}\right)^2 + \frac{\ev{N}}{L} + \frac{1}{4L}} \simeq \sqrt{L} \sqrt{ \langle n \rangle ^2 + \langle n \rangle }. \end{equation} Even though this divergent $\ev{N_0}$ does not affect the thermodynamic behavior of the system, as we will see later it makes a (leading) divergent contribution to the mutual information at $T=T_C$, depending on how the system is partitioned, specifically on how large the subsystem size $L_A$ is compared with this $\sqrt{L}$ scale.
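The closed form (\ref{Eq:N_0@Tc}) can be checked against a direct numerical solution of the self-consistency condition (a sketch with the illustrative values $\ev{n}=1$ and $L=10^{6}$):

```python
import math

# Numerical sketch: solve  N0 = N - (L - 1)/[(1 + L/N)*(1 + 1/N0) - 1]
# by bisection and compare with sqrt(L)*sqrt(n^2 + n), here with n = 1.

n_avg, L = 1.0, 10**6
N = n_avg*L

def resid(N0):
    # monotonically increasing in N0, so a simple bisection works
    return N0 - (N - (L - 1)/((1 + L/N)*(1 + 1/N0) - 1))

lo, hi = 1.0, float(N)
for _ in range(200):
    mid = 0.5*(lo + hi)
    lo, hi = (lo, mid) if resid(mid) > 0 else (mid, hi)
N0 = 0.5*(lo + hi)

pred = math.sqrt(L)*math.sqrt(n_avg**2 + n_avg)   # ~ sqrt(2L)
```

The bisection root agrees with $\sqrt{L}\sqrt{\ev{n}^{2}+\ev{n}}$ to well under one percent, confirming the $\sqrt{L}$ scaling of the condensate occupation at $T_C$.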
\subsection{Formalism and issues} In what follows, we will use Peschel's result \cite{Peschel2003} on the reduced density matrix of a Gaussian state: \begin{equation}\label{Eq:RDM} \rho_A = \mathcal{K} e^{\{\ln((1+G)G^{-1})\}^T_{ij}\hat{a}^\dagger_i \hat{a}_j}, \end{equation} where $G_{ij} = \ev{\hat{a}^\dagger_i \hat{a}_j}$ is the two-point correlation function matrix truncated within the subsystem, and $\mathcal{K}$ is the normalization factor. The entropy is given as \begin{equation}\label{Eq:entropy} E_A = \sum_l \left[(1 + g_l)\ln (1 + g_l) - g_l \ln g_l\right], \end{equation} where the $g_l$ are the eigenvalues of $G$ (after truncation). This formula also applies to the full system. We must note, however, that it does not reproduce the correct zero-temperature limit of the entropy. At zero temperature, $G_{ij} = \ev{a_i^\dagger a_j} = \ev{n}$. Its eigenvalues are all zero except for one: $g_0 = \ev{n} L = \ev{N}$, which gives a non-zero entropy $S_{T=0} =(\ev{N} + 1) \ln (\ev{N} + 1) - \ev{N} \ln \ev{N} = \ln (\ev{N}+1) + \ev{N} \ln (1 + \frac{1}{\ev{N}}) \sim \ln \ev{N}$. This reflects the fact that we are working in the GCE, where particle-number fluctuations are still allowed at $T = 0$, with fluctuation amplitude $\delta N \sim \ev{N}$. However, as we show below, the mutual information still converges to the correct zero-temperature limit, the entanglement entropy, at least to leading order. The von Neumann entropy for a subsystem $A$ is given by \begin{equation} E_{A}^{(\text{GCE})} = (N_A + 1) \ln (N_A + 1) - N_A \ln N_A = \ln (N_A+1) + N_A \ln (1 + \frac{1}{N_A}), \end{equation} where $N_A = \ev{n} L_A$ is the average total particle number in the subsystem $A$. In the large-$N_A$ limit, the second term converges to $1$. So the mutual information is given by: \begin{equation} E_{M} \equiv \frac{1}{2} (E_A + E_B - S_{GCE\ T=0}) = \frac{1}{2} \left(\ln \frac{N_A N_B}{N} + 1\right). 
\end{equation} For $N_A \le N_B$, we have \begin{equation} E_M \simeq \frac{1}{2} \ln N_A + \mathcal{O}(1). \end{equation} This agrees with Eq. (\ref{Eq:EE}) to leading order. \subsection{Mutual information} According to Eqs. (\ref{Eq:RDM}) and (\ref{Eq:entropy}), to obtain the von Neumann entropy of the reduced density matrix, all we have to do is diagonalize the truncated two-point correlation function matrix. Fortunately, within this infinite-range hopping model, this is rather simple. For a finite system, we can obtain a general result valid at all temperatures: \begin{equation} \begin{split} G_{ij} &= \ev{\hat{a}^\dagger_i \hat{a}_j} = \ev{\frac{1}{L} \sum_k e^{-i k (i-j)} \hat{b}^\dagger_k \hat{b}_k} \\ & = \frac{1}{L} \ev{\hat{b}^\dagger_0 \hat{b}_0} + \frac{1}{L} \sum_{k \neq 0} e^{-ik(i-j)}\ev{\hat{b}^\dagger_k \hat{b}_k} = \frac{\ev{N_{0}}}{L} + \frac{\ev{N_{k}}}{L} \sum_{k \neq 0} e^{-ik(i-j)}\\ & = \frac{\ev{N_0}}{L} + (\delta_{ij} - \frac{1}{L}) \ev{N_k}. \end{split} \end{equation} In the above calculation, we have made use of the fact that $\ev{N_k}$ is $k$-{\em independent}. This matrix is easily diagonalized. For a system of size $L$ and $G$ truncated to size $L_A \times L_A$ (denoted $G_{A}$), the eigenvalues are \begin{equation} g_1 = \frac{L_A \ev{N_0}}{L} + \frac{L - L_A}{L} \ev{N_k},\ g_l =\ev{N_k} \text{ for $l = 2, \dots , L_A$}. 
\end{equation} Now the von Neumann entropy of subsystem $A$ can be calculated directly from the above result: \begin{equation} \begin{split} E_A &= \sum_{l = 1}^{L_A} \left((1 + g_l) \ln (1 + g_l) - g_l \ln g_l \right)\\ &= \left( 1 + \frac{L_A \ev{N_0}}{L} + \frac{L - L_A}{L} \ev{N_k} \right) \ln \left(1 + \frac{L_A \ev{N_0}}{L} + \frac{L - L_A}{L} \ev{N_k} \right) \\ &- \left(\frac{L_A \ev{N_0}}{L} + \frac{L - L_A}{L} \ev{N_k} \right) \ln \left(\frac{L_A \ev{N_0}}{L} + \frac{L - L_A}{L} \ev{N_k} \right)\\ & + (L_A - 1)\left[(1 + \ev{N_k}) \ln (1 + \ev{N_k}) - \ev{N_k} \ln (\ev{N_k}) \right].\\ \end{split} \end{equation} Combining the above with Eq. (\ref{Eq:thermal entropy}), we obtain the mutual information for a general bipartition: \begin{equation}\label{Eq:MI} \begin{split} &E_M = \frac{1}{2} (E_A + E_B - S) \\ &= \frac{1}{2} \biggl[ \left(1 + \frac{L_A \ev{N_0}}{L} + \frac{L_B}{L} \ev{N_k}\right) \ln\left(1 + \frac{L_A \ev{N_0}}{L} + \frac{L_B}{L} \ev{N_k}\right)\\ & - \left(\frac{L_A \ev{N_0}}{L} + \frac{L_B}{L} \ev{N_k}\right) \ln\left(\frac{L_A \ev{N_0}}{L} + \frac{L_B}{L} \ev{N_k}\right) \\ &+ \left(1 + \frac{L_B \ev{N_0}}{L} + \frac{L_A}{L} \ev{N_k}\right) \ln\left(1 + \frac{L_B \ev{N_0}}{L} + \frac{L_A}{L} \ev{N_k}\right)\\ &-\left(\frac{L_B \ev{N_0}}{L} + \frac{L_A}{L} \ev{N_k}\right) \ln\left(\frac{L_B \ev{N_0}}{L} + \frac{L_A}{L} \ev{N_k}\right) \\ & -(1 + \ev{N_k}) \ln (1 + \ev{N_k}) + \ev{N_k} \ln \ev{N_k} - (1 + \ev{N_0}) \ln (1 + \ev{N_0}) + \ev{N_0} \ln \ev{N_0} \biggr]. \\ \end{split} \end{equation} Next, we discuss the asymptotic behavior of $E_M$ in different temperature regimes and for different partitions. \begin{enumerate} \item[(1)] $L_A \ll L$, $T > T_C$: in this case, $\ev{N_0}$ and $\ev{N_k} \simeq \ev{n}$ are both of order one, so $\frac{L_A \langle N_0 \rangle }{L} \rightarrow 0$, $\frac{L_A \langle N_k \rangle }{L} \rightarrow 0$, $L_B \simeq L$. Thus \begin{equation} E_M \simeq \mathcal{O}\left(\frac{L_A}{L}\right). 
\end{equation} \item[(2)] $L_A = L_B = \frac{L}{2}$, $T > T_C$: in this case, $E_M$ reduces to \begin{equation} \begin{split} E_M &= \frac{1}{2} \biggl(\left(2 + \ev{N_0} + \ev{N_k}\right) \ln \left(1 + \frac{1}{2}\left(\ev{N_0} + \ev{N_k}\right)\right) \\ &- \left(\ev{N_0} + \ev{N_k}\right) \ln \left(\ev{N_0}/2 + \ev{N_k}/2\right) \\ & - (1 + \ev{N_k}) \ln\left(1 + \ev{N_k}\right) +\ev{N_k} \ln \ev{N_k} \\ &- \left(1 + \ev{N_0}\right) \ln (1 + \ev{N_0}) + \ev{N_0} \ln \ev{N_0} \biggr) \end{split} \end{equation} which can also be written as an explicit function of $L$, $\ev{n}$, and $T$ using the previous results. \item[(3)] $L_A \ll L$, $T < T_C$: in this case, $\ev{N_0}$ becomes a macroscopic number, while $\ev{N_k}$ remains of order $1$. We therefore isolate the contribution from $\ev{N_0}$; the remaining terms are of order $\mathcal{O}(1)$: \begin{equation}\label{Eq:E_m3} E_M = \frac{1}{2} \ln \left(1 + \frac{L_A \langle N_0 \rangle }{L} \right) + \mathcal{O}(1). \end{equation} If we define $\ev{N_{A0}} = \frac{L_A \langle N_0 \rangle }{L}$ to be the average particle number in the condensate of the subsystem, then \begin{equation} E_M = \frac{1}{2} \ln \ev{N_{A0}} + \mathcal{O}(1). \end{equation} \item[(4)] $L_A = L_B = \frac{L}{2}$, $T < T_C$: the leading contribution is again obtained by keeping $\ev{N_0}$'s contribution only: \begin{equation}\label{Eq:E_m4} E_M = \frac{1}{2} \ln \left(\frac{1}{4}\ev{N_0} + 1 \right) + \mathcal{O}(1) = \frac{1}{2} \ln \ev{N_{A0}} + \mathcal{O}(1). \end{equation} \item[(5)] $L_A \ll L$, $T = T_C$: as we calculated before, $\ev{N_0}$ diverges as $\sqrt{L}$. When $L_A \ll L$, according to Eqs. (\ref{Eq:E_m3}) and (\ref{Eq:N_0@Tc}), we have \begin{equation} E_M = \frac{1}{2} \ln \left(1 + \frac{L_A \sqrt{(\langle n \rangle + 1) \langle n \rangle}}{\sqrt{L}} \right) + \mathcal{O}(1) = \frac{1}{2} \ln\left(\langle n \rangle \frac{L_A}{\sqrt{L}}\right) + \mathcal{O}(1). 
\end{equation} For such a partition, the scaling behavior of the mutual information depends on the ratio $\frac{L_A}{\sqrt{L}}$. If $L_A$ is a small but still finite fraction of $L$, the logarithmic scaling of the mutual information persists: $E_M = \frac{1}{4} \ln (\langle n \rangle L_A) + \mathcal{O}(1)$. \item[(6)] $L_A = L/2$, $T = T_C$: referring to Eqs. (\ref{Eq:E_m4}) and (\ref{Eq:N_0@Tc}), we have \begin{equation} E_M = \frac{1}{2} \ln \left(\frac{1}{4} \sqrt{(\langle n \rangle + 1)\langle n \rangle L} \right) + \mathcal{O}(1) = \frac{1}{4} \ln (\langle n \rangle L_A) + \mathcal{O}(1). \end{equation} \end{enumerate} To sum up, we find that the extensive part of the thermal entropy of the whole system cancels out in the mutual information. Below $T_C$ the mutual information is dominated by the contribution from the condensate. Even above $T_C$, the contribution from the condensate is of the same order as that from the excited states. The mutual information thus characterizes the quantum features of the system. To visualize the behavior of the mutual information, we present a numerical calculation for this model in Fig. \ref{Fig:mi_inf}. This is done by numerically diagonalizing the truncated two-point correlation function matrix and then computing the von Neumann entropy of the reduced density matrix from the eigenvalues. The system is equally partitioned and $\ev{n} = 1$, so $T_C = \frac{1}{\ln 2}$. We see exactly what our analytic results predict: above $T_C$ the mutual information saturates; below $T_C$, $E_M \simeq \frac{1}{2} \ln L_A$; at $T_C$, $E_M \simeq \frac{1}{4} \ln L_A$. Note that in the plot, the analytic results (dashed lines) deviate from the numerical calculation because we only keep terms to subleading order; terms that go to zero [i.e., of order $\mathcal{O}(\frac{1}{L_A})$] in the thermodynamic limit are neglected. 
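The claimed spectrum of the truncated correlation matrix can be verified directly without any linear-algebra library: the uniform vector is an eigenvector of $G_A$ with eigenvalue $g_1$, and any difference of two basis vectors is an eigenvector with eigenvalue $\ev{N_k}$. A minimal pure-Python check (the values of $\ev{N_0}$, $\ev{N_k}$, $L$, $L_A$ below are arbitrary, chosen only for illustration):

```python
# Build G_ij = <N0>/L + (delta_ij - 1/L) <Nk>, truncate to L_A x L_A,
# and verify the eigenvalues quoted in the text by explicit
# matrix-vector products.
N0, Nk, L, L_A = 50.0, 0.7, 40, 12   # illustrative values only

G_A = [[N0 / L + ((1.0 if i == j else 0.0) - 1.0 / L) * Nk
        for j in range(L_A)] for i in range(L_A)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

# Uniform vector -> eigenvalue g_1 = L_A*N0/L + (L - L_A)*Nk/L
u = [1.0] * L_A
g1 = L_A * N0 / L + (L - L_A) * Nk / L
assert all(abs(x - g1) < 1e-9 for x in matvec(G_A, u))

# e_0 - e_1 -> eigenvalue <Nk> (multiplicity L_A - 1)
d = [1.0, -1.0] + [0.0] * (L_A - 2)
Gd = matvec(G_A, d)
assert all(abs(Gd[i] - Nk * d[i]) < 1e-9 for i in range(L_A))
```

The same two eigenvector families span the full $L_A$-dimensional space, which is why the entropy formula above needs only $g_1$ and the ($L_A-1$)-fold degenerate $\ev{N_k}$.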
\begin{figure} \includegraphics[width=8cm]{inf_hopping.eps} \caption{(Color online) Numerical calculation of the mutual information for the infinite-range hopping model for equally partitioned systems with average density $\ev{n} = 1$. The symbols are numerical data, while the dashed lines are obtained from our analytic results at the corresponding temperatures. We see exactly what our analytic results predict: above $T_C = \frac{1}{\ln 2}$ the mutual information saturates; below $T_C$, $E_M \simeq \frac{1}{2} \ln L_A$; at $T_C$, $E_M \simeq \frac{1}{4} \ln L_A$. Note that the analytic results (dashed lines) deviate from the numerical calculation because we only keep terms to subleading order; terms that go to zero [i.e., of order $\mathcal{O}(\frac{1}{L_A})$] in the thermodynamic limit are neglected.} \label{Fig:mi_inf} \end{figure} \section{Numerical Study of Mutual Information in One Dimension with Long-range Hopping} In this section, we present the results of a numerical study of the mutual information of free bosons on a one-dimensional lattice. We adopt the previous method, calculating the von Neumann entropy of the reduced density matrix from the eigenvalues of the truncated two-point correlation function matrix. Throughout this calculation, we hold the average density fixed at $\ev{n} = 1$ (which means we keep adjusting the chemical potential as the temperature varies) and consider equal partitions only. It is well known that for nearest-neighbor (NN) (or other short-range) hopping models, whose long-wavelength dispersion relation takes the form $\epsilon (k) \sim k^2$, a finite-temperature BEC can only exist in three dimensions (3D). However, 3D is in general very challenging for a numerical study that requires large system sizes. Moreover, in 3D the mutual information is dominated by the area law \cite{area-law}, which renders the logarithmic divergence suggested by our study in Sec. 
III subleading and thus difficult to isolate. For both of these reasons, it is desirable to study a model in 1D with BEC at finite $T$. In 1D, the short-range hopping model does {\em not} support BEC at finite temperature. To stabilize a condensate in 1D, we introduce power-law long-range hopping in our free boson model to modify its long-wavelength dispersion. This is similar to what was done in Ref. [\onlinecite{yusuf}], in which the authors introduced long-range interactions between spins to stabilize magnetic order in 1D. The Hamiltonian with long-range hopping is obtained by setting $t_{ij}$ in Eq. (\ref{Eq:GeneralH}) to the following form, tuned by a parameter $\gamma$: \begin{equation} H = -\sum_{ij} \frac{t}{\abs{i-j}^\gamma} a^\dagger_i a_j = -2t \sum_k (\sum_{n=1}^{L-1} \frac{\cos (nk)}{n^\gamma}) b^\dagger_k b_k = \sum_k \varepsilon_\gamma(k) b^\dagger_k b_k. \end{equation} We will show that the long-wavelength dispersion is modified to $\varepsilon_\gamma(k) \sim k^{\gamma - 1}$ for $\gamma < 3$, as a result of which a finite-temperature BEC exists for $\gamma < 2$. Consider the eigenenergy function $\varepsilon_\gamma(k) = - 2 t \sum_{n=1}^{L-1} \frac{\cos (nk)}{n^\gamma}$ in the thermodynamic limit: \begin{equation} \begin{split} \varepsilon_\gamma(k) = -2t\ \sum_{n=1}^{\infty} \frac{\cos (nk)}{n^\gamma} = -2t\ \text{Re}\left[ \sum_n \frac{e^{i n k}}{n^\gamma} \right] = -2t\ \text{Re}\left[ F(\gamma, i k) \right], \end{split}\label{Eq:ek} \end{equation} where $F(\gamma, v)$ is the Bose-Einstein integral function \cite{PhysRev.83.678} defined as: \begin{equation} F(\gamma, v) = \frac{1}{\Gamma(\gamma)} \int dx \frac{x^{\gamma - 1}}{e^{x + v} - 1} = \sum_{n=1}^{\infty} \frac{e^{- n v}}{n^\gamma}. 
\end{equation} The analytic properties of $F(\gamma, v)$ near $v = 0$ are known \cite{PhysRev.83.678}: \begin{equation}\label{Eq:vare_expansion} F(\gamma, v) = \begin{cases}[1.5] \displaystyle{\Gamma(1 - \gamma) v^{\gamma - 1} + \sum_{n=0}^{\infty} \frac{\zeta(\gamma - n)}{n!} (-v)^n}, & \gamma \notin \mathbb{Z}, \\ \displaystyle{\frac{(-v)^{\gamma - 1}}{(\gamma - 1)!} \left[\sum_{r = 1}^{\gamma - 1} \frac{1}{r} - \ln(v) \right] + \sum_{n \neq \gamma - 1}^{\infty} \frac{\zeta(\gamma - n)}{n!} (-v)^n}, & \gamma \in \mathbb{Z}, \end{cases} \end{equation} where $\zeta(x)$ is the Riemann zeta function. Thus the $k$-dependent part of $\varepsilon_\gamma(k)$ scales as $k^{\gamma-1}$ at small $k$ when $1 < \gamma <3$. When $\gamma > 3$, the low-energy dispersion is dominated by the $k^2$ term. When $\gamma \le 1$, $\varepsilon_\gamma(k)$ is not well-defined in the thermodynamic limit; to have a well-defined thermodynamic limit, the hopping energy $t$ must be properly scaled with the system size in this case. Next, we consider the thermodynamics of this model for different $\gamma$ and demonstrate that for $\gamma < 2$ we indeed have a finite-temperature BEC. At low temperature, only the small-$k$ part of the spectrum is important. For $1 < \gamma < 3$ we consider free bosons with dispersion $\sigma k^{\gamma-1}$. Here $\sigma = -2 t \Gamma(1 - \gamma)$ is read off from Eq. (\ref{Eq:vare_expansion}). 
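The small-$k$ scaling $\varepsilon_\gamma(k) - \varepsilon_\gamma(0) \propto k^{\gamma-1}$ can also be checked directly from the lattice sum, without invoking the expansion (\ref{Eq:vare_expansion}). A rough numerical sketch (truncating the series at a large but finite number of terms is an assumption of this check; the tail it drops is $\mathcal{O}(n_{max}^{1-\gamma})$):

```python
import math

def eps_minus_eps0(k, gamma, t=1.0, nmax=400_000):
    """epsilon_gamma(k) - epsilon_gamma(0) = -2t * sum_n (cos(nk) - 1)/n^gamma,
    truncated at nmax terms."""
    s = 0.0
    for n in range(1, nmax):
        s += (math.cos(n * k) - 1.0) / n ** gamma
    return -2.0 * t * s

gamma = 1.5
d1 = eps_minus_eps0(0.01, gamma)
d2 = eps_minus_eps0(0.04, gamma)
ratio = d2 / d1   # should approach (0.04/0.01)^(gamma-1) = 2 for gamma = 1.5
```

The ratio test isolates the exponent $\gamma - 1$ without needing the (cut-off-sensitive) prefactor.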
The average density of such a system in the thermodynamic limit is given by: \begin{equation} \frac{ \langle N \rangle}{L} = \frac{1}{2 \pi}\int dk \ev{N_k} = \frac{1}{2\pi} \frac{\sigma^{\frac{1}{1 - \gamma}} }{(\gamma - 1)} \int_0^\infty d\varepsilon \frac{\varepsilon^{\frac{1}{\gamma -1} -1}}{z^{-1}e^{\beta \varepsilon} - 1} = \frac{(\beta \sigma)^{\frac{1}{1-\gamma}}}{2 \pi (\gamma -1)} \Gamma(\frac{1}{\gamma -1}) g_{\frac{1}{\gamma - 1}}(z), \end{equation} where $z = e^{\beta \mu}$, $\beta = \frac{1}{T}$ is the inverse temperature, and $g_v(z) = \frac{1}{\Gamma(v)} \int_0^\infty dx \frac{x^{v-1}}{z^{-1} e^x - 1}$ is the Bose-Einstein integral function. To have a finite-temperature BEC, $\ev{N} / L = \ev{n}$ must remain finite as $z \rightarrow 1$, which requires $\frac{1}{\gamma - 1} > 1$, because $g_{v}(1)$ diverges for $v \le 1$. To gain a better understanding of the thermodynamics of this model, in Fig. \ref{Fig:Tc} we present a numerical calculation of $T_C$. The exact average density of the system is: \begin{equation} \frac{\langle N \rangle}{L} = \frac{1}{2\pi} \int_{-\pi}^{\pi} dk \frac{1}{e^{\beta(\varepsilon_\gamma(k) - \mu)} - 1}, \end{equation} where $\varepsilon_\gamma(k)$ is the eigenenergy function given in Eq. (\ref{Eq:ek}). $T_C$ is computed by setting $\mu = \varepsilon_\gamma(0)$ and then solving this equation numerically. According to Fig. \ref{Fig:Tc}, $T_C$ grows monotonically from $0$ to $\infty$ as $\gamma$ goes from $2$ to $1$. The divergent behavior of $T_C$ as $\gamma \rightarrow 1$ is a consequence of the divergent bandwidth in that limit. \begin{figure} \includegraphics[width=8cm]{Tc.eps} \caption{(Color online) Numerical calculation of $T_C$ (line + symbols) for the long-range hopping model in the thermodynamic limit. $T_C$ is measured in units of the hopping energy $t$, which is set to 1. As one can see, $T_C$ grows monotonically from $0$ to $\infty$ as $\gamma$ goes from $2$ to $1$. 
The divergent behavior of $T_C$ as $\gamma \rightarrow 1$ is a consequence of the divergent bandwidth.} \label{Fig:Tc} \end{figure} According to our study of the infinite-range hopping model, above $T_C$ the mutual information should saturate as the system size grows. Below $T_C$, the mutual information scales as $E_M \simeq \frac{1}{2} \ln L_A$, while at $T = T_C$, $E_M \simeq \frac{1}{4} \ln L_A$ for an equally partitioned system in that model. We expect the $\ln L_A$ scaling both below and at $T_C$ to persist in the long-range hopping model. As we shall see later, this is indeed the case. However, the details of the scaling behavior (i.e., the prefactor) can be different for different $\gamma$. To study this scaling behavior, we fix the temperature and examine the mutual information as a function of system size. This is desirable because, if our conjecture based on the study of the infinite-range hopping model is correct, the mutual information will be proportional to $\ln L_A$ when $T \le T_C$. Before presenting our results for the long-range hopping model, let us first verify our analysis in the NN hopping model, in which no BEC occurs. This case corresponds to $\gamma \rightarrow \infty$. The Hamiltonian is given by restricting the hopping in Eq. (\ref{Eq:GeneralH}) to nearest neighbors only: \begin{equation} H_{n.n} = - t \sum_{\langle ij \rangle} \hat{a}_i^\dagger \hat{a}_j. \end{equation} Figure \ref{Fig:fixT_k2} is a linear-log plot of the mutual information against subsystem size at different temperatures. Throughout our study we consider equal partitions only, i.e., $L_A = L / 2$. The average density is also set to $\ev{n} = 1$ here, as in the other results shown later. Clearly, at a fixed temperature, the mutual information saturates as the system size grows. At low temperatures, small systems can be considered to be in the zero-temperature limit. 
This leads to the mutual information growing as $\sim \frac{1}{2}\ln L_A$ until saturation sets in. \begin{figure} \includegraphics[width=8cm]{mi_n1_k2.eps} \caption{(Color online) Mutual information of the nearest-neighbor hopping model plotted against subsystem size on a logarithmic scale. The average density is set to $\ev{n} = 1$, and the system is equally partitioned, $L = 2 L_A$. The black dashed line is $E_M \sim \frac{1}{2}\ln L_A$; it appears in other figures for comparison as well. Clearly, the mutual information saturates once the system size grows large enough.} \label{Fig:fixT_k2} \end{figure} Next we consider $ 1 < \gamma < 2$. Now the finite-temperature transition emerges, and the signature of the transition in the mutual information, the logarithmic scaling with (sub)system size, also emerges. In Fig. \ref{Fig:fixT_g1.7}, we plot the mutual information (symbols) for $\gamma = 1.7$ at different temperatures. The best fit (cyan dashed line corresponding to the diamond data points) for $\beta = 0.5 > \beta_C$ gives $E_M = 0.2405 \ln L_A + 0.214$. At $\beta_C = 0.297$, the scaling behavior is fit (magenta dashed line) as $E_M = 0.1226 \ln L_A + 0.1688$. Both behaviors agree qualitatively with what is suggested by our analytic study of the infinite-range hopping model. When the temperature is well below $T_C$, we find the logarithmic scaling behavior: a set of parallel straight lines on this logarithmic-scale plot for different temperatures. However, the prefactor is significantly different from that of the infinite-range hopping model, which is $\frac{1}{2}$. In fact, by calculating the mutual information for different $\gamma$'s, we find that the prefactor varies as $\gamma$ changes. At very low temperature, small systems again effectively fall into the zero-temperature region, and the mutual information returns to the zero-temperature $\sim \frac{1}{2} \ln L_A$ behavior. 
But when the system size becomes large, it crosses over to the finite-temperature scaling behavior again. This is evident for $\beta > 1$ in the figure. For temperatures close to but still below $T_C$, small systems behave differently: the mutual information scales more like the line for $\beta = \beta_C$, then bends up as the system size increases and finally crosses over to its genuine below-$T_C$ behavior. For temperatures close to $T_C$ but above it, small systems behave the other way: the mutual information bends downwards and saturates at large system size. $\beta_C$ serves as a sharp boundary between these two bending behaviors. The latter two bending features are also present in Fig. \ref{Fig:mi_inf}, our numerical verification of the infinite-range hopping model. But the first feature at very low temperature is missing in Fig. \ref{Fig:mi_inf}, since in that case the mutual information scales in the same way as the entanglement entropy. \begin{figure} \includegraphics[width=8cm]{gamma_1.7.eps} \caption{(Color online) Mutual information of the long-range hopping model with parameter $\gamma = 1.7$ as a function of subsystem size on a logarithmic scale, at various (inverse) temperatures. The average boson density $\ev{n}$ is set to 1, and the system is equally partitioned, $L_A = L /2$. The scaling behavior for inverse temperature $\beta = 0.5$ goes as $E_M = 0.2405 \ln L_A + 0.214$ (cyan dashed line corresponding to the diamond data points). At the transition point, $\beta=\beta_C = 0.297$, the scaling law is fit to be $E_M = 0.1226 \ln L_A + 0.1688$ (red dashed line corresponding to the right-triangle data points).} \label{Fig:fixT_g1.7} \end{figure} Very similar behaviors are observed for the entire range $1 < \gamma < 2$. Representative results are presented in Figs. \ref{Fig:fixT_g1.5} and \ref{Fig:fixT_g1.3} for $\gamma=1.5$ and $1.3$, respectively. 
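The $T_C$ calculation used above (fixing $\mu = \varepsilon_\gamma(0)$ and solving the density equation for $\beta_C$) can be sketched in a few lines. This is a deliberately coarse pure-Python version (truncated cosine series, midpoint integration, illustrative grid sizes of our own choosing), so we only check qualitative features, such as the growth of $T_C$ as $\gamma \rightarrow 1$, rather than the precise fitted values of $\beta_C$ quoted above:

```python
import math

def spectrum(gamma, t=1.0, nk=300, nterms=4000):
    """eps_gamma(k) - eps_gamma(0) on a midpoint k-grid over (0, pi).
    eps(0) = -2t*zeta(gamma); zeta gets an integral tail correction so that
    truncating the cosine series does not bias the band bottom."""
    w = [0.0] + [n ** -gamma for n in range(1, nterms)]
    zeta = sum(w) + nterms ** (1.0 - gamma) / (gamma - 1.0)
    eps = []
    for i in range(nk):
        k = math.pi * (i + 0.5) / nk
        s = sum(w[n] * math.cos(n * k) for n in range(1, nterms))
        eps.append(2.0 * t * (zeta - s))   # >= 0 by construction
    return eps

def density(beta, eps):
    # <n> = (1/pi) Int_0^pi dk / (e^{beta*(eps(k)-eps(0))} - 1), midpoint rule
    tot = 0.0
    for e in eps:
        x = beta * e
        if x < 700:                    # guard against exp overflow
            tot += 1.0 / (math.exp(x) - 1.0)
    return tot / len(eps)

def tc(gamma, n_target=1.0):
    eps = spectrum(gamma)
    lo, hi = 1e-3, 50.0                # bracket for beta_C
    for _ in range(60):                # density decreases with beta
        mid = 0.5 * (lo + hi)
        if density(mid, eps) > n_target:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi))
```

Even at this resolution, the sketch reproduces the monotone increase of $T_C$ toward small $\gamma$ seen in Fig. \ref{Fig:Tc}.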
We thus conclude that for the entire range $1 < \gamma < 2$, the mutual information saturates for $T > T_C$, while it diverges logarithmically with increasing subsystem size for both $T < T_C$ and $T=T_C$. The coefficients in front of the logarithms are $\gamma$-dependent. \begin{figure} \includegraphics[width=8cm]{mi_gamma1.5_n1.eps} \caption{(Color online) Mutual information of the long-range hopping model with parameter $\gamma = 1.5$ as a function of subsystem size on a logarithmic scale, at various (inverse) temperatures. The average boson density $\ev{n}$ is set to 1, and the system is equally partitioned, $L_A = L /2$. The mutual information for $\beta = 1$ is fit to scale as $E_M \simeq 0.324 \ln L + 0.445$ (orange dashed line). At $\beta_C = 0.16843$, we observe a weaker scaling behavior, which is fit to be $E_M = 0.064 \ln L_A + 0.084$ (red dashed line).} \label{Fig:fixT_g1.5} \end{figure} \begin{figure} \includegraphics[width=8cm]{gamma_1.3.eps} \caption{(Color online) Mutual information of the long-range hopping model with parameter $\gamma = 1.3$ as a function of subsystem size on a logarithmic scale, at various (inverse) temperatures. The average boson density $\ev{n}$ is set to 1, and the system is equally partitioned, $L_A = L /2$. The best fit for $\beta = 0.5 > \beta_C$ (cyan dashed line) gives $E_M = 0.378 \ln L_A + 0.2$. At $\beta_C = 0.0954$, the scaling behavior is fit (red dashed line) as $E_M = 0.023 \ln L_A + 0.03535$.} \label{Fig:fixT_g1.3} \end{figure} \section{Summary and Concluding Remarks} In this paper we have studied entanglement properties of free {\it non-relativistic} Bose gases. At zero temperature, all particles fall into the ground state, and we find that the entanglement entropy diverges as the logarithm of the particle number in the subsystem. At finite temperatures, we studied the natural generalization of the entanglement entropy, the mutual information. 
We find that the mutual information exhibits a similar divergence in the presence of a Bose-Einstein condensate. When the system is above $T_C$ or does not have a condensate, the mutual information saturates for large subsystem size. It should be noted that for the special models we studied in this paper there is no area-law contribution to the mutual information; thus the contribution from the condensate, when present, dominates the mutual information. In more generic models in two or three dimensions, where an area-law contribution is present, we expect such a logarithmically divergent contribution from the condensate to appear as a subleading term in the subsystem-size dependence of the entanglement entropy and the mutual information. Physically, it is easy to understand why the condensate makes such an important contribution to entanglement. First of all, BEC is an intrinsically quantum phenomenon, just as entanglement reflects the intrinsically quantum nature of the system. More specifically, when a (macroscopically) large number of particles occupy the same state (at $k=0$), they are necessarily delocalized throughout the sample, giving rise to entanglement between blocks. Just as in our previous work on a very different system \cite{ding:052109}, our results here suggest that {\em conventional} ordering, like BEC, makes a logarithmic contribution to entanglement. One thus needs to exercise caution when using entanglement as a diagnostic for exotic phases (such as topological phases) or quantum criticality. \section*{ACKNOWLEDGMENTS} We thank Dr. Libby Heaney for a useful correspondence. This work was supported by NSF Grant No. DMR-0704133. The authors thank the Kavli Institute for Theoretical Physics (KITP) for the warm hospitality during the completion of this work. The work at KITP was supported in part by National Science Foundation Grant No. PHY-0551164.
\section{Introduction} Wave turbulence is observed in the interaction of nonlinear dispersive waves in many physical processes; see the review \cite{newell} and references therein. Zakharov and Filonenko \cite{Zakh} proposed a theory of weak turbulence of capillary waves on the surface of a liquid. According to this theory, a stationary regime of wave turbulence with an energy spectrum, now called the Zakharov-Filonenko spectrum, is formed at the boundary of a liquid. To date, the theory of weak turbulence has been very well confirmed both experimentally \cite{mezhov1, mezhov2,falcon2007, falc_exp} and numerically \cite{Push, korot,pan14,falcon14}. Physical experiments \cite{falcon2, falcon3} carried out for magnetic fluids in a magnetic field showed that the external field can modify the turbulent spectrum of capillary waves. So far, there has been no theoretical explanation for this fact. In this paper, we consider the nonlinear dynamics of the free surface of a magnetic fluid in a horizontal magnetic field. Melcher \cite{melcher1961} has shown that the problem under study is mathematically completely equivalent to the problem of the dynamics of the free surface of a dielectric fluid in a horizontal electric field. For this reason, in this work we will use previously obtained results for non-conducting liquids in an electric field. The dynamics of coherent structures (solitons or collapses) on the surface of liquids in a magnetic (electric) field has been studied very well (see, for example, \cite{koulova18,ferro10, tao18, zu2002, gao19}). At the same time, the turbulence of surface waves in an external electromagnetic field has not been investigated theoretically (except for our recent work \cite{ko_19_jetpl}). In this paper, we will show that at the free surface of a ferrofluid in a magnetic field, new wave turbulence spectra differing from the classical spectra for capillary and gravity waves can be realized. 
\section{Linear analysis} We consider a potential flow of an ideal incompressible ferrofluid with infinite depth and a free surface in a uniform horizontal external magnetic field. The fluid is dielectric, i.e., there are no free electrical currents in the liquid. Since the problem under consideration is anisotropic owing to the distinguished direction of the magnetic field, we consider only plane symmetric waves propagating in the direction parallel to the external field. Let the magnetic field induction vector be directed along the $x$ axis (correspondingly, the $y$ axis of the Cartesian coordinate system is perpendicular to it) and have absolute value $B$. The shape of the boundary is described by the function $y=\eta(x,t)$; the unperturbed state is $y=0$. The dispersion relation for linear waves at the boundary of the liquid has the form \cite{melcher1961} \begin{equation}\label{disp}\omega^2=gk+\frac{\gamma(\mu)}{\rho}B^2k^2+\frac{\sigma}{\rho}k^3,\end{equation} where $\omega$ is the frequency, $g$ is the gravitational acceleration, $k$ is the wavenumber, $\gamma(\mu) =(\mu-1)^2 (\mu_0(\mu+1))^{-1}$ is an auxiliary coefficient, $\mu_0$ is the magnetic permeability of vacuum, $\mu$ and $\rho$ are the magnetic permeability and mass density of the liquid, respectively, and $\sigma$ is the surface tension coefficient. Let us estimate the characteristic physical scales in the problem under study. In the absence of an external field, the dispersion relation (\ref{disp}) describes the propagation of surface gravity-capillary waves. Their minimum phase speed is determined by the formula $v_{min}=(4 \sigma g / \rho)^{1/4}$. To obtain the characteristic magnetic field, we equate $v_{min}^2$ to the coefficient of $k^2$ (which has the dimension of velocity squared) on the right-hand side of (\ref{disp}). 
Thus, the critical value of the magnetic field induction has the form \begin{equation}\label{field} B_c^2=\frac{2(\rho g\sigma)^{1/2}}{\gamma(\mu)}. \end{equation} The characteristic scales of length and time are \begin{equation}\label{scale} \lambda_0=2 \pi \left(\frac{\sigma}{g\rho}\right)^{1/2},\quad t_0=2 \pi \left(\frac{\sigma}{g^{3}\rho} \right)^{1/4}. \end{equation} Let us calculate the specific values of the introduced quantities for the liquid used in the experiments \cite{falcon2, falcon3}. We take the fluid parameters as follows: $$\rho=1324\, \mbox{kg/m}^3,\quad \sigma=0.059\, \mbox{N/m}, \quad \mu=1.69.$$ Substituting these parameters into the above formulas, we obtain the estimates for the characteristic quantities in the problem under study: $\lambda_0\approx 1.3$ cm, $t_0\approx0.1$ s, and $B_c\approx 196$~G. It should be noted that the critical value of the magnetic field decreases with increasing magnetic permeability of the liquid. For a liquid with $\mu=10$, it is a relatively small quantity, $B_c\approx 30$ G. It will be demonstrated below that magnetic wave turbulence can develop at the boundary of a ferrofluid with high magnetic permeability in the following field range: $2\leq B/B_c \leq 6$, i.e., the maximum value $B_{max}$ used in this work is near 200 G. Note that in the case of a strong magnetic field, the dispersion relation (\ref{disp}) must be modified taking into account the magnetization curve, as was done in \cite{zel69}. Let us write the expression for the magnetization $M(H)$ of a colloidal ferrofluid composed of particles of a single size \cite{rosen87}: $$M(H)/M_{st}=\left(\coth \theta-1/\theta\right)\equiv L(\theta),\,\,\,\theta=\frac{\pi \mu_0M_d D^3H}{6 k_B T},$$ where $L(\theta)$ is the Langevin function, $D$ is the particle diameter, $M_{st}$ is the magnetic saturation, $M_d$ is the domain magnetization of the particles, $k_B$ is Boltzmann's constant, and $T$ is the absolute temperature. 
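The characteristic scales quoted above follow directly from Eqs. (\ref{field}) and (\ref{scale}). A quick numerical check in SI units (the value $g = 9.8$ m/s$^2$ is our assumption; the function names are our own):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, SI
G_ACC = 9.8                # gravitational acceleration, m/s^2 (assumed value)

def gamma_mu(mu):
    # auxiliary coefficient gamma(mu) = (mu - 1)^2 / (mu0 * (mu + 1))
    return (mu - 1) ** 2 / (MU0 * (mu + 1))

def critical_field_gauss(rho, sigma, mu):
    # Eq. (field): B_c^2 = 2 sqrt(rho*g*sigma) / gamma(mu); 1 T = 10^4 G
    return math.sqrt(2 * math.sqrt(rho * G_ACC * sigma) / gamma_mu(mu)) * 1e4

rho, sigma = 1324.0, 0.059          # fluid of the cited experiments
lam0 = 2 * math.pi * math.sqrt(sigma / (G_ACC * rho))      # ~1.3 cm
t0 = 2 * math.pi * (sigma / (G_ACC ** 3 * rho)) ** 0.25    # ~0.1 s
Bc = critical_field_gauss(rho, sigma, mu=1.69)             # ~196 G
Bc10 = critical_field_gauss(rho, sigma, mu=10.0)           # ~30 G
```

The computed values reproduce the estimates in the text to within rounding.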
For small values of the external field, the Langevin function can be approximated by the linear dependence $L(\theta)\approx \theta/3$. Hence, in such a situation, the fluid magnetization is linearly related to the magnetic field strength, $M=\chi_i H$, where $\chi_i$ is the initial magnetic susceptibility defined as \begin{equation}\label{chi}\chi_i=\frac{\pi \mu_0 \phi M_d^2 D^3}{18 k_B T}. \end{equation} Here we took into account that $M_{st}=\phi M_d$, where $\phi$ is the volume fraction of the ferromagnetic particles in the liquid. The formula (\ref{chi}) gives a relatively large value, $\chi_i\approx 9.5$, for a fluid consisting of magnetite particles ($M_d\approx 4.46 \cdot 10^5$ A/m) with the characteristic size $D=22$ nm at $T=300$~K. We now estimate the characteristic field strength $H_0$ at which the magnetization curve begins to deviate from the linear law $\theta/3$. For the value $\theta(H_0)=1$, the relative deviation between $L(\theta(H_0))$ and $\theta(H_0)/3$ is near 6$\%$, so this equality can be used as a criterion for the characteristic field. From the definition of $\theta$, we obtain the expression for the characteristic magnetic field strength: \begin{equation}\label{field2}H_0=\frac{6k_BT}{\pi \mu_0 M_d D^3}.\end{equation} For a fluid with magnetic permeability $\mu\approx 10$, the field strength is estimated as $H_0\approx 1.3\cdot 10^3$ A/m (we take the fluid parameters as follows: $D=22$ nm, $M_d=4.46\cdot10^5$~A/m, $T=300$ K). At the same time, the critical magnetic field defined from (\ref{field}) should be near $H_c=B_c/\mu\mu_0\approx 0.2\cdot 10^3$~A/m, which is much less than the characteristic field (\ref{field2}). The maximum value of the magnetic field used in this work can be estimated as $H_{max}=B_{max}/\mu\mu_0\approx 1.5 \cdot 10^3$~A/m; this value is close to $H_0$. Thus, for the maximum magnetic field, the magnetization curve will differ from the linear dependence. 
Quantitatively, this difference is estimated at around 10$\%$, which is a relatively small value. For this reason, further in the work, we will assume that the magnetization of the ferrofluid depends linearly on the magnetic field strength. \section{Turbulence spectra for dispersionless waves} The dispersion law (\ref{disp}) describes three types of surface waves: gravity, capillary, and magnetic ones. The magnetic surface waves are of most interest for us in this work. In contrast to the gravity and capillary waves, such waves propagate without dispersion. Indeed, in the range of wavenumbers $k_{gm}\ll k\ll k_{mc}$, where $k_{gm}=g \rho /\gamma B^2$ and $k_{mc}=\gamma B^2/\sigma$, the dispersion law (\ref{disp}) takes the simple form \begin{equation}\label{lindisp} \omega^2=v_A^2 k^2,\qquad v_A^2=\frac{\gamma(\mu)B^2}{\rho}. \end{equation} The wavenumber $k_{gm}$ is transitional between the gravity and the magnetic waves, and $k_{mc}$ separates the magnetic waves from the capillary ones. In the limit of a strong field, $B\gg B_c$, and high magnetic permeability, $\mu\gg 1$, Zubarev \cite{zu2004, zuzu2006, zu2009} found exact particular solutions of the full equations of magnetic hydrodynamics in the form of nonlinear surface waves propagating without distortion along or against the direction of the external horizontal magnetic field. In fact, the solutions obtained are a complete analog of Alfv\'en waves in a perfectly conducting fluid, which can propagate without distortion along the direction of the external magnetic field. The interaction is possible only between oppositely propagating waves, and it is elastic \cite{mhd0}. Surface waves in the high magnetic field regime studied in this work have the same properties \cite{zubkoch14}. A classical result in the study of wave magnetohydrodynamic (MHD) turbulence is the Iroshnikov-Kraichnan spectrum \cite{irosh,kraich}.
According to the phenomenological theory of Iroshnikov and Kraichnan, the turbulent spectrum for fluctuations of the local magnetic field $\delta B_k$ and the fluid velocity $\delta V_k$ has the form: \begin{equation}\label{IK1} |\delta B_k|^2\sim|\delta V_k|^2\sim (SV_A)^{1/2}k^{-1/2}, \end{equation} where $V_A=(\mu_0\rho)^{-1/2}B$ is the Alfv\'en speed, $B$ is the magnetic field induction inside the fluid, and $S$ is the rate of energy dissipation per unit mass. Note that in such a model of turbulence, the fluctuations of the velocity and the magnetic field should be small: $\delta V_k\ll V_A$, $\delta B_k\ll B$. According to (\ref{IK1}), the turbulence spectrum for the spectral density of the system energy has the form: \begin{equation}\label{enIK} \varepsilon_k \sim (S V_A)^{1/2} k^{-3/2}. \end{equation} The spectrum (\ref{IK1}) is written in terms of fluctuations of the velocity and the magnetic field in a liquid in 3D geometry; formally, we can obtain its analogue for the quantities $\eta$ and $\psi$ (the value of the velocity potential at the boundary of the liquid) used in the work. To do this, let us introduce the perturbations of the velocity $ \delta v_k $ and the magnetic field induction $\delta b_k$ at the fluid boundary $y=\eta$. From the dimensional analysis ($\delta v_k \sim k\psi_k $, $\delta b_k \sim B k \eta_k $) and the dispersion relation $\omega_k \sim k$, in the strong field limit, one can obtain the spectra: \begin{equation}\label{IK2} |\eta_k|^2\sim|\psi_k|^2\sim k^{-5/2},\quad |\eta_\omega|^2\sim|\psi_\omega|^2\sim \omega^{-5/2}. \end{equation} In our recent work \cite{ko_19_jetpl}, it was observed that the slope of the spectrum for the surface elevation in $k$-space is close to $-2.5$, but the analysis of the spectrum for the quantity $\psi(k,\omega)$ was not carried out. In the present work, we will examine the realizability of the spectrum (\ref{IK2}) in detail.
The Iroshnikov-Kraichnan energy spectrum (\ref{enIK}) can be obtained with the help of a dimensional analysis of the weak turbulence spectra \cite{naz2003}. The weak turbulence spectrum for the energy density of a wave system with a linear dispersion law ($\omega_k\sim k$) like (\ref{lindisp}) and quadratic nonlinearity (three-wave interactions) can be written as follows (for more details see Nazarenko's book \cite{naz2011}): \begin{equation}\label{energy} \varepsilon_k\sim k^{\frac{1}{2}(d-6)},\qquad k=|\textbf{k}|, \end{equation} where $d$ is the dimension of space. It can be seen that the spectrum (\ref{enIK}) is a particular case of (\ref{energy}) for $d=3$. The spectrum (\ref{energy}) also describes the energy distribution of the acoustic wave turbulence \cite{zakh70,efimov18}. The spectral density of the system energy for our problem is related to the surface elevation spectrum as follows: $\varepsilon_k\sim \omega_k |\eta_k|^2$. From this expression and the energy spectrum (\ref{energy}), we can obtain the dimensional estimates for the turbulence spectra in terms of $\eta (k,\omega)$ and $\psi (k,\omega)$: \begin{equation}\label{sp2} |\eta_k|^2\sim |\psi_k|^2\sim k^{-3}, \,\, |\eta_\omega|^2\sim|\psi_\omega|^2\sim \omega^{-3},\, d=2, \end{equation} \begin{equation}\label{sp3} |\eta_k|^2\sim |\psi_k|^2\sim k^{-7/2}, \,\, |\eta_\omega|^2\sim|\psi_\omega|^2\sim \omega^{-7/2},\, d=1. \end{equation} The spectrum (\ref{sp2}) was first obtained by Falcon in \cite{falcon3} for a normal magnetic field; in \cite{falcon2}, it was shown that in a tangential field, the spectrum index shifts to the region of higher values. Note that dimensional estimates for the capillary turbulence spectrum can also be obtained in one-dimensional geometry.
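The dimensional bookkeeping leading from (\ref{energy}) to (\ref{sp2}) and (\ref{sp3}) — $\varepsilon_k\sim\omega_k|\eta_k|^2$ with $\omega_k\sim k$, so the exponent of $|\eta_k|^2$ is $(d-6)/2-1=(d-8)/2$ — can be sketched with exact rational arithmetic (helper names are illustrative):

```python
from fractions import Fraction

def energy_exponent(d):
    """Weak-turbulence energy spectrum exponent: eps_k ~ k^((d-6)/2)."""
    return Fraction(d - 6, 2)

def elevation_exponent(d):
    """Exponent of |eta_k|^2, from eps_k ~ omega_k |eta_k|^2 with omega_k ~ k."""
    return energy_exponent(d) - 1

for d in (3, 2, 1):
    print(d, energy_exponent(d), elevation_exponent(d))
# d=3 reproduces the Iroshnikov-Kraichnan slope -3/2 of the energy spectrum,
# d=2 gives |eta_k|^2 ~ k^-3, and d=1 gives |eta_k|^2 ~ k^-7/2.
```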
Although this derivation is formal, since the capillary waves do not satisfy the conditions of three-wave resonances in 1D geometry, it will be useful to have these estimates for comparison with the results of our numerical simulation. Skipping the details, we write out the surface spectrum for the one-dimensional capillary wave turbulence \cite{naz2011}: \begin{equation}\label{ZF} |\eta_k|^2\sim k^{-17/4},\qquad |\eta_{\omega}|^2\sim\omega^{-19/6}. \end{equation} It can be seen that these relations differ significantly from those obtained for pure magnetic surface waves (\ref{IK2}), (\ref{sp2}), and (\ref{sp3}). The main purpose of this work is to find out which of the spectra will be the closest to that observed in a direct numerical simulation. \section{Results of numerical simulation} Our numerical model is based on the weakly nonlinear approximation, when the angles of boundary inclination are small, $ \alpha = | \eta_x | \ll 1 $. We consider a liquid with high magnetic permeability, $ \mu \gg 1 $, i.e., the surface waves have properties similar to those of Alfv\'en waves. For further analysis, it is convenient to introduce the dimensionless variables $$\eta\to \eta \cdot \lambda_0,\quad x\to x\cdot \lambda_0,\quad t\to t \cdot t_0,\quad \psi\to \psi \cdot \lambda_0^2/t_0,$$ where $\lambda_0$, $t_0$ are the characteristic values of length and time (\ref{scale}). We also introduce the dimensionless parameter $\beta$ defining the magnetic field induction as $\beta=\sqrt{2}B/B_c$, where $B_c$ is defined by (\ref{field}), i.e., if $B=B_c$ then $\beta^2=2$. Below, we consider the region of magnetocapillary waves, $\beta^2+k\gg 1/k$, i.e., the wavelengths for which the effect of the gravitational force can be neglected.
The dispersion relation (\ref{disp}) can be represented in the dimensionless form \begin{equation}\label{disp2}\omega^2=\beta^2k^2+k^3.\end{equation} According to (\ref{disp2}), the linear surface waves are divided into two types: low-frequency magnetic ($k\ll k_c$, $\omega\ll \omega_c$) and high-frequency capillary ($k\gg k_c$, $\omega\gg\omega_c$) waves, where $k_c$ and $\omega_c$ are the crossover wavenumber and frequency defined as $$k_c=\beta^2,\qquad \omega_c=\sqrt{2}\beta^3.$$ The equations of the boundary motion up to quadratically nonlinear terms were first obtained by Zubarev in \cite{zu2004}; they can be represented in the form $$\psi_t=\eta_{xx}+\frac{1}{2}\left[\beta^2[(\hat k \eta)^2-(\eta_x)^2] +(\hat k \psi)^2-(\psi_x)^2\right]$$ \begin{equation}\label{eq1}+\beta^2 \left[-\hat k \eta +\hat k(\eta\hat k \eta)+\partial_x(\eta \eta_x)\right]+\hat D_k \psi,\end{equation} \begin{equation}\label{eq2}\eta_t=\hat k \psi- \hat k(\eta \hat k \psi)-\partial_x(\eta \psi_x)+\hat D_k \eta,\end{equation} where $\hat k$ is the integral operator having the form $\hat k f_k=|k| f_k$ in the Fourier representation. The operator $\hat D_k$ describes viscosity and is defined in $k$-space as $$\hat D_k=-\nu (|k|-|k_d|)^2, \quad |k|\geq |k_d|;\quad \hat D_k=0,\quad |k|< |k_d|.$$ Here, $\nu$ is a constant, and $k_d$ is the wavenumber determining the spatial scale at which the energy dissipation occurs. Equations (\ref{eq1}) and (\ref{eq2}) are Hamiltonian and can be written in terms of variational derivatives: $$\frac{\partial \psi}{\partial t}=-\frac{\delta \mathcal{H}}{\delta \eta},\qquad \frac{\partial \eta}{\partial t}=\frac{\delta \mathcal{H}}{\delta \psi}.$$ Here, $$\mathcal{H}=\mathcal{H}_0+\mathcal{H}_1=\frac{1}{2}\int \left[\psi \hat k \psi+\beta^2 \eta \hat k \eta+(\eta_x)^2 \right]dx-$$ $$-\frac{1}{2}\int \eta\left[(\hat k \psi)^2-(\psi_x)^2+\beta^2[(\hat k \eta)^2-(\eta_x)^2]\right]dx$$ is the Hamiltonian of the system specifying the total energy.
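A quick numerical check of the dimensionless dispersion relation (\ref{disp2}): at the crossover $k_c=\beta^2$ the frequency equals $\omega_c=\sqrt{2}\beta^3$ (which also reproduces the derived rows of Table~1 via $B/B_c=\beta/\sqrt{2}$), and the capillary term makes $\omega(k)$ superlinear, so collinear three-wave resonances, exact for the purely magnetic branch $\omega=\beta k$, fail for the full dispersion law (a sketch; the sample values of $k_1$, $k_2$ are arbitrary):

```python
import math

def omega(k, beta):
    """Dimensionless dispersion relation (disp2): omega^2 = beta^2 k^2 + k^3."""
    return math.sqrt(beta**2 * k**2 + k**3)

rows = []
for beta2 in (10, 30, 50, 70):
    beta = math.sqrt(beta2)
    kc, wc = beta**2, math.sqrt(2.0) * beta**3
    assert abs(omega(kc, beta) - wc) < 1e-9   # branches meet at the crossover
    rows.append((round(math.sqrt(beta2 / 2.0), 2), round(wc, 2)))
print(rows)  # (B/B_c, omega_c) pairs of Table 1

# collinear three-wave resonance: exact for the linear branch omega = beta*k,
# but the capillary k^3 term makes omega(k) superlinear, so in 1D the full
# dispersion law admits only quasi-resonances
beta, k1, k2 = math.sqrt(10.0), 3.0, 5.0
mismatch = omega(k1 + k2, beta) - (omega(k1, beta) + omega(k2, beta))
print(mismatch > 0)  # True
```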
The terms $\mathcal{H}_0$ and $\mathcal{H}_1$ correspond to the linear and quadratically nonlinear terms in (\ref{eq1}) and (\ref{eq2}), respectively. The spectra (\ref{IK2}), (\ref{sp2}), and (\ref{sp3}) are obtained in the limit of an infinitely strong magnetic field, $\beta\gg1 $. It should be noted, however, that this regime is not realized if the capillary pressure is formally neglected in the equations (\ref{eq1})-(\ref{eq2}). In the absence of surface tension, the interaction of counter-propagating waves results in the appearance of singular points at the boundary, at which the curvature of the surface increases infinitely \cite{kochzub18,koch18,ko_19_pjtp}. For the realization of the regime of magnetic wave turbulence on the fluid surface, it is necessary to take into account the effect of capillary forces. It immediately follows from this fact that the weak turbulence spectra obtained in the formal limit of a strong field will be distorted in the region of dispersive capillary waves, $k\geq k_c$. In the current work, it will be shown that the turbulence spectrum of the surface waves is divided into two regions: a low-frequency one, in which the calculated spectrum is close to the 1D spectrum (\ref{sp3}), and a high-frequency one, in which the capillary forces deform the magnetic wave turbulence spectrum. Let us proceed to the description of the results of our numerical experiments.
To minimize the effect of coherent structures (collapses or solitons), the initial conditions for (\ref{eq1}) and (\ref{eq2}) are taken in the form of two counter-propagating interacting wavepackets: \begin{equation}\label{IC}\eta_1(x)=\sum \limits_{i=1}^{4}a_i\cos(k_i x),\quad\eta_2(x)=\sum \limits_{i=1}^{4}b_i\cos(p_i x), \end{equation} $$\eta(x,0)=\eta_1+\eta_2,\qquad \psi(x,0)=\beta(\hat H \eta_1-\hat H \eta_2),$$ where $a_i$, $b_i$ are the wave amplitudes (their values were chosen randomly), $k_i$, $p_i$ are the wavenumbers, and $\hat H$ is the Hilbert transform defined in $k$-space as $\hat H f_k=i \mbox{sign}(k) f_k$. The spatial derivatives and integral operators were calculated using pseudo-spectral methods with the total number of harmonics $N$, and the time integration was performed by the fourth-order explicit Runge-Kutta method with the step $dt$. The model did not involve mechanical pumping of energy into the system. Hence, the average steepness of the boundary $ \overline\alpha$ was determined only by the initial conditions (\ref{IC}). The calculations were performed with the parameters $N=1024$, $dt=5\cdot 10^{-5}$, $\nu=10$, $k_d=340$. To stabilize the numerical scheme, the amplitudes of higher harmonics with $k\geq412$ were set to zero at each step of the integration in time. In the current work, we present the results of four numerical experiments carried out with different values of $\beta$ and, hence, with different $B/B_c$; the parameters used are given in Table~1. From Table~1, we can see that as the field increased, the nonlinearity level (the averaged steepness) required for the realization of a direct energy cascade decreased. Apparently, this effect is related to satisfying the conditions of three-wave resonances: \begin{equation}\label{reson}\omega=\omega_1+\omega_2,\qquad k=k_1+k_2.\end{equation} The conditions (\ref{reson}) are satisfied for any waves described by the linear dispersion law (\ref{lindisp}).
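A minimal sketch of the initial conditions (\ref{IC}) with FFT-based implementations of the operators $\hat k$ and $\hat H$ (the domain length $2\pi$, the random amplitudes, and the seed are assumptions; the wavenumbers are those of Table~2):

```python
import numpy as np

N = 1024
x = 2 * np.pi * np.arange(N) / N      # periodic domain (length 2*pi assumed)
k = np.fft.fftfreq(N, d=1.0 / N)      # integer wavenumbers

def k_op(f):
    """Operator k-hat: f_k -> |k| f_k in Fourier space."""
    return np.real(np.fft.ifft(np.abs(k) * np.fft.fft(f)))

def hilbert(f):
    """Operator H-hat: f_k -> i*sign(k) f_k in Fourier space."""
    return np.real(np.fft.ifft(1j * np.sign(k) * np.fft.fft(f)))

rng = np.random.default_rng(0)        # seed is an arbitrary choice
beta = np.sqrt(10.0)
k_i, p_i = [3, 5, 7, 9], [2, 4, 6, 8]               # wavenumbers of Table 2
a, b = 0.01 * rng.random(4), 0.01 * rng.random(4)   # illustrative amplitudes

eta1 = sum(ai * np.cos(kk * x) for ai, kk in zip(a, k_i))
eta2 = sum(bi * np.cos(pp * x) for bi, pp in zip(b, p_i))
eta0 = eta1 + eta2                                  # eta(x, 0)
psi0 = beta * (hilbert(eta1) - hilbert(eta2))       # psi(x, 0)
```

With this sign convention, $\hat H$ maps $\cos(kx)$ to $-\sin(kx)$, so each packet is a superposition of waves traveling in one direction.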
For the waves from the high-frequency region, the capillary term in (\ref{disp2}) forbids the three-wave resonance in one-dimensional geometry. However, quasi-resonances are still possible, and the probability of their realization is higher for stronger magnetic fields. It should be noted that three-wave resonances can be achieved in 1D geometry in the absence of an external field for the gravity-capillary waves near the minimum of their phase speed \cite{nonlocal15}. The lower threshold of the parameter $\beta$ required for the turbulence development is determined by the criterion of applicability of the weakly nonlinear approximation ($\alpha\ll1$). The upper threshold of the fields is limited by the tendency to form strong discontinuities that can correspond to the appearance of vertical liquid jets \cite{kochzub18j} and the formation of a regime of strong turbulence generated by singularities \cite{kuz2004}. The range of variation of the amplitudes $a_i$, $b_i$ in the initial conditions (\ref{IC}) was chosen empirically to minimize the deviation of the model from the weakly nonlinear one. The wavenumbers $k_i$, $p_i$ were chosen in such a way that the energy exchange between nonlinear waves was the most intense. For all numerical experiments, the set of wavenumbers in (\ref{IC}) is the same and is presented in Table~2. \begin{table} \caption{The parameters of the four numerical experiments presented in the work: $\beta^2$ is the auxiliary dimensionless parameter, $B/B_c$ is the corresponding dimensionless magnetic field induction, $\overline\alpha$ is the averaged steepness of the surface, and $\omega_c$ is the crossover frequency.
} \begin{center} \begin{tabular}{ccccc} \hline $\beta^2$ & 10 & 30& 50 & 70 \\ \hline $B/B_c$ & 2.24 & 3.87& 5.00 & 5.92 \\ \hline $\overline\alpha$ & 0.15 & 0.12& 0.10 & 0.09\\ \hline $\omega_c$ & 44.72 & 232.38 & 500.00 & 828.25\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{The set of the wavenumbers used in the initial conditions (\ref{IC}): $i$ is the summation index, $k_i$ and $p_i$ are the wavenumbers of the wavepackets traveling to the left and to the right, respectively.} \begin{center} \begin{tabular}{ccccc} \hline i & 1& 2 & 3 & 4 \\ \hline $k_i$ & 3 & 5 & 7 &9\\ \hline $p_i$ & 2 & 4 & 6 &8\\ \hline \end{tabular} \end{center} \end{table} Figure 1 shows the calculated energy dissipation rate $s=|d\mathcal {H}/dt|$ as a function of time for different values of the magnetic field induction. It can be seen that the system under study reaches the regime of quasistationary energy dissipation on times of the order of $10^3 t_0$. In this regime, the probability density functions for the angles of boundary inclination become very close to Gaussian distributions (see Fig. 2). This behavior indicates the absence of strong space-time correlations and, consequently, the formation of a Kolmogorov-like spectrum of wave turbulence. \begin{figure} \center{\includegraphics[width=1\linewidth]{fig1.eps}} \caption{\small The energy dissipation rate versus time for different values of the magnetic field.} \label{fig:diss} \end{figure} \begin{figure} \center{\includegraphics[width=1\linewidth]{fig2.eps}} \caption{\small Probability density functions (\emph{p.d.f.}) for the angles of the boundary inclination for different values of $B/B_c$; black dotted lines correspond to Gaussian distributions.} \label{fig:pdf} \end{figure} Figure 3 shows the time-averaged spectra of the surface elevation $I_{\eta}(k)=\overline{|\eta_k|}^2$ and the velocity potential perturbations $I_{\psi}(k)=\overline{|\psi_k|}^2$ for $B/B_c=2.24$.
As can be seen, the inertial range of the wavenumbers is split into two regions. In the first region, at small $k$, the spectra are in relatively good agreement with the 1D spectrum (\ref{sp3}). In the second region, at high $k$, where capillarity dominates, the spectra exhibit a different power law: \begin{equation}\label{IK3}I_{\eta}(k)\sim k^{-5/2}, \quad I_{\psi}(k)\sim k^{-3/2}.\end{equation} It is interesting that the transition between the two spectra occurs at higher wavenumbers than $k_c$. Since the level of nonlinearity in this case is quite large, the observed effect can be associated with an increase in the magnetic field at steep surface inhomogeneities. Apparently, the local intensification of the magnetic field can lead to the shift of $k_c$. \begin{figure} \center{\includegraphics[width=1\linewidth]{fig3.eps}} \caption{\small Time-averaged spectra $I_{\eta}(k)$ (blue line) and $I_{\psi}(k)$ (green line) for $B/B_c=2.24$; the black dotted lines correspond to the 1D spectrum (\ref{sp3}), red dotted lines are the best power-law fit (\ref{IK3}) of the calculated spectra, and the vertical black dotted line shows the crossover wavenumber $k_c$.} \label{fig:spk} \end{figure} The spectrum (\ref{IK3}) for the surface elevation coincides with the Iroshnikov-Kraichnan one (\ref{IK2}), but the spectrum for the velocity potential does not. Thus, the spectrum observed at high $k$ is not the MHD turbulence spectrum (\ref{IK2}), as was suggested in our previous work \cite{ko_19_jetpl}. At the same time, the spectrum (\ref{IK3}) does not coincide with the Zakharov-Filonenko spectrum (\ref{ZF}) for pure capillary waves. Consider this spectrum in the frequency domain. We can empirically rewrite the spectra (\ref{IK3}) in terms of $\omega$ using the dispersion relation $ k\sim \omega^{2/3}$ for $k\gg k_c$: \begin{equation}\label{IK4}I_{\eta}(\omega)\sim \omega^{-5/3}, \quad I_{\psi}(\omega)\sim \omega^{-1}.
\end{equation} Figure 4 shows how the spectra $I_{\eta} (\omega)$ and $I_{\psi} (\omega) $ change as the field induction increases. From Fig.~4~(a), it can be seen that for a relatively weak magnetic field, the spectrum is mainly determined by the relation (\ref{IK4}). As the field increases, the region of magnetic turbulence expands, see Fig.~4~(b) and (c). For the maximum magnetic field, $B/B_c\approx 5.92$, the capillary waves shift to the region of viscous dissipation and make almost no contribution to the energy spectrum of turbulence, see Fig. 4 (d). The crossover frequencies are in good agreement with $\omega_c$, except in the first case, where the level of nonlinearity is too high. In general, the calculated spectrum of turbulence in the low-frequency region is in good agreement with the spectrum (\ref{sp3}) obtained from the dimensional analysis \cite{naz2003,naz2011}. \begin{figure} \center{\includegraphics[width=1\linewidth]{fig4.eps}} \caption{\small Time-averaged spectra $I_{\eta}(\omega)$ (blue lines) and $I_{\psi}(\omega)$ (green lines) for the different values of $B/B_c$: (a) 2.24, (b) 3.87, (c) 5.00, (d) 5.92. The black dotted lines correspond to the 1D spectrum (\ref{sp3}), red dotted lines are the best power-law fit (\ref{IK4}) of the calculated spectra, and the vertical black dotted lines show the crossover frequencies $\omega_c$.} \label{fig:spw} \end{figure} \section{Conclusion} Thus, in the present work, a numerical study of the wave turbulence of the surface of a magnetic fluid in a horizontal magnetic field has been carried out within the framework of a one-dimensional weakly nonlinear model that takes into account the effects of capillarity and viscosity. The results show that the spectrum of turbulence is divided into two regions: a low-frequency (\emph{i}) and a high-frequency (\emph{ii}) one. In the region (\emph{i}), the magnetic wave turbulence is realized.
The power-law spectrum of the surface elevation has the same exponent in the $k$ and $\omega$ domains, close to the value $-3.5$, which is in good agreement with the estimate (\ref{sp3}) obtained from the dimensional analysis of the weak turbulence spectra. In the high-frequency region (\emph{ii}), where the capillary forces dominate, the spatial spectrum of the surface waves is close to $k^{-5/2}$, which corresponds to $\omega^{-5/3}$ in terms of the frequency. This spectrum does not coincide with the spectrum (\ref{ZF}) for pure capillary waves. A possible explanation of this fact is that three-wave interactions for the capillary waves are forbidden in 1D geometry, and this power-law spectrum can be generated by coherent structures (like shock fronts) arising in the regime of a strong field \cite{kochzub18,koch18,ko_19_pjtp}. It is well known that collapses and turbulence can coexist in one-dimensional models of wave turbulence \cite{MMT15,MMT17}. In conclusion, we note that the results obtained in the work are in qualitative agreement with the experimental studies \cite{falcon2, falcon3}, in which it is shown that the external magnetic field can deform the Zakharov-Filonenko spectrum for capillary turbulence. The quantitative discrepancy may be due to the one-dimensional geometry and the relatively high magnetic field leading to the nonlinear dependence of the magnetization curve, which is not taken into account in the current work. \section*{Acknowledgments} I am deeply grateful to N.M. Zubarev, A.I. Dyachenko, and N.B. Volkov for stimulating discussions. This work is supported by Russian Science Foundation project No. 19-71-00003. \bibliographystyle{abbrv}
\section{Semiclassics} Here we include back-scattering processes in the derivation of Eq.~(16) in the main text for the local density of states at large frequency, $\omega\gg\omega_\star$; we find that it only modifies the expression~(17) for $\omega_\text{cr}$ (the factor $\sqrt{3}$ would be replaced with $\sqrt{2}$ if only forward-scattering processes were present). Namely, at $\omega\gg\omega_\star$, we use the analogy with the one-dimensional Schr\"odinger equation for a particle in a Gaussian disorder potential to evaluate the averaged local plasmon density of states that appears in Eq.~(12) in the main text. For this, we employ the Fokker-Planck method reviewed in \cite{Lifshitz1988}. We may express the Green function that solves Eq.~(10) in the main text, together with boundary conditions, as \begin{equation} G(x,x';\omega)=\frac {\pi } {vW} \left[\Psi_-(x)\Psi_+(x')\Theta(x'-x)+\Psi_-(x')\Psi_+(x)\Theta(x-x')\right]\,. \label{eq:GFclassical} \end{equation} Here $\Psi_+$ and $\Psi_-$ are two solutions of the differential equation \begin{equation} \label{eq:diff} \omega^2 \Psi_\pm(x)=-v^2\partial_x^2 \Psi_\pm(x)+V(x)\Psi_\pm(x) \end{equation} with boundary conditions $\partial_x \Psi_-(0)=0$ and $\Psi_-(0)=1$, and $\partial_x \Psi_+(d)=1$ and $\Psi_+(d)=0$, respectively, $W$ is a spatially-independent Wronskian, \begin{equation} W=\Psi_-(x)\partial_x\Psi_+(x)-\partial_x\Psi_-(x)\Psi_+(x)\,, \end{equation} and $\Theta(x)$ is the Heaviside function [with $\Theta(0)=1/2$]. Using the Wronskian and the boundary conditions, we find from Eq.~\eqref{eq:GFclassical} \begin{equation} \label{eq:G00} G(0,0;\omega)=\frac{\pi }v\frac{\Psi_+(0)}{\partial_x\Psi_+(0)}\,. \end{equation} We further define four functions $\rho_\pm$ and $\phi_\pm$ such that \begin{equation} \label{eq:parametrize} \Psi_\pm(x)=\rho_\pm(x)\sin\phi_\pm(x) \quad\text{and}\quad \partial_x\Psi_\pm(x)=(\omega/v) \rho_\pm(x)\cos\phi_\pm(x)\,. 
\end{equation} Equation \eqref{eq:diff} and its boundary conditions then read equivalently \begin{equation} \label{eq:neq-eq} \partial_x \rho_\pm=-\frac{V}{2v\omega}\sin2\phi_\pm \quad\mathrm{and}\quad \partial_x \phi_\pm=\frac\omega v-\frac{V}{v\omega}\sin^2\phi_\pm \end{equation} with boundary conditions $\rho_+(d)=v/\omega$ and $\phi_+(d)=0$, and $\rho_-(0)=1$ and $\phi_-(0)=\pi/2$, respectively. Inserting Eq.~\eqref{eq:parametrize} into Eq.~\eqref{eq:G00}, we relate the statistics of \begin{equation} \label{eq:G00-param} G(0,0;\omega)=\frac{\pi }\omega\tan\phi_+(0) \end{equation} with that of the solution $\phi_+$ of the relevant pair of Eqs.~\eqref{eq:neq-eq}. In general, $G(0,0;\omega)$ is complex. Its real part readily follows from Eq.~\eqref{eq:G00-param}. In order to derive its imaginary part, one should add a small imaginary part to the frequency (in the upper complex plane), $\omega\to \omega+i0^+$, which in turn adds a small imaginary part with the same sign to $\phi_+$. Using Eq.~(13) in the main text, we can then relate the statistical properties of the local density of states to those of $\phi_+(0)$, \begin{equation} \label{eq:res-rvdG3} \nu(x=0,\omega) =-\frac 2 v \sum_m\delta\left(\phi_+(0)-(m+\frac12)\pi\right)\,. \end{equation} To proceed further, we use the assumption of large frequency to treat the term $\propto V$ in Eq.~(\ref{eq:neq-eq}) as a perturbation, and decompose $\phi_+(x)=\omega (x-d)/v+\zeta(x)$, where $\zeta(x)$ is a slowly varying function on the scale of $v/\omega$.
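The phase equation for $\phi_+$ in Eq.~\eqref{eq:neq-eq} is an exact (Pr\"ufer-type) transformation of Eq.~\eqref{eq:diff}, which can be verified against the exactly solvable case of a constant potential (a sketch; the values of $\omega$, $d$, $V_0$, and the step count are arbitrary assumptions):

```python
import math

v, omega, d = 1.0, 12.0, 1.0   # arbitrary sample parameters
V0 = 5.0                       # constant test potential, V0 < omega^2

def dphi(phi):
    # phase equation: d(phi)/dx = omega/v - (V/(v*omega)) * sin(phi)^2
    return omega / v - (V0 / (v * omega)) * math.sin(phi) ** 2

# integrate backward from x = d (where phi_+(d) = 0) to x = 0 with RK4
n = 20000
h = -d / n
phi = 0.0
for _ in range(n):
    k1 = dphi(phi)
    k2 = dphi(phi + 0.5 * h * k1)
    k3 = dphi(phi + 0.5 * h * k2)
    k4 = dphi(phi + h * k3)
    phi += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# exact solution for constant V: Psi_+ = sin(q(x-d))/q, q^2 = (omega^2 - V0)/v^2,
# so tan(phi_+(0)) = (omega/v) Psi_+(0)/Psi_+'(0) = -(omega/(q v)) tan(q d)
q = math.sqrt(omega**2 - V0) / v
print(math.tan(phi), -(omega / (q * v)) * math.tan(q * d))  # should agree
```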
Furthermore, the disorder potential contains three relevant terms for the spatial variation of $\zeta(x)$, \begin{equation} \label{eq:semiclassical-disorder} V(x)=V_0(x)+V_1(x)e^{2i\omega (x-d)/v}+V^*_1(x)e^{-2i\omega(x-d)/v}\,, \end{equation} where the slowly varying (on the scale of $v/\omega$) components $V_0(x)$ and $V_1(x)$ describe forward- and back-scattering, and have Gaussian averages, \begin{equation} \label{eq:correl-reduced-2} \langle V_0(x)V_0(x')\rangle=\langle V_1(x)V^*_1(x')\rangle= 8 \pi^2R_\star\omega_\star^4\delta(x-x')\,. \end{equation} Equation~\eqref{eq:correl-reduced-2} reproduces the correlator for $V= (4\pi K\lambda v/\hbar a)\cos(2\bar\theta+\chi)$, in which the spatial variations of $\bar\theta$ are safely ignored at $\omega\gg\omega_\star$ (see discussion in main text) and the correlator for $\cos\chi$ is given by Eq.~(4) in the main text. Then retaining only the slowly varying components in Eq.~(\ref{eq:neq-eq}), we find \begin{equation} \label{eq:neq-eq3} \partial_x \zeta=-\frac 1{2v\omega}V_0+ \frac 1{4v\omega}\left[V_1^*e^{2i\zeta}+V_1e^{-2i\zeta}\right] \end{equation} with the boundary condition $\zeta(d)=0$. Equation~\eqref{eq:neq-eq3} is a Wiener process whose Fokker-Planck equation reads~\cite{Lifshitz1988} \begin{equation} \label{eq:const-diff} \frac{\partial P}{\partial x}={\cal D}\frac{\partial ^2 P}{\partial\zeta^2}\qquad\mathrm{with}\qquad {\cal D}=\frac {3\pi^2} 2 \frac{R_\star\omega_\star^4}{v^2\omega^2}=\frac{3\pi^2}{2\ell(\omega)}\,; \end{equation} its boundary condition is $P(x=d,\zeta)=\delta(\zeta)$. The corresponding solution of Eq.~(\ref{eq:const-diff}) is \begin{equation} \label{eq:gaussian-distrib} P(x=0,\zeta)=\frac1{2\sqrt{\pi {\cal D} d}}e^{-\zeta^2/(4{\cal D}d)}\,. \end{equation} The derivation of Eq.~(\ref{eq:gaussian-distrib}) required $\omega\gg\omega_\star$. Once this condition is satisfied, the obtained distribution function is valid at any ratio $d/\ell(\omega)$. 
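The solution \eqref{eq:gaussian-distrib} is a heat kernel in the variable $\zeta$ with effective time $d$: it is normalized to unity and has variance $2{\cal D}d$, as a quick quadrature check confirms (the values of ${\cal D}$ and $d$ below are illustrative):

```python
import numpy as np

D_coeff, d = 0.7, 1.3   # illustrative values of the diffusion constant and d
zeta = np.linspace(-40.0, 40.0, 200001)
dz = zeta[1] - zeta[0]
P = np.exp(-zeta**2 / (4 * D_coeff * d)) / (2 * np.sqrt(np.pi * D_coeff * d))

norm = P.sum() * dz             # normalization, should be 1
var = (zeta**2 * P).sum() * dz  # variance, should be 2*D*d
print(norm, var)
```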
[In particular, forward- and backward-scattering processes contribute additively (in a 2/3-1/3 ratio) to the ``diffusion constant'' $\cal D$.] This is quite remarkable, as the waves' localization length and mean free path are of the same order in a one-dimensional system. Averaging Eq.~\eqref{eq:res-rvdG3} over the distribution \eqref{eq:gaussian-distrib} then yields its disorder-averaged value at any $d$, see Eq.~(14) in the main text. Subsequently, one gets the disorder-averaged real part of the reflection amplitude through Eq.~(12) in the main text. Using Kramers-Kronig relations, we similarly get the disorder-averaged real part of the local Green function, \begin{equation} \label{eq:r-im} \frac{2\omega}{\pi^2 v}\text{Re}\langle G(0,0;\omega)\rangle = 2\nu_0 \sum_m\frac{ \sqrt{ 2}\Delta}{\pi \delta(\omega)} D\left(\frac{\omega-(m+1/2)\Delta}{\sqrt{2}\delta(\omega)}\right)\,, \end{equation} where $D(x)=e^{-x^2}\int_0^xdt e^{t^2}$ is the Dawson function, with asymptotes $D(x)\approx x$ at $|x|\ll 1$ and $D(x)\approx 1/(2x)$ at $|x|\gg 1$. In the frequency range $\omega_\star\ll\omega\ll \omega_\text{cr}$, the Poisson summation formula applied to Eq.~\eqref{eq:r-im} yields \begin{equation} \label{eq:r-im-2} \frac{2\omega}{\pi^2 v} \text{Re}\langle G(0,0;\omega)\rangle = 4\nu_0\sin\left({2\pi \omega}/{\Delta}\right)e^{-2(\pi \omega_{\rm cr}/{\omega})^2}\,. \end{equation} Equations \eqref{eq:r-im} and \eqref{eq:r-im-2} determine the disorder-averaged imaginary part of the reflection amplitude, \begin{equation} \label{eq:local-dos} \langle r''(\omega)\rangle =\frac{K}{K_0}\frac{2\omega}{\pi} \text{Re}\langle G(0,0;\omega)\rangle \,. \end{equation} \section{Soft modes} Here we calculate the disorder-averaged local density of states at low frequency $\omega\ll\omega_\star$ in the strong-pinning regime. We confirm the scaling captured by Eq.~(17) in the main text, and we obtain the numerical factor in front of it.
Infinitely strong pinning on separate impurities reduces the system to a sequence of independent segments. Averaging the local density of states in the segment near the interface at $x=0$, whose length is given by the Poisson distribution, $P(L)=ce^{-cL}$, yields \begin{equation} \label{eq:DOS-Gorkov} \langle \nu(x=0,\omega)\rangle \approx 2c/\omega e^{-\pi v c/(2\omega)} \quad\text{at}\quad \omega\ll\pi v c\,. \end{equation} Finite-strength impurities allow for special configurations carrying soft excitations with arbitrarily low frequency, as was noticed in~\cite{Aleiner1994}. Closely following their work, we consider three impurities at positions $x_1=L_1,x_2=x_1+L,x_3=x_2+L_2>0$ near $x=0$, and such that $L\ll 1/(\gamma c) \ll L_1,L_2$, where $\gamma=K\lambda/(\hbar v\sqrt{ac^3})$ is the large parameter in the strong-pinning regime. The sum of elastic and potential energies associated with a static charge density such that $\partial_x\bar\theta(0)=0$ and $\bar\theta(x_j)=\theta_j$, \begin{equation} E=\frac{\hbar v}{2\pi K} \left[\frac{(\theta_2-\theta_1)^2}{L}+\frac{(\theta_3-\theta_2)^2}{L_2}\right]-\frac\lambda{\sqrt{ac}}\sum_{j=1}^3\cos(2\theta_j+\chi_j)\,, \end{equation} is minimized with $\theta_3=-\chi_3/2$ and $\theta_1-\theta_2=-2\pi \gamma c L \sin[(\chi_1-\chi_2)/2]\cos \Phi \ll 1$ with $\Phi=\theta_1+\theta_2+(\chi_1+\chi_2)/2$. Then, if $\chi_1-\chi_2=\pi+2\varepsilon$ with $0<\varepsilon\ll 1$, the equilibrium solution for $\Phi$ (with $|\Phi|\ll 1$) is obtained from the minimization of a quartic potential, \begin{equation} \label{eq:E-quartic} U=-h\Phi+A\Phi^2+B\Phi^4\,, \end{equation} where $h=\hbar v[\theta_3+(\chi_1+\chi_2)/4]/(2\pi K L_2)$, $A=\hbar v/(8\pi K L_2)-\varepsilon \lambda/\sqrt{ac}+ 4\pi K\lambda^2 L/(\hbar vac)$, and $B=\varepsilon \lambda/(12\sqrt{ac})-4\pi K\lambda^2 L/(3\hbar vac)$.
Using a Born-Oppenheimer approximation, we find the oscillation frequency $\omega_0$ in the minimum of the potential~\eqref{eq:E-quartic} using the Lagrangian $L=T-U$, where $T=M\dot\Phi^2/2$ with an effective ``mass'' $M=\hbar (L_1+L_2/3)/(\pi vK)$ for the coordinate $\Phi$. Namely, \begin{equation} \label{eq:freq2} \omega_0^2=\frac{4}M (h^2B)^{1/3} F\left( A/{(h^2B)^{1/3}}\right), \end{equation} where $F(\eta)$ is defined implicitly by $F(\eta)=\eta+6\xi^2$ with $-1+2\eta\xi+4\xi^3=0$. The function $F(\eta)$ reaches a minimum of order $1$ at $\eta=3/2^{5/3}$. Using Eq.~\eqref{eq:freq2} and the properties of $F(\eta)$, we find that the phase space $\Gamma$ that allows for small oscillation frequencies $\omega_0\ll \omega$ is bounded by critical values for $|h|$ and $|A|$ scaling like $\omega^3$ and $\omega^2$, respectively. Thus, $\Gamma\propto \omega^5$, yielding a density of states $\nu\sim d\Gamma/d\omega\propto \omega^4$. Furthermore, the conditions $B>0$ and $A\approx 0$, which are necessary to find soft modes, yield two constraints: $\varepsilon<1/(6\pi L_2c\gamma)$ and $L<\varepsilon/(16\pi c\gamma)$, where $L_2\sim 1/c$. They yield an additional smallness in the suppression of the local density of states, \begin{equation} \label{eq:dos-universal} \langle \nu(x=0,\omega)\rangle \propto \nu_0 \gamma^{-3} (\omega/vc)^4 \,. \end{equation} The power law \eqref{eq:dos-universal} dominates at low frequencies, while the exponential suppression \eqref{eq:DOS-Gorkov} takes over in an intermediate range of frequencies, $vc/\ln\gamma\ll \omega\ll vc$.
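The implicit function $F(\eta)$ can be tabulated by solving the cubic $4\xi^3+2\eta\xi-1=0$ for its real root (unique for $\eta>0$, where the cubic is monotonically increasing); at $\eta=3/2^{5/3}$ one finds $\xi=2^{-4/3}$ and hence the minimum value $F=3/2^{2/3}\approx 1.89$, of order $1$ as stated (a minimal sketch):

```python
import numpy as np

def F(eta):
    """F(eta) = eta + 6*xi^2, with xi the real root of 4*xi^3 + 2*eta*xi - 1 = 0
    (unique for eta > 0, where the cubic is monotonically increasing)."""
    roots = np.roots([4.0, 0.0, 2.0 * eta, -1.0])
    xi = float(roots[np.abs(roots.imag) < 1e-6].real[0])
    return eta + 6.0 * xi**2

eta_star = 3.0 / 2.0 ** (5.0 / 3.0)   # claimed location of the minimum
print(F(eta_star))                    # 3/2^(2/3), of order 1
```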
The missing numerical factor in Eq.~(\ref{eq:dos-universal}) is found by an explicit calculation of the average $\langle\nu(x=0,\omega)\rangle$ at $\omega\ll\omega_\star$, \begin{eqnarray} \label{eq:dos-interm} \langle \nu(x=0,\omega)\rangle=\int_0^\infty dL_1P(L_1) \int_0^\infty dL_2P(L_2) \int_0^{L_0} dL P(L) \int_0^{2\pi} \frac{d\chi_1}{2\pi}\int_0^{2\pi} \frac{d\chi_2}{2\pi}\int_0^{2\pi} \frac{d\chi_3}{2\pi}\nonumber\\ \times \int_{-\infty}^\infty dh' \int_{-\infty}^\infty dA'\delta(h-h')\delta(A-A') \frac 1{L_1+L_2/3}\delta(\omega-\omega_0(A',h'))\,, \end{eqnarray} where the factor $1/(L_1+L_2/3)$ comes from the normalization of the wavefunction associated with small oscillations. Here we used the Poisson distribution for the segments' lengths, and we assumed a uniform distribution for the phases $\chi_1,\chi_2,\chi_3$ in the impurity potential. Furthermore, the upper limit $L_0=1/(96 \pi^2 L_2 c^2\gamma^2)$ in the integral over $L$ is set by the validity of the expansion (\ref{eq:E-quartic}) leading to Eq.~(\ref{eq:freq2}). Now, setting $h'\approx A'\approx 0$ in the Dirac distributions (which is required to obtain low-frequency modes) allows performing the integrations over $h'$ and $A'$, \begin{equation} \int\!\! dh' \int\!\! dA' \delta(\omega-\omega_0(A',h'))=2\omega \int \!\!d\eta \int \!\!dh (h^2B)^{1/3}\delta\left(\omega^2-\frac{4(h^2B)^{1/3}}{M}F(\eta)\right) =\frac 3{16}\frac{M^{5/2}\omega^4}{B^{1/2}}\int\!\! \frac{d\eta}{F(\eta)^{5/2}} =\frac 1{8}\frac{M^{5/2}\omega^4}{B^{1/2}}\,.
\end{equation} The remaining integrations in Eq.~\eqref{eq:dos-interm} can be performed to yield \begin{eqnarray} \langle \nu(x=0,\omega)\rangle&=&\frac 1\pi \nu_0\frac{1}{\gamma^2}\left(\frac\omega{vc}\right)^4c^5 \int_0^\infty dL_1\int_0^\infty dL_2 \int_0^{L_0} dL \frac{(L_1+L_2/3)^{3/2}L_2}{\sqrt{L_0-L}} e^{-c(L_1+L_2+L)} \nonumber \\ \nonumber\\ &=&\frac 1{2\sqrt{6}\pi^2}\nu_0\frac{1}{\gamma^3}\left(\frac\omega{vc}\right)^4 \int_0^\infty dx_1 dx_2 \sqrt{x_2(x_1+x_2/3)^3}e^{-x_1-x_2} \nonumber\\ &\approx& 0.12\, \nu_0\frac{1}{\gamma^3}\left(\frac\omega{vc}\right)^4\,, \end{eqnarray} where we used that the ranges of integration giving the dominant contributions are $L\ll 1/c$ and $L_1,L_2\sim 1/c$. The scaling is the same as in~\cite{Aleiner1994} for the global density of states, but the numerical prefactor is different. At $\gamma \sim 1$, a crossover to weak pinning occurs. That leaves no room for the intermediate asymptote \eqref{eq:DOS-Gorkov} and broadens the region of the $\omega^4$-dependence to $\omega\lesssim\omega_\star$, leading to \begin{equation} \label{eq:Dos-w4} \langle \nu(x=0,\omega)\rangle \propto \nu_0 (\omega/\omega_\star)^4 \,. \end{equation} The proportionality coefficient, missing here but appearing in Eq.~(17) of the main text, is found from the numerical simulations outlined in the next section. \section{Numerics} \label{sec:numerics} Here we provide details on the numerical computation of the local density of states at arbitrary frequency, which allowed us to plot Fig.~2 in the main text. For this, we start by formulating the problem in dimensionless variables.
Namely, by rescaling the spatial dimension $x=\xi y$ with length $\xi=R_\star/(2\pi^2)^{1/3}$, we find that the static field $\bar\theta(y)$ setting the charge density in a given charge disorder configuration in the disconnected Josephson junction chain should satisfy the boundary conditions $\partial_y\bar\theta(0)=0$ and $\bar\theta(d/\xi)=0$, and minimize the energy \begin{equation} {\cal E}[\bar\theta]=\frac{\hbar v}{2\pi K\xi}\int_0^{d/\xi} dy\left[(\partial_y\bar\theta)^2-{\cal V}'\cos2\bar\theta+{\cal V}''\sin2\bar\theta\right]\,. \end{equation} Here ${\cal V}'(y)$ and ${\cal V}''(y)$ are two independent real random fields characterizing the disorder configuration, such that \begin{equation} \label{eq:correl-dimless} \langle {\cal V}'(y){\cal V}'(y')\rangle=\langle {\cal V}''(y){\cal V}''(y')\rangle=\delta(y-y')\,. \end{equation} The spectrum of small oscillations in the potential set by the static field $\bar\theta$ is also found as a dimensionless eigenproblem, \begin{equation} \label{eq:eigen-pb} \Omega_n^2\psi_n=-\partial_y^2\psi_n+2({\cal V}'\cos2\bar\theta-{\cal V}''\sin2\bar\theta)\psi_n \end{equation} with eigenfrequency $\Omega_n=\omega_n \xi/v$ and (real) eigenfunction $\psi_n$ normalized such that ${\int_0^{d/\xi} dy\psi^2_n(y)}=1$. The local density of states then reads \begin{equation} \nu(x=0,\omega)=\pi\nu_0\sum_n {\psi^2_n(0)}\delta(\Omega-\Omega_n)\,. \end{equation} Next we discretize the above equations by introducing a small spacing $\epsilon$, such that the (dimensionless) length corresponds to $M=(d/\xi)/\epsilon$ sites. The random fields with correlators \eqref{eq:correl-dimless} are replaced with discrete fields ${\cal V}'_{m}$ and ${\cal V}''_{m}$ at sites $y_m=m\epsilon$ ($0\leq m<M$), which are uncorrelated from site to site, and which are drawn with flat probability in the interval $-\sqrt{3/\epsilon}<{\cal V}'_{m},{\cal V}''_{m}<\sqrt{3/\epsilon}$.
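The chosen interval reproduces the correlator \eqref{eq:correl-dimless} in the continuum limit: a flat distribution on $(-\sqrt{3/\epsilon},\sqrt{3/\epsilon})$ has variance $1/\epsilon$, so that $\langle{\cal V}_m{\cal V}_{m'}\rangle=\epsilon^{-1}\delta_{m,m'}\to\delta(y-y')$ as $\epsilon\to 0$. A quick statistical check (ours, illustrative):

```python
import numpy as np

eps = 0.05                          # mesh spacing, as in the simulations
rng = np.random.default_rng(1)
a = np.sqrt(3.0 / eps)              # flat distribution on (-a, a) has variance a^2/3 = 1/eps

V1 = rng.uniform(-a, a, size=200_000)   # V'_m draws
V2 = rng.uniform(-a, a, size=200_000)   # V''_m draws, independent of V'_m

var_scaled = V1.var() * eps             # should approach 1 (delta-correlation weight)
cross = np.mean(V1 * V2) * eps          # independent fields: should vanish
print(var_scaled, cross)
```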
The random static field $\bar \theta$ is obtained by minimization of the energy functional \begin{equation} {\cal E}[\{\bar\theta_m\}]=\frac{\hbar v\epsilon}{2\pi K\xi} \left\{ \sum_{m=1}^{M-1}\left[ \frac{(\bar\theta_{m-1}-\bar\theta_{m})^2}{\epsilon^2} -({\cal V}'_{m}\cos2\bar\theta_m-{\cal V}''_{m}\sin2\bar\theta_m)\right] +\frac{\bar\theta^2_{M-1}}{\epsilon^2}-\frac 12 ({\cal V}'_{0}\cos2\bar\theta_0-{\cal V}''_{0}\sin2\bar\theta_0) \right\}\,. \end{equation} The last two terms ensure that the boundary conditions for $\bar \theta$ are satisfied in the continuum limit, $\epsilon\to 0$. (In particular, in order to reproduce the boundary condition at $y=0$ and get this result, one may think of $\bar\theta$ as the even solution of a twice longer chain with symmetric disorder on either side of the central node $m=0$.) We then use the algorithm described in Ref.~\cite{Gurarie2003} to obtain the variables $\{\bar\theta_m\}$ [taking discrete values $2\pi p/P$ ($0\leq p\leq P-1$ with integer $P\gg 1$)] that realize the absolute minimum of the energy $\cal E$. Once the static field configuration has been determined, we can obtain the local density of states, \begin{equation} {\nu(x=0,\omega)}=-2{\nu_0}\Omega\text{Im}G_{0,0}(\Omega)\,, \end{equation} in terms of the retarded Green function $G_{m,m'}(\Omega)$ associated with Eq.~\eqref{eq:eigen-pb}. The boundary condition at site $m=0$ is obtained by considering a Green function $\tilde G_{m,m'}(\Omega)$ in a twice longer space, \begin{equation} G_{m,m'}=\tilde G_{m,m'}+\tilde G_{-m,m'}\,, \end{equation} such that $G_{0,0}=2\tilde G_{0,0}$. The latter Green function is obtained as a continued fraction using the recursive equation \begin{equation} (\Omega^2-{\cal W}_m+\frac 2{\epsilon^2})\tilde G_{m,m'}(\Omega) -\frac 1{\epsilon^2}\left[\tilde G_{m+1,m'}(\Omega)+\tilde G_{m-1,m'}(\Omega)\right]=\delta_{m,m'} \end{equation} with ${\cal W}_m=2({\cal V}'_m\cos2\bar\theta_m-{\cal V}''_m\sin2\bar\theta_m)$.
In the plots shown here and in Fig.~2 in the main text, we choose a number of sites $M=200,400,\,\text{and}\,800$, and a mesh $\epsilon=0.05$, corresponding to $d/R_\star=M\epsilon /(2 \pi^{2})^{1/3}\approx 3.7,7.4,\,\text{and}\,14.8$, $\omega_\star/\Delta=M\epsilon /(2 \pi^{5})^{1/3}\approx 1.2,2.4,\,\text{and}\,4.7$, and $\omega_{\text{cr}}/\Delta =\sqrt{3/2}(M\epsilon)^{3/2} /\pi^2\approx 3.9,11.1,\,\text{and}\,31.4$, respectively. We also introduce an imaginary broadening $\Omega\to\Omega+i\tilde \gamma$ with $\tilde \gamma=0.02 \Delta$. \begin{figure} (a)\includegraphics[width=0.35\columnwidth]{Fig-20-angle.pdf} (b)\includegraphics[width=0.34\columnwidth]{Fig-20-1config.pdf} \caption{\label{Fig:single-dis} (a) Spatial profile of the static configuration $\bar\theta(x)$ for a given disorder configuration in a chain with length $d=7.4 R_\star$, corresponding to $\omega_\star\approx 2.4 \Delta$ and $\omega_{\text{cr}} \approx 11.1 \Delta$. (b) Local density of states (in units of $\nu_0$) as a function of the frequency (in units of $\Delta$) for that disorder configuration.} \end{figure} In Fig.~\ref{Fig:single-dis}, we plot the spatial profile of the static configuration along the chain for a single disorder configuration (left panel). We also show the local density of states (normalized by the bulk value $\nu_0$) as a function of the frequency (in units of $\Delta$) for that disorder configuration. Typically, distant levels do contribute to the local density of states at low energies, but their contributions are exponentially small, $\propto\exp(-d/R_\star)$. Thus one sees a few ``representative'' levels contributing to the local density of states at $\omega\lesssim\omega_\star$. By contrast, a (roughly) regular spectrum of equidistant states is visible at large frequencies $\omega\gtrsim\omega_\text{cr}$.
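The quoted parameter conversions can be checked directly from the formulas in the text (a trivial arithmetic sketch, ours):

```python
import math

eps = 0.05
for M in (200, 400, 800):
    # d/R_star, omega_star/Delta, omega_cr/Delta for M sites at mesh eps
    d_over_Rstar = M * eps / (2 * math.pi**2)**(1 / 3)
    wstar_over_Delta = M * eps / (2 * math.pi**5)**(1 / 3)
    wcr_over_Delta = math.sqrt(3 / 2) * (M * eps)**1.5 / math.pi**2
    print(M, round(d_over_Rstar, 1), round(wstar_over_Delta, 1), round(wcr_over_Delta, 1))
```

This reproduces the triples $(3.7, 1.2, 3.9)$, $(7.4, 2.4, 11.1)$, and $(14.8, 4.7, 31.4)$.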
\begin{figure} (a)\includegraphics[width=0.35\columnwidth]{Fig-10.pdf} (b)\includegraphics[width=0.35\columnwidth]{Fig-20.pdf} \\ (c)\includegraphics[width=0.35\columnwidth]{Fig-40.pdf} (d)\includegraphics[width=0.35\columnwidth]{Fig-All.pdf} \caption{\label{Fig:av-dis} (a-c) Local density of states (in units of $\nu_0$) as a function of the frequency (in units of $\Delta$) averaged over 5000 disorder configurations, and comparison with the asymptotic predictions at frequencies below and above $\omega_\star$, for chains with length $d=3.7,7.4,\,\text{and}\,14.8 R_\star$, respectively (Fig.~\ref{Fig:av-dis}(b) is the same as Fig.~2 in the main text). (d) Disorder-averaged local density of states as a function of the frequency (in units of $\omega_\star$) for the three chain lengths.} \end{figure} In Figs.~\ref{Fig:av-dis} and 2 in the main text, we plot the local density of states averaged over a large number of disorder configurations for three different lengths. We compare the result with the asymptotic formula \begin{equation} \langle \nu(0,\omega)\rangle /\nu_0= C(\omega/\omega_\star)^4 \quad \text{with}\quad C\sim 0.032\,, \end{equation} which is expected to hold at low frequency $\omega\lesssim\omega_\star$, and with the prediction \begin{equation} \langle \nu(0,\omega)\rangle/\nu_0 = \sqrt{\frac 2 \pi}\frac{\omega_{\text{cr}}}{\omega}\sum_{n=0}^\infty \exp\left[-\frac{\omega^2}{2\omega_{\text{cr}}^2}\left(\frac \omega \Delta-n-\frac 12\right)^2\right]\,, \end{equation} which holds at frequencies $\omega\gg\omega_\star$, yielding well resolved peaks at $\omega\gg\omega_\text{cr}$.
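The high-frequency prediction can be evaluated directly; the following sketch (ours, with an arbitrary illustrative value $\omega_{\rm cr}=3\Delta$) shows that the peaks at $\omega=(n+1/2)\Delta$ become sharply resolved once $\omega\gg\omega_{\rm cr}$:

```python
import numpy as np

def dos_high_freq(omega, omega_cr, Delta=1.0, n_terms=200):
    """High-frequency asymptote for <nu(0, omega)>/nu_0 given in the text."""
    n = np.arange(n_terms)
    gauss = np.exp(-(omega**2 / (2.0 * omega_cr**2)) * (omega / Delta - n - 0.5)**2)
    return np.sqrt(2.0 / np.pi) * (omega_cr / omega) * np.sum(gauss)

omega_cr = 3.0
on_peak = dos_high_freq(30.5, omega_cr)    # omega sitting on a resonance (n = 30)
between = dos_high_freq(31.0, omega_cr)    # omega midway between two resonances
print(on_peak, between)
```

At $\omega\sim\omega_{\rm cr}$ the Gaussians overlap and the peak structure washes out, matching the qualitative picture described above.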
\begin{figure} \includegraphics[width=0.35\columnwidth]{Fig-All-local-global.pdf} \caption{\label{Fig:all} Local and global disorder-averaged density of states as a function of the frequency, for the same parameters as in Fig.~\ref{Fig:av-dis}.} \end{figure} In particular, Fig.~\ref{Fig:av-dis}(d) shows the scaling of the disorder-averaged local density of states as a function of $\omega/\omega_\star$ for three different lengths of the chain. In addition, Fig.~\ref{Fig:all} shows both the local and global densities of states; it illustrates the relation $\langle \nu(x=0,\omega)\rangle\approx 2 \langle \nu(\omega)\rangle$, which approximately holds at low frequencies, $\omega\lesssim\omega_\star$. \section{Inelastic scattering} Here we use perturbation theory in the phase slip term in Hamiltonian~(3) of the main text to provide an alternative derivation of the frequency shift, yielding Eq.~(16) in the main text in long chains, $d\gg R_\star$, at large frequencies, $\omega\gg\omega_\star$. Then we use the same approach to argue that inelastic scattering processes give negligible corrections to the result.\\ \subsection{Frequency shift} Hamiltonian (2) in the main text, with boundary conditions $\partial_x\theta(x=0)=0$ and $\theta(x=d)=0$ in a finite-length chain, is diagonalized as \begin{equation} H_0= \sum_{n=0}^\infty (n+1/2)\Delta a^\dagger_na_n\,, \end{equation} where $a_n$ is an annihilation operator of a plasmon, such that \begin{equation} \label{eq:theta} \theta(x)= \sum_{n=0}^\infty \sqrt{\frac{K}{n+1/2}}(a_n+ a_n^\dagger ) \cos (q_n x) \end{equation} with $q_n=\pi (n+1/2)/d$. The dominant effect of a small but finite $\lambda$ is to shift the resonance frequencies, $\omega_n=(n+1/2)\Delta +\delta_n$. 
To first order in $\lambda$, the frequency shift of mode $n$ is the energy difference between no plasmon in the chain and the singly-occupied $n$-plasmon mode, $\delta_n=(1/\hbar)\left(\langle 1_n|H_1|1_n\rangle-\langle 0|H_1|0\rangle\right)$, where $|0\rangle$ is the vacuum state, $|1_n\rangle=a^\dagger_n|0\rangle$, and \begin{equation} \label{eq:phase-slip-term} H_1=-\frac\lambda a\int dx\cos(2\theta+\chi) \end{equation} is the phase slip term. Inserting Eq.~\eqref{eq:theta} into it, we get \begin{eqnarray} \delta_n &=&-\frac\lambda {\hbar a} \mathrm{Re}\int dx\langle 0|\prod_m \left( a_ne^{2i\sqrt{\frac{ K}{m+1/2}}( a_m+ a^\dagger_m)\cos (q_m x)} a_n^\dagger- e^{2i\sqrt{\frac{ K}{m+1/2}}( a_m+ a^\dagger_m)\cos (q_m x)}\right)|0\rangle e^{i\chi(x)} \nonumber\\ &\approx&\frac{4K \lambda }{\hbar a(n+1/2)}\int _0^d dx\sin^2(q_n x)\exp\left(-{2 K}\sum_{m}\frac{\sin^2(q_m x)}{m+1/2}\right)\cos\chi(x)\,, \end{eqnarray} where we used $n\gg 1$ to expand the factor with $m=n$ to lowest order in $K$, and to add back the (small) term with $m=n$ in the exponential. In long arrays, $d\gg R_\star$ (which can only happen if $K<3/2$, according to Eq.~(7) in the main text), the exponential factor can be evaluated by taking the continuum limit, $\sum_m\approx (d/\pi ) \int dq$, replacing $\sin^2(q_mx)\approx 1/2$, and using $1/\ell_{\mathrm{sc}}$ and $1/R_\star (\gg 1/d)$ as the high- and low-momentum cutoffs of the log-divergent sum (this takes into account the fact that plasmons with frequency $\omega<\omega_\star$ are localized, and do not contribute to the sum). This yields \begin{equation} \label{eq:dn} \delta_n =\frac{4 K\lambda (R_\star/\ell_{\rm sc})^{-K} }{\hbar a(n+1/2)}\int _0^d dx\sin^2(q_n x)\cos\chi(x)\,. \end{equation} On average, $\langle\delta_n\rangle=0$.
Using the correlator (4) in the main text, we find that its variance can be presented as \begin{equation} \label{eq:var} \frac{\langle\delta^2_{n}\rangle}{\Delta^2} =\frac{3}{\pi^2 (n+1/2)^2}\left(\frac{d}{R_\star}\right)^3\,, \end{equation} in agreement with Eq.~(15) in the main text upon the identification $\delta\left(\omega=(n+1/2)\Delta\right)\equiv{\langle\delta^2_{n}\rangle}^{1/2}$. \subsection{Internal dissipation} Considered in the harmonic approximation, the plasmon excitation spectrum of the Josephson-junction chain is broadened by radiative decay. At strong impedance mismatch with the waveguide, $K/K_0\ll 1$, the corresponding level width $\Gamma=(2v/d)K/K_0$ is small [for a discussion of $\Gamma$, see the main text above Eq.~(10)]. Interaction between the plasmons may provide additional channels of ``internal dissipation''. In the absence of phase slips, charge disorder does not affect the dynamics of the phases $\varphi_n$ in Eq.~(1) of the main text. The anharmonicity stemming from the expansion of $\cos(\varphi_n-\varphi_{n-1})$ beyond second order in $\varphi_n-\varphi_{n-1}$ results in a momentum-conserving interaction between the plasmons. Because the plasmon spectrum $\omega(q)$ is convex, decay of a plasmon occurs only in collisions with other plasmons present in the system (spontaneous decay is not allowed by the energy and momentum conservation laws), see, {\sl e.g.}, Ref.~\cite{Lin2013}. By contrast, the interaction stemming from the phase slip term \eqref{eq:phase-slip-term} allows a plasmon to decay into a number of plasmons of lower frequency and may provide an additional channel of ``internal dissipation''. Leaving aside the complex question of the possibility of many-body localization~\cite{Aleiner2006} of excitations of an isolated system at $\Gamma\to 0$, here we provide a crude estimate of the contribution of the internal dissipation to the plasmon line width.
The decay rate of a state $|i\rangle$ of Hamiltonian $H_0$ due to a perturbation $H_1$ is given by Fermi's golden rule, \begin{equation} \Gamma_i=2\pi\sum_f\left|\langle f|H_1|i\rangle\right|^2\delta(\varepsilon_i-\varepsilon_f) ={\rm Re}\int_{-\infty}^{\infty} dt e^{i0t} \langle i|e^{iH_0t}H_1e^{-iH_0t}H_1|i\rangle\,. \end{equation} Here $\varepsilon_i$ and $\varepsilon_f$ are, respectively, the energies of the initial state $|i\rangle$ and final states $|f\rangle$. We take a plasmon state $|i\rangle=a^\dagger _n|0\rangle$ with the (unperturbed) value of energy $\varepsilon_i= (n+1/2)\Delta$ as an approximate eigenstate of $H_0$. Inserting Eq.~\eqref{eq:theta} into the phase slip term \eqref{eq:phase-slip-term}, we then identify the contributions leading to its decay into $p$ plasmons, \begin{equation} H_1=\sum_{p \geq 2}\sum_{n_1\dots n_p}V_{n,n_1\dots n_p} a_{n_1}^\dagger\dots a_{n_p}^\dagger a_n +\text{H.c.}+\text{other terms} \end{equation} with \begin{equation} V_{n,n_1\dots n_p} =\frac{(2i)^{p+1}}{p!}\frac{\lambda(R_\star/\ell_{\rm sc})^{-K}}{a}\frac{K^{(p+1)/2}}{\sqrt{n\,n_1\dots n_p}}\int_0^d dx\frac {e^{i\chi(x)}+(-1)^pe^{-i\chi(x)}}{2}\cos(q_{n_1} x)\dots \cos(q_{n_p} x) \cos(q_{n} x)\,. \end{equation} Here the factor $(R_\star/\ell_{\rm sc})^{-K}$ has the same origin as the one that appears in Eq.~\eqref{eq:dn}; furthermore, we used $n_i+1/2\approx n_i$ at $n_i\gg 1$.
Then, we evaluate the inelastic decay rate induced by $H_1$ using Wick's theorem, \begin{eqnarray} \Gamma_n&\approx& \frac{\lambda^2}{a^2} \left(\frac{R_\star}{\ell_{\rm sc}}\right)^{-2K} \sum_p \frac{(4K)^{p+1}}{p!} \sum_{n_1\dots n_p} \frac 1{n_1\dots n_p n} \left(\frac{da}{2^{p+2}}\right) \int dt \langle 0|a_na^\dagger_n(t)a_{n_1}(t)\dots a_{n_{p}}(t)a_{n_{p}}^\dagger \dots a_{n_1}^\dagger a_{n}a_{n}^\dagger |0\rangle \nonumber\\ &=&\frac{\lambda^2}{a^2} \left(\frac{R_\star}{\ell_{\rm sc}}\right)^{-2K} \sum_p \frac{(4K)^{p+1}}{p!} \sum_{n_1\dots n_p} \frac 1{n_1\dots n_p n} \left(\frac{da}{2^{p+2}}\right) \int dt iG_n^>(-t)iG_{n_1}^>(t)\dots iG_{n_p}^>(t) \label{Gamman1} \end{eqnarray} with $iG_l^>(t)= \langle a_l(t)a^\dagger_l(0)\rangle$. For this estimate, we replaced a factor that depends on the disorder configuration by its average: \begin{equation} \left(\frac{da}{2^{p+2}}\right) = \int_0^d dx \int_0^ddy \frac {e^{i\chi(x)}+(-1)^pe^{-i\chi(x)}}{2}\frac {e^{i\chi(y)}+(-1)^pe^{-i\chi(y)}}{2} \prod_{m=n_1,\dots,n_p,n} \cos q_m x \cos q_m y \,. \end{equation} Within Rayleigh-Schr\"odinger perturbation theory, $G^>_{l}(t)\approx -i e^{-i\omega_l t}e^{-\Gamma_l |t|/2}$ with $\omega_l$ evaluated in the absence of phase slips, $\omega_l \to \varepsilon_l=(l+1/2)\Delta$, and the decay rate coming solely from radiation of the mode $l$ into the waveguide, $\Gamma_l\to \Gamma$. Assuming that the internal dissipation does occur, we modify $G^>_{l}(t)$ by allowing for $\Gamma_l\neq\Gamma$ and for $\omega_l$ corrected by the phase slips in the presence of a specific configuration of disorder.
In this way, Eq.~(\ref{Gamman1}) becomes a self-consistent equation for finding $\Gamma_n$: \begin{equation} \Gamma_n= \frac {2 d K^2\lambda^2}a \left(\frac{R_\star}{\ell_{\rm sc}}\right)^{-2K} \sum_{p\geq 2} \frac{(2K)^{p-1}}{p!} \sum_{n_1\dots n_p} \frac 1{n_1\dots n_p n} \frac{\Gamma_{n_1}+\dots+\Gamma_{n_p}+\Gamma_n}{(\omega_{n_1}+\dots+\omega_{n_p}-\omega_n)^2+\frac 14 (\Gamma_{n_1}+\dots+\Gamma_{n_p}+\Gamma_n)^2} \,. \label{Gamman2} \end{equation} The disorder-induced frequency shifts $\omega_l-\varepsilon_l$ are important, as they destroy the multiple resonances which would occur for the equidistant unperturbed spectrum $\varepsilon_l$ appearing in the denominators of the summand of Eq.~(\ref{Gamman2}). It is clear from Eq.~(15) of the main text that the typical shifts $|\omega_l-\varepsilon_l|\sim\omega_{\rm cr}/l$ depend on $l$, obstructing the resonances unless the sum of $\Gamma_l$ in the denominators of Eq.~(\ref{Gamman2}) exceeds ${\rm min}\{\Delta,\omega_{\rm cr}/l\}$. We are interested in the estimate of $\Gamma_n$ for modes with relatively high frequencies, $\varepsilon_n\gtrsim\omega_{\rm cr}$, at which $\omega_{\rm cr}/n\lesssim\Delta$. Our analysis of Eq.~(\ref{Gamman2}) at $K\ll 1$ points to $\Gamma_n\ll\omega_{\rm cr}/n$ in that frequency range, while $\Gamma_l\sim\Delta$ at $\omega_l\sim\sqrt{ K}\omega_{\rm cr}$ (here we assume that, albeit small, $K$ is large enough to allow for $\sqrt{K}\omega_{\rm cr}\gtrsim\omega_\star$, {\it i.e.}, $R_\star/d\lesssim K\ll 1$). At small $K$, the main contribution to the sum over $p$ comes from the first term ($p=2$), corresponding to a decay of the plasmon mode $n$ into another high-frequency mode $n_1\approx n$ accompanied by the emission of one broadened plasmon ($\Gamma_l\gtrsim\Delta$) with frequency below $\sqrt{K}\omega_{\rm cr}$. That allows us to replace the summation over $\{n\}$ in Eq.~(\ref{Gamman2}) by integration.
Dispensing with a factor $\sim\ln(\sqrt{K}\omega_{\rm cr}/\omega_\star)$ and with a numerical factor, neither of which alters our conclusions, we obtain \begin{equation} \label{eq:inel} \Gamma(\omega)\sim K\frac{\omega^3_\star}{\omega^2} \sim \Delta \frac{K\omega^2_\text{cr}}{\omega^2}\quad\text{at}\quad \omega \gtrsim \sqrt{K}\omega_\text{cr}\,. \end{equation} Note that the frequency dependence here agrees with the one found in \cite{Rosenow2007}, as well as in \cite{Bard2018} [cf.~the second line in their Eq.~(43)], at $K\ll 1$. Using Eq.~(\ref{eq:inel}) and Eq.~(15) of the main text, the ratio $\Gamma(\omega)/\delta(\omega)$ can be presented in the form \begin{equation} \frac{\Gamma(\omega)}{\delta(\omega)}\sim K\,\frac{\omega_{\rm cr}}{\omega}\,. \label{Gamman3} \end{equation} This confirms that:\\ (i) at $\omega\gtrsim\omega_{\rm cr}$ the main contribution to the widths of the disorder-averaged plasmon resonances comes from the inhomogeneous broadening $\delta(\omega)$ considered in the main text, rather than from the internal dissipation;\\(ii) due to the internal dissipation, $\Gamma(\omega)$ may reach a value $\sim\Delta$ at low frequencies, $\omega\sim \sqrt{K}\omega_{\rm cr}$.
\section{Introduction} Among the most promising approaches to the problem of global optimization of an unknown function under reasonable smoothness assumptions are extensions of the multi-armed bandit setup. \cite{Bubeck2009} highlighted the connection between cumulative regret and simple regret, which facilitates a fair comparison between methods, and \cite{Bubeck2011} proposed bandit algorithms on a metric space $\cX$, called $\cX$-armed bandits. In this context, theory and algorithms have been developed in the case where the expected reward is a function $f:\cX\to\bR$ which satisfies certain smoothness conditions such as Lipschitz or H\"older continuity \citep{Kleinberg2004, Kocsis2006, Auer2007, Kleinberg2008, Munos2011}. Another line of work is the Bayesian optimization framework \citep{Jones1998, Bull2011, Mockus2012}, in which the unknown function $f$ is assumed to be a realization of a prior stochastic process distribution, typically a Gaussian process. An efficient algorithm that can be derived in this framework is the popular GP-UCB algorithm due to \cite{Srinivas2012}. However, an important limitation of upper confidence bound (UCB) strategies without a smoothness condition is that the search space has to be {\em finite} with bounded cardinality, a fact which is well known but, to our knowledge, has not been discussed so far in the related literature. In this paper, we propose an approach which improves both lines of work with respect to their present limitations. Our purpose is to: (i) relax the smoothness assumptions that limit the relevance of $\cX$-armed bandits in practical situations where target functions may only display random smoothness, (ii) extend the UCB strategy to arbitrary sets $\cX$. Here we will assume that $f$, being the realization of a given stochastic process distribution, fulfills a \emph{probabilistic smoothness} condition.
We will consider the stochastic process bandit setup, and we develop a UCB algorithm based on {\em generic chaining} \citep{Bogachev1998,Adler2009,Talagrand2014,Gine2015}. Using the generic chaining construction, we compute hierarchical discretizations of $\cX$ in the form of chaining trees, in a way that permits a precise control of the discretization error. The UCB algorithm then operates on these successive discrete subspaces and chooses the accuracy of the discretization at each iteration so that the cumulative regret it incurs matches the state-of-the-art bounds on finite $\cX$. In the paper, we propose an algorithm which computes a generic chaining tree for an arbitrary stochastic process in quadratic time. We show that, with high probability, this tree is optimal for classes such as Gaussian processes. Our theoretical contributions have an impact in the two contexts mentioned above. From the bandit and global optimization point of view, we provide a generic algorithm that incurs state-of-the-art regret on stochastic process objectives including non-trivial functionals of Gaussian processes, such as the sum of squares of Gaussian processes (in the spirit of mean-square-error minimization), nonparametric Gaussian processes on ellipsoids (RKHS classes), or the Ornstein-Uhlenbeck process, which was conjectured impossible by \cite{Srinivas2010} and \cite{Srinivas2012}. From the point of view of Gaussian process theory, the generic chaining algorithm leads to tight bounds on the supremum of the process in probability, and not only in expectation. The remainder of the paper is organized as follows. In Section~\ref{sec:framework}, we present the stochastic process bandit framework over continuous spaces. Section~\ref{sec:chaining} is devoted to the construction of generic chaining trees for search space discretization. Regret bounds are derived in Section~\ref{sec:regret} after choosing an adequate discretization depth.
Finally, lower bounds are established in Section~\ref{sec:lower_bound}. \section{Stochastic Process Bandits Framework} \label{sec:framework} We consider the optimization of an unknown function $f:\cX\to\bR$ which is assumed to be sampled from a given separable stochastic process distribution. The input space $\cX$ is an arbitrary space not restricted to subsets of $\bR^D$, and we will see in the next section how the geometry of $\cX$ for a particular metric is related to the hardness of the optimization. An algorithm iterates the following: \begin{itemize} \item it queries $f$ at a point $x_i$ chosen with the previously acquired information, \item it receives a noisy observation $y_i=f(x_i)+\epsilon_i$, \end{itemize} where the $(\epsilon_i)_{1\le i \le t}$ are independent centered Gaussian variables $\cN(0,\eta^2)$ of known variance. We evaluate the performance of such an algorithm using the cumulative regret $R_t$: \[R_t = t\sup_{x\in\cX}f(x) - \sum_{i=1}^t f(x_i)\,.\] This objective is not observable in practice, and our aim is to give theoretical upper bounds that hold with arbitrarily high probability in the form: \[\Pr\big[R_t \leq g(t,u)\big] \geq 1-e^{-u}\,.\] Since the stochastic process is separable, the supremum over $\cX$ can be replaced by the supremum over all finite subsets of $\cX$ \citep{Boucheron2013}. Therefore we can assume without loss of generality that $\cX$ is finite with arbitrary cardinality. We discuss practical approaches to handle continuous spaces in Appendix~\ref{sec:greedy_cover}. Note that the probabilities are taken under the product space of both the stochastic process $f$ itself and the independent Gaussian noises $(\epsilon_i)_{1\le i\le t}$. The algorithm faces the exploration-exploitation tradeoff. It has to decide between reducing the uncertainty on $f$ and maximizing the rewards.
In some applications one may be interested in finding the maximum of $f$ only, that is, in minimizing the simple regret \[S_t = \sup_{x\in\cX}f(x) - \max_{i\leq t}f(x_i)\,.\] We will reduce our analysis to this case by simply observing that $S_t\leq \frac{R_t}{t}$. \paragraph{Confidence Bound Algorithms and Discretization.} To deal with the uncertainty, we adopt the \emph{optimistic optimization} paradigm and compute high confidence intervals where the values $f(x)$ lie with high probability, and then query the point maximizing the upper confidence bound \citep{Auer2002}. A naive approach would use a union bound over all of $\cX$ to get high confidence intervals at every point $x\in\cX$. This would work for a search space with fixed cardinality $\abs{\cX}$, resulting in a factor $\sqrt{\log\abs{\cX}}$ in the Gaussian case, but it fails when $\abs{\cX}$ is unbounded, typically a grid of high density approximating a continuous space. In the next section, we tackle this challenge by employing {\em generic chaining} to build hierarchical discretizations of $\cX$. \section{Discretizing the Search Space via Generic Chaining} \label{sec:chaining} \subsection{The Stochastic Smoothness of the Process} Let $\ell_u(x,y)$ for $x,y\in\cX$ and $u\geq 0$ be the following confidence bound on the increments of $f$: \[\ell_u(x,y) = \inf\Big\{s\in\bR: \Pr[f(x)-f(y) > s] < e^{-u}\Big\}\,.\] In short, $\ell_u(x,y)$ is the best bound satisfying $\Pr\big[f(x)-f(y) \geq \ell_u(x,y)\big] < e^{-u}$. For particular distributions of $f$, it is possible to obtain closed formulae for $\ell_u$. However, in the present work we will consider upper bounds on $\ell_u$. Typically, if $f$ is distributed as a centered Gaussian process of covariance $k$, which we denote $f\sim\cGP(0,k)$, we know that $\ell_u(x,y) \leq \sqrt{2u}d(x,y)$, where $d(x,y)=\big(\E(f(x)-f(y))^2\big)^{\frac 1 2}$ is the canonical pseudo-metric of the process.
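The Gaussian bound above follows from the standard tail inequality $\Pr[Z>s]\leq e^{-s^2/2}$ for $Z\sim\cN(0,1)$, since $f(x)-f(y)\sim\cN(0,d^2(x,y))$: taking $s=\sqrt{2u}$ gives exactly $e^{-u}$. A quick numeric check of this inequality (ours, for illustration), using the exact Gaussian tail:

```python
import math

def gaussian_tail(s):
    """Exact P[Z > s] for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(s / math.sqrt(2.0))

# P[f(x) - f(y) > sqrt(2u) d(x, y)] = P[Z > sqrt(2u)] <= e^{-u} for every u >= 0,
# which is exactly the bound ell_u(x, y) <= sqrt(2u) d(x, y)
checks = [(u, gaussian_tail(math.sqrt(2.0 * u)), math.exp(-u))
          for u in (0.5, 1.0, 2.0, 4.0, 8.0)]
for u, tail, bound in checks:
    print(u, tail, bound)
```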
More generally, if there exist a pseudo-metric $d(\cdot,\cdot)$ and a function $\psi(\cdot,\cdot)$ bounding the logarithm of the moment-generating function of the increments, that is, \[\log \E e^{\lambda(f(x)-f(y))} \leq \psi(\lambda,d(x,y))\,,\] for $x,y\in\cX$ and $\lambda\in I \subseteq \bR$, then using the Chernoff bounding method \citep{Boucheron2013}, \[\ell_u(x,y) \leq \psi^{*-1}(u,d(x,y))\,,\] where $\psi^*(s,\delta)=\sup_{\lambda\in I}\big\{\lambda s - \psi(\lambda,\delta)\big\}$ is the Fenchel-Legendre dual of $\psi$ and $\psi^{*-1}(u,\delta)=\inf\big\{s\in\bR: \psi^*(s,\delta)>u\big\}$ denotes its generalized inverse. In that case, we say that $f$ is a $(d,\psi)$-process. For example, if $f$ is sub-Gamma, that is, \begin{equation} \label{eq:sub_gamma} \psi(\lambda,\delta)\leq \frac{\nu \lambda^2 \delta^2}{2(1-c\lambda \delta)}\,, \end{equation} we obtain \begin{equation} \label{eq:sub_gamma_tail} \ell_u(x,y) \leq \big(c u + \sqrt{2\nu u}\big) d(x,y)\,. \end{equation} The generality of Eq.~\ref{eq:sub_gamma} makes it convenient to derive bounds for a wide variety of processes beyond Gaussian processes, as we see for example in Section~\ref{sec:gp2}. \subsection{A Tree of Successive Discretizations} As stated in the introduction, our strategy for obtaining confidence intervals for stochastic processes is the successive discretization of $\cX$. We define a notion of tree that will be used for this purpose. A set $\cT=\big(\cT_h\big)_{h\geq 0}$, where $\cT_h\subset\cX$ for $h\geq 0$, is a tree with parent relationship $p:\cX\to\cX$ when, for all $x\in \cT_{h+1}$, its parent is given by $p(x)\in \cT_h$. We denote by $\cT_{\leq h}$ the set of the nodes of $\cT$ at depth lower than $h$: $\cT_{\leq h} = \bigcup_{h'\leq h} \cT_{h'}$. For $h\geq 0$ and a node $x\in \cT_{h'}$ with $h\leq h'$, we also denote by $p_h(x)$ its parent at depth $h$, that is, $p_h(x) = p^{h'-h}(x)$, and we write $x\succ s$ when $s$ is an ancestor of $x$.
To simplify notation in the sequel, we extend the relation $p_h$ by $p_h(x)=x$ when $x\in\cT_{\leq h}$. We now introduce a powerful inequality bounding the supremum of the difference of $f$ between a node and any of its descendants in $\cT$, provided that $\abs{\cT_h}$ is not excessively large. \begin{theorem}[Generic Chaining Upper Bound] \label{thm:chaining} Fix any $u>0$, $a>1$ and $\big(n_h\big)_{h\in\bN}$ an increasing sequence of integers. Set $u_i=u+n_i+\log\big(i^a\zeta(a)\big)$ where $\zeta$ is the Riemann zeta function. Then for any tree $\cT$ such that $\abs{\cT_h}\leq e^{n_h}$, \[\forall h\geq 0, \forall s\in\cT_h,~ \sup_{x\succ s} f(x)-f(s) \leq \omega_h\,,\] holds with probability at least $1-e^{-u}$, where, \[\omega_h = \sup_{x\in\cX} \sum_{i> h} \ell_{u_i}\big(p_i(x), p_{i-1}(x)\big)\,.\] \end{theorem} The full proof of the theorem can be found in Appendix~\ref{sec:proof_chaining}. It relies on repeated application of the union bound over the $e^{n_i}$ pairs $\big(p_i(x),p_{i-1}(x)\big)$. Now, if we look at $\cT_h$ as a discretization of $\cX$ where a point $x\in\cX$ is approximated by $p_h(x)\in\cT_h$, this result can be read in terms of discretization error, as stated in the following corollary. \begin{corollary}[Discretization error of $\cT_h$] \label{cor:chaining} Under the assumptions of Theorem~\ref{thm:chaining} with $\cX=\cT_{\leq h_0}$ for $h_0$ large enough, we have that, \[\forall h, \forall x\in\cX,~ f(x)-f(p_h(x)) \leq \omega_h\,,\] holds with probability at least $1-e^{-u}$. \end{corollary} \subsection{Geometric Interpretation for $(d,\psi)$-processes} \label{sec:psi_process} The previous inequality suggests that, to obtain a good upper bound on the discretization error, one should take $\cT$ such that $\ell_{u_i}(p_i(x),p_{i-1}(x))$ is as small as possible for every $i>0$ and $x\in\cX$. We now specify what this implies for $(d,\psi)$-processes.
In that case, we have: \[\omega_h \leq \sup_{x\in\cX} \sum_{i>h} \psi^{*-1}\Big(u_i,d\big(p_i(x),p_{i-1}(x)\big)\Big)\,.\] Writing $\Delta_i(x)=\sup_{x'\succ p_i(x)}d(x',p_i(x))$ for the $d$-radius of the ``cell'' at depth $i$ containing $x$, we remark that $d(p_i(x),p_{i-1}(x))\leq \Delta_{i-1}(x)$, that is: \[ \omega_h \leq \sup_{x\in\cX} \sum_{i>h} \psi^{*-1}\big(u_i,\Delta_{i-1}(x)\big)\,. \] In order to make this bound as small as possible, one should spread the points of $\cT_h$ in $\cX$ so that $\Delta_h(x)$ is uniformly small, while satisfying the requirement $\abs{\cT_h}\leq e^{n_h}$. Let $\Delta = \sup_{x,y\in\cX}d(x,y)$ and $\epsilon_h=\Delta 2^{-h}$, and define an $\epsilon$-net as a set $T\subseteq \cX$ for which $\cX$ is covered by $d$-balls of radius $\epsilon$ with centers in $T$. Then if one takes $n_h=2\log N(\cX,d,\epsilon_h)$, twice the metric entropy of $\cX$, that is the logarithm of the cardinality of a minimal $\epsilon_h$-net, we obtain with probability at least $1-e^{-u}$ that $\forall h\geq 0, \forall s\in\cT_h$\,: \begin{equation} \label{eq:classical_chaining} \sup_{x\succ s}f(x)-f(s) \leq \sum_{i>h} \psi^{*-1}(u_i, \epsilon_i)\,, \end{equation} where $u_i= u+2\log N(\cX,d,\epsilon_i)+\log(i^a\zeta(a))$. The tree $\cT$ achieving this bound is obtained by computing a minimal $\epsilon_h$-net at each depth, which can be done efficiently by Algorithm~\ref{alg:greedy_cover} if one is satisfied with a near-optimal heuristic exhibiting an approximation ratio of $\max_{x\in\cX} \sqrt{\log \log \abs{\cB(x,\epsilon)}}$, as discussed in Appendix~\ref{sec:greedy_cover}. This technique is often called \emph{classical chaining} \citep{Dudley1967} and we note that an implementation appears in \cite{Contal2015} on real data. However the upper bound in Eq.~\ref{eq:classical_chaining} is not tight, as can be seen for instance with a Gaussian process indexed by an ellipsoid, as discussed in Section~\ref{sec:gp}.
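Algorithm~\ref{alg:greedy_cover} itself is not reproduced in this excerpt; as a hypothetical stand-in, a standard greedy covering heuristic in Python (names ours) builds an $\epsilon$-net by repeatedly picking an uncovered point as a new center:

```python
def greedy_eps_net(points, d, eps):
    """Greedy covering heuristic: repeatedly pick an uncovered point as a
    new center until every point lies within distance eps of some center."""
    centers = []
    uncovered = list(points)
    while uncovered:
        c = uncovered[0]
        centers.append(c)
        uncovered = [p for p in uncovered if d(p, c) > eps]
    return centers
```

Every point ends up within distance $\epsilon$ of some center, and any two centers are more than $\epsilon$ apart, the two properties of $\epsilon$-nets used later in Section~\ref{sec:lower_bound}.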
We will present later, in Section~\ref{sec:lower_bound}, an algorithm computing a tree $\cT$ in quadratic time that leads to both lower and upper bounds on $\sup_{x\succ s}f(x)-f(s)$ when $f$ is a Gaussian process. The previous inequality is particularly convenient when we know a bound on the growth of the metric entropy of $(\cX,d)$, as stated in the following corollary. \begin{corollary}[Sub-Gamma process with metric entropy bound] \label{cor:subgamma_bigoh} If $f$ is sub-Gamma and there exist $R,D\in\bR$ such that for all $\epsilon>0$, $N(\cX,d,\epsilon) \leq (\frac R \epsilon)^D$, then with probability at least $1-e^{-u}$\,: \[\forall h\geq 0,\forall s\in\cT_h,~ \sup_{x\succ s}f(x)-f(s) =\cO\Big(\big(c(u + D h)+\sqrt{\nu(u+Dh)}\big) 2^{-h}\Big)\,.\] \end{corollary} \begin{proof} With the condition on the growth of the metric entropy, we obtain $u_i = \cO\big(u+D\log R + D i\big)$. With Eq.~\ref{eq:classical_chaining} for a sub-Gamma process we get, knowing that $\sum_{i=h}^\infty i 2^{-i} =\cO\big(h 2^{-h}\big)$ and $\sum_{i=h}^\infty \sqrt{i}2^{-i}=\cO\big(\sqrt{h}2^{-h}\big)$, that $\omega_h = \cO\Big(\big(c (u+D h) + \sqrt{\nu(u+D h)}\big)2^{-h}\Big)$. \end{proof} Note that the conditions of Corollary~\ref{cor:subgamma_bigoh} are fulfilled when $\cX\subset [0,R]^D$ and there is $c\in\bR$ such that for all $x,y\in\cX,~d(x,y) \leq c\norm{x-y}_2$, by simply cutting $\cX$ into hyper-cubes of side length $\epsilon$. We also remark that this condition is very close to the near-optimality dimension of the metric space $(\cX,d)$ defined in \cite{Bubeck2011}. However our condition constrains the entire search space $\cX$ instead of the near-optimal set $\cX_\epsilon = \big\{ x\in\cX: f(x)\geq \sup_{x^\star\in\cX}f(x^\star)-\epsilon\big\}$. Controlling the dimension of $\cX_\epsilon$ may allow one to obtain an exponential decay of the regret for particular deterministic functions $f$ with a quadratic behavior near their maximum.
However, to the best of our knowledge, no progress has been made in this direction for stochastic processes without constraining their behavior around the maximum. A reader interested in this subject may look at the recent work by \cite{Grill2015} on smooth and noisy functions with unknown smoothness, and the works by \cite{Freitas2012} or \cite{Wang2014b} on Gaussian processes without noise and with a quadratic local behavior. \section{Regret Bounds for Bandit Algorithms} \label{sec:regret} Now that we have a tool to discretize $\cX$ at any given accuracy, we show here how to derive an optimization strategy on $\cX$. \subsection{High Confidence Intervals} Assume that given $i-1$ observations $Y_{i-1}=(y_1,\dots,y_{i-1})$ at queried locations $X_{i-1}$, we can compute $L_i(x,u)$ and $U_i(x,u)$ for all $u>0$ and $x\in\cX$, such that: \[ \Pr\Big[ f(x) \in \big(L_i(x,u), U_i(x,u)\big) \Big] \geq 1-e^{-u}\,.\] Then for any $h(i)>0$ that we will carefully choose later, we obtain by a union bound over $\cT_{h(i)}$ that: \[ \Pr\Big[ \forall x\in\cT_{h(i)},~ f(x) \in \big(L_i(x,u+n_{h(i)}), U_i(x,u+n_{h(i)})\big) \Big] \geq 1-e^{-u}\,,\] and by an additional union bound over $\bN$ that: \begin{equation} \label{eq:ucb} \Pr\Big[ \forall i\geq 1, \forall x\in\cT_{h(i)},~ f(x) \in \big(L_i(x,u_i), U_i(x,u_i)\big) \Big] \geq 1-e^{-u}\,, \end{equation} where $u_i=u+n_{h(i)}+\log\big(i^a\zeta(a)\big)$ for any $a>1$ and $\zeta$ is the Riemann zeta function. Our \emph{optimistic} decision rule for the next query is thus: \begin{equation} \label{eq:argmax} x_i \in \argmax_{x\in\cT_{h(i)}} U_i(x,u_i)\,. \end{equation} Combining this with Corollary~\ref{cor:chaining}, we are able to prove the following bound linking the regret with $\omega_{h(i)}$ and the width of the confidence interval.
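The optimistic rule of Eq.~\ref{eq:argmax} is a one-liner once the upper confidence bound $U_i$ is computable on the discretization; a minimal Python sketch (names ours):

```python
def optimistic_query(nodes, upper_bound, u_i):
    """Decision rule of Eq. (argmax): query the discretization node
    maximizing the upper confidence bound U_i(x, u_i)."""
    return max(nodes, key=lambda x: upper_bound(x, u_i))
```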
\begin{theorem}[Generic Regret Bound] \label{thm:regret_bound} When for all $i\geq 1$, $x_i \in \argmax_{x\in \cT_{h(i)}} U_i(x,u_i)$ we have with probability at least $1- 2 e^{-u}$: \[ R_t = t \sup_{x\in\cX} f(x)-\sum_{i=1}^t f(x_i) \leq \sum_{i=1}^t\Big\{ \omega_{h(i)} + U_i(x_i,u_i)-L_i(x_i,u_i)\Big\}\,.\] \end{theorem} \begin{proof} Using Theorem~\ref{thm:chaining} we have that, \[\forall h\geq 0,\,\sup_{x\in\cX}f(x) \leq \omega_h+\sup_{x\in\cX}f(p_h(x))\,,\] holds with probability at least $1-e^{-u}$. Since $p_{h(i)}(x) \in \cT_{h(i)}$ for all $x\in\cX$, we can invoke Eq.~\ref{eq:ucb}\,: \[\forall i\geq 1,~ \sup_{x\in\cX} f(x)-f(x_i) \leq \omega_{h(i)}+\sup_{x\in\cT_{h(i)}}U_i(x,u_i)-L_i(x_i,u_i)\,,\] holds with probability at least $1-2e^{-u}$. Now by our choice for $x_i$, $\sup_{x\in\cT_{h(i)}}U_i(x,u_i) = U_i(x_i,u_i)$, proving Theorem~\ref{thm:regret_bound}. \end{proof} In order to select the level of discretization $h(i)$ to reduce the bound on the regret, it is required to have explicit bounds on $\omega_h$ and the confidence intervals. For example by choosing \[h(i)=\min\Big\{h\in\bN: \omega_h \leq \sqrt{\frac{\log i}{i}} \Big\}\,,\] we obtain $\sum_{i=1}^t \omega_{h(i)} \leq 2\sqrt{t\log t}$ as shown later. The performance of our algorithm is thus linked with the decrease rate of $\omega_h$, which characterizes the ``size'' of the optimization problem. We first study the case where $f$ is distributed as a Gaussian process, and then for a sum of squared Gaussian processes. \subsection{Results for Gaussian Processes} \label{sec:gp} The problem of regret minimization where $f$ is sampled from a Gaussian process has been introduced by \cite{Srinivas2010} and \cite{grunewalder2010}. Since then, it has been extensively adapted to various settings of Bayesian optimization with successful practical applications.
In the first work the authors address the cumulative regret and assume that either $\cX$ is finite or that the samples of the process are Lipschitz with high probability, where the distribution of the Lipschitz constant has Gaussian tails. In the second work the authors address the simple regret without noise and with known horizon; they assume that the canonical pseudo-metric $d$ is bounded by a given power of the supremum norm. In both works they require that the input space is a subset of $\bR^D$. The analysis in our paper permits us to derive similar bounds in a nonparametric fashion where $(\cX,d)$ is an arbitrary metric space. Note that if $(\cX,d)$ is not totally bounded, then the supremum of the process is infinite with probability one, and so is the regret of any algorithm. \paragraph{Confidence intervals and information gain.} First, $f$ being distributed as a Gaussian process, it is easy to derive confidence intervals given a set of observations. Writing $\mat{Y}_i$ for the vector of noisy values at points in $X_i$, we find by Bayesian inference \citep{Rasmussen2006} that: \[\Pr\Big[ \abs{f(x)-\mu_i(x)} \geq \sigma_i(x)\sqrt{2u}\Big] < e^{-u}\,,\] for all $x\in\cX$ and $u>0$, where: \begin{align} \label{eq:mu} \mu_i(x) &= \mat{k}_i(x)^\top \mat{C}_i^{-1}\mat{Y}_i\\ \label{eq:sigma} \sigma_i^2(x) &= k(x,x) - \mat{k}_i(x)^\top \mat{C}_i^{-1} \mat{k}_i(x)\,, \end{align} where $\mat{k}_i(x) = [k(x_j, x)]_{x_j \in X_i}$ is the covariance vector between $x$ and $X_i$, $\mat{C}_i = \mat{K}_i + \eta^2 \mat{I}$, $\mat{K}_i=[k(x,x')]_{x,x' \in X_i}$ is the covariance matrix, and $\eta^2$ is the variance of the Gaussian noise.
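Eqs.~\ref{eq:mu} and \ref{eq:sigma} translate directly into a few lines of linear algebra; a self-contained Python sketch with NumPy (function name ours; a sketch rather than an optimized implementation, which would reuse a Cholesky factorization of $\mat{C}_i$):

```python
import numpy as np

def gp_posterior(k, X, Y, x, eta):
    """Posterior mean mu_i(x) and variance sigma_i^2(x) of a GP at x given
    noisy observations Y at points X (Eqs. mu and sigma):
    mu = k_i(x)^T C^{-1} Y,  var = k(x,x) - k_i(x)^T C^{-1} k_i(x),
    with C = K + eta^2 I."""
    K = np.array([[k(a, b) for b in X] for a in X])   # covariance matrix K_i
    C = K + eta ** 2 * np.eye(len(X))                 # C_i = K_i + eta^2 I
    kx = np.array([k(a, x) for a in X])               # covariance vector k_i(x)
    w = np.linalg.solve(C, kx)                        # C_i^{-1} k_i(x)
    mu = w @ np.array(Y, dtype=float)                 # C symmetric, so w^T Y = k^T C^{-1} Y
    var = k(x, x) - kx @ w
    return mu, var
```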
Therefore the width of the confidence interval in Theorem~\ref{thm:regret_bound} can be bounded in terms of $\sigma_{i-1}$: \[U_i(x_i,u_i)-L_i(x_i,u_i) \leq 2\sigma_{i-1}(x_i)\sqrt{2u_i}\,.\] Furthermore it is proved in \cite{Srinivas2012} that the sum of the posterior variances at the queried points $\sigma_{i-1}^2(x_i)$ is bounded in terms of the information gain: \[\sum_{i=1}^t \sigma_{i-1}^2(x_i) \leq c_\eta \gamma_t\,,\] where $c_\eta=\frac{2}{\log(1+\eta^{-2})}$ and $\gamma_t = \max_{X_t\subseteq\cX:\abs{X_t}=t} I(X_t)$ is the maximum information gain of $f$ obtainable by a set of $t$ points. Note that for Gaussian processes, the information gain is simply $I(X_t)=\frac 1 2 \log\det(\mat{I}+\eta^{-2}\mat{K}_t)$. Finally, using the Cauchy-Schwarz inequality and the fact that $u_t$ is increasing we have with probability at least $1- 2 e^{-u}$: \begin{equation} \label{eq:gp_regret} R_t \leq 2\sqrt{2 c_\eta t u_t \gamma_t} + \sum_{i=1}^t \omega_{h(i)}\,. \end{equation} The quantity $\gamma_t$ heavily depends on the covariance of the process. At one extreme, if $k(\cdot,\cdot)$ is a Kronecker delta, $f$ is a Gaussian white noise process and $\gamma_t=\cO(t)$. At the other, \cite{Srinivas2012} proved the following bounds for widely used covariance functions and $\cX\subset \bR^D$: \begin{itemize} \item linear covariance $k(x,y)=x^\top y$: $\gamma_t=\cO\big(D \log t\big)$; \item squared exponential covariance $k(x,y)=e^{-\frac 1 2 \norm{x-y}_2^2}$: $\gamma_t=\cO\big((\log t)^{D+1}\big)$; \item Mat\'ern covariance $k(x,y)=\frac{2^{p-1}}{\Gamma(p)}\big(\sqrt{2p}\norm{x-y}_2\big)^p K_p\big(\sqrt{2p}\norm{x-y}_2\big)$, where $p>0$ and $K_p$ is the modified Bessel function: $\gamma_t=\cO\big( (\log t) t^a\big)$, with $a=\frac{D(D+1)}{2p+D(D+1)}<1$ for $p>1$. \end{itemize} \paragraph{Bounding $\omega_h$ with the metric entropy.} We now provide a policy to choose $h(i)$ minimizing the right-hand side of Eq.~\ref{eq:gp_regret}.
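The information gain $I(X_t)=\frac 1 2 \log\det(\mat{I}+\eta^{-2}\mat{K}_t)$ is straightforward to evaluate; a small Python sketch (name ours), using a log-determinant for numerical stability:

```python
import numpy as np

def information_gain(K, eta):
    """I(X_t) = 1/2 * log det(I + eta^{-2} K_t) for a Gaussian process,
    where K is the t x t covariance matrix of the queried points."""
    t = K.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(t) + K / eta ** 2)
    return 0.5 * logdet
```

For $\mat{K}_t=\mat{I}$ (the white noise extreme) this gives $\frac t 2\log(1+\eta^{-2})$, consistent with $\gamma_t=\cO(t)$.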
When an explicit upper bound on the metric entropy of the form $\log N(\cX,d,\epsilon)\leq \cO(-D \log \epsilon)$ holds, we can use Corollary~\ref{cor:subgamma_bigoh} which gives: \[\omega_h\leq\cO\big(\sqrt{u+D h}2^{-h}\big)\,.\] This upper bound holds true in particular for Gaussian processes with $\cX\subset[0,R]^D$ and for all $x,y\in\cX$, $d(x,y) \leq \cO\big(\norm{x-y}_2\big)$. For stationary covariance this becomes $k(x,x)-k(x,y)\leq \cO\big(\norm{x-y}_2\big)$, which is satisfied for the usual covariances used in Bayesian optimization such as the squared exponential covariance or the Mat\'ern covariance with parameter $p\in\big\{\frac 1 2, \frac 3 2, \frac 5 2\big\}$. For these values of $p$ it is well known that $k(x,y)=h_p\big(\sqrt{2p}\norm{x-y}_2\big) \exp\big(-\sqrt{2p}\norm{x-y}_2\big)$, with $h_{\frac 1 2}(\delta)=1$, $h_{\frac 3 2}(\delta)=1+\delta$ and $h_{\frac 5 2}(\delta)=1+\delta+\frac 1 3 \delta^2$. Then we see that it suffices to choose $h(i)=\ceil{\frac 1 2 \log_2 i}$ to obtain $\omega_{h(i)} \leq \cO\Big( \sqrt{\frac{u+\frac 1 2 D\log i}{i}} \Big)$, and since $\sum_{i=1}^t i^{-\frac 1 2}\leq 2 \sqrt{t}$ and $\sum_{i=1}^t \big(\frac{\log i}{i}\big)^{\frac 1 2} \leq 2\sqrt{t\log t}$, \[R_t \leq \cO\Big(\sqrt{t \gamma_t \log t }\Big)\,, \] holds with high probability. Such a bound holds true in particular for the Ornstein-Uhlenbeck process, which was conjectured impossible in \cite{Srinivas2010} and \cite{Srinivas2012}. However we do not know suitable bounds for $\gamma_t$ in this case and cannot deduce convergence rates. \paragraph{Gaussian processes indexed on ellipsoids and RKHS.} As mentioned in Section~\ref{sec:psi_process}, the previous bound on the discretization error is not tight for every Gaussian process.
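The half-integer Mat\'ern covariances above admit the closed form $k(x,y)=h_p(\delta)e^{-\delta}$ with $\delta=\sqrt{2p}\,\norm{x-y}_2$; a direct Python transcription (function name ours):

```python
import math

def matern(r, p):
    """Matern covariance at distance r for p in {1/2, 3/2, 5/2}:
    k = h_p(delta) * exp(-delta), delta = sqrt(2 p) * r,
    with h_{1/2} = 1, h_{3/2}(d) = 1 + d, h_{5/2}(d) = 1 + d + d^2 / 3."""
    delta = math.sqrt(2.0 * p) * r
    h = {0.5: 1.0, 1.5: 1.0 + delta, 2.5: 1.0 + delta + delta ** 2 / 3.0}[p]
    return h * math.exp(-delta)
```

At $r=0$ the covariance equals $1$ for all three values of $p$, as required for a normalized stationary covariance.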
An important example is when the search space is a (possibly infinite dimensional) ellipsoid: \[\cX=\Big\{ x\in \ell^2: \sum_{i\geq 1}\frac{x_i^2}{a_i^2} \leq 1\Big\}\,,\] where $a\in\ell^2$, $f(x) = \sum_{i\geq 1}x_ig_i$ with $g_i\iid \cN(0,1)$, and the pseudo-metric $d(x,y)$ coincides with the usual $\ell_2$ metric. The study of the supremum of such processes is connected to learning error bounds for kernel machines like Support Vector Machines, as a quantity bounding the learning capacity of a class of functions in an RKHS, see for example \cite{Mendelson2002}. It can be shown by geometrical arguments that $\E \sup_{x: d(x,s)\leq \epsilon} f(x)-f(s) \leq \cO\big(\sqrt{\sum_{i\geq 1}\min(a_i^2,\epsilon^2)}\big)\,,$ and that this supremum exhibits $\chi^2$-tails around its expectation, see for example \cite{Boucheron2013} and \cite{Talagrand2014}. This concentration is not captured by Corollary~\ref{cor:subgamma_bigoh}; one needs to leverage the construction of Section~\ref{sec:lower_bound} to get a tight estimate. Therefore the present work forms a step toward efficient and practical online model selection in such classes in the spirit of \cite{Rakhlin2014} and \cite{Gaillard2015}. \subsection{Results for Quadratic Forms of Gaussian Processes} \label{sec:gp2} The preeminent model in Bayesian optimization is by far the Gaussian process. Yet, it is a very common task to attempt minimizing a regret on functions which do not look like Gaussian processes. Consider the typical cases where $f$ has the form of a mean square error or a Gaussian likelihood. In both cases, minimizing $f$ is equivalent to minimizing a sum of squares, which we cannot assume to be sampled from a Gaussian process. To alleviate this problem, we show that this objective fits in our generic setting. Indeed, if we consider that $f$ is a sum of squares of Gaussian processes, then $f$ is sub-Gamma with respect to a natural pseudo-metric.
Since our setting is maximization, we simply take the negative of this sum. In this particular setting we allow the algorithm to observe directly the noisy values of the \emph{separated} Gaussian processes, instead of the sum of their squares. To simplify the forthcoming arguments, we will choose independent and identically distributed processes, but one can remove the covariances between the processes by Cholesky decomposition of the covariance matrix, and then our analysis adapts easily to processes with non-identical distributions. \paragraph{The stochastic smoothness of squared GP.} Let $f(x)=-\sum_{j=1}^N g_j^2(x)$, where $\big(g_j\big)_{1\le j\le N}$ are independent centered Gaussian processes $g_j\iid\cGP(0,k)$ with stationary covariance $k$ such that $k(x,x)=\kappa$ for every $x\in\cX$. We have for $x,y\in\cX$ and $\lambda<(2\kappa)^{-1}$: \[\log\E e^{\lambda(f(x)-f(y))} = -\frac{N}{2}\log\Big(1-4\lambda^2(\kappa^2-k^2(x,y))\Big)\,. \] Therefore with $d(x,y)=2\sqrt{\kappa^2-k^2(x,y)}$ and $\psi(\lambda,\delta)=-\frac{N}{2}\log\big(1-\lambda^2\delta^2\big)$, we conclude that $f$ is a $(d,\psi)$-process. Since $-\log(1-x^2) \leq \frac{x^2}{1-x}$ for $0\leq x <1$, which can be proved by series comparison, we obtain that $f$ is sub-Gamma with parameters $\nu=N$ and $c=1$. Now with Eq.~\ref{eq:sub_gamma_tail}, \[\ell_u(x,y)\leq (u+\sqrt{2 u N})d(x,y)\,.\] Furthermore, we also have that $d(x,y)\leq \cO(\norm{x-y}_2)$ for $\cX\subseteq \bR^D$ and standard covariance functions including the squared exponential covariance or the Mat\'ern covariance with parameter $p=\frac 3 2$ or $p=\frac 5 2$. Then Corollary~\ref{cor:subgamma_bigoh} leads to: \begin{equation} \label{eq:omega_gp2} \forall i\geq 0,~ \omega_i \leq \cO\Big( \big(u+D i + \sqrt{N(u+D i)}\big)2^{-i}\Big)\,. \end{equation} \paragraph{Confidence intervals for squared GP.} As mentioned above, we consider here that we are given separated noisy observations $\mat{Y}_i^j$ for each of the $N$ processes.
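The resulting pseudo-metric and tail bound for the squared-GP objective can be computed directly; a Python sketch (function name ours) of $d(x,y)=2\sqrt{\kappa^2-k^2(x,y)}$ combined with the sub-Gamma bound for $\nu=N$, $c=1$:

```python
import math

def squared_gp_tail(u, kappa, kxy, N):
    """Tail bound for f = -sum_{j<=N} g_j^2: with the pseudo-metric
    d(x, y) = 2 * sqrt(kappa^2 - k(x, y)^2), the process is sub-Gamma
    with nu = N and c = 1, hence ell_u(x, y) <= (u + sqrt(2 u N)) * d(x, y)."""
    d = 2.0 * math.sqrt(kappa ** 2 - kxy ** 2)
    return (u + math.sqrt(2.0 * u * N)) * d
```

When $k(x,y)=\kappa$ the pseudo-distance vanishes and so does the bound, as expected for perfectly correlated points.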
Deriving confidence intervals for $f$ given $\big(\mat{Y}_i^j\big)_{j\leq N}$ is a tedious task since the posterior processes $g_j$ given $\mat{Y}_i^j$ are neither standard nor centered. We propose here a solution based directly on a careful analysis of Gaussian integrals. The proof of the following technical lemma can be found in Appendix~\ref{sec:gp2_tail}. \begin{lemma}[Tails of squared Gaussian] \label{lem:gp2_tail} Let $X\sim\cN(\mu,\sigma^2)$ and $s>0$. We have: \[\Pr\Big[ X^2 \not\in \big(l^2, u^2\big)\Big] < e^{-s^2}\,,\] for $u=\abs{\mu}+\sqrt{2} \sigma s$ and $l=\max\big(0,\abs{\mu}-\sqrt{2}\sigma s\big)$. \end{lemma} Using this lemma, we compute the confidence interval for $f(x)$ by a union bound over $N$. Denoting by $\mu_i^j$ and $\sigma_i^j$ the posterior expectation and standard deviation of $g_j$ given $\mat{Y}_i^j$ (computed as in Eq.~\ref{eq:mu} and \ref{eq:sigma}), the confidence interval follows for all $x\in\cX$: \begin{equation} \label{eq:gp2_ci} \Pr\Big[ \forall j\leq N,~ g_j^2(x) \in \big( L_i^j(x,u), U_i^j(x,u) \big)\Big] \geq 1- e^{-u}\,, \end{equation} where \begin{align*} U_i^j(x,u) &= \Big(\abs{\mu_i^j(x)}+\sqrt{2(u+\log N)} \sigma_{i-1}^j(x)\Big)^2\\ \text{ and } L_i^j(x,u) &= \max\Big(0, \abs{\mu_i^j(x)}-\sqrt{2(u+\log N)} \sigma_{i-1}^j(x)\Big)^2\,. \end{align*} We are now ready to use Theorem~\ref{thm:regret_bound} to control $R_t$ by a union bound for all $i\in\bN$ and $x\in\cT_{h(i)}$. Note that under the event of Theorem~\ref{thm:regret_bound}, we have the following: \[\forall j\leq N, \forall i\in\bN, \forall x\in\cT_{h(i)},~ g_j^2(x) \in \big(L_i^j(x,u_i), U_i^j(x,u_i)\big)\,.\] Then we also have: \[\forall j\leq N, \forall i\in\bN, \forall x\in\cT_{h(i)},~ \abs{\mu_i^j(x)} \leq \abs{g_j(x)}+\sqrt{2(u_i+\log N)}\sigma_{i-1}^j(x)\,.\] Since $\mu_0^j(x)=0$, $\sigma_0^j(x)=\kappa$ and $u_0\leq u_i$ we obtain $\abs{\mu_i^j(x)} \leq \sqrt{2(u_i+\log N)}\big(\sigma_{i-1}^j(x)+\kappa\big)$.
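Lemma~\ref{lem:gp2_tail} gives an explicit interval; a direct Python transcription (function name ours) returning $(l^2, u^2)$:

```python
import math

def squared_gaussian_interval(mu, sigma, s):
    """Interval (l^2, u^2) containing X^2 with probability > 1 - exp(-s^2)
    for X ~ N(mu, sigma^2), following the lemma:
    u = |mu| + sqrt(2) * sigma * s,  l = max(0, |mu| - sqrt(2) * sigma * s)."""
    hi = abs(mu) + math.sqrt(2.0) * sigma * s
    lo = max(0.0, abs(mu) - math.sqrt(2.0) * sigma * s)
    return lo ** 2, hi ** 2
```

The clipping at $0$ handles the case where the deviation term exceeds $\abs{\mu}$, in which case $X^2$ is only bounded from above.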
Therefore Theorem~\ref{thm:regret_bound} gives, with probability at least $1-2e^{-u}$: \[R_t \leq \sum_{i=1}^t\Big\{\omega_{h(i)} + 8\sum_{j\leq N}(u_i+\log N)\big(\sigma_{i-1}^j(x_i)+\kappa\big)\sigma_{i-1}^j(x_i) \Big\}\,.\] It is now possible to proceed as in Section~\ref{sec:gp} and bound the sum of posterior variances with $\gamma_t$\,: \[R_t \leq \cO\Big( N u_t \big(\sqrt{t \gamma_t} + \gamma_t\big) + \sum_{i=1}^t \omega_{h(i)} \Big)\,.\] As before, under the conditions of Eq.~\ref{eq:omega_gp2} and choosing the discretization level $h(i)=\ceil{\frac 1 2 \log_2 i}$ we obtain $\omega_{h(i)}=\cO\Big(i^{-\frac 1 2} \big(u+\frac 1 2 D\log i\big)\sqrt{N}\Big)$, and since $\sum_{i=1}^t i^{-\frac 1 2} \log i\leq 2 \sqrt{t}\log t$, \[R_t \leq \cO\Big(N \big(\sqrt{t\gamma_t \log t}+\gamma_t\big) + \sqrt{Nt}\log t\Big)\,,\] holds with high probability. \section{Tightness Results for Gaussian Processes} \label{sec:lower_bound} We present in this section a strong result on the tree $\cT$ obtained by Algorithm~\ref{alg:tree_lb}. Let $f$ be a centered Gaussian process $\cGP(0,k)$ with arbitrary covariance $k$. We show that a converse of Theorem~\ref{thm:chaining} holds with high probability. \subsection{A High-Probability Lower Bound on the Supremum} We first recall that for Gaussian processes we have $\psi^{*-1}(u_i,\delta)=\cO\big(\delta \sqrt{u+n_i}\big)$, that is: \[\forall h\geq 0, \forall s\in\cT_h,~\sup_{x\succ s}f(x)-f(s) \leq \cO\Big(\sup_{x\succ s}\sum_{i>h}\Delta_i(x) \sqrt{u+n_i}\Big)\,,\] with probability at least $1-e^{-u}$. In the following, we fix the geometric sequence $n_i=2^i$ for all $i\geq 1$. Therefore we have the following upper bound: \begin{corollary} Fix any $u>0$ and let $\cT$ be constructed as in Algorithm~\ref{alg:tree_lb}.
Then there exists a constant $c_u>0$ such that, for $f\sim\cGP(0,k)$, \[\sup_{x\succ s} f(x)-f(s) \leq c_u \sup_{x\succ s} \sum_{i>h} \Delta_i(x)2^{\frac i 2}\,,\] holds for all $h\geq 0$ and $s\in\cT_h$ with probability at least $1-e^{-u}$. \end{corollary} To show the tightness of this result, we prove the following probabilistic bound: \begin{theorem}[Generic Chaining Lower Bound] \label{thm:lower_bound} Fix any $u>0$ and let $\cT$ be constructed as in Algorithm~\ref{alg:tree_lb}. Then there exists a constant $c_u>0$ such that, for $f\sim\cGP(0,k)$, \[\sup_{x\succ s} f(x)-f(s) \geq c_u \sup_{x\succ s}\sum_{i=h}^\infty \Delta_i(x)2^{\frac i 2}\,,\] holds for all $h\geq 0$ and $s\in\cT_h$ with probability at least $1-e^{-u}$. \end{theorem} This lower bound has significant theoretical and practical consequences. It first says that we cannot discretize $\cX$ in a finer way than Algorithm~\ref{alg:tree_lb} does, up to a constant factor. This also means that even if the search space $\cX$ is ``smaller'' than what the metric entropy suggests, as for ellipsoids, then Algorithm~\ref{alg:tree_lb} finds the correct ``size''. To the best of our knowledge, this result is the first construction of a tree $\cT$ leading to a lower bound at every depth with high probability. The proof of this theorem shares some similarity with the constructions used to obtain lower bounds in expectation, see for example \cite{Talagrand2014} or \cite{Ding2011} for a tractable algorithm. \subsection{Analysis of Algorithm~\ref{alg:tree_lb}} Algorithm~\ref{alg:tree_lb} proceeds as follows. It first computes $(\cT_h)_{h\geq 0}$ a succession of $\epsilon_h$-nets as in Section~\ref{sec:psi_process} with $\epsilon_h=\Delta 2^{-h}$, where $\Delta$ is the diameter of $\cX$. The parent of a node is set to the closest node in the upper level, \[\forall t\in\cT_h,~ p(t) = \argmin_{s\in\cT_{h-1}} d(t,s)\,.\] Therefore we have $d(t,p(t))\leq \epsilon_{h-1}$ for all $t\in\cT_h$.
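The parent-assignment step of Algorithm~\ref{alg:tree_lb} (closest node one level up) is, in isolation, a one-liner; a Python sketch (name ours) for two consecutive net levels:

```python
def assign_parents(level_prev, level_cur, d):
    """For each node t at depth h, set p(t) to the closest node s at depth h-1,
    so that d(t, p(t)) <= eps_{h-1} when level_prev is an eps_{h-1}-net."""
    return {t: min(level_prev, key=lambda s: d(t, s)) for t in level_cur}
```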
Moreover, by looking at how the $\epsilon_h$-net is computed we also have $d(t_i,t_j) \geq \epsilon_h$ for all distinct $t_i,t_j\in\cT_h$. These two properties are crucial for the proof of the lower bound. Then, the algorithm updates the tree to make it well balanced, that is, such that no node $t\in\cT_h$ has more than $e^{n_{h+1}-n_h}=e^{2^h}$ children. We note that this condition will already be satisfied in every reasonable space, so that the complex procedure that follows is only required in extreme cases. To enforce this condition, Algorithm~\ref{alg:tree_lb} starts from the leaves and ``prunes'' the branches if they outnumber $e^{2^h}$. We remark that this backward step is not present in the literature on generic chaining, and is needed for our objective of a lower bound with high probability. By doing so, it creates a node called a \emph{pruned node} which takes as children the pruned branches. For this construction to be tight, the pruning step has to be done carefully. Algorithm~\ref{alg:tree_lb} attaches to every pruned node a value, computed using the values of its children, hence the backward strategy. When pruning branches, the algorithm keeps the $e^{2^h}$ nodes with maximum values and displaces the others. The intuition behind this strategy is to avoid pruning branches that already contain pruned nodes. Finally, note that this pruning step may create unbalanced pruned nodes when the number of nodes at depth $h$ is far larger than $e^{2^h}$. When this is the case, Algorithm~\ref{alg:tree_lb} restarts the pruning with the updated tree to recompute the values. Thanks to the doubly exponential growth in the balance condition, this cannot occur more than $\log \log \abs{\cX}$ times and the total complexity is $\cO\big(\abs{\cX}^2\big)$. \subsection{Computing the Pruning Values and Anti-Concentration Inequalities} We end this section by describing the values used for the pruning step.
We need a function $\varphi(\cdot,\cdot,\cdot,\cdot)$ satisfying the following anti-concentration inequality. For all $m\in\bN$, let $s\in\cX$ and $t_1,\dots,t_m\in\cX$ be such that $\forall i\leq m,~p(t_i)=s$ and $d(s,t_i)\leq \Delta$, and finally $d(t_i,t_j)\geq \alpha$ for all $i\neq j$. Then $\varphi$ is such that: \begin{equation} \label{eq:varphi} \Pr\Big[\max_{i\leq m}f(t_i)-f(s) \geq \varphi(\alpha,\Delta,m,u) \Big]>1-e^{-u}\,. \end{equation} A function $\varphi$ satisfying this hypothesis is described in Lemma~\ref{lem:max_one_lvl} in Appendix~\ref{sec:proof_lower_bound}. Then the value $V_h(s)$ of a node $s\in\cT_h$ is computed with $\Delta_i(s) = \sup_{x\succ s} d(x,s)$ as: \[V_h(s) = \sup_{x\succ s} \sum_{i>h} \varphi\Big(\frac 1 2 \Delta_h(x),\Delta_h(x),m,u\Big) \one_{p_i(x)\text{ is a pruned node}}\,.\] The two steps proving Theorem~\ref{thm:lower_bound} are: first, show that $\sup_{x\succ s}f(x)-f(s) \geq c_u V_h(s)$ for some $c_u>0$ with probability at least $1-e^{-u}$; second, show that $V_h(s) \geq c_u'\sup_{x\succ s}\sum_{i>h}\Delta_i(x)2^{\frac i 2}$ for some $c_u'>0$. The full proof of this theorem can be found in Appendix~\ref{sec:proof_lower_bound}. \paragraph{Acknowledgements.} We thank C\'edric Malherbe and Kevin Scaman for fruitful discussions.
\section{Introduction} A drawing of a graph $G$ in the plane is a mapping, in which every vertex of $G$ is mapped into a point in the plane, and every edge into a continuous curve connecting the images of its endpoints. We assume that no three curves meet at the same point, and no curve contains an image of any vertex other than its endpoints. A \emph{crossing} in such a drawing is a point where the images of two edges intersect, and the \emph{crossing number} of a graph $G$, denoted by $\optcro{G}$, is the smallest number of crossings achievable by any drawing of $G$ in the plane. The goal in the {\sf Minimum Crossing Number}\xspace problem is to find a drawing of the input graph $G$ with the minimum number of crossings. We denote by $n$ the number of vertices in $G$, and by $d_{\mbox{\textup{\footnotesize{max}}}}$ its maximum vertex degree. The concept of the graph crossing number dates back to 1944, when P\'al Tur\'an posed the question of determining the crossing number of the complete bipartite graph $K_{m,n}$. This question was motivated by improving the performance of workers at a brick factory, where Tur\'an was working at the time (see Tur\'an's account in \cite{turan_first}). Later, Anthony Hill (see~\cite{Guy-complete-graphs}) posed the question of computing the crossing number of the complete graph $K_n$, and Erd\H{o}s and Guy~\cite{erdos_guy73} noted that \emph{``Almost all questions one can ask about crossing numbers remain unsolved.''} Since then, the problem has become a subject of intense study, with hundreds of papers written on the subject (see, e.g., the extensive bibliography maintained by Vrt'o \cite{vrto_biblio}). Despite this enormous stream of results and ideas, some of the most basic questions about the crossing number problem remain unanswered. For example, the crossing number of $K_{11}$ was established just a few years ago (\cite{K11}), while the answer for $K_t$, $t\geq 13$, remains elusive.
We note that in general $\optcro{G}$ can be as large as $\Omega(n^4)$, for example for the complete graph. In particular, one of the famous results in this area, due to Ajtai et al.~\cite{ajtai82} and Leighton~ \cite{leighton_book} states that if $|E(G)|\geq 4n$, then $\optcro{G}=\Omega(|E(G)|^3/n^2)$. In this paper we focus on the algorithmic aspect of the problem. The first non-trivial algorithm for {\sf Minimum Crossing Number}\xspace was obtained by Leighton and Rao \cite{LR}, who combined their breakthrough result on balanced separators with the techniques of Bhatt and Leighton~\cite{bhatt84} for VLSI design, to obtain an algorithm that finds a drawing of any bounded-degree $n$-vertex graph with at most $O(\log^4 n) \cdot (n + \optcro{G})$ crossings. This bound was later improved to $O(\log^3 n) \cdot (n+\optcro{G})$ by Even, Guha and Schieber \cite{EvenGS02}, and the new approximation algorithm of Arora, Rao and Vazirani~\cite{ARV} for Balanced Cut gives a further improvement to $O(\log^2 n) \cdot (n+\optcro{G})$, thus implying an $O(n \cdot \log^2 n)$-approximation for {\sf Minimum Crossing Number}\xspace on bounded-degree graphs. This result can also be extended to general graphs with maximum vertex degree $d_{\mbox{\textup{\footnotesize{max}}}}$, where the approximation factor becomes $O(n\cdot\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}})\cdot \log^2n)$. Chuzhoy, Makarychev and Sidiropoulos~\cite{CMS10} have recently improved this result to an $O(n\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}})\cdot \log^{3/2} n)$-approximation. On the negative side, the problem was shown to be NP-complete by Garey and Johnson \cite{crossing_np_complete}, and remains NP-complete even on cubic graphs~\cite{Hlineny06a}. More surprisingly, even in the very restricted case, where the input graph $G$ is obtained by adding a single edge to a planar graph, the problem is still NP-complete~\cite{cabello_edge}. 
The NP-hardness proof of~\cite{crossing_np_complete}, combined with the inapproximability result for Minimum Linear-Arrangement \cite{Ambuhl07}, implies that there is no PTAS for {\sf Minimum Crossing Number}\xspace unless NP has randomized subexponential time algorithms. To summarise, although current lower bounds do not rule out the possibility of a constant-factor approximation for the problem, the state of the art, prior to this work, only gives an $\tilde O(n\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}))$-approximation. In view of this glaring gap in our understanding of the problem, a natural question is whether we can obtain good algorithms for the case where the optimal solution cost is low --- arguably, the most interesting setting for this problem. A partial answer was given by Grohe~\cite{Grohe04}, who showed that the problem is fixed-parameter tractable. Specifically, Grohe designed an exact $O(n^2)$-time algorithm for the case where the optimal solution cost is bounded by a constant. Later, Kawarabayashi and Reed \cite{KawarabayashiR07} gave a linear-time algorithm for the same setting. Unfortunately, the running time of both algorithms depends super-exponentially on the optimal solution cost. Our main result is an efficient randomized algorithm, that, given any $n$-vertex graph $G$ with maximum degree $d_{\mbox{\textup{\footnotesize{max}}}}$, produces a drawing of $G$ with $O\left ((\optcro{G})^{10}\cdot\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right )$ crossings with high probability. In particular, we obtain an $O\left (n^{9/10}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right )$-approximation for general graphs, and an $\tilde{O}(n^{9/10})$-approximation for bounded-degree graphs, thus breaking the long-standing barrier of $\tilde{O}(n)$-approximation for this setting.
We note that many special cases of the {\sf Minimum Crossing Number}\xspace problem have been extensively studied, with better approximation algorithms known for some. Examples include $k$-apex graphs~\cite{crossing_apex,CMS10}, bounded genus graphs~\cite{BorozkyPT06,crossing_genus,crossing_projective,crossing_torus,CMS10} and minor-free graphs~\cite{WoodT06}. Further overview of work on {\sf Minimum Crossing Number}\xspace can be found in the expositions of Richter and Salazar \cite{richter_survey}, Pach and T\'{o}th \cite{pach_survey}, Matou\v{s}ek \cite{matousek_book}, and Sz\'{e}kely \cite{szekely_survey}. \noindent {\bf Our results and techniques.} Our main result is summarized in the following theorem. \begin{theorem}\label{theorem: main-crossing-number} There is an efficient randomized algorithm, that, given any $n$-vertex graph $G$ with maximum degree $d_{\mbox{\textup{\footnotesize{max}}}}$, finds a drawing of $G$ in the plane with $O\left ((\optcro{G})^{10}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right )$ crossings with high probability. \end{theorem} Combining this theorem with the algorithm of Even et al.~\cite{EvenGS02}, we obtain the following corollary. \begin{corollary}\label{corollary: main-approx-crossing-number} There is an efficient randomized $O\left (n^{9/10}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right )$-approximation algorithm for {\sf Minimum Crossing Number}\xspace. \end{corollary} We now give an overview of our techniques. Instead of directly solving the {\sf Minimum Crossing Number}\xspace problem, it is more convenient to work with a closely related problem -- {\sf Minimum Planarization}\xspace. In this problem, given a graph $G$, the goal is to find a minimum-cardinality subset $E^*$ of edges, such that the graph $G\setminus E^*$ is planar. 
The two problems are closely related, and this connection was recently formalized by~\cite{CMS10}, in the following theorem: \begin{theorem}[\cite{CMS10}]\label{thm:CMS10} Let $G=(V,E)$ be any $n$-vertex graph of maximum degree $d_{\max}$, and suppose we are given a subset $E^*\subset E$ of edges, $|E^*|=k$, such that $G\setminus E^*$ is planar. Then there is an efficient algorithm to find a drawing of $G$ in the plane with at most $O\left(d_{\max}^3 \cdot k \cdot (\optcro{G} + k)\right)$ crossings. \end{theorem} Therefore, in order to solve the {\sf Minimum Crossing Number}\xspace problem, it is sufficient to find a good solution to the {\sf Minimum Planarization}\xspace problem on the same graph. We note that an $O(\sqrt {n\log n}\cdot d_{\mbox{\textup{\footnotesize{max}}}})$-approximation algorithm for the {\sf Minimum Planarization}\xspace problem follows easily from the Planar Separator theorem of Lipton and Tarjan~\cite{planar-separator} (see e.g.~\cite{CMS10}), and we are not aware of any other algorithmic results for the problem. Our main technical result is the proof of the following theorem, which, combined with Theorem~\ref{thm:CMS10}, implies Theorem~\ref{theorem: main-crossing-number}. \begin{theorem}\label{thm:main} There is an efficient randomized algorithm that, given an $n$-vertex graph $G=(V,E)$ with maximum degree $d_{\mbox{\textup{\footnotesize{max}}}}$, finds a subset $E^*\subseteq E$ of edges, such that $G\setminus E^*$ is planar, and with high probability $|E^*| = O\left ((\optcro{G})^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)\right )$. \end{theorem} We now describe our main ideas and techniques. Given an optimal solution $\phi$ to the {\sf Minimum Crossing Number}\xspace problem on graph $G$, we say that an edge $e\in E(G)$ is \emph{good} iff it does not participate in any crossings in $\phi$. 
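The arithmetic connecting the two theorems is worth spelling out: substituting the planarization bound $k=|E^*|=O\left((\optcro{G})^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot\log n)\right)$ into the guarantee of Theorem~\ref{thm:CMS10}, the dominant term is $d_{\max}^3\cdot k^2$, so the number of crossings is at most

```latex
O\left(d_{\max}^3 \cdot k \cdot (\optcro{G} + k)\right)
  = O\left(d_{\max}^3 \cdot k^2\right)
  = O\left((\optcro{G})^{10}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right),
```

where the first equality uses $\optcro{G}=O(k)$ (and if instead $k<\optcro{G}$, the term $d_{\max}^3\cdot k\cdot\optcro{G}$ dominates and the bound only improves).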
For convenience, we consider a slightly more general version of the problem, where, in addition to the graph $G$, we are given a simple cycle $X\subseteq G$, that we call the \emph{bounding box}, and our goal is to find a drawing of $G$, such that the edges of $X$ do not participate in any crossings, and all vertices and edges of $G\setminus X$ appear on the same side of the closed curve to which $X$ is mapped. In other words, if $\gamma_X$ is the simple closed curve to which $X$ is mapped, and $F_1,F_2$ are the two faces into which $\gamma_X$ partitions the plane, then one of the faces $F\in \set{F_1,F_2}$ must contain the drawings of all the edges and vertices of $G\setminus X$. We call such a drawing \emph{a drawing of $G$ inside the bounding box $X$}. Since we allow $X$ to be empty, this is indeed a generalization of the {\sf Minimum Crossing Number}\xspace problem. In fact, from Theorem~\ref{thm:CMS10}, it is enough to find what we call a \emph{weak solution} to the problem, namely, a small-cardinality subset $E^*$ of edges with $E^*\cap E(X)=\emptyset$, such that there is a planar drawing of the remaining graph $G\setminus E^*$ inside the bounding box $X$. Our proof consists of three major ingredients that we describe below. The algorithm is iterative. Throughout the algorithm, we gradually remove some edges from the graph, and gradually build a planar drawing of the remaining graph. One of the central notions we use is that of \emph{graph skeletons}. A skeleton $K$ of graph $G$ is simply a sub-graph of $G$, that contains the bounding box $X$, and has a unique planar drawing (for example, it may be convenient to think of $K$ as being $3$-vertex connected). 
Given a skeleton $K$, and a small subset $E'$ of edges (that we eventually remove from the graph), we say that $K$ is an \emph{admissible skeleton} iff all the edges of $K$ are good, and every connected component of $G\setminus (K\cup E')$ only contains a small number of vertices (say, at most $(1-1/\rho)n$, for some balance parameter $\rho$). Since $K$ has a unique planar drawing, and all its edges are good, we can find its unique planar drawing efficiently, and it must be identical to the drawing $\phi_K$ of $K$ induced by the optimal solution $\phi$. Let ${\mathcal{F}}$ be the set of faces in this drawing. Since $K$ only contains good edges, for each connected component $C$ of $G\setminus (K\cup E')$, all edges and vertices of $C$ must be drawn completely inside one of the faces $F_C\in {\mathcal{F}}$ in $\phi$. Therefore, if, for each such connected component $C$, we can identify the face $F_C$ inside which it needs to be embedded, then we can recursively solve the problems induced by each such component $C$, together with the bounding box formed by the boundary of $F_C$. In fact, given an admissible skeleton $K$, we show that we can find a good assignment of the connected components of $G\setminus (K\cup E')$ to the faces of ${\mathcal{F}}$, so that, on the one hand, all resulting sub-problems have solutions of total cost at most $\optcro{G}$, while, on the other hand, if we combine weak solutions to these sub-problems with the set $E'$ of edges, we obtain a feasible weak solution to the original problem. The assignment of the components to the faces of ${\mathcal{F}}$ is done by reducing the problem to an instance of the Min-Uncut problem. We defer the details of this part to later sections, and focus here on finding an admissible skeleton $K$. Our second main ingredient is the use of well-linked sets of vertices, and well-linked balanced bi-partitions. 
Given a set $S$ of vertices, let $G[S]$ be the sub-graph of $G$ induced by $S$, and let $\Gamma(S)$ be the subset of vertices of $S$ incident to the edges in $E(S,\overline S)$. Informally, we say that $S$ is $\alpha$-well-linked, iff every pair of vertices in $\Gamma(S)$ can send one flow unit to each other, with overall congestion bounded by $\alpha|\Gamma(S)|$. We say that a bi-partition $(S,\overline S)$ of the vertices of $G$ is $\rho$-balanced and $\alpha$-well-linked, iff $|S|,|\overline S|\geq n/\rho$, and both $S$ and $\overline S$ are $\alpha$-well-linked. Suppose we can find a $\rho$-balanced, $\alpha$-well-linked bi-partition of $G$ (it is convenient to think of $\rho,\alpha=\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)$). In this case, we show a randomized algorithm that w.h.p. constructs an admissible skeleton $K$, as follows. Let ${\mathcal{P}},{\mathcal{P}}'$ be the collections of the flow-paths in $G[S]$ and $G[\overline S]$ respectively, guaranteed by the well-linkedness of $S$ and $\overline S$. Since the congestion on all edges is relatively low, only a small number of paths in ${\mathcal{P}}\cup {\mathcal{P}}'$ contain bad edges. Therefore, if we choose a random collection of paths from ${\mathcal{P}}$ and ${\mathcal{P}}'$ with appropriate probability, the resulting skeleton $K$, obtained from the union of these paths, is unlikely to contain bad edges. Moreover, we can show that w.h.p., every connected component of $G\setminus K$ only contains a small number of edges in $E(S,\overline S)$. It is still possible that some connected component $C$ of $G\setminus K$ contains many vertices of $G$. However, only one such component $C$ may contain more than $n/2$ vertices. Let $E'$ be the subset of edges in $E(S,\overline S)$ that belong to $C$. Then, since the original cut $(S,\overline S)$ is $\rho$-balanced, once we remove the edges of $E'$ from $C$, it will decompose into small enough components. 
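As a concrete toy illustration (not part of the paper's algorithm), the following brute-force sketch computes the largest $\alpha$ for which a small vertex set is $\alpha$-well-linked, using the partition-based form of the definition given formally in the Well-linkedness subsection below; all names are hypothetical.

```python
from itertools import combinations

def best_alpha(J, edges):
    """Largest alpha for which J is alpha-well-linked, per the partition-based
    definition: for every partition (J1, J2) of J, |E(J1, J2)| must be at least
    alpha * min(|T1|, |T2|), where Ti counts the edges of out(J) incident to Ji."""
    J = set(J)
    out_edges = [(u, v) for (u, v) in edges if (u in J) != (v in J)]
    alpha = float('inf')
    verts = sorted(J)
    for r in range(1, len(verts)):
        for J1 in combinations(verts, r):
            J1 = set(J1)
            J2 = J - J1
            cut = sum(1 for (u, v) in edges
                      if (u in J1 and v in J2) or (u in J2 and v in J1))
            t1 = sum(1 for (u, v) in out_edges if u in J1 or v in J1)
            t2 = sum(1 for (u, v) in out_edges if u in J2 or v in J2)
            m = min(t1, t2)
            if m > 0:
                alpha = min(alpha, cut / m)
    return alpha

# J is a 4-cycle; each of its vertices also has one terminal edge leaving J.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
terminals = [(0, 4), (1, 5), (2, 6), (3, 7)]
print(best_alpha([0, 1, 2, 3], cycle + terminals))  # -> 1.0
```

The exponential enumeration is only for illustration; the actual algorithm certifies well-linkedness via flows and approximate sparsest cut, as discussed in Section~\ref{sec: Prelims}.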
This will ensure that all connected components of $G\setminus (K\cup E')$ are small enough, and $K$ is admissible. Using these ideas, given an efficient algorithm for computing $\rho$-balanced $\alpha$-well-linked cuts, we can obtain an algorithm for the {\sf Minimum Crossing Number}\xspace problem. Unfortunately, we do not have an efficient algorithm for computing such cuts. We can only compute such cuts in graphs that do not contain a certain structure that we call \emph{nasty vertex sets}. Informally, a subset $S$ of vertices is a nasty set iff $|S|\gg|E(S,\overline S)|^2$, and the sub-graph $G[S]$ induced by $S$ is planar. We show an algorithm that, given any graph $G$, either produces a $\rho$-balanced $\alpha$-well-linked cut, or finds a nasty set $S$ in $G$. Therefore, if $G$ does not contain any nasty sets, we can compute the $\rho$-balanced $\alpha$-well-linked bi-partition of $G$, and hence obtain an algorithm for {\sf Minimum Crossing Number}\xspace. Moreover, given {\bf any} graph $G$, if our algorithm fails to produce a good solution to {\sf Minimum Crossing Number}\xspace on $G$, then w.h.p. it returns a nasty set of vertices in $G$. The third major component of our algorithm is handling the nasty sets. Suppose we are given a nasty set $S$, and assume for now that it is also $\alpha$-well-linked for some parameter $\alpha=\operatorname{poly}(\log n)$. Let $\Gamma(S)$ denote the endpoints of the edges in $E(S,\overline S)$ that belong to $S$, and let $|\Gamma(S)|=z$. Recall that $|S|\gg z^2$, and $G[S]$ is planar. Intuitively, in this case we can use the $z\times z$ grid to ``simulate'' the sub-graph $G[S]$. More precisely, we replace the sub-graph $G[S]$ with the $z\times z$ grid $Z_S$, and identify the vertices of the first row of the grid with the vertices in $\Gamma(S)$. We call the resulting graph the \emph{contracted graph}, and denote it by $G_{|S}$. Notice that the number of vertices in $G_{|S}$ is smaller than that in $G$. 
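The contraction step can be sketched in a few lines; this is a toy version only (the paper's actual procedure additionally maintains the matching structure between the grid's first row and the terminals, and applies a well-linked decomposition first), and it assumes every edge of $E(S,\overline S)$ has its $S$-endpoint in $\Gamma(S)$, which holds by the definition of $\Gamma(S)$.

```python
def contract(edges, S, interface):
    """Toy sketch of the contraction step: drop the edges of G[S], add a
    z x z grid Z_S, and identify the grid's first row with the interface
    vertices Gamma(S), which carry the edges of out(S)."""
    S = set(S)
    z = max(len(interface), 2)          # grids are required to have k_Z >= 2
    # edges with at most one endpoint in S survive; their S-side endpoints
    # are interface vertices, reused below as the grid's first row
    survivors = [(u, v) for (u, v) in edges if not (u in S and v in S)]
    def g(r, c):
        return interface[c] if r == 0 and c < len(interface) else ('g', r, c)
    grid = []
    for r in range(z):
        for c in range(z):
            if c + 1 < z:
                grid.append((g(r, c), g(r, c + 1)))
            if r + 1 < z:
                grid.append((g(r, c), g(r + 1, c)))
    return survivors + grid

# a long planar path S with two interface vertices: 22 vertices shrink to 6,
# mirroring the fact that |S| >> z^2 makes the contracted graph smaller
S = [('s', i) for i in range(20)]
path = list(zip(S, S[1:]))
outside = [(S[0], 'a'), (S[1], 'b'), ('a', 'b')]
H = contract(path + outside, S, [S[0], S[1]])
print(len({u for e in H for u in e}))  # -> 6
```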
When $S$ is not well-linked, we perform a simple well-linked decomposition procedure to partition $S$ into a collection of well-linked subsets, and replace each one of them with a grid separately. Given a drawing of the resulting contracted graph $G_{|S}$, we say that it is a \emph{canonical drawing} if the edges of the newly added grids do not participate in any crossings. Similarly, we say that a planarizing subset $E^*$ of edges is a weak canonical solution for $G_{|S}$, iff the edges of the grids do not belong to $E^*$. We show that the crossing number of $G_{|S}$ is bounded by $\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\optcro{G}$, and this bound remains true even for canonical drawings. On the other hand, we show that given any weak canonical solution $E^*$ for $G_{|S}$, we can efficiently find a weak solution of comparable cost for $G$. Therefore, it is enough to find a weak feasible canonical solution for graph $G_{|S}$. However, even the contracted graph $G_{|S}$ may still contain nasty sets. We then show that, given any nasty set $S'$ in $G_{|S}$, we can find another subset $S''$ of vertices in the original graph $G$, such that the contracted graph $G_{|S''}$ contains fewer vertices than $G_{|S}$. The crossing number of $G_{|S''}$ is again bounded by $\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\optcro{G}$ even for canonical drawings, and a weak canonical solution to $G_{|S''}$ gives a weak solution to $G$ as before. Our algorithm then consists of a number of stages. In each stage, it starts with the current contracted graph $G_{|S}$ (where in the first stage, $S=\emptyset$, and $G_{|S}=G$). It then either finds a good weak canonical solution for problem $G_{|S}$, thus giving a feasible solution to the original problem, or returns a nasty set $S'$ in graph $G_{|S}$. 
We then construct a new contracted graph $G_{|S''}$, which contains fewer vertices than $G_{|S}$, and becomes the input to the next stage. \noindent{\bf Organization.} We start with some basic definitions, notation, and general results on cuts and flows in Section~\ref{sec: Prelims}. We then present a more detailed algorithm overview in Section~\ref{sec: overview}. Section~\ref{sec: graph contraction} is devoted to the graph contraction step, and the rest of the algorithm appears in Sections~\ref{sec: alg} and~\ref{sec: iteration}. For convenience, the list of all main parameters appears in Section~\ref{sec: param-list} of the Appendix. Our conclusions appear in Section~\ref{sec: conclusions}. \label{------------------------------------------------Preliminaries--------------------------------------------} \section{Preliminaries and Notation}\label{sec: Prelims} In order to avoid confusion, throughout the paper, we denote the input graph by ${\mathbf{G}}$, with $|V({\mathbf{G}})|=n$, and maximum vertex degree $d_{\mbox{\textup{\footnotesize{max}}}}$. In statements regarding arbitrary graphs, we denote them by $G$, to distinguish them from the specific graph ${\mathbf{G}}$. \noindent{\bf General Notation.} We use the words ``drawing'' and ``embedding'' interchangeably. Given any graph $G$, a drawing $\phi$ of $G$, and any sub-graph $H$ of $G$, we denote by $\phi_H$ the drawing of $H$ induced by $\phi$, and by $\operatorname{cr}_{\phi}(G)$ the number of crossings in the drawing $\phi$ of $G$. Notice that we can assume w.l.o.g. that no edge crosses itself in any drawing. For any pair $E_1,E_2\subseteq E(G)$ of subsets of edges, we denote by $\operatorname{cr}_{\phi}(E_1,E_2)$ the number of crossings in $\phi$ in which the images of edges of $E_1$ intersect the images of edges of $E_2$, and by $\operatorname{cr}_{\phi}(E_1)$ the number of crossings in $\phi$ in which the images of edges of $E_1$ intersect with each other. 
Given two disjoint sub-graphs $H_1,H_2$ of $G$, we will sometimes write $\operatorname{cr}_{\phi}(H_1,H_2)$ instead of $\operatorname{cr}_{\phi}(E(H_1),E(H_2))$, and $\operatorname{cr}_{\phi}(H_1)$ instead of $\operatorname{cr}_{\phi}(E(H_1))$. If $G$ is a planar graph, and $\phi$ is a drawing of $G$ with no crossings, then we say that $\phi$ is a \emph{planar} drawing of $G$. For a graph $G=(V,E)$, and subsets $V'\subseteq V$, $E'\subseteq E$ of its vertices and edges respectively, we denote by $G[V']$, $G\setminus V'$, and $G\setminus E'$ the sub-graphs of $G$ induced by $V'$, $V\setminus V'$, and $E\setminus E'$, respectively. \begin{definition} Let $\gamma$ be any closed simple curve, and let $F_1,F_2$ be the two faces into which $\gamma$ partitions the plane. Given any drawing $\phi$ of a graph $G$, we say that $G$ is \emph{embedded inside $\gamma$}, iff one of the faces $F\in\set{F_1,F_2}$ contains the images of all edges and vertices of $G$ (the images of the vertices of $G$ may lie on $\gamma$). Similarly, if $C\subseteq G$ is a simple cycle, then we say that $G$ is embedded inside $C$, iff the edges of $C$ do not participate in any crossings, and $G\setminus E(C)$ is embedded inside $\gamma_C$ -- the simple closed curve to which $C$ is mapped. \end{definition} Given a graph $G$ and a bounding box $X$, we define the problem $\pi(G,X)$, that we use extensively. \begin{definition} Given a graph $G$ and a simple (possibly empty) cycle $X\subseteq G$, called the \emph{bounding box}, a \emph{strong solution} for problem $\pi(G,X)$, is a drawing $\psi$ of $G$, in which $G$ is embedded inside the bounding box $X$, and its cost is the number of crossings in $\psi$. A \emph{weak solution} to problem $\pi(G,X)$ is a subset $E'\subseteq E(G)\setminus E(X)$ of edges, such that $G\setminus E'$ has a \emph{planar drawing}, in which it is embedded inside the bounding box $X$. 
\end{definition} Notice that in order to prove Theorem~\ref{thm:main}, it is enough to find a weak solution for problem $\pi({\mathbf{G}},X_0)$, where $X_0=\emptyset$, of cost $O\left ((\optcro{{\mathbf{G}}})^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)\right )$. \begin{definition} For any graph $G=(V,E)$, a subset $V'\subseteq V$ of vertices is called a $c$-separator, iff $|V'|=c$, and the graph $G\setminus V'$ is not connected. We say that $G$ is $c$-connected iff it does not contain $c'$-separators, for any $0<c'<c$. \end{definition} We will use the following four well-known results: \begin{theorem} (Whitney~\cite{Whitney})\label{thm:Whitney} Every 3-connected planar graph has a unique planar drawing. \end{theorem} \begin{theorem}(Hopcroft-Tarjan~\cite{planar-drawing})\label{thm:planar drawing} For any graph $G$, there is an efficient algorithm to determine whether $G$ is planar, and if so, to find a planar drawing of $G$. \end{theorem} \begin{theorem}(Ajtai et al.~\cite{ajtai82}, Leighton \cite{leighton_book})\label{thm: large average degree large crossing number} Let $G$ be any graph with $n$ vertices and $m\geq 4n$ edges. Then $\optcro{G}=\Omega(m^3/n^2)=\Omega(n)$. \end{theorem} \begin{theorem}(Lipton-Tarjan~\cite{planar-separator})\label{thm: planar separator} Let $G$ be any $n$-vertex planar graph. Then there is a constant $q$, and an efficient algorithm to partition the vertices of $G$ into three sets $A,B,C$, such that $|A|,|C|\geq n/3$, $|B|\leq q\sqrt{n}$, and there are no edges in $G$ connecting the vertices of $A$ to the vertices of $C$. \end{theorem} \subsection{Well-linkedness} \begin{definition} Let $G=(V,E)$ be any graph, and $J\subseteq V$ any subset of its vertices. We denote by $\operatorname{out}_G(J)=E_G(J,V\setminus J)$, and we call the edges in $\operatorname{out}_G(J)$ the \emph{terminal edges for $J$}. 
For each terminal edge $e=(u,v)$, with $u\in J$, $v\not \in J$, we call $u$ the \emph{interface vertex} and $v$ the \emph{terminal vertex} for $J$. We denote by $\Gamma_G(J)$ and $T_G(J)$ the sets of all interface and terminal vertices for $J$, respectively, and we omit the subscript $G$ when clear from context (see Figure~\ref{fig: terminal interface vertices}). \end{definition} \begin{figure}[h] \scalebox{0.3}{\rotatebox{0}{\includegraphics{terminal-vertices-edges-cut.pdf}}} \caption{Terminal vertices and edges for set $J$ are red; interface vertices are blue. \label{fig: terminal interface vertices}} \end{figure} \begin{definition} Given a graph $G$, a subset $J$ of its vertices, and a parameter $\alpha>0$, we say that $J$ is $\alpha$-well-linked iff for any partition $(J_1,J_2)$ of $J$, denoting $T_1=\operatorname{out}(J_1)\cap \operatorname{out}(J)$ and $T_2=\operatorname{out}(J_2)\cap \operatorname{out}(J)$, we have $|E(J_1,J_2)|\geq \alpha\cdot \min\set{|T_1|,|T_2|}$. \end{definition} Notice that if $G$ is a connected graph and $J\subset V(G)$ is $\alpha$-well-linked for any $\alpha>0$, then $G[J]$ must be connected. Finally, we define $\rho$-balanced $\alpha$-well-linked bi-partitions. \begin{definition} Let $G$ be any graph, and let $\rho>1, 0<\alpha\leq 1$ be any parameters. We say that a bi-partition $(S,\overline S)$ of $V(G)$ is $\rho$-balanced and $\alpha$-well-linked, iff $|S|,|\overline S|\geq |V(G)|/\rho$ and both $S$ and $\overline S$ are $\alpha$-well-linked. \end{definition} \subsection{Sparsest Cut and Concurrent Flow}\label{subsec: sparsest cut} In this section we summarize some well-known results on graph cuts and flows that we use throughout the paper. We start by defining the non-uniform sparsest cut problem. Suppose we are given a graph $G=(V,E)$, with weights $w_v$ on vertices $v\in V$. 
Given any partition $(A,B)$ of $V$, the \emph{sparsity} of the cut $(A,B)$ is $\frac{|E(A,B)|}{\min\set{W(A),W(B)}}$, where $W(A)=\sum_{v\in A}w_v$ and $W(B)=\sum_{v\in B}w_v$. In the non-uniform sparsest cut problem, the input is a graph $G$ with weights on vertices, and the goal is to find a cut of minimum sparsity. Arora, Lee and Naor~\cite{sparsest-cut} have shown an $O(\sqrt{\log n}\cdot \log\log n)$-approximation algorithm for the non-uniform sparsest cut problem. We denote by $\ensuremath{{\mathcal{A}}_{\mbox{\textup{\footnotesize{ALN}}}}}\xspace$ this algorithm and by $\ensuremath{\alpha_{\mbox{\textup{\footnotesize{ALN}}}}}=O(\sqrt{\log n}\cdot\log\log n)$ its approximation factor. We will usually work with a special case of the sparsest cut problem, where we are given a subset $T\subseteq V$ of vertices, called terminals, and the vertex weights are $w_v=1$ for $v\in T$, and $w_v=0$ otherwise. A problem dual to sparsest cut is the maximum concurrent multicommodity flow problem. Here, we need to compute the maximum value $\lambda$, such that $\lambda/|T|$ flow units can be simultaneously sent in $G$ between every pair of terminals with no congestion. The flow-cut gap is the maximum possible ratio, in any graph, between the value of the minimum sparsest cut and the maximum concurrent flow. The value of the flow-cut gap in undirected graphs, that we denote by $\ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} $ throughout the paper, is $\Theta(\log n)$~\cite{LR, GVY,LLR,Aumann-Rabani}. In particular, if the value of the sparsest cut is $\alpha$, then every pair of terminals can send $\frac{\alpha}{|T|\cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }$ flow units to each other with no congestion. Let $G$ be any graph, let $S$ be a subset of vertices of $G$, and let $0 < \alpha < 1$, such that $S$ is $\alpha$-well-linked. We now define the sparsest cut and the concurrent flow instances corresponding to $S$, as follows. 
For each edge $e\in \operatorname{out}(S)$, we sub-divide the edge by adding a new vertex $t_e$ to it. Let $G'$ denote the resulting graph, and let $T$ denote the set of all vertices $t_e$ for $e\in \operatorname{out}_G(S)$. Consider the graph $H=G'[S]\cup \operatorname{out}_{G'}(S)$. We can naturally define an instance of the non-uniform sparsest cut problem on $H$, where the set of terminals is $T$. The fact that $S$ is $\alpha$-well-linked is equivalent to the value of the sparsest cut in the resulting instance being at least $\alpha$. We obtain the following simple well-known consequence: \begin{observation}\label{observation: existence of flow in well-linked instance} Let $G$, $S$, $H$, and $T$ be defined as above, and let $0<\alpha<1$, such that $S$ is $\alpha$-well-linked. Then every pair of vertices in $T$ can send one flow unit to each other in $H$, such that the maximum congestion on any edge is at most $\ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} |T|/\alpha$. Moreover, if $M$ is any partial matching on the vertices of $T$, then we can send one flow unit between every pair $(u,v)\in M$ in graph $H$, with maximum congestion at most $2\ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} /\alpha$. \end{observation} \begin{proof} The first part is immediate from the definition of the flow-cut gap. Let $F$ denote the resulting flow. In order to obtain the second part, for every pair $(u,v)\in M$, $u$ will send $1/|T|$ flow units to every vertex in $T$, and $v$ will collect $1/|T|$ flow units from every vertex in $T$, via the flow $F$. It is easy to see that every flow-path is used at most twice. \end{proof} For convenience, when given an $\alpha$-well-linked subset $S$ of vertices in a graph $G$, we will omit the subdivision of the edges in $\operatorname{out}(S)$, and we will say that the edges $e\in \operatorname{out}(S)$ send flow to each other, instead of the corresponding vertices $t_e$. 
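The sparsity objective from the beginning of this subsection is easy to compute directly. The following toy sketch (hypothetical helper names; real instances would of course use $\ensuremath{{\mathcal{A}}_{\mbox{\textup{\footnotesize{ALN}}}}}\xspace$ rather than exhaustive enumeration) evaluates the 0/1-terminal-weight special case on a small cycle.

```python
from itertools import combinations

def sparsity(edges, A, B, terminals):
    """Sparsity |E(A,B)| / min(W(A), W(B)) with 0/1 terminal weights."""
    A, B, T = set(A), set(B), set(terminals)
    cut = sum(1 for (u, v) in edges
              if (u in A and v in B) or (u in B and v in A))
    wa, wb = len(A & T), len(B & T)
    return cut / min(wa, wb) if min(wa, wb) > 0 else float('inf')

def sparsest_cut(edges, terminals):
    """Brute-force minimum-sparsity cut; exponential, for illustration only."""
    verts = sorted({u for e in edges for u in e})
    best = float('inf')
    for r in range(1, len(verts)):
        for A in combinations(verts, r):
            B = [v for v in verts if v not in A]
            best = min(best, sparsity(edges, A, B, terminals))
    return best

# a 6-cycle whose alternate vertices are terminals: every useful cut has
# 2 edges and isolates at least one terminal, so the optimum sparsity is 2
cyc = [(i, (i + 1) % 6) for i in range(6)]
print(sparsest_cut(cyc, [0, 2, 4]))  # -> 2.0
```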
We will also use the algorithm of Arora, Rao and Vazirani~\cite{ARV} for balanced cut, summarized below. \begin{theorem}[Balanced Cut~\cite{ARV}]\label{thm: ARV} Let $G$ be any $n$-vertex graph, and suppose there is a partition of the vertices of $G$ into two sets, $A$ and $B$, with $|A|,|B|\geq \epsilon n$ for some constant $\epsilon>0$, and $|E(A,B)|=c$. Then there is an efficient algorithm to find a partition $(A',B')$ of the vertices of $G$, such that $|A'|,|B'|\geq \epsilon' n$ for some constant $0<\epsilon'<\epsilon$, and $|E(A',B')|\leq O(c\sqrt{\log n})$. \end{theorem} \subsection{Canonical Vertex Sets and Solutions} As already mentioned in the Introduction, we will perform a number of graph contraction steps on the input graph ${\mathbf{G}}$, where in each such graph contraction step, a sub-graph of ${\mathbf{G}}$ will be replaced with a grid. So in general, if $H$ is the current graph, we will also be given a collection ${\mathcal{Z}}$ of disjoint subsets of vertices of $H$, such that for each $Z\in {\mathcal{Z}}$, $H[Z]$ is the $k_Z\times k_Z$ grid, for some $k_Z\geq 2$. We will also ensure that $\Gamma_H(Z)$ is precisely the set of the vertices in the first row of the grid $H[Z]$, and the edges in $\operatorname{out}_H(Z)$ form a matching between $\Gamma_H(Z)$ and $T_H(Z)$. Given such a graph $H$, and a collection ${\mathcal{Z}}$ of vertex subsets, we will be looking for solutions in which the edges of the grids $H[Z]$ do not participate in any crossings. This motivates the following definitions of canonical vertex sets and canonical solutions. Assume that we are given a graph $G$ and a collection ${\mathcal{Z}}$ of disjoint subsets of vertices of $G$, such that each subset $Z\in{\mathcal{Z}}$ is $1$-well-linked (but some vertices of $G$ may not belong to any subset $Z\in {\mathcal{Z}}$). 
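One reason grids are convenient replacements: their cuts are expensive relative to the first row, as formalized in the ``Cuts in Grids'' subsection below. The following sketch (hypothetical helper names, exhaustive enumeration for small $k$ only) verifies that bound by brute force.

```python
from itertools import product

def grid_edges(k):
    """Edges of the k x k grid on vertices (row, col)."""
    E = []
    for r in range(k):
        for c in range(k):
            if c + 1 < k:
                E.append(((r, c), (r, c + 1)))
            if r + 1 < k:
                E.append(((r, c), (r + 1, c)))
    return E

def check_grid_cut_claim(k):
    """Brute-force check: every bipartition (A, B) of the k x k grid with
    A, B nonempty cuts at least min(|A & first_row|, |B & first_row|) + 1
    edges, where first_row is the set of vertices in row 0."""
    verts = [(r, c) for r in range(k) for c in range(k)]
    first_row = {(0, c) for c in range(k)}
    E = grid_edges(k)
    for mask in product([0, 1], repeat=len(verts)):
        A = {v for v, m in zip(verts, mask) if m}
        B = set(verts) - A
        if not A or not B:
            continue
        cut = sum(1 for (u, v) in E if (u in A) != (v in A))
        if cut < min(len(A & first_row), len(B & first_row)) + 1:
            return False
    return True

print(check_grid_cut_claim(3))  # -> True
```

The bound is tight, e.g. for $A$ consisting of a single first-row corner of the grid, which cuts exactly two edges.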
\begin{definition}\label{definiton: canonical subset} We say that a subset $J\subseteq V$ of vertices is \emph{canonical} for ${\mathcal{Z}}$ iff for each $Z\in{\mathcal{Z}}$, either $Z\subseteq J$, or $Z\cap J=\emptyset$. \end{definition} We next define canonical drawings and canonical solutions w.r.t. the collection ${\mathcal{Z}}$ of subsets of vertices: \begin{definition}\label{definition: canonical drawing} Let $G=(V,E)$ be any graph, and ${\mathcal{Z}}$ any collection of disjoint subsets of vertices of $G$. We say that a drawing $\phi$ of $G$ is \emph{canonical} for ${\mathcal{Z}}$ iff for each $Z\in {\mathcal{Z}}$, no edge of $G[Z]$ participates in crossings. Similarly, we say that a solution $E^*$ to the {\sf Minimum Planarization}\xspace problem on $G$ is \emph{canonical} for ${\mathcal{Z}}$, iff for each $Z\in {\mathcal{Z}}$, no edge of $G[Z]$ belongs to $E^*$. \end{definition} \begin{definition} Given a graph $G$, a simple cycle $X\subseteq G$ (that may be empty), and a collection ${\mathcal{Z}}$ of disjoint subsets of vertices of $G$, a \emph{strong solution} to problem $\pi(G,X,{\mathcal{Z}})$ is a drawing $\psi$ of $G$, in which the edges of $E(X)\cup\left(\bigcup_{Z\in{\mathcal{Z}}}E(G[Z])\right )$ do not participate in any crossings, and $G$ is embedded inside the bounding box $X$. The cost of the solution is the number of edge crossings in $\psi$. A \emph{weak solution} to problem $\pi(G,X,{\mathcal{Z}})$ is a subset $E'\subseteq E(G)\setminus E(X)$ of edges, such that graph $G\setminus E'$ has a \emph{planar drawing} inside the bounding box $X$, and for all $Z\in {\mathcal{Z}}$, $E'\cap E(G[Z])=\emptyset$. \end{definition} We will sometimes use the above definition for problem $\pi(G',X,{\mathcal{Z}})$, where $G'$ is a sub-graph of $G$. That is, some sets $Z\in {\mathcal{Z}}$ may not be contained in $G'$, or only partially contained in it. We can then define ${\mathcal{Z}}'$ to contain, for each $Z\in {\mathcal{Z}}$, the set $Z\cap V(G')$. 
We will sometimes use the notion of weak or strong solution to problem $\pi(G',X,{\mathcal{Z}})$ to mean weak or strong solutions to $\pi(G',X,{\mathcal{Z}}')$, to simplify notation. \subsection{Cuts in Grids} The following simple claim about grids and its corollary are used throughout the paper. \begin{claim}\label{claim: cut of grids} Let $Z$ be the $k\times k$ grid, for any integer $k\geq 2$, and let $\Gamma$ denote the set of vertices in the first row of $Z$. Let $(A,B)$ be any partition of the vertices of $Z$, with $A,B\neq \emptyset$. Then $|E(A,B)|\geq\min\set{|A\cap \Gamma|, |B\cap \Gamma|}+1$. \end{claim} \begin{proof} Let $\Gamma_A=\Gamma\cap A$, $\Gamma_B=\Gamma\cap B$, and assume w.l.o.g. that $|\Gamma_A|\leq |\Gamma_B|$. If $\Gamma_A=\emptyset$, then the claim is clearly true. Otherwise, there is some vertex $t\in \Gamma_A$, such that a vertex $t'$ immediately to the right or to the left of $t$ in the first row of the grid belongs to $\Gamma_B$. Let $e=(t,t')$ be the corresponding edge in the first row of $Z$. We can find a collection of $|\Gamma_A|$ edge-disjoint paths, connecting vertices in $\Gamma_A$ to vertices in $\Gamma_B$, that do not include the edge $e$, as follows: assign a distinct row of $Z$ (different from the first row) to each vertex in $\Gamma_A$. Route each such vertex inside its column to its designated row, and inside this row to the column corresponding to some vertex in $\Gamma_B$. If we add the path consisting of the single edge $e$, we will obtain a collection of $|\Gamma_A|+1$ edge-disjoint paths, connecting vertices in $\Gamma_A$ to vertices in $\Gamma_B$. All these paths have to be disconnected by the above cut. \end{proof} \begin{corollary}\label{corollary: canonical s-t cut} Let $G$ be any graph, ${\mathcal{Z}}$ any collection of disjoint subsets of vertices of $G$, such that for each $Z\in {\mathcal{Z}}$, $G[Z]$ is the $k_Z\times k_Z$ grid, for $k_Z\geq 2$. 
Moreover, assume that each vertex in the first row of $Z$ is adjacent to exactly one edge in $\operatorname{out}_G(Z)$, and no other vertex of $Z$ is adjacent to edges in $\operatorname{out}_G(Z)$. Let $s,t$ be any pair of vertices of $G$, that do not belong to any set $Z\in {\mathcal{Z}}$, and let $(A,B)$ be the minimum $s$--$t$ cut in $G$. Then both sets $A$ and $B$ are canonical w.r.t. ${\mathcal{Z}}$. \end{corollary} \begin{proof} Assume for contradiction that some set $Z\in {\mathcal{Z}}$ is split between the two sides, $A$ and $B$. Let $\Gamma=\Gamma(Z)$ denote the set of vertices in the first row of $Z$, and let $\Gamma_A=\Gamma\cap A$, $\Gamma_B=\Gamma\cap B$. Assume w.l.o.g. that $|\Gamma_A|\leq |\Gamma_B|$. Then by Claim~\ref{claim: cut of grids} $|E(A\cap Z,B\cap Z)|>|\Gamma_A|$, and so the value of the cut $(A\setminus Z,B\cup Z)$ is smaller than the value of the cut $(A,B)$, a contradiction. \end{proof} \begin{claim}\label{claim: cutting the grid} Let $Z$ be the $k\times k$ grid, for any integer $k\geq 2$, and let $\Gamma$ be the set of vertices in the first row of $Z$. Suppose we are given any partition $(A,B)$ of $V(Z)$, denote $\Gamma_A=\Gamma\cap A$, $\Gamma_B=\Gamma\cap B$, and assume that $|\Gamma_B|\leq |\Gamma_A|$. Then $|B|\leq 4|E(A,B)|^2$. \end{claim} \begin{proof} Denote $M=|E(A,B)|$. Let $C_A$ denote the set of columns associated with the vertices in $\Gamma_A$, and similarly, $C_B$ is the set of columns associated with the vertices in $\Gamma_B$. Notice that $(C_A,C_B)$ define a partition of the columns of $Z$. We consider three cases. The first case is when no column is completely contained in $A$. In this case, for every column in $C_A$, at least one edge must belong to $E(A,B)$, and so $M\geq |\Gamma_A|\geq k/2$. Since $|B|\leq |Z|\leq k^2$, the claim follows. From now on we assume that there is some grid column, denoted by $c$, that is completely contained in $A$. 
The second case is when some grid column $c'$ is completely contained in $B$. In this case, it is easy to see that $M\geq k$ must hold, as there are $k$ edge-disjoint paths connecting vertices of $c$ to vertices of $c'$ in $Z$. So $|B|\leq |Z|\leq k^2\leq M^2$, as required. Finally, assume that no column is contained in $B$. Let $C'_B$ be the set of columns that have at least one vertex in $B$. Clearly, $M\geq |C'_B|$. Let $M'$ be the maximum number of vertices of $B$ contained in any single column $c'\in C'_B$. Then $M\geq M'$ must hold, since there are $M'$ edge-disjoint paths between the vertices of column $c$ and the vertices of $c'\cap B$. On the other hand, $|B|\leq |C'_B|\cdot M'\leq M^2$. \end{proof} \subsection{Well-linked Decompositions} The next theorem summarizes the well-linked decomposition of graphs, which has been used extensively in graph decomposition (e.g., see~\cite{CSS,Raecke}). For completeness we provide its proof in the Appendix. \begin{theorem}[Well-linked decomposition]\label{thm: well-linked} Given any graph $G=(V,E)$, and any subset $J\subseteq V$ of vertices, we can efficiently find a partition ${\mathcal{J}}$ of $J$, such that each set $J'\in {\mathcal{J}}$ is $\alpha^*$-well-linked for $\alpha^*=\Omega(1/(\log^{3/2} n\log\log n))$, and $\sum_{J'\in {\mathcal{J}}}|\operatorname{out}(J')|\leq 2|\operatorname{out}(J)|$. \end{theorem} We now define some additional properties that the set $J$ may possess, which we use throughout the paper. We will then show that if a set $J$ has any collection of these properties, then we can find a well-linked decomposition ${\mathcal{J}}$ of $J$, such that every set $J'\in {\mathcal{J}}$ has these properties as well. \begin{definition} Given a graph $G$ and any subset $J\subseteq V(G)$ of its vertices, we say that $J$ has property (P1) iff the vertices of $T(J)$ are connected in $G\setminus J$. 
We say that it has property (P2) iff there is a planar drawing of $J$ in which all interface vertices $\Gamma(J)$ lie on the boundary of the same face, which we refer to as the \emph{outer face}. We denote such a planar drawing by $\pi(J)$. If there are several such drawings, we select any one of them arbitrarily. \end{definition} The next theorem is an extension of Theorem~\ref{thm: well-linked}, and its proof appears in the Appendix. \begin{theorem}\label{thm: well-linked-general} Suppose we are given any graph $G=(V,E)$, a subset $J\subseteq V$ of vertices, and a collection ${\mathcal{Z}}$ of disjoint subsets of vertices of $V$, such that each set $Z\in {\mathcal{Z}}$ is $1$-well-linked. Then we can efficiently find a partition ${\mathcal{J}}$ of $J$, such that each set $J'\in {\mathcal{J}}$ is $\alpha^*$-well-linked for $\alpha^*=\Omega(1/(\log^{3/2} n\log\log n))$, and $\sum_{J'\in {\mathcal{J}}}|\operatorname{out}(J')|\leq 2|\operatorname{out}(J)|$. Moreover, if $J$ has any combination of the following three properties: (1) property (P1); (2) property (P2); (3) it is a canonical set for ${\mathcal{Z}}$, then each set $J'\in {\mathcal{J}}$ will also have the same combination of these properties.\end{theorem} Throughout the paper, we use $\alpha^*$ to denote the parameter from Theorem~\ref{thm: well-linked-general}. \label{--------------------------------sec: high level overview----------------------------------} \section{High Level Algorithm Overview}\label{sec: overview} In this section we provide a high-level overview of the algorithm. We start by defining the notion of nasty vertex sets. \begin{definition} Given a graph $G$, we say that a subset $S\subseteq V(G)$ of vertices is \emph{nasty} iff it has properties (P1) and (P2), and $|S|\geq \frac{2^{16}\cdot d_{\mbox{\textup{\footnotesize{max}}}}^6}{(\alpha^*)^2}\cdot|\Gamma(S)|^2$, where $\alpha^*$ is the parameter from Theorem \ref{thm: well-linked}.
\end{definition} Note that we do not require that $G[S]$ is connected. For the sake of clarity, let us first assume that the input graph ${\mathbf{G}}$ contains no nasty sets. Our algorithm then proceeds as follows. We use a balancing parameter $\rho=O(\optcro{G}\cdot\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$, whose exact value is set later. The algorithm has $O(\rho\cdot\log n)$ iterations. At the beginning of each iteration $h$, we are given a collection $G_1,\ldots, G_{k_h}$ of $k_h\leq \optcro{G}$ disjoint sub-graphs of ${\mathbf{G}}$, together with bounding boxes $X_i\subseteq G_i$ for all $i$. We are guaranteed that w.h.p., there is a strong solution to each problem $\pi(G_i,X_i)$, of total cost at most $\optcro{{\mathbf{G}}}$. In the first iteration, $k_1=1$, and the only graph is $G_1={\mathbf{G}}$, whose bounding box is $X_1=\emptyset$. We now proceed to describe each iteration. The idea is to find a \emph{skeleton} $K_i$ for each graph $G_i$, with $X_i\subseteq K_i$, such that $K_i$ only contains good edges --- that is, edges that do not participate in any crossings in the optimal solution $\phi$ --- and $K_i$ has a unique planar drawing, in which $X_i$ serves as the bounding box. Therefore, we can efficiently find the drawing $\phi_{K_i}$ of the skeleton $K_i$, induced by the optimal drawing $\phi$. We then decompose the remaining graph $G_i\setminus E(K_i)$ into \emph{clusters}, by removing a small subset of edges from it, so that, on the one hand, for each such cluster $C$, we know the face $F_C$ of $\phi_{K_i}$ where we should embed it, while on the other hand, different clusters $C,C'$ do not interfere with each other, in the sense that we can find an embedding of each one of these clusters separately, and their embeddings do not affect each other. For each such cluster $C$, we then define a new problem $\pi(C,\gamma(F_C))$, where $\gamma(F_C)$ is the boundary of the face $F_C$.
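As a standalone sanity check on the $O(\rho\cdot\log n)$ iteration bound (a Python sketch with our own naming, not part of the formal argument): if the largest instance shrinks by a factor of $(1-1/\rho)$ per iteration, then roughly $\rho\ln n$ iterations suffice to reach constant size.

```python
import math

def iterations_to_constant_size(n, rho):
    """Count iterations until the instance size drops below 2,
    if each iteration shrinks it by a factor of (1 - 1/rho)."""
    size, h = float(n), 0
    while size >= 2:
        size *= 1.0 - 1.0 / rho
        h += 1
    return h

# The count stays within 2 * rho * ln(n), matching the O(rho * log n) bound.
for n in (10**3, 10**6):
    for rho in (5, 50):
        assert iterations_to_constant_size(n, rho) <= 2 * rho * math.log(n)
```

This is the same geometric-decay calculation that underlies the choice of the number of iterations below; the function name is ours.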
We will ensure that all resulting sub-problems have strong solutions whose total cost is at most $\optcro{{\mathbf{G}}}$. In particular, there are at most $\optcro{G}$ resulting sub-problems for which $\emptyset$ is not a feasible weak solution. Therefore, in the next iteration we will need to solve at most $\optcro{G}$ new sub-problems. The main challenge is to find $K_i$, such that the number of vertices in each such cluster $C$ is bounded by roughly $(1-1/\rho)|V(G_i)|$, so that the number of iterations is indeed bounded by $O(\rho \log n)$. We need this bound on the number of iterations, since the probability of successfully constructing the skeletons in each iteration is only $(1-1/\rho)$. Roughly speaking, we are able to build the skeleton as required, if we can find a $\rho$-balanced $\alpha$-well-linked bipartition of the vertices of $G_i$, where $\alpha=1/\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)$. We are only able to find such a partition if no nasty sets exist in ${\mathbf{G}}$. More precisely, we show an efficient algorithm, that either finds the desired bipartition, or returns a nasty vertex set. In order to obtain the whole algorithm, we therefore need to deal with nasty sets. We do so by performing a graph contraction step, which is formally defined in the next section. Informally, given a nasty set $S$, we find a partition ${\mathcal{X}}$ of $S$, such that for every pair $X,X'\in {\mathcal{X}}$, the graphs ${\mathbf{G}}[X],{\mathbf{G}}[X']$ share at most one interface vertex and no edges. Each such graph ${\mathbf{G}}[X]$ is also $\alpha^*$-well-linked, has properties (P1) and (P2), and $\sum_{X\in {\mathcal{X}}}|\Gamma(X)|\leq O(|\Gamma(S)|)$. We then replace each sub-graph ${\mathbf{G}}[X]$ of ${\mathbf{G}}$ by a grid $Z_X$, whose interface is $\Gamma(X)$. After we do so for each $X\in {\mathcal{X}}$, we denote by ${\mathbf{G}}_{|S}$ the resulting contracted graph.
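The contraction approach hinges on grids being expensive to cut, as formalized in Claim~\ref{claim: cutting the grid}. As a standalone sanity check (a Python sketch with our own helper name, not part of the proof), the claim can be verified by brute force on small grids:

```python
from itertools import product

def grid_cut_claim_holds(k):
    """Brute-force check: for every partition (A, B) of the k x k grid with
    |Gamma_B| <= |Gamma_A| (Gamma = first row), we have |B| <= 4 * M^2,
    where M is the number of edges crossing the partition."""
    verts = [(r, c) for r in range(k) for c in range(k)]
    edges = [((r, c), (r, c + 1)) for r in range(k) for c in range(k - 1)]
    edges += [((r, c), (r + 1, c)) for r in range(k - 1) for c in range(k)]
    for bits in product((0, 1), repeat=len(verts)):
        side = dict(zip(verts, bits))      # 0 -> A, 1 -> B
        gamma_B = sum(side[(0, c)] for c in range(k))
        if gamma_B > k - gamma_B:          # claim assumes |Gamma_B| <= |Gamma_A|
            continue
        size_B = sum(bits)
        M = sum(side[u] != side[v] for u, v in edges)
        if size_B > 4 * M * M:
            return False
    return True

assert grid_cut_claim_holds(2) and grid_cut_claim_holds(3)
```

The enumeration is exponential in $k^2$ and is only meant as a check for tiny grids; the general statement is what the claim's proof establishes.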
Notice that we have replaced ${\mathbf{G}}[S]$ by a much smaller graph, whose size is bounded by $O(|\Gamma(S)|^2)$. Let ${\mathcal{Z}}$ denote the collection of sets $V(Z_X)$ of vertices, for $X\in {\mathcal{X}}$. We then show that the cost of the optimal solution to problem $\pi({\mathbf{G}}_{|S},\emptyset,{\mathcal{Z}})$ is at most $\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot\log n)\optcro{G}$. Therefore, we can restrict our attention to canonical solutions only. We also show that it is enough to find a weak solution to problem $\pi({\mathbf{G}}_{|S},\emptyset,{\mathcal{Z}})$, in order to obtain a weak solution for the whole graph ${\mathbf{G}}$. Unfortunately, we do not know how to find a nasty set $S$, such that the corresponding contracted graph ${\mathbf{G}}_{|S}$ contains no nasty sets. Instead, we do the following. Let $H={\mathbf{G}}_{|S}$ be the current graph, which is a result of the graph contraction step on some set $S$ of vertices, and let ${\mathcal{Z}}$ be the corresponding collection of sub-sets of vertices representing the grids. Suppose we can find a nasty canonical set $R$ in the graph $H$. We show that this allows us to find a new set $S'$ of vertices in ${\mathbf{G}}$, such that the contracted graph ${\mathbf{G}}_{|S'}$ contains fewer vertices than ${\mathbf{G}}_{|S}$. Returning to our algorithm, let ${\mathbf{G}}_{|S}$ be the current contracted graph. We show that with high probability, the algorithm either returns a weak solution for ${\mathbf{G}}_{|S}$ of cost $O\left ((\optcro{G})^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)\right )$, or it returns a nasty canonical subset $S'$ of ${\mathbf{G}}_{|S}$. In the former case, we can recover a good weak solution for the original graph ${\mathbf{G}}$. 
In the latter case, we find a subset $S''$ of vertices in the original graph ${\mathbf{G}}$, and perform another contraction step on ${\mathbf{G}}$, obtaining a new graph ${\mathbf{G}}_{|S''}$, whose size is strictly smaller than that of ${\mathbf{G}}_{|S}$. We then apply the algorithm to graph ${\mathbf{G}}_{|S''}$. Since the total number of graph contraction steps is bounded by $n$, after $n$ such iterations, we are guaranteed w.h.p. to obtain a weak feasible solution of cost $O\left ((\optcro{G})^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)\right )$ to $\pi({\mathbf{G}},\emptyset)$, thus satisfying the requirements of Theorem~\ref{thm:main}. We now turn to a formal description of the algorithm. One of the main ingredients is the graph contraction step, summarized in the next section. \section{Graph Contraction Step}\label{sec: graph contraction} The input to the graph contraction step consists of the input graph ${\mathbf{G}}$, and a subset $S\subseteq V({\mathbf{G}})$ of vertices, for which properties (P1) and (P2) hold. It will be convenient to think of $S$ as a nasty set, but we do not require this. Let ${\mathcal{C}}=\set{G_1,\ldots,G_q}$ be the set of all connected components of ${\mathbf{G}}[S]$. For each $1\leq i\leq q$, let $\Gamma_i=V(G_i)\cap \Gamma(S)=\Gamma(V(G_i))$ be the set of the interface vertices of $G_i$. The goal of the graph contraction step is to find, for each $1\leq i\leq q$, a partition ${\mathcal{X}}_i$ of the set $V(G_i)$ that has the following properties. Let ${\mathcal{X}}=\bigcup_{i=1}^q{\mathcal{X}}_i$. \begin{properties}{C} \item Each set $X\in {\mathcal{X}}$ is $\alpha^*$-well-linked, and has properties (P1) and (P2).
Moreover, there is a planar drawing $\pi'(X)$ of ${\mathbf{G}}[X]$, and a simple closed curve $\gamma_X$, such that ${\mathbf{G}}[X]$ is embedded inside $\gamma_X$ in $\pi'(X)$, and the vertices of $\Gamma(X)$ lie on $\gamma_X$.\label{property: subsets-first} \item For each $X\in{\mathcal{X}}$, either $|\Gamma(X)|=2$, or there is a partition $(C^*_X,R_1,\ldots,R_t)$ of $X$, such that ${\mathbf{G}}[C^*_X]$ is $2$-connected and $\Gamma(X)\subseteq C^*_X$. Moreover, for each $1\leq t'\leq t$, there is a vertex $u_{t'}\in C^*_X$, whose removal from ${\mathbf{G}}[X]$ separates the vertices of $R_{t'}$ from the remaining vertices of $X$. \label{property: structure of X} \item For each pair $X,X'\in {\mathcal{X}}$, the two sets of vertices are completely disjoint, except for possibly sharing one interface vertex, $v\in \Gamma(X)\cap \Gamma(X')$. \label{property: disjointness} \item For each $1\leq i\leq q$, if $\Gamma_i'=\bigcup_{X\in{\mathcal{X}}_i}\Gamma(X)$, then $|\Gamma_i'|\leq 9|\Gamma_i|$. \label{property: subsets - last} \item For each $X\in {\mathcal{X}}$, $|X|\geq (\alpha^*|\Gamma(X)|)^2/64d_{\mbox{\textup{\footnotesize{max}}}}^2$.\label{property: size-last} \end{properties} For each set $X\in {\mathcal{X}}$, we now define a new graph $Z'_X$, which will eventually replace the sub-graph ${\mathbf{G}}[X]$ in ${\mathbf{G}}$. Intuitively, we need $Z'_X$ to contain the vertices of $\Gamma(X)$ and to be $1$-well-linked w.r.t. these vertices. We also need it to have a unique planar embedding where the vertices of $\Gamma(X)$ lie on the boundary of the same face, and finally, we need the size of the graph $Z'_X$ to be relatively small, since this is a graph contraction step. The simplest graph satisfying these properties is a grid of size $|\Gamma(X)|\times |\Gamma(X)|$. Specifically, we first define a graph $Z_X$ as follows: if $|\Gamma(X)|=1$, then $Z_X$ consists of a single vertex, and if $|\Gamma(X)|=2$, then $Z_X$ consists of a single edge.
Otherwise, $Z_X$ is a grid of size $|\Gamma(X)|\times |\Gamma(X)|$. In order to obtain the graph $Z'_X$, we add the set $\Gamma(X)$ of vertices to $Z_X$, and add a matching between the vertices of the first row of the grid and the vertices of $\Gamma(X)$. This is done so that the order of the vertices of $\Gamma(X)$ along the first row of the grid is the same as their order along the curve $\gamma_X$ in the drawing $\pi'(X)$. We refer to these new edges as the \emph{matching edges}. For the cases where $|\Gamma(X)|=1$ and $|\Gamma(X)|=2$, we obtain $Z'_X$ by adding the vertices of $\Gamma(X)$ to $Z_X$, and adding an arbitrary matching between $\Gamma(X)$ and the vertices of $Z_X$ (see Figure~\ref{fig: grids}). \begin{figure}[h] \centering \subfigure[General case]{ \scalebox{0.4}{\includegraphics{grid-new-cut.pdf}} \label{fig: grid} } \hfill \subfigure[$|\Gamma(X)|=1$]{ \scalebox{0.3}{\includegraphics{grid-1-interface-cut.pdf}} \label{fig: grid 1 interface} } \hfill \subfigure[$|\Gamma(X)|=2$]{ \scalebox{0.3}{\includegraphics{grid-2-interface-cut.pdf}} \label{fig: grid 2 interface} } \caption{Graph $Z'_X$. The matching edges and the interface vertices are blue; the grid $Z_X$ is black.}\label{fig: grids} \end{figure} The contracted graph ${\mathbf{G}}_{|S}$ is obtained from ${\mathbf{G}}$, by replacing, for each $X\in{\mathcal{X}}$, the subgraph ${\mathbf{G}}[X]$ of ${\mathbf{G}}$ with the graph $Z'_X$. This is done as follows: first, delete all vertices and edges of ${\mathbf{G}}[X]$, except for the vertices of $\Gamma(X)$, from ${\mathbf{G}}$, and add the edges and the vertices of $Z'_X$ instead. Next, identify the copies of the interface vertices $\Gamma(X)$ in the two graphs. Let $H={\mathbf{G}}_{|S}$ denote the resulting contracted graph.
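The construction of $Z'_X$ described above can be sketched programmatically. The following Python fragment is a standalone illustration with our own naming (not part of the paper's formalism): it builds the grid, handles the degenerate cases $|\Gamma(X)|\in\set{1,2}$, and attaches the matching edges in interface order.

```python
def build_contracted_piece(interface):
    """Sketch of Z'_X: a g x g grid (g = |Gamma(X)|) plus a matching from the
    interface vertices to the first grid row, preserving the order along the
    curve gamma_X. For g = 1 the grid degenerates to a single vertex, and for
    g = 2 to a single edge."""
    g = len(interface)
    if g == 1:
        nodes, grid_edges = [("z", 0, 0)], []
    elif g == 2:
        nodes = [("z", 0, 0), ("z", 0, 1)]
        grid_edges = [(("z", 0, 0), ("z", 0, 1))]
    else:
        nodes = [("z", r, c) for r in range(g) for c in range(g)]
        grid_edges = [(("z", r, c), ("z", r, c + 1))
                      for r in range(g) for c in range(g - 1)]
        grid_edges += [(("z", r, c), ("z", r + 1, c))
                       for r in range(g - 1) for c in range(g)]
    # one matching edge per interface vertex, in interface order
    matching = [(interface[i], ("z", 0, i)) for i in range(g)]
    return nodes, grid_edges, matching

nodes, edges, matching = build_contracted_piece(list("abcd"))
assert len(nodes) == 16 and len(matching) == 4
```

For $|\Gamma(X)|=4$ this yields $16$ grid vertices plus $4$ matching edges, so $|V(Z'_X)|\leq 2|\Gamma(X)|^2$, consistent with the size bound in~(\ref{eq: final size}).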
Notice that \begin{equation}\label{eq: final size} \sum_{i=1}^q\sum_{X\in {\mathcal{X}}_i}|V(Z'_X)|\leq \sum_{i=1}^q\sum_{X\in {\mathcal{X}}_i}2|\Gamma(X)|^2\leq \sum_{i=1}^q2|\Gamma_i'|^2d_{\mbox{\textup{\footnotesize{max}}}}^2\leq 162d_{\mbox{\textup{\footnotesize{max}}}}^2|\Gamma(S)|^2 \end{equation} (we have used the fact that a vertex may belong to the interface of at most $d_{\mbox{\textup{\footnotesize{max}}}}$ sets $X\in {\mathcal{X}}_i$, and Property~(\ref{property: subsets - last})). Therefore, if the initial vertex set $S$ is nasty, then we have indeed reduced the graph size, as $|V(H)|<|V({\mathbf{G}})|$. We now define a collection ${\mathcal{Z}}$ of subsets of vertices of $H$, as follows: ${\mathcal{Z}}=\set{V(Z_X)\mid X\in {\mathcal{X}}}$. Notice that these sets are completely disjoint, as $Z_X$ does not contain the interface vertices $\Gamma(X)$. Moreover, for each $Z\in {\mathcal{Z}}$, $H[Z]$ is a grid, $\Gamma_H(Z)$ consists of the vertices in the first row of the grid, and $\operatorname{out}_H(Z)$ consists of the set of the matching edges, each of which connects a vertex in the first row of the grid $Z$ to a distinct vertex in $T_H(Z)$. Using Definitions~\ref{definiton: canonical subset} and \ref{definition: canonical drawing}, we can now define canonical subsets of vertices, canonical drawings and canonical solutions to the {\sf Minimum Planarization}\xspace problem on $H$, with respect to ${\mathcal{Z}}$. Our main result for graph contraction is summarized in the next theorem, whose proof appears in the Appendix. \begin{theorem}\label{thm: graph contraction} Let $S\subseteq V({\mathbf{G}})$ be any subset of vertices with properties (P1) and (P2), and let $\set{G_1,\ldots,G_q}$ be the set of all connected components of graph ${\mathbf{G}}[S]$.
Then for each $1\leq i\leq q$, we can efficiently find a partition ${\mathcal{X}}_i$ of $V(G_i)$, such that the resulting partition ${\mathcal{X}}=\bigcup_{i=1}^q{\mathcal{X}}_i$ of $S$ has properties~(\ref{property: subsets-first})--(\ref{property: size-last}). Moreover, there is a canonical drawing of the resulting contracted graph $H={\mathbf{G}}_{|S}$ with $O(d_{\max}^9 \cdot \log^{10} n \cdot (\log\log n)^4 \cdot \optcro{{\mathbf{G}}})$ crossings. \end{theorem} The next claim shows that, in order to find a good solution to the {\sf Minimum Planarization}\xspace problem on ${\mathbf{G}}$, it is enough to solve it on ${\mathbf{G}}_{|S}$. \begin{claim}\label{claim: enough to solve contracted graph} Let $S$ be any subset of vertices of ${\mathbf{G}}$, ${\mathcal{X}}$ any partition of $S$ with properties~(\ref{property: subsets-first})--(\ref{property: size-last}), $H={\mathbf{G}}_{|S}$ the corresponding contracted graph and ${\mathcal{Z}}$ the collection of grids $Z_X$ for $X\in {\mathcal{X}}$. Then given any canonical solution $E^*$ to the {\sf Minimum Planarization}\xspace problem on $H$, we can efficiently find a solution of cost $O(d_{\mbox{\textup{\footnotesize{max}}}}\cdot |E^*|)$ to {\sf Minimum Planarization}\xspace on ${\mathbf{G}}$.\end{claim} \begin{proof} Partition the set $E^*$ of edges into two subsets: $E^*_1$, containing all edges that belong to sub-graphs $Z'_X$ for $X\in {\mathcal{X}}$, and $E^*_2$, containing all remaining edges. Notice that since $E^*$ is a canonical solution, each edge $e\in E^*_1$ must be a matching edge for some graph $Z'_X$. Also, from the construction of the contracted graph $H$, all edges in $E^*_2$ belong to $E({\mathbf{G}})$. Consider some set $X\in {\mathcal{X}}$, and let $\Gamma'(X)\subseteq \Gamma(X)$ denote the subset of the interface vertices of $Z'_X$, whose matching edges belong to $E^*_1$. Let $\Gamma'=\bigcup_{X\in {\mathcal{X}}}\Gamma'(X)$.
We now define a subset $E^{**}_1$ of edges of ${\mathbf{G}}$ as follows: for each vertex $v\in \Gamma'$, add all edges incident to $v$ in ${\mathbf{G}}$ to $E^{**}_1$. Finally, we set $E^{**}=E^{**}_1\cup E^*_2$. Notice that $E^{**}$ is a subset of edges of ${\mathbf{G}}$, and $|E^{**}|=|E_1^{**}|+|E_2^*|\leq d_{\mbox{\textup{\footnotesize{max}}}}|E_1^*|+|E_2^*|\leq d_{\mbox{\textup{\footnotesize{max}}}} |E^*|$. In order to complete the proof of the claim, it is enough to show that $E^{**}$ is a feasible solution to the {\sf Minimum Planarization}\xspace problem on ${\mathbf{G}}$. Let ${\mathbf{G}}'={\mathbf{G}}\setminus E^{**}$, let $H'=H\setminus E^*$, and let $\psi$ be a planar drawing of $H'$. It is now enough to construct a planar drawing $\psi'$ of ${\mathbf{G}}'$. In order to do so, we start from the planar drawing $\psi$ of $H'$. We then consider the sets $X\in {\mathcal{X}}$ one-by-one. For each such set, we replace the drawing of $Z'_X\setminus \Gamma'(X)$ with a drawing of ${\mathbf{G}}[X]\setminus \Gamma'(X)$. The drawings of the vertices in $\Gamma(X)$ are not changed by this procedure. After all sets $X\in {\mathcal{X}}$ are processed, we will obtain a planar drawing of graph ${\mathbf{G}}'$ (that may also contain drawings of some edges in $E^{**}$, that we can simply erase). Consider some such set $X\in {\mathcal{X}}$. Let $G$ be the current graph (obtained from $H'$ after a number of such replacement steps), and let $\psi$ be the current planar drawing of $G$. Observe that the grid $Z_X$ has a unique planar drawing. 
We say that a planar drawing of graph $Z'_X\setminus \Gamma'(X)$ is \emph{standard} in $\psi$, iff we can draw a simple closed curve $\gamma'_X$, such that $Z_X$ is embedded completely inside $\gamma'_X$; no other vertices or edges of $G$ are embedded inside $\gamma'_X$; the only edges that $\gamma'_X$ intersects are the matching edges of $Z'_X\setminus \Gamma'(X)$, and each such matching edge is intersected exactly once by $\gamma'_X$ (see Figure~\ref{fig: standard drawing}). \begin{figure}[h] \scalebox{0.4}{\rotatebox{0}{\includegraphics{standard-drawing-cut.pdf}}}\caption{A standard drawing of $Z'_X\setminus \Gamma'(X)$ \label{fig: standard drawing}} \end{figure} It is possible that the drawing of $Z'_X\setminus \Gamma'(X)$ in $\psi$ is not standard. However, since $\psi$ is planar, this can only happen for one of the following three reasons: (1) some connected component $C$ of the current graph $G$ is embedded inside some face of the grid $Z_X$: in this case we can simply move the drawing of $C$ elsewhere; (2) there is some subset $C$ of $V(G)$, and a vertex $v\in \Gamma(X)\setminus \Gamma'(X)$, such that $\Gamma_G(C)=\set{v}$, and $G[C]$ is embedded inside one of the faces of the grid $Z_X$ incident to the other endpoint of the matching edge of $v$; and (3) there is some subset $C$ of $V(G)$, and two consecutive vertices $u,v\in \Gamma(X)\setminus \Gamma'(X)$, such that $\Gamma_G(C)=\set{u,v}$, and $G[C]$ is embedded inside the unique face of the grid $Z_X$ incident to the other endpoints of the matching edges of $u$ and $v$ (see Figure \ref{fig: any to standard drawing}). In the latter two cases, we simply move the drawing of $C$ right outside the grid, so that the corresponding matching edges now cross the curve $\gamma'(X)$.
\begin{figure}[h] \centering \subfigure{ \scalebox{0.4}{\includegraphics{any-to-standard-drawing1-cut.pdf}} \label{fig: any to standard before} } \hfill \subfigure{ \scalebox{0.4}{\includegraphics{any-to-standard-drawing2-cut.pdf}} \label{fig: any to standard after} } \caption{Transforming drawing $\psi$ to obtain a standard drawing of $Z'_X\setminus \Gamma'(X)$. Cases 1, 2 and 3 are illustrated by clusters $C_1$, $C_2$ and $C_3$, respectively. \label{fig: any to standard drawing}} \end{figure} To conclude, we can transform the current planar drawing $\psi$ of the graph $G$ into another planar drawing $\tilde{\psi}$, such that the induced drawing of $Z'_X\setminus \Gamma'(X)$ is standard. We can now draw a simple closed curve $\gamma''(X)$, such that $Z'_X\setminus \Gamma'(X)$ is embedded inside $\gamma''(X)$, no other vertices or edges are embedded inside $\gamma''(X)$, and the set of vertices whose drawings lie on $\gamma''(X)$ is precisely $\Gamma(X)\setminus \Gamma'(X)$. Notice that the ordering of the vertices of $\Gamma(X)\setminus \Gamma'(X)$ along this curve is exactly the same as their ordering along the curve $\gamma(X)$ in the planar embedding $\pi'(X)$ of ${\mathbf{G}}[X]$, guaranteed by Property (\ref{property: subsets-first}). Let $\pi''(X)$ be the drawing of ${\mathbf{G}}[X]\setminus \Gamma'(X)$ induced by $\pi'(X)$. We can now simply replace the drawing of $Z'_X\setminus \Gamma'(X)$ with the drawing $\pi''(X)$ of ${\mathbf{G}}[X]\setminus \Gamma'(X)$, identifying the curves $\gamma_X$ and $\gamma''_X$, and the drawings of the vertices in $\Gamma(X)\setminus \Gamma'(X)$ on them. The resulting drawing remains planar, and the drawings of the vertices in $\Gamma(X)$ do not change. \end{proof} Finally, we show that if we find a nasty canonical set in ${\mathbf{G}}_{|S}$, then we can contract ${\mathbf{G}}$ even further. The proof of the following theorem appears in the Appendix.
\begin{theorem}\label{thm: nasty canonical set to contraction} Let $S$ be any subset of vertices of ${\mathbf{G}}$, ${\mathcal{X}}$ any partition of $S$ with properties~(\ref{property: subsets-first})--(\ref{property: size-last}), $H={\mathbf{G}}_{|S}$ the corresponding contracted graph, and ${\mathcal{Z}}$ the corresponding collection of grids $Z_X$ for $X\in {\mathcal{X}}$. Then given any nasty canonical vertex set $R\subseteq V(H)$, we can efficiently find a subset $S'\subseteq V({\mathbf{G}})$ of vertices, and a partition ${\mathcal{X}}'$ of $S'$, such that properties~(\ref{property: subsets-first})--(\ref{property: size-last}) hold for ${\mathcal{X}}'$, and if $H'={\mathbf{G}}_{|S'}$ is the corresponding contracted graph, then $|V(H')|<|V(H)|$. Moreover, there is a canonical drawing $\phi'$ of $H'$ with $\operatorname{cr}_{\phi'}(H') = O(d_{\max}^9 \cdot \log^{10} n \cdot (\log\log n)^4 \cdot \optcro{{\mathbf{G}}})$. \end{theorem} Notice that Claim~\ref{claim: enough to solve contracted graph} applies to the new contracted graph as well. \label{---------------------------------------------------------sec algorithm---------------------------------------------------} \section{The Algorithm}\label{sec: alg} The algorithm consists of a number of stages. In each stage $j$, we are given as input a subset $S$ of vertices of ${\mathbf{G}}$, the contracted graph $\H={\mathbf{G}}_{|S}$, and the collection ${\mathcal{Z}}$ of disjoint sub-sets of vertices of $\H$, corresponding to the grids $Z_X$ obtained during the contraction step. The goal of stage $j$ is to either produce a nasty canonical set $R$ in $\H$, or to find a weak feasible solution to problem $\pi(\H,\emptyset,{\mathcal{Z}})$. We prove the following theorem. 
\begin{theorem}\label{thm: alg summary} There is an efficient randomized algorithm, that, given a contracted graph $\H$, a corresponding collection ${\mathcal{Z}}$ of disjoint subsets of vertices of $\H$, and a bound $\mathsf{OPT}'$ on the cost of the strong optimal solution to problem $\pi(\H,\emptyset,{\mathcal{Z}})$, with probability at least $1/\operatorname{poly}(n)$, produces either a nasty canonical subset $R$ of vertices of $\H$, or a weak feasible solution $E^*$, $|E^*|\leq O((\mathsf{OPT}')^{5}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot\log n))$ for problem $\pi(\H,\emptyset,{\mathcal{Z}})$. (Here, $n=|V({\mathbf{G}})|$). \end{theorem} We prove this theorem in the rest of this section, but we first show how Theorems~\ref{thm:main}, \ref{theorem: main-crossing-number} and Corollary~\ref{corollary: main-approx-crossing-number} follow from it. We start by proving Theorem~\ref{thm:main}: we show an efficient randomized algorithm to find a subset $E^*\subseteq E({\mathbf{G}})$ of edges, such that ${\mathbf{G}}\setminus E^*$ is planar, and $|E^*|\leq O((\optcro{G})^5\cdot\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$. We assume that we know the value $\optcro{{\mathbf{G}}}$, by using the standard practice of guessing this value, running the algorithm, and then adjusting the guessed value accordingly. It is enough to ensure that, whenever the guessed value $\mathsf{OPT}\geq \optcro{{\mathbf{G}}}$, the algorithm indeed returns a subset $E^*$ of edges, $|E^*|\leq O(\mathsf{OPT}^{5}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$, such that ${\mathbf{G}}\setminus E^*$ is a planar graph w.h.p. Therefore, from now on we assume that we are given a value $\mathsf{OPT}\geq \optcro{{\mathbf{G}}}$. The algorithm consists of a number of stages. The input to stage $j$ is a contracted graph $\H$, with the corresponding family ${\mathcal{Z}}$ of vertex sets.
In the input to the first stage, $\H={\mathbf{G}}$, and ${\mathcal{Z}}=\emptyset$. In each stage $j$, we run the algorithm from Theorem~\ref{thm: alg summary} on the current contracted graph $\H$, and the family ${\mathcal{Z}}$ of vertex subsets. From Theorem~\ref{thm: graph contraction}, there is a strong feasible solution to problem $\pi(\H,\emptyset,{\mathcal{Z}})$ of cost $O(\mathsf{OPT} \cdot\operatorname{poly}(\log n\cdot d_{\mbox{\textup{\footnotesize{max}}}}))$, and so we can set the parameter $\mathsf{OPT}'$ to this value. Whenever the algorithm returns a nasty canonical set $R$ in graph $\H$, we terminate the current stage, and compute a new contracted graph $\H'$, guaranteed by Theorem~\ref{thm: nasty canonical set to contraction}. Graph $\H'$, together with the corresponding family ${\mathcal{Z}}'$ of vertex subsets, becomes the input to the next stage. Alternatively, if, after $\operatorname{poly}(n)$ executions of the algorithm from Theorem~\ref{thm: alg summary}, no nasty canonical set is returned, then with high probability, one of the algorithm executions has returned a weak feasible solution $E^*$, $|E^*|\leq O(\mathsf{OPT}^{5}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot\log n))$ for problem $\pi(\H,\emptyset,{\mathcal{Z}})$. From Claim~\ref{claim: enough to solve contracted graph}, we can recover from this solution a planarizing set $E^{**}$ of edges for graph ${\mathbf{G}}$, with $|E^{**}|=O(\mathsf{OPT}^{5}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot\log n))$. Since the size of the contracted graph $\H$ goes down after each contraction step, the number of stages is bounded by $n$, thus implying Theorem~\ref{thm:main}. Combining Theorem~\ref{thm:main} with Theorem~\ref{thm:CMS10} immediately gives Theorem~\ref{theorem: main-crossing-number}. Finally, we obtain Corollary~\ref{corollary: main-approx-crossing-number} as follows. 
Recall that the algorithm of Even et al.~\cite{EvenGS02} computes a drawing of any $n$-vertex bounded-degree graph ${\mathbf{G}}$ with $O(\log^2 n) \cdot (n+\optcro{G})$ crossings. It was shown in~\cite{CMS10} that this algorithm can be extended to arbitrary graphs, where the number of crossings becomes $O(\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}})\cdot \log^2 n) \cdot (n+\optcro{G})$. We run their algorithm, and the algorithm presented in this section, on graph ${\mathbf{G}}$, and output the better of the two solutions. If $\optcro{{\mathbf{G}}}<n^{1/10}$, then our algorithm is an $O(n^{9/10}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$-approximation; otherwise, the algorithm of~\cite{EvenGS02} gives an $O(n^{9/10}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$-approximation. The remainder of this section is devoted to proving Theorem~\ref{thm: alg summary}. Recall that we are given the contracted graph $\H$, and a collection ${\mathcal{Z}}$ of vertex-disjoint subsets of $V(\H)$. For each $Z\in {\mathcal{Z}}$, $\H[Z]$ is a grid, and $E(Z,V(\H)\setminus Z)$ consists of a set $M_Z$ of matching edges. Each such edge connects a vertex in the first row of $Z$ to a distinct vertex in $T_{\H}(Z)$, and these edges form a matching between the first row of $Z$ and $T_{\H}(Z)$. Abusing notation, we denote the bound on the cost of the strong optimal solution to $\pi(\H,\emptyset,{\mathcal{Z}})$ by $\mathsf{OPT}$ from now on, and the number of vertices in $\H$ by $n$. For each $Z\in {\mathcal{Z}}$, we use $Z$ to denote both the set of vertices itself, and the grid $\H[Z]$.
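The case analysis behind the approximation guarantee above reduces to elementary arithmetic: taking the better of the $\mathsf{OPT}^5$ bound and the $O(n+\mathsf{OPT})$ bound is always within roughly $n^{9/10}$ of $\mathsf{OPT}$. A standalone numerical check (Python, with polylog and $d_{\mbox{\textup{\footnotesize{max}}}}$ factors suppressed; the helper name is ours):

```python
def approx_ratio(n, opt):
    """Ratio to OPT of the better of the two bounds: our OPT^5 bound and the
    (n + OPT) bound of the Even et al. algorithm (polylog factors dropped)."""
    return min(opt ** 5, n + opt) / opt

# The worst case sits around opt ~ n^{1/5}, where the ratio is about n^{4/5},
# comfortably below the claimed n^{9/10} (the slack absorbs polylog factors).
n = 10 ** 10
for opt in (1, 10, int(n ** 0.1), int(n ** 0.2), int(n ** 0.5), n):
    assert approx_ratio(n, opt) <= 2 * n ** 0.9
```
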
We assume throughout the rest of the section that $\mathsf{OPT}\cdot d_{\mbox{\textup{\footnotesize{max}}}}^6< \sqrt n$: otherwise, if $\mathsf{OPT}\cdot d_{\mbox{\textup{\footnotesize{max}}}}^6\geq \sqrt n$, then the set $E'$ of all edges of $\H$ that do not participate in grids $Z\in {\mathcal{Z}}$ is a feasible weak canonical solution for problem $\pi(\H,\emptyset,{\mathcal{Z}})$. It is easy to see that $|E'|\leq O(\mathsf{OPT}^2\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}))$: this is clearly the case if $|E'|\leq 4n$, since our assumption then implies $n\leq \mathsf{OPT}^2\cdot d_{\mbox{\textup{\footnotesize{max}}}}^{12}$; otherwise, if $|E'|>4n$, then by Theorem~\ref{thm: large average degree large crossing number}, $\mathsf{OPT}=\Omega(n)$, and so $|E'|=O(n^2)=O(\mathsf{OPT}^2)$. We use two parameters: $\rho=O(\mathsf{OPT}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$ and $m^*=O(\mathsf{OPT}^3\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n))$, whose exact values we set later. The algorithm consists of $2\rho\log n$ iterations. The input to iteration $h$ is a collection $G_1,\ldots,G_{k_h}$ of $k_h\leq \mathsf{OPT}$ sub-graphs of $\H$, together with bounding boxes $X_i\subseteq G_i$ for all $1\leq i\leq k_h$. We denote $H_i=G_i\setminus V(X_i)$ and $n(H_i)=|V(H_i)|$. Additionally, we have collections $\edges1,\ldots,\edges{h-1}$ of edges of $\H$, where for each $1\leq h'\leq h-1$, set $\edges{h'}$ has been computed in iteration $h'$. We say that $(G_1,X_1),\ldots,(G_{k_h},X_{k_h})$, and $\edges1,\ldots,\edges{h-1}$ is a \emph{valid} input to iteration $h$, iff the following invariants hold: \label{start invariants for the whole algorithm-------------------------------------------------------------} \begin{properties}{V} \item For all $1\leq i<j\leq k_h$, graphs $H_i$ and $H_j$ are completely disjoint. \label{invariant 1: disjointness} \item For all $1\leq i\leq k_h$, $G_i\subseteq \H\setminus (\edges 1\cup\cdots\cup\edges{h-1})$, and $H_i$ is the sub-graph of $\H$ induced by $V(H_i)$.
In particular, no edge $e$ with both endpoints in $V(H_i)$ belongs to $\edges 1\cup\cdots\cup\edges{h-1}$. Moreover, every edge $e\in E(\H)$ belongs to either $\bigcup_{h'=1}^h\edges{h'}$ or to $\bigcup_{i=1}^{k_h}G_i$. \label{invariant 2: proper subgraph} \item For all $Z\in {\mathcal{Z}}$, for all $1\leq i\leq k_h$, either $Z\cap V(H_i)=\emptyset$, or $Z\subseteq V(H_i)$. Let ${\mathcal{Z}}_i=\set{Z\in {\mathcal{Z}}\mid Z\subseteq V(H_i)}$.\label{invariant 2: canonical} \item For all $1\leq i\leq k_h$, there is a strong solution $\phi_i$ to $\pi(G_i,X_i,{\mathcal{Z}}_i)$, with $\sum_{i=1}^{k_h}\operatorname{cr}_{\phi_i}(G_i)\leq \mathsf{OPT}$. \label{invariant 3: there is a cheap solution} \item If we are given {\bf any} weak solution $E_i'$ to problem $\pi(G_i,X_i,{\mathcal{Z}}_i)$, for each $1\leq i\leq k_h$, and denote $\tilde{E}^{(h)}=\bigcup_{i=1}^{k_h}E_i'$, then $\edges1\cup\cdots\cup\edges {h-1}\cup \tilde{E}^{(h)}$ is a feasible weak solution to problem $\pi(\H,\emptyset,{\mathcal{Z}})$.\label{invariant 4: any weak solution is enough} \item For each $1\leq h'<h$, and $1\leq i\leq k_h$, the number of edges in $\edges{h'}$ incident on vertices of $H_i$ is at most $m^*$, and $|\edges{h'}|\leq \mathsf{OPT} \cdot m^*$. Moreover, no edges in grids $Z\in {\mathcal{Z}}$ belong to $\bigcup_{h'=1}^{h-1}\edges {h'}$.\label{invariant 4.5: number of edges removed so far} \item Let $n_h=(1-1/\rho)^{(h-1)/2}\cdot n$. For each $1\leq i\leq k_h$, either $n(H_i)\leq n_h$, or $X_i=\emptyset$ and $n(H_i)\leq n_{h-1}$. \label{invariant 5: bound on size} \end{properties} \label{end invariants for the whole algorithm-------------------------------------------------------------} The input to the first iteration consists of a single graph, $G_1=\H$, with the bounding box $X_1=\emptyset$. It is easy to see that all invariants hold for this input. We end the algorithm at iteration $h^*$, where $n_{h^*}\leq (m^*\cdot \rho\cdot \log n)^2$.
By Invariant~(\ref{invariant 5: bound on size}), $h^*\leq 2\rho\log n$. Let ${\mathcal{G}}$ be the set of all instances that serve as input to iteration $h^*$. We need the following theorem, whose proof appears in the Appendix. \begin{theorem}\label{thm: stopping condition} There is an efficient algorithm that, given any problem $\pi(G,X,{\mathcal{Z}}')$, where $V(G\setminus X)$ is canonical for ${\mathcal{Z}}'$, and $\pi(G,X,{\mathcal{Z}}')$ has a strong solution of cost $\overline{\mathsf{OPT}}$, finds a weak feasible solution to $\pi(G,X,{\mathcal{Z}}')$ of cost $O(\overline{\mathsf{OPT}}\cdot \sqrt{n'}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n')+\overline{\mathsf{OPT}}^3)$, where $n'=|V(G\setminus X)|$, and $d_{\mbox{\textup{\footnotesize{max}}}}$ is the maximum degree in $G$. \end{theorem} For each $1\leq i\leq k_{h^*}$, let $\edges{h^*}_i$ be the weak solution from Theorem~\ref{thm: stopping condition}, and let $\edges{h^*}=\bigcup_{i=1}^{k_{h^*}}\edges{h^*}_i$. Let $\mathsf{OPT}_i$ denote the cost of the strong optimal solution to $\pi(G_i,X_i,{\mathcal{Z}}_i)$. Then $|\edges{h^*}|=\sum_{i=1}^{k_{h^*}}O(\mathsf{OPT}_i\cdot \sqrt{n(H_i)}\cdot\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)+\mathsf{OPT}_i^3)$. Since $n(H_i)\leq n_{h^*-1}\leq 2n_{h^*}$ for all $i$, this is bounded by $\sum_{i=1}^{k_{h^*}}O(\mathsf{OPT}_i\cdot m^*\cdot \rho\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\log n)+\mathsf{OPT}_i^3)\leq O(\mathsf{OPT}\cdot m^*\cdot \rho\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \log n)+\mathsf{OPT}^3)$, as $\sum_{i=1}^{k_{h^*}}\mathsf{OPT}_i\leq \mathsf{OPT}$ from Invariant~(\ref{invariant 3: there is a cheap solution}).
The final solution is $E^*=\bigcup_{h=1}^{h^*}\edges{h}$, and \[\begin{split} |E^*|&\leq \sum_{h=1}^{h^*-1}|\edges{h}|+|\edges{h^*}|\\ &\leq (2\rho\log n)(\mathsf{OPT}\cdot m^*)+O(\mathsf{OPT}\cdot m^*\cdot \rho\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)+\mathsf{OPT}^3)\\ &=O(\mathsf{OPT}^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)). \end{split}\] We say that the execution of iteration $h$ is \emph{successful} iff it either produces a valid input to the next iteration, together with the set $\edges{h}$ of edges, or finds a nasty canonical set in $\H$. We show how to execute each iteration, so that it is successful with probability at least $(1-1/\rho)$, provided that all previous iterations were successful. If any iteration returns a nasty canonical set, then we stop the algorithm and return this vertex set as an output. Since there are at most $2\rho\log n$ iterations, the probability that all iterations are successful is at least $(1-1/\rho)^{2\rho\log n}\geq 1/\operatorname{poly}(n)$. In order to complete the proof of Theorem~\ref{thm: alg summary}, it is now enough to show an algorithm for executing each iteration, such that, given a valid input to the current iteration, the algorithm either finds a nasty canonical set in $\H$, or returns a valid input to the next iteration, with probability at least $1-\frac 1 \rho$. We do so in the next section. \section{Iteration Execution}\label{sec: iteration} \label{------------------------------------------------iteration execution-----------------------------------------------------------------------} Throughout this section, we denote $n=|V(\H)|$; we denote by $\phi$ the optimal canonical solution for the {\sf Minimum Crossing Number}\xspace problem on $\H$, and by $\mathsf{OPT}$ its cost. We start by setting the values of the parameters $\rho$ and $m^*$. The value of the parameter $\rho$ depends on two other parameters, which we define below.
Specifically, we will define two functions $\lambda: \mathbb{N}\rightarrow {\mathbb R}$ and $N:\mathbb{N}\rightarrow {\mathbb R}$, with \[\lambda(n')=\Omega\left(\frac 1{\log n'\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2}\right )\] and \[N(n')=O(d_{\mbox{\textup{\footnotesize{max}}}}\sqrt{n'\log n'})\] for all $n'>0$. Also, recall that $\alpha^*=\Omega\left(\frac 1 {\log^{3/2}n\cdot \log\log n}\right )$ is the well-linkedness parameter from Theorem~\ref{thm: well-linked-general}. We need the value of $\rho$ to satisfy the following two inequalities: \begin{equation}\label{eq: value of rho 1} \forall 0<n'\leq n\quad \quad \rho>\frac{25\cdot 2^{24}d_{\mbox{\textup{\footnotesize{max}}}}^6\cdot N^2(n')}{n'\cdot \lambda^2(n')\cdot (\alpha^*)^2} \end{equation} \begin{equation}\label{eq: value of rho 2} \forall 0<n'\leq n\quad \quad \rho >\frac{9\mathsf{OPT}}{\lambda(n')} \end{equation} Substituting the values of $N(n'),\lambda(n')$ and $\alpha^*$ in the above inequalities, we get that it is sufficient to set: \[\rho=\Theta(\log n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2)\cdot\max\set{d_{\mbox{\textup{\footnotesize{max}}}}^{10}\log^5 n(\log\log n)^2,\mathsf{OPT}}=O\left (\mathsf{OPT}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\log n)\right ).\] The value of the parameter $m^*$ is: \[m^*=O\left (\frac{\mathsf{OPT}^2\cdot\rho\cdot\log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}\right )=O\left (\mathsf{OPT}^3\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right ).\] We now describe the execution of iteration $h$. Our goal is to either find a nasty canonical subset of vertices in $\H$, or produce a feasible input to the next iteration, $h+1$. Throughout the execution of iteration $h$, we construct a set ${\mathcal{G}}_{h+1}$ of new problem instances, for which Invariants~(\ref{invariant 1: disjointness})--(\ref{invariant 5: bound on size}) hold.
We do not need to worry about the number of instances in ${\mathcal{G}}_{h+1}$ being bounded by $\mathsf{OPT}$: from Invariant~(\ref{invariant 3: there is a cheap solution}), the number of instances in ${\mathcal{G}}_{h+1}$ that do not have a solution of cost $0$ is bounded by $\mathsf{OPT}$, and since we can efficiently identify such instances, only they become the input to the next iteration. We will also gradually construct the set $\edges h$ of edges that we remove from the problem instance in this iteration. The iteration is executed on each one of the graphs $G_i$ separately. We fix one such graph $G_i$, for $1\leq i\leq k_h$, and focus on executing iteration $h$ on $G_i$. We need a few definitions. \begin{definition} Given any graph $H$, we say that a simple path $P\subseteq H$ is a \emph{$2$-path} iff the degrees of all inner vertices of $P$ are $2$. We say that it is a \emph{maximal $2$-path} iff it is not contained in any other $2$-path. \end{definition} \begin{definition} We say that a connected graph $H$ is \emph{rigid} iff either $H$ is a simple cycle, or, after we replace every maximal $2$-path in $H$ with an edge, we obtain a $3$-vertex-connected graph with no self-loops or parallel edges. \end{definition} Observe that if $H$ is rigid, then it has a unique planar drawing. We now define the notion of a valid skeleton. \begin{definition} Assume that we are given an instance $\pi=\pi(G,X,{\mathcal{Z}}')$ of the problem, and let $\phi'$ be the optimal strong solution for this instance. Given a subset $\tilde{E}$ of edges of $G$, and a sub-graph $K\subseteq G$, we say that $K$ is a \emph{valid skeleton} for $\pi,\tilde{E},\phi'$ iff the following conditions hold: \begin{itemize} \item Graph $K$ is rigid, and the edges of $K$ do not participate in crossings in $\phi'$. Moreover, the set $V(K)$ of vertices is canonical for ${\mathcal{Z}}'$. \item $X\subseteq K$, and no edges of $\tilde{E}$ belong to $K$.
\item Every connected component of $G\setminus (K\cup \tilde E)$ contains at most $n_{h+1}$ vertices. \end{itemize} \end{definition} Notice that if $K$ is a valid skeleton, then we can efficiently find the drawing $\phi_K'$ of $K$ induced by $\phi'$ -- this is the unique planar drawing of $K$. Each connected component $C$ of $G\setminus (K\cup \tilde E)$ must then be embedded entirely inside some face $F_C$ of $\phi_K'$. Once we determine the face $F_C$ for each such component $C$, we can solve the problem recursively on these components, where for each component $C$, the bounding box becomes the boundary of $F_C$. This is the main idea of our algorithm. In fact, we will be able to find a valid skeleton $K_i$ for each instance $\pi(G_i,X_i,{\mathcal{Z}}_i)$ and drawing $\phi_i$, for $1\leq i\leq k_h$, w.h.p., but we cannot ensure that this skeleton will contain the bounding box $X_i$. If there is a large collection of edge-disjoint paths connecting $K_i$ to $X_i$ in $G_i$, we can still connect $X_i$ to $K_i$ by choosing a small subset of these paths at random. This gives the desired final valid skeleton that contains $X_i$. However, if there is only a small number of such paths, then we cannot find a single valid skeleton that contains $X_i$ (in particular, it is possible that all edges incident on $X_i$ participate in crossings in $\phi_i$, so such a skeleton does not exist). In this case, we can instead find a small subset $E'_i$ of edges, whose removal disconnects $X_i$ from many vertices of $G_i$. In particular, after we remove $E'_i$ from $G_i$, graph $G_i$ decomposes into two connected components: one containing $X_i$ together with at most $n_{h+1}$ other vertices, and another that does not contain $X_i$. The first component is denoted by $G_i^X$, and the second by $G_i'$. The sub-instance defined by $G_i'$ is now completely disconnected from the rest of the graph, and it has no bounding box, so we can add it directly to ${\mathcal{G}}_{h+1}$.
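Returning to the definition of a rigid graph above: the replacement of maximal $2$-paths by single edges can be sketched as follows. This is a toy illustration under our own naming (`suppress_two_paths` is not part of the algorithm); checking $3$-vertex-connectivity of the result, which rigidity additionally requires, is omitted, and the input is assumed not to be a simple cycle.

```python
def suppress_two_paths(adj):
    """Replace every maximal 2-path by a single edge (the operation used in
    the definition of a rigid graph).  `adj` maps a vertex to a list of its
    neighbours; parallel edges appear as repeated entries.  Assumes the
    input graph is not a simple cycle."""
    adj = {v: list(ns) for v, ns in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) == 2 and v not in adj[v]:
                a, b = adj[v]
                # splice v out: its two neighbours become adjacent
                adj[a] = [b if u == v else u for u in adj[a]]
                adj[b] = [a if u == v else u for u in adj[b]]
                del adj[v]
                changed = True
                break
    return adj

# K4 with the edge (0,1) subdivided by vertex 4: suppressing 4 restores K4.
g = {0: [2, 3, 4], 1: [2, 3, 4], 2: [0, 1, 3], 3: [0, 1, 2], 4: [0, 1]}
h = suppress_two_paths(g)
assert sorted(h) == [0, 1, 2, 3]
assert sorted(h[0]) == [1, 2, 3] and sorted(h[1]) == [0, 2, 3]
```

Since the resulting $K_4$ is $3$-vertex-connected with no self-loops or parallel edges, the subdivided graph in this example is rigid.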
For the sub-instance $G_i^X$, we show that $X_i$ is a valid skeleton. The edges in $E_i'$ are then added to $\edges h$. We now define these notions more formally. Recall that for each $i: 1\leq i\leq k_h$, problem $\pi(G_i,X_i,{\mathcal{Z}}_i)$ is guaranteed to have a strong feasible solution $\phi_i$ of cost at most $\mathsf{OPT}_i$. For each such instance, we will find two subsets of edges $E'_i$ and $E''_i$, where $|E'_i|=O(\mathsf{OPT}^2\cdot \rho\cdot d_{\mbox{\textup{\footnotesize{max}}}})$, and $|E''_i|=O\left (\frac{\mathsf{OPT}^2\cdot\rho\cdot \log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}}}{\alpha^*}\right)$, that will be added to $\edges h$. Assume first that $X_i\neq \emptyset$. Then, by Invariant~(\ref{invariant 5: bound on size}), $|V(G_i\setminus X_i)|\leq n_h$. The graph $G_i\setminus E'_i$ consists of two connected sub-graphs: $G_i^X$, which contains the bounding box $X_i$, and the remaining graph $G'_i$. We will find a subset $E''_i$ of edges and a skeleton $K_i$ for graph $G_i^X$, such that w.h.p., $K_i$ is a valid skeleton for the instance $\pi(G_i^X,X_i,{\mathcal{Z}}_i)$, the set $E''_i$ of edges, and the solution $\phi_i$. Therefore, each one of the connected components of $G_i^X\setminus (K_i\cup E''_i)$ contains at most $n_{h+1}$ vertices. We will process these components, to ensure that we can solve them independently, and then add them to set ${\mathcal{G}}_{h+1}$, where they will serve as input to the next iteration. The remaining graph, $G'_i$, contains at most $n_h$ vertices by Invariant~(\ref{invariant 5: bound on size}), and has no bounding box. So we can add $\pi(G'_i,\emptyset,{\mathcal{Z}}_i)$ to ${\mathcal{G}}_{h+1}$ directly. If $X_i=\emptyset$, then we will ensure that $E'_i=\emptyset$, $G'_i=\emptyset$ and $G_i^X=G_i$. Recall that in this case, from Invariant~(\ref{invariant 5: bound on size}), $|V(G_i)|\leq n_{h-1}$.
We will find a valid skeleton $K_i$ for $\pi(G_i,X_i,{\mathcal{Z}}_i),E''_i,\phi_i$, and then process the connected components of $G_i\setminus (K_i\cup E''_i)$ as in the previous case, before adding them to set ${\mathcal{G}}_{h+1}$. The algorithm consists of three steps. Given a graph $G_i\in \set{G_1,\ldots,G_{k_h}}$ with the bounding box $X_i$, the goal of the first step is to either produce a nasty canonical vertex set in the whole contracted graph $\H$, or to find a $\rho$-balanced $\alpha^*$-well-linked partition $(A,B)$ of $V(G_i)$, where $A$ and $B$ are canonical, and $|E(A,B)|$ is small. The goal of the second step is to find the sets $E'_i,E''_i$ of edges and a valid skeleton $K_i$ for instance $\pi(G^X_i,X_i,{\mathcal{Z}}_i)$. In the third step, we produce a new collection of instances from the connected components of the graphs $G_i\setminus (E''_i\cup K_i)$; these instances, together with the graphs $G_i'$, for $1\leq i\leq k_h$, are then added to ${\mathcal{G}}_{h+1}$, to become the input to the next iteration. \subsection{Step 1: Partition}\label{sec: step 1} Throughout this step, we fix some graph $G\in \set{G_1,\ldots,G_{k_h}}$. We denote by $X$ its bounding box, and let $H^0=G\setminus V(X)$. Notice that graph $H^0$ is not necessarily connected. We denote by $H$ the largest connected component of $H^0$, and by ${\mathcal{H}}$ the set of the remaining connected components. We focus on $H$ only in the current step. Let $n'=|V(H)|$. If $n'\leq (m^*\cdot \rho\cdot \log n)^2$, then we can simply proceed to the third step, as the size of every connected component of $H^0$ is bounded by $n'\leq n_{h^*}\leq n_{h+1}$. We then define $E'=E''=\emptyset$, $G^X=G$, $G'=\emptyset$, and we use $X$ as the skeleton $K$ for $G$. It is easy to see that this is a valid skeleton.
Therefore, we assume from now on that: \begin{equation}\label{eq: upper bound on rho in terms of n'} n'\geq (m^*\cdot \rho\cdot \log n)^2 \end{equation} Recall that from Invariant~(\ref{invariant 2: canonical}), $H$ is canonical w.r.t. ${\mathcal{Z}}$, so we define ${\mathcal{Z}}'=\set{Z\in {\mathcal{Z}}: Z\subseteq H}$. Throughout this step, whenever we say that a set is canonical, we mean that it is canonical w.r.t. ${\mathcal{Z}}'$. Recall that the goal of the current step is to either produce a partition $(A,B)$ of the vertices of $H$, such that $A$ and $B$ are both canonical, the partition is $\rho$-balanced and $\alpha^*$-well-linked, and $|E(A,B)|$ is small, or to find a nasty canonical vertex set in $\H$. In fact, we will define four different cases. The first two cases are the easy cases, for which it is easy to find a suitable skeleton, even though we do not obtain a $\rho$-balanced $\alpha^*$-well-linked bi-partition. The third case will give the desired bi-partition $(A,B)$, and the fourth case will produce a partition with slightly different, but still sufficient, properties. We then show that if none of these four cases happens, then we can find a nasty canonical set in $\H$. The first case is when there is some grid $Z\in {\mathcal{Z}}'$ with $|Z|\geq n'/2$. If this case happens, we continue directly to the second step (this is the simple case where eventually the skeleton will be simply $Z$ itself, after we connect it to the bounding box). In the rest of this step we assume that for each $Z\in {\mathcal{Z}}'$, $|Z|< n'/2$. The initial partition is summarized in the next theorem, whose proof appears in the Appendix. \begin{theorem}\label{thm: initial partition} Assume that for each $Z\in {\mathcal{Z}}'$, $|Z|< n'/2$. Then we can efficiently find a partition $(A,B)$ of $V(H)$, such that: \begin{itemize} \item Both $A$ and $B$ are canonical.
\item $|A|, |B|\geq \lambda n'$, for $\lambda=\Omega\left(\frac{1}{\log n'\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2}\right )$, and $|E(A,B)|\leq O(d_{\mbox{\textup{\footnotesize{max}}}}\sqrt {n'\log n'})$. \item Set $A$ is $\alpha^*$-well-linked. \end{itemize} \end{theorem} We say that Case 2 happens iff $|E(A,B)|\leq \frac{10^7\mathsf{OPT}^2\cdot \rho\cdot\log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}$. If Case 2 happens, we continue directly to Step 2 (this is also a simple case, in which the eventual skeleton is the bounding box $X$ itself, and $E''=E(A,B)$). Let $N=\Theta(d_{\mbox{\textup{\footnotesize{max}}}}\sqrt{n'\log n'})$, so that $|E(A,B)|\leq N$. Notice that set $B$ has property (P1) in $H$, since set $A$ is connected. Our next step is to use Theorem~\ref{thm: well-linked-general} to produce an $\alpha^*$-well-linked decomposition ${\mathcal{C}}$ of $B$, where each set $C\in {\mathcal{C}}$ has property (P1) and is canonical w.r.t. ${\mathcal{Z}}'$, with $\sum_{C\in {\mathcal{C}}}|\operatorname{out}_H(C)|\leq 2N$. It is easy to see that the decomposition gives a slightly stronger property than (P1): namely, for each $C\in {\mathcal{C}}$, for every edge $e\in \operatorname{out}_H(C)$, there is a path $P\subseteq H\setminus C$, connecting $e$ to some vertex of $A$. We will use this property later. We are now ready to define the third case. This case happens if there is some set $C\in {\mathcal{C}}$ with $|C|\geq n'/\rho$. So if Case 3 happens, we have found two disjoint sets $A,C$ of vertices of $H$, with $|A|,|C|\geq n'/\rho$, both sets being canonical w.r.t. ${\mathcal{Z}}'$ and $\alpha^*$-well-linked. In the next lemma, whose proof appears in the Appendix, we show that we can expand this partition to the whole graph $H$.
\begin{lemma}\label{lemma: decomposition for Case 2} If Case 3 happens, then we can efficiently find a partition $(A',B')$ of $V(H)$, such that $|A'|,|B'|\geq n'/\rho$, both sets are canonical w.r.t. ${\mathcal{Z}}'$, and $\alpha^*$-well-linked w.r.t. $\operatorname{out}_H(A'),\operatorname{out}_H(B')$, respectively. \end{lemma} If Case 3 happens, we continue directly to the second step. We assume from now on that Case 3 does not happen. Notice that the above decomposition is done in the graph $H$, that is, the sets $C\in {\mathcal{C}}$ are well-linked w.r.t. $\operatorname{out}_H(C)$, and $\sum_{C\in {\mathcal{C}}}|\operatorname{out}_H(C)|\leq 2N$. Property (P1) is also only ensured for $T_H(C)$, and not necessarily for $T_G(C)$. For each $C\in {\mathcal{C}}$, let $\operatorname{out}^X(C)=\operatorname{out}_G(C)\setminus \operatorname{out}_H(C)$; that is, $\operatorname{out}^X(C)$ contains all edges connecting $C$ to the bounding box $X$. We do not have any bound on the size of $\operatorname{out}^X(C)$, and $C$ is not guaranteed to be well-linked w.r.t. these edges. The purpose of the final partitioning step is to take care of this. This step is only performed if $X\neq \emptyset$, and we perform it on each cluster $C\in {\mathcal{C}}$ separately. We start by setting up an $s$-$t$ min-cut/max-flow instance, as follows. We construct a graph $\tilde{C}$, by starting with $H[C]\cup \operatorname{out}_G(C)$, and identifying all vertices in $T_H(C)$ into a source $s$, and all vertices in $T_G(C)\setminus T_H(C)$ into a sink $t$. Let $F$ be the maximum $s$-$t$ flow in $\tilde{C}$, and let $(\tilde{C}_1,\tilde{C}_2)$ be the corresponding minimum $s$-$t$ cut, with $s\in \tilde{C}_1$ and $t\in \tilde{C}_2$. From Corollary~\ref{corollary: canonical s-t cut}, both $\tilde C_1$ and $\tilde C_2$ are canonical. We let $C_1$ be the set of vertices of $\tilde{C}_1$, excluding $s$, and $C_2$ the set of vertices of $\tilde{C}_2$, excluding $t$.
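The min-cut computation just described can be illustrated on a toy instance. The sketch below is our own illustration (the helper name and example edges are hypothetical; the actual instance $\tilde{C}$ in the proof is built from $H[C]\cup\operatorname{out}_G(C)$): it contracts the source-side terminals into $s$ and the sink-side terminals into $t$, then runs a unit-capacity Edmonds-Karp max-flow, and the vertices reached by the final BFS form the $s$-side of a minimum cut.

```python
from collections import defaultdict, deque

def contracted_min_cut(edges, sources, sinks):
    """Contract `sources` into s and `sinks` into t, then compute a
    unit-capacity max flow with Edmonds-Karp.  Returns the flow value and
    the set of contracted vertices on the s-side of a minimum cut."""
    def contract(v):
        return "s" if v in sources else ("t" if v in sinks else v)

    cap = defaultdict(int)
    nbrs = defaultdict(set)
    for u, v in edges:
        u, v = contract(u), contract(v)
        if u == v:
            continue
        cap[u, v] += 1            # undirected unit edge: capacity 1 each way
        cap[v, u] += 1
        nbrs[u].add(v)
        nbrs[v].add(u)

    flow = 0
    while True:
        parent = {"s": None}      # BFS for an augmenting path
        queue = deque(["s"])
        while queue and "t" not in parent:
            u = queue.popleft()
            for v in nbrs[u]:
                if v not in parent and cap[u, v] > 0:
                    parent[v] = u
                    queue.append(v)
        if "t" not in parent:     # no augmenting path: min cut reached
            return flow, set(parent)
        v = "t"
        while parent[v] is not None:
            u = parent[v]         # push one unit along the path
            cap[u, v] -= 1
            cap[v, u] += 1
            v = u
        flow += 1

# Two edge-disjoint paths between the contracted terminals: flow value 2.
flow, s_side = contracted_min_cut(
    [("x1", "a"), ("a", "y1"), ("x1", "b"), ("b", "y2")],
    sources={"x1"}, sinks={"y1", "y2"})
assert flow == 2 and s_side == {"s"}
```

Since all capacities are integral and unit, the resulting flow decomposes into edge-disjoint paths, mirroring the collection ${\mathcal{P}}$ used in the analysis.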
Notice that both $C_1$ and $C_2$ are also canonical. We say that $C_1$ is a cluster of type $1$, and $C_2$ is a cluster of type $2$. Recall that we have computed a max-flow $F$ connecting $s$ to $t$ in $\tilde{C}$. Since all capacities are integral, and all capacities of edges in $H[C]$ are unit, $F$ consists of a collection ${\mathcal{P}}$ of edge-disjoint paths in the graph $H[C]\cup\operatorname{out}_G(C)$. Each such path $P$ connects an edge in $\operatorname{out}_H(C)$ to an edge in $\operatorname{out}^X(C)$. Path $P$ consists of two consecutive segments: one is completely contained in $C_1$, and the other is completely contained in $C_2$. If the first segment is non-empty, then it defines a path $P_1\subseteq H[C_1]\cup \operatorname{out}_G(C_1)$, connecting an edge in $\operatorname{out}_H(C)$ to an edge in $E(\tilde C_1,\tilde C_2)$. Similarly, if the second segment is non-empty, then it defines a path $P_2\subseteq H[C_2]\cup \operatorname{out}_G(C_2)$, connecting an edge in $E(\tilde C_1,\tilde C_2)$ to an edge in $\operatorname{out}^X(C)$. Every edge in $E(C_1,C_2)$ participates in one such path $P_1\subseteq H[C_1]\cup\operatorname{out}_G(C_1)$, and one such path $P_2\subseteq H[C_2]\cup\operatorname{out}_G(C_2)$. Similarly, if $e\in \operatorname{out}^X(C)\cap \operatorname{out}_G(C_1)$, then it is also an endpoint of exactly one path $P_1\subseteq H[C_1]\cup\operatorname{out}_G(C_1)$, and if $e\in \operatorname{out}_G(C_2)\setminus \operatorname{out}^X(C)$, then it is an endpoint of exactly one such path $P_2\subseteq H[C_2]\cup\operatorname{out}_G(C_2)$. \begin{figure}[h] \scalebox{0.5}{\rotatebox{0}{\includegraphics{step1-last-partition-cut.pdf}}} \caption{Partition of cluster $C$. Edges of $\operatorname{out}_H(C)$ are blue, edges of $\operatorname{out}^X(C)$ are red; edges participating in the min-cut are marked by $*$.
The black edges belong to both $E_2(C_1)$ and $E_1(C_2)$.} \label{fig: step 1 last partition} \end{figure} For the cluster $C_1$, let $E_1(C_1)=\operatorname{out}_H(C_1)\cap \operatorname{out}_H(C)$, and $E_2(C_1)=\operatorname{out}_G(C_1)\setminus \operatorname{out}_H(C)$. All edges in $E_2(C_1)$ belong to either $E(C_1,C_2)$ or $\operatorname{out}^X(C)$. By the above discussion, we have a collection ${\mathcal{P}}(C_1)$ of edge-disjoint paths in $H[C_1]\cup\operatorname{out}_G(C_1)$, each path connecting an edge in $E_1(C_1)$ to an edge in $E_2(C_1)$, such that every edge in $E_2(C_1)$ is an endpoint of a path in ${\mathcal{P}}(C_1)$. An important property of cluster $C_1$ that we will use later is that if $C_1\neq \emptyset$, then $E_1(C_1)\neq \emptyset$. All edges in $E_1(C_1)$ can reach set $A$ in graph $H\setminus C_1$, and all edges in $E_2(C_1)$ can reach the set $V(X)$ of vertices in the graph $G\setminus C_1$. Moreover, if $E_2(C_1)\neq \emptyset$, then there is a path $P(C_1)$, connecting a vertex of $C_1$ to a vertex of $X$, such that $P(C_1)$ only contains vertices of $C_2$. In particular, it does not contain vertices of any other type-$1$ clusters. Similarly, for the cluster $C_2$, let $E_2(C_2)=\operatorname{out}_G(C_2)\cap \operatorname{out}^X(C)$, and $E_1(C_2)=\operatorname{out}_G(C_2)\setminus \operatorname{out}^X(C)$. All edges in $E_1(C_2)$ belong to either $E(C_1,C_2)$, or to $\operatorname{out}_H(C)$. From the above discussion, we have a set ${\mathcal{P}}(C_2)$ of edge-disjoint paths in $H[C_2]\cup\operatorname{out}_G(C_2)$, each such path connecting an edge in $E_1(C_2)$ to an edge in $E_2(C_2)$, such that every edge in $E_1(C_2)$ is an endpoint of one such path. Let ${\mathcal T}_1$ be the set of all non-empty clusters of type $1$, and ${\mathcal T}_2$ the set of clusters of type $2$. For the case where $X=\emptyset$, all clusters $C\in {\mathcal{C}}$ are type-$1$ clusters, and ${\mathcal T}_2=\emptyset$. We are now ready to define the fourth case.
We say that Case 4 happens iff the clusters in ${\mathcal T}_2$ contain at least $\lambda n'/2$ vertices altogether. Notice that Case 4 can only happen if $X\neq \emptyset$. The proof of the next lemma appears in the Appendix. \begin{lemma}\label{lemma: decomposition for Case 3} If Case 4 happens, then we can find a partition $(A',B')$ of $V(H)$, such that $|A'|,|B'|\geq n'/\rho$, both $A'$ and $B'$ are canonical, and $A'$ is $\alpha^*$-well-linked w.r.t. $E(A',B')$. Moreover, if we denote $\operatorname{out}^X(B')=\operatorname{out}_G(B')\setminus E(A',B')$, then there is a collection ${\mathcal{P}}$ of edge-disjoint paths in graph $H[B']\cup \operatorname{out}_G(B')$, connecting the edges in $E(A',B')$ to edges in $\operatorname{out}^X(B')$, such that each edge $e\in E(A',B')$ is an endpoint of exactly one such path.\end{lemma} We will show below that for Cases 1--4, we can successfully construct a skeleton and produce an input to the next iteration with high probability. In the next theorem, whose proof appears in the Appendix, we show that if none of these cases happens, then we can efficiently find a nasty canonical set. \begin{theorem}\label{thm: case 4} If none of the Cases 1--4 happens, then we can efficiently find a nasty canonical set in the original contracted graph $\H$. \end{theorem} \subsection{Step 2: Skeleton Construction}\label{subsection: skeleton construction} Let $(G,X)\in\set{(G_1,X_1),\ldots,(G_{k_h},X_{k_h})}$, let $\phi'$ be the strong solution to problem $\pi(G,X,{\mathcal{Z}}')$ guaranteed by Invariant~(\ref{invariant 3: there is a cheap solution}), and let $\mathsf{OPT}'$ denote its cost. Recall that $H$ is the largest connected component in $G\setminus X$, and ${\mathcal{Z}}'=\set{Z\in {\mathcal{Z}}: Z\subseteq V(H)}$. We say that an edge $e\in E(G)$ is \emph{good} iff it does not participate in any crossings in $\phi'$. Recall that for each $Z\in {\mathcal{Z}}'$, all edges of $G[Z]$ are good.
In the second step, we define the subsets $E',E''$ of edges and the two sub-graphs $G^X$ and $G'$ of $G$, and we construct a valid skeleton $K$ for $\pi(G^X,X,{\mathcal{Z}}')$, $E''$ and $\phi'$, for Cases 1--4. We define a set $T\subseteq E(G)$ of edges, that we refer to as ``terminals'' for the rest of this section, as follows. For Case 1, $T=\emptyset$. For Case 2, $T=E(A,B)$, where $(A,B)$ is the partition of $H$ from Theorem~\ref{thm: initial partition}. For Cases 3 and 4, $T=E(A',B')$, where $(A',B')$ is the partition of $H$ given by Lemma~\ref{lemma: decomposition for Case 2} or Lemma~\ref{lemma: decomposition for Case 3}, respectively. For convenience, we rename $(A',B')$ as $(A,B)$ for these two cases. Since the partition $(A,B)$ of $H$ is canonical for Cases 2--4, we are guaranteed that $T$ does not contain any edges of grids $Z\in {\mathcal{Z}}'$. The easiest case is Case 2. The skeleton $K$ for this case is simply the bounding box $X$, and we set $E''=T$. Recall that $|T|\leq \frac{10^7\mathsf{OPT}^2\cdot \rho\cdot\log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}$ for this case. Since $|A|,|B|\geq n'/\rho$, it is easy to verify that $X$ is a valid skeleton for $G$, $\phi'$ and $E''$. In particular, $|A|,|B|\leq n'(1-1/\rho)\leq n_{h-1}(1-1/\rho)\leq n_{h+1}$. We set $E'=\emptyset$, $G^X=G$, and $G'=\emptyset$. From now on we focus on Cases 1, 3 and 4. We first build an initial skeleton $K'$ of $G$, and a subset $E''$ of edges, such that $K'$ has all the required properties, except that it is possible that $X\not\subseteq K'$. Specifically, we will ensure that $K'$ only contains good edges, is rigid, and every connected component of $H\setminus (K'\cup E'')$ contains at most $n_{h+1}$ vertices. In the end, we will either connect $K'$ to $X$, or find a small subset $E'$ of edges separating the two sets.
The initial skeleton $K'$ for Case 1 is simply the grid $Z\in{\mathcal{Z}}'$ with $|Z|\geq n'/2$, and we set $E''=\emptyset$. Observe that $K'$ is good, rigid, canonical, and every connected component of $H\setminus K'$ contains at most $n'/2\leq n_{h-1}/2\leq n_{h+1}$ vertices. The construction of the initial skeleton for Cases 3 and 4 is summarized in the next theorem, whose proof is deferred to the Appendix. \begin{theorem}\label{thm: initial skeleton for Cases 3 and 4} Assume that Case 3 or 4 happens. Then we can efficiently construct a skeleton $K'\subseteq G$, such that with probability at least $\left (1-\frac 1{2\rho\cdot \mathsf{OPT}}\right )$, $K'$ is good, rigid, and every connected component of $H\setminus K'$ contains at most $O\left ( \frac{\mathsf{OPT}^2\cdot \rho\cdot \log^2 n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2 \cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}\right )$ terminals. \end{theorem} Let ${\mathcal{C}}$ be the set of all connected components of $H\setminus K'$. Observe that at most one of the components may contain more than $n'/2$ vertices. Let $C$ denote this component (if it exists), and let $E''$ be the set of terminals contained in $C$, $E''=T\cap E(C)$. Let ${\mathcal{C}}'$ be the set of all connected components of $C\setminus E''$. Then for each $C'\in {\mathcal{C}}'$, $|V(C')|\leq n'(1-1/\rho)$ must hold: otherwise, $V(C')$ would contain vertices that belong to both $A$ and $B$, and so $E(C')$ would contain at least one terminal. Therefore, the size of every connected component of $H\setminus (K'\cup E'')$ is bounded by $n'(1-1/\rho)\leq n_{h-1}(1-1/\rho)\leq n_{h+1}$, by Invariant~(\ref{invariant 5: bound on size}). Recall that the terminals do not belong to the grids $Z\in {\mathcal{Z}}'$. Observe that it is possible that $V(K')$ is not canonical. Consider some grid $Z\in{\mathcal{Z}}$, such that $V(Z)\cap V(K')\neq \emptyset$.
If $Z\cap K'$ is a simple path, then we will deal with such grids at the end of the third step. Let ${\mathcal{Z}}''(G)$ denote the set of all such grids. Assume now that $Z\cap K'$ is not a simple path. Since graph $K'$ is rigid, it must be the case that at least three matching edges from $\operatorname{out}_G(Z)$ belong to $K'$. In this case, we can simply add the whole grid $Z$ to the skeleton $K'$: the new skeleton $K'$ remains good and rigid, and every connected component of $H\setminus (K'\cup E'')$ still contains at most $n_{h+1}$ vertices. So from now on we assume that if $V(Z)\cap V(K')\neq \emptyset$ for some $Z\in {\mathcal{Z}}$, then $Z\cap K'$ is a simple path, and so $Z\in {\mathcal{Z}}''(G)$. We denote by $K^+$ the union of $K'$ with all the grids in ${\mathcal{Z}}''(G)$. Clearly, $K^+$ is connected and canonical, but it is not necessarily rigid. Consider Cases 1, 3 and 4. If $X=\emptyset$, then we define $E'=\emptyset$, $G^X=G$, $G'=\emptyset$, and the final skeleton $K=K'$. It is easy to see that $K$ is a valid skeleton for $\pi(G^X,X,{\mathcal{Z}}'\setminus{\mathcal{Z}}''(G))$, $E''$ and $\phi'$. Otherwise, if $X\neq \emptyset$, we now try to connect the skeleton $K'$ to the bounding box $X$ (observe that some of the vertices of $X$ may already belong to $K'$). In order to do so, we will try to find a set ${\mathcal{P}}'$ of $24\mathsf{OPT}^2\rho$ vertex-disjoint paths in $G\setminus E''$, connecting the vertices of $X$ to the vertices of $K^+$ (where some of these paths can simply be vertices in $X\cap K^+$). We distinguish between three cases. The first case is when such a collection of paths does not exist in $G\setminus E''$. Then there must be a set $V'\subseteq V(G)$ of at most $24\mathsf{OPT}^2\rho$ vertices, whose removal from $G\setminus E''$ separates $X$ from $K^+$.
Therefore, the size of the edge min-cut separating $X$ from $K^+\setminus X$ in $G\setminus E''$ is at most $24\mathsf{OPT}^2\rho\cdot d_{\mbox{\textup{\footnotesize{max}}}}$. Observe that both $K^+$ and $X$ are canonical w.r.t. ${\mathcal{Z}}'$, and the vertices in $V(X)\cap V(K^+)$ cannot belong to sets $Z\in {\mathcal{Z}}'$, by the definition of ${\mathcal{Z}}'$. Therefore, from Corollary~\ref{corollary: canonical s-t cut}, there is a subset $E'$ of at most $24\mathsf{OPT}^2\rho\cdot d_{\mbox{\textup{\footnotesize{max}}}}$ edges (a canonical edge min-cut), whose removal partitions graph $G\setminus E''$ into two connected sub-graphs: $G^X$, containing $X$, and $G'=G\setminus V(G^X)$. Moreover, $V(G^X)$ and $V(G')$ are both canonical, and the edges of $E'$ do not belong to any grids $Z\in {\mathcal{Z}}'$. We add the instance $\pi(G',\emptyset,{\mathcal{Z}}')$ directly to ${\mathcal{G}}_{h+1}$. From Invariant~(\ref{invariant 5: bound on size}), since $X\neq \emptyset$, $|V(G')|\leq n_h$, and since the bounding box of the new instance is $\emptyset$, it is a valid input to the next iteration. For graph $G^X$, we use $X$ as its skeleton. Observe that every connected component of $G^X\setminus (X\cup E'')$ must either be a sub-graph of some connected component of $H\setminus (K'\cup E'')$ (and then its size is bounded by $n_{h+1}$), or it must belong to ${\mathcal{H}}$ (and then its size is bounded by $n_{h-1}/2\leq n_{h+1}$). Therefore, $X$ is a valid skeleton for $\pi(G^X,X,{\mathcal{Z}}'\setminus {\mathcal{Z}}''(G))$, $E''$, and $\phi'$. The second case is when there is some grid $Z\in {\mathcal{Z}}''(G)$, such that for any collection ${\mathcal{P}}'$ of $24\mathsf{OPT}^2\rho$ vertex-disjoint paths connecting the vertices of $X$ to the vertices of $K^+$ in $G$, at least half the paths contain vertices of $\Gamma(Z)$ as their endpoints. Recall that only $2$ edges of $\operatorname{out}_H(Z)$ belong to $K'$.
Then there is a collection $E'$ of at most $12d_{\mbox{\textup{\footnotesize{max}}}}\mathsf{OPT}^2\rho+2$ edges in $G\setminus E''$, whose removal separates $V(X)\cup Z$ from $V(K^+)\setminus (Z\cup X)$. Again, we can ensure that the edges of $E'$ do not belong to the grids $Z\in {\mathcal{Z}}'$. Let $G^X$ denote the resulting subgraph that contains $X$, and $G'=G\setminus G^X$. Then both $G^X$ and $G'$ are canonical as before, and we can add the instance $\pi(G',\emptyset,{\mathcal{Z}}')$ to ${\mathcal{G}}_{h+1}$, as before. In order to build a valid skeleton for graph $G^X$, we consider the subset ${\mathcal{P}}''\subseteq {\mathcal{P}}'$ of $12\mathsf{OPT}^2\rho$ vertex-disjoint paths, connecting the vertices of $X$ to the vertices of $\Gamma(Z)$, and we randomly choose three such paths. We then let the skeleton $K$ of $G^X$ consist of the union of $X$, $Z$, and the three selected paths. It is easy to see that the resulting graph $K$ is rigid, and with probability at least $(1-\frac 1{2\rho\cdot \mathsf{OPT}})$, it only contains good edges. Moreover, every connected component of $G^X\setminus (K\cup E'')$ is either a sub-graph of a connected component of $H\setminus (K'\cup E'')$ (and may contain at most $n_{h+1}$ vertices), or it belongs to ${\mathcal{H}}^0$ (and then its size is bounded by $n_{h+1}$). Therefore, $K$ is a valid skeleton for $\pi(G^X,X,{\mathcal{Z}}'\setminus {\mathcal{Z}}''(G))$, $E''$, and $\phi'$. The third case is when we can find the desired collection ${\mathcal{P}}'$ of paths, and moreover, for each grid $Z\in {\mathcal{Z}}''(G)$, at most half the paths in ${\mathcal{P}}'$ contain vertices of $\Gamma(Z)$. We then randomly select three paths from ${\mathcal{P}}'$, making sure that at most two paths containing vertices of $\Gamma(Z)$ are selected for any grid $Z\in {\mathcal{Z}}''(G)$. 
Since at most $2\mathsf{OPT}$ of the paths in ${\mathcal{P}}'$ are bad, with probability at least $1-1/(2\mathsf{OPT}\rho)$, none of the selected paths is bad. We then define $K$ to be the union of $K'$, $X$, and the three selected paths. Additionally, if, for some grid $Z\in {\mathcal{Z}}''(G)$, one or two of the selected paths contain vertices in $\Gamma(Z)$, then we remove $Z$ from ${\mathcal{Z}}''(G)$, and add it to $K$. It is easy to verify that the resulting skeleton is rigid, and it only contains good edges. Moreover, every connected component of $G\setminus (K\cup E'')$ is either a sub-graph of a connected component of $H\setminus (K'\cup E'')$, or it is a sub-graph of one of the graphs in ${\mathcal{H}}^0$. In the former case, its size is bounded by $n_{h+1}$ as above, while in the latter case, its size is bounded by $|V(G\setminus X)|/2\leq n_{h-1}/2<n_{h-1}(1-\rho)\leq n_{h+1}$. We set $E'=\emptyset$, $G^X=G$, and $G'=\emptyset$. To summarize this step, we have started with the instance $\pi(G,X,{\mathcal{Z}}')$, and defined two subsets $E',E''$ of edges, with $|E'|\leq O(\mathsf{OPT}^2d_{\mbox{\textup{\footnotesize{max}}}}\rho)$ and $|E''|\leq O\left ( \frac{\mathsf{OPT}^2\cdot \rho\cdot \log^2 n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2 \cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}\right )$, whose removal disconnects $G$ into two connected sub-graphs: $G^X$ containing $X$, and $G'$. Moreover, both sets $V(G^X)$, $V(G')$ are canonical, and $E',E''$ do not contain edges belonging to grids $Z\in{\mathcal{Z}}'$. We have added instance $\pi(G',\emptyset,{\mathcal{Z}}')$ to ${\mathcal{G}}_{h+1}$, and we have defined a skeleton $K$ for $G^X$. We have shown that $K$ is a valid skeleton for $\pi(G^X,X,{\mathcal{Z}}'\setminus {\mathcal{Z}}''(G))$, $E''$, and $\phi'$.
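As a sanity check on the path-sampling argument above, the probability that three uniformly chosen paths avoid all bad ones can be computed exactly and compared against the union bound used here ($1-3\cdot 2\mathsf{OPT}/|{\mathcal{P}}'|$). The following sketch uses hypothetical small numbers, not the paper's actual parameters:

```python
from math import comb

def prob_all_good(total, bad, pick=3):
    """Exact probability that `pick` paths chosen uniformly at random
    (without replacement) out of `total` paths avoid all `bad` ones."""
    return comb(total - bad, pick) / comb(total, pick)

# Hypothetical numbers: 12 paths, 2 of them bad, 3 chosen.
exact = prob_all_good(12, 2)        # C(10,3)/C(12,3) = 6/11
union_bound = 1 - 3 * 2 / 12        # the (weaker) union bound, 1/2
```

The exact value always dominates the union bound, which is the direction the argument in the text needs.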
The probability that this step is successful for a fixed graph $G\in\set{G_1,\ldots,G_{k_h}}$ is at least $(1-1/(\rho\cdot \mathsf{OPT}))$, and so the probability that it is successful across all graphs is at least $(1-1/\rho)$. We can assume w.l.o.g. that every edge in set $E'$ has one endpoint in $G^X$ and one endpoint in $G'$: otherwise, this edge does not separate $G^X$ from $G'$, and can be removed from $E'$. Similarly, we can assume w.l.o.g. that for every edge $e\in E''$, the two endpoints of $e$ either belong to distinct connected components of $G^X\setminus (K\cup E'')$, or one endpoint belongs to $G^X$, and the other to $G'$. We will use these facts later, to claim that Invariant~(\ref{invariant 2: proper subgraph}) holds for the resulting instances. \subsection{Step 3: Producing Input to the Next Iteration}\label{sec: step 3} Recall that so far, for each $1\leq i\leq k_h$, we have found two collections $E_i',E_i''$ of edges, two sub-graphs $G_i^X$ and $G_i'$ with $X_i\subseteq G_i^X$, and a valid skeleton $K_i$ for $\pi(G_i^X,X_i,{\mathcal{Z}}\setminus {\mathcal{Z}}''(G_i))$, $\phi_i$, $E''_i$. The sets $E_i'\cup E_i''$ do not contain any edges of the grids $Z\in {\mathcal{Z}}$, and each edge in $E_i'\cup E''_i$ either connects a vertex of $G_i^X$ to a vertex of $G_i'$, or vertices of two distinct connected components of $G_i^X\setminus (K_i\cup E''_i)$. Recall that $G_i'$ contains at most $n_{h}$ vertices, and there are no edges in $G_i\setminus (E_i'\cup E_{i}'')$ connecting the vertices of $G_i'$ to those of $G_i^X$. Let ${\mathcal{C}}'$ denote the set of all connected components of $G_i^X\setminus (K_i\cup E_i'')$. Then for each $C\in {\mathcal{C}}'$, $|V(C)|\leq n_{h+1}$. Since graph $K_i$ is rigid, we can find the planar drawing $\phi_i(K_i)$ of $K_i$ induced by $\phi_i$ efficiently. Since all edges of $K_i$ are good for $\phi_i$, each connected component $C\in{\mathcal{C}}'$ is embedded inside a single face $F_C^*$ of $\phi_i$. 
Intuitively, we would like to find this face $F_C^*$ for each such connected component $C$, and then solve the problem recursively on $C$, together with the bounding box $\gamma(F_C^*)$ --- the boundary of the face $F_C^*$. Apart from the difficulty in identifying the face $F_C^*$, a problem with this approach is that it is not clear that we can solve the problems induced by different connected components separately. For example, if both $C$ and $C'$ need to be embedded inside the same face $F$, then even if we find weak solutions for problems $\pi(C,\gamma(F),{\mathcal{Z}})$ and $\pi(C',\gamma(F),{\mathcal{Z}}')$, it is not clear that these two solutions can be combined together to give a feasible weak solution for the whole problem, since the drawings of $C\cup \gamma(F)$ and $C'\cup \gamma(F)$ may interfere with each other. We will define below the condition under which the two clusters are considered independent and can be solved separately. We will then find an assignment of each cluster $C$ to one of the faces of $\phi_i(K_i)$, and find a further partition of each cluster $C\in {\mathcal{C}}'$, such that all resulting clusters assigned to the same face are independent, and their corresponding problems can therefore be solved separately. We now focus on some graph $G=G_i^X\in \set{G_1^X,\ldots,G_{k_h}^X}$, and we denote its bounding box by $X$, its skeleton $K_i$ by $K$, and the two sets $E_i',E_i''$ of edges by $E'$ and $E''$ respectively. We let $\phi'$ denote the drawing of $G_i^X$ induced by the drawing $\phi_i$, guaranteed by Invariant~(\ref{invariant 3: there is a cheap solution}). As before, ${\mathcal{C}}'$ is the set of all connected components of $G\setminus (K\cup E'')$. While further partitioning the clusters $C\in{\mathcal{C}}'$ to ensure independence, we may have to remove edges that connect the vertices of $C$ to the skeleton $K$. However, such edges do not strictly belong to the cluster $C$. 
We next perform a simple transformation of the graph $G\setminus (E'\cup E'')$ in order to take care of this technicality. Let $e=(v,x)$ be any edge in $E(G)\setminus (E'\cup E'')$, such that $x\in K$, $v\not\in K$. We add an artificial vertex $z_e$, that subdivides $e$ into two edges: an artificial edge $(x,z_e)$, and a non-artificial edge $(v,z_e)$. We denote $x_{z_e}=x$. Similarly, if $e=(x,x')$ is any edge in $E(G)\setminus (E'\cup E'')$, with $x,x'\in K$, then we add two artificial vertices $z_e,z'_e$, that subdivide $e$ into three edges: artificial edges $(x,z_e)$ and $(z'_e,x')$, and a non-artificial edge $(z_e,z'_e)$. We denote $x_{z_e}=x$, and $x_{z_{e}'}=x'$. If edge $e$ belonged to any grid $Z\in {\mathcal{Z}}$ (which can happen if $Z\in {\mathcal{Z}}''(G)$), then we consider all edges obtained from subdividing $e$ to also be part of $Z$. Let $\tilde{G}$ denote the resulting graph, $\Gamma$ the set of all these artificial vertices, and let $E_{\tilde{G}}(\Gamma,K)$ be the set of all artificial edges in $\tilde G$. Let $\tilde{\phi}$ be the drawing of $\tilde{G}$ induced by $\phi'$. Notice that we can assume w.l.o.g. that the edges of $E_{\tilde G}(\Gamma,K)$ do not participate in any crossings in $\tilde{\phi}$. We use this assumption throughout the current section. For any sub-graph $C$ of $\tilde G\setminus K$, we denote $\Gamma(C)=\Gamma\cap V(C)$, and we let $\operatorname{out}_K(C)$ be the subset of artificial edges adjacent to the vertices of $C$, that is, $\operatorname{out}_K(C)=E_{\tilde G}(\Gamma(C),K)$. We also denote $C^+=C\cup \operatorname{out}_K(C)$, and we let $\delta(C)$ be the set of endpoints of the edges in $\operatorname{out}_K(C)$ that belong to $K$. Let ${\mathcal{C}}$ be the set of all connected components of $\tilde G\setminus K$. We next formally define the notion of independence of clusters.
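The edge-subdivision transformation producing $\tilde G$ can be sketched directly; the following is a simplified stand-alone illustration (function and variable names are ours, and edges are plain vertex pairs), with `x_of` playing the role of the bookkeeping map $z_e\mapsto x_{z_e}$:

```python
def subdivide_skeleton_edges(edges, skeleton):
    """Subdivide every edge touching the skeleton K with artificial
    vertices z_e, recording x_{z_e} (the skeleton endpoint) for each."""
    new_edges, x_of = [], {}
    counter = 0
    for (u, v) in edges:
        if u in skeleton and v in skeleton:
            # Both endpoints in K: two artificial vertices, artificial
            # edges (u,z), (z',v), and a non-artificial edge (z,z').
            z, zp = f"z{counter}", f"z{counter + 1}"
            counter += 2
            x_of[z], x_of[zp] = u, v
            new_edges += [(u, z), (z, zp), (zp, v)]
        elif u in skeleton or v in skeleton:
            # One endpoint x in K: one artificial vertex z_e, an
            # artificial edge (x,z_e) and a non-artificial edge (z_e,v).
            x, w = (u, v) if u in skeleton else (v, u)
            z = f"z{counter}"
            counter += 1
            x_of[z] = x
            new_edges += [(x, z), (z, w)]
        else:
            new_edges.append((u, v))
    return new_edges, x_of
```

For example, subdividing the edges $\set{(a,b),(b,c)}$ against the skeleton $\set{a}$ replaces $(a,b)$ by an artificial edge $(a,z_0)$ and a non-artificial edge $(z_0,b)$, and leaves $(b,c)$ untouched.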
Eventually, we will find a further partition of each one of the clusters $C\in {\mathcal{C}}$, so that the resulting clusters are independent, and can be solved separately in the next iteration. Let $\phi_K'$ be the drawing of $K$ induced by $\phi'$. Recall that this is the unique planar drawing of $K$, which can be found efficiently. Let ${\mathcal{F}}$ be the set of faces of $\phi_K'$. For each face $F\in {\mathcal{F}}$, let $\gamma(F)$ denote the set of edges and vertices lying on its boundary. Since $K$ is rigid, $\gamma(F)$ is a simple cycle. Since all edges of $K$ are good for $\phi'$, for every component $C\in {\mathcal{C}}$, $C^+$ is embedded completely inside some face $F^*_C$ of ${\mathcal{F}}$ in the drawing $\tilde {\phi}$, and so $\delta(C)\subseteq \gamma(F^*_C)$ must hold. Therefore, there are three possibilities. First, there may be a unique face $F_C\in {\mathcal{F}}$, such that $\delta(C)\subseteq \gamma(F_C)$; in this case we say that $C$ is of type 1, and $F_C=F^*_C$ must hold. Second, there may be two faces $F_1(C),F_2(C)$, whose boundaries both contain $\delta(C)$, so that $F^*_C\in\set{F_1(C),F_2(C)}$; in this case we say that $C$ is of type 2. The third possibility is that $|\delta(C)|\leq 1$; in this case we say that $C$ is of type 3, and we can embed $C$ inside any face whose boundary contains the vertex of $\delta(C)$. The embedding of such clusters does not affect other clusters. For convenience, when $C$ is of type 1, we denote $F_1(C)=F_2(C)=F_C$, and if it is of type 3, then we denote $F_1(C)=F_2(C)=F$, where $F$ is any face of ${\mathcal{F}}$ whose boundary contains $\delta(C)$. We now formally define when two clusters $C,C'\in {\mathcal{C}}$ are independent. Let $C,C'\in {\mathcal{C}}$ be any two clusters, such that there is a face $F\in {\mathcal{F}}$, with $\delta(C),\delta(C')\subseteq \gamma(F)$.
The set $\delta(C)$ of vertices defines a partition $\Sigma$ of $\gamma(F)$ into segments, where every segment $\sigma\in \Sigma$ contains two vertices of $\delta(C)$ as its endpoints, and does not contain any other vertices of $\delta(C)$. Similarly, the set $\delta(C')$ of vertices defines a partition $\Sigma'$ of $\gamma(F)$. \begin{definition} We say that the two clusters $C,C'$ are \emph{independent}, iff $\delta(C)$ is completely contained in some segment $\sigma'\in \Sigma'$. Notice that in this case, $\delta(C')$ must also be completely contained in some segment $\sigma\in \Sigma$. \end{definition} Our goal in this step is to assign to each cluster $C\in {\mathcal{C}}$, a face $F(C)\in\set{F_1(C),F_2(C)}$, and to find a partition ${\mathcal{Q}}(C)$ of the vertices of the cluster $C$. Intuitively, each such cluster $Q\in{\mathcal{Q}}(C)$ will become an instance in the input to the next iteration, with $\gamma(F(C))$ as its bounding box. Suppose we are given such an assignment $F(C)$ of faces, and the partition ${\mathcal{Q}}(C)$ for each $C\in {\mathcal{C}}$. We will use the following notation. For each $C\in {\mathcal{C}}$, let $E^*(C)$ denote the set of edges cut by ${\mathcal{Q}}(C)$, that is, $E^*(C)=\bigcup_{Q\neq Q'\in {\mathcal{Q}}(C)}E_{\tilde{G}}(Q,Q')$, and let $E^*=\bigcup_{C\in {\mathcal{C}}}E^*(C)$. For each $Q\in {\mathcal{Q}}(C)$, we denote by $X_Q=\gamma(F(C))$, the boundary of the face inside which $C$ is to be embedded. For each face $F\in {\mathcal{F}}$, we denote by ${\mathcal{Q}}(F)=\bigcup_{C:F(C)=F}{\mathcal{Q}}(C)$ the set of all clusters to be embedded inside $F$, and we denote by ${\mathcal{Q}}=\bigcup_{C\in {\mathcal{C}}}{\mathcal{Q}}(C)$. Abusing the notation, for each cluster $Q\in {\mathcal{Q}}$, we will refer to $Q$ both as the set of vertices, and as the sub-graph $\tilde G[Q]$ induced by it. As before, we denote $Q\cup \operatorname{out}_K(Q)$ by $Q^+$. 
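The independence condition just defined is a non-interleaving test on the cycle $\gamma(F)$, and can be checked directly. The following sketch is a simplified stand-alone illustration (all names are ours): the boundary cycle is given as a vertex list in circular order, and the attachment sets $\delta(C),\delta(C')$ as plain sets:

```python
def segments(boundary, attach):
    """Split the circular boundary at the attachment vertices, yielding
    the closed arcs (segments) between consecutive attachments."""
    idx = sorted(boundary.index(v) for v in attach)
    n = len(boundary)
    segs = []
    for a, b in zip(idx, idx[1:] + [idx[0] + n]):
        # Closed arc from attachment a to attachment b, endpoints included.
        segs.append([boundary[i % n] for i in range(a, b + 1)])
    return segs

def independent(boundary, delta_c, delta_cp):
    """C and C' are independent iff delta(C) lies inside a single
    segment of the partition of the boundary induced by delta(C')."""
    if len(delta_cp) < 2:
        return True
    return any(set(delta_c) <= set(seg) for seg in segments(boundary, delta_cp))
```

On an 8-cycle $0,1,\dots,7$, the attachment sets $\set{1,2}$ and $\set{0,3}$ are independent, while the interleaved sets $\set{1,5}$ and $\set{0,3}$ are not; note also the symmetry stated in the definition.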
The next theorem shows that it is enough to find an assignment of every cluster $C\in {\mathcal{C}}$ to a face $F(C)\in \set{F_1(C),F_2(C)}$, and a partition ${\mathcal{Q}}(C)$ of the vertices of $C$, such that all the resulting clusters assigned to every face of ${\mathcal{F}}$ are independent. \begin{theorem}\label{thm: no conflict case} Suppose we are given, for each cluster $C\in {\mathcal{C}}$, a face $F(C)\in \set{F_1(C),F_2(C)}$, and a partition ${\mathcal{Q}}(C)$ of the vertices of $C$. Moreover, assume that for every face $F\in {\mathcal{F}}$, every pair $Q,Q'\in {\mathcal{Q}}(F)$ of clusters is independent, and for each $Z\in {\mathcal{Z}}$, $E^*\cap E(Z)=\emptyset$. Then: \begin{itemize} \item For each $Q\in {\mathcal{Q}}$, there is a strong solution to the problem $\pi(Q^+\cup X_Q,X_Q,{\mathcal{Z}})$, such that the total cost of these solutions, over all $Q\in {\mathcal{Q}}$, is bounded by $\operatorname{cr}_{\tilde \phi}(\tilde G)\leq \operatorname{cr}_{\phi'}(G)$. \item For each $Q\in {\mathcal{Q}}$, let $E^{**}_Q$ be any feasible weak solution to the problem $\pi(Q^+\cup X_Q,X_Q,{\mathcal{Z}})$, and let $E^{**}=\bigcup_{Q\in {\mathcal{Q}}}E^{**}_Q$. Then $E'\cup E''\cup E^*\cup E^{**}$ is a feasible weak solution to problem $\pi(G,X,{\mathcal{Z}})$. \end{itemize} \end{theorem} We remark that this theorem does not require that the sets $C\in {\mathcal{C}}$ are canonical vertex sets. \begin{proof} Fix some $Q\in {\mathcal{Q}}$, and let $\tilde{\phi}_{Q^+}$ be the drawing of $Q^+\cup X_Q$ induced by $\tilde{\phi}$. Recall that the edges of the skeleton $K$ do not participate in any crossings in $\tilde \phi$, and every pair $Q,Q'\in {\mathcal{Q}}$ of graphs is completely disjoint. Therefore, $\sum_{Q\in {\mathcal{Q}}}\operatorname{cr}_{\tilde \phi_{Q^+}}(Q^+)\leq \operatorname{cr}_{\tilde \phi}(\tilde G)$. Observe that every edge of $\tilde G$ belongs either to $K$, or to $E^*$, or to $Q^+$ for some $Q\in {\mathcal{Q}}$. 
Therefore, it is now enough to show that for each $Q\in {\mathcal{Q}}$, $\tilde \phi_{Q^+}$ is a feasible strong solution to problem $\pi(Q^+\cup X_Q,X_Q,{\mathcal{Z}})$. Since $\phi'$ is canonical, so is $\tilde \phi_{Q^+}$. It now only remains to show that $Q^+$ is completely embedded on one side (that is, inside or outside) of the cycle $X_Q$ in $\tilde \phi_{Q^+}$. Let $C\in {\mathcal{C}}$, such that $Q\in {\mathcal{Q}}(C)$. Recall that $C$ is a connected component of $\tilde G\setminus K$. Since $K$ is good, $C$ is embedded completely inside one face in ${\mathcal{F}}$. In particular, since $X_Q$ is the boundary of one of the faces in ${\mathcal{F}}$, all vertices and edges of $C$ (and therefore of $Q$) are completely embedded on one side of $X_Q$. Therefore, $X_Q$ can be viewed as the bounding box in the embedding $\tilde \phi_{Q^+}$. We now prove the second part of the theorem. For each $Q\in {\mathcal{Q}}$, let $E^{**}_Q$ be any feasible weak solution to the problem $\pi(Q^+\cup X_Q,X_Q,{\mathcal{Z}})$, and let $E^{**}=\bigcup_{Q\in {\mathcal{Q}}}E^{**}_Q$. We first show that $E'\cup E''\cup E^*\cup E^{**}$ is a feasible weak solution to the problem $\pi(\tilde G,X,{\mathcal{Z}})$. Let $F\in {\mathcal{F}}$ be any face of $\phi'_K$. For each $Q\in {\mathcal{Q}}(F)$, let $\tilde Q=Q\setminus E^{**}_Q$, and let $\tilde Q^+=Q^+\setminus E^{**}_Q$. Since $E^{**}_Q$ is a weak solution for instance $\pi(Q^+\cup X_Q,X_Q,{\mathcal{Z}})$, there is a planar drawing $\psi_{Q}$ of $\tilde{Q}^+\cup X_Q$, inside the bounding box $X_Q=\gamma(F)$. It is enough to show that for each face $F\in {\mathcal{F}}$, we can find a planar embedding of graphs $\tilde{Q}^+$, for all $Q\in{\mathcal{Q}}(F)$ inside $\gamma(F)$. Fix an arbitrary ordering ${\mathcal{Q}}(F)=\set{Q_1,\ldots,Q_r}$. We now gradually construct a planar drawing of the graphs $\tilde{Q}_j^+$ inside $\gamma(F)$. For convenience, we will also be adding new artificial edges to this drawing. 
We perform $r$ iterations, and the goal in iteration $j$, for $1\leq j\leq r$, is to add the graph $\tilde{Q}_j^+$ to the drawing. We will maintain the following invariant: at the beginning of every iteration $j$, for each $j'\geq j$, there is a face $F'$ in the current drawing, such that $\delta(Q_{j'})\subseteq \gamma(F')$. In the first iteration, we simply use the drawing $\psi_{Q_1}$ of $\tilde{Q}_1^+\cup \gamma(F)$. The vertices of $\delta(Q_1)$ define a partition $\Sigma_1$ of $\gamma(F)$ into segments, such that every segment contains two vertices of $\delta(Q_1)$ as its endpoints, and no other vertices of $\delta(Q_1)$. For each such segment $\sigma$, we add a new artificial edge $e_{\sigma}$, connecting its endpoints, to the drawing. All such edges can be added without creating any crossings. Since every pair of clusters in ${\mathcal{Q}}(F)$ is independent, for each graph $Q_j$, $j> 1$, the vertices of $\delta(Q_j)$ are completely contained in one of the resulting segments $\sigma\in \Sigma_1$. The face $F'$ of the current drawing, whose boundary consists of $\sigma$ and $e_{\sigma}$, then has the property that $\delta(Q_j)\subseteq \gamma(F')$. Consider now some iteration $j+1$, and let $F'$ be the face of the current drawing, such that $\delta(Q_{j+1})\subseteq \gamma(F')$. We add the drawing $\psi_{Q_{j+1}}$ of $\tilde Q_{j+1}^+\cup \gamma(F)$, with $\gamma(F')$ replacing $\gamma(F)$ as the bounding box. Since $\delta(Q_{j+1})\subseteq \gamma(F')$, we can add this drawing so that no crossings with the edges that already belong to the drawing are introduced. The bounding box $\gamma(F')$ is then sub-divided into the set $\Sigma'$ of sub-segments by the vertices of $\delta(Q_{j+1})$. Again, for each such segment $\sigma'$, we add an artificial edge $e_{\sigma'}$, connecting its endpoints, to the drawing, inside the face $F'$, such that no crossings are introduced.
Since there are no conflicts between clusters in ${\mathcal{Q}}(F)$, for each $Q_{j'}$, with $j'>j+1$, such that $\delta(Q_{j'})\subseteq \gamma(F')$, there is a segment $\sigma'\in \Sigma'$, containing all vertices of $\delta(Q_{j'})$. The corresponding new face $F''$, formed by $\sigma'$ and the edge $e_{\sigma'}$ will then have the property that $\delta(Q_{j'})\subseteq \gamma(F'')$. We have thus shown that $\tilde{G}\setminus (E^*\cup E^{**})$ has a planar drawing. The same drawing induces a planar drawing for $G\setminus (E'\cup E''\cup E^*\cup E^{**})$. \end{proof} In the rest of this section, we will show an efficient algorithm to find the assignment of the faces of ${\mathcal{F}}$ to the clusters $C\in {\mathcal{C}}$, and the partition ${\mathcal{Q}}(C)$ of each such cluster, satisfying the requirements of Theorem~\ref{thm: no conflict case}. Our goal is also to ensure that $|E^*|$ is small, as these edges are eventually removed from the graph. If two clusters $C,C'\in {\mathcal{C}}$, with $\delta(C),\delta(C')\subseteq \gamma(F)$ for some $F\in {\mathcal{F}}$ are not independent, then we say that they have a conflict. The process of partitioning both clusters into sub-clusters to ensure that the sub-clusters are independent is called conflict resolution. The next theorem shows how to perform conflict resolution for a pair of clusters. The proof of this theorem is due to Yury Makarychev~\cite{Yura}. We provide it here for completeness. \begin{theorem}\label{thm: conflict resolution for 2 clusters} Let $C,C'\in {\mathcal{C}}$, such that both $C$ and $C'$ are embedded inside the same face $F\in {\mathcal{F}}$ in $\tilde{\phi}$. Then we can efficiently find a subset $E_{C,C'}\subseteq E(C)$ of edges, $|E_{C,C'}|\leq 30 \operatorname{cr}_{\tilde\phi}(E(C),E(C'))$, such that if ${\mathcal{C}}'$ denotes the collection of all connected components of $C\setminus E_{C,C'}$, then for every cluster $Q\in {\mathcal{C}}'$, $Q$ and $C'$ are independent. 
Moreover, $E_{C,C'}$ does not contain any edges of the grids $Z\in {\mathcal{Z}}$. \end{theorem} \begin{proof} We say that a set $\tilde{E}$ of edges is valid iff it satisfies the condition of the theorem. For simplicity, we will assign weights $w_e$ to edges as follows: edges that belong to grids $Z\in {\mathcal{Z}}$ have infinite weight, and all other edges have weight $1$. We first claim that there is a valid set of weight at most $\operatorname{cr}_{\tilde{\phi}}(C, C')$. Indeed, let $\tilde{E}$ be the set of edges of $C$ that are crossed by the edges of $C'$ in $\tilde \phi$. Clearly, $|\tilde E| \leq \operatorname{cr}_{\tilde{\phi}}(C,C')$, and this set does not contain any edges in grids $Z\in {\mathcal{Z}}$, or edges adjacent to the vertices of $K$ (this was our assumption when we defined $\tilde\phi$). Let ${\mathcal{C}}'$ be the set of all connected components of $C\setminus \tilde E$, and consider some cluster $Q\in {\mathcal{C}}'$. Assume for contradiction that $Q$ and $C'$ are not independent. Then there are four vertices $a,b,c,d\in \gamma(F)$, whose ordering along $\gamma(F)$ is $(a,b,c,d)$, such that $a,c\in \delta(Q)$, while $b,d\in \delta(C')$. But then there must be a path $P\subseteq Q\cup\operatorname{out}_K(Q)$ connecting $a$ to $c$, and a path $P'\subseteq C'\cup \operatorname{out}_K(C')$, connecting $b$ to $d$, as both $Q$ and $C'$ are connected graphs. Moreover, since $Q$ and $C'$ are completely disjoint, the two paths must cross in $\tilde{\phi}$. Recall that we have assumed that the artificial edges adjacent to $K$ do not participate in any crossings in $\tilde{\phi}$. Therefore, the crossing is between an edge of $Q$ and an edge of $C'$. This is impossible, since we have removed all edges that participate in such crossings from $C$. We now show how to \textit{efficiently} find a valid set $\tilde E$ of edges, of weight at most $30 \operatorname{cr}_{\tilde{\phi}}(E(C),E(C'))$.
Let $\Sigma' = \{\sigma'_1,\sigma'_2,\dots, \sigma'_k\}$ be the set of segments of $\gamma(F)$ defined by $\delta(C')$, in the circular order. Throughout the rest of the proof we identify $k+1$ and $1$. Consider the set $\Gamma(C)$ of vertices. We partition this set into a number of subsets, as follows. For $1\leq i\leq k$, let $\Gamma_i\subseteq \Gamma(C)$ denote the subset of vertices $z\in \Gamma(C)$, for which $x_z$ lies strictly inside the segment $\sigma_i'$. Let $\Gamma_{i,i+1}\subseteq \Gamma(C)$ denote the subset of vertices $z\in \Gamma(C)$, for which $x_z$ is the vertex separating segments $\sigma'_i$ and $\sigma'_{i+1}$. We now restate the problem of finding a valid cut $E_{C,C'}$ as an assignment problem. We need to assign each vertex of $C$ to one of the segments $\sigma'_1, \dots, \sigma'_k$ so that \begin{itemize} \item every vertex in $\Gamma_i$ is assigned to the segment $\sigma'_i$; \item every vertex in $\Gamma_{i,i+1}$ is assigned to either $\sigma'_i$ or $\sigma'_{i+1}$. \end{itemize} We say that an edge of $C$ is cut by such an assignment, iff its endpoints are assigned to different segments. Given any such assignment of finite weight, let $\tilde E$ be the set of cut edges. We prove that the set $\tilde E$ is valid. Since the weight of $\tilde E$ is finite, it cannot contain edges of grids $Z\in {\mathcal{Z}}$. Let ${\mathcal{C}}'$ be the collection of all connected components of $C\setminus \tilde E$. It is easy to see that for each $Q\in {\mathcal{C}}'$, $Q$ and $C'$ are independent: the endpoints of the edges of $\operatorname{out}_K(Q)$ that belong to $K$ must all be contained inside a single segment $\sigma'$ of $\Sigma'$. On the other hand, every finite-weight valid set $\tilde E$ of edges corresponds to a valid assignment. Let ${\mathcal{C}}'$ be the set of all connected components of $C\setminus \tilde E$, and let $Q\in {\mathcal{C}}'$.
Since there are no conflicts between $Q$ and $C'$, all vertices of $\delta(Q)$ that serve as endpoints of the set $\operatorname{out}_K(Q)$ of edges, must be contained inside a single segment $\sigma'\in \Sigma'$. If the subset of $\delta(Q)$ contains a single vertex, there can be two such segments of $\Sigma'$, and we choose any one of them arbitrarily; if this subset of $\delta(Q)$ is empty, then we choose an arbitrary segment of $\Sigma'$. We then assign all vertices of $Q$ to $\sigma'$. Since $\tilde E$ does not contain any edges that are adjacent to the vertices of $K$ (as such edges are not part of $E(C)$), we are guaranteed that every vertex in $\Gamma_i$ is assigned to the segment $\sigma'_i$, and every vertex in $\Gamma_{i,i+1}$ is assigned to either $\sigma'_i$ or $\sigma'_{i+1}$, for all $1\leq i\leq k$. We now show how to approximately solve the assignment problem, and therefore the original problem, using linear programming. We will ensure that the weight of the solution $E_{C,C'}$ is at most $30$ times the optimum, and so $|E_{C,C'}|\leq 30 \operatorname{cr}_{\tilde \phi}(E(C),E(C'))$. For each vertex $u$ of $C$ and segment $\sigma'_i$ we introduce an indicator variable $y_{u,i}$, for assigning $u$ to segment $\sigma'_i$. All variables for vertex $u$ form a vector $y_u = (y_{u,1}, \dots, y_{u,k}) \in {\mathbb R}^k$. We denote the standard basis of ${\mathbb R}^k$ by $e_1,\dots, e_k$. In the intended integral solution, $y_u = e_i$ if $u$ is assigned to $\sigma'_i$; that is, $y_{u,i} = 1$ and $y_{u,j} = 0$ for $j\neq i$. Equip the space ${\mathbb R}^k$ with the $\ell_1$ norm $\|y_u\|_1 = \sum_{i=1}^k |y_{u,i}|$. We solve the following linear program. 
\begin{align*} \text{minimize } &&\frac{1}{2} \sum_{e=(u,v)\in E(C)} w_e\cdot \|y_u - y_v\|_1\\ \text{subject to }&& \\ && \|y_u\|_1 = 1 &&& \forall u \in V(C);\\ && y_{u,i} = 1 &&& \forall 1\leq i\leq k, \forall u \in \Gamma_i;\\ && y_{u,i} + y_{u,i+1} = 1 &&& \forall 1\leq i\leq k,\forall u \in \Gamma_{i,i+1};\\ && y_{u,i} \geq 0 &&& \forall u \in V(C), \forall 1\leq i\leq k. \end{align*} Let $\mathsf{OPT}_{LP}$ be the value of the optimal solution of the LP. For all $1\leq i\leq k$ and $r\in (1/2,3/5)$, define balls $B_i^r = \{u: y_{u,i} \geq r\}$ and $B_{i,i+1}^r = \{u: u\not\in B_i^r\cup B_{i+1}^r; y_{u,i}+y_{u,i+1} \geq 5r/3\}$. Note that, since for each $u\in V(C)$ at most one coordinate $y_{u,i}$ can be greater than $\ensuremath{\frac{1}{2}}$, the balls $B_i^r$ and $B_j^r$ are disjoint for all $i\neq j$, whenever $r\geq \ensuremath{\frac{1}{2}}$. Similarly, balls $B_{i,i+1}^r$ and $B_{j,j+1}^{r}$ are disjoint for $i\neq j$ when $r \geq 1/2$: this is since, if $u\in B_{i,i+1}^r$, then $y_{u,i}+y_{u,i+1}\geq 5/6$ must hold, while $y_{u,i},y_{u,i+1}<\ensuremath{\frac{1}{2}}$. Therefore, $y_{u,i},y_{u,i+1}>1/3$ must hold, and there can be at most two coordinates $1\leq j\leq k$, for which $y_{u,j}>1/3$. For each value of $r\in (1/2,3/5)$, we let $E^r$ denote the set of all edges that have exactly one endpoint in some ball $B_i^r$ or $B_{i,i+1}^r$, for $1\leq i\leq k$. We choose $r\in (1/2,3/5)$ that minimizes $|E^r|$, and we let $E_{C,C'}$ denote the set $E^r$ for this value of $r$. We assign all vertices in balls $B_i^r$ and $B_{i,i+1}^r$ to the segment $\sigma'_i$. We assign all unassigned vertices to an arbitrary segment. We need to verify that this assignment is valid; that is, vertices from $\Gamma_i$ are assigned to $\sigma'_i$ and vertices from $\Gamma_{i,i+1}$ are assigned to either $\sigma'_i$ or $\sigma'_{i+1}$, for all $1\leq i\leq k$.
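The threshold-rounding step just described can be sketched as follows. This is a simplified stand-alone illustration (function names are ours): the LP solution is assumed given as a dictionary mapping each vertex to its length-$k$ fractional vector, and a fixed $r$ may be passed in place of the random choice:

```python
import random

def round_lp(y, edges, k, r=None):
    """Round a fractional assignment y (y[u] is a length-k vector with
    entries summing to 1): u joins B_i^r if y[u][i] >= r, else the pair
    ball B_{i,i+1}^r if y[u][i] + y[u][i+1] >= 5r/3 (indices wrap, k+1 = 1).
    Returns the segment assignment and the set of cut edges."""
    if r is None:
        r = random.uniform(0.5, 0.6)    # r drawn from (1/2, 3/5)
    assign = {}
    for u, yu in y.items():
        for i in range(k):
            if yu[i] >= r:                                  # ball B_i^r
                assign[u] = i
                break
        else:
            for i in range(k):
                if yu[i] + yu[(i + 1) % k] >= 5 * r / 3:    # ball B_{i,i+1}^r
                    assign[u] = i
                    break
            else:
                assign[u] = 0   # unassigned vertices: arbitrary segment
    cut = [(u, v) for (u, v) in edges if assign[u] != assign[v]]
    return assign, cut
```

On an integral LP solution the rounding simply reads off the assignment and cuts exactly the edges whose endpoints are assigned to different segments.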
Indeed, if $u\in\Gamma_i$, then $y_{u,i} = 1$, and so $u\in B^r_i$; similarly, if $u\in\Gamma_{i,i+1}$ then $y_{u,i} + y_{u,i+1} = 1$, and so $u\in B^r_i\cup B^r_{i+1}\cup B^r_{i,i+1}$. Finally, we need to show that the cost of the assignment is at most $30 \mathsf{OPT}_{LP}$. In fact, we show that if we choose $r\in (1/2,3/5)$ uniformly at random, then the expected cost is at most $30\mathsf{OPT}_{LP}$. Consider an edge $e=(u,v)$. We first compute the probability that $e$ is cut by the ball $B^r_i$, for each $1\leq i\leq k$. This is the probability that $y_{u,i}\geq r$, but $y_{v,i}<r$ (or vice versa, if $y_{v,i}>y_{u,i}$). This probability is bounded by $10|y_{u,i}-y_{v,i}|$. Similarly, the probability that $u\in B^r_{i,i+1}$ but $v\not \in B^r_{i,i+1}$ is bounded by the probability that $y_{u,i}+y_{u,i+1}\geq 5r/3$, but $y_{v,i}+y_{v,i+1}< 5r/3$, or vice versa. This probability is at most $6\cdot \frac 5 3 \left|(y_{u,i}+y_{u,i+1})-(y_{v,i}+y_{v,i+1})\right|\leq 10(|y_{u,i}-y_{v,i}|+|y_{u,i+1}-y_{v,i+1}|)$. Therefore, overall, the probability that $e=(u,v)$ belongs to the cut is at most: \[\sum_{i=1}^k10 |y_{u,i}-y_{v,i}|+\sum_{i=1}^k 10(|y_{u,i}-y_{v,i}|+|y_{u,i+1}-y_{v,i+1}|)\leq 10 \norm{y_u-y_v}_1+20\norm{y_u-y_v}_1=30\norm{y_u-y_v}_1.\] \end{proof} We now show how to find the assignment $F(C)$ of faces of ${\mathcal{F}}$ to all clusters $C\in {\mathcal{C}}$, together with the partition ${\mathcal{Q}}(C)$ of the vertices of $C$. We will reduce this problem to an instance of the min-uncut problem. Recall that the input to the min-uncut problem is a collection $X$ of Boolean variables, together with a collection $\Psi$ of constraints. Each constraint $\psi\in \Psi$ has non-negative weight $w_{\psi}$, and involves exactly two variables of $X$. All constraints $\psi\in \Psi$ are required to be of the form $x\neq y$, for $x,y\in X$. The goal is to find an assignment to all variables of $X$, minimizing the total weight of unsatisfied constraints. Agarwal et al.~\cite{ACMM} have shown an $O(\sqrt{\log n})$-approximation algorithm for the min-uncut problem. Fix any pair $C,C'\in {\mathcal{C}}$ of clusters, and a face $F\in {\mathcal{F}}$, such that $\delta(C),\delta(C')\subseteq \gamma(F)$. Let $E'_{C,C'}$ denote the union of the sets $E_{C,C'}$ and $E_{C',C}$ of edges from Theorem~\ref{thm: conflict resolution for 2 clusters}, and let $w_{C,C'}=|E'_{C,C'}|$; notice that $w_{C,C'}\leq 60\operatorname{cr}_{\tilde{\phi}}(E(C),E(C'))$. For each face $F\in {\mathcal{F}}$, we denote by ${\mathcal{C}}(F)\subseteq {\mathcal{C}}$ the set of all clusters $C\in {\mathcal{C}}$ of the first type, for which $\delta(C)\subseteq \gamma(F)$. Recall that for each such cluster, $F^*_C=F$ must hold. Let $E'(F)=\bigcup_{C,C'\in {\mathcal{C}}(F)}E'_{C,C'}$, and let $w_F=|E'(F)|$. Let ${\mathcal{P}}$ be the set of all maximal $2$-paths in $K$. For every path $P\in {\mathcal{P}}$, we denote by ${\mathcal{C}}(P)\subseteq {\mathcal{C}}$ the set of all type-2 clusters $C$, for which $\delta(C)\subseteq P$. Let $F_1(P),F_2(P)$ be the two faces of $K$, whose boundaries contain $P$. Recall that for each $C\in {\mathcal{C}}(P)$, $F^*_C\in \set{F_1(P),F_2(P)}$. For every $C\in {\mathcal{C}}(P)$ and $F\in \set{F_1(C),F_2(C)}$, let $w_{C,F}=\sum_{C'\in {\mathcal{C}}(F)}w_{C,C'}$. If we decide to assign $C$ to face $F_1(P)$, then we will pay $w_{C,F_1(P)}$ for this assignment, and similarly, if $C$ is assigned to face $F_2(P)$, we will pay $w_{C,F_2(P)}$. We now set up an instance of the min-uncut problem, as follows. The set of variables $X$ contains, for each path $P\in {\mathcal{P}}$, for each $F\in \set{F_1(P),F_2(P)}$, a Boolean variable $y_{P,F}$, and for each path $P\in {\mathcal{P}}$ and cluster $C\in {\mathcal{C}}(P)$, a Boolean variable $y_C$. Intuitively, if $y_C=y_{P,F_1(P)}$, then $C$ is assigned to $F_1(P)$, and if $y_C=y_{P,F_2(P)}$, then $C$ is assigned to $F_2(P)$.
The set $\Psi$ of constraints contains constraints of three types: first, for each path $P\in {\mathcal{P}}$, we have the constraint $y_{P,F_1(P)}\neq y_{P,F_2(P)}$ of infinite weight. Second, for each $P\in {\mathcal{P}}$ and each pair $C,C'\in {\mathcal{C}}(P)$ of clusters, there is a constraint $y_C\neq y_{C'}$, of weight $w_{C,C'}$. Finally, for each $P\in {\mathcal{P}}$, $F\in \set{F_1(P),F_2(P)}$, and each $C\in {\mathcal{C}}(P)$, we have a constraint $y_C\neq y_{P,F}$ of weight $w_{C,F}$. \begin{claim} There is a solution to the min-uncut problem, whose cost, together with $\sum_{F\in {\mathcal{F}}}w_F$, is bounded by $60\operatorname{cr}_{\tilde{\phi}}(G)$. \end{claim} \begin{proof} We simply consider the optimal solution $\tilde{\phi}$. For each path $P\in {\mathcal{P}}$, we assign $y_{P,F_1(P)}=0$ and $y_{P,F_2(P)}=1$. For each cluster $C\in {\mathcal{C}}(P)$, if $F^*_C=F_1(P)$, then we set $y_C=y_{P,F_1(P)}$, and otherwise we set $y_C=y_{P,F_2(P)}$. From Theorem~\ref{thm: conflict resolution for 2 clusters}, for every pair $C,C'$ of clusters with $F^*_C=F^*_{C'}$, $w_{C,C'}\leq 60\operatorname{cr}_{\tilde{\phi}}(E(C),E(C'))$; summing these bounds over all pairs of clusters that share a face gives the claim. \end{proof} We can therefore find an $O(\sqrt{\log n})$-approximate solution to the resulting instance of the min-uncut problem, using the algorithm of~\cite{ACMM}. This solution naturally defines an assignment of faces to clusters. Namely, if $C$ is a type-1 cluster, then we let $F(C)=F$, where $F$ is the unique face with $\delta(C)\subseteq \gamma(F)$. If $C$ is a type-2 cluster, and $C\in {\mathcal{C}}(P)$ for some path $P\in {\mathcal{P}}$, then we assign $C$ to $F_1(P)$ if $y_C=y_{P,F_1(P)}$, and we assign it to $F_2(P)$ otherwise. If $C$ is a type-3 cluster, then we assign it to any face that contains the unique vertex in $\delta(C)$. For each face $F$, let ${\mathcal{C}}'(F)$ denote the set of all clusters $C$ that are assigned to $F$.
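The min-uncut objective used in this reduction is easy to state in code. The following Python sketch (the helper name and the toy instance are illustrative, not from the paper) evaluates the total weight of violated $x\neq y$ constraints for a given Boolean assignment:

```python
# Illustrative sketch (not from the paper): evaluating the min-uncut
# objective.  Each constraint is "x != y" with a non-negative weight; the
# cost of an assignment is the total weight of *violated* constraints,
# i.e., those whose two variables receive equal values.

def uncut_cost(assignment, constraints):
    return sum(w for x, y, w in constraints if assignment[x] == assignment[y])

# Toy instance mimicking the construction: one path P with incident faces
# F1, F2, and two clusters C1, C2 on P.  Weights are made up.
constraints = [
    ("y_P_F1", "y_P_F2", float("inf")),  # the two face variables must differ
    ("y_C1", "y_C2", 3.0),               # w_{C1,C2}
    ("y_C1", "y_P_F1", 1.0),             # w_{C1,F1}
    ("y_C2", "y_P_F2", 2.0),             # w_{C2,F2}
]
# C1 assigned to F2 (y_C1 = y_{P,F2}) and C2 to F1: no constraint violated.
good = uncut_cost({"y_P_F1": 0, "y_P_F2": 1, "y_C1": 1, "y_C2": 0}, constraints)
# C1 and C2 both assigned to F2: pays w_{C1,C2} + w_{C2,F2}.
bad = uncut_cost({"y_P_F1": 0, "y_P_F2": 1, "y_C1": 1, "y_C2": 1}, constraints)
```

As in the construction above, an assignment pays exactly the weights of the conflicting face choices it makes.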
Let $\tilde E(F)$ denote the union of the sets $E'_{C,C'}$ of edges, over all $C,C'\in {\mathcal{C}}'(F)$, and let $\tilde E=\bigcup_{F\in{\mathcal{F}}}\tilde E(F)$. For each cluster $C\in {\mathcal{C}}$, we now obtain a partition ${\mathcal{Q}}'(C)$ of its vertices that corresponds to the connected components of the graph $C\setminus \tilde E$. For each $Q\in {\mathcal{Q}}'(C)$, we let $Q$ denote both the set of vertices of the corresponding connected component of $C\setminus \tilde{E}$, and the sub-graph of $\tilde{G}$ induced by $Q$. From Theorem~\ref{thm: conflict resolution for 2 clusters}, we are guaranteed that for every face $F\in {\mathcal{F}}$ and all $C,C'\in {\mathcal{C}}'(F)$, if $Q\in {\mathcal{Q}}'(C)$ and $Q'\in {\mathcal{Q}}'(C')$, then $Q$ and $Q'$ are independent. It is however possible that for some $C\in {\mathcal{C}}$, there is a pair $Q,Q'\in {\mathcal{Q}}'(C)$ of clusters that are not independent. In order to avoid this, we perform the following grouping procedure: for each $F\in{\mathcal{F}}$ and each $C\in {\mathcal{C}}'(F)$, while there is a pair $Q,Q'\in {\mathcal{Q}}'(C)$ of clusters that are not independent, remove $Q,Q'$ from ${\mathcal{Q}}'(C)$, and replace them with $Q\cup Q'$. For each $C\in {\mathcal{C}}$, let ${\mathcal{Q}}(C)$ be the resulting partition of the vertices of $C$. Clearly, each pair $Q,Q'\in {\mathcal{Q}}(C)$ is independent. \begin{claim} For each $F\in {\mathcal{F}}$, for each pair $C,C'\in {\mathcal{C}}'(F)$ of clusters, and for each $Q\in {\mathcal{Q}}(C), Q'\in {\mathcal{Q}}(C')$, clusters $Q$ and $Q'$ are independent.\end{claim} \begin{proof} Consider the partitions ${\mathcal{Q}}'(C)$, ${\mathcal{Q}}'(C')$, as they change throughout the grouping procedure. Before the start of the grouping procedure, every pair $Q\in {\mathcal{Q}}'(C)$, $Q'\in {\mathcal{Q}}'(C')$ was independent.
Consider the first step in the grouping procedure such that this property held for ${\mathcal{Q}}'(C),{\mathcal{Q}}'(C')$ before the step, but no longer holds after it. Assume w.l.o.g. that the grouping step was performed on a pair $Q_1,Q_2\in {\mathcal{Q}}'(C)$. Since no other clusters in ${\mathcal{Q}}'(C)$ or ${\mathcal{Q}}'(C')$ were changed, there must be a cluster $Q'\in {\mathcal{Q}}'(C')$, such that both pairs $Q_1,Q'$ and $Q_2,Q'$ are independent, but $Q_1\cup Q_2$ and $Q'$ are not independent. We now show that this is impossible. Let $\Sigma$ be the partition of $\gamma(F)$ defined by the vertices of $\delta(Q')$. Since $Q_1$ and $Q'$ are independent, there is a segment $\sigma\in \Sigma$, such that $\delta(Q_1)\subseteq \sigma$. Similarly, since $Q_2$ and $Q'$ are independent, there is a segment $\sigma'\in \Sigma$, such that $\delta(Q_2)\subseteq \sigma'$. However, since $Q_1$ and $Q_2$ are not independent, $\sigma=\sigma'$ must hold. But then all vertices of $\delta(Q_1\cup Q_2)$ are contained in the segment $\sigma\in \Sigma$, contradicting the assumption that $Q_1\cup Q_2$ and $Q'$ are not independent. \end{proof} To summarize, we have shown how to find an assignment $F(C)$ of a face of ${\mathcal{F}}$ to every cluster $C\in {\mathcal{C}}$, and a partition ${\mathcal{Q}}(C)$ of the vertices of every cluster $C$, such that for every face $F\in {\mathcal{F}}$, every pair $Q\in{\mathcal{Q}}(C)$, $Q'\in{\mathcal{Q}}(C')$ with $C,C'\in{\mathcal{C}}'(F)$ is independent. Moreover, if $E^*$ denotes the set of all edges in $E_{\tilde{G}}(Q,Q')$, over all such pairs $Q,Q'$, then we have ensured that $|E^*|\leq O(\sqrt{\log n})\operatorname{cr}_{\tilde {\phi}}(\tilde{G})=O(\sqrt{\log n})\operatorname{cr}_{\phi'}(G)$, and the set $E^*$ contains no edges of the grids $Z\in {\mathcal{Z}}$, and no artificial edges. Therefore, the conditions of Theorem~\ref{thm: no conflict case} hold. We now define the set $\edges h(G)$, as follows: $\edges h(G)=E'\cup E''\cup E^*$.
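The grouping procedure above is a simple merge-to-fixpoint loop. The sketch below captures its structure in Python; the independence predicate is replaced by a toy interval test, so it is illustrative only:

```python
# Sketch of the grouping procedure: repeatedly merge any two parts of the
# partition that are not independent, until the partition is pairwise
# independent.  The predicate is abstracted; the toy version below (spans
# of integer "boundary positions" must not overlap) is illustrative only.

def group_until_independent(parts, independent):
    parts = [set(p) for p in parts]
    merged = True
    while merged:
        merged = False
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                if not independent(parts[i], parts[j]):
                    parts[i] |= parts.pop(j)  # replace Q, Q' with Q u Q'
                    merged = True
                    break
            if merged:
                break
    return parts

def toy_independent(p, q):
    # toy predicate: the spans [min, max] of the two parts do not overlap
    return max(p) < min(q) or max(q) < min(p)

parts = group_until_independent([{1, 2}, {3, 10}, {5, 6}, {20, 21}], toy_independent)
```

Here the parts $\{3,10\}$ and $\{5,6\}$ overlap and are merged, leaving three pairwise-independent parts; the procedure terminates because each merge strictly decreases the number of parts.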
Recall that $|E'|\leq O(\mathsf{OPT}^2\cdot\rho\cdot d_{\mbox{\textup{\footnotesize{max}}}})$, $|E''|\leq O\left(\frac{\mathsf{OPT}^2\cdot\rho\cdot\log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot\ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}\right )$, and $|E^*|\leq O(\sqrt{\log n})\operatorname{cr}_{\phi'}(G)$. Therefore, \[|\edges h(G)|\leq O\left(\frac{\mathsf{OPT}^2\cdot\rho\cdot\log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot\ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}\right )+O(\sqrt{\log n})\operatorname{cr}_{\phi'}(G)\leq m^*.\] We also set $\edges h=\bigcup_{G\in\set{G_1^X,\ldots,G_{k_h}^X}}\edges h(G)$, so $|\edges h|\leq m^*\cdot \mathsf{OPT}$ as required. We now define a collection ${\mathcal{G}}_{h+1}$ of instances. Recall that for all $1\leq i\leq k_h$, this collection already contains the instance $\pi(G_i',\emptyset,{\mathcal{Z}})$. Let $G=G_i^X$, and let $Q\in {\mathcal{Q}}$. Let $Q'$ denote the subset of vertices of $Q$ obtained by discarding the artificial vertices, and let $H_Q$ be the sub-graph of $\H$ induced by $Q'\cup X_Q$. We then add the instance $\pi_G(H_Q,X_Q,{\mathcal{Z}})$ to ${\mathcal{G}}_{h+1}(G)$. This finishes the definition of the collection ${\mathcal{G}}_{h+1}$. From Theorem~\ref{thm: no conflict case}, for each $1\leq i\leq k_h$, there is a strong solution to each resulting sub-instance of $G_i^X$, such that the total cost of these solutions is at most $\operatorname{cr}_{\phi_i}(G_i^X)$. Clearly, $\phi_i$ also induces a strong solution to instance $\pi(G_i',\emptyset,{\mathcal{Z}})$ of cost $\operatorname{cr}_{\phi_i}(G_i')$. Therefore, there is a strong solution for each instance in ${\mathcal{G}}_{h+1}$, of total cost at most $\sum_{i=1}^{k_h}\operatorname{cr}_{\phi_i}(G_i)\leq \mathsf{OPT}$, and so the number of instances in ${\mathcal{G}}_{h+1}$, for which $\emptyset$ is not a feasible weak solution, is bounded by $\mathsf{OPT}$.
We let ${\mathcal{G}}'_{h+1}\subseteq {\mathcal{G}}_{h+1}$ denote the set of all instances for which $\emptyset$ is not a feasible weak solution. Observe that we can efficiently verify whether $\emptyset$ is a feasible solution for a given instance, so we can compute ${\mathcal{G}}'_{h+1}$ efficiently. We now claim that ${\mathcal{G}}'_{h+1}$ is a valid input to the next iteration, except that it may not satisfy Invariant~(\ref{invariant 2: canonical}), due to the grid sets ${\mathcal{Z}}''(G)$ -- we deal with this issue at the end of this section. We have already established Invariant~(\ref{invariant 3: there is a cheap solution}) in the above discussion. Also, from Theorem~\ref{thm: no conflict case}, if we find a weak feasible solution $\tilde{E}_H$ for each instance $H\in {\mathcal{G}}_{h+1}$, then the union of these solutions, together with $\edges h$, gives a weak feasible solution to all instances $\pi(G_i,X_i,{\mathcal{Z}}_i)$ for $1\leq i\leq k_h$, thus giving Invariant~(\ref{invariant 4: any weak solution is enough}). In order to establish Invariant~(\ref{invariant 4.5: number of edges removed so far}), observe that the number of edges of $\edges h$ incident on any new instance is bounded by the maximum number of edges of $\edges h$ that belong to any original instance $G_1,\ldots,G_{k_h}$, which is bounded by $m^*$; the total number of edges in $\edges h$ is bounded by $m^*\cdot \mathsf{OPT}$. Invariant~(\ref{invariant 5: bound on size}) follows from the fact that for each $1\leq i\leq k_h$, $|V(G_i')|\leq n_h$, and these graphs have empty bounding boxes. All sub-instances of $G_i^X$ were constructed by further partitioning the clusters in $G_i^X\setminus (K_i\cup E'')$, and each such cluster contains at most $n_{h+1}$ vertices. Invariant~(\ref{invariant 1: disjointness}) is immediate, as is Invariant~(\ref{invariant 2: proper subgraph}) (recall that we have ensured that all edges in $E',E'',E^*$ connect vertices in distinct sub-instances).
Finally, if we assume that ${\mathcal{Z}}''(G)=\emptyset$ for all $G\in \set{G_1,\ldots,G_{k_h}}$, then the resulting sub-instances are canonical, as we have ensured that the edges in sets $E',E'',E^*$ do not belong to the grids $Z\in {\mathcal{Z}}$, thus giving Invariant~(\ref{invariant 2: canonical}). Therefore, we have shown how to produce a valid input to the next iteration, for the case where ${\mathcal{Z}}''(G)=\emptyset$ for all $G\in \set{G_1,\ldots,G_{k_h}}$. It now only remains to show how to deal with the grids in sets ${\mathcal{Z}}''(G)$. \paragraph{Dealing with grids in sets ${\mathcal{Z}}''(G)$} Let $G\in \set{G_1^X,\ldots,G_{k_h}^X}$, and let $Z\in {\mathcal{Z}}''(G)$. Recall that this means that $Z\cap K$ is a simple path, that we denote by $P_Z$, and in particular, $K$ contains exactly two edges of $\operatorname{out}(Z)$, that we denote by $e_Z$ and $e'_Z$. The difficulty in dealing with such grids is that, on the one hand, we need to ensure that all new sub-instances are canonical, so we would like to add such grids to the skeleton $K$. On the other hand, since $Z\cap K$ is a simple path, the graph $K\cup Z$ is not rigid, and has two different planar drawings (obtained by ``flipping'' $Z$ around the axis $P_Z$), so we cannot claim that we can efficiently find the optimal drawing $\phi'_{K\cup Z}$ of $K\cup Z$. Our idea for dealing with this problem is to use the conflict resolution procedure to establish which face of the skeleton $K$ each such grid $Z\in{\mathcal{Z}}''(G)$ must be embedded in. Once this face is established, we can simply add $Z$ to $K$. Even though the resulting skeleton is not rigid, its drawing is now fixed. More specifically, let $Z\in {\mathcal{Z}}''(G)$ be any such grid, and let $v,v'$ be the two vertices in the first row of $Z$ adjacent to the edges $e_Z$ and $e'_Z$, respectively.
We start by replacing the path $P_Z$ in the skeleton $K$, with the unique path connecting $v$ and $v'$ that only uses the edges of the first row of $Z$. Let $P'_Z$ denote this path. We perform this transformation for each $Z\in {\mathcal{Z}}''(G)$. The resulting skeleton $K$ is still rigid and good. It is now possible that the size of some connected component of $G\setminus (K\cup E'')$ becomes larger. However, since we eventually add all vertices of all such grids $Z\in {\mathcal{Z}}''(G)$ to the skeleton, this will not affect the final outcome of the current iteration. We then run the conflict resolution procedure exactly as before, and obtain the collection ${\mathcal{G}}_{h+1}$ of new instances as before. Consider some such instance $\pi(H_Q,X_Q,{\mathcal{Z}})$, and assume that $H_Q$ is a sub-graph of $G\in\set{G_1^X,\ldots,G^X_{k_h}}$. Let $Q=H_Q\setminus X_Q$. From the above discussion, $Q$ is canonical w.r.t. ${\mathcal{Z}}\setminus {\mathcal{Z}}''(G)$. The only problem is that for some grids $Z\in {\mathcal{Z}}''(G)$, $Q$ may contain the vertices of $Z\setminus P'_Z$. This can only happen if $P'_Z$ belongs to the bounding box $X_Q$. Recall that we are guaranteed that there is a strong solution to instance $\pi(H_Q,X_Q,{\mathcal{Z}})$, and the total cost of all such solutions over all instances in ${\mathcal{G}}_{h+1}$ is at most $\mathsf{OPT}$. In particular, the edges of $Z$ do not participate in crossings in this solution. Therefore, we can simply consider the grid $Z$ to be part of the skeleton, remove its vertices from $Q$, and update the bounding box of the resulting instance if needed. In other words, the conflict resolution procedure, by assigning every cluster $C\in {\mathcal{C}}$ to a face of ${\mathcal{F}}$, has implicitly defined a drawing of the graph $K\cup(\bigcup_{Z\in{\mathcal{Z}}''(G)}Z)$. 
Even though this drawing may be different from the drawing induced by $\phi'$, we are still guaranteed that the resulting sub-problems all have strong feasible solutions of total cost bounded by $\mathsf{OPT}$. The final instances in ${\mathcal{G}}'_{h+1}$ are now guaranteed to satisfy all Invariants~(\ref{invariant 1: disjointness})--(\ref{invariant 5: bound on size}). \section{Conclusions} \label{sec: conclusions} We have shown an efficient randomized algorithm to find a drawing of any graph ${\mathbf{G}}$ in the plane with at most $O\left ((\optcro{{\mathbf{G}}})^{10}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right )$ crossings. We did not make an effort to optimize the powers of $\mathsf{OPT},d_{\mbox{\textup{\footnotesize{max}}}}$ and $\log n$ in this guarantee, or the constant hidden in the $O(\cdot)$ notation, and we believe that they can be improved. We hope that the technical tools developed in this paper will help obtain better algorithms for the {\sf Minimum Crossing Number}\xspace problem. A specific possible direction is obtaining efficient algorithms for $\rho$-balanced $\alpha$-well-linked bi-partitions. In particular, an interesting open question is whether there is an efficient algorithm, that, given an $n$-vertex graph $G$ with maximum degree $d_{\mbox{\textup{\footnotesize{max}}}}$, finds a $\rho$-balanced $\alpha$-well-linked bi-partition of $G$, for $\rho,\alpha=\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)$. In fact it is not even clear whether such a bi-partition exists in every graph. We note that the dependence of $\rho$ on $d_{\mbox{\textup{\footnotesize{max}}}}$ is necessary, for example, in the star graph. This question appears to be interesting in its own right, and its positive resolution would greatly simplify our algorithm and improve its performance guarantee. 
We also note that if we only require that one of the two sets in the bi-partition is well-linked, then there is an efficient algorithm for finding such bi-partitions, similarly to the proof of Theorem~\ref{thm: initial partition}. \paragraph{Acknowledgements.} The author thanks Yury Makarychev and Anastasios Sidiropoulos for many fruitful discussions, and for reading earlier drafts of the paper.
\section{Introduction} Hadronic decays of $D$ mesons provide important information to understand the weak and strong interactions in the charm sector. Various experiments have measured the branching fractions of hadronic decays of $D$ mesons~\cite{pdg2014}. However, the measurement accuracy of the Cabibbo-favored (CF) decays $D\to \bar K\pi\eta^\prime$ is still very poor~\cite{pdg2014}. The Particle Data Group (PDG) gives a branching fraction of $(0.75\pm0.19)\%$ for $D^0\to K^-\pi^+\eta^\prime$, which was measured by the CLEO collaboration 25 years ago~\cite{prd48_4007,pdg2014}. There are no measurements for the isospin-related decay modes $D^0\to K^0_S\pi^0\eta^\prime$ and $D^+\to K^0_S\pi^+\eta^\prime$. The statistical isospin model (SIM) proposed in Refs.~\cite{rosner1,rosner2} predicts a simple ratio of the branching fractions for the isospin multiplets: ${\mathcal B}(D^0\to K^-\pi^+\eta^\prime):{\mathcal B}(D^0\to K^0_S\pi^0\eta^\prime):{\mathcal B}(D^+\to K^0_S\pi^+\eta^\prime) \equiv 1:{\mathcal R^0}:{\mathcal R^+} \equiv 1:\frac{{\mathcal B}(D^0\to K^0_S\pi^0\eta^\prime)}{{\mathcal B}(D^0\to K^-\pi^+\eta^\prime)}:\frac{{\mathcal B}(D^+\to K^0_S\pi^+\eta^\prime)}{{\mathcal B}(D^0\to K^-\pi^+\eta^\prime)}=1:0.4:0.9$. Precision measurements of the branching fractions of $D\to \bar K\pi\eta^\prime$ are crucial to test the SIM prediction. In this paper, we report an improved measurement of the branching fraction for $D^0\to K^-\pi^+\eta^\prime$ and the first measurements of the branching fractions for $D^0\to K^0_S\pi^0\eta^\prime$ and $D^+\to K^0_S\pi^+\eta^\prime$. The analysis is performed using an $e^+e^-$ annihilation data sample corresponding to an integrated luminosity of $2.93$~fb$^{-1}$~\cite{lum} collected with the BESIII detector~\cite{bes3} at $\sqrt s=3.773$~GeV. At this energy, relatively clean $D^0$ and $D^+$ meson samples are obtained from the processes $e^+e^-\to\psi(3770)\to D^0\bar D^0$ or $D^+D^-$. 
To improve statistics, we use a single-tag method, in which either a $D$ or $\bar D$ is reconstructed in an event. Throughout the text, charge-conjugate modes are implied, and $D\bar D$ refers to $D^0\bar D^0$ and $D^+D^-$ unless stated explicitly. \section{BESIII detector and Monte Carlo simulation} \label{sec:detector} The BESIII detector is a magnetic spectrometer that operates at the BEPCII collider. It has a cylindrical geometry with a solid-angle coverage of 93\% of $4\pi$. It consists of several main components. A 43-layer main drift chamber~(MDC) surrounding the beam pipe performs precise determinations of charged particle trajectories and measures the specific ionization energy loss~(${\rm d}E/{\rm d}x$) for charged particle identification~(PID). An array of time-of-flight counters~(TOF) is located outside the MDC and provides additional PID information. A CsI(Tl) electromagnetic calorimeter~(EMC) surrounds the TOF and is used to measure the deposited energies of photons and electrons. A solenoidal superconducting magnet outside the EMC provides a 1 T magnetic field in the central tracking region of the detector. The iron flux return of the magnet is instrumented with resistive plate counters, arranged in nine layers in the barrel and eight layers in the endcaps, for the identification of muons with momenta greater than 0.5\,GeV/$c$. More details about the BESIII detector are described in Ref.~\cite{bes3}. A Monte Carlo~(MC) simulation software package, based on {\sc geant4}~\cite{geant4}, includes the geometric description and response of the detector and is used to determine the detection efficiency and to estimate backgrounds for each decay mode.
An inclusive MC sample, which includes the $D^0\bar D^0$, $D^+D^-$ and non-$D\bar D$ decays of the $\psi(3770)$, initial-state-radiation~(ISR) production of the $\psi(3686)$ and $J/\psi$, the continuum process $e^+e^-\to q\bar q$~($q=u$,~$d$,~$s$), Bhabha scattering events, di-muon events and di-tau events, is produced at $\sqrt s=3.773\,{\rm GeV}$. The equivalent luminosity of the inclusive MC sample is ten times that of the data sample. The $\psi(3770)$ decays are generated with the MC generator {\sc kkmc}~\cite{kkmc}, which incorporates the effects of ISR~\cite{isr}. Final-state-radiation~(FSR) effects are simulated with the {\sc photos} package~\cite{photons}. The known decay modes are generated using {\sc evtgen}~\cite{evtgen} with branching fractions taken from the PDG~\cite{pdg2014}, while the remaining unknown decays are generated using {\sc lundcharm}~\cite{lundcharm}. \section{Event selection} \label{sec:evtsel} In this analysis, all charged tracks are required to be within $|\cos\theta|<0.93$, where $\theta$ is the polar angle with respect to the positron beam. Good charged tracks, except those used to reconstruct $K^0_{S}$ mesons, are required to originate from the interaction region defined by $V_{xy}< 1$~cm and $|V_{z}|< 10$~cm, where $V_{xy}$ and $V_{z}$ are the distances of closest approach of the reconstructed tracks to the interaction point (IP), perpendicular to and along the beam direction, respectively. Charged kaons and pions are identified using the ${\rm d}E/{\rm d}x$ and TOF measurements. The combined confidence levels for the kaon and pion hypotheses ($CL_{K}$ and $CL_{\pi}$) are calculated, and a charged track is identified as a kaon (pion) if $CL_{K(\pi)}$ is greater than $CL_{\pi(K)}$. The neutral kaon is reconstructed via the $K^0_S\to\pi^{+}\pi^{-}$ decay mode.
Two oppositely charged tracks with $|V_{z}|< 20$~cm are assumed to be a $\pi^+\pi^-$ pair without PID requirements and the $\pi^+\pi^-$ pair is constrained to originate from a common vertex. The $\pi^+\pi^-$ combination with an invariant mass $M_{\pi^+\pi^-}$ in the range $|M_{\pi^+\pi^-}-M_{K^0_S}|<0.012$\,GeV/$c^2$, where $M_{K^0_S}$ is the nominal $K^0_{S}$ mass~\cite{pdg2014}, and a measured flight distance from the IP greater than twice its resolution is accepted as a $K^0_S$ candidate. Figure~\ref{fig:sig}(a) shows the $\pi^+\pi^-$ invariant mass distribution, where the two solid arrows denote the $K^0_S$ signal region. Photon candidates are selected using the EMC information. The time of the candidate shower must be within 700\,ns of the event start time and the shower energy should be greater than 25\,(50)\,MeV if the crystal with the maximum deposited energy for the cluster of interest is in the barrel~(endcap) region~\cite{bes3}. The opening angle between the candidate shower and any charged track is required to be greater than $10^{\circ}$ to eliminate showers associated with charged tracks. Both $\pi^0$ and $\eta$ mesons are reconstructed via the $\gamma\gamma$ decay mode. The $\gamma\gamma$ combination with an invariant mass within $(0.115,\,0.150)$ or $(0.515,\,0.570)$\,GeV$/c^{2}$ is regarded as a $\pi^0$ or $\eta$ candidate, respectively. To improve resolution, a one constraint (1-C) kinematic fit is applied to constrain the invariant mass of the photon pair to the nominal $\pi^{0}$ or $\eta$ invariant mass~\cite{pdg2014}. The $\eta^\prime$ mesons are reconstructed through the decay $\eta^\prime\to\pi^{+}\pi^{-}\eta$. The invariant mass of the $\pi^{+}\pi^{-}\eta$ combination $M_{\pi^{+}\pi^{-}\eta}$ is required to satisfy $|M_{\pi^+\pi^-\eta}-M_{\eta^\prime}|<0.015$\,GeV/$c^2$, where $M_{\eta^\prime}$ is the nominal $\eta^\prime$ mass~\cite{pdg2014}. 
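The mass-window selections above (the $K^0_S$ and $\eta^\prime$ windows) amount to computing an invariant mass from the daughter momenta and applying a window cut. A minimal Python sketch for the $K^0_S\to\pi^+\pi^-$ case, with illustrative momenta (the mass values are the nominal PDG ones quoted to limited precision):

```python
import math

# Illustrative sketch: two-track invariant mass and the K_S^0 mass window
# |M(pi+pi-) - M_KS| < 0.012 GeV/c^2 quoted above.  Momenta are made up.
M_PI = 0.13957   # charged pion mass, GeV/c^2
M_KS = 0.497611  # nominal K_S^0 mass, GeV/c^2

def inv_mass(p1, p2, m1=M_PI, m2=M_PI):
    """Invariant mass of two tracks with three-momenta p1, p2 (GeV/c)."""
    e1 = math.sqrt(m1 * m1 + sum(c * c for c in p1))
    e2 = math.sqrt(m2 * m2 + sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((e1 + e2) ** 2 - (px * px + py * py + pz * pz))

def in_ks_window(p_plus, p_minus, half_width=0.012):
    return abs(inv_mass(p_plus, p_minus) - M_KS) < half_width

# back-to-back pions with the two-body decay momentum reproduce M_KS exactly
p_star = math.sqrt(M_KS ** 2 / 4 - M_PI ** 2)
m = inv_mass((p_star, 0, 0), (-p_star, 0, 0))
```

The check on `p_star` follows from two-body decay kinematics in the $K^0_S$ rest frame; boosted configurations give the same invariant mass.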
The boundaries of the one dimensional (1D) $\eta'$ signal region are illustrated by the two solid arrows shown in Fig.~\ref{fig:sig}(b). The $D^{0(+)}\to K^-(K^0_S)\pi^+\eta^\prime$ decay is selected from the $K^-(K^0_S)\pi^+\pi^+\pi^-\eta$ combination. Since the two $\pi^+$s in the event have low momenta and are indistinguishable, the $\eta^\prime$ may be formed from either of the $\pi^+\pi^-\eta$ combinations, whose invariant masses are denoted as $M_{\pi_1^{+}\pi^{-}\eta}$ and $M_{\pi_2^{+}\pi^{-}\eta}$. Figure~\ref{fig:sig}(c) shows the scatter plot of $M_{\pi_2^{+}\pi^{-}\eta}$ versus $M_{\pi_1^{+}\pi^{-}\eta}$ for the $D^0\to K^-\pi^+\eta^\prime$ candidate events in the data sample. Events with at least one $\pi^+\pi^-\eta$ combination in the two dimensional (2D) $\eta^\prime$ signal region, shown by the solid lines in Fig.~\ref{fig:sig}(c), are kept for further analysis. \begin{figure}[htbp] \centering \includegraphics[width=3.3in]{eta_ks_3_1.eps} \caption{ (Color online)\ (a)~Distribution of $M_{\pi^+\pi^-}$ for the $K^0_S$ candidates from $D^0\to K^0_S\pi^0\eta^\prime$ decays and (b)~the combined $M_{\pi_{1}^+\pi^-\eta}$ and $M_{\pi_{2}^+\pi^-\eta}$ distribution for the $\eta^\prime$ candidates from $D^0\to K^-\pi^+\eta^\prime$ decays, where the dots with error bars are data, the histograms are inclusive MC samples, and the pairs of red solid~(blue dashed) arrows show the boundaries of the $K^0_S$ or $\eta^\prime$ 1D signal~(sideband) region. (c)~Scatter plot of $M_{\pi^+_2\pi^-\eta}$ versus $M_{\pi^+_1\pi^-\eta}$ for the $D^0\to K^-\pi^+\eta^\prime$ candidate events in the data sample, where the range surrounded by the red solid~(blue dashed) lines denotes the $\eta^\prime$ 2D signal~(sideband) region. In these figures, except for the $K^0_S$ or $\eta^\prime$ mass requirement, all selection criteria and an additional requirement of $|M_{\rm BC}-M_D|<0.005$~GeV/$c^2$ have been imposed. 
The signal and sideband regions, illustrated here, are applied for all decays of interest in the analysis. }\label{fig:sig} \end{figure} To distinguish $D$ mesons from backgrounds, we define two kinematic variables, the energy difference $\Delta E \equiv E_D-E_{\rm beam}$ and the beam-constrained mass $M_{\rm BC} \equiv \sqrt{E^{2}_{\rm beam}-|\vec{p}_{D}|^{2}}$, where $E_D$ and $\vec{p}_{D}$ are the energy and momentum of the $D$ candidate in the $e^+e^-$ center-of-mass system and $E_{\rm beam}$ is the beam energy. For each signal decay mode, only the combination with the minimum $|\Delta E|$ is kept if more than one candidate passes the selection requirements. Mode-dependent $\Delta E$ requirements, as listed in Table~\ref{tab:singletagN_MC}, are applied to suppress combinatorial backgrounds. These requirements are about $\pm 3.5\sigma_{\Delta E}$ around the fitted $\Delta E$ peaks, where $\sigma_{\Delta E}$ is the resolution of the $\Delta E$ distribution obtained from fits to the data sample. \section{Data analysis} \label{sec:ana} \begin{figure}[htbp] \centering \includegraphics[width=3.3in]{mbc_3_1.eps} \caption{(Color online) Fits to the $M_{\rm BC}$ distributions of the (a) $D^0\to K^-\pi^+\eta^\prime$, (b) $D^0\to K^0_S\pi^0\eta^\prime$, and (c) $D^+\to K^0_S\pi^+\eta^\prime$ candidate events. The dots with error bars are data, the blue solid curves are the total fits and the red dashed curves are the fitted backgrounds. The dotted, dashed and solid histograms are the scaled BKGI, BKGII, and BKGIII components~(see the last paragraph of Sec.~\ref{sec:evtsel}), respectively.}\label{fig:datafit_Massbc} \end{figure} \begin{figure*}[htbp] \centering \includegraphics[width=7in]{mm_compare.eps} \caption{ (Color online) The $M_{K\pi}$, $M_{\pi\eta^\prime}$, and $M_{K\eta^\prime}$ distributions of data (dots with error bars) and MC simulations (histograms). 
The top, middle, and bottom rows correspond to $D^0\to K^-\pi^+\eta^\prime$, $D^0\to K^0_S\pi^0\eta^\prime$, and $D^+\to K^0_S\pi^+\eta^\prime$ candidate events, respectively. The blue dashed histograms are PHSP MC samples. The red solid histograms are the modified MC samples. The yellow shaded histograms are the backgrounds estimated from the inclusive MC sample. An additional requirement of $|M_{\rm BC}-M_D|<0.005$~GeV/$c^2$ has been imposed on the events shown in these plots.}\label{fig:mm_compare} \end{figure*} The $M_{\rm BC}$ distributions of the accepted candidate events for the decays of interest in the data sample are shown in Fig.~\ref{fig:datafit_Massbc}. Unbinned maximum likelihood fits to these spectra are performed to obtain the $D$ signal yields. In the fits, the $D$ signal is modeled by an MC-simulated shape convolved with a Gaussian function with free parameters accounting for the difference between the detector resolution of the data and that of the MC simulation. The background shape is described by an ARGUS function~\cite{ARGUS}. The potential peaking backgrounds are investigated as follows. The combinatorial $\pi^+\pi^-$~(called BKGI) or $\pi^+\pi^-\eta$~(called BKGII) pairs in the $K^0_S$ or $\eta^\prime$ signal region may survive the event selection criteria and form peaking backgrounds around the $D$ mass in the $M_{\rm BC}$ distributions. These background components are validated by the data events in the $K^0_S$($\eta^\prime$) sideband region defined as $0.020\,(0.022)<|M_{\pi^+\pi^-\,(\pi^+\pi^-\eta)}-M_{K^0_S\,(\eta^\prime)}|<0.044\,(0.046)$\,GeV/$c^2$, as indicated by the ranges between the adjacent pair of blue dashed arrows in Fig.~\ref{fig:sig}(a)[(b)]. For $D^0\to K^-\pi^+\eta^\prime$ and $D^+\to K^0_S\pi^+\eta^\prime$ decays, the data events in the $\eta^\prime$ 2D sideband region, enclosed by the blue dashed lines in Fig.~\ref{fig:sig}(c), are examined. 
For these events, either $M_{\pi_1^{+}\pi^{-}\eta}$ or $M_{\pi_2^{+}\pi^{-}\eta}$ is in the $\eta^\prime$ 1D sideband region, but both are outside the $\eta^\prime$ 1D signal region. These two background components are normalized by the ratios of the background yields in the $K^0_S$\,($\eta^\prime$) signal and sideband regions. The background components from other processes~(called BKGIII) are estimated by analyzing the inclusive MC sample. The scaled $M_{\rm BC}$ distributions of the surviving events for the BKGI, BKGII and BKGIII components are shown as the dotted, dashed and solid histograms in Fig.~\ref{fig:datafit_Massbc}, respectively. In these spectra, no peaking backgrounds are found, which indicates that the background shape is well modeled by the ARGUS function. From each fit, we obtain the number of $D\to \bar K\pi\eta^\prime$ signal events $N_{\rm tag}$, as summarized in Table~\ref{tab:singletagN_MC}. The statistical significances of these decays, which are estimated from the likelihood difference between the fits with and without the signal component, are all greater than $10\sigma$. Figure~\ref{fig:mm_compare} shows the $M_{K\pi}$, $M_{\pi\eta^\prime}$, and $M_{K\eta^\prime}$ distributions of $D\to \bar K\pi\eta^\prime$ candidate events for data and MC simulations after requiring $|M_{\rm BC}-M_D|<0.005$~GeV/$c^2$. No obvious sub-resonances are observed in these invariant mass distributions. Nevertheless, the phase space (PHSP) MC distributions are not in good agreement with the data distributions (see the blue dashed histograms and dots with errors in Fig.~\ref{fig:mm_compare}). To solve this problem, we modify the MC generator to produce the correct invariant mass distributions according to the Dalitz plot distributions in data. In the Dalitz plot, the background component is modeled by the inclusive MC simulation, while the signal component is generated according to the efficiency-corrected PHSP MC simulation.
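The ARGUS function used to describe the combinatorial background in the $M_{\rm BC}$ fits above has the standard form $f(m)\propto m\sqrt{1-(m/m_0)^2}\,e^{-\chi\left(1-(m/m_0)^2\right)}$ for $m<m_0$. A minimal sketch, with illustrative (not fitted) parameter values:

```python
import math

# Standard (unnormalized) ARGUS background shape.  The endpoint
# m0 ~ E_beam and the shape parameter chi are illustrative values here,
# not the fitted ones.
def argus(m, m0=1.8865, chi=10.0):
    x = m / m0
    if x >= 1.0:
        return 0.0  # the shape vanishes at and above the kinematic endpoint
    t = 1.0 - x * x
    return m * math.sqrt(t) * math.exp(-chi * t)
```

By construction the shape cuts off sharply at the kinematic endpoint $m_0$, which is why it describes combinatorial background under the $D$ peak while leaving the peak itself to the signal shape.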
In Fig.~\ref{fig:dalitz}, we show the Dalitz plots of $D^0\to K^-\pi^+\eta^\prime$ candidate events for data and the modified MC sample. The invariant mass distributions $M_{K\pi}$, $M_{\pi\eta^\prime}$, and $M_{K\eta^\prime}$ of the modified MC samples are in good agreement with the data distributions (see the red solid histograms and dots with errors in Fig.~\ref{fig:mm_compare}). In the following, we use the modified MC sample to determine the detection efficiencies in the calculation of the branching fractions. \begin{figure}[htbp] \centering \includegraphics[width=3.4in]{dalitz_in_mc2.eps} \caption{\small Dalitz plots of $M^2_{K^-\pi^+}$ vs. $M^2_{\pi^+\eta^\prime}$ for $D^0\to K^-\pi^+\eta^\prime$ candidate events in data (left) and the modified MC sample (right). }\label{fig:dalitz} \end{figure} \section{Branching fractions} The branching fraction of $D\to \bar K\pi\eta^\prime$ is determined according to \begin{equation} \label{equ:branchingfraction} {\mathcal B}(D\to \bar K\pi\eta^\prime) = \frac{N_{\rm tag}}{2\cdot N_{D\bar D}\cdot\epsilon \cdot{\mathcal B_{\eta^\prime}} \cdot{\mathcal B_{\eta}} (\cdot{\mathcal B_{\rm inter}}) }, \end{equation} where $N_{\rm tag}$ is the number of $D\to \bar K\pi\eta^\prime$ signal events, $N_{D\bar D}$ is the total number of $D\bar D$ pairs, and $\epsilon$ is the detection efficiency, which has been corrected for the differences between data and MC simulation in the efficiencies of charged-particle tracking and PID, as well as $\pi^0$ and $\eta$ reconstruction, as discussed in Sec.~\ref{sec:ana}, and is summarized in Table~\ref{tab:singletagN_MC}.
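For illustration, Eq.~(\ref{equ:branchingfraction}) can be evaluated for $D^0\to K^-\pi^+\eta^\prime$, where the ${\mathcal B}_{\rm inter}$ factor is absent. In the sketch below, the yield, $N_{D\bar D}$, and efficiency are taken from the text and Table~\ref{tab:singletagN_MC}, while the daughter branching fractions are approximate world-average values, so the result is only indicative:

```python
# Sketch of Eq. (1) for D0 -> K- pi+ eta' (no B_inter factor for this
# mode).  N_TAG, N_DD and EFF are from the text/Table 1; B_ETAP and B_ETA
# are approximate world averages, so the result is indicative only.
N_TAG  = 2528        # fitted signal yield
N_DD   = 10.597e6    # number of D0 D0bar pairs
EFF    = 0.1097      # detection efficiency
B_ETAP = 0.429       # B(eta' -> pi+ pi- eta), approximate
B_ETA  = 0.394       # B(eta -> gamma gamma), approximate

b = N_TAG / (2 * N_DD * EFF * B_ETAP * B_ETA)
# b comes out near the 6.4e-3 level quoted in Table 1
```

The factor of $2$ accounts for the fact that either the $D^0$ or the $\bar D^0$ of each pair can be reconstructed as the single tag.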
In Eq.~(\ref{equ:branchingfraction}), ${\mathcal B_{\rm inter}}$ is the product branching fraction $\mathcal B_{K^0_S}\cdot{\mathcal B_{\pi^0}}$\,($\mathcal B_{K^0_S}$) for the decay $D^0\to K^0_S\pi^0\eta^\prime$\,($D^+\to K^0_S\pi^+\eta^\prime$), and ${\mathcal B_{\eta^\prime}}$, ${\mathcal B_{\eta}}$, ${\mathcal B_{K^0_S}}$ and ${\mathcal B_{\pi^0}}$ denote the branching fractions of the decays $\eta^\prime\to\pi^+\pi^-\eta$, $\eta\to\gamma\gamma$, $K^0_S \to \pi^+\pi^-$, and $\pi^0\to\gamma\gamma$, respectively, taken from the PDG~\cite{pdg2014}. With the single-tag method, the CF decays $D^0(D^+)\to \bar K\pi\eta^\prime$ are indistinguishable from the doubly Cabibbo-suppressed (DCS) decays $\bar D^0(D^+)\to \bar K(K)\pi\eta^\prime$. However, the DCS contributions are expected to be small; they are neglected in the branching-fraction calculations but are taken into account as a systematic uncertainty. Taking $N_{D^{0}\bar{D}^{0}}=(10597\pm28_{\rm stat.}\pm 98_{\rm syst.})\times 10^3$ and $N_{D^{+}D^{-}}=(8296\pm31_{\rm stat.}\pm65_{\rm syst.})\times 10^3$ from Ref.~\cite{bes3ddyield}, the branching fraction of each decay is determined with Eq.~(\ref{equ:branchingfraction}) and summarized in Table~\ref{tab:singletagN_MC}. \begin{table*}[htp] \centering \caption{\label{tab:singletagN_MC} $\Delta E$ requirements, input quantities and results for the determination of the branching fractions. The efficiencies do not include the branching fractions for the decays of the daughter particles of the $\eta^\prime$, $\eta$, $K^0_S$, and $\pi^0$ mesons. The uncertainties are statistical only.
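As a numerical cross-check of Eq.~(\ref{equ:branchingfraction}), the central value for $D^0\to K^-\pi^+\eta^\prime$ can be reproduced from the inputs quoted above; this is only a sketch, and the daughter branching fractions below are approximate PDG values rather than the exact inputs of the analysis:

```python
# Cross-check of the branching-fraction formula (central values only).
# Daughter branching fractions are approximate PDG values (assumption).
N_tag = 2528            # signal yield for D0 -> K- pi+ eta' (Table II)
N_DD = 10597e3          # number of D0 D0bar pairs
eff = 0.1097            # corrected detection efficiency (10.97%)
B_etap = 0.429          # B(eta' -> pi+ pi- eta), approximate
B_eta = 0.3941          # B(eta -> gamma gamma), approximate

# B = N_tag / (2 * N_DD * eff * B_etap * B_eta); no B_inter for this mode
B = N_tag / (2 * N_DD * eff * B_etap * B_eta)
print(f"B(D0 -> K- pi+ eta') = {B:.2e}")   # ~6.4e-3
```

For the $K^0_S$ modes, the extra factor ${\mathcal B_{\rm inter}}$ would multiply the denominator in the same way.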
} \small \begin{tabular}{lcccc} \hline \multicolumn{1}{c}{Decay mode}& $\Delta E$ (MeV) & $N_{\rm tag}$ &$\epsilon$ (\%) &$\mathcal B$ ($\times 10^{-3}$) \\ \hline $D^0\to K^-\pi^+\eta^\prime$ &$(-26,+28)$ &$2528\pm59$ &$10.97\pm0.08$&$6.43\pm0.15$\\ $D^0\to K^0_S\pi^0\eta^\prime$ &$(-35,+38)$ &$289 \pm26$ &$4.67\pm0.04$&$2.52\pm0.22$\\ $D^+\to K^0_S\pi^+\eta^\prime$ &$(-27,+28)$ &$267 \pm24$ &$7.23\pm0.05$&$1.90\pm0.17$\\ \hline \end{tabular} \end{table*} \section{Systematic uncertainties} \label{sec:sys} The systematic uncertainties in the measurements of the branching fractions and the branching ratios, ${\mathcal R}^0\equiv\frac{{\mathcal B}(D^0\to K^0_S\pi^0\eta^\prime)}{{\mathcal B}(D^0\to K^-\pi^+\eta^\prime)}$, and ${\mathcal R}^+\equiv \frac{{\mathcal B}(D^+\to K^0_S\pi^+\eta^\prime)}{{\mathcal B}(D^0\to K^-\pi^+\eta^\prime)} $, are summarized in Table~\ref{tab:relsysuncertainties}. Each contribution, estimated relative to the measured branching fraction, is discussed below. \begin{table*}[htp] \centering \caption{\label{tab:relsysuncertainties} Relative systematic uncertainties (in \%) in the branching fractions, ${\mathcal R}^0$, and ${\mathcal R}^+$. The numbers before or after `/' in the last two columns denote the remaining systematic uncertainties of ${\mathcal B}(D^0\to K^{-}\pi^{+}\eta')$ and ${\mathcal B}(D^{0(+)}\to K^{0}_{S}\pi^{0(+)}\eta')$ that do not cancel in the determination of ${\mathcal R}^{0}$ and ${\mathcal R}^{+}$. 
} \begin{small} \begin{tabular}{lccccc} \hline Source & ${\mathcal B}(D^0\to K^{-}\pi^{+}\eta')$&${\mathcal B}(D^0\to K^{0}_{S}\pi^{0}\eta')$&${\mathcal B}(D^+\to K^{0}_{S}\pi^{+}\eta')$&${\mathcal R}^0$&${\mathcal R}^+$\\ \hline Number of $D\bar D$ pairs & 1.0 & 1.0 & 0.9 & --/-- &1.0/0.9 \\ Tracking of $K^\pm(\pi^\pm)$ & 3.0 & 2.0 & 2.5 &1.0/-- &1.0/-- \\ PID of $K^\pm(\pi^\pm)$ & 2.0 & 1.0 & 1.5 &1.0/-- &0.5/-- \\ $K_S^0$ reconstruction & -- & 1.5 & 1.5 & --/1.5& --/1.5 \\ $\pi^0\,(\eta)$ reconstruction & 1.0 & 2.0 & 1.0 & --/1.0& --/-- \\ $M_{\rm BC}$ fit & 0.5 & 3.6 & 1.9 &0.5/3.6&0.5/1.9 \\ $\eta^\prime$ mass window & 1.0 & 1.0 & 1.0 & --/-- & --/-- \\ $\Delta E$ requirement & 0.1 & 2.4 & 4.5 &0.1/2.4&0.1/4.5 \\ MC modeling & 1.6 & 0.5 & 1.7 &1.6/0.5&1.6/1.7 \\ MC statistics & 0.7 & 0.9 & 0.7 &0.7/0.9&0.7/0.7 \\ Quoted branching fractions & 1.7 & 1.7 & 1.7 & --/0.1& --/0.1 \\ $D^0\bar D^0$ mixing & 0.1 & 0.1 & -- & --/-- & --/-- \\ DCS contribution& 0.6 & 0.6 & 0.6 & --/-- & --/-- \\ \hline Total & 4.8 & 6.0 & 6.6 & 5.3 & 6.0 \\ \hline \end{tabular} \end{small} \end{table*} \begin{itemize} \item {\bf \boldmath Number of $D\bar D$ pairs}: The total numbers of $D^0\bar D^0$ and $D^+D^-$ pairs produced in the data sample are cited from a previous measurement~\cite{bes3ddyield} that uses a combined analysis of both single-tag and double-tag events in the same data sample. The total uncertainty in the quoted number of $D^0\bar D^0~(D^+D^-)$ pairs is 1.0\%~(0.9\%), obtained by adding both the statistical and systematic uncertainties in quadrature. \item {\bf \boldmath Tracking and PID of $K^\pm(\pi^\pm)$}: The tracking and PID efficiencies for $K^\pm(\pi^\pm)$ are investigated using double-tag $D\bar D$ hadronic events. A small difference between the efficiency in the data sample and that in MC simulation~(called the data-MC difference) is found. 
The momentum weighted data-MC differences in the tracking [PID] efficiencies are determined to be $(+2.4\pm0.4)\%$, $(+1.0\pm0.5)\%$, and $(+1.9\pm1.0)\%$ [$(-0.2\pm0.1)\%$, $(-0.1\pm0.1)\%$ and $(-0.2\pm0.1)\%$] for $K^\pm$, $\pi^\pm_{\rm direct}$, and $\pi^\pm_{\rm in-direct}$, respectively. Here, the uncertainties are statistical and the subscript $_{\rm direct}$ or $_{\rm in-direct}$ indicates the $\pi^\pm$ produced in $D$ or $\eta^\prime$ decays, respectively. In this work, the MC efficiencies have been corrected by the momentum weighted data--MC differences in the $K^\pm(\pi^\pm)$ tracking and PID efficiencies. Finally, a systematic uncertainty for charged particle tracking is assigned to be 1.0\% per $\pi^\pm_{\rm in-direct}$ and 0.5\% per $K^\pm$ or $\pi^\pm_{\rm direct}$. The systematic uncertainty for PID efficiency is taken as 0.5\% per $K^\pm$, $\pi^\pm_{\rm direct}$ or $\pi^\pm_{\rm in-direct}$. \item {\bf\boldmath $K_S^0$ reconstruction}: The $K_{S}^{0}$ reconstruction efficiency, which includes effects from the track reconstruction of the charged pion pair, vertex fit, decay length requirement and $K^0_S$ mass window, has been studied with a control sample of $J/\psi\to K^{*}(892)^{\mp}K^{\pm}$ and $J/\psi\to \phi K_S^{0}K^{\pm}\pi^{\mp}$~\cite{sysks}. The associated systematic uncertainty is assigned as 1.5\% per $K^0_S$. \item {\bf \boldmath $\pi^0\,(\eta)$ reconstruction}: The $\pi^0$ reconstruction efficiency, which includes effects from the photon selection, 1-C kinematic fit and $\pi^0$ mass window, is verified with double-tag $D\bar D$ hadronic decay samples of $D^0\to K^-\pi^+$, $K^-\pi^+\pi^+\pi^-$ versus $\bar D^0\to K^+\pi^-\pi^0$, $K^0_S\pi^0$~\cite{syspi0}. A small data-MC difference in the $\pi^0$ reconstruction efficiency is found. The momentum weighted data-MC difference in $\pi^0$ reconstruction efficiencies is found to be $(-0.5\pm1.0)\%$, where the uncertainty is statistical. 
After correcting the MC efficiencies by the momentum weighted data-MC difference in $\pi^0$ reconstruction efficiency, the systematic uncertainty due to $\pi^0$ reconstruction is assigned as 1.0\% per $\pi^0$. The systematic uncertainty due to $\eta$ reconstruction is assumed to be the same as that for $\pi^0$ reconstruction. \item {\bf \boldmath $\eta^\prime$ mass window}: The uncertainty due to the $\eta^\prime$ mass window is studied by fitting to the $\pi^+\pi^-\eta$ invariant mass spectrum of the $K^-\pi^+\eta^\prime$ candidates. The difference between the data and MC simulation in the efficiency of the $\eta^\prime$ mass window restriction is $(0.8\pm0.2)$\%. The associated systematic uncertainty is assigned as 1.0\%. \item {\bf \boldmath $M_{\rm BC}$ fit}: To estimate the systematic uncertainty due to the $M_{\rm BC}$ fit, we repeat the measurements by varying the fit range [$(1.8415,1.8865)$\,GeV/$c^2$], the signal shape\,(with different MC matching requirements) and the endpoint\,(1.8865\,GeV/$c^2$) of the ARGUS function ($\pm0.2$\,MeV/$c^2$). Summing the relative changes in the branching fractions for these three sources in quadrature yields 0.5\%, 3.6\%, and 1.9\% for $D^0\to K^-\pi^+\eta^\prime$, $D^0\to K^0_S\pi^0\eta^\prime$, and $D^+\to K^0_S\pi^+\eta^\prime$, respectively, which are assigned as systematic uncertainties. \item {\bf \boldmath $\Delta E$ requirement}: To investigate the systematic uncertainty due to the $\Delta E$ requirement, we repeat the measurements with alternative $\Delta E$ requirements of $3.0\sigma_{\Delta E}$ and $4.0\sigma_{\Delta E}$ around the fitted $\Delta E$ peaks. The changes in the branching fractions, 0.1\%, 2.4\%, and 4.5\%, are taken as systematic uncertainties for $D^0\to K^-\pi^+\eta^\prime$, $D^0\to K^0_S\pi^0\eta^\prime$, and $D^+\to K^0_S\pi^+\eta^\prime$, respectively. 
\item {\bf MC modeling}: The systematic uncertainty in the MC modeling is studied by varying the MC-simulated background sizes for the input $M^{2}_{K\pi}$ and $M^{2}_{\pi\eta^\prime}$ distributions in the generator by~$\pm20\%$. The largest changes in the detection efficiencies, 1.6\%, 0.5\%, and 1.7\%, are taken as systematic uncertainties for $D^0\to K^-\pi^+\eta^\prime$, $D^0\to K^0_S\pi^0\eta^\prime$, and $D^+\to K^0_S\pi^+\eta^\prime$, respectively. \item {\bf MC statistics}: The uncertainties due to the limited MC statistics are 0.7\%, 0.9\% and 0.7\% for $D^0\to K^-\pi^+\eta^\prime$, $D^0\to K^0_S\pi^0\eta^\prime$, and $D^+\to K^0_S\pi^+\eta^\prime$, respectively. \item {\bf Quoted branching fractions}: The uncertainties of the quoted branching fractions for $\eta^\prime \to \pi^+\pi^-\eta$, $\eta\to \gamma\gamma$, $K^0_S\to \pi^+\pi^-$, and $\pi^0\to \gamma\gamma$ are taken from the world averages and are 1.6\%, 0.5\%, 0.07\%, and 0.03\%~\cite{pdg2014}, respectively. \item {\bf $D^0\bar D^0$ mixing}: Because the $D^0\bar D^0$ meson pair is coherently produced in $\psi(3770)$ decay, the effect of $D^0\bar D^0$ mixing on the branching fractions of neutral $D$ meson decays is expected to enter only at next-to-leading order in the $D^0\bar D^0$ mixing parameters $x$ and $y$~\cite{zzxing,asner}. With $x=(0.32\pm0.14)\%$ and $y=(0.69^{+0.06}_{-0.07})\%$ from the PDG~\cite{pdg2014}, we conservatively assign 0.1\% as the systematic uncertainty. \item {\bf DCS contribution}: Based on the world-averaged values of the branching fractions, the ratios of the branching fractions of the known DCS decays to those of the corresponding CF decays are in the range of (0.2--0.6)\%. Therefore, we take the largest ratio, 0.6\%, as a conservative estimate of the systematic uncertainty of the DCS effects.
\end{itemize} The above relative systematic uncertainties are added in quadrature, yielding totals of 4.8\%, 6.0\%, 6.6\%, 5.3\% and 6.0\% for the measurements of ${\mathcal B}(D^0\to K^-\pi^+\eta^\prime)$, ${\mathcal B}(D^0\to K^0_S\pi^0\eta^\prime)$, ${\mathcal B}(D^+\to K^0_S\pi^+\eta^\prime)$, ${\mathcal R^0}$, and ${\mathcal R^+}$, respectively. \section{Summary and discussion} Based on an analysis of an $e^+e^-$ data sample with an integrated luminosity of 2.93 fb$^{-1}$ collected at $\sqrt s= 3.773$~GeV with the BESIII detector, we measure the branching fractions of the hadronic $D$ meson decays to be: ${\mathcal B}(D^0\to K^-\pi^+\eta^\prime)=(6.43 \pm 0.15_{\rm stat.} \pm 0.31_{\rm syst.})\times 10^{-3}$, ${\mathcal B}(D^0\to K^0_S\pi^0\eta^\prime)=(2.52 \pm 0.22_{\rm stat.} \pm 0.15_{\rm syst.})\times 10^{-3}$, and ${\mathcal B}(D^+\to K^0_S\pi^+\eta^\prime)=(1.90 \pm0.17_{\rm stat.} \pm 0.13_{\rm syst.})\times 10^{-3}$. The measured branching fraction of $D^0\to K^-\pi^+\eta^\prime$ is consistent with the previous CLEO measurement~\cite{prd48_4007,pdg2014}, with a precision improved by a factor of four. The branching fractions of $D^0\to K^0_S\pi^0\eta^\prime$ and $D^+\to K^0_S\pi^+\eta^\prime$ are determined for the first time. Using the measured branching fractions, we determine the ratios of branching fractions to be ${\mathcal R^0}=0.39\pm0.03_{\rm stat.}\pm0.02_{\rm syst.}$ and ${\mathcal R^+}=0.30\pm0.03_{\rm stat.}\pm0.02_{\rm syst.}$. ${\mathcal R^0}$ agrees well with the value 0.4 predicted by the statistical isospin model (SIM), but ${\mathcal R^+}$ deviates significantly from the expected value 0.9. This deviation may arise from a possible phase difference between the two isospin states in the SIM~\cite{lvcd}.
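The totals quoted in Table~\ref{tab:relsysuncertainties} follow from combining the individual sources in quadrature; a quick sketch for the $D^0\to K^-\pi^+\eta^\prime$ column (the $K^0_S$ entry is absent for this mode):

```python
import math

# Systematic sources (in %) for D0 -> K- pi+ eta', transcribed from the
# table: N(DDbar), tracking, PID, pi0/eta, M_BC fit, eta' mass window,
# Delta E, MC modeling, MC statistics, quoted BFs, mixing, DCS.
sources = [1.0, 3.0, 2.0, 1.0, 0.5, 1.0, 0.1, 1.6, 0.7, 1.7, 0.1, 0.6]

# Quadrature sum of independent relative uncertainties
total = math.sqrt(sum(s * s for s in sources))
print(f"total systematic = {total:.1f}%")   # 4.8%
```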
In our analysis, we do not find an obvious $K^{*}$ signal in the $K\pi$ invariant mass distributions, which is consistent with the predictions of small $D^0\to \bar K^{*0}\eta^\prime$ and $D^+\to K^{*+}\eta^\prime$ contributions~\cite{Ddecay1,Ddecay2,Ddecay3}. Summing over the branching fractions of $D\to \bar K\pi\eta^\prime$ decays and the other exclusive $D\to\eta^\prime X$ decays in PDG~\cite{pdg2014}, we obtain the sums of the branching fractions of all the exclusive $D^0\to\eta^\prime X$ and $D^+\to\eta^\prime X$ to be $(3.23\pm0.13)\%$ and $(1.06\pm0.07)\%$, respectively. They are consistent with the measured inclusive production ${\mathcal B} (D^0\to\eta^\prime X)=(2.48\pm0.27)\%$ and ${\mathcal B} (D^+\to\eta^\prime X)=(1.04\pm0.18)\%$~\cite{prd74_112005} within $2.5\sigma$ and $0.1\sigma$, respectively. This excludes the possibility of additional exclusive $D\to\eta^\prime X$ decay modes with large branching fractions. \section{Acknowledgements} The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. The authors are grateful to Fu-Sheng Yu, Jonathan L. Rosner, and Zhizhong Xing for helpful discussions. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11335008, 11425524, 11475123, 11625523, 11635010, 11675200, 11735014, 11775230; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos. U1532257, U1532258, U1532101; CAS Key Research Program of Frontier Sciences under Contracts Nos. QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contracts Nos. 
Collaborative Research Center CRC 1044, FOR 2359; Istituto Nazionale di Fisica Nucleare, Italy; Koninklijke Nederlandse Akademie van Wetenschappen (KNAW) under Contract No. 530-4CDP03; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Science and Technology fund; The Swedish Research Council; U. S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0010118, DE-SC-0010504, DE-SC-0012069; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt.
\section{Abstract} Quantum entanglement, one of the most counterintuitive effects in quantum mechanics \cite{Einstein35}, plays an essential role in quantum information and communication technology. Thus far, many efforts have been made to create multipartite entanglement in photon polarization \cite{Walther2005, Lu2007, Gao2010, Wieczorek2009, Prevedel2009}, quadrature amplitude \cite{Furusawa2009}, and ions \cite{Haffner2005}, for the demonstration and precise operation of quantum protocols. These efforts have mainly concentrated on the generation of pure entangled states, such as GHZ \cite{Greenberger1989}, W \cite{Dur2000}, and cluster \cite{Briegel2001} states. By contrast, bound entanglement \cite{Horodecki1998} cannot be distilled into pure entangled states, and had been considered useless for quantum information protocols such as quantum teleportation \cite{Boumeester1997, Furusawa1998}. However, it is interesting that some bound entanglement can be distilled by certain procedures \cite{Smolin2001} or by interaction with auxiliary systems \cite{Acin2004, Shor2003}. These properties enable new quantum communication schemes, for instance, remote information concentration \cite{Murao2001}, secure quantum key distribution \cite{Horodecki2005}, super-activation \cite{Shor2003}, and the convertibility of pure entangled states \cite{Ishizaka2004}. Recently, a distillation protocol from the bound entangled state, the so-called `unlocking' \cite{Smolin2001}, has been experimentally demonstrated \cite{Amselem2009,Lavoie2010}. In this protocol, as depicted in Fig. 1 (a), the four-party bound entanglement in the Smolin state \cite{Smolin2001} can be distilled into two parties (e.g., A and D) when the other two parties (e.g., B and C) come together and make joint measurements on their qubits. The unlocking protocol, in principle, can distill pure and maximal entanglement into the two qubits.
However, it does not belong to the category of local operations and classical communication (LOCC), since the two parties have to meet to distill the entanglement. \begin{tiny} \begin{figure}[t!] \includegraphics[width= \columnwidth, clip]{activation_unlock.eps} \label{ } \caption{Principle of the distillation of the bound entanglement in the Smolin state $\rho_s$. Each circle represents a qubit, and the blue line and squares show the entanglement of parties. BSM, Bell state measurement; CC, classical channel. (a) Unlocking. (b) Activation of the bound entanglement. Both of them can distill entanglement from the Smolin state; however, the activation protocol can be carried out under LOCC, while the unlocking needs two parties coming together and making joint measurements. } \end{figure} \end{tiny} The activation of bound entanglement that we demonstrate here is another protocol by which one can distill the Smolin-state bound entanglement by means of LOCC. The principle of the activation is sketched in Fig. 1 (b). Consider four parties, A, B, C, and D, each of which has a qubit in the Smolin state. The Smolin state is a statistically equal mixture of pairs of the four Bell states, and its density matrix $\rho_s$ is given by \begin{align} \rho_s &= \frac{1}{4}\sum_{i=1}^{4} |\phi^i\rangle \langle \phi^i |_{AB}\otimes |\phi^i\rangle \langle \phi^i |_{CD} \notag \\ &= \frac{1}{4}\sum_{i=1}^{4} |\phi^i\rangle \langle \phi^i |_{AC}\otimes |\phi^i\rangle \langle \phi^i |_{BD} \notag \\ &= \frac{1}{4}\sum_{i=1}^{4} |\phi^i\rangle \langle \phi^i |_{AD}\otimes |\phi^i\rangle \langle \phi^i |_{BC}, \end{align} where $|\phi^i\rangle \in \{ |\phi^{\pm} \rangle ,|\psi^{\pm} \rangle \}$ are the two-qubit Bell states given by \begin{align} |\phi^{\pm} \rangle &= \frac{1}{\sqrt{2}} \left( |00 \rangle \pm |11 \rangle \right) \notag \\ |\psi^{\pm} \rangle &= \frac{1}{\sqrt{2}} \left( |01 \rangle \pm |10 \rangle \right), \end{align} where $\ket{0}$ and $\ket{1}$ are the qubit bases.
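The permutation symmetry of $\rho_s$ and its positivity under partial transposition across a two-two cut can be verified numerically; this is a sketch with our own qubit-ordering convention (A, B, C, D), not part of the experiment:

```python
import numpy as np

# Bell states |phi+/->, |psi+/-> as 4-vectors (our ordering convention)
s2 = 1 / np.sqrt(2)
bells = [np.array(v) * s2 for v in
         ([1, 0, 0, 1], [1, 0, 0, -1], [0, 1, 1, 0], [0, 1, -1, 0])]
proj = [np.outer(b, b) for b in bells]        # Bell projectors

def permute_qubits(rho, perm):
    """Reorder the four qubits of a 16x16 density matrix."""
    t = rho.reshape([2] * 8)
    return t.transpose(list(perm) + [a + 4 for a in perm]).reshape(16, 16)

def smolin(pairing):
    """Equal mixture of identical Bell pairs on the given qubit pairing."""
    rho = sum(np.kron(p, p) for p in proj) / 4   # trace-1 mixture
    return permute_qubits(rho, pairing)

# Pairings (A,B)(C,D), (A,C)(B,D), (A,D)(B,C), expressed as axis orders
rho1 = smolin((0, 1, 2, 3))
rho2 = smolin((0, 2, 1, 3))
rho3 = smolin((0, 2, 3, 1))
print(np.allclose(rho1, rho2) and np.allclose(rho1, rho3))   # True

# Partial transpose on qubits C, D stays positive: no NPT entanglement
# across the AB|CD cut
r = rho1.reshape([2] * 8).transpose([0, 1, 6, 7, 4, 5, 2, 3]).reshape(16, 16)
print(np.linalg.eigvalsh(r).min() >= -1e-12)   # True
```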
Since the $\rho_s$ state is symmetric with respect to the exchange of any two parties, $\rho_s$ is separable in any two-two bipartite cut. This implies that there is no distillable entanglement in any two-two bipartite cut: $D_{AB|CD}(\rho_s)=D_{AC|BD}(\rho_s)=D_{AD|BC}(\rho_s)=0$, where $D_{i|j}(\rho)$ is the distillable entanglement of $\rho$ in an $i|j$ bipartite cut. In addition to the Smolin state $\rho_s$, two of the parties (e.g., B and C) share distillable entanglement in the two-qubit Bell state (e.g., $\ket{\psi^+}_{B'C'}$). Hence, the initial state $\rho_I$ is given by \begin{align} \rho_I &= \rho_s \otimes \ket{\psi^+}\bra{\psi^+}_{B'C'} \notag \\ &= \frac{1}{4}\sum_{i=1}^{4} \ket{\phi^i}\bra{\phi^i}_{AD}\otimes \ket{\phi^i}\bra{\phi^i} _{BC} \otimes \ket{\psi^+}\bra{\psi^+}_{B'C'}. \end{align} Neither $\rho_s$ nor $\ket{\psi^+}_{B'C'}$ alone gives distillable entanglement to A and D: $D_{A|D}(\rho_s) = D_{A|D}(\ket{\psi^+}_{B'C'})=0$, since $D_{A|D}(\rho_s) \leq D_{AB|CD}(\rho_s)=0$. To distill entanglement into A and D, Bell state measurements (BSMs), i.e., projection measurements onto the Bell basis, are performed on the qubit pairs B-B' and C-C', and the results are communicated to A and D via classical channels. Owing to the property that A-D and B-C share the same Bell states $\ket{\phi^i}_{AD}\otimes\ket{\phi^i}_{BC}$ in $\rho_s$, the outcome of the BSMs on each $\ket{\phi^i}_{BC}\otimes \ket{\psi^+}_{B'C'}$ reveals the type of the Bell state shared by A and D. Table I lists the resulting states shared by A and D for all possible combinations of the BSM outcomes at B-B' and C-C'. Given this information, one can determine the state shared by A and D and then convert any $\ket{\phi^i}_{AD}$ into $\ket{\psi^-}_{AD}$ by local unitary operations. Hence, the activation protocol can distill entanglement from the Smolin state by the four parties' LOCC with the help of the auxiliary two-qubit Bell state.
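The bookkeeping summarized in Table I reduces to XOR arithmetic on (bit-flip, phase-flip) labels of the Bell states; a sketch, where the $(x,z)$ labeling convention is our own:

```python
# Bell states labeled by (bit flip x, phase flip z):
# phi+ = (0,0), phi- = (0,1), psi+ = (1,0), psi- = (1,1).
NAMES = {(0, 0): "phi+", (0, 1): "phi-", (1, 0): "psi+", (1, 1): "psi-"}
LABELS = {name: bits for bits, name in NAMES.items()}

def state_AD(bsm_BB, bsm_CC):
    """Bell state left on A-D given the two BSM outcomes (Table I)."""
    x1, z1 = LABELS[bsm_BB]
    x2, z2 = LABELS[bsm_CC]
    # XOR both outcomes with the auxiliary |psi+> = (1, 0)
    return NAMES[(x1 ^ x2 ^ 1, z1 ^ z2)]

print(state_AD("phi+", "phi+"))   # psi+, as in Table I
print(state_AD("psi-", "phi-"))   # phi+
```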
This is in strong contrast to the unlocking protocol (see Fig. 1 (a)), which requires non-local joint BSM between the two parties (B and C). It is noteworthy that in our activation protocol, the distillable entanglement between A and D is superadditive: \begin{equation} D_{A|D}(\rho_I) > D_{A|D}(\rho_s) +D_{A|D}(\ket{\psi^+}_{B'C'}) = 0. \end{equation} This superadditivity means that the bound entanglement is activated with the help of the auxiliary distillable entanglement, although A and D share no distillable entanglement for $\rho_s$ or $\ket{\psi^+}_{B'C'}$ alone. This protocol can also be regarded as an entanglement transfer from B-C to A-D. In this context, it is interesting that two parties (B-C) can transfer the Bell state to the other two parties (A-D) despite being separated: $D_{AD|BC}(\rho_s) = 0$. In other words, entanglement can be transferred by the mediation of the undistillable, bound entanglement. This unique feature is quite different from two-stage entanglement swapping \cite{Goebel2008}, which needs distillable entanglement shared by senders and receivers to transfer the Bell states. \begin{table}[b!] \caption{Relationship between the combination of the results of the BSM of B-B' and C-C', and the state of AD. Each combination of the BSMs tells the state of A-D. 
} \label{table} \begin{center}\small \def\arraystretch{1.8} \begin{tabular}{lcccccc}\hline & & & $\ket{\phi^+}_{BB'}$ & $\ket{\phi^-}_{BB'}$ & $\ket{\psi^+}_{BB'}$& $\ket{\psi^-}_{BB'}$ \\ \hline $\ket{\phi^+}_{CC'}$ & & & $\ket{\psi^+}_{AD}$ & $\ket{\psi^-}_{AD}$ & $\ket{\phi^+}_{AD}$ & $\ket{\phi^-}_{AD} $ \\ $\ket{\phi^-}_{CC'}$ & & & $\ket{\psi^-}_{AD}$ & $\ket{\psi^+}_{AD}$ & $\ket{\phi^-}_{AD}$ & $\ket{\phi^+}_{AD} $ \\ $\ket{\psi^+}_{CC'}$ & & & $\ket{\phi^+}_{AD}$ & $\ket{\phi^-}_{AD}$ & $\ket{\psi^+}_{AD}$ & $\ket{\psi^-}_{AD} $ \\ $\ket{\psi^-}_{CC'}$ & & & $\ket{\phi^-}_{AD}$ & $\ket{\phi^+}_{AD}$ & $\ket{\psi^-}_{AD}$ & $\ket{\psi^+}_{AD} $ \\ \hline \end{tabular} \end{center} \vspace*{-4mm} \end{table} \begin{tiny} \begin{figure}[t!] \includegraphics[width=\columnwidth, clip]{activation_exsetup05.eps} \label{ } \caption{Scheme of the activation of the bound entanglement. Each source of spontaneous parametric down-conversion (SPDC) produces $\ket{\psi^+}$. The four-photon states emitted from SPDC1 and SPDC2 pass through liquid crystal variable retarders (LCVRs) to be transformed into the Smolin states. The polarization of two-photon states in mode A and D are analyzed on the condition that the pair of the Bell state, $\ket{\phi^+}_{BB'}\otimes\ket{\phi^+}_{CC'}$ is detected. P, a polarizer. } \end{figure} \end{tiny} Figure 2 illustrates the experimental scheme of our activation protocol. In our experiment, the physical qubits are polarized photons, having horizontal $\ket{H}$ and vertical $\ket{V}$ polarizations as the state bases. By using three sources of spontaneous parametric down-conversion (SPDC) \cite{Kwiat95}, we produced three $\ket{\psi^+}$ states simultaneously. The state $\rho(\psi^+) \equiv \ket{\psi^+}\bra{\psi^+}_{AB}\otimes \ket{\psi^+}\bra{\psi^+}_{CD}$ emitted from the SPDC1 and SPDC2 was transformed into the Smolin state by passing through the synchronized liquid-crystal variable retarders (LCVRs, see Supplementary Material). 
The state $\ket{\psi^+}_{B'C'}$ emitted from SPDC3 was used as the auxiliary Bell state for the activation protocol. A polarizing beam splitter (PBS) and a $\ket{+}_{B}\ket{+}_{B'}$ ($\ket{+}_{C}\ket{+}_{C'}$) coincidence event at modes B and B' (C and C') allow the projection onto the Bell state $\ket{\phi^+}_{BB'}$ ($\ket{\phi^+}_{CC'}$), where $\ket{+}_{i}= (\ket{H}_i+\ket{V}_i)/\sqrt{2}$ \cite{Goebel2008}. Given the simultaneous BSMs at B-B' and C-C' we post-selected the events of detecting $\ket{\phi^+}_{BB'} \otimes \ket{\phi^+}_{CC'}$ out of the 16 combinations (Table I). The result, i.e., the state after the activation, was expected to be $\ket{\psi^+}_{AD}$. To characterize the experimentally obtained Smolin state $\rho_s^{exp}$ and the state after the activation process $\rho_{AD}$, the maximum likelihood state tomography \cite{James2001} was performed (see Supplementary Material). \begin{tiny} \begin{figure}[t!] \includegraphics[width=0.7\columnwidth, clip]{2010112701mlre.eps} \caption{ Real part of the measured Smolin state $\rho_s^{exp}$. } \end{figure} \end{tiny} Figure 3 shows the real part of the reconstructed density matrix of the Smolin state $\rho_s^{exp}$ we obtained. The fidelity to the ideal Smolin state $F_s$ = $ \left({\rm Tr} \sqrt{\sqrt{\rho_s} \rho_s^{exp} \sqrt{\rho_s} }\right)^2$ was calculated to be 82.2$\pm$0.2$\%$. From $\rho_s^{exp}$, we evaluated the separability of the generated state across the bipartite cuts AB$|$CD, AC$|$BD, and AD$|$BC in terms of the logarithmic negativity (LN) \cite{Vidal2002}, which is an entanglement monotone quantifying the upper bound of the distillable entanglement under LOCC. The LN of the density matrix $\rho$ composed of the two subsystems $i$ and $j$ is given by \begin{equation} LN _{i|j}(\rho ) =\log_2 ||\rho^{T_{i}}||, \end{equation} where $\rho^{T_{i}}$ represents the partial transpose with respect to the subsystem $i$, and $||\rho^{T_{i}}||$ is the trace norm of $\rho^{T_{i}}$. 
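As a minimal numerical illustration of Eq.~(5) (a sketch, not part of the analysis): for the Bell state $\ket{\psi^+}$, the partially transposed density matrix has trace norm 2, so $LN = 1$, the value carried by one distillable Bell pair:

```python
import numpy as np

# Two-qubit Bell state |psi+> = (|01> + |10>) / sqrt(2)
psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)
rho = np.outer(psi_plus, psi_plus)

# Partial transpose on the first qubit of the two-qubit state
pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

# Trace norm of a Hermitian matrix = sum of |eigenvalues|
trace_norm = np.abs(np.linalg.eigvalsh(pt)).sum()
LN = np.log2(trace_norm)
print(LN)   # 1.0
```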
The LN values thus obtained for the three bipartite cuts of $\rho_s^{exp}$ are presented in Table II, together with those of the Smolin state $\rho_s$ and the state $\rho(\psi^+)$. The state $\rho_s$ has zero negativity for all three bipartite cuts, while the state $\rho(\psi^+)$ has finite values, i.e., finite distillable entanglement originating from the two Bell states, for the AC$|$BD and AD$|$BC cuts. For $\rho_s^{exp}$, the LN values are all close to zero, indicating that $\rho_s^{exp}$ has a very small amount, if any, of distillable entanglement. \begin{table}[b!] \caption{Logarithmic negativities (LNs) for the two-two bipartite cuts. } \label{table} \begin{center}\small \def\arraystretch{1.3} \begin{tabular}{lccc}\hline & $\rho_s^{exp}$ & $\rho_s$ & $\rho(\psi^+)$ \\ \hline $LN_{AB|CD}(\rho)$ & $0.076 \pm 0.012 $ & 0 & 0 \\ $LN_{AC|BD}(\rho)$ & $0.183 \pm 0.012 $ & 0 & 2 \\ $LN_{AD|BC}(\rho)$ & $0.178 \pm 0.012 $ & 0 & 2 \\\hline \end{tabular} \end{center} \vspace*{-4mm} \end{table} To test other separabilities, we calculated the entanglement witness Tr$\{W \rho_s^{exp} \}$, which gives negative values for non-separable states and non-negative values for separable ones. The witness for our four-qubit states is $W = I^{\otimes 4}-\sigma_x^{\otimes 4}-\sigma_y^{\otimes 4}-\sigma_z^{\otimes 4}$ \cite{Amselem2009,Toth2005}. We obtained Tr$\{W \rho_s^{exp} \}= -1.30 \pm 0.02$, while the values for the ideal Smolin state and the state $\rho(\psi^+)$ are both $-2$. The negative witness value indicates that $\rho_s^{exp}$ is not fully separable. Taking into account the result that $\rho_s^{exp}$ has almost no distillable entanglement for the two-two bipartite cuts, $\rho_s^{exp}$ should have distillable entanglement in one-three bipartite cuts and/or tripartite cuts. \begin{tiny} \begin{figure}[t!] 
\includegraphics[width=0.484\columnwidth, clip]{smolin_ad03.eps} \includegraphics[width=0.49\columnwidth, clip]{rhoad.eps} \label{ } \caption{Density matrices of the qubits A and D before (a) and after (b) the activation experiment. (a) Reduced-density matrix $ \rho_{sAD}^{exp}$ = Tr$_{BC} \rho_s^{exp}$. (b) The density matrix $\rho_{AD}$, triggered by the two BSMs at B-B' and C-C'. } \end{figure} \end{tiny} In the following we describe the results of our activation experiment. Figure 4 (a) shows the reduced density matrix $ \rho_{sAD}^{exp}$ = Tr$_{BC} \rho_s^{exp}$, i.e., the density matrix before the activation. We confirmed that $ \rho_{sAD}^{exp}$ gives no distillable entanglement to A and D: $ LN_{A|D}(\rho_{sAD}^{exp}) = 0$. Figure 4 (b) shows the density matrix after the activation $\rho_{AD}$, the reconstructed two-qubit density matrix in modes A and D triggered by the two BSMs at B-B' and C-C'. In the process of the state reconstruction we subtracted the accidental coincidences caused by higher-order emission of SPDC (see Supplementary Information). The fidelity $F_{AD} = \bra{\psi^+}\rho_{AD}\ket{\psi^+}_{AD} $ to the ideally activated state $\ket{\psi^+}_{AD}$ was 85$\pm 5$\%, which is larger than the classical limit of 50\%. The obtained LN was $LN_{A|D}(\rho_{AD}) = 0.83\pm 0.08$, indicating that we have gained a certain amount of entanglement via our activation process, whereas A and D initially share no distillable entanglement. We quantified the increase of the distillable entanglement via our activation experiment. We evaluated the distillable entanglement before and after the activation by means of its lower and upper bounds as follows. The upper bound of $D_{A|D}(\rho_s^{exp})$, the distillable entanglement before the activation, was given by \begin{align} D_{A|D}(\rho_s^{exp}) \leq LN_{AB|CD}(\rho_s^{exp} ) = 0.076. \end{align} The observed LN value after the activation process, $LN_{A|D}(\rho_{AD} ) = 0.83$, is larger than this value. 
However, since these are just upper bounds on the distillable entanglement, we should examine the lower bound for $\rho_{AD}$ to confirm the increase of the distillable entanglement. To quantify the lower bound of $D_{A|D}(\rho)$, we used \begin{align} D_H(\rho) \leq D_{A|D}(\rho ), \end{align} where $D_H(\rho)$ is the distillable entanglement via a certain distillation protocol, the so-called hashing method, known as the best method for Bell diagonal states of rank 2 \cite{Bennett1996}. $D_H(\rho)$ is given by \begin{align} D_H(\rho ) = 1+F\log_2(F)+(1-F)\log_2\frac{(1-F)}{3}, \end{align} where $F$ is the maximum state fidelity over the four Bell states, $F$ = max$(\bra{\phi^i}\rho\ket{\phi^i})$. It is known that $D_H(\rho) > 0$ for $F > 0.8107$ \cite{Bennett1996}. The fidelity for $\rho_{AD}$ satisfies this criterion, and the value of $D_H(\rho_{AD})$ is calculated to be 0.15. The combination of Eqs. (6) and (7) shows a clear increase of the distillable entanglement via our activation experiment: $D_{A|D}(\rho_s^{exp}) \leq 0.076 < 0.15 \leq D_{A|D}(\rho_{AD})$. In conclusion, we have experimentally demonstrated the activation of bound entanglement, unleashing the entanglement bound in the Smolin state by means of LOCC with the help of the auxiliary entanglement of the two-qubit Bell state. We reconstructed the density matrices of the states before and after the activation protocol by full state tomography. We observed the increase of distillable entanglement via the activation process, examining two inequalities that bound the values of the distillable entanglement. The gain of distillable entanglement clearly demonstrates the activation protocol, in which the undistillable, bound entanglement in $\rho_s$ is essential.
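The hashing bound of Eq.~(8) is straightforward to evaluate directly; a sketch reproducing the quoted numbers (the function name is ours):

```python
import math

def hashing_bound(F):
    """D_H for a rank-2 Bell-diagonal state with largest Bell fidelity F."""
    return 1 + F * math.log2(F) + (1 - F) * math.log2((1 - F) / 3)

print(round(hashing_bound(0.85), 2))      # 0.15, the value quoted for rho_AD
print(abs(hashing_bound(0.8107)) < 1e-3)  # ~0 at the distillability threshold
```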
Our result will be fundamental for novel multipartite quantum-communication schemes, for example, quantum key secret-sharing, communication complexity reduction, and remote information concentration, in which general classes of entanglement, including bound entanglement, are important. This work was supported by a Grant-in-Aid for Creative Scientific Research (17GS1204) from the Japan Society for the Promotion of Science.
\section{Approach} \label{sec:approach} In this section we elaborate our approach to query-focused video summarization. Denote by ${\cal V} = \{{\cal V}_t\}_{t=1}^T$ a video that is partitioned into $T$ segments, and by $q$ the query about the video. In our experiments, every segment ${\cal V}_t$ consists of $10$ video shots, each of which is 5 seconds long and is used in Section~\ref{subsec:GS} to collect the concept annotations. \begin{figure*}[t] \includegraphics[width=\textwidth]{./Framework} \caption{\small{Our query-focused video summarizer: Memory network (right) parameterized sequential determinantal point process (left).}} \label{fig:framework} \vspace{-10pt} \end{figure*} \subsection{Query Conditioned Sequential DPP} The sequential determinantal point process (DPP)~\cite{gong2014diverse} is among the state-of-the-art models for generic video summarization. We condition it on the query $q$ as our overarching video summarization model, \begin{align} &P(Y_1=\bm{y}_1, Y_2 = \bm{y}_2, \cdots, Y_T=\bm{y}_T | \mathcal{V}, q) \\ = &P(Y_1=\bm{y}_1 | \mathcal{V}_1, q)\prod_{t=2}^T P(Y_t = \bm{y}_t | \mathcal{V}_t, \bm{y}_{t-1}, q) \end{align} where the $t$-th DPP variable $Y_t$ selects subsets from the $t$-th segment ${\cal V}_t$, i.e., $\bm{y}_t\subseteq{\cal V}_t$, and the distribution $P(Y_t = \bm{y}_t | \mathcal{V}_t, \bm{y}_{t-1}, q)$ is specified by a conditional DPP~\cite{kulesza2012determinantal}, \begin{align} P(Y_t = \bm{y}_t | \mathcal{V}_t, \bm{y}_{t-1}, q) = \frac{\det [\bm{L}(q)]_{\bm{y}_t\cup\bm{y}_{t-1}}}{\det\big(\bm{L}(q) + \bm{I}_t\big)}. \end{align} The numerator on the right-hand side is the principal minor of the (L-ensemble) kernel matrix $\bm{L}(q)$ indexed by the subset $\bm{y}_t\cup\bm{y}_{t-1}$. The denominator is the determinant of the sum of the kernel matrix and a modified identity matrix whose diagonal elements indexed by $\bm{y}_{t-1}$ are 0's. 
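The conditional DPP probability above can be sketched numerically as follows (a minimal illustration with numpy; the function name and the toy kernel are our own, not the authors' code):

```python
import numpy as np

def seq_dpp_conditional(L, y_t, y_prev):
    """P(Y_t = y_t | y_{t-1}) for a conditional DPP with L-ensemble L.

    L      : (n, n) PSD kernel over the ground set (indices 0..n-1)
    y_t    : list of indices selected at step t
    y_prev : list of indices selected at step t-1 (conditioned on)
    """
    idx = list(y_t) + list(y_prev)
    numerator = np.linalg.det(L[np.ix_(idx, idx)])
    # Identity matrix with 0's on the diagonal entries indexed by y_prev
    I_t = np.eye(L.shape[0])
    I_t[y_prev, y_prev] = 0.0
    return numerator / np.linalg.det(L + I_t)
```

The normalization relies on the identity $\sum_{Y \supseteq \bm{y}_{t-1}} \det[\bm{L}]_Y = \det(\bm{L}+\bm{I}_t)$, so the conditional probabilities over all $\bm{y}_t$ sum to one.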
Readers are referred to the excellent tutorial~\cite{kulesza2012determinantal} on DPPs for more details. Note that the DPP kernel $\bm{L}(q)$ is parameterized by the query $q$. We have to carefully devise the way of parameterizing it in order to take the following properties into account. In query-focused video summarization, a user selects a shot for the summary for two possible reasons. One is that the shot is closely related to the query and thus appeals to the user. The other is the contextual importance of the shot; e.g., the user would probably choose a shot to represent a prominent event in the video even if the event is not quite relevant to the query. To this end, we use a memory network to model the two types of importance (query-related and contextual) of a video shot simultaneously. \subsection{Memory Network to Parameterize DPP Kernels} The memory network~\cite{sukhbaatar2015end} offers a neural network architecture to naturally attend a question to ``facts'' (cf.\ the rightmost panel of Figure~\ref{fig:framework}). In our work, we shall measure the relevance between the query $q$ and a video shot and incorporate such information into the DPP kernel $\bm{L}(q)$. Therefore, it is straightforward to substitute the question in the memory network with our query, but the ``facts'' are less obvious. As discussed in Section~\ref{subsec:Dict}, there could be various scenarios for a query and a shot: all the query concepts may appear in the shot but possibly in different frames; one or two concepts of the query may not be present in the shot; it is also possible that none of the concepts are relevant to any frame in the shot. 
In other words, the memory network is supposed to screen all the video frames in order to determine the shot's relevance to the query. Hence, we uniformly sample 8 frames from each shot as the ``facts''. The video frames are represented using the same features as~\cite{sharghi2016query} (cf.\ $\bm{f}_1,\cdots,\bm{f}_K$ on the rightmost panel of Figure~\ref{fig:framework}). The memory network takes as input the video frames $\{\bm{f}_k\}$ of a shot and a query $q$. The frames are transformed to memory vectors $\{\bm{m}_k\}$ through an embedding matrix $A$. Similarly, the query ${q}$, represented by a binary indicator vector, is mapped to the internal state $\bm{u}$ using an embedding matrix $C$. The attention scheme is implemented simply by a dot product followed by a softmax function, \begin{equation} p_k = \text{Softmax}(\bm{u}^T\bm{m}_k), \end{equation} where $p_k$ expresses how much attention the query $q$ pays to the frame $\bm{f}_k$. Equipped with the attention scores $\{p_k\}$, we assemble another embedding $\{\bm{c}_k\}$ of the frames, obtained by the mapping matrix $B$ in Figure~\ref{fig:framework}, into the video shot representation $\bm{o}$: \begin{equation} \bm{o} = \sum_{k}p_k\bm{c}_k, \end{equation} which is conditioned on the query $q$ and entails the relevance strength of the shot to the query. As a result, we expect that the DPP kernel parameterized by \begin{equation} [\bm{L}(q)]_{ij} = \bm{o}_i^T {D}^T {D} \bm{o}_j \label{eWeightCombine} \end{equation} is also flexible in modeling the importance of the shots to be selected into the video summary. Here $i$ and $j$ index two shots, and ${D}$ is another embedding matrix. Note that the contextual importance of a shot can be inferred from its similarities to the other shots through the kernel matrix, while the query-related importance is mainly captured by the attention scheme in the memory network. 
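A minimal numpy sketch of this attention module and the resulting kernel (the function names and toy dimensions are our own; in the actual model $A$, $B$, $C$, $D$ are learned end-to-end):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def shot_representation(frames, query, A, B, C):
    """Query-conditioned shot vector o = sum_k p_k c_k.

    frames : (K, d_f) frame features f_1..f_K
    query  : (d_q,)   binary query indicator vector
    A, B   : (d_e, d_f) memory / output frame embedding matrices
    C      : (d_e, d_q) query embedding matrix
    """
    m = frames @ A.T            # memory vectors m_k
    c = frames @ B.T            # output embeddings c_k
    u = C @ query               # internal state u
    p = softmax(m @ u)          # attention p_k = Softmax(u^T m_k)
    return p @ c                # o, conditioned on the query

def dpp_kernel(shot_vectors, D):
    """[L(q)]_{ij} = o_i^T D^T D o_j over all shot pairs."""
    Z = shot_vectors @ D.T
    return Z @ Z.T
```

Because $\bm{L}(q)$ is a Gram matrix of the embedded shot vectors, it is automatically symmetric and positive semidefinite, i.e., a valid L-ensemble kernel.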
\subsection{Learning and Inference} We learn the overall video summarizer, including the sequential DPP and the memory network, by maximizing the log-likelihood of the user summaries in the training set. We use stochastic gradient descent with mini-batching to optimize the embedding matrices $\{A, B, C, D\}$. The learning rates and numbers of epochs are chosen using the validation set. At the test stage, we sequentially visit the video segments ${\cal V}_1, \cdots, {\cal V}_T$ and select shots from them using the learned summarization model. It is notable that our approach requires fewer user annotations than SH-DPP~\cite{sharghi2016query}. It learns directly from the user summaries and implicitly attends the queries to the video shots, whereas SH-DPP requires very costly annotations of the relevance between video shots and queries. Our new dataset does supply such supervision, so we include SH-DPP as a baseline method in our experiments. \section{Conclusion} \label{sec:conc} \vspace{-5pt} In this work, our central theme is to study the \textit{subjectiveness} in video summarization. We have analyzed the key challenges caused by the subjectiveness and proposed some solutions. In particular, we compiled a dataset that is densely annotated with a comprehensive set of concepts and designed a novel evaluation metric that benefits from the collected annotations. We also devised a new approach to generating personalized summaries by taking user queries into account. We employed memory networks and determinantal point processes in our summarizer, so that our model leverages their attention schemes and diversity modeling capabilities, respectively. Extensive experiments verify the effectiveness of our approach and reveal some nice behaviors of our evaluation metric. \vspace{-10pt} \paragraph{Acknowledgements.} This work is supported by NSF IIS \#1566511, a gift from Adobe Systems, and a GPU from NVIDIA. We thank Fei Sha, the anonymous reviewers and area chairs, especially R2, for their insightful suggestions. 
\section{Dataset} \label{sec:dataset} In this section, we provide the details of compiling a comprehensive dataset for video summarization. We opt to build upon the existing UT Egocentric (UTE) dataset~\cite{lee2012discovering} mainly for two reasons: 1) the videos are consumer grade, captured in uncontrolled everyday scenarios, and 2) each video is 3--5 hours long and contains a diverse set of events, making video summarization a naturally desirable yet challenging task. In what follows, we first explain how we define a dictionary of concepts and determine the best queries over all possibilities for query-focused video summarization. Then we describe the procedure of gathering user summaries for the queries. We also show informative statistics about the collected dataset. \begin{figure*} \centering \includegraphics[width=\linewidth]{./shot_tags} \vspace{-18pt} \caption{\small{All annotators agree with each other on the prominent concepts in the video shot, while they miss different subtle concepts. }} \label{fig:tags} \vspace{-2pt} \end{figure*} \subsection{Concept Dictionary and Queries} \label{subsec:Dict} We ask annotators to transform the semantic information in each video shot into a binary semantic vector (cf.\ Figures~\ref{fig:captionvstags} and \ref{fig:tags}), with 1's indicating the presence of the corresponding concepts and 0's their absence. Such annotations serve as the foundation for the efficient and automatic evaluation method for video summarization described in Section~\ref{subsec:tag}. The key is thus to have a dictionary that covers a wide range and multiple levels of concepts, in order to have the right basis to encode the semantic information. 
In~\cite{sharghi2016query}, we constructed a lexicon of concepts by overlapping the nouns in the video shot captions~\cite{yeung2014videoset} with those in SentiBank~\cite{borth2013sentibank}. Those nouns serve as a great starting point for us since they are mostly entry-level~\cite{ordonez2013large} words. We prune out the concepts that are weakly related to visual content (e.g., \textsc{``Area''}, which could be interpreted in various ways and is applicable to most situations). Additionally, we merge redundant concepts such as \textsc{``Children''} and \textsc{``Kids''}. We also add some new concepts in order to construct an expressive and comprehensive dictionary. Two strategies are employed to find the new concept candidates. First, after watching the videos, we manually add the concepts that appear with significant frequency, e.g., \textsc{``Computer''}. Second, we use the publicly available statistics about YouTube and Vine search terms to add the terms that are frequently searched by users, e.g., \textsc{``Pet/Animal''}. The final lexicon is a concise and diverse set of 48 concepts (cf.\ Figure~\ref{fig:stats}) that are deemed to be comprehensive for the UTE videos of daily lives. 
\begin{figure*}[t] \centering \includegraphics[width=\linewidth]{./Summary_Comparison} \vspace{-18pt} \caption{\small{Two summaries generated by the same user for the queries $\{\textsc{Hat},\textsc{Phone}\}$ and $\{\textsc{Food},\textsc{Drink}\}$, respectively. The shots in the two summaries beside the green bars exactly match each other, while the orange bars show the query-specific shots.}} \label{fig:sumcompare} \vspace{-10pt} \end{figure*} We also construct queries for acquiring query-focused user summaries, using two or three concepts as opposed to singletons. Imagine a use case of video search engines: the queries entered by users are often more than one word. For each video, we formulate 46 queries. They cover the following four distinct scenarios: i) all the concepts in the query appear in the same video shots together (15 such queries); ii) all concepts appear in the video but never jointly in a single shot (15 queries); iii) only one of the concepts constituting the query appears in some shots of the video (15 queries); and iv) none of the concepts in the query are present in the video (1 such query). We describe in the Suppl.\ Materials how we obtain the 46 queries to cover the four scenarios. Such queries and their user-annotated summaries challenge an intelligent video summarizer in different aspects and to different extents. \subsection{Collecting User Annotations} \label{subsec:GS} We plan to build a video summarization dataset that offers 1) efficient and automatic evaluation metrics and 2) user summaries in response to different queries about the videos. For the former, we collect user annotations about the presence/absence of concepts in each video shot. This is a quite daunting task given the lengths of the videos and the size of our concept dictionary. We use Amazon Mechanical Turk (MTurk) (\url{http://www.mturk.com/}) for economy and efficiency considerations. 
For the latter, we hire three student volunteers in order to have better quality control over the labeled video summaries. We uniformly partition the videos into 5-second-long shots. \vspace{-5pt} \subsubsection{Shot Tagging: Visual Content to Semantic Vector} \label{subsec:tag} \vspace{-5pt} We ask MTurkers to tag each video shot with all the concepts that are present in it. To save the workers from watching every shot in full, we uniformly extract five frames from each shot. A concept is assumed relevant to the shot as long as it is found in any of the five frames. Figure~\ref{fig:tags} illustrates the tagging results for the same shot by three different workers. While all the workers captured the prominent concepts like \textsc{Sky}, \textsc{Lady}, \textsc{Street}, \textsc{Tree}, and \textsc{Car}, they missed different subtle ones. The union of all their annotations, however, provides a more comprehensive semantic description of the video shot than that of any individual annotator. Hence, we ask three workers to annotate each shot and take the union to obtain the final semantic vector for the shot. On average, we have acquired $4.13$, $3.95$, $3.18$, and $3.62$ concepts per shot for the four UTE videos, respectively. In sharp contrast, the automatically derived concepts~\cite{sharghi2016query} from the shot captions~\cite{yeung2014videoset} are far from enough; on average, there are only $0.29$, $0.58$, $0.23$, and $0.26$ concepts respectively associated with each shot of the four videos. \vspace{-13pt} \paragraph{Evaluating video summaries.} Thanks to the dense concept annotations per video shot, we can conveniently contrast a system-generated video summary with user summaries according to the semantic information they entail. We first define a similarity function between any two video shots as the intersection-over-union (IOU) of their corresponding concepts. 
For instance, if one shot is tagged by \{\textsc{Car}, \textsc{Street}\} and another by \{\textsc{Street}, \textsc{Tree}, \textsc{Sign}\}, then the IOU similarity between them is ${1}/{4} = 0.25$. To find the correspondence between two summaries, we compute the maximum-weight matching of a bipartite graph, where the shots of the two summaries form the two sides of the graph. The number of matched pairs then enables us to compute precision, recall, and F1 score. Although this procedure has been used in previous work~\cite{khosla2013large,de2011vsumm}, there the edge weights are calculated from low-level visual features, which by no means match the semantic information humans obtain from the videos. In sharp contrast, we use the IOU similarities defined directly over the user-annotated semantic vectors as the edge weights. \vspace{-5pt} \subsubsection{Acquiring User Summaries} \label{subsec:user_sums} In addition to the dense per-video-shot concept tagging, we also ask annotators to label query-focused video summaries for the 46 queries described in Section~\ref{subsec:Dict}. To ensure consistency in the summaries and better quality control over the summarization process, we switch from MTurk to three student volunteers in our university. We meet and train the volunteers in person. They each summarize all four videos by taking the queries into account --- an annotator receives 4 (videos) $\times$ 46 (queries) summarization tasks in total. We thus obtain three user summaries for each query-video pair. However, we acknowledge that it is infeasible to have the annotators summarize all the query-video pairs from scratch --- the UTE videos are each 3--5 hours long. To overcome this issue, we expand each temporal video into a set of static key frames. First, we uniformly extract five key frames to represent each shot in the same way as in Section~\ref{subsec:tag}. Second, we pool all the shots corresponding to the three textual summaries~\cite{yeung2014videoset} as the initial candidate set. Third, for each query, we further include into the set all the shots that are relevant to it. A shot is relevant to the query if the intersection of the concepts associated with it and the query is nonempty. As a result, we have a set of candidate shots for each query that covers the main story in the video as well as the parts of relevance to the query. The annotators summarize the video by removing redundant shots from the set. There are $2500$ to $3600$ shots in the candidate sets, and the summaries labeled by the participants contain only $71$ shots on average. 
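The IOU similarity and the bipartite-matching evaluation described above can be sketched as follows (a minimal illustration; the brute-force matcher is for exposition only, and counting matched pairs as those with nonzero IOU is our reading of the metric):

```python
from itertools import permutations

def iou(tags_a, tags_b):
    """Intersection-over-union of two shots' concept-tag sets."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def summary_f1(system, user):
    """Precision/recall/F1 from the maximum-weight bipartite matching
    between two summaries (lists of concept-tag sets)."""
    short, long_ = (system, user) if len(system) <= len(user) else (user, system)
    best_w, matched = -1.0, 0
    # Brute-force maximum-weight matching; fine for tiny examples.
    for perm in permutations(range(len(long_)), len(short)):
        ws = [iou(short[i], long_[j]) for i, j in enumerate(perm)]
        if sum(ws) > best_w:
            best_w, matched = sum(ws), sum(1 for w in ws if w > 0)
    precision, recall = matched / len(system), matched / len(user)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For realistic summary lengths one would replace the brute-force loop with the Hungarian algorithm, e.g., `scipy.optimize.linear_sum_assignment` on the negated IOU matrix.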
\vspace{-12pt} \begin{table}[t]\centering \small \caption{\small{Inter-user agreement evaluated by F1 score (\%) (U1, U2, and U3: the three student volunteers; O: the oracle summary).}} \label{tab:interuser} \vspace{-10pt} \begin{tabular}{cccccc}\toprule U1-U2 & U1-U3 & U2-U3 & U1-O & U2-O & U3-O \\ \midrule 55.27 & 55.85 & 62.67 & 64.97 & 79.75 & 80.07\\ \bottomrule \end{tabular} \vspace{-15pt} \end{table} \paragraph{Oracle summaries.} Supervised video summarization methods~\cite{gong2014diverse,gygli2015video,sharghi2016query,zhang2016summary,zhang2016video} often learn from one summary per video, or per query-video pair in query-focused summarization, while we have three user-generated summaries per query. We aggregate them into one summary, called the oracle summary, per query-video pair by a greedy algorithm. The algorithm starts from the shots common to the three user summaries, and then greedily adds one shot at a time such that the shot gives rise to the largest marginal gain in the evaluated F1 score. We leave the details to the Suppl.\ Materials. The oracle summaries achieve better agreement with the users than the inter-user consensus (cf.\ Table~\ref{tab:interuser}). \vspace{-12pt} \begin{table} \small \centering \caption{\small{The average lengths and standard deviations of the summaries for different queries. 
}} \vspace{-10pt} \label{table:sumstats} \begin{tabular}{ccccc} \toprule & User 1 & User 2 & User 3 & Oracle \\ \cmidrule{2-5} Vid1 & 143.7$\pm$32.5 & 80.2$\pm$47.1 & 62.6$\pm$15.7 & 82.5$\pm$33.9 \\ Vid2 & 103.0$\pm$45.0 & 49.9$\pm$25.2 & 64.4$\pm$11.7 & 64.1$\pm$11.7 \\ Vid3 & 97.3$\pm$38.9 & 50.1$\pm$9.6 & 58.4$\pm$9.3 & 59.2$\pm$9.6 \\ Vid4 & 79.9$\pm$30.3 & 34.4$\pm$7.3 & 28.9$\pm$8.7 & 35.6$\pm$8.5 \\ \bottomrule \end{tabular} \vspace{-15pt} \end{table} \paragraph{Summaries of the same video differ due to queries.} Figure~\ref{fig:sumcompare} shows two summaries labeled by the same user for two distinct queries, $\{\textsc{Hat},\textsc{Phone}\}$ and $\{\textsc{Food},\textsc{Drink}\}$. Note that both summaries track the main events happening in the video, while they differ in the query-specific parts. Besides, Table~\ref{table:sumstats} reports the means and standard deviations of the lengths of the summaries per video per user. We can see that the queries highly influence the resulting summaries; the large standard deviations are attributable to the queries. \vspace{-22pt} \paragraph{Budgeted summary.} For all the summaries thus far, we do not impose any constraints on the total number of shots to be included in the summaries. After we receive the annotations, however, we let the same participants further reduce the lengths of their summaries to 20 and 10 shots, respectively. We call these \emph{budgeted} summaries and leave them for future research. \section{Experimental Setup} \label{subsec:eval} In this section, we propose a new evaluation metric and contrast it against other existing metrics. 
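The greedy construction of the oracle summaries described earlier can be sketched as follows (a minimal illustration with our own names; `score` stands in for the averaged F1 against the user summaries, and the exact stopping rule is our assumption since the details are in the Suppl.\ Materials):

```python
def greedy_oracle(user_summaries, candidates, score):
    """Aggregate several user summaries into one oracle summary.

    user_summaries : list of sets of shot ids
    candidates     : set of shot ids that may enter the oracle
    score          : callable(set) -> agreement (e.g., mean F1) with users
    """
    # Start from the shots common to all user summaries.
    oracle = set.intersection(*user_summaries)
    current = score(oracle)
    while True:
        remaining = candidates - oracle
        if not remaining:
            break
        # Pick the shot with the largest marginal gain in the score.
        best = max(remaining, key=lambda s: score(oracle | {s}))
        gain = score(oracle | {best}) - current
        if gain <= 1e-12:  # stop when no numerically meaningful gain remains
            break
        oracle.add(best)
        current += gain
    return oracle
```
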
\section{Summary Examples} \label{Examples} In addition to the text, we have enclosed two system summaries to provide qualitative results. The first video summary corresponds to the query $q$=\textsc{\{Food,Drink\}} (scenario i) and consists of 93 shots (each shot is 5 seconds long), making it less than 8 minutes long while the original video is $\sim 4$ hours. The second, with the query $q$=\textsc{\{Chocolate,Street\}} (scenario ii), is a summary of length $\sim$ 5 minutes (56 shots) generated by our model for a 3-hour-long video. \subsection{A Nice Behavior of Our Evaluation Metric} \vspace{-5pt} Our evaluation method for video summarization is mainly motivated by Yeung et al.~\cite{yeung2014videoset}. In particular, we share the same opinion that the evaluation should focus on the semantic information which humans can perceive, rather than on low-level visual features or temporal overlaps. However, the captions used in \cite{yeung2014videoset} are diverse, making the ROUGE-SU4 evaluation unstable and poorly correlated with human judgments~\cite{chen2015microsoft}, and they often miss subtle details (cf.\ Figure~\ref{fig:captionvstags} for some examples). We rectify those caveats by instead collecting dense concept annotations. Figure~\ref{fig:captionvstags} exhibits a few video shots where the concepts we collected provide a better coverage of the semantics in the shots than the captions do. Moreover, thanks to the concept annotations, we conveniently define an evaluation metric based on the IOU similarity function between any two shots (cf.\ Section~\ref{subsec:tag}). Our evaluation metric has some nice behaviors. If we randomly remove some video shots from the user summaries and compare the corrupted summaries with the original ones, an accuracy-like metric should give rise to linearly decreasing values. This is indeed what happens to our recall, as shown in Figure~\ref{fig:Del}. 
In contrast, the ROUGE-SU4 recall, which takes the shot captions as input, exhibits some nonlinearity. More results on randomly replacing some shots in the user summaries are included in the Suppl. Materials. \vspace{-5pt} \section{Experimental Results} \label{sec:expset} \begin{table*}[t] \centering \small \caption{\small{Comparison results for query-focused video summarization (\%). }} \label{table:results} \vspace{-10pt} \begin{tabular}{@{}rrrrcrrrcrrr@{}}\toprule & \multicolumn{3}{c}{SeqDPP~\cite{gong2014diverse}} & \phantom{abc}& \multicolumn{3}{c}{SH-DPP~\cite{sharghi2016query}} & \phantom{abc} & \multicolumn{3}{c}{\textbf{Ours}}\\ \cmidrule{2-4} \cmidrule{6-8} \cmidrule{10-12} & Precision & Recall & F1 && Precision & Recall & F1 && Precision & Recall & F1\\ \midrule Vid1 & \textbf{53.43} & 29.81 & 36.59 && 50.56 & 29.64 & 35.67 && 49.86 & \textbf{53.38} & \textbf{48.68}\\ Vid2 & \textbf{44.05}& 46.65 & \textbf{43.67} && 42.13& 46.81& 42.72&& 33.71& \textbf{62.09}& 41.66\\ Vid3 & 49.25& 17.44 & 25.26 && 51.92& 29.24& 36.51&& \textbf{55.16} & \textbf{62.40} & \textbf{56.47} \\ Vid4 & 11.14 & \textbf{63.49} & 18.15 && 11.51& 62.88& 18.62&& \textbf{21.39}& 63.12& \textbf{29.96}\\ \midrule Avg. 
& 39.47 & 39.35 & 30.92 && 39.03 & 42.14 & 33.38 && \textbf{40.03} & \textbf{60.25} & \textbf{44.19} \\ \bottomrule \end{tabular} \vspace{-2pt} \end{table*} \eat{ \begin{table*}[t]\centering \small \caption{\small{Testing effectiveness of individual components in our proposed attention-based query-focused summarizer.}} \label{table:elementtest} \vspace{-10pt} \begin{tabular}{@{}rrrrcrrrcrrr@{}}\toprule & \multicolumn{3}{c}{NoAttention} & \phantom{}& \multicolumn{3}{c}{-Emb $D$} & \phantom{} & \multicolumn{3}{c}{EmbSize 256}\\ \cmidrule{2-4} \cmidrule{6-8} \cmidrule{10-12} & Precision & Recall & F1 && Precision & Recall & F1 && Precision & Recall & F1\\ \midrule Vid1 & 39.68 & 61.21 & 45.66 && 34.58 & 66.28 & 42.95 && 44.12 & 59.44 & 47.64\\ Vid2 & 25.24 & 67.04 & 35.38 && 24.40 & 68.78 & 34.31 && 30.68 & 65.53 & 39.91\\ Vid3 & 36.67 & 74.27 & 47.51 && 51.26 & 64.60 & 54.46 && 53.88 & 62.58 & 55.61 \\ Vid4 & 14.32 & 65.58 & 22.38 && 15.36 & 67.31 & 23.52 && 17.01 & 66.44 & 25.61\\ \bottomrule Avg. & 28.98 & 67.03 & 37.73 && 31.4 & 66.74 & 38.81 && 36.42 & 63.5 & 42.19\\ \bottomrule \end{tabular} \end{table*} } We report the experimental setup and results in this section. \vspace{-10pt} \paragraph{Features.} We extract the same type of features as the existing SH-DPP method~\cite{sharghi2016query} to ensure fair comparisons. First, we employ 70 concept detectors from SentiBank~\cite{borth2013sentibank} and use the detection scores as the features of each key frame (8 key frames per 5-second-long shot). It is worth mentioning, however, that our approach is not limited to using concept detection scores and, more importantly, unlike SH-DPP, it does not rely on the per-shot annotations about the relevance to the query --- the per-shot user-labeled semantic vectors serve evaluation purposes only. 
Additionally, we extract a six-dimensional contextual feature vector per shot as the mean-correlations of low-level features (including color histogram, GIST~\cite{oliva2001modeling}, LBP~\cite{ojala2002multiresolution}, Bag-of-Words, as well as an attribute feature~\cite{yu2013designing}) in a temporal window whose size varies from 5 to 15 shots. The six-dimensional contextual features are appended to the key frame features in our experiments. \vspace{-10pt} \paragraph{Data split.} We run four rounds of experiments, each time leaving one video out for testing and one out for validation, while keeping the remaining two for training. Since our video summarizer and the baselines are sequential models, the small number (i.e., two) of training videos is not an issue: the videos are extremely long, providing many variations and supervision signals at the training stage. \subsection{Comparison Results} \eat{ \begin{table}[t]\centering \small \caption{\small{Comparison results for generic video summarization, i.e., when no video shots are relevant to the query.}} \label{table:glob} \vspace{-10pt} \begin{tabular}{cccc}\toprule & SubMod~\cite{gygli2015video} & Quasi~\cite{zhao2014quasi} & \textbf{Ours}\\ \midrule Vid1 & 49.51 & 53.06 & \textbf{62.66}\\ Vid2 & 51.03 & \textbf{53.80} & 46.11\\ Vid3 & 64.52 & 49.91 & \textbf{58.85} \\ Vid4 & \textbf{35.82} & 22.31 & 33.5\\ \bottomrule Avg. 
& 50.22 & 44.77 & \textbf{50.29}\\ \bottomrule \end{tabular} \vspace{-10pt} \end{table} } \begin{table*}[t]\centering \small \caption{\small{Comparison results for generic video summarization, i.e., when no video shots are relevant to the query.}} \label{table:glob} \vspace{-10pt} \begin{tabular}{@{}rrrrcrrrcrrr@{}}\toprule & \multicolumn{3}{c}{SubMod~\cite{gygli2015video}} & \phantom{abc}& \multicolumn{3}{c}{Quasi~\cite{zhao2014quasi}} & \phantom{abc} & \multicolumn{3}{c}{\textbf{Ours}}\\ \cmidrule{2-4} \cmidrule{6-8} \cmidrule{10-12} & Precision & Recall & F1 && Precision & Recall & F1 && Precision & Recall & F1\\ \midrule Vid1 & 47.86 & 51.28 & 49.51 && 57.37 & 49.36 & 53.06 && \textbf{65.88} & 59.75 & \textbf{62.66}\\ Vid2 & 56.53 & 46.50 & 51.03 && 46.75& 63.34& \textbf{53.80} && 35.07 & \textbf{67.31} & 46.11\\ Vid3 & 62.46 & 66.72 & 64.52 && 53.93 & 46.44 & 49.91&& \textbf{65.95} & 53.12 & \textbf{58.85} \\ Vid4 & \textbf{34.49} & 37.25 & \textbf{35.82} && 13.00 & \textbf{77.88} & 22.31 && 22.29 & 67.74 & 33.5\\ \midrule Avg. & \textbf{50.34} & 50.44 & 50.22 && 42.76 & 59.25 & 44.77 && 47.3 & \textbf{61.98} & \textbf{50.29}\\ \bottomrule \end{tabular} \end{table*} \section{Introduction} \label{sec:intro} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{./Caption_vs_tags} \vspace{-18pt} \caption{\small{Comparing the semantic information captured by captions in~\cite{yeung2014videoset} and by the concept tags we collected.}} \label{fig:captionvstags} \vspace{-10pt} \end{figure*} Recent years have witnessed a resurgence of interest in video summarization, likely due to the overwhelming volume of video showing up in our daily lives. Indeed, both consumers and professionals have access to ubiquitous video acquisition devices nowadays. While video data is a great asset for information extraction and knowledge discovery, its size and variability make it extremely hard for users to monitor the content or find particular occurrences in it. 
Intelligent video summarization algorithms allow us to quickly browse a lengthy video by capturing the essence and removing redundant information. Early video summarization methods were built mainly upon basic visual qualities (e.g., low-level appearance and motion features)~\cite{goldman2006schematic,gygli2015video,laganiere2008video,liu2002optimization,rav2006making,wolf1996key,zhao2014quasi}, while more recently abstract and higher-level cues have been leveraged in summarization frameworks~\cite{gong2014diverse,khosla2013large,kim2014joint,kwon2012unified,sharghi2016query,xiong2014detecting,yaohighlight,zhang2016summary}. However, one of the main obstacles to research on video summarization is user subjectivity --- users have various preferences over the summaries they would like to watch. This subjectivity causes at least two problems. First, no single video summarizer fits all users unless it interacts with and adapts to them. Second, it is very challenging to evaluate the performance of a video summarizer. In an attempt to solve the first problem, we have studied a new video summarization mechanism, query-focused video summarization~\cite{sharghi2016query}, which introduces user preferences, in the form of text queries about the video, into the summarization process. While this may be a promising direction to \textbf{\em personalize} video summarizers, the experimental study in \cite{sharghi2016query} was conducted on datasets originally collected for conventional generic video summarization~\cite{lee2012discovering,yeung2014videoset}. It thus remains unclear whether real users would generate distinct summaries for different queries and, if so, how much the query-focused summaries differ from each other. In this paper, we explore query-focused video summarization more thoroughly and build a new dataset particularly designed for it. 
While collecting the user annotations, we face the challenge of defining a good evaluation metric for contrasting system-generated summaries with user-labeled ones --- the second problem mentioned above, which arises from user subjectivity about video summaries. We contend that the pursuit of new algorithms for video summarization has actually left one of the basic problems underexplored, namely how to benchmark different video summarizers. User studies~\cite{lee2015predicting,lu2013story} are too time-consuming for comparing different approaches and their variations at a large scale. In prior work on automating the evaluation procedure, on one end, a system-generated summary has to consist of exactly the same key units (frames or shots) as the user summaries in order to be counted as a good one~\cite{chu2015video,song2015tvsum,xu2015gaze}. On the other end, pixels and low-level features are used to compare the system and user summaries~\cite{gong2014diverse,khosla2013large,kim2014joint,zhang2016summary,zhao2014quasi}, whereas it is unclear which features and distance metrics match users' criteria. Some works strive to find a balance between the two extremes, e.g., using the temporal overlap between two summaries to define the evaluation metrics~\cite{gygli2014creating,gygli2015video,potapov2014category,zhang2016video}. However, all such metrics are derived from either the temporal or visual representations of the videos, without explicitly encoding how humans perceive the information --- after all, the system-generated summaries are meant to deliver to the users information similar to that in the summaries directly labeled by the users. In terms of defining a better measure that closely tracks what humans can perceive from video summaries, we share the same opinion as Yeung et al.~\cite{yeung2014videoset}: it is key to evaluate how well a system summary retains the semantic information, as opposed to the visual quantities, of the user-supplied video summaries. 
Arguably, the semantic information is best expressed by the concepts that represent the fundamental characteristics of what we see in the video at multiple granularities, with the focus on different areas, and from a variety of perspectives (e.g., objects, places, people, actions, and their finer-grained entities). Therefore, as our first contribution, we collect dense per-video-shot concept annotations for our dataset. In other words, we represent the semantic information in each video shot by a binary semantic vector, in which the 1's indicate the presence of the corresponding concepts in the shot. We suggest a new evaluation metric for query-focused (and generic) video summarization based on these semantic vector representations of the video shots\footnote{Both the dataset and the code of the new evaluation metric are publicly available at \url{http://www.aidean-sharghi.com/cvpr2017}.}. In addition, we propose a memory network~\cite{sukhbaatar2015end} parameterized sequential determinantal point process~\cite{gong2014diverse} for tackling query-focused video summarization. Unlike the hierarchical model in~\cite{sharghi2016query}, our approach relies neither on costly user supervision about which queried concept appears in which video shot nor on any pre-trained concept detectors. Instead, we use the memory network to implicitly attend to the user query over the different frames within each shot. Extensive experiments verify the effectiveness of our approach. The rest of the paper is organized as follows. We discuss related work in Section \ref{sec:related}. Section \ref{sec:dataset} elaborates on the process of compiling the dataset and acquiring annotations, and introduces a new evaluation metric for video summarization. In Section \ref{sec:approach} we describe our novel query-focused summarization model, followed by the detailed experimental setup and quantitative results in Section \ref{sec:expset}. 
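For illustration, the binary semantic vectors just described can be formed as follows; the concept dictionary below is a hypothetical toy example, not the dataset's actual dictionary:

```python
CONCEPTS = ["Car", "Drink", "Food", "Hat", "Phone", "Street"]  # toy dictionary

def semantic_vector(shot_tags, concepts=CONCEPTS):
    """Binary vector whose 1's mark the concepts annotated in the shot."""
    tags = set(shot_tags)
    return [int(c in tags) for c in concepts]
```

The evaluation metric then compares summaries through these vectors, e.g., via the IOU between the concept sets of matched shots.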
\begin{figure*} \centering \includegraphics[width=\linewidth]{./stats} \vspace{-25pt} \caption{\small{The frequencies of concepts showing up in the video shots, counted for each video separately.}} \label{fig:stats} \vspace{-10pt} \end{figure*} \section{Generating Oracle Summaries} \label{Oracle} Supervised video summarization methods often learn from one summary per video or, in the case of query-focused summarization, one summary per query-video pair. On the other hand, for evaluation purposes, it is better to contrast a system summary against multiple references and report the average. Thus, we collected 3 user summaries per query-video pair to use for evaluation. To train the model, however, we obtain \textit{oracle} summaries that have the maximum overall agreement with all three reference summaries (per query-video pair). The algorithm~\cite{kulesza2012determinantal} starts with the set of common shots ($y^0$) in all three reference summaries. Next, at each iteration, it greedily includes the shot that yields the largest marginal gain $G(i)$, \begin{align} G(i) = \sum_u \text{F1-score}(y^0\cup i,y_u) - \sum_u \text{F1-score}(y^0,y_u), \end{align} where $u$ iterates over the user summaries (in our case, $u \in \{1,2,3\}$) and the F1-score is obtained from our proposed evaluation metric. Table 1 in the main text shows the correlation between the obtained oracle summaries and the user summaries, indicating that the oracle summaries have very high agreement with all the user summaries. \section{Constructing Queries} \label{Queries} In this section, we describe in detail the process of generating the queries from the dense concept annotations. While users often input free text to query videos through search engines, we simulate these real scenarios by constructing the queries from the dense concept annotations we have collected (cf.\ Section 3.2 in the main text), which eases benchmarking different algorithms. 
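The greedy procedure above can be sketched as follows; note that the plain set-overlap F1 used here is a stand-in (our assumption for this sketch) for the F1 computed by the concept-based evaluation metric:

```python
def f1_score(system, reference):
    """Set-overlap F1 between two summaries (sets of shot indices).
    A stand-in: in the paper, F1 comes from the concept-based metric."""
    if not system or not reference:
        return 0.0
    tp = len(system & reference)
    p, r = tp / len(system), tp / len(reference)
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def oracle_summary(user_summaries, all_shots):
    """Start from the shots common to all user summaries, then greedily
    add the shot with the largest marginal gain G(i); stop when no shot
    improves the total F1 against the references."""
    y = set.intersection(*[set(u) for u in user_summaries])
    remaining = set(all_shots) - y
    while remaining:
        base = sum(f1_score(y, u) for u in user_summaries)
        gains = {i: sum(f1_score(y | {i}, u) for u in user_summaries) - base
                 for i in remaining}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        y.add(best)
        remaining.remove(best)
    return y
```

For instance, with references $\{1,2,3\}$, $\{2,3,4\}$, and $\{2,3,5\}$, the oracle is their common core $\{2,3\}$, since adding any further shot lowers the total F1.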
By processing the dense user annotation data, we extract various statistics that enable us to construct queries covering a wide range of varieties. Initially, a concept is assumed present in the video if it appears in at least $T$ shots. This filters the noise present in the annotations acquired from AMT workers and ensures the concepts really appear together (to steer clear of pairs that are tagged together as a result of noise or bias). As described in the main text, when a user enters a query $q$ (for instance, on a video search engine), which usually consists of more than one word, there are four distinct scenarios: i) all the concepts in the query appear in the same video shots together, ii) all concepts appear in the video, but never jointly in a single shot, iii) only one of the concepts constituting the query appears in some shots of the video, and iv) none of the concepts in the query is present in the video (1 such query). A robust video summarizer must be able to maintain good performance under any of these scenarios. Therefore, by including enough samples of all the scenarios, we build a comprehensive and diverse dataset. For scenario i, we create a list of concept pairs that appear together in the same shots, sorted in descending order of co-occurrence count. There are two approaches to selecting concept pairs from this list: 1) to employ a random selection process where the probability of selecting a pair is proportional to the number of times the pair appears together in the video (this gives a higher chance to concepts that tend to occur together in the video while not completely excluding concepts that are not dominant in the video), and 2) to pick a few top concept pairs. We opt for the random selection process to better generalize the dataset and remove bias. 
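The weighted random selection for scenario i can be sketched as follows; this is a minimal illustration, and the function name and the sampling-without-replacement details are our assumptions:

```python
import random

def sample_cooccurring_pairs(pair_counts, k, seed=0):
    """Sample up to k distinct concept pairs, with probability proportional
    to their co-occurrence counts (scenario i).
    `pair_counts` maps (concept1, concept2) -> #shots tagged with both."""
    rng = random.Random(seed)
    pairs = list(pair_counts)
    weights = [pair_counts[p] for p in pairs]
    chosen = []
    while pairs and len(chosen) < k:
        idx = rng.choices(range(len(pairs)), weights=weights, k=1)[0]
        chosen.append(pairs.pop(idx))
        weights.pop(idx)
    return chosen
```

Frequently co-occurring pairs are thus favored, but any pair with a nonzero count can still be drawn.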
For scenario ii, we are interested in concept pairs that are present in the video but not in the same shots, e.g., pairs such as \textsc{Car} and \textsc{Room} that are unlikely to appear in the same shots of the video. To this end, for each pair we compute a score equal to half the harmonic mean of the two frequencies: \begin{equation} score(f_{c_1},f_{c_2}) = \frac{f_{c_1} \times f_{c_2}}{f_{c_1} + f_{c_2}} \end{equation} where $f_{c_1}$ and $f_{c_2}$ are the frequencies of concepts $c_1$ and $c_2$, respectively. This formulation has two properties that make it useful here: 1) the score is always smaller than the smaller of the two frequencies, and 2) it is maximized when both frequencies are large and equal. By computing this score for all the pairs in the list and sorting in descending order, the concept pairs for which both constituent concepts have high frequencies are ranked higher. At this point, we employ the same random selection process to choose pairs from this list. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{./Caption_vs_tags2} \caption{\small{Comparing semantic information in our dense tags vs.\ the captions provided by~\cite{yeung2014videoset}. The figure illustrates that the caption targets limited information about the scene, while the dense annotations better explain the characteristics of the scene.}} \label{fig:captionvstags} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{./Summary_comparison2} \caption{\small{This figure compares two user summaries (generated by the same user) for two different queries. 
Both summaries contain shared segments that are presumably important in the context of the video, while they differ in the query-relevant segments.}} \label{fig:sumcom} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{ROUGEvsIOU_Chng} \caption{\small{Studying the effect of randomly \textit{replacing} some video shots in the user summary on the performance. The evaluation by ROUGE-SU4~\cite{lin2004rouge} is included for reference.}} \label{fig:ROUGEvsIOU} \end{figure*} For the third scenario, we concentrate on pairs in which only one of the concepts constituting the query is present in the video; e.g., if no \textsc{Car} is present in the entire video while there exist shots with \textsc{Computer} appearing in them, the pair \textsc{Car} and \textsc{Computer} is a candidate for this scenario. To make sure that the constructed dataset is comprehensive and benefits from the versatile dictionary, we first exclude the concepts that were used in the first two scenarios, put the rest in a list, and use their frequencies to randomly sample from them. For the last scenario, where neither concept of the pair is present in the video, we simply use the concepts that never appear in the video. For scenarios i, ii, and iii, we select 15 queries each. For scenario iv, we choose only one query; summarizing based on any such query, consisting of concepts that are not present in the entire video, must result in about the same summary. In other words, when a user asks the model to summarize the video based on a query consisting of non-present concepts, the summarizer must return only the \textit{contextually} important segments of the video, which is essentially what a conventional generic video summarization approach (as opposed to query-dependent approaches) generates. Figure~\ref{fig:sumcom} shows that queries play a major role in the summaries that users generate. 
For a particular video, the same user has selected summaries that have both common (green margin) and uncommon (orange margin) segments. \section{Related Work} \label{sec:related} We discuss some related works in this section. This work extends our previous efforts~\cite{sharghi2016query} on \emph{personalizing} video summarizers. Both works explore the query-focused video summarization, but we study this problem more thoroughly in this paper through a new dataset with dense per-video-shot tagging of concepts. Our memory network based video summarizer requires less supervision for training than the hierarchical model in~\cite{sharghi2016query}. Unlike our user-annotated semantic vectors for the video shots, Yeung et al.\ asked annotators to caption each video shot using a sentence~\cite{yeung2014videoset}. A single sentence targets only limited information in a video shot and misses many details. Figure~\ref{fig:captionvstags} contrasts the concept annotations in our dataset with the captions for a few video shots. The concept annotations clearly provide a more comprehensive coverage about the semantic information in the shots. Memory networks~\cite{bahdanau2014neural,sukhbaatar2015end,weston2015towards,weston2014memory,xiong2016dynamic} are versatile in modeling the attention scheme in neural networks. They are widely used to address question answering and visual question answering~\cite{antol2015vqa}. The query focusing in our summarization task is analogous to attending questions to the ``facts'' in the previous works, but the facts in our context are temporal video sequences. Moreover, we lay a sequential determinantal point process~\cite{gong2014diverse} on top of the memory network in order to promote diversity in the summaries. A determinantal point process (DPP)~\cite{kulesza2012determinantal} defines a distribution over the power sets of a ground set that encourages diversity among items of the subsets. 
There has been growing interest in DPPs in machine learning and computer vision~\cite{affandi2014learning,agarwal2014notes,batmanghelich2014diversifying,chao2015large,gartrell2016low,gillenwater2014expectation,DBLP:conf/icml/KuleszaT11,kulesza2011learning,kwok2012priors,li2016fast,mariet2015fixed,mariet2016kronecker,snoek2013determinantal}. Our model in this paper extends DPPs' modeling capabilities through the memory network. \section{Evaluation Metric Behavior} \label{IOUvsROUGE} As described in Section 5.2 of the main text, we studied the effect on our proposed metric of randomly \textbf{removing} some video shots from the user summary, observing that our metric exhibits a linear drop in recall. Because the captions capture only limited information about the scene (cf.\ Figure~\ref{fig:captionvstags}), repeating the same experiment and evaluating with ROUGE-SU4 on the captions provided by~\cite{yeung2014videoset} yields a non-linear drop in recall. As a side experiment, Figure~\ref{fig:ROUGEvsIOU} illustrates the effect of randomly \textbf{replacing} some video shots in the user summary, studying the effect of noise on the performance. Here we swap some shots with others that might be very similar or dissimilar to the original shots. For reference, we include the ROUGE-SU4 metric in these experiments as well.
\section{Introduction} \noindent This is the first of three papers that develop and use structures which are counted by a ``parabolic'' generalization of the Catalan numbers. Apart from some motivating remarks, it can be read by anyone interested in tableaux. It is self-contained, except for a few references to its tableau precursors \cite{Wi2} and \cite{PW1}. Fix $n \geq 1$ and set $[n-1] := \{1,2,...,n-1\}$. Choose a subset $R \subseteq [n-1]$ and set $r := |R|.$ The section on $R$-Catalan numbers can be understood as soon as a few definitions have been read. Our ``rightmost clump deleting'' chains of sets defined early in Section 4 became Exercise 2.202 in Stanley's list \cite{Sta} of interpretations of the Catalan numbers. Consider the ordered partitions of the set $[n]$ with $r+1$ blocks of fixed sizes that are determined by using $R$ to specify ``dividers''. These ordered partitions can be viewed as being the ``inverses'' of multipermutations whose $r+1$ multiplicities are determined by $R$. Setting $J := [n-1] \backslash R$, these multipermutations depict the minimum length coset representatives forming $W^J$ for the quotient of $S_n$ by the parabolic subgroup $W_J$. We refer to the standard forms of the ordered partitions as ``$R$-permutations''. When $R = [n-1]$, the $R$-permutations are just the permutations of $[n]$. The number of 312-avoiding permutations of $[n]$ is the $n^{th}$ Catalan number. In 2012 we generalized the notion of 312-pattern avoidance for permutations to that of ``$R$-312-avoidance'' for $R$-permutations. Here we define the ``parabolic $R$-Catalan number'' to be the number of $R$-312-avoiding $R$-permutations. Let $N \geq 1$ and fix a partition $\lambda$ of $N$. The shape of $\lambda$ has $N$ boxes; assume that it has at most $n$ rows. Let $\mathcal{T}_\lambda$ denote the set of semistandard Young tableaux of shape $\lambda$ with values from $[n]$. 
The content weight monomial $x^{\Theta(T)}$ of a tableau $T$ in $\mathcal{T}_\lambda$ is formed from the census $\Theta(T)$ of the values from $[n]$ that appear in $T$. The Schur function in $x_1, ..., x_n$ indexed by $\lambda$ can be expressed as the sum over $T$ in $\mathcal{T}_\lambda$ of the content weight monomials $x^{\Theta(T)}$. Let $R_\lambda \subseteq [n-1]$ be the set of column lengths in the shape $\lambda$ that are less than $n$. The type A Demazure characters (key polynomials) in $x_1, ..., x_n$ can be indexed by pairs $(\lambda,\pi)$, where $\lambda$ is a partition as above and $\pi$ is an $R_\lambda$-permutation. We refer to these as ``Demazure polynomials''. The Demazure polynomial indexed by $(\lambda,\pi)$ can be expressed as the sum of the monomials $x^{\Theta(T)}$ over a set $\mathcal{D}_\lambda(\pi)$ of ``Demazure tableaux'' of shape $\lambda$ for the $R$-permutation $\pi$. Regarding $\mathcal{T}_\lambda$ as a poset via componentwise comparison, it can be seen that the principal order ideals $[T]$ in $\mathcal{T}_\lambda$ form convex polytopes in $\mathbb{Z}^N$. The set $\mathcal{D}_\lambda(\pi)$ can be seen to be a certain subset of the ideal $[Y_\lambda(\pi)]$, where the tableau $Y_\lambda(\pi)$ is the ``key'' of $\pi$. It is natural to ask for which $R$-permutations $\pi$ one has $\mathcal{D}_\lambda(\pi) = [Y_\lambda(\pi)]$. Our first main result is: If $\pi$ is an $R_\lambda$-312-avoiding $R_\lambda$-permutation, then the tableau set $\mathcal{D}_\lambda(\pi)$ is all of the principal ideal $[Y_\lambda(\pi)]$ (and hence is convex in $\mathbb{Z}^N$). Our second main result is conversely: If $\mathcal{D}_\lambda(\pi)$ forms a convex polytope in $\mathbb{Z}^N$ (this includes the principal ideals $[Y_\lambda(\pi)]$), then the $R_\lambda$-permutation $\pi$ is $R_\lambda$-312-avoiding. So we can say exactly when one has $\mathcal{D}_\lambda(\pi) = [Y_\lambda(\pi)]$. 
Our earlier papers \cite{Wi2} and \cite{PW1} gave the first tractable descriptions of the Demazure tableau sets $\mathcal{D}_\lambda(\pi)$. Those results provide the means to prove the main results here. Demazure characters arose in 1974 when Demazure introduced certain $B$-modules while studying singularities of Schubert varieties in the $G/B$ flag manifolds. Flagged Schur functions arose in 1982 when Lascoux and Sch{\"u}tzenberger were studying Schubert polynomials for the flag manifold $GL(n)/B$. Like the Demazure polynomials, the flagged Schur functions in $x_1, ..., x_n$ can be expressed as sums of the weight monomial $x^{\Theta(T)}$ over certain subsets of $\mathcal{T}_\lambda$. Reiner and Shimozono \cite{RS} and then Postnikov and Stanley \cite{PS} described coincidences between the Demazure polynomials and the flagged Schur functions. Beginning in 2011, our original motivation for this project was to better understand their results. In the second paper \cite{PW3} in this series, we deepen their results: Rather than obtaining coincidences at the level of polynomials, we employ the main results of this paper to obtain the coincidences at the underlying level of the tableau sets that are used to describe the polynomials. Fact \ref{fact320.3}, Proposition \ref{prop320.2}, and Theorem \ref{theorem340} are also needed in \cite{PW3}. In Section 8 we indicate why our characterization of convexity for the sets $\mathcal{D}_\lambda(\pi)$ may be of interest in algebraic geometry and representation theory. Each of the two main themes of this series of papers is at least as interesting to us as is any one of the stated results in and of itself. One of these themes is that the structures used in the three papers are counted by numbers that can be regarded as being ``parabolic'' generalizations of the Catalan numbers. 
In these three papers, these structures are introduced respectively to study convexity, to describe the coincidences, and to solve a problem concerning the ``nonpermutable'' property of Gessel and Viennot. It turned out that by 2014, Godbole, Goyt, Herdan, and Pudwell had independently introduced \cite{GGHP} a general notion of pattern avoidance for ordered partitions that includes our notion of $R$-312-avoidance for $R$-permutations. Apparently their motivations for developing their definition were purely enumerative. Chen, Dai, and Zhou obtained \cite{CDZ} further enumerative results. As a result of the work of these two groups, two sequences were added to the OEIS. As is described in our last section, one of those is formed from the counts considered here for a sequence of particular cases. The other of those is formed by summing the counts considered here for all cases. In our series of papers, the parabolic Catalan count arises ``in nature'' in a number of interrelated ways. In this first paper this quantity counts ``gapless $R$-tuples'', ``$R$-rightmost clump deleting chains'', and convex Demazure tableau sets. The parabolic Catalan number further counts roughly another dozen structures in our two subsequent papers. After the first version \cite{PW2} of this paper was initially distributed, we learned of a different (but related) kind of parabolic generalization of the Catalan numbers due to M{\"u}hle and Williams. This is described at the end of this paper. The other main theme of this series of papers is the ubiquity of some of the structures that are counted by the parabolic Catalan numbers. The gapless $R$-tuples arise as the images of the $R$-312-avoiding $R$-permutations under the $R$-ranking map in this paper and as the minimum members of the equivalence classes for the indexing $n$-tuples of a generalization of the flagged Schur functions in our second paper. 
Moreover, the $R$-gapless condition provides half of the solution to the nonpermutability problem considered in our third paper \cite{PW4}. Since the gapless $R$-tuples and the structures equivalent to them are enumerated by a parabolic generalization of Catalan numbers, it would not be surprising if they were to arise in further contexts. The material in this paper first appeared as one-third of the overly long manuscript \cite{PW2}. The second paper \cite{PW3} in this series presents most of the remaining material from \cite{PW2}. Section 11 of \cite{PW3} describes the projecting and lifting processes that relate the notions of 312-avoidance and of $R$-312-avoidance. Definitions are presented in Sections 2 and 3. In Section 4 we reformulate the $R$-312-avoiding $R$-permutations as $R$-rightmost clump deleting chains and as gapless $R$-tuples. To prepare for the proofs of our two main results, in Section 5 we associate certain tableaux to these structures. Our main results are presented in Sections 6 and 7. Section 8 indicates why convexity for the sets of $\mathcal{D}_\lambda(\pi)$ may be of further interest, and Section 9 contains remarks on enumeration. \section{General definitions and definitions of $\mathbf{\emph{R}}$-tuples} In posets we use interval notation to denote principal ideals and convex sets. For example, in $\mathbb{Z}$ one has $(i, k] = \{i+1, i+2, ... , k\}$. Given an element $x$ of a poset $P$, we denote the principal ideal $\{ y \in P : y \leq x \}$ by $[x]$. When $P = \{1 < 2 < 3 < ... \}$, we write $[1,k]$ as $[k]$. If $Q$ is a set of integers with $q$ elements, for $d \in [q]$ let $rank^d(Q)$ be the $d^{th}$ largest element of $Q$. We write $\max(Q) := rank^1(Q)$ and $\min(Q) := rank^q(Q)$. A set $\mathcal{D} \subseteq \mathbb{Z}^N$ for some $N \geq 1$ is a \textit{convex polytope} if it is the solution set for a finite system of linear inequalities. Fix $n \geq 1$ throughout the paper. 
Except for $\zeta$, various lower case Greek letters indicate various kinds of $n$-tuples of non-negative integers. Their entries are denoted with the same letter. An $nn$-\textit{tuple} $\nu$ consists of $n$ \emph{entries} $\nu_i \in [n]$ that are indexed by \emph{indices} $i \in [1,n]$. An $nn$-tuple $\phi$ is a \textit{flag} if $\phi_1 \leq \ldots \leq \phi_n$. An \emph{upper tuple} is an $nn$-tuple $\upsilon$ such that $\upsilon_i \geq i$ for $i \in [n]$. The upper flags are the sequences of the $y$-coordinates for the above-diagonal Catalan lattice paths from $(0, 0)$ to $(n, n)$. A \emph{permutation} is an $nn$-tuple that has distinct entries. Let $S_n$ denote the set of permutations. A permutation $\pi$ is $312$-\textit{avoiding} if there do not exist indices $1 \leq a < b < c \leq n$ such that $\pi_a > \pi_b < \pi_c$ and $\pi_a > \pi_c$. (This is equivalent to its inverse being 231-avoiding.) Let $S_n^{312}$ denote the set of 312-avoiding permutations. By Exercises 116 and 24 of \cite{Sta}, these permutations and the upper flags are counted by the Catalan number $C_n := \frac{1}{n+1}\binom{2n}{n}$. Fix $R \subseteq [n-1]$ through the end of Section 7. Denote the elements of $R$ by $q_1 < \ldots < q_r$ for some $r \geq 0$. Set $q_0 := 0$ and $q_{r+1} := n$. We use the $q_h$ for $h \in [r+1]$ to specify the locations of $r+1$ ``dividers'' within $nn$-tuples: Let $\nu$ be an $nn$-tuple. On the graph of $\nu$ in the first quadrant draw vertical lines at $x = q_h + \epsilon$ for $h \in [r+1]$ and some small $\epsilon > 0$. In Figure 7.1 we have $n = 9$ and $R = \{ 2, 3, 5, 7 \}$. These $r+1$ lines indicate the right ends of the $r+1$ \emph{carrels} $(q_{h-1}, q_h]$ \emph{of $\nu$} for $h \in [r+1]$. An \emph{$R$-tuple} is an $nn$-tuple that has been equipped with these $r+1$ dividers. Fix an $R$-tuple $\nu$; we portray it by $(\nu_1, ... , \nu_{q_1} ; \nu_{q_1+1}, ... , \nu_{q_2}; ... ; \nu_{q_r+1}, ... , \nu_n)$. Let $U_R(n)$ denote the set of upper $R$-tuples. 
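The Catalan count of the 312-avoiding permutations can be confirmed by brute force for small $n$. The following Python sketch (ours, purely illustrative) tests the pattern condition literally:

```python
from itertools import permutations
from math import comb

def is_312_avoiding(pi):
    """No indices a < b < c with pi_a > pi_b < pi_c and pi_a > pi_c,
    i.e. no occurrence of pi_a > pi_c > pi_b."""
    n = len(pi)
    return not any(pi[a] > pi[c] > pi[b]
                   for a in range(n)
                   for b in range(a + 1, n)
                   for c in range(b + 1, n))

n = 5
count = sum(is_312_avoiding(p) for p in permutations(range(1, n + 1)))
assert count == comb(2 * n, n) // (n + 1)  # C_5 = 42
```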
Let $UF_R(n)$ denote the subset of $U_R(n)$ consisting of upper flags. Fix $h \in [r+1]$. The $h^{th}$ carrel has $p_h := q_h - q_{h-1}$ indices. The $h^{th}$ \emph{cohort} of $\nu$ is the multiset of entries of $\nu$ on the $h^{th}$ carrel. An \emph{$R$-increasing tuple} is an $R$-tuple $\alpha$ such that $\alpha_{q_{h-1}+1} < ... < \alpha_{q_h}$ for $h \in [r+1]$. Let $UI_R(n)$ denote the subset of $U_R(n)$ consisting of $R$-increasing upper tuples. Consult Table 2.1 for an example and a nonexample. Boldface entries indicate failures. It can be seen that $|UI_R(n)| = n! / \prod_{h=1}^{r+1} p_h! =: \binom{n}{R}$. An $R$-\textit{permutation} is a permutation that is $R$-increasing when viewed as an $R$-tuple. Let $S_n^R$ denote the set of $R$-permutations. Note that $| S_n^R| = \binom{n}{R}$. We refer to the cases $R = \emptyset$ and $R = [n-1]$ as the \emph{trivial} and \emph{full cases} respectively. Here $| S_n^\emptyset | = 1$ and $| S_n^{[n-1]} | = n!$ respectively. An $R$-permutation $\pi$ is $R$-$312$-\textit{containing} if there exists $h \in [r-1]$ and indices $1 \leq a \leq q_h < b \leq q_{h+1} < c \leq n$ such that $\pi_a > \pi_b < \pi_c$ and $\pi_a > \pi_c$. An $R$-permutation is $R$-$312$-\textit{avoiding} if it is not $R$-$312$-containing. (This is equivalent to the corresponding multipermutation being 231-avoiding.) Let $S_n^{R\text{-}312}$ denote the set of $R$-312-avoiding permutations. We define the \emph{$R$-parabolic Catalan number} $C_n^R$ by $C_n^R := |S_n^{R\text{-}312}|$. \begin{figure}[h!] 
\begin{center} \begin{tabular}{lccc} \underline{Type of $R$-tuple} & \underline{Set} & \underline{Example} & \underline{Nonexample} \\ \\ $R$-increasing upper tuple & $\alpha \in UI_R(n)$ & $(2,6,7;4,5,7,8,9;9)$ & $(3,5,\textbf{5};6,\textbf{4},7,8,9;9)$ \\ \\ $R$-312-avoiding permutation & $\pi \in S_n^{R\text{-}312}$ & $(2,3,6;1,4,5,8,9;7)$ & $(2,4,\textbf{6};1,\textbf{3},7,8,9;\textbf{5})$ \\ \\ Gapless $R$-tuple & $\gamma \in UG_R(n)$ & $(2,4,6;4,5,6,7,9;9)$ & $(2,4,6;\textbf{4},\textbf{6},7,8,9;9)$ \\ \\ \end{tabular} \caption*{Table 2.1. (Non-)Examples of R-tuples for $n = 9$ and $R = \{3,8\}$.} \end{center} \end{figure} Next we consider $R$-increasing tuples with the following property: Whenever there is a descent across a divider between carrels, then no ``gaps'' can occur until the increasing entries in the new carrel ``catch up''. So we define a \emph{gapless $R$-tuple} to be an $R$-increasing upper tuple $\gamma$ such that whenever there exists $h \in [r]$ with $\gamma_{q_h} > \gamma_{q_h+1}$, then $s := \gamma_{q_h} - \gamma_{q_h+1} + 1 \leq p_{h+1}$ and the first $s$ entries of the $(h+1)^{st}$ carrel $(q_h, q_{h+1} ]$ are $\gamma_{q_h}-s+1, \gamma_{q_h}-s+2, ... , \gamma_{q_h}$. The failure in Table 2.1 occurs because the absence of the element $5 \in [9]$ from the second carrel creates a gap. Let $UG_R(n) \subseteq UI_R(n)$ denote the set of gapless $R$-tuples. Note that a gapless $\gamma$ has $\gamma_{q_1} \leq \gamma_{q_2} \leq ... \leq \gamma_{q_r} \leq \gamma_{q_{r+1}}$. So in the full $R = [n-1]$ case, each gapless $R$-tuple is a flag. Hence $UG_{[n-1]}(n) = UF_{[n-1]}(n)$. An $R$-\textit{chain} $B$ is a sequence of sets $\emptyset =: B_0 \subset B_1 \subset \ldots \subset B_r \subset B_{r+1} := [n]$ such that $|B_h| = q_h$ for $h \in [r]$. A bijection from $R$-permutations $\pi$ to $R$-chains $B$ is given by $B_h := \{\pi_1, \pi_2, \ldots, \pi_{q_h}\}$ for $h \in [r]$. We indicate it by $\pi \mapsto B$. 
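The gapless condition above is mechanical to test. The following Python sketch is ours (lists are 0-based while the text's indices are 1-based); it validates the Table 2.1 example and nonexample:

```python
def is_gapless(gamma, R, n):
    """Test the gapless R-tuple condition (0-based lists, 1-based values)."""
    q = [0] + sorted(R) + [n]
    # upper: gamma_i >= i
    if any(gamma[i] < i + 1 for i in range(n)):
        return False
    # R-increasing within each carrel (q_{h-1}, q_h]
    for h in range(1, len(q)):
        block = gamma[q[h - 1]:q[h]]
        if any(block[i] >= block[i + 1] for i in range(len(block) - 1)):
            return False
    # a descent across a divider must be followed by a gap-free catch-up run
    for h in range(1, len(q) - 1):
        top, nxt = gamma[q[h] - 1], gamma[q[h]]
        if top > nxt:
            s = top - nxt + 1
            if s > q[h + 1] - q[h]:
                return False
            if gamma[q[h]:q[h] + s] != list(range(nxt, top + 1)):
                return False
    return True

# Table 2.1: example and nonexample for n = 9, R = {3, 8}
assert is_gapless([2,4,6, 4,5,6,7,9, 9], {3, 8}, 9)
assert not is_gapless([2,4,6, 4,6,7,8,9, 9], {3, 8}, 9)
```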
The $R$-chains for the two $R$-permutations appearing in Table 2.1 are $\emptyset \subset \{2, 3, 6\} \subset \{ 1,2,3,4,5,6,8,9 \} \subset [9]$ and $\emptyset \subset \{2, 4, 6\} \subset \{1,2,3,4,6,7,8,9\} \subset [9]$. Fix an $R$-permutation $\pi$ and let $B$ be the corresponding $R$-chain. For $h \in [r+1]$, the set $B_h$ is the union of the first $h$ cohorts of $\pi$. Note that $R$-chains $B$ (and hence $R$-permutations $\pi$) are equivalent to the $\binom{n}{R}$ objects that could be called ``ordered $R$-partitions of $[n]$''; these arise as the sequences $(B_1 \backslash B_0, B_2\backslash B_1, \ldots, B_{r+1}\backslash B_r)$ of $r+1$ disjoint nonempty subsets of sizes $p_1, p_2, \ldots, p_{r+1}$. Now create an $R$-tuple $\Psi_R(\pi) =: \psi$ as follows: For $h \in [r+1]$ specify the entries in its $h^{th}$ carrel by $\psi_i := \text{rank}^{q_h-i+1}(B_h)$ for $i \in (q_{h-1},q_h]$. For a model, imagine there are $n$ discus throwers grouped into $r+1$ heats of $p_h$ throwers for $h \in [r+1]$. Each thrower gets one throw, the throw distances are elements of $[n]$, and there are no ties. After the $h^{th}$ heat has been completed, the $p_h$ longest throws overall so far are announced in ascending order. See Table 2.2. We call $\psi$ the \emph{rank $R$-tuple of $\pi$}. As well as being $R$-increasing, it can be seen that $\psi$ is upper: So $\psi \in UI_R(n)$. \vspace{.125in} \begin{figure}[h!] \begin{center} \begin{tabular}{lccc} \underline{Name} & \underline{From/To} & \underline{Input} & \underline{Image} \\ \\ Rank $R$-tuple & $\Psi_R: S_n^R \rightarrow UI_R(n)$ & $(2,4,6;1,5,7,8,9;3)$ & $(2,4,6;5,6,7,8,9;9)$ \\ \\ Undoes $\Psi_R|_{S_n^{R\text{-}312}}$ & $\Pi_R: UG_R(n) \rightarrow S_n^{R\text{-}312}$ & $(2,4,6;4,5,6,7,9;9)$ & $(2,4,6;1,3,5,7,9;8)$ \\ \\ \end{tabular} \caption*{Table 2.2. 
Examples for maps of $R$-tuples for $n = 9$ and $R = \{3, 8 \}$.} \end{center} \end{figure} The map $\Psi_R$ is not injective; for example it maps another $R$-permutation $(2,4,6;3,5,7,8,9;1)$ to the same image as in Table 2.2. In Proposition \ref{prop320.2}(ii) it will be seen that the restriction of $\Psi_R$ to $S_n^{R\text{-}312}$ is a bijection to $UG_R(n)$ whose inverse is the following map $\Pi_R$: Let $\gamma \in UG_R(n)$. See Table 2.2. Define an $R$-tuple $\Pi_R(\gamma) =: \pi$ by: Initialize $\pi_i := \gamma_i$ for $i \in (0,q_1]$. Let $h \in [r]$. If $\gamma_{q_h} \geq \gamma_{q_h+1}$, set $s := \gamma_{q_h} - \gamma_{q_h+1} + 1$. Otherwise set $s := 0$. (The weak inequality is needed: when $\gamma_{q_h} = \gamma_{q_h+1}$ one must take $s = 1$, since the value $\gamma_{q_h} = \max\{ \pi_1, ... , \pi_{q_h} \}$ has already been used.) For $i$ in the right side $(q_h + s, q_{h+1}]$ of the $(h+1)^{st}$ carrel, set $\pi_i := \gamma_i$. For $i$ in the left side $(q_h, q_h + s]$, set $d := q_h + s - i + 1$ and $\pi_i := rank^d( \hspace{1mm} [\gamma_{q_h}] \hspace{1mm} \backslash \hspace{1mm} \{ \pi_1, ... , \pi_{q_h} \} \hspace{1mm} )$. In words: working from right to left, fill in the left side by finding the largest element of $[\gamma_{q_h}]$ not used by $\pi$ so far, then the next largest, and so on. In Table 2.2 when $h=1$ the elements $5,3,1$ are found and placed into the $6^{th}, 5^{th}$, and $4^{th}$ positions. (Since $\gamma$ is a gapless $R$-tuple, when $s \geq 1$ we have $\gamma_{q_h + s} = \gamma_{q_h}$. Since `gapless' includes the upper property, here we have $\gamma_{q_h +s} \geq q_h + s$. Hence $| \hspace{1mm} [\gamma_{q_h}] \hspace{1mm} \backslash \hspace{1mm} \{ \pi_1, ... , \pi_{q_h} \} \hspace{1mm} | \geq s$, and so there are enough elements available to define these left side $\pi_i$. ) Since $\gamma_{q_h} \leq \gamma_{q_{h+1}}$, it can inductively be seen that $\max\{ \pi_1, ... , \pi_{q_h} \} = \gamma_{q_h}$. When we restrict our attention to the full $R = [n-1]$ case, we will suppress most prefixes and subscripts of `$R$'. Two examples of this are: an $[n-1]$-chain becomes a \emph{chain}, and one has $UF(n) = UG(n)$. 
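Both maps are straightforward to prototype. In the Python sketch below (ours; the function names are not from the paper), \texttt{rank\_map} follows the discus heats description of $\Psi_R$ and \texttt{unrank\_map} follows the right-to-left filling rule of $\Pi_R$. It reproduces the rows of Table 2.2:

```python
def rank_map(pi, R, n):
    """Psi_R: after each heat, announce the p_h longest throws so far, ascending."""
    q = [0] + sorted(R) + [n]
    psi, seen = [], set()
    for h in range(1, len(q)):
        seen |= set(pi[q[h - 1]:q[h]])          # B_h: union of the first h cohorts
        psi += sorted(seen)[-(q[h] - q[h - 1]):]  # the p_h largest, ascending
    return psi

def unrank_map(gamma, R, n):
    """Pi_R: recover the R-312-avoiding permutation from a gapless R-tuple."""
    q = [0] + sorted(R) + [n]
    pi = list(gamma[:q[1]])
    for h in range(1, len(q) - 1):
        top, nxt = gamma[q[h] - 1], gamma[q[h]]
        s = top - nxt + 1 if top >= nxt else 0  # s = 1 even when the values tie
        # left side: the s largest unused elements of [gamma_{q_h}], ascending
        left = sorted(set(range(1, top + 1)) - set(pi))[-s:] if s else []
        pi += left + list(gamma[q[h] + s:q[h + 1]])
    return pi

# Table 2.2 for n = 9, R = {3, 8}
assert rank_map([2,4,6, 1,5,7,8,9, 3], {3, 8}, 9) == [2,4,6, 5,6,7,8,9, 9]
assert unrank_map([2,4,6, 4,5,6,7,9, 9], {3, 8}, 9) == [2,4,6, 1,3,5,7,9, 8]
```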
\section{Shapes, tableaux, connections to Lie theory} A \emph{partition} is an $n$-tuple $\lambda \in \mathbb{Z}^n$ such that $\lambda_1 \geq \ldots \geq \lambda_n \geq 0$. Fix such a $\lambda$ for the rest of the paper. We say it is \textit{strict} if $\lambda_1 > \ldots > \lambda_n$. The \textit{shape} of $\lambda$, also denoted $\lambda$, consists of $n$ left justified rows with $\lambda_1, \ldots, \lambda_n$ boxes. We denote its column lengths by $\zeta_1 \geq \ldots \geq \zeta_{\lambda_1}$. The column length $n$ is called the \emph{trivial} column length. Since the columns are more important than the rows, the boxes of $\lambda$ are transpose-indexed by pairs $(j,i)$ such that $1 \leq j \leq \lambda_1$ and $1 \leq i \leq \zeta_j$. Sometimes for boundary purposes we refer to a $0^{th}$ \emph{latent column} of boxes, which is a prepended $0^{th}$ column of trivial length. If $\lambda = 0$, its shape is the \textit{empty shape} $\emptyset$. Define $R_\lambda \subseteq [n-1]$ to be the set of distinct non-trivial column lengths of $\lambda$. Note that $\lambda$ is strict if and only if $R_\lambda = [n-1]$, i.e. $R_\lambda$ is full. Set $|\lambda| := \lambda_1 + \ldots + \lambda_n$. A \textit{(semistandard) tableau of shape $\lambda$} is a filling of $\lambda$ with values from $[n]$ that strictly increase from north to south and weakly increase from west to east. The example tableau below for $n = 12$ has shape $\lambda = (7^5, 5^4, 2^2)$. Here $R_\lambda = \{5, 9, 11 \}$. Let $\mathcal{T}_\lambda$ denote the set of tableaux $T$ of shape $\lambda$. Under entrywise comparison $\leq$, this set $\mathcal{T}_\lambda$ becomes a poset that is the distributive lattice $L(\lambda, n)$ introduced by Stanley. The principal ideals $[T]$ in $\mathcal{T}_\lambda$ are clearly convex polytopes in $\mathbb{Z}^{|\lambda|}$. Fix $T \in \mathcal{T}_\lambda$. For $j \in [\lambda_1]$, we denote the one column ``subtableau'' on the boxes in the $j^{th}$ column by $T_j$. 
Here for $i \in [\zeta_j]$ the tableau value in the $i^{th}$ row is denoted $T_j(i)$. The set of values in $T_j$ is denoted $B(T_j)$. Columns $T_j$ of trivial length must be \emph{inert}, that is $B(T_j) = [n]$. The $0^{th}$ \textit{latent column} $T_0$ is an inert column that is sometimes implicitly prepended to the tableau $T$ at hand: We ask readers to refer to its values as needed to fulfill definitions or to finish constructions. We say a tableau $Y$ of shape $\lambda$ is a $\lambda$-\textit{key} if $B(Y_l) \supseteq B(Y_j)$ for $1 \leq l \leq j \leq \lambda_1$. The example tableau below is a $\lambda$-key. The empty shape has one tableau on it, the \textit{null tableau}. Fix a set $Q \subseteq [n]$ with $|Q| =: q \geq 0$. The \textit{column} $Y(Q)$ is the tableau on the shape for the partition $(1^q, 0^{n-q})$ whose values form the set $Q$. Then for $d \in [q]$, the value in the $(q+1-d)^{th}$ row of $Y(Q)$ is $rank^d(Q)$. \begin{figure}[h!] \begin{center} \ytableausetup{boxsize = 1.5em} $$ \begin{ytableau} 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 2 & 2 & 3 & 3 & 3 & 4 & 4\\ 3 & 3 & 4 & 4 & 4 & 6 & 6\\ 4 & 4 & 5 & 5 & 5 & 7 & 7\\ 5 & 5 & 6 & 6 & 6 & 10 & 10\\ 6 & 6 & 7 & 7 & 7\\ 7 & 7 & 8 & 8 & 8\\ 8 & 8 & 9 & 9 & 9\\ 9 & 9 & 10 & 10 & 10\\ 10 & 10 \\ 12 & 12 \end{ytableau}$$ \end{center} \end{figure} The most important values in a tableau of shape $\lambda$ occur at the ends of its rows. Using the latent column when needed, these $n$ values from $[n]$ are gathered into an $R_\lambda$-tuple as follows: Let $T \in \mathcal{T}_\lambda$. We define the \textit{$\lambda$-row end list} $\omega$ of $T$ to be the $R_\lambda$-tuple given by $\omega_i := T_{\lambda_i}(i)$ for $i \in [n]$. Note that for $h \in [r+1]$, down the $h^{th}$ ``cliff'' from the right in the shape of $\lambda$ one has $\lambda_i = \lambda_{i^\prime}$ for $i, i^\prime \in (q_{h-1}, q_{h} ]$. In the example take $h = 2$. Then $q_2 = 9$ and $q_1 = 5$. Here $\lambda_i = 5 = \lambda_{i'}$ for $i, i' \in (5,9]$. 
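These shape statistics can be computed directly. The Python sketch below is ours (lists 0-based, values 1-based); it recovers $R_\lambda = \{5, 9, 11\}$ for the example shape and reads off the $\lambda$-row end list from the example tableau, whose columns are hard-coded west to east:

```python
def col_lengths(lam):
    """Column lengths zeta_j of the shape lam (lam weakly decreasing)."""
    return [sum(p >= j for p in lam) for j in range(1, lam[0] + 1)]

def R_of(lam, n):
    """R_lambda: the distinct non-trivial column lengths of lam."""
    return {z for z in col_lengths(lam) if z < n}

def row_end_list(cols, lam, n):
    """omega_i = T_{lam_i}(i), using the latent inert column when lam_i = 0."""
    return [cols[lam[i] - 1][i] if lam[i] > 0 else i + 1 for i in range(n)]

n, lam = 12, [7]*5 + [5]*4 + [2]*2 + [0]
assert R_of(lam, n) == {5, 9, 11}
# columns of the example tableau, west to east
cols = ([[1,2,3,4,5,6,7,8,9,10,12]] * 2
        + [[1,3,4,5,6,7,8,9,10]] * 3
        + [[1,4,6,7,10]] * 2)
assert row_end_list(cols, lam, n) == [1,4,6,7,10, 7,8,9,10, 10,12, 12]
```

Note that the second cohort of this row end list is $\{7, 8, 9, 10\}$, the values down the second cliff.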
Reading off the values of $T$ down that cliff produces the $h^{th}$ cohort of $\omega$. Here this cohort of $\omega$ is $\{7, 8, 9, 10\}$. These values are increasing. So $\omega \in UI_{R_\lambda}(n)$. For $h \in [r]$, the columns of length $q_h$ in the shape $\lambda$ have indices $j$ such that $j \in (\lambda_{q_{h+1}}, \lambda_{q_h}]$. When $h=2$ we have $j \in (\lambda_{11}, \lambda_9] = (2,5]$ for columns of length $q_2 = 9$. A bijection from $R$-chains $B$ to $\lambda$-keys $Y$ is obtained by juxtaposing from left to right $\lambda_n$ inert columns and $\lambda_{q_h}-\lambda_{q_{h+1}}$ copies of $Y(B_h)$ for $r \geq h \geq 1$. We indicate it by $B \mapsto Y$. For $h = 2$ here there are $\lambda_9 - \lambda_{11} = 5-2 = 3$ copies of $Y(B_2)$ with $B_2 = (0, 10] \backslash \{2\}$. Unfortunately we need to have the indices $h$ of the column lengths $q_h$ decreasing from west to east while the column indices $j$ increase from west to east. Hence the elements of $B_{h+1}$ form the column $Y_j$ for $j = \lambda_{q_{h+1}}$ while the elements of $B_h$ form $Y_{j+1}$. A bijection from $R_\lambda$-permutations $\pi$ to $\lambda$-keys $Y$ is obtained by following $\pi \mapsto B$ with $B \mapsto Y$. The image of an $R_\lambda$-permutation $\pi$ is called the \emph{$\lambda$-key of $\pi$}; it is denoted $Y_\lambda(\pi)$. The example tableau is the $\lambda$-key of $\pi = (1,4,6,7,10; 3,5,8,9; 2, 12; 11)$. It is easy to see that the $\lambda$-row end list of the $\lambda$-key of $\pi$ is the rank $R_\lambda$-tuple $\Psi_{R_\lambda}(\pi) =: \psi$ of $\pi$: Here $\psi_i = Y_{\lambda_i}(i)$ for $i \in [n]$. Let $\alpha \in UI_{R_\lambda}(n)$. Define $\mathcal{Z}_\lambda(\alpha)$ to be the subset of tableaux $T \in \mathcal{T}_\lambda$ with $\lambda$-row end list $\alpha$. To see that $\mathcal{Z}_\lambda(\alpha) \neq \emptyset$, for $i \in [n]$ take $T_j(i) := i$ for $j \in [1, \lambda_i)$ and $T_{\lambda_i}(i) := \alpha_i$. 
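The nonemptiness witness just described can be checked by machine. The Python sketch below (ours) builds that tableau column by column and verifies that it is semistandard with $\lambda$-row end list $\alpha$, for a sample $\alpha \in UI_\lambda(12)$ on the example shape:

```python
def witness(alpha, lam, n):
    """The tableau with T_j(i) = i for j < lam_i and T_{lam_i}(i) = alpha_i,
    returned as columns west to east; it lies in Z_lambda(alpha)."""
    cols = []
    for j in range(1, lam[0] + 1):
        zeta_j = sum(p >= j for p in lam)  # column length
        cols.append([alpha[i] if lam[i] == j else i + 1 for i in range(zeta_j)])
    return cols

def is_semistandard(cols):
    down = all(c[i] < c[i + 1] for c in cols for i in range(len(c) - 1))
    right = all(a[i] <= b[i] for a, b in zip(cols, cols[1:]) for i in range(len(b)))
    return down and right

n, lam = 12, [7]*5 + [5]*4 + [2]*2 + [0]
alpha = [1,4,6,7,10, 7,8,9,10, 10,12, 12]  # an R_lambda-increasing upper tuple
T = witness(alpha, lam, n)
assert is_semistandard(T)
assert all(T[lam[i] - 1][i] == alpha[i] for i in range(n) if lam[i] > 0)
```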
This subset is closed under the join operation for the lattice $\mathcal{T}_\lambda$. We define the \emph{$\lambda$-row end max tableau $M_\lambda(\alpha)$ for $\alpha$} to be the unique maximal element of $\mathcal{Z}_\lambda(\alpha)$. The example tableau is an $M_\lambda(\alpha)$. When we are considering tableaux of shape $\lambda$, much of the data used will be in the form of $R_\lambda$-tuples. Many of the notions used will be definitions from Section 2 that are being applied with $R := R_\lambda$. The structure of each proof will depend only upon $R_\lambda$ and not upon how many times a column length is repeated: If $\lambda^\prime, \lambda^{\prime\prime} \in \Lambda_n^+$ are such that $R_{\lambda^\prime} = R_{\lambda^{\prime\prime}}$, then the development for $\lambda^{\prime\prime}$ will in essence be the same as for $\lambda^\prime$. To emphasize the original independent entity $\lambda$ and to reduce clutter, from now on rather than writing `$R$' or `$R_\lambda$' we will replace `$R$' by `$\lambda$' in subscripts and in prefixes. Above we would have written $\omega \in UI_\lambda(n)$ instead of having written $\omega \in UI_{R_\lambda}(n)$ (and instead of having written $\omega \in UI_R(n)$ after setting $R := R_\lambda$). When $\lambda$ is a strict partition, we omit most `$\lambda$-' prefixes and subscripts since $R_\lambda = [n-1]$. To connect to Lie theory, fix $R \subseteq [n-1]$ and set $J := [n-1] \backslash R$. The $R$-permutations are the one-rowed forms of the ``inverses'' of the minimum length representatives collected in $W^J$ for the cosets in $W /W_J$, where $W$ is the Weyl group of type $A_{n-1}$ and $W_J$ is its parabolic subgroup $\langle s_i: i \in J \rangle$. A partition $\lambda$ is strict exactly when the weight it depicts for $GL(n)$ is strongly dominant. If we take the set $R$ to be $R_\lambda$, then the restriction of the partial order $\leq$ on $\mathcal{T}_\lambda$ to the $\lambda$-keys depicts the Bruhat order on that $W^J$. 
Further details appear in Sections 2, 3, and the appendix of \cite{PW1}. \section{Rightmost clump deleting chains, gapless $\mathbf{\emph{R}}$-tuples} We show that if the domain of the simple-minded global bijection $\pi \mapsto B$ is restricted to $S_n^{R\text{-}312} \subseteq S_n^R$, then a bijection to a certain set of chains results. And while it appears to be difficult to characterize the image $\Psi_R(S_n^R) \subseteq UI_R(n)$ of the $R$-rank map for general $R$, we show that restricting $\Psi_R$ to $S_n^{R\text{-}312}$ produces a bijection to the set $UG_R(n)$ of gapless $R$-tuples. Given a set of integers, a \emph{clump} of it is a maximal subset of consecutive integers. After decomposing a set into its clumps, we index the clumps in the increasing order of their elements. For example, the set $\{ 2,3,5,6,7,10,13,14 \}$ is the union $L_1 \hspace{.5mm} \cup \hspace{.5mm} L_2 \hspace{.5mm} \cup \hspace{.5mm} L_3 \hspace{.5mm} \cup \hspace{.5mm} L_4$, where $L_1 := \{ 2,3 \}, L_2 := \{ 5,6,7 \},$ $L_3 := \{ 10 \}, L_4 := \{ 13,14 \}$. For the first part of this section we temporarily work in the context of the full $R = [n-1]$ case. A chain $B$ is \textit{rightmost clump deleting} if for $h \in [n-1]$ the element deleted from each $B_{h+1}$ to produce $B_h$ is chosen from the rightmost clump of $B_{h+1}$. More formally: It is rightmost clump deleting if for $h \in [n-1]$ one has $B_{h} = B_{h+1} \backslash \{ b \}$ only when $[b, m] \subseteq B_{h+1}$, where $m := max (B_{h+1})$. For $n = 3$ there are five rightmost clump deleting chains, whose sets $B_3 \supset B_2 \supset B_1$ are displayed from the top in three rows: \begin{figure}[h!] 
\begin{center} \setlength\tabcolsep{.1cm} \begin{tabular}{ccccc} 1& &2& &\cancel{3}\\ &1& &\cancel{2}& \\ & &\cancel{1}& & \\ \end{tabular}\hspace{10mm} \begin{tabular}{ccccc} 1& &2& &\cancel{3}\\ &\cancel{1}& &2& \\ & &\cancel{2}& & \\ \end{tabular}\hspace{10mm} \begin{tabular}{ccccc} 1& &\cancel{2}& &3\\ &1& &\cancel{3}& \\ & &\cancel{1}& & \\ \end{tabular}\hspace{10mm} \begin{tabular}{ccccc} \cancel{1}& &2& &3\\ &2& &\cancel{3}& \\ & &\cancel{2}& & \\ \end{tabular}\hspace{10mm} \begin{tabular}{ccccc} \cancel{1}& &2& &3\\ &\cancel{2}& &3& \\ & &\cancel{3}& & \\ \end{tabular} \end{center} \end{figure} \noindent To form the corresponding $\pi$, record the deleted elements from bottom to top. Note that the 312-containing permutation $(3;1;2)$ does not occur. Its triangular display of $B_3 \supset B_2 \supset B_1$ deletes the `1' from the ``left'' clump in the second row. After Part (0) restates the definition of this concept, we present four reformulations of it: \begin{fact}\label{fact320.1}Let $B$ be a chain. Set $\{ b_{h+1} \} := B_{h+1} \backslash B_h$ for $h \in [n-1]$. Set $m_h := \max (B_h)$ for $h \in [n]$. The following conditions are equivalent to this chain being rightmost clump deleting: \noindent(0) For $h \in [n-1]$, one has $[b_{h+1}, m_{h+1}] \subseteq B_{h+1}$. \noindent(i) For $h \in [n-1]$, one has $[b_{h+1}, m_h] \subseteq B_{h+1}$. \noindent(ii) For $h \in [n-1]$, one has $(b_{h+1}, m_h) \subset B_h$. \noindent(iii) For $h \in [n-1]$: If $b_{h+1} < m_h$, then $b_{h+1} = \max([m_h] \backslash B_h)$. \noindent(iii$^\prime$) For $h \in [n-1]$, one has $b_{h+1} = \max([m_{h+1}] \backslash B_h)$. \end{fact} The following characterization is related to Part (ii) of the preceding fact via the correspondence $\pi \longleftrightarrow B$: \begin{fact}\label{fact320.2}A permutation $\pi$ is 312-avoiding if and only if for every $h \in [n-1]$ we have \\ $(\pi_{h+1}, \max\{\pi_1, ... , \pi_{h}\}) \subset \{ \pi_1, ... , \pi_{h} \}$. 
\end{fact} Since the following result will be generalized by Proposition \ref{prop320.2}, we do not prove it here. Part (i) is Exercise 2.202 of \cite{Sta}. \begin{prop}\label{prop320.1}For the full $R = [n-1]$ case we have: \noindent (i) The restriction of the global bijection $\pi \mapsto B$ from $S_n$ to $S_n^{312}$ is a bijection to the set of rightmost clump deleting chains. Hence there are $C_n$ rightmost clump deleting chains. \noindent (ii) The restriction of the rank tuple map $\Psi$ from $S_n$ to $S_n^{312}$ is a bijection to $UF(n)$ whose inverse is $\Pi$. \end{prop} \noindent Here when $R = [n-1]$, the map $\Pi : UF(n) \longrightarrow S_n^{312}$ has a simple description. It was introduced in \cite{PS} for Theorem 14.1. Given an upper flag $\phi$, recursively construct $\Pi(\phi) =: \pi$ as follows: Start with $\pi_1 := \phi_1$. For $i \in [n-1]$, choose $\pi_{i+1}$ to be the maximum element of $[\phi_{i+1}] \backslash \{ \pi_1, ... , \pi_{i} \}$. Now fix $R \subseteq [n-1]$. Let $B$ be an $R$-chain. More generally, we say $B$ is \textit{$R$-rightmost clump deleting} if this condition holds for each $h \in [r]$: Let $B_{h+1} =: L_1 \cup L_2 \cup ... \cup L_f$ decompose $B_{h+1}$ into clumps for some $f \geq 1$. We require $L_e \cup L_{e+1} \cup ... \cup L_f \supseteq B_{h+1} \backslash B_{h} \supseteq L_{e+1} \cup ... \cup L_f$ for some $e \in [f]$. This condition requires the set $B_{h+1} \backslash B_h$ of new elements that augment the set $B_h$ of old elements to consist of entirely new clumps $L_{e+1}, L_{e+2}, ... , L_f$, plus some further new elements that combine with some old elements to form the next ``lower'' clump $L_e$ in $B_{h+1}$. 
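Clump decompositions and the $R$-rightmost clump deleting condition can be tested directly from these definitions. The Python sketch below (ours) checks the suffix-of-clumps requirement literally, using the clump example from above and the $n = 14$, $R = \{3, 5, 10\}$ example chain given in the next paragraph:

```python
def clumps(S):
    """Maximal runs of consecutive integers, indexed in increasing order."""
    out = []
    for x in sorted(S):
        if out and x == out[-1][-1] + 1:
            out[-1].append(x)
        else:
            out.append([x])
    return out

def is_rightmost_clump_deleting(chain):
    """chain = [B_0, ..., B_{r+1}]: at each step the new elements must consist of
    whole rightmost clumps of B_{h+1} plus the upper part of the next clump down."""
    for old, new in zip(chain[1:], chain[2:]):
        Ls, diff, good = clumps(new), new - old, False
        for e in range(len(Ls)):
            big = {x for L in Ls[e:] for x in L}
            small = {x for L in Ls[e + 1:] for x in L}
            good |= big >= diff >= small
        if not good:
            return False
    return True

assert clumps({2,3,5,6,7,10,13,14}) == [[2,3], [5,6,7], [10], [13,14]]
chain = [set(), {1,2,6}, {1,2,5,6,8}, {1,2,4,5,6,7,8,10,13,14}, set(range(1, 15))]
assert is_rightmost_clump_deleting(chain)
```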
When $n = 14$ and $R = \{3, 5, 10 \}$, an example of an $R$-rightmost clump deleting chain is given by $\emptyset \subset \{ 1 \text{-} 2, 6 \} \subset \{ 1\text{-}2, 5\text{-}6, 8 \} \subset \{ 1\text{-}2, 4\text{-}5\text{-}6\text{-}7\text{-}8, 10, 13\text{-}14 \}$ $\subset \{1\text{-}2\text{-}3\text{-}...\text{-}13\text{-}14\}$. Here are some reformulations of the notion of $R$-rightmost clump deleting: \begin{fact}\label{fact320.3}Let $B$ be an $R$-chain. For $h \in [r]$, set $b_{h+1} := \min (B_{h+1} \backslash B_{h} )$ and $m_h := \max (B_h)$. The following conditions are equivalent to this chain being $R$-rightmost clump deleting: \noindent (i) For $h \in [r]$, one has $[b_{h+1}, m_{h}] \subseteq B_{h+1}$. \noindent (ii) For $h \in [r]$, one has $(b_{h+1}, m_{h}) \subset B_{h+1}$. \noindent (iii) For $h \in [r]$, let $s$ be the number of elements of $B_{h+1} \backslash B_{h}$ that are less than $m_{h}$. These must be the $s$ largest elements of $[m_{h}] \backslash B_{h}$. \end{fact} \noindent Part (iii) will again be used in \cite{PW3} for projecting and lifting 312-avoidance. The following characterization is related to Part (ii) of the preceding fact via the correspondence $\pi \longleftrightarrow B$: \begin{fact}\label{fact320.4}An $R$-permutation $\pi$ is $R$-312-avoiding if and only if for every $h \in [r]$ one has \\ $( \min\{\pi_{q_{h}+1}, ... , \pi_{q_{h+1}} \} , \max \{\pi_1, ... , \pi_{q_{h}}\} ) \subset \{ \pi_1, ... , \pi_{q_{h+1}} \}$. \end{fact} Is it possible to characterize the rank $R$-tuple $\Psi_R(\pi) =: \psi$ of an arbitrary $R$-permutation $\pi$? An \emph{$R$-flag} is an $R$-increasing upper tuple $\varepsilon$ such that $\varepsilon_{q_{h+1} +1 - u} \geq \varepsilon_{q_{h} +1 - u}$ for $h \in [r]$ and $u \in [\min\{ p_{h+1}, p_{h}\}]$. It can be seen that $\psi$ is necessarily an $R$-flag. 
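The claim that $\psi$ is always an $R$-flag can be verified exhaustively for small cases. The Python sketch below (ours) also records the image $\Psi_R(S_n^R)$ for $n = 4$ and $R = \{1, 3\}$, confirming that one particular $R$-flag is missed:

```python
from itertools import permutations

def rank_tuple(pi, R, n):
    """Psi_R: the rank R-tuple of an R-permutation (0-based lists)."""
    q = [0] + sorted(R) + [n]
    psi, seen = [], set()
    for h in range(1, len(q)):
        seen |= set(pi[q[h - 1]:q[h]])
        psi += sorted(seen)[-(q[h] - q[h - 1]):]
    return psi

def is_R_flag(eps, R, n):
    """eps_{q_{h+1}+1-u} >= eps_{q_h+1-u} for h in [r], u in [min(p_{h+1}, p_h)]."""
    q = [0] + sorted(R) + [n]
    return all(eps[q[h + 1] - u] >= eps[q[h] - u]
               for h in range(1, len(q) - 1)
               for u in range(1, min(q[h + 1] - q[h], q[h] - q[h - 1]) + 1))

n, R = 4, {1, 3}
q = [0] + sorted(R) + [n]
images = set()
for pi in permutations(range(1, n + 1)):
    if all(list(pi[q[h - 1]:q[h]]) == sorted(pi[q[h - 1]:q[h]])
           for h in range(1, len(q))):          # R-increasing: an R-permutation
        psi = tuple(rank_tuple(pi, R, n))
        assert is_R_flag(psi, R, n)             # every rank R-tuple is an R-flag
        images.add(psi)
assert (3, 2, 4, 4) not in images               # this R-flag is not attained
```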
But the three conditions required so far (upper, $R$-increasing, $R$-flag) are not sufficient: When $n = 4$ and $R = \{ 1, 3 \}$, the $R$-flag $(3;2,4;4)$ cannot arise as the rank $R$-tuple of an $R$-permutation. In contrast to the upper flag characterization in the full case, it might not be possible to develop a simply stated sufficient condition for an $R$-tuple to be the rank $R$-tuple $\Psi_R(\pi)$ of a general $R$-permutation $\pi$. But it can be seen that the rank $R$-tuple $\psi$ of an $R$-312-avoiding permutation $\pi$ is necessarily a gapless $R$-tuple, since a failure of `gapless' for $\psi$ leads to the containment of an $R$-312 pattern. Building upon the observation that $UG(n) = UF(n)$ in the full case, this seems to indicate that the notion of ``gapless $R$-tuple'' is the correct generalization of the notion of ``flag'' from $[n-1]$-tuples to $R$-tuples. (It can be seen directly that a gapless $R$-tuple is necessarily an $R$-flag.) Two bijections lie at the heart of this work; the second one will again be used in \cite{PW3} to prove Theorem 9.1. \begin{prop}\label{prop320.2}For general $R \subseteq [n-1]$ we have: \noindent (i) The restriction of the global bijection $\pi \mapsto B$ from $S_n^R$ to $S_n^{R\text{-}312}$ is a bijection to the set of $R$-rightmost clump deleting chains. \noindent (ii) The restriction of the rank $R$-tuple map $\Psi_R$ from $S_n^R$ to $S_n^{R\text{-}312}$ is a bijection to $UG_R(n)$ whose inverse is $\Pi_R$. \end{prop} \begin{proof}Setting $b_h = \min\{\pi_{q_{h}+1}, ... , \pi_{q_{h+1}} \}$ and $m_{h} = \max \{\pi_1, ... , \pi_{q_{h}}\}$, use Fact \ref{fact320.4}, the $\pi \mapsto B$ bijection, and Fact \ref{fact320.3}(ii) to confirm (i). As noted above, the restriction of $\Psi_R$ to $S_n^{R\text{-}312}$ gives a map to $UG_R(n)$. Let $\gamma \in UG_R(n)$ and construct $\Pi_R(\gamma) =: \pi$. Let $h \in [r]$. Recall that $\max\{ \pi_1, ... , \pi_{q_h} \} = \gamma_{q_h}$. 
Since $\gamma$ is $R$-increasing, it can be seen that the $\pi_i$ are distinct. So $\pi$ is an $R$-permutation. Let $s \geq 0$ be the number of entries of $\{ \pi_{q_{h}+1} , ... , \pi_{q_{h+1}} \}$ that are less than $\gamma_{q_{h}}$. These are the $s$ largest elements of $[\gamma_{q_{h}}] \backslash \{ \pi_1, ... , \pi_{q_{h}} \}$. If in the hypothesis of Fact \ref{fact320.3} we take $B_h := \{\pi_1, ... , \pi_{q_h} \}$, we have $m_h = \gamma_{q_h}$. So the chain $B$ corresponding to $\pi$ satisfies Fact \ref{fact320.3}(iii). Since Fact \ref{fact320.3}(ii) is the same as the characterization of an $R$-312-avoiding permutation in Fact \ref{fact320.4}, we see that $\pi$ is $R$-312-avoiding. It can be seen that $\Psi_R[\Pi_R(\gamma)] = \gamma$, and so $\Psi_R$ is surjective from $S_n^{R\text{-}312}$ to $UG_R(n)$. For the injectivity of $\Psi_R$, now let $\pi$ denote an arbitrary $R$-312-avoiding permutation. Form $\Psi_R(\pi)$, which is a gapless $R$-tuple. Using Facts \ref{fact320.4} and \ref{fact320.3}, it can be seen that $\Pi_R[\Psi_R(\pi)] = \pi$. Hence $\Psi_R$ is injective. \end{proof} \section{Row end max tableaux, gapless (312-avoiding) keys} We study the $\lambda$-row end max tableaux of gapless $\lambda$-tuples. We also form the $\lambda$-keys of the $R$-312-avoiding permutations and introduce ``gapless'' $\lambda$-keys. We show that these three sets of tableaux coincide. Let $\alpha \in UI_\lambda(n)$. The values of the $\lambda$-row end max tableau $M_{\lambda}(\alpha) =: M$ can be determined as follows: For $h \in [r]$ and $j \in (\lambda_{q_{h+1}}, \lambda_{q_h}]$, first set $M_j(i) = \alpha_i$ for $i \in (q_{h-1}, q_h]$. When $h > 1$, from east to west among columns and south to north within a column, also set $M_j(i) := \min\{ M_j(i+1)-1, M_{j+1}(i) \}$ for $i \in (0, q_{h-1}]$. Finally, set $M_j(i) := i$ for $j \in (0, \lambda_n]$ and $i \in (0,n]$. (When $\zeta_j = \zeta_{j+1}$, this process yields $M_j = M_{j+1}$.) 
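This east-to-west filling procedure is short to implement. The Python sketch below (ours; columns are listed west to east, values 1-based) reproduces the columns of the example tableau of Section 3 from its $\lambda$-row end list:

```python
def row_end_max(alpha, lam, n):
    """M_lambda(alpha) as columns (west to east), built east to west:
    the bottom carrel of each column is copied from alpha, the rows above
    are filled with min(value below - 1, value to the east)."""
    lam1 = lam[0]
    zeta = [sum(p >= j for p in lam) for j in range(1, lam1 + 1)]
    M = [None] * (lam1 + 2)                      # 1-based column slots
    for j in range(lam1, 0, -1):
        L = zeta[j - 1]
        if L == n:                               # inert column
            M[j] = list(range(1, n + 1))
            continue
        qprev = max([0] + [z for z in zeta if z < L])  # q_{h-1} below L = q_h
        col = [0] * L
        for i in range(qprev + 1, L + 1):        # bottom carrel: row end values
            col[i - 1] = alpha[i - 1]
        for i in range(qprev, 0, -1):            # fill upward, capped by the east
            col[i - 1] = min(col[i] - 1, M[j + 1][i - 1])
        M[j] = col
    return M[1:lam1 + 1]

n, lam = 12, [7]*5 + [5]*4 + [2]*2 + [0]
alpha = [1,4,6,7,10, 7,8,9,10, 10,12, 12]
M = row_end_max(alpha, lam, n)
assert M[0] == [1,2,3,4,5,6,7,8,9,10,12]         # westmost column of the example
assert M[4] == [1,3,4,5,6,7,8,9,10]
assert M[6] == [1,4,6,7,10]
```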
The example tableau in Section 3 is $M_\lambda(\alpha)$ for $\alpha = (1,4,6,7,10;7,8,9,10;10,12; 12)$. There we have $s = 4$ and $s = 1$ respectively for $h =1$ and $h=2$: \begin{lem}\label{lemma340.1}Let $\gamma$ be a gapless $\lambda$-tuple. The $\lambda$-row end max tableau $M_{\lambda}(\gamma) =: M$ is a key. For $h \in [r]$ and $j := \lambda_{q_{h+1}}$, the $s \geq 0$ elements in $B(M_{j}) \backslash B(M_{j+1})$ that are less than $M_{j+1}(q_{h}) = \gamma_{q_{h}}$ are the $s$ largest elements of $[\gamma_{q_{h}}] \backslash B(M_{j+1})$. \end{lem} \begin{proof} Let $h \in [r]$ and set $j := \lambda_{q_{h+1}}$. We claim $B(M_{j+1}) \subseteq B(M_j)$. If $M_j(q_{h} + 1) = \gamma_{q_{h}+1} > \gamma_{q_h} = M_{j+1}(q_{h})$, then $M_j(i) = M_{j+1}(i)$ for $i \in (0, q_{h}]$ and the claim holds. Otherwise $\gamma_{q_{h} + 1} \leq \gamma_{q_{h}}$. The gapless condition on $\gamma$ implies that if we start at $(j, q_{h}+1)$ and move south, the successive values in $M_j$ increment by 1 until some lower box has the value $\gamma_{q_{h}}$. Let $i \in (q_{h}, q_{h+1}]$ be the index such that $M_j(i) = \gamma_{q_{h}}$. Now moving north from $(j,i)$, the values in $M_j$ decrement by 1 either all of the way to the top of the column, or until there is a row index $k \in (0, q_{h})$ such that $M_{j+1}(k) < M_j(k+1)-1$. In the former case set $k := 0$. In the example we have $k=1$ and $k=0$ respectively for $h=1$ and $h=2$. If $k > 0$ we have $M_j(x) = M_{j+1}(x)$ for $x \in (0,k]$. Now use $M_j(k+1) \leq M_{j+1}( k+1)$ to see that the values $M_{j + 1}(k+1), M_{j+1}( k+2), ... , M_{j+1}( q_{h})$ each appear in the interval of values $[ M_j(k+1), M_j(i) ]$. Thus $B(M_{j+1}) \subseteq B(M_j)$. Using the parenthetical remark made before the lemma's statement, we see that $M$ is a key. There are $q_{h+1} - i$ elements in $B(M_j) \backslash B(M_{j+1})$ that are larger than $M_{j+1}( q_{h}) = \gamma_{q_{h}}$. 
So $s := (q_{h+1} - q_{h}) - (q_{h+1} - i) \geq 0$ is the number of values in $B(M_j) \backslash B(M_{j+1})$ that are less than $\gamma_{q_{h}}$. These $s$ values are the complement in $[ M_j(k+1), M_j(i) ]$ of the set $\{ \hspace{1mm} M_{j+1}(x) : x \in [k+1, q_{h}] \hspace{1mm} \}$, where $M_j(i) = M_{j+1}(q_{h}) = \gamma_{q_{h}}$. \end{proof} We now introduce a tableau analog to the notion of ``$R$-rightmost clump deleting chain''. A $\lambda$-key $Y$ is \textit{gapless} if the condition below is satisfied for $h \in [r-1]$: Let $b$ be the smallest value in a column of length $q_{h+1}$ that does not appear in a column of length $q_{h}$. For $j \in (\lambda_{q_{h +2}}, \lambda_{q_{h+1}}]$, let $i \in (0, q_{h+1}]$ be the shared row index for the occurrences of $b = Y_j(i)$. Let $m$ be the bottom (largest) value in the columns of length $q_{h}$. If $b > m$ there are no requirements. Otherwise: For $j \in (\lambda_{q_{h +2}}, \lambda_{q_{h+1}}]$, let $k \in (i, q_{h+1}]$ be the shared row index for the occurrences of $m = Y_j(k)$. For $j \in (\lambda_{q_{h + 2}}, \lambda_{q_{h+1}}]$ one must have $Y_j(i+1) = b+1, Y_j(i+2) = b+2, ... , Y_j(k-1) = m-1$ holding between $Y_j(i) = b$ and $Y_j(k) = m$. (Hence necessarily $m - b = k - i$.) The tableau shown above is a gapless $\lambda$-key. Given a partition $\lambda$ with $R_\lambda =: R$, our next result considers three sets of $R$-tuples and three sets of tableaux of shape $\lambda$: \noindent (a) The set $\mathcal{A}_R$ of $R$-312-avoiding permutations and the set $\mathcal{P}_\lambda$ of their $\lambda$-keys. \noindent (b) The set $\mathcal{B}_R$ of $R$-rightmost clump deleting chains and the set $\mathcal{Q}_\lambda$ of gapless $\lambda$-keys. \noindent (c) The set $\mathcal{C}_R$ of gapless $R$-tuples and the set $\mathcal{R}_\lambda$ of their $\lambda$-row end max tableaux. \newpage \begin{thm}\label{theorem340}Let $\lambda$ be a partition and set $R := R_\lambda$. 
\noindent (i) The three sets of tableaux coincide: $\mathcal{P}_\lambda = \mathcal{Q}_\lambda = \mathcal{R}_\lambda$. \noindent (ii) An $R$-permutation is $R$-312-avoiding if and only if its $\lambda$-key is gapless. \noindent (iii) If an $R$-permutation is $R$-312-avoiding, then the $\lambda$-row end max tableau of its rank $R$-tuple is its $\lambda$-key. \noindent The restriction of the global bijection $B \mapsto Y$ from all $R$-chains to $R$-rightmost clump deleting chains is a bijection from $\mathcal{B}_R$ to $\mathcal{Q}_\lambda$. The process of constructing the $\lambda$-row end max tableau is a bijection from $\mathcal{C}_R$ to $\mathcal{R}_\lambda$. The bijection $\pi \mapsto B$ from $\mathcal{A}_R$ to $\mathcal{B}_R$ induces a map from $\mathcal{P}_\lambda$ to $\mathcal{Q}_\lambda$ that is the identity. The bijection $\Psi_R$ from $\mathcal{A}_R$ to $\mathcal{C}_R$ induces a map from $\mathcal{P}_\lambda$ to $\mathcal{R}_\lambda$ that is the identity.\end{thm} \noindent Part (iii) will again be used in \cite{PW3} to prove Theorem 9.1; there it will also be needed for the discussion in Section 12. In the full case when $\lambda$ is strict and $R = [n-1]$, the converse of Part (iii) holds: If the row end max tableau of the rank tuple of a permutation is the key of the permutation, then the permutation is 312-avoiding. For a counterexample to this converse for general $\lambda$, choose $n = 4, \lambda = (2,1,1,0)$, and $\pi = (4;1,2;3)$. Then $Y_\lambda(\pi) = M_\lambda(\psi)$ with $\pi \notin S_n^{R\text{-}312}$. The bijection from $\mathcal{C}_R$ to $\mathcal{R}_\lambda$ and the equality $\mathcal{Q}_\lambda = \mathcal{R}_\lambda$ imply that an $R$-tuple is $R$-gapless if and only if it arises as the $\lambda$-row end list of a gapless $\lambda$-key. \begin{proof} \noindent For the first of the four map statements, use the $B \mapsto Y$ bijection to relate Fact \ref{fact320.3}(i) to the definition of gapless $\lambda$-key. 
The map in the second map statement is surjective by definition and is also obviously injective. Use the construction of the bijection $\pi \mapsto B$ and the first map statement to confirm the equality $\mathcal{P}_\lambda = \mathcal{Q}_\lambda$ and the third map statement. Part (ii) follows. To prove Part (iii), let $\pi \in S_n^{R\text{-}312}$. Create the $R$-chain $B$ corresponding to $\pi$ and then its $\lambda$-key $Y := Y_\lambda(\pi)$. Set $\gamma := \Psi_R(\pi)$ and then $M := M_\lambda(\gamma)$. Clearly $B(Y_{\lambda_v}) = B_1 = \{ \gamma_1, ... , \gamma_v \} = B(M_{\lambda_v})$ for $v := q_1$. Proceed by induction on $h \in [r]$: For $v := q_h$ assume $B(Y_{\lambda_v}) = B(M_{\lambda_v})$. So $\max[B(Y_{\lambda_v})] = Y_{\lambda_v}(v) = M_{\lambda_v}(v) = \gamma_v$. Rename the example $\alpha$ before Lemma \ref{lemma340.1} as $\gamma$. Viewing that tableau as $M_\lambda( \gamma ) =: M$, for $h=2$ we have $M_5(9) = \gamma_9 = 10$. Set $v^\prime := q_{h+1}$. Let $s_Y$ be the number of values in $B(Y_{\lambda_{v^\prime}}) \backslash B(Y_{\lambda_v})$ that are less than $\gamma_v$. Viewing the example tableau as $Y$, for $h=2$ we have $s_Y = 1$. Since $\gamma_v \in B(Y_{\lambda_v})$, the number of values in $B(Y_{\lambda_{v^\prime}}) \backslash B(Y_{\lambda_v})$ that exceed $\gamma_v$ is $p_{h+1} - s_Y$. These values are the entries in $\{ \pi_{v+1} , ... , \pi_{v^\prime} \}$ that exceed $\gamma_v$. So from $\gamma := \Psi_R(\pi)$ and the description of $M_\lambda(\gamma)$ it can be seen that these values are exactly the values in $B(M_{\lambda_{v^\prime}}) \backslash B(M_{\lambda_v})$ that exceed $\gamma_v$. Let $s_M$ be the number of values in $B(M_{\lambda_{v^\prime}}) \backslash B(M_{\lambda_v})$ that are less than $\gamma_v$. Since $M$ is a key by Lemma \ref{lemma340.1} and $\gamma_v \in B(M_{\lambda_v})$, we have $s_M = p_{h+1} - (p_{h+1}-s_Y) = s_Y =: s$. From Proposition \ref{prop320.2}(i) we know that $B$ is $R$-rightmost clump deleting. 
By Fact \ref{fact320.3}(iii) applied to $B$ and Lemma \ref{lemma340.1} applied to $\gamma$, we see that for both $Y$ and for $M$ the ``new'' values that are less than $\gamma_v$ are the $s$ largest elements of $[\gamma_v] \backslash B(Y_{\lambda_v}) = [\gamma_v] \backslash B(M_{\lambda_v})$. Hence $Y_{\lambda_{v^\prime}} = M_{\lambda_{v^\prime}}$. Since we only need to consider the rightmost columns of each length when showing that two $\lambda$-keys are equal, we have $Y = M$. The equality $\mathcal{P}_\lambda = \mathcal{R}_\lambda$ and the final map statement follow. \end{proof} \begin{cor}When $\lambda$ is strict, there are $C_n$ gapless $\lambda$-keys. \end{cor} \section{Sufficient condition for Demazure convexity} Fix a $\lambda$-permutation $\pi$. We define the set $\mathcal{D}_\lambda(\pi)$ of Demazure tableaux. We show that if $\pi$ is $\lambda$-312-avoiding, then the tableau set $\mathcal{D}_\lambda(\pi)$ is the principal ideal $[Y_\lambda(\pi)]$. First we need to specify how to form the \emph{scanning tableau} $S(T)$ for a given $T \in \mathcal{T}_\lambda$. See page 394 of \cite{Wi2} for an example of this method. Given a sequence $x_1, x_2, ...$, its \emph{earliest weakly increasing subsequence (EWIS)} is $x_{i_1}, x_{i_2}, ...$, where $i_1 = 1$ and for $u > 1$ the index $i_u$ is the smallest index satisfying $x_{i_u} \geq x_{i_{u-1}}$. Let $T \in \mathcal{T}_\lambda$. Draw the shape $\lambda$ and fill its boxes as follows to produce $S(T)$: Form the sequence of the bottom values of the columns of $T$ from left to right. Find the EWIS of this sequence, and mark each box that contributes its value to this EWIS. The sequence of locations of the marked boxes for a given EWIS is its \emph{scanning path}. Place the final value of this EWIS in the lowest available location in the leftmost available column of $S(T)$. 
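As a quick illustration, the EWIS of a finite sequence can be extracted in a single greedy pass; here is a minimal Python sketch (the function name `ewis_indices` is ours, purely for illustration, not notation from this paper):

```python
def ewis_indices(xs):
    """Indices (0-based) of the earliest weakly increasing subsequence
    of xs: take the first entry, then repeatedly take the first later
    entry that is >= the most recently chosen entry."""
    if not xs:
        return []
    chosen = [0]
    for i in range(1, len(xs)):
        if xs[i] >= xs[chosen[-1]]:
            chosen.append(i)
    return chosen

# Bottom values of the columns of some tableau, read left to right;
# the EWIS here is 3, 4, 5 at indices 0, 2, 4.
print(ewis_indices([3, 1, 4, 2, 5]))
```

The final chosen value is the entry that would be placed into $S(T)$, and the chosen indices mark the boxes of the corresponding scanning path.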
This procedure can be repeated as if the marked boxes are no longer part of $T$, since it can be seen that the unmarked locations form the shape of some partition. Ignoring the marked boxes, repeat this procedure to fill in the next-lower value of $S(T)$ in its first column. Once all of the scanning paths originating in the first column have been found, every location in $T$ has been marked and the first column of $S(T)$ has been created. For $j> 1$, to fill in the $j^{th}$ column of $S(T)$: Ignore the leftmost $(j-1)$ columns of $T$, remove all of the earlier marks from the other columns, and repeat the above procedure. The scanning path originating at a location $(l,k) \in \lambda$ is denoted $\mathcal{P}(T;l,k)$. It was shown in \cite{Wi2} that $S(T)$ is the ``right key'' of Lascoux and Sch\"{u}tzenberger for $T$, which was denoted $R(T)$ there. As in \cite{PW1}, we now use the $\lambda$-key $Y_\lambda(\pi)$ of $\pi$ to define the set of \emph{Demazure tableaux}: $\mathcal{D}_\lambda(\pi) := \{ T \in \mathcal{T}_\lambda : S(T) \leq Y_\lambda(\pi) \}$. We list some basic facts concerning keys, scanning tableaux, and sets of Demazure tableaux. Since it has long been known that $R(T)$ is a key for any $T \in \mathcal{T}_\lambda$, having $S(T) = R(T)$ gives Part (i). Part (ii) is easy to deduce from the specification of the scanning method. The remaining parts follow in succession from Part (ii) and the bijection $\pi \mapsto Y$. \begin{fact}\label{fact420}Let $T \in \mathcal{T}_\lambda$ and let $Y \in \mathcal{T}_\lambda$ be a key. \noindent (i) $S(T)$ is a key and hence $S(T) \in \mathcal{T}_\lambda$. \noindent (ii) $T \leq S(T)$ and $S(Y) = Y$. \noindent (iii) $Y_\lambda(\pi) \in \mathcal{D}_\lambda(\pi)$ and $\mathcal{D}_\lambda(\pi) \subseteq [Y_\lambda(\pi)]$. \noindent (iv) The unique maximal element of $\mathcal{D}_\lambda(\pi)$ is $Y_\lambda(\pi)$. 
\noindent (v) The Demazure sets $\mathcal{D}_\lambda(\sigma)$ of tableaux are nonempty subsets of $\mathcal{T}_\lambda$ that are precisely indexed by the $\sigma \in S_n^\lambda$. \end{fact} For $U \in \mathcal{T}_\lambda$, define $m(U)$ to be the maximum value in $U$. (Define $m(U) := 1$ if $U$ is the null tableau.) Let $T \in \mathcal{T}_\lambda$. Let $(l,k) \in \lambda$. As in Section 4 of \cite{PW1}, define $U^{(l,k)}$ to be the tableau formed from $T$ by finding and removing the scanning paths that begin at $(l,\zeta_l)$ through $(l, k+1)$, and then removing the $1^{st}$ through $l^{th}$ columns of $T$. (If $l = \lambda_1$, then $U^{(l,k)}$ is the null tableau for any $k \in [\zeta_{\lambda_1}]$.) Set $S := S(T)$. Lemma 4.1 of \cite{PW1} states that $S_l(k) = \text{max} \{ T_l(k), m(U^{(l,k)}) \}$. To reduce clutter in the proofs we write $Y_\lambda(\pi) =: Y$ and $S(T) =: S$. \begin{prop}\label{prop420.1}Let $\pi \in S^\lambda_n$ and $T \in \mathcal{T}_\lambda$ be such that $T \leq Y_\lambda(\pi)$. If there exists $(l,k) \in \lambda$ such that $Y_l(k) < m(U^{(l,k)})$, then $\pi$ is $\lambda$-312-containing. \end{prop} \begin{proof}Reading the columns from right to left and then each column from bottom to top, let $(l,k)$ be the first location in $\lambda$ such that $m(U^{(l,k)}) > Y_l(k)$. In the rightmost column we have $m(U^{(\lambda_1,i)}) = 1$ for all $i \in [\zeta_{\lambda_1}]$. Thus $m(U^{(\lambda_1,i)}) \leq Y_{\lambda_1}(i)$ for all $i \in [\zeta_{\lambda_1}]$. So we must have $l \in [1, \lambda_1)$. There exists $j > l$ and $i \leq k$ such that $m(U^{(l,k)}) = T_j(i)$. Since $T \leq Y$, so far we have $Y_l(k) < T_j(i) \leq Y_j(i)$. Note that since $Y$ is a key we have $k < \zeta_l$. Then for $k < f \leq \zeta_l$ we have $m(U^{(l,f)}) \leq Y_l(f)$. So $T \leq Y$ implies that $S_l(f) \leq Y_l(f)$ for $k < f \leq \zeta_l$. Assume for the sake of contradiction that $\pi$ is $\lambda$-312-avoiding. 
Theorem \ref{theorem340}(ii) says that its $\lambda$-key $Y$ is gapless. If the value $Y_l(k)$ does not appear in $Y_j$, then the columns that contain $Y_l(k)$ must also contain $[Y_l(k), Y_j(i)]$: Otherwise, the rightmost column that contains $Y_l(k)$ has index $\lambda_{q_{h+1}}$ for some $h \in [r-1]$ and there exists some $u \in [Y_l(k), Y_j(i)]$ such that $u \notin Y_{\lambda_{q_{h+1}}}$. Then $Y$ would not satisfy the definition of gapless $\lambda$-key, since for this $h+1$ in that definition one has $b \leq u$ and $u \leq m$. If the value $Y_l(k)$ does appear in $Y_j$, it appears to the north of $Y_j(i)$ there. Then $i \leq k$ implies that some value $Y_l(f) < Y_j(i)$ with $f < k$ does not appear in $Y_j$. As above, the columns that contain the value $Y_l(f) < Y_l(k)$ must also contain $[Y_l(f), Y_j(i)]$. In either case $Y_l$ must contain $[Y_l(k), Y_j(i)]$. This includes $T_j(i)$. Now let $f > k$ be such that $Y_l(f) = T_j(i)$. Then we have $S_l(f) > S_l(k) = \text{max} \{T_l(k),m(U^{(l,k)}) \}$ $\geq T_j(i) = Y_l(f)$. This is our desired contradiction. \end{proof} As in Section 5 of \cite{PW1}: When $m(U^{(l,k)}) > Y_l(k)$, define the set $A_\lambda(T,\pi;l,k) := \emptyset$. Otherwise, define $A_\lambda(T,\pi;l,k) := [ k , \text{min} \{ Y_l(k), T_l(k+1) -1, T_{l+1}(k) \} ] $. (Refer to fictitious bounding values $T_l(\zeta_l + 1) := n+1$ and $T_{\lambda_l + 1}(l) := n$.) \begin{thm}\label{theorem420}Let $\lambda$ be a partition and $\pi$ be a $\lambda$-permutation. If $\pi$ is $\lambda$-312-avoiding, then $\mathcal{D}_\lambda(\pi) = [Y_\lambda(\pi)]$. \end{thm} \begin{proof}The easy containment $\mathcal{D}_\lambda(\pi) \subseteq [Y_\lambda(\pi)]$ is Fact \ref{fact420}(iii). Conversely, let $T \leq Y$ and $(l,k) \in \lambda$. The contrapositive of Proposition \ref{prop420.1} gives $A_\lambda(T,\pi;l,k) = [ k , \text{min} \{ Y_l(k), T_l(k+1) - 1, T_{l+1}(k) \} ]$. Since $T \leq Y$, we see that $T_l(k) \in A_\lambda(T,\pi;l,k)$ for all $(l,k) \in \lambda$. 
Theorem 5.1 of \cite{PW1} says that $T \in \mathcal{D}_\lambda(\pi)$. \end{proof} \noindent This result is used in \cite{PW3} to prove Theorem 9.1(ii). \section{Necessary condition for Demazure convexity} Continue to fix a $\lambda$-permutation $\pi$. We show that $\pi$ must be $\lambda$-312-avoiding for the set of Demazure tableaux $\mathcal{D}_\lambda(\pi)$ to be a convex polytope in $\mathbb{Z}^{|\lambda|}$. We do so by showing that if $\pi$ is $\lambda$-312-containing, then $\mathcal{D}_\lambda(\pi)$ does not contain a particular semistandard tableau that lies on the line segment defined by two particular keys that are in $\mathcal{D}_\lambda(\pi)$. \begin{thm}\label{theorem520}Let $\lambda$ be a partition and let $\pi$ be a $\lambda$-permutation. If $\mathcal{D}_\lambda(\pi)$ is convex in $\mathbb{Z}^{|\lambda|}$, then $\pi$ is $\lambda$-312-avoiding. \end{thm} \noindent This result is used in \cite{PW3} to prove Theorem 9.1(iii) and Theorem 10.3. \begin{proof}For the contrapositive, assume that $\pi$ is $\lambda$-312-containing. Here $r := |R_\lambda| \geq 2$. There exist $1 \leq g < h \leq r$ and some $a \leq q_g < b \leq q_h < c$ such that $\pi_b < \pi_c < \pi_a$. Among such patterns, we specify one that is optimal for our purposes. Figure 7.1 charts the following choices for $\pi = (4,8;9;2,3;1,5;6,7)$ in the first quadrant. Choose $h$ to be minimal. So $b \in (q_{h-1}, q_h]$. Then choose $b$ so that $\pi_b$ is maximal. Then choose $a$ so that $\pi_a$ is minimal. Then choose $g$ to be minimal. So $a \in (q_{g-1} , q_g]$. Then choose any $c$ so that $\pi_c$ completes the $\lambda$-312-containing condition. These choices have led to the following two prohibitions; see the rectangular regions in Figure 7.1: \noindent (i) By the minimality of $h$ and the maximality of $\pi_b$, there does not exist $e \in (q_g, q_h]$ such that $\pi_b < \pi_e < \pi_c$. 
\noindent (ii) By the minimality of $\pi_a$, there does not exist $e \in [q_{h-1}]$ such that $\pi_c < \pi_e < \pi_a$. \noindent If there exists $e \in [q_g]$ such that $\pi_b < \pi_e < \pi_c$, choose $d \in [q_g]$ such that $\pi_d$ is maximal with respect to this condition; otherwise set $d = b$. So $\pi_b \leq \pi_d$ with $d \leq b$. We have also ruled out: \noindent (iii) By the maximality of $\pi_d$, there does not exist $e \in [q_g]$ such that $\pi_d < \pi_e < \pi_c$. \begin{figure}[h!] \begin{center} \includegraphics[scale=1]{Figure11point1}\label{fig71} \caption*{Figure 7.1. Prohibited regions (i), (ii), and (iii) for $\pi = (4,8;9;2,3;1,5;6,7)$.} \end{center} \end{figure} Set $Y := Y_\lambda(\pi)$. Now let $\chi$ be the permutation resulting from swapping the entry $\pi_b$ with the entry $\pi_d$ in $\pi$; so $\chi_b := \pi_d, \chi_d := \pi_b$, and $\chi_e := \pi_e$ when $e \notin \{ b, d \}$. (If $d = b$, then $\chi = \pi$ with $\chi_b = \pi_b = \chi_d = \pi_d$.) Let $\bar{\chi}$ be the $\lambda$-permutation produced from $\chi$ by sorting each cohort into increasing order. Set $X := Y_\lambda(\bar{\chi})$. Let $j$ denote the column index of the rightmost column with length $q_h$; so the value $\chi_b = \pi_d$ appears precisely in the $1^{st}$ through $j^{th}$ columns of $X$. Let $f \leq h$ be such that $d \in (q_{f-1}, q_f]$, and let $k \geq j$ denote the column index of the rightmost column with length $q_f$. The swap producing $\chi$ from $\pi$ replaces $\pi_d = \chi_b$ in the $(j+1)^{st}$ through $k^{th}$ columns of $Y$ with $\chi_d = \pi_b$ to produce $X$. (The values in these columns may need to be re-sorted to meet the semistandard criteria.) So $\chi_d \leq \pi_d$ implies $X \leq Y$ via a column-wise argument. Forming the union of the prohibited rectangles for (i), (ii), and (iii), we see that there does not exist $e \in [q_{h-1}]$ such that $\pi_d = \chi_b < \pi_e < \pi_a$. 
Thus we obtain: \noindent (iv) For $l > j$, the $l^{th}$ column of $X$ does not contain any values from $[\chi_b, \pi_a)$. \noindent Let $(j,i)$ denote the location of the $\chi_b$ in the $j^{th}$ column of $X$ (and hence $Y$). So $Y_j(i) = \pi_d$. By (iv) and the semistandard conditions, we have $X_{j+1}(u) = \pi_a$ for some $u \leq i$. By (i) and (iii) we can see that $X_j(i+1) > \pi_c$. Let $m$ denote the column index of the rightmost column of $\lambda$ with length $q_g$. This is the rightmost column of $X$ that contains $\pi_a$. Let $\mu \subseteq \lambda$ be the set of locations of the $\pi_a$'s in the $(j+1)^{st}$ through $m^{th}$ columns of $X$; note that $(j+1, u) \in \mu$. Let $\omega$ be the permutation obtained by swapping $\chi_a = \pi_a$ with $\chi_b = \pi_d$ in $\chi$; so $\omega_a := \chi_b = \pi_d$, $\omega_b := \chi_a = \pi_a$, $\omega_d := \chi_d = \pi_b$, and $\omega_e := \pi_e$ when $e \notin \{ d, a, b \}$. Let $\bar{\omega}$ be the $\lambda$-permutation produced from $\omega$ by sorting each cohort into increasing order. Set $W := Y_\lambda(\bar{\omega})$. By (iv), obtaining $\omega$ from $\chi$ is equivalent to replacing the $\pi_a$ at each location of $\mu$ in $X$ with $\chi_b$ (and leaving the rest of $X$ unchanged) to obtain $W$. So $\chi_b < \pi_a$ implies $W < X$. Let $T$ be the result of replacing the $\pi_a$ at each location of $\mu$ in $X$ with $\pi_c$ (and leaving the rest unchanged). So $T < X \leq Y$. See the conceptual Figure 7.2 for $X$ and $T$; the shaded boxes form $\mu$. In particular $T_{j+1}(u) = \pi_c$. This $T$ is not necessarily a key; we need to confirm that it is semistandard. For every $(q,p) \notin \mu$ we have $W_q(p) = T_q(p) = X_q(p)$. By (iv), there are no values in $X$ in any column to the right of the $j^{th}$ column from $[\pi_c, \pi_a)$. The region $\mu$ is contained in these columns. Hence we only need to check semistandardness when moving from the $j^{th}$ column to $\mu$ in the $(j+1)^{st}$ column. 
Here $u \leq i$ implies $T_j(u) \leq T_j(i) = \pi_d < \pi_c = T_{j+1}(u)$. So $T \in \mathcal{T}_\lambda$. \begin{figure}[h!] \begin{center} \includegraphics[scale=1]{Figure11point2Phase3} \caption*{Figure 7.2. Values of $X$ (respectively $T$) are in upper left (lower right) corners.} \end{center} \end{figure} Now we consider the scanning tableau $S(T) =: S$ of $T$: Since $(j, i+1) \notin \mu$, we have $T_j(i+1) = X_j(i+1)$. Since $X_j(i+1) > \pi_c = T_{j+1}(u)$, the location $(j+1,u)$ is not in a scanning path $\mathcal{P}(T;j,i^\prime)$ for any $i^\prime > i$. Since $T_j(i) = \chi_b = \pi_d < \pi_c$, the location $(j+1,v)$ is in $\mathcal{P}(T;j,i)$ for some $v \in [u,i]$. By the semistandard column condition one has $T_{j+1}(v) \geq T_{j+1}(u) = \pi_c$. Thus $S_j(i) \geq \pi_c > \chi_b = \pi_d = Y_j(i)$. Hence $S(T) \nleq Y$, and so $T \notin \mathcal{D}_\lambda(\pi)$. Since $T \in [Y]$, we have $\mathcal{D}_\lambda(\pi) \neq [Y]$. In $\mathbb{R}^{|\lambda|}$, consider the line segment $U(t) = W + t(X-W)$, where $0 \leq t \leq 1$. Here $U(0) = W$ and $U(1) = X$. The value of $t$ only affects the values at the locations in $\mu$. Let $x := \frac{\pi_c - \chi_b}{\pi_a - \chi_b}$. Since $\chi_b < \pi_c < \pi_a$, we have $0 < x < 1$. The values in $\mu$ in $U(x)$ are $\chi_b + \frac{\pi_c - \chi_b}{\pi_a-\chi_b}(\pi_a-\chi_b) = \pi_c$. Hence $U(x) = T$. Since $X$ and $W$ are keys, by Fact \ref{fact420}(ii) we have $S(X) = X$ and $S(W) = W$. Then $W < X \leq Y$ implies $W \in \mathcal{D}_\lambda(\pi)$ and $X \in \mathcal{D}_\lambda(\pi)$. Thus $U(0), U(1) \in \mathcal{D}_\lambda(\pi)$ but $U(x) \notin \mathcal{D}_\lambda(\pi)$. If a set $\mathcal{E}$ is a convex polytope in $\mathbb{Z}^N$ and $U(t)$ is a line segment with $U(0), U(1) \in \mathcal{E}$, then $U(t) \in \mathcal{E}$ for any $0 < t < 1$ such that $U(t) \in \mathbb{Z}^N$. 
Since $0 < x < 1$ and $U(x) = T \in \mathbb{Z}^{|\lambda|}$ with $U(x) \notin \mathcal{D}_\lambda(\pi)$, we see that $\mathcal{D}_\lambda(\pi)$ is not a convex polytope in $\mathbb{Z}^{|\lambda|}$. \end{proof} When one first encounters the notion of a Demazure polynomial, given Facts \ref{fact420}(iii)(iv) it is natural to ask when $\mathcal{D}_\lambda(\pi)$ is simply all of the ideal $[Y_\lambda(\pi)]$. Since principal ideals in $\mathcal{T}_\lambda$ are convex polytopes in $\mathbb{Z}^{|\lambda|}$, we can answer this question by combining Theorems \ref{theorem420} and \ref{theorem520}: \begin{cor}\label{cor520}Let $\pi \in S_n^\lambda$. The set $\mathcal{D}_\lambda(\pi)$ of Demazure tableaux of shape $\lambda$ is a convex polytope in $\mathbb{Z}^{|\lambda|}$ if and only if $\pi$ is $\lambda$-312-avoiding if and only if $\mathcal{D}_\lambda(\pi) = [Y_\lambda(\pi)]$. \end{cor} \noindent When $\lambda$ is the strict partition $(n,n-1, ..., 2,1)$, this convexity result appeared as Theorem 3.9.1 in \cite{Wi1}. \section{Potential applications of convexity} In addition to providing the core content needed to prove the main results of \cite{PW3}, our convexity results might later be useful in some geometric or representation theory contexts. Our re-indexing of the $R$-312-avoiding phenomenon with gapless $R$-tuples could also be useful. Fix $R \subseteq [n-1]$; inside $G := GL_n(\mathbb{C})$ this determines a parabolic subgroup $P := P_R$. If $R = [n-1]$ then $P$ is the Borel subgroup $B$ of $G$. Fix $\pi \in S_n^R$; this specifies a Schubert variety $X(\pi)$ of the flag manifold $G \slash P$. Pattern avoidance properties for $\pi$ have been related to geometric properties for $X(\pi)$: If $\pi \in S_n$ is 3412-avoiding and 4231-avoiding, then the variety $X(\pi) \subseteq G \slash B$ is smooth by Theorem 13.2.2.1 of \cite{LR}. Since a 312-avoiding $\pi$ satisfies these conditions, its variety $X(\pi)$ is smooth. 
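As a concrete aside, ordinary (non-parabolic) pattern containment, such as the 312-, 3412-, and 4231-conditions just mentioned, is easy to test by brute force on small permutations; a minimal Python sketch, with helper names of our own choosing:

```python
from itertools import combinations, permutations

def contains_pattern(perm, pattern):
    """True if some subsequence of perm is order-isomorphic to pattern."""
    def relative_order(seq):
        ranks = {v: i for i, v in enumerate(sorted(seq))}
        return tuple(ranks[v] for v in seq)
    target = relative_order(pattern)
    return any(relative_order(sub) == target
               for sub in combinations(perm, len(pattern)))

# The 312-avoiding permutations of [n] are counted by the Catalan numbers;
# for n = 4 the count below is C_4 = 14.
count = sum(1 for p in permutations(range(1, 5))
            if not contains_pattern(p, (3, 1, 2)))
```

Since `combinations` preserves the left-to-right order of its input, each `sub` really is a subsequence of `perm`.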
Postnikov and Stanley \cite{PS} noted that Lakshmibai called these the ``Kempf'' varieties. It could be interesting to extend the direct definition of the notion of Kempf variety from $G \slash B$ to all $G \slash P$, in contrast to using the indirect definition for $G \slash P$ given in \cite{HL}. Berenstein and Zelevinsky \cite{BZ} emphasized the value of using the points in convex integral polytopes to describe the weights-with-multiplicities of representations. Fix a partition $\lambda$ of some $N \geq 1$ such that $R_\lambda = R$. Rather than using the tableaux in $\mathcal{T}_\lambda$ to describe the irreducible polynomial character of $G$ with highest weight $\lambda$ (Schur function of shape $\lambda$), the corresponding Gelfand-Zetlin patterns (which have top row $\lambda$) can be used. These form an integral polytope in $\mathbb{Z}^{n \choose 2}$ that is convex. In Corollary 15.2 of \cite{PS}, Postnikov and Stanley formed convex polytopes from certain subsets of the GZ patterns with top row $\lambda$; these had been considered by Kogan. They summed the weights assigned to the points in these polytopes to obtain the Demazure polynomials $d_\lambda(\pi;x)$ that are indexed by the 312-avoiding permutations. The convex integral polytope viewpoint was used there to describe the degree of the associated embedded Schubert variety $X(\pi)$ in the full flag manifold $G \slash B$. Now assume that $\lambda$ is strict. Here $R = [n-1]$ and the $R$-312-avoiding permutations are the 312-avoiding permutations. The referee of \cite{PW3} informed us that Kiritchenko, Smirnov, and Timorin generalized Corollary 15.2 of \cite{PS} in \cite{KST} to express the polynomial $d_\lambda(\pi;x)$ for any $\pi \in S_n^\lambda$ as a sum over the points in certain faces of the GZ polytope for $\lambda$ that are determined by $\pi$. Only one face is used exactly when $\pi$ is 312-avoiding. 
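Gelfand-Zetlin patterns with a fixed top row are also easy to enumerate directly, which makes the polytope counting concrete; a minimal brute-force Python sketch (the function name and recursion are ours, purely for illustration):

```python
from itertools import product

def gz_patterns(top):
    """All Gelfand-Zetlin patterns with top row `top` (a weakly
    decreasing integer tuple): rows of lengths n, n-1, ..., 1, where
    each row interlaces the row above it:
        above[i] >= below[i] >= above[i+1].
    Interlacing automatically makes each new row weakly decreasing."""
    top = tuple(top)
    if len(top) == 1:
        return [[top]]
    choices = [range(top[i + 1], top[i] + 1) for i in range(len(top) - 1)]
    return [[top] + rest
            for below in product(*choices)
            for rest in gz_patterns(below)]
```

For top row $(2,1,0)$ this yields $8$ patterns, matching the $8$ semistandard tableaux of shape $(2,1)$ with entries at most $3$ (the dimension of the corresponding irreducible $GL_3$ representation).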
At a glance it may appear that their Theorem 1.2 implies that the set of points used from the GZ integral polytope for $d_\lambda(\pi;x)$ is convex exactly when $\pi$ is 312-avoiding. So that referee encouraged us to remark upon the parallel 312-avoiding phenomena of convexity in $\mathbb{Z}^N$ for the tableau set $\mathcal{D}_\lambda(\pi)$ and of convexity in $\mathbb{Z}^{n \choose 2}$ for the set of points in these faces. But we soon saw that when $\lambda$ is small it is possible for the union of faces used for $d_\lambda(\pi;x)$ to be convex even when $\pi$ is not 312-avoiding. See Section 12 of \cite{PW3} for a counterexample. To obtain convexity, one must replace $\lambda$ by $m\lambda$ for some $m \geq 2$. In contrast, our Corollary \ref{cor520} holds for all $\lambda$. Postnikov and Stanley remarked that the convex polytope of GZ patterns in the 312-avoiding case was used by Kogan and Miller to study the toric degeneration formed by Gonciulea and Lakshmibai for a Kempf variety. It would be interesting to see if the convexity characterization of the $R$-312-avoiding Demazure tableau sets $\mathcal{D}_\lambda(\pi)$ found here is related to some nice geometric properties for the corresponding Schubert varieties $X(\pi)$ in $G \slash P$. For any $R$-permutation $\pi$ the Demazure tableaux are well suited to studying the associated Schubert variety from the Pl{\"u}cker relations viewpoint, as was illustrated by Lax's re-proof \cite{Lax} of the standard monomial basis that used the scanning method of \cite{Wi2}. \section{Parabolic Catalan counts} The section (or paper) cited at the beginning of each item in the following statement points to the definition of the concept: \begin{thm}\label{theorem18.1}Let $R \subseteq [n-1]$. Write the elements of $R$ as $q_1 < q_2 < ... < q_r$. Set $q_0 := 0$ and $q_{r+1} := n$. Let $\lambda$ be a partition $\lambda_1 \geq \lambda_2 \geq ... \geq \lambda_n \geq 0$ whose shape has the distinct column lengths $q_r, q_{r-1}, ... 
, q_1$. Set $p_h := q_h - q_{h-1}$ for $1 \leq h \leq r+1$. The number $C_n^R =: C_n^\lambda$ of $R$-312-avoiding permutations is equal to the number of: \noindent (i) \cite{GGHP}: ordered partitions of $[n]$ into blocks of sizes $p_h$ for $1 \leq h \leq r+1$ that avoid the pattern 312, and $R$-$\sigma$-avoiding permutations for $\sigma \in \{ 123, 132, 213, 231, 321 \}$. \noindent (ii) Section 2: multipermutations of the multiset $\{ 1^{p_1}, 2^{p_2}, ... , (r+1)^{p_{r+1}} \}$ that avoid the pattern 231. \noindent (iii) Section 2: gapless $R$-tuples $\gamma \in UG_R(n)$. \noindent (iv) Here only: $r$-tuples $(\mu^{(1)}, ... , \mu^{(r)})$ of shapes such that $\mu^{(h)}$ is contained in a $p_h \times (n-q_h)$ rectangle for $1 \leq h \leq r$ and for $1 \leq h \leq r-1$ the length of the first row in $\mu^{(h)}$ does not exceed the length of the $p_{h+1}^{st}$ (last) row of $\mu^{(h+1)}$ plus the number of times that (possibly zero) last row length occurs in $\mu^{(h+1)}$. \noindent (v) Sections 4 and 5: $R$-rightmost clump deleting chains and gapless $\lambda$-keys. \noindent (vi) Section 6: sets of Demazure tableaux of shape $\lambda$ that are convex in $\mathbb{Z}^{|\lambda|}$. \end{thm} \begin{proof}Part (i) first restates our $C_n^R$ definition with the terminology of \cite{GGHP}; for the second claim see the discussion below. The equivalence for (ii) was noted in Section 2. Use Proposition \ref{prop320.2}(ii) to confirm (iii). For (iv), destrictify the gapless $R$-tuples within each carrel. Use Proposition \ref{prop320.2}(i) and Theorem \ref{theorem340} to confirm (v). Part (vi) follows from Corollary \ref{cor520} and Fact \ref{fact420}(v).\end{proof} To use the Online Encyclopedia of Integer Sequences \cite{Slo} to determine if the counts $C_n^R$ had been studied, we had to form sequences. One way to form a sequence of such counts is to take $n := 2m$ for $m \geq 1$ and $R_m := \{ 2, 4, 6, ... , 2m-2 \}$. 
Then the $C_{2m}^R$ sequence starts with 1, 6, 43, 352, 3114, ... ; this beginning appeared in the OEIS in Pudwell's A220097. Also for $n \geq 1$ define the \emph{total parabolic Catalan number $C_n^\Sigma$} to be $\sum C_n^R$, sum over $R \subseteq [n-1]$. This sequence starts with 1, 3, 12, 56, 284, ... ; with a `1' prepended, this beginning appeared in Sloane's A226316. These ``hits'' led us to the papers \cite{GGHP} and \cite{CDZ}. Let $R$ be as in the theorem. Let $2 \leq t \leq r+1$. Fix a permutation $\sigma \in S_t$. Apparently for the sake of generalization in and of itself with new enumeration results as a goal, Godbole, Goyt, Herdan and Pudwell defined \cite{GGHP} the notion of an ordered partition of $[n]$ with block sizes $b_1, b_2, ... , b_{r+1}$ that avoids the pattern $\sigma$. It appears that that paper was the first paper to consider a notion of pattern avoidance for ordered partitions that can be used to produce our $R$-312-avoiding permutations: Take $b_1 := q_1$, $b_2 := q_2 - q_1$, ... , $b_{r+1} := n - q_r$, $t := 3$, and $\sigma := (3;1;2)$. Their Theorem 4.1 implies that the number of such ordered partitions that avoid $\sigma$ is equal to the number of such ordered partitions that avoid each of the other five permutations for $t = 3$. This can be used to confirm that the $C_{2m}^R$ sequence defined above is indeed Sequence A220097 of the OEIS (which is described as avoiding the pattern 123). Chen, Dai, and Zhou gave generating functions \cite{CDZ} in Theorem 3.1 and Corollary 2.3 for the $C_{2m}^R$ for $R = \{ 2, 4, 6, ... , 2m-2 \}$ for $m \geq 0$ and for the $C_n^\Sigma$ for $n \geq 0$. The latter result implies that the sequence A226316 indeed describes the sequence $C_n^\Sigma$ for $n \geq 0$. Karen Collins and the second author of this paper have recently deduced that $C_n^\Sigma = \sum_{0 \leq k \leq [n/2]} (-1)^k \binom{n-k}{k} 2^{n-k-1} C_{n-k}$. How can the $C_n^\Sigma$ total counts be modeled? 
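The displayed formula for $C_n^\Sigma$ can be sanity-checked in a few lines against the initial values 1, 3, 12, 56, 284 quoted above; a minimal Python sketch (the function names are ours):

```python
from math import comb

def catalan(n):
    """Ordinary Catalan number C_n."""
    return comb(2 * n, n) // (n + 1)

def total_parabolic_catalan(n):
    """C_n^Sigma = sum_{k=0}^{floor(n/2)} (-1)^k binom(n-k, k)
    2^(n-k-1) C_{n-k}, as quoted in the text."""
    return sum((-1) ** k * comb(n - k, k) * 2 ** (n - k - 1) * catalan(n - k)
               for k in range(n // 2 + 1))

# Reproduces the sequence 1, 3, 12, 56, 284 for n = 1, ..., 5.
print([total_parabolic_catalan(n) for n in range(1, 6)])
```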
Gathering the $R$-312-avoiding permutations or the gapless $R$-tuples from Theorem \ref{theorem18.1}(ii) for this purpose would require retaining their ``semicolon dividers''. Some other objects model $C_n^\Sigma$ more elegantly. We omit definitions for some of the concepts in the next statement. We also suspend our convention concerning the omission of the prefix `$[n-1]$-': Before, a `rightmost clump deleting' chain deleted one element at each stage. Now this unadorned term describes a chain that deletes any number of elements in any number of stages, provided that they constitute entire clumps of the largest elements still present plus possibly a subset from the rightmost of the other clumps. When $n = 3$ one has $C_n^\Sigma = 12$. Five of these chains were displayed in Section 6. A sixth is \cancel{1} \cancel{2} \cancel{3}. Here are the other six, plus one such chain for $n = 17$: \begin{figure}[h!] \begin{center} \setlength\tabcolsep{.1cm} \begin{tabular}{ccccc} 1& &2& &\cancel{3}\\ &\cancel{1}& &\cancel{2} \end{tabular}\hspace{7mm} \begin{tabular}{ccccc} 1& &\cancel{2}& &3\\ &\cancel{1}& &\cancel{3} \end{tabular}\hspace{7mm} \begin{tabular}{ccccc} \cancel{1}& &2& &3\\ &\cancel{2}& &\cancel{3} \end{tabular}\hspace{7mm} \begin{tabular}{ccccc} 1& &\cancel{2}& &\cancel{3}\\ & & \cancel{1}& & \end{tabular}\hspace{7mm} \begin{tabular}{ccccc} \cancel{1}& &2& &\cancel{3}\\ & & \cancel{2} \end{tabular}\hspace{7mm} \begin{tabular}{ccccc} \cancel{1}& &\cancel{2}& &{3}\\ & &\cancel{3} \end{tabular} \end{center} \end{figure} \begin{figure}[h!] 
\begin{center} \setlength\tabcolsep{.3cm} \begin{tabular}{ccccccccccccccccc} 1 & 2 & \cancel{3} & 4 & 5 & \cancel{6} & 7 & 8 & 9 & 10 & 11 & \cancel{12} & 13 & 14 & \cancel{15} & 16 & 17 \\ & & 1 & 2 & 4 & 5 & 7 & \cancel{8} & 9 & \cancel{10} & 11 & \cancel{13} & \cancel{14} & \cancel{16} & \cancel{17} & & \\ & & & & & 1 & 2 & \cancel{4} & 5 & \cancel{7} & \cancel{9} & \cancel{11} & & & & & \\ & & & & & & & \cancel{1} & \cancel{2} & \cancel{5} & & & & & & & \end{tabular} \end{center} \end{figure} \vspace{.1pc}\begin{cor}\label{cor18.2} The total parabolic Catalan number $C_n^\Sigma$ is the number of: \noindent (i) ordered partitions of $\{1, 2, ... , n \}$ that avoid the pattern 312. \noindent (ii) rightmost clump deleting chains for $[n]$, and gapless keys whose columns have distinct lengths less than $n$. \noindent (iii) Schubert varieties in all of the flag manifolds $SL(n) / P_J$ for $J \subseteq [n-1]$ such that their ``associated'' Demazure tableaux form convex sets as in Section 7. \end{cor} \noindent Part (iii) highlights the fact that the convexity result of Corollary \ref{cor520} depends only upon the information from the indexing $R$-permutation for the Schubert variety, and not upon any further information from the partition $\lambda$. In addition to their count $op_n[(3;1;2)] = C_n^\Sigma$, the authors of \cite{GGHP} and \cite{CDZ} also considered the number $op_{n,k}(\sigma)$ of such $\sigma$-avoiding ordered partitions with $k$ blocks. The models above can be adapted to require the presence of exactly $k$ blocks, albeit of unspecified sizes. \vspace{.5pc}\noindent \textbf{Added Note.} We learned of the paper \cite{MW} after posting \cite{PW2} on the arXiv. As at the end of Section 3, let $R$ and $J$ be such that $R \cup J = [n-1]$ and $R \cap J = \emptyset$. 
It could be interesting to compare the definition for what we would call an `$R$-231-avoiding' $R$-permutation (as in \cite{GGHP}) to M{\"u}hle's and Williams' definition of a `$J$-231-avoiding' $R$-permutation in Definition 5 of \cite{MW}. There they impose an additional condition $w_i = w_k + 1$ upon the pattern to be avoided. For their Theorems 21 and 24, this condition enables them to extend the notions of ``non-crossing partition'' and of ``non-nesting partition'' to the parabolic quotient $S_n / W_J$ context of $R$-permutations to produce sets of objects that are equinumerous with their $J$-231-avoiding $R$-permutations. Their Theorem 7 states that this extra condition is superfluous when $J = \emptyset$. In this case their notions of $J$-non-crossing partition and of $J$-non-nesting partition specialize to the set partition Catalan number models that appeared as Exercises 159 and 164 of \cite{Sta}. So if it is agreed that their reasonably stated generalizations of the notions of non-crossing and non-nesting partitions are the most appropriate generalizations that can be formulated for the $S_n / W_J$ context, then the mutual cardinality of their three sets of objects indexed by $J$ and $n$ becomes a competitor to our $C_n^R$ count for the name ``$R$-parabolic Catalan number''. This development has made the obvious metaproblem more interesting: Now not only must one determine whether each of the 214 Catalan models compiled in \cite{Sta} is ``close enough'' to a pattern avoiding permutation interpretation to lead to a successful $R$-parabolic generalization, one must also determine which parabolic generalization applies. \vspace{1pc}\noindent \textbf{Acknowledgments.} We thank Keith Schneider, Joe Seaborn, and David Raps for some helpful conversations, and we are also indebted to David Raps for some help with preparing this paper. We thank the referee for suggesting some improvements in the exposition.
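As a sanity check on the count $C_3^\Sigma = 12$ of ordered partitions of $\{1,2,3\}$ avoiding $(3;1;2)$ from Corollary \ref{cor18.2}(i), the following brute-force sketch enumerates ordered set partitions and tests containment. It assumes the convention (as we read \cite{GGHP}) that an occurrence of $(3;1;2)$ uses three distinct blocks in left-to-right order; the helper names are ours, not from the paper.

```python
def ordered_set_partitions(elems):
    """Yield all ordered set partitions of `elems` as tuples of frozensets."""
    if not elems:
        yield ()
        return
    first, rest = elems[0], elems[1:]
    for part in ordered_set_partitions(rest):
        for i in range(len(part)):          # put `first` into an existing block
            yield part[:i] + (part[i] | {first},) + part[i + 1:]
        for i in range(len(part) + 1):      # or as a new singleton block
            yield part[:i] + (frozenset({first}),) + part[i:]

def contains_312(part):
    """Occurrence of (3;1;2): blocks i < j < l and elements a in B_i,
    b in B_j, c in B_l with b < c < a."""
    k = len(part)
    return any(b < c < a
               for i in range(k)
               for j in range(i + 1, k)
               for l in range(j + 1, k)
               for a in part[i] for b in part[j] for c in part[l])

def count_avoiders(n):
    return sum(1 for p in ordered_set_partitions(list(range(1, n + 1)))
               if not contains_312(p))

print(count_avoiders(3))  # 12: of the 13 ordered partitions of {1,2,3},
                          # only (3)(1)(2) contains the pattern
```

For $n = 3$ the only ordered partition containing the pattern is the chain of singletons $(3)(1)(2)$, which matches the count of twelve chains displayed above.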
\section{Introduction} De Sitter (dS) spacetime is among the most popular backgrounds in gravitational physics. There are several reasons for this. First of all, dS spacetime is the maximally symmetric solution of Einstein's equation with a positive cosmological constant. Due to its high symmetry, numerous physical problems are exactly solvable on this background, and a better understanding of physical effects in it could serve as a handle for dealing with more complicated geometries. De Sitter spacetime plays an important role in most inflationary models, where an approximately dS spacetime is employed to solve a number of problems in standard cosmology \cite{Lind90}. More recently, astronomical observations of high-redshift supernovae, galaxy clusters, and the cosmic microwave background \cite{Ries07} indicate that at the present epoch the universe is accelerating and can be well approximated by a world with a positive cosmological constant. If the universe were to accelerate indefinitely, standard cosmology would lead to an asymptotic dS universe. In addition, an interesting topic which has received increasing attention is related to string-theoretical models of dS spacetime and inflation. Recently, a number of constructions of metastable dS vacua within the framework of string theory have been discussed (see, for instance, \cite{Kach03,Silv07} and references therein). There is no reason to believe that the version of dS spacetime which may emerge from string theory will necessarily be the most familiar version with symmetry group $O(1,4)$, and there are many different topological spaces which can accept the dS metric locally. There are many reasons to expect that in string theory the most natural topology for the universe is that of a flat compact three-manifold \cite{McIn04}. In particular, in Ref.
\cite{Lind04} it was argued that, from an inflationary point of view, universes with compact spatial dimensions should, under certain conditions, be considered a rule rather than an exception. Models of a compact universe with nontrivial topology may play an important role by providing proper initial conditions for inflation (for the cosmological consequences of nontrivial topology and observational bounds on the size of compactified dimensions see, for example, \cite{Lach95}). The quantum creation of a universe with toroidal spatial topology is discussed in \cite{Zeld84} and in Refs. \cite{Gonc85} within the framework of various supergravity theories. The compactification of spatial dimensions leads to a modification of the spectrum of vacuum fluctuations and, as a result, to Casimir-type contributions to the vacuum expectation values of physical observables (for the topological Casimir effect and its role in cosmology see \cite{Most97,Bord01,Eliz06} and references therein). The effect of the compactification of a single spatial dimension in dS spacetime (topology $\mathrm{R}^{D-1}\times \mathrm{S}^{1}$) on the properties of the quantum vacuum for a scalar field with general curvature coupling parameter and with a periodicity condition along the compactified dimension is investigated in Ref. \cite{Saha07} (for quantum effects in braneworld models with dS spaces see, for instance, \cite{dSbrane}). In view of the above-mentioned importance of toroidally compactified dS spacetimes, in the present paper we consider a general class of compactifications having the spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$, $p+q=D$. This geometry can be used to describe two types of models. The first has $p=3$, $q\geqslant 1$, and corresponds to a universe with Kaluza-Klein type extra dimensions.
As will be shown in the present work, the presence of extra dimensions generates an additional gravitational source in the cosmological equations which is of barotropic type at late stages of the cosmological evolution. For the second model $D=3$, and the results given below describe how the properties of a universe with dS geometry are changed by one-loop quantum effects induced by the compactness of the spatial dimensions. In quantum field theory on curved backgrounds, among the important quantities describing the local properties of a quantum field and quantum back-reaction effects are the expectation values of the field square and the energy-momentum tensor for a given quantum state. In particular, the vacuum expectation values of these quantities are of special interest. In order to evaluate them, we first construct the corresponding positive frequency Wightman function. Applying the Abel-Plana summation formula to the mode-sum, we present this function as the sum of the Wightman function for the topology $\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$ and an additional term induced by the compactness of the $(p+1)$th dimension. The latter is finite in the coincidence limit and can be used directly for the evaluation of the corresponding parts in the expectation values of the field square and the energy-momentum tensor. In this way the renormalization of these quantities is reduced to the renormalization of the corresponding quantities in uncompactified dS spacetime. Note that for a scalar field on the background of dS spacetime the renormalized vacuum expectation values of the field square and the energy-momentum tensor are investigated in Refs. \cite{Cand75,Dowk76,Bunc78} by using various regularization schemes (see also \cite{Birr82}). The corresponding effects upon phase transitions in an expanding universe are discussed in \cite{Vile82,Alle83}. The paper is organized as follows.
In the next section we consider the positive frequency Wightman function for dS spacetime of topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$. In sections \ref{sec:vevPhi2} and \ref{sec:vevEMT2} we use the formula for the Wightman function for the evaluation of the vacuum expectation values of the field square and the energy-momentum tensor. The asymptotic behavior of these quantities is investigated for the early and late stages of the cosmological evolution. The case of a twisted scalar field with antiperiodic boundary conditions is considered in section \ref{sec:Twisted}. The main results of the paper are summarized in section \ref{sec:Conc}. \section{Wightman function in de Sitter spacetime with toroidally compactified dimensions} \label{sec:WF} We consider a free massive scalar field with curvature coupling parameter $\xi $ on the background of $(D+1)$-dimensional de Sitter spacetime ($\mathrm{dS}_{D+1}$) generated by a positive cosmological constant $\Lambda $. The field equation has the form \begin{equation} \left( \nabla _{l}\nabla ^{l}+m^{2}+\xi R\right) \varphi =0, \label{fieldeq} \end{equation} where $R=2(D+1)\Lambda /(D-1)$ is the Ricci scalar for $\mathrm{dS}_{D+1}$ and $\xi $ is the curvature coupling parameter. The special cases $\xi =0$ and $\xi =\xi _{D}\equiv (D-1)/4D$ correspond to minimally and conformally coupled fields, respectively. These special cases are important because in the massless limit the corresponding fields mimic the behavior of gravitons and photons. We write the line element for $\mathrm{dS}_{D+1}$ in planar (inflationary) coordinates, which are most appropriate for cosmological applications: \begin{equation} ds^{2}=dt^{2}-e^{2t/\alpha }\sum_{i=1}^{D}(dz^{i})^{2}, \label{ds2deSit} \end{equation} where the parameter $\alpha $ is related to the cosmological constant by the formula \begin{equation} \alpha ^{2}=\frac{D(D-1)}{2\Lambda }.
\label{alfa} \end{equation} Below, in addition to the synchronous time coordinate $t$, we will also use the conformal time $\tau $, in terms of which the line element takes a conformally flat form: \begin{equation} ds^{2}=(\alpha /\tau )^{2}[d\tau ^{2}-\sum_{i=1}^{D}(dz^{i})^{2}],\;\tau =-\alpha e^{-t/\alpha },\;-\infty <\tau <0. \label{ds2Dd} \end{equation} We assume that the spatial coordinates $z^{l}$, $l=p+1,\ldots ,D$, are compactified to circles $\mathrm{S}^{1}$ of length $L_{l}$: $0\leqslant z^{l}\leqslant L_{l}$, while for the other coordinates we have $-\infty <z^{l}<+\infty $, $l=1,\ldots ,p$. Hence, we consider the spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$, where $q=D-p$. For $p=0$, as a special case we obtain the toroidally compactified dS spacetime discussed in \cite{McIn04,Lind04,Zeld84}. The Casimir densities for a scalar field with periodicity conditions in the case $q=1$ were discussed previously in Ref. \cite{Saha07}. In the discussion below we will denote the position vectors along the uncompactified and compactified dimensions by $\mathbf{z}_{p}=(z^{1},\ldots ,z^{p})$ and $\mathbf{z}_{q}=(z^{p+1},\ldots ,z^{D})$. For a scalar field with periodic boundary conditions one has (no summation over $l$) \begin{equation} \varphi (t,\mathbf{z}_{p},\mathbf{z}_{q}+L_{l}\mathbf{e}_{l})=\varphi (t,\mathbf{z}_{p},\mathbf{z}_{q}), \label{periodicBC} \end{equation} where $l=p+1,\ldots ,D$ and $\mathbf{e}_{l}$ is the unit vector along the direction of the coordinate $z^{l}$. In this paper we are interested in the effects of non-trivial topology on the vacuum expectation values (VEVs) of the field square and the energy-momentum tensor. These VEVs are obtained from the corresponding positive frequency Wightman function $G_{p,q}^{+}(x,x^{\prime })$ in the coincidence limit of the arguments.
The Wightman function is also important in considerations of the response of particle detectors in a given state of motion (see, for instance, \cite{Birr82}). Expanding the field operator over the complete set $\left\{ \varphi _{\sigma }(x),\varphi _{\sigma }^{\ast }(x)\right\} $ of positive and negative frequency solutions to the classical field equation, satisfying the periodicity conditions along the compactified dimensions, the positive frequency Wightman function is presented as the mode-sum: \begin{equation} G_{p,q}^{+}(x,x^{\prime })=\langle 0|\varphi (x)\varphi (x^{\prime })|0\rangle =\sum_{\sigma }\varphi _{\sigma }(x)\varphi _{\sigma }^{\ast }(x^{\prime }), \label{Wigh1} \end{equation} where the collective index $\sigma $ specifies the solutions. Due to the symmetry of the problem under consideration, the spatial dependence of the eigenfunctions $\varphi _{\sigma }(x)$ can be taken in the standard plane-wave form, $e^{i\mathbf{k}\cdot \mathbf{z}}$. Substituting into the field equation, we find that the time-dependent part of the eigenfunctions is a linear combination of the functions $\tau ^{D/2}H_{\nu }^{(l)}(|\mathbf{k}|\tau )$, $l=1,2$, where $H_{\nu }^{(l)}(x)$ is the Hankel function and \begin{equation} \nu =\left[ D^{2}/4-D(D+1)\xi -m^{2}\alpha ^{2}\right] ^{1/2}. \label{knD} \end{equation} Different choices of the coefficients in this linear combination correspond to different choices of the vacuum state. We will consider the de Sitter-invariant Bunch-Davies vacuum \cite{Bunc78}, for which the coefficient of the part containing the function $H_{\nu }^{(1)}(|\mathbf{k}|\tau )$ is zero.
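The parameter $\nu $ of Eq. (\ref{knD}) can be checked numerically; the sketch below (the helper `nu` is ours, not from the paper) verifies that conformal coupling with $m=0$ gives $\nu =1/2$ in any dimension, that minimal coupling with $m=0$ gives $\nu =D/2$, and that $\nu $ becomes pure imaginary for sufficiently large mass, the regime behind the oscillatory late-time damping discussed below.

```python
import cmath

def nu(D, xi, m, alpha=1.0):
    """nu = sqrt(D^2/4 - D(D+1) xi - m^2 alpha^2), Eq. (knD); the complex
    square root covers the pure-imaginary regime of large mass."""
    return cmath.sqrt(D**2 / 4 - D * (D + 1) * xi - (m * alpha)**2)

def xi_D(D):
    """Conformal coupling xi_D = (D-1)/(4D)."""
    return (D - 1) / (4 * D)

# conformally coupled massless field: D^2/4 - (D^2-1)/4 = 1/4, so nu = 1/2
for D in (2, 3, 4, 10):
    assert abs(nu(D, xi_D(D), 0.0) - 0.5) < 1e-12
# minimally coupled massless field: nu = D/2
assert abs(nu(3, 0.0, 0.0) - 1.5) < 1e-12
# heavy field: nu is pure imaginary
assert abs(nu(3, 0.0, 10.0).real) < 1e-12
```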
The corresponding eigenfunctions satisfying the periodicity conditions take the form \begin{equation} \varphi _{\sigma }(x)=C_{\sigma }\eta ^{D/2}H_{\nu }^{(1)}(k\eta )e^{i% \mathbf{k}_{p}\cdot \mathbf{z}_{p}+i\mathbf{k}_{q}\cdot \mathbf{z}% _{q}},\;\eta =\alpha e^{-t/\alpha }, \label{eigfuncD} \end{equation}% where we have decomposed the contributions from the uncompactified and compactified dimensions with the notations% \begin{eqnarray} \mathbf{k}_{p} &=&(k_{1},\ldots ,k_{p}),\;\mathbf{k}_{q}=(k_{p+1},\ldots ,k_{D}),\;k=\sqrt{\mathbf{k}_{p}^{2}+\mathbf{k}_{q}^{2}},\; \notag \\ \;k_{l} &=&2\pi n_{l}/L_{l},\;n_{l}=0,\pm 1,\pm 2,\ldots ,\;l=p+1,\ldots ,D. \label{kD1D2} \end{eqnarray}% Note that we have transformed the Hankel function to have the positive defined argument and instead of the conformal time $\tau $ the variable $% \eta $ is introduced which we will call the conformal time as well. The eigenfunctions are specified by the set $\sigma =(\mathbf{k}% _{p},n_{p+1},\ldots ,n_{D})$ and the coefficient $C_{\sigma }$ is found from the standard orthonormalization condition \begin{equation} -i\int d^{D}x\sqrt{|g|}g^{00}\varphi _{\sigma }(x)\overleftrightarrow{% \partial }_{\tau }\varphi _{\sigma ^{\prime }}^{\ast }(x)=\delta _{\sigma \sigma ^{\prime }}, \label{normcond} \end{equation}% where the integration goes over the spatial hypersurface $\tau =\mathrm{const% }$, and $\delta _{\sigma \sigma ^{\prime }}$ is understood as the Kronecker delta for the discrete indices and as the Dirac delta-function for the continuous ones. By using the Wronskian relation for the Hankel functions one finds% \begin{equation} C_{\sigma }^{2}=\frac{\alpha ^{1-D}e^{i(\nu -\nu ^{\ast })\pi /2}}{% 2^{p+2}\pi ^{p-1}L_{p+1}\cdots L_{D}}. 
\label{normCD} \end{equation} Having the complete set of eigenfunctions and using the mode-sum formula (% \ref{Wigh1}), for the positive frequency Wightman function we obtain the formula \begin{eqnarray} G_{p,q}^{+}(x,x^{\prime }) &=&\frac{\alpha ^{1-D}(\eta \eta ^{\prime })^{D/2}e^{i(\nu -\nu ^{\ast })\pi /2}}{2^{p+2}\pi ^{p-1}L_{p+1}\cdots L_{D}}% \int d\mathbf{k}_{p}\,e^{i\mathbf{k}_{p}\cdot \Delta \mathbf{z}_{p}} \notag \\ &&\times \sum_{\mathbf{n}_{q}=-\infty }^{+\infty }e^{i\mathbf{k}_{q}\cdot \Delta \mathbf{z}_{q}}H_{\nu }^{(1)}(k\eta )[H_{\nu }^{(1)}(k\eta ^{\prime })]^{\ast }, \label{GxxD} \end{eqnarray}% with $\Delta \mathbf{z}_{p}=\mathbf{z}_{p}-\mathbf{z}_{p}^{\prime }$, $% \Delta \mathbf{z}_{q}=\mathbf{z}_{q}-\mathbf{z}_{q}^{\prime }$, and% \begin{equation} \sum_{\mathbf{n}_{q}=-\infty }^{+\infty }=\sum_{n_{p+1}=-\infty }^{+\infty }\ldots \sum_{n_{D}=-\infty }^{+\infty }. \label{nqsum} \end{equation}% As a next step, we apply to the series over $n_{p+1}$ in (\ref{GxxD}) the Abel-Plana formula \cite{Most97,Saha07Gen}% \begin{equation} \sideset{}{'}{\sum}_{n=0}^{\infty }f(n)=\int_{0}^{\infty }dx\,f(x)+i\int_{0}^{\infty }dx\,\frac{f(ix)-f(-ix)}{e^{2\pi x}-1}, \label{Abel} \end{equation}% where the prime means that the term $n=0$ should be halved. It can be seen that after the application of this formula the term in the expression of the Wightman function which corresponds to the first integral on the right of (% \ref{Abel}) is the Wightman function for dS spacetime with the topology $% \mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$, which, in the notations given above, corresponds to the function $G_{p+1,q-1}^{+}(x,x^{\prime })$. As a result one finds \begin{equation} G_{p,q}^{+}(x,x^{\prime })=G_{p+1,q-1}^{+}(x,x^{\prime })+\Delta _{p+1}G_{p,q}^{+}(x,x^{\prime }). 
\label{G1decomp} \end{equation} The second term on the right of this formula is induced by the compactness of the $z^{p+1}$-direction and is given by the expression \begin{eqnarray} \Delta _{p+1}G_{p,q}^{+}(x,x^{\prime }) &=&\frac{2\alpha ^{1-D}(\eta \eta ^{\prime })^{D/2}}{(2\pi )^{p+1}V_{q-1}}\int d\mathbf{k}_{p}\,e^{i\mathbf{k}_{p}\cdot \Delta \mathbf{z}_{p}}\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }e^{i\mathbf{k}_{q-1}\cdot \Delta \mathbf{z}_{q-1}} \notag \\ &&\times \int_{0}^{\infty }dx\,\frac{x\cosh (\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}\Delta z^{p+1})}{\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}(e^{L_{p+1}\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}}-1)} \notag \\ &&\times \left[ K_{\nu }(\eta x)I_{-\nu }(\eta ^{\prime }x)+I_{\nu }(\eta x)K_{\nu }(\eta ^{\prime }x)\right] , \label{GxxD2} \end{eqnarray} where $\mathbf{n}_{q-1}=(n_{p+2},\ldots ,n_{D})$, $I_{\nu }(x)$ and $K_{\nu }(x)$ are the modified Bessel functions, and the notation \begin{equation} k_{\mathbf{n}_{q-1}}^{2}=\sum_{l=p+2}^{D}(2\pi n_{l}/L_{l})^{2} \label{knD1+2} \end{equation} is introduced. In formula (\ref{GxxD2}), $V_{q-1}=L_{p+2}\cdots L_{D}$ is the volume of the $(q-1)$-dimensional compact subspace. Note that the combination of the modified Bessel functions appearing in formula (\ref{GxxD2}) can also be written in the form \begin{eqnarray} K_{\nu }(\eta x)I_{-\nu }(\eta ^{\prime }x)+I_{\nu }(\eta x)K_{\nu }(\eta ^{\prime }x) &=&\frac{2}{\pi }\sin (\nu \pi )K_{\nu }(\eta x)K_{\nu }(\eta ^{\prime }x) \notag \\ &&+I_{\nu }(\eta x)K_{\nu }(\eta ^{\prime }x)+K_{\nu }(\eta x)I_{\nu }(\eta ^{\prime }x), \label{eqformComb} \end{eqnarray} which explicitly shows that this combination is symmetric under the replacement $\eta \rightleftarrows \eta ^{\prime }$.
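The Abel-Plana formula (\ref{Abel}) used in this decomposition can be verified numerically for a simple test function; this is purely an illustrative sketch, not part of the derivation. For $f(x)=e^{-x}$ one has $i[f(ix)-f(-ix)]=2\sin x$, so both sides are real and easy to evaluate.

```python
import math

# Abel-Plana for f(x) = exp(-x):
#   sum'_{n>=0} f(n) = int_0^inf f(x) dx + int_0^inf 2 sin(x)/(e^{2 pi x}-1) dx
lhs = 0.5 + sum(math.exp(-n) for n in range(1, 60))   # prime: n = 0 term halved

first = 1.0                                           # int_0^inf e^{-x} dx
h, upper = 1e-4, 20.0                                 # midpoint rule; the tail
second = sum(2 * math.sin(x) / math.expm1(2 * math.pi * x) * h   # beyond 20 is
             for x in (h * (k + 0.5) for k in range(int(upper / h))))  # negligible
rhs = first + second

print(abs(lhs - rhs))  # difference should be well below 1e-5
```

The left side equals $1/2 + 1/(e-1)$, and `math.expm1` keeps the integrand accurate near $x=0$, where it tends to the finite limit $1/\pi$.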
In formula (\ref{GxxD2}) the integration over the angular part of $\mathbf{k}_{p}$ can be done by using the formula \begin{equation} \int d\mathbf{k}_{p}\,e^{i\mathbf{k}_{p}\cdot \Delta \mathbf{z}_{p}}F(|\mathbf{k}_{p}|)=\frac{(2\pi )^{p/2}}{|\Delta \mathbf{z}_{p}|^{p/2-1}}\int_{0}^{\infty }d|\mathbf{k}_{p}|\,|\mathbf{k}_{p}|^{p/2}F(|\mathbf{k}_{p}|)J_{p/2-1}(|\mathbf{k}_{p}||\Delta \mathbf{z}_{p}|), \label{intang} \end{equation} where $J_{\mu }(x)$ is the Bessel function. After repeated application of formula (\ref{G1decomp}), the Wightman function for dS spacetime with spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$ is presented in the form \begin{equation} G_{p,q}^{+}(x,x^{\prime })=G_{\mathrm{dS}}^{+}(x,x^{\prime })+\Delta G_{p,q}^{+}(x,x^{\prime }), \label{GdSGcomp} \end{equation} where $G_{\mathrm{dS}}^{+}(x,x^{\prime })\equiv G_{D,0}^{+}(x,x^{\prime })$ is the corresponding function for uncompactified dS spacetime and the part \begin{equation} \Delta G_{p,q}^{+}(x,x^{\prime })=\sum_{l=1}^{q}\Delta _{D-l+1}G_{D-l,l}^{+}(x,x^{\prime }), \label{DeltaGtop} \end{equation} is induced by the toroidal compactification of the $q$-dimensional subspace. The two-point function in uncompactified dS spacetime was investigated in \cite{Cand75,Dowk76,Bunc78,Bros96,Bous02} (see also \cite{Birr82}) and is given by the formula \begin{equation} G_{\mathrm{dS}}^{+}(x,x^{\prime })=\frac{\alpha ^{1-D}\Gamma (D/2+\nu )\Gamma (D/2-\nu )}{2^{(D+3)/2}\pi ^{(D+1)/2}\left( u^{2}-1\right) ^{(D-1)/4}}P_{\nu -1/2}^{(1-D)/2}(u), \label{WFdS} \end{equation} where $P_{\nu }^{\mu }(x)$ is the associated Legendre function of the first kind and \begin{equation} u=-1+\frac{\sum_{l=1}^{D}(z^{l}-z^{\prime l})^{2}-(\eta -\eta ^{\prime })^{2}}{2\eta \eta ^{\prime }}. \label{u} \end{equation} An alternative form is obtained by using the relation between the associated Legendre function and the hypergeometric function.
\section{Vacuum expectation values of the field square} \label{sec:vevPhi2} We denote by $\langle \varphi ^{2}\rangle _{p,q}$ the VEV of the field square in dS spacetime with spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$. Having the Wightman function, we can evaluate this VEV by taking the coincidence limit of the arguments. Of course, in this limit the two-point functions are divergent and some renormalization procedure is needed. The important point here is that the local geometry is not changed by the toroidal compactification, so the divergences are the same as in uncompactified dS spacetime. Since in our procedure we have already extracted from the Wightman function the part $G_{\mathrm{dS}}^{+}(x,x^{\prime })$, the renormalization of the VEVs is reduced to the renormalization of the uncompactified dS part, which has already been done in the literature. The VEV of the field square is presented in the decomposed form \begin{equation} \langle \varphi ^{2}\rangle _{p,q}=\langle \varphi ^{2}\rangle _{\mathrm{dS}}+\langle \varphi ^{2}\rangle _{c},\;\langle \varphi ^{2}\rangle _{c}=\sum_{l=1}^{q}\Delta _{D-l+1}\langle \varphi ^{2}\rangle _{D-l,l}, \label{phi2dSplComp} \end{equation} where $\langle \varphi ^{2}\rangle _{\mathrm{dS}}$ is the VEV in uncompactified $\mathrm{dS}_{D+1}$ and the part $\langle \varphi ^{2}\rangle _{c}$ is due to the compactness of the $q$-dimensional subspace. Here the term $\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}$ is defined by a relation similar to (\ref{G1decomp}): \begin{equation} \langle \varphi ^{2}\rangle _{p,q}=\langle \varphi ^{2}\rangle _{p+1,q-1}+\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}. \label{phi2decomp} \end{equation} This term is the part of the VEV induced by the compactness of the $z^{p+1}$-direction.
This part is directly obtained from (\ref{GxxD2}) in the coincidence limit of the arguments: \begin{eqnarray} \Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &=&\frac{2\alpha ^{1-D}\eta ^{D}}{2^{p}\pi ^{p/2+1}\Gamma (p/2)V_{q-1}}\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }d|\mathbf{k}_{p}|\,|\mathbf{k}_{p}|^{p-1} \notag \\ &&\times \int_{0}^{\infty }dx\,\frac{xK_{\nu }(x\eta )\left[ I_{-\nu }(x\eta )+I_{\nu }(x\eta )\right] }{\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}(e^{L_{p+1}\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}}-1)}. \label{phi2Dc} \end{eqnarray} Introducing, instead of $|\mathbf{k}_{p}|$, a new integration variable $y=\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}$ and expanding $(e^{Ly}-1)^{-1}$, the integral over $y$ is explicitly evaluated, and one finds \begin{eqnarray} \Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &=&\frac{4\alpha ^{1-D}\eta ^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx\,xK_{\nu }(x\eta ) \notag \\ &&\times \frac{I_{-\nu }(x\eta )+I_{\nu }(x\eta )}{(nL_{p+1})^{p-1}}f_{(p-1)/2}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{DelPhi2} \end{eqnarray} where we use the notation \begin{equation} f_{\mu }(y)=y^{\mu }K_{\mu }(y). \label{fmunot} \end{equation} By taking into account the relation between the conformal and synchronous time coordinates, we see that the VEV of the field square is a function of the combinations $L_{l}/\eta =L_{l}e^{t/\alpha }/\alpha $.
In the limit when the length of one of the compactified dimensions, say $z^{l}$, $l\geqslant p+2$, is large, $L_{l}\rightarrow \infty $, the main contribution to the sum over $n_{l}$ in (\ref{DelPhi2}) comes from large values of $n_{l}$, and we can replace the summation by integration in accordance with the formula \begin{equation} \frac{1}{L_{l}}\sum_{n_{l}=-\infty }^{+\infty }f(2\pi n_{l}/L_{l})=\frac{1}{\pi }\int_{0}^{\infty }dy\,f(y). \label{sumtoint} \end{equation} The integral over $y$ is evaluated by using the formula from \cite{Prud86}, and we see that (\ref{DelPhi2}) reduces to the corresponding formula for the topology $\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$. For a conformally coupled massless scalar field one has $\nu =1/2$ and $\left[ I_{-\nu }(x)+I_{\nu }(x)\right] K_{\nu }(x)=1/x$. In this case the corresponding integral in formula (\ref{DelPhi2}) is explicitly evaluated and we find \begin{equation} \Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}=\frac{2(\eta /\alpha )^{D-1}}{(2\pi )^{p/2+1}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\frac{f_{p/2}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{(L_{p+1}n)^{p}},\;\xi =\xi _{D},\;m=0. \label{DelPhi2Conf} \end{equation} In particular, the topological part is always positive. Formula (\ref{DelPhi2Conf}) could also be obtained from the corresponding result in $(D+1)$-dimensional Minkowski spacetime with spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$, taking into account that the two problems are conformally related: $\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}=a^{1-D}(\eta )\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}^{\mathrm{(M)}}$, where $a(\eta )=\alpha /\eta $ is the scale factor. This relation is valid for any conformally flat bulk. A similar formula holds for the total topological part $\langle \varphi ^{2}\rangle _{c}$.
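The identity $\left[ I_{-1/2}(x)+I_{1/2}(x)\right] K_{1/2}(x)=1/x$ used for the conformally coupled case follows from the elementary closed forms of the half-integer-order modified Bessel functions; the quick numeric check below uses those closed forms directly (helper names are ours).

```python
import math

# Closed forms for order 1/2 (see any Bessel-function reference):
def I_half(x):
    return math.sqrt(2 / (math.pi * x)) * math.sinh(x)

def I_minus_half(x):
    return math.sqrt(2 / (math.pi * x)) * math.cosh(x)

def K_half(x):
    return math.sqrt(math.pi / (2 * x)) * math.exp(-x)

# (cosh x + sinh x) e^{-x} = 1, and the sqrt prefactors combine to 1/x:
for x in (0.1, 1.0, 5.0, 25.0):
    lhs = (I_minus_half(x) + I_half(x)) * K_half(x)
    assert abs(lhs - 1 / x) < 1e-10
```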
Note that in this case the expressions for $\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}$ are obtained from the formulae for $\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}^{\mathrm{(M)}}$ by replacing the lengths $L_{l}$ of the compactified dimensions with the comoving lengths $\alpha L_{l}/\eta $, $l=p+1,\ldots ,D$. Now we turn to the investigation of the topological part $\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}$ in the VEV of the field square in the asymptotic regions of the ratio $L_{p+1}/\eta $. For small values of this ratio, $L_{p+1}/\eta \ll 1$, we introduce a new integration variable $y=L_{p+1}x$. Taking into account that for large values of $x$ one has $\left[ I_{-\nu }(x)+I_{\nu }(x)\right] K_{\nu }(x)\approx 1/x$, we find that to leading order $\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}$ coincides with the corresponding result for a conformally coupled massless field, given by (\ref{DelPhi2Conf}): \begin{equation} \Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}\approx (\eta /\alpha )^{D-1}\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}^{\mathrm{(M)}},\;L_{p+1}/\eta \ll 1. \label{DelPhi2Poq} \end{equation} For a fixed value of the ratio $L_{p+1}/\alpha $, this limit corresponds to $t\rightarrow -\infty $, and the topological part $\langle \varphi ^{2}\rangle _{c}$ behaves like $\exp [-(D-1)t/\alpha ]$. Taking into account that the part $\langle \varphi ^{2}\rangle _{\mathrm{dS}}$ is time independent, we conclude that in the early stages of the cosmological expansion the topological part dominates in the VEV of the field square. For small values of the ratio $\eta /L_{p+1}$, we introduce a new integration variable $y=L_{p+1}x$ and expand the integrand by using the formulae for the modified Bessel functions for small arguments.
For real values of the parameter $\nu $, after the integration over $y$ by using the formula from \cite{Prud86}, to leading order we find \begin{equation} \Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}\approx \frac{2^{(1-p)/2+\nu }\eta ^{D-2\nu }\Gamma (\nu )}{\pi ^{(p+3)/2}V_{q-1}\alpha ^{D-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\frac{f_{(p+1)/2-\nu }(nL_{p+1}k_{\mathbf{n}_{q-1}})}{(L_{p+1}n)^{p+1-2\nu }},\;\eta /L_{p+1}\ll 1. \label{DelPhi2Mets} \end{equation} In the case of a conformally coupled massless scalar field, $\nu =1/2$ and this formula reduces to the exact result given by Eq. (\ref{DelPhi2Conf}). For fixed values of $L_{p+1}/\alpha $, the limit under consideration corresponds to late stages of the cosmological evolution, $t\rightarrow +\infty $, and the topological part $\langle \varphi ^{2}\rangle _{c}$ is suppressed by the factor $\exp [-(D-2\nu )t/\alpha ]$. Hence, in this limit the total VEV is dominated by the uncompactified dS part $\langle \varphi ^{2}\rangle _{\mathrm{dS}}$. Note that formula (\ref{DelPhi2Mets}) also describes the asymptotic behavior of the topological part in the strong curvature regime corresponding to small values of the parameter $\alpha $. In the same limit, for pure imaginary values of the parameter $\nu $, in a similar way we find the following asymptotic behavior: \begin{eqnarray} \Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &\approx &\frac{4\alpha ^{1-D}\eta ^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\frac{1}{(nL_{p+1})^{p+1}} \notag \\ &&\times {\mathrm{Re}}\left[ 2^{i|\nu |}\Gamma (i|\nu |)(nL_{p+1}/\eta )^{2i|\nu |}f_{(p+1)/2-i|\nu |}(nL_{p+1}k_{\mathbf{n}_{q-1}})\right] .
\label{DelPhi2MetsIm} \end{equation} Defining the phase $\phi _{0}$ by the relation \begin{equation} Be^{i\phi _{0}}=2^{i|\nu |}\Gamma (i|\nu |)\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }n^{2i|\nu |-p-1}f_{(p+1)/2-i|\nu |}(nL_{p+1}k_{\mathbf{n}_{q-1}}), \label{Bphi0} \end{equation} we write this formula in terms of the synchronous time: \begin{equation} \Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}\approx \frac{4\alpha e^{-Dt/\alpha }B}{(2\pi )^{(p+3)/2}L_{p+1}^{p+1}V_{q-1}}\cos [2|\nu |t/\alpha +2|\nu |\ln (L_{p+1}/\alpha )+\phi _{0}]. \label{DelPhi2MetsIm1} \end{equation} Hence, in the case under consideration, at late stages of the cosmological evolution the topological part is suppressed by the factor $\exp (-Dt/\alpha )$ and the damping of the corresponding VEV has an oscillatory nature. \section{Vacuum energy-momentum tensor} \label{sec:vevEMT2} In this section we investigate the VEV of the energy-momentum tensor of a scalar field in $\mathrm{dS}_{D+1}$ with a toroidally compactified $q$-dimensional subspace. In addition to describing the physical structure of the quantum field at a given point, this quantity acts as the source of gravity in the semiclassical Einstein equations and therefore plays an important role in modelling self-consistent dynamics involving the gravitational field. Having the Wightman function and the VEV of the field square, we can evaluate the vacuum energy-momentum tensor by using the formula \begin{equation} \langle T_{ik}\rangle _{p,q}=\lim_{x^{\prime }\rightarrow x}\partial _{i}\partial _{k}^{\prime }G_{p,q}^{+}(x,x^{\prime })+\left[ \left( \xi -\frac{1}{4}\right) g_{ik}\nabla _{l}\nabla ^{l}-\xi \nabla _{i}\nabla _{k}-\xi R_{ik}\right] \langle \varphi ^{2}\rangle _{p,q}, \label{emtvev1} \end{equation} where $R_{ik}=Dg_{ik}/\alpha ^{2}$ is the Ricci tensor for $\mathrm{dS}_{D+1}$.
Note that in (\ref{emtvev1}) we have used the expression for the classical energy-momentum tensor which differs from the standard one by a term which vanishes on the solutions of the field equation (see, for instance, Ref. \cite{Saha04}). As in the case of the field square, the VEV of the energy-momentum tensor is presented in the form \begin{equation} \langle T_{i}^{k}\rangle _{p,q}=\langle T_{i}^{k}\rangle _{p+1,q-1}+\Delta _{p+1}\langle T_{i}^{k}\rangle _{p,q}. \label{TikDecomp} \end{equation} Here $\langle T_{i}^{k}\rangle _{p+1,q-1}$ is the part corresponding to dS spacetime with $p+1$ uncompactified and $q-1$ toroidally compactified dimensions, and $\Delta _{p+1}\langle T_{i}^{k}\rangle _{p,q}$ is induced by the compactness along the $z^{p+1}$-direction. Repeated application of formula (\ref{TikDecomp}) allows us to write the VEV in the form \begin{equation} \langle T_{i}^{k}\rangle _{p,q}=\langle T_{i}^{k}\rangle _{\mathrm{dS}}+\langle T_{i}^{k}\rangle _{c},\;\langle T_{i}^{k}\rangle _{c}=\sum_{l=1}^{q}\Delta _{D-l+1}\langle T_{i}^{k}\rangle _{D-l,l}, \label{TikComp} \end{equation} where the part corresponding to uncompactified dS spacetime, $\langle T_{i}^{k}\rangle _{\mathrm{dS}}$, is explicitly decomposed. The part $\langle T_{i}^{k}\rangle _{c}$ is induced by the compactness of the $q$-dimensional subspace. The second term on the right of formula (\ref{TikDecomp}) is obtained by substituting the corresponding parts of the Wightman function, Eq. (\ref{GxxD2}), and of the field square, Eq. (\ref{DelPhi2}), into formula (\ref{emtvev1}).
After the lengthy calculations for the energy density one finds% \begin{eqnarray} \Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q} &=&\frac{2\alpha ^{-1-D}\eta ^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}% _{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx \notag \\ &&\times \frac{xF^{(0)}(x\eta )}{(nL_{p+1})^{p-1}}f_{(p-1)/2}(nL_{p+1}\sqrt{% x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{DelT00} \end{eqnarray}% with the notation% \begin{eqnarray} F^{(0)}(y) &=&y^{2}\left[ I_{-\nu }^{\prime }(y)+I_{\nu }^{\prime }(y)\right] K_{\nu }^{\prime }(y)+D(1/2-2\xi )y\left[ (I_{-\nu }(y)+I_{\nu }(y))K_{\nu }(y)\right] ^{\prime } \notag \\ &&+\left[ I_{-\nu }(y)+I_{\nu }(y)\right] K_{\nu }(y)\left( \nu ^{2}+2m^{2}\alpha ^{2}-y^{2}\right) , \label{F0} \end{eqnarray}% and the function $f_{\mu }(y)$ is defined by formula (\ref{fmunot}). The vacuum stresses are presented in the form (no summation over $i$)% \begin{eqnarray} \Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q} &=&A_{p,q}-\frac{4\alpha ^{-1-D}\eta ^{D+2}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{% \mathbf{n}_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx\,xK_{\nu }(x\eta ) \notag \\ &&\times \frac{I_{-\nu }(x\eta )+I_{\nu }(x\eta )}{(nL_{p+1})^{p+1}}% f_{p}^{(i)}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{DelTii} \end{eqnarray}% where we have introduced the notations% \begin{eqnarray} f_{p}^{(i)}(y) &=&f_{(p+1)/2}(y),\;i=1,\ldots ,p, \notag \\ f_{p}^{(p+1)}(y) &=&-y^{2}f_{(p-1)/2}(y)-pf_{(p+1)/2}(y), \label{fp+1} \\ f_{p}^{(i)}(y) &=&(nL_{p+1}k_{i})^{2}f_{(p-1)/2}(y),\;i=p+2,\ldots ,D. 
\notag
\end{eqnarray}
In formula (\ref{DelTii}) (no summation over $i$, $i=1,\ldots ,D$),
\begin{eqnarray}
A_{p,q} &=&\left[ \left( \xi -\frac{1}{4}\right) \nabla _{l}\nabla ^{l}-\xi g^{ii}\nabla _{i}\nabla _{i}-\xi R_{i}^{i}\right] \Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} \notag \\
&=&\frac{2\alpha ^{-1-D}\eta ^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx\,\frac{xF(x\eta )}{(nL_{p+1})^{p-1}}f_{(p-1)/2}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{A}
\end{eqnarray}
with the notation
\begin{eqnarray}
F(y) &=&\left( 4\xi -1\right) y^{2}\left[ I_{-\nu }^{\prime }(y)+I_{\nu }^{\prime }(y)\right] K_{\nu }^{\prime }(y)+\left[ 2(D+1)\xi -D/2\right] y\left( \left[ I_{-\nu }(y)+I_{\nu }(y)\right] K_{\nu }(y)\right) ^{\prime } \notag \\
&&+\left[ I_{-\nu }(y)+I_{\nu }(y)\right] K_{\nu }(y)\left[ \left( 4\xi -1\right) \left( y^{2}+\nu ^{2}\right) \right] . \label{Fy}
\end{eqnarray}
As seen from the formulae obtained, the topological parts in the VEVs are time-dependent and, hence, the local dS symmetry is broken by them. As an additional check of our calculations, it can be seen that the topological terms satisfy the trace relation
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}=D(\xi -\xi _{D})\nabla _{l}\nabla ^{l}\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}+m^{2}\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}. \label{tracerel}
\end{equation}
In particular, from here it follows that the topological part in the VEV of the energy-momentum tensor is traceless for a conformally coupled massless scalar field. The trace anomaly is contained in the uncompactified dS part only. This result is expected, since the trace anomaly is determined by the local geometry, which is not changed by the toroidal compactification.
For a conformally coupled massless scalar field $\nu =1/2$ and, by using the formulae for $I_{\pm 1/2}(x)$ and $K_{1/2}(x)$, after the integration over $x$ from formulae (\ref{DelT00}), (\ref{DelTii}) we find (no summation over $i$)
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}=-\frac{2(\eta /\alpha )^{D+1}}{(2\pi )^{p/2+1}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\frac{g_{p}^{(i)}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{(nL_{p+1})^{p+2}}, \label{DelTConf}
\end{equation}
with the notations
\begin{eqnarray}
g_{p}^{(0)}(y) &=&g_{p}^{(i)}(y)=f_{p/2+1}(y),\;i=1,\ldots ,p, \notag \\
g_{p}^{(i)}(y) &=&(nL_{p+1}k_{i})^{2}f_{p/2}(y),\;i=p+2,\ldots ,D, \label{gi} \\
g_{p}^{(p+1)}(y) &=&-(p+1)f_{p/2+1}(y)-y^{2}f_{p/2}(y). \notag
\end{eqnarray}
As in the case of the field square, this formula can be directly obtained by using the conformal relation between the problem under consideration and the corresponding problem in $(D+1)$-dimensional Minkowski spacetime with the spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$. Note that in this case the topological part in the energy density is always negative and is equal to the vacuum stresses along the uncompactified dimensions. In particular, for the case $D=3$, $p=0$ (topology $(\mathrm{S}^{1})^{3}$) and for $L_{i}=L$, $i=1,2,3$, from formulae (\ref{TikComp}), (\ref{DelTConf}) for the topological part in the vacuum energy density we find $\langle T_{0}^{0}\rangle _{c}=-0.8375(a(\eta )L)^{-4}$ (see, for example, Ref. \cite{Most97}). The general formulae for the topological part in the VEV of the energy density are simplified in the asymptotic regions of the parameters.
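The coefficient $0.8375$ quoted above can be checked numerically. For a conformally coupled massless field the problem is conformal to a massless scalar on the Minkowski torus $(\mathrm{S}^{1})^{3}$, whose zeta-regularized vacuum energy density admits the standard Epstein-zeta representation $\rho =-Z_{3}(4)/(2\pi ^{2}L^{4})$ with $Z_{3}(4)=\sum_{\mathbf{n}\neq 0}|\mathbf{n}|^{-4}$ (this representation, obtained from the reflection formula, is assumed here rather than taken from the text). A brute-force lattice sum recovers the quoted number:

```python
import numpy as np

# Epstein zeta sum Z_3(4) = sum' |n|^-4 over nonzero n in Z^3, truncated to
# a ball |n| <= N, plus the analytic tail  int_N^inf 4*pi*r^2 * r^-4 dr = 4*pi/N.
N = 80
n = np.arange(-N, N + 1)
n1, n2, n3 = np.meshgrid(n, n, n, indexing="ij")
r2 = (n1**2 + n2**2 + n3**2).astype(float)
mask = (r2 > 0) & (r2 <= N**2)
Z3 = np.sum(1.0 / r2[mask] ** 2) + 4.0 * np.pi / N

# Zeta-regularized energy density of the conformal massless scalar on T^3:
# rho = -Z_3(4)/(2 pi^2 L^4); the dimensionless coefficient should be ~0.8375.
coeff = Z3 / (2.0 * np.pi**2)
```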
For small values of the ratio $L_{p+1}/\eta $ we can see that to the leading order $\Delta _{p+1}\langle T_{i}^{k}\rangle _{p,q}$ coincides with the corresponding result for a conformally coupled massless field (no summation over $i$):
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}\approx -\frac{2(\eta /\alpha )^{D+1}}{(2\pi )^{p/2+1}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\frac{g_{p}^{(i)}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{(nL_{p+1})^{p+2}},\;L/\eta \ll 1. \label{TiiSmall}
\end{equation}
For fixed values of the ratio $L_{p+1}/\alpha $, this formula describes the asymptotic behavior of the VEV at the early stages of the cosmological evolution corresponding to $t\rightarrow -\infty $. In this limit the topological part behaves as $\exp [-(D+1)t/\alpha ]$ and, hence, it dominates the part corresponding to the uncompactified dS spacetime, which is time-independent. In particular, the total energy density is negative. In the opposite limit of small values for the ratio $\eta /L_{p+1}$ we introduce in the formulae for the VEV of the energy-momentum tensor an integration variable $y=L_{p+1}x$ and expand the integrands over $\eta /L_{p+1}$. For real values of the parameter $\nu $, for the energy density to the leading order we find
\begin{eqnarray}
\Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q} &\approx &\frac{2^{\nu }D\left[ D/2-\nu +2\xi \left( 2\nu -D-1\right) \right] }{(2\pi )^{(p+3)/2}L_{p+1}^{1-q}V_{q-1}\alpha ^{D+1}}\Gamma (\nu ) \notag \\
&&\times \left( \frac{\eta }{L_{p+1}}\right) ^{D-2\nu }\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\frac{f_{(p+1)/2-\nu }(nL_{p+1}k_{\mathbf{n}_{q-1}})}{n^{(p+1)/2-\nu }}. \label{T00smallEta}
\end{eqnarray}
In particular, this energy density is positive for a minimally coupled scalar field and for a conformally coupled massive scalar field. Note that for a conformally coupled massless scalar the coefficient in (\ref{T00smallEta}) vanishes.
For the vacuum stresses the second term on the right of formula (\ref{DelTii}) is suppressed with respect to the first term given by (\ref{A}) by the factor $(\eta /L_{p+1})^{2}$ for $i=1,\ldots ,p+1$, and by the factor $(\eta k_{i})^{2}$ for $i=p+2,\ldots ,D$. As a result, to the leading order we have the relation (no summation over $i$)
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}\approx \frac{2\nu }{D}\Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q},\;\eta /L_{p+1}\ll 1, \label{TiismallEta}
\end{equation}
between the energy density and stresses, $i=1,\ldots ,D$. The coefficient in this relation does not depend on $p$ and, hence, it holds for the total topological part of the VEV as well. Thus, in the limit under consideration the topological parts in the vacuum stresses are isotropic and correspond to a gravitational source with a barotropic equation of state. Note that this limit corresponds to late times in terms of the synchronous time coordinate $t$, $(\alpha /L_{p+1})e^{-t/\alpha }\ll 1$, and the topological part in the VEV is suppressed by the factor $\exp [-(D-2\nu )t/\alpha ]$. For a conformally coupled massless scalar field the coefficient of the leading term vanishes and the topological parts are suppressed by the factor $\exp [-(D+1)t/\alpha ]$. As the uncompactified dS part is constant, it dominates the topological part at the late stages of the cosmological evolution.
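In fluid language, relation (\ref{TiismallEta}) can be restated as follows: identifying the energy density $\rho $ and the isotropic pressure $P$ with the mixed components of the topological part, one has
\begin{equation*}
\rho =\Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q},\qquad P=-\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}\approx -\frac{2\nu }{D}\rho ,
\end{equation*}
so the equation-of-state parameter is $w=P/\rho =-2\nu /D$.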
For small values of the ratio $\eta /L_{p+1}$ and for purely imaginary $\nu $, in a way similar to that used for the field square we can see that the energy density behaves like
\begin{equation}
\Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q}\approx \frac{4De^{-Dt/\alpha }BB_{D}}{(2\pi )^{(p+3)/2}\alpha L_{p+1}^{p+1}V_{q-1}}\sin [2|\nu |t/\alpha +2|\nu |\ln (L_{p+1}/\alpha )+\phi _{0}+\phi _{1}], \label{T00ImEta}
\end{equation}
where the coefficient $B_{D}$ and the phase $\phi _{1}$ are defined by the relation
\begin{equation}
|\nu |(1/2-2\xi )+i\left[ D/4-(D+1)\xi \right] =B_{D}e^{i\phi _{1}}. \label{DefBD}
\end{equation}
In the same limit, the main contribution to the vacuum stresses comes from the term $A_{p,q}$ in (\ref{A}) and one has (no summation over $i$)
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}\approx \frac{8|\nu |e^{-Dt/\alpha }BB_{D}}{(2\pi )^{(p+3)/2}\alpha L_{p+1}^{p+1}V_{q-1}}\cos [2|\nu |t/\alpha +2|\nu |\ln (L_{p+1}/\alpha )+\phi _{0}+\phi _{1}]. \label{TiiImEta}
\end{equation}
As we see, in the limit under consideration the vacuum stresses are isotropic to the leading order.
\section{Twisted scalar field}
\label{sec:Twisted}
One of the characteristic features of field theory on backgrounds with non-trivial topology is the appearance of topologically inequivalent field configurations \cite{Isha78}. In this section we consider the case of a twisted scalar field on the background of dS spacetime with the spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$, assuming that the field obeys the antiperiodicity condition (no summation over $l$)
\begin{equation}
\varphi (t,\mathbf{z}_{p},\mathbf{z}_{q}+L_{l}\mathbf{e}_{l})=-\varphi (t,\mathbf{z}_{p},\mathbf{z}_{q}), \label{AntiPer}
\end{equation}
where $\mathbf{e}_{l}$ is the unit vector along the direction of the coordinate $z^{l}$, $l=p+1,\ldots ,D$.
The corresponding Wightman function and the VEVs of the field square and the energy-momentum tensor can be found in a way similar to that for the field with periodicity conditions. The eigenfunctions have the form given by (\ref{eigfuncD}), where now
\begin{equation}
k_{l}=2\pi (n_{l}+1/2)/L_{l},\;n_{l}=0,\pm 1,\pm 2,\ldots ,\;l=p+1,\ldots ,D. \label{nltwisted}
\end{equation}
The positive frequency Wightman function is still given by formula (\ref{GxxD}). For the summation over $n_{p+1}$ we apply the Abel-Plana formula in the form \cite{Most97,Saha07Gen}
\begin{equation}
\sum_{n=0}^{\infty }f(n+1/2)=\int_{0}^{\infty }dx\,f(x)-i\int_{0}^{\infty }dx\,\frac{f(ix)-f(-ix)}{e^{2\pi x}+1}. \label{abel2}
\end{equation}
Similar to (\ref{GxxD2}), for the correction to the Wightman function due to the compactness of the $(p+1)$th spatial direction this leads to the result
\begin{eqnarray}
\Delta _{p+1}G_{p,q}^{+}(x,x^{\prime }) &=&-\frac{2\alpha ^{1-D}(\eta \eta ^{\prime })^{D/2}}{(2\pi )^{p+1}V_{q-1}}\int d\mathbf{k}_{p}\,e^{i\mathbf{k}_{p}\cdot \Delta \mathbf{z}_{p}}\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }e^{i\mathbf{k}_{q-1}\cdot \Delta \mathbf{z}_{q-1}} \notag \\
&&\times \int_{0}^{\infty }dx\,\frac{x\cosh (\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}\Delta z^{p+1})}{\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}(e^{L_{p+1}\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}}+1)} \notag \\
&&\times \left[ K_{\nu }(\eta x)I_{-\nu }(\eta ^{\prime }x)+I_{\nu }(\eta x)K_{\nu }(\eta ^{\prime }x)\right] , \label{GxxD2tw}
\end{eqnarray}
where now $\mathbf{k}_{q-1}=(\pi (2n_{p+2}+1)/L_{p+2},\ldots ,\pi (2n_{D}+1)/L_{D})$, and
\begin{equation}
k_{\mathbf{n}_{q-1}}^{2}=\sum_{l=p+2}^{D}\left[ \pi (2n_{l}+1)/L_{l}\right] ^{2}.
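The half-integer Abel-Plana formula (\ref{abel2}) is easy to verify numerically for a simple test function. In the sketch below the choice $f(x)=e^{-ax}$ is ours, not from the text; for it $f(ix)-f(-ix)=-2i\sin (ax)$, so both sides of the formula are real:

```python
import numpy as np
from scipy.integrate import quad

# Check: sum_{n>=0} f(n+1/2) = int_0^inf f dx - i int_0^inf [f(ix)-f(-ix)]/(e^{2 pi x}+1) dx
# for f(x) = exp(-a x); the second integrand becomes -2 sin(a x)/(e^{2 pi x}+1).
a = 1.0
lhs = sum(np.exp(-a * (n + 0.5)) for n in range(200))  # = 1/(2 sinh(a/2))
first = 1.0 / a                                        # int_0^inf e^{-a x} dx
second, _ = quad(lambda x: -2.0 * np.sin(a * x) / (np.exp(2 * np.pi * x) + 1.0),
                 0.0, np.inf)
rhs = first + second
```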
\label{knqtw}
\end{equation}
Taking the coincidence limit of the arguments, for the VEV of the field square we find
\begin{eqnarray}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &=&\frac{4\alpha ^{1-D}\eta ^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }(-1)^{n}\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx\,xK_{\nu }(x\eta ) \notag \\
&&\times \frac{I_{-\nu }(x\eta )+I_{\nu }(x\eta )}{(nL_{p+1})^{p-1}}f_{(p-1)/2}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{DelPhi2tw}
\end{eqnarray}
with the notations being the same as in (\ref{DelPhi2}). Note that in this formula we can put $\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }=2^{q-1}\sum_{\mathbf{n}_{q-1}=0}^{+\infty }$. In particular, for the topology $\mathrm{R}^{D-1}\times \mathrm{S}^{1}$ with a single compactified dimension of length $L_{D}=L$, considered in \cite{Saha07}, we have $\langle \varphi ^{2}\rangle _{c}=\Delta _{D}\langle \varphi ^{2}\rangle _{D-1,1}$ with the topological part given by the formula
\begin{eqnarray}
\langle \varphi ^{2}\rangle _{c} &=&\frac{4\alpha ^{1-D}}{(2\pi )^{D/2+1}}\sum_{n=1}^{\infty }(-1)^{n}\int_{0}^{\infty }dx\,x^{D-1} \notag \\
&&\times \left[ I_{-\nu }(x)+I_{\nu }(x)\right] K_{\nu }(x)\frac{K_{D/2-1}(nLx/\eta )}{(nLx/\eta )^{D/2-1}}. \label{phi2SingComp}
\end{eqnarray}
In figure \ref{fig1} we have plotted the topological part in the VEV of the field square in the case of a conformally coupled twisted massive scalar ($\xi =\xi _{D}$) for $D=3$ dS spacetime with spatial topologies $\mathrm{R}^{2}\times \mathrm{S}^{1}$ (left panel) and $(\mathrm{S}^{1})^{3}$ (right panel) as a function of $L/\eta =Le^{t/\alpha }/\alpha $. In the second case we have taken the same length for all compactified dimensions: $L_{1}=L_{2}=L_{3}\equiv L$. The numbers near the curves correspond to the values of the parameter $m\alpha $.
Note that we have presented conformally non-trivial examples and the graphs are plotted by using the general formula (\ref{DelPhi2tw}). For the case $m\alpha =1$ the parameter $\nu $ is pure imaginary and in accordance with the asymptotic analysis given above the behavior of the field square is oscillatory for large values of the ratio $% L/\eta $. For the left panel in figure \ref{fig1} the first zero is for $% L/\eta \approx 8.35$ and for the right panel $L/\eta \approx 9.57$. \begin{figure}[tbph] \begin{center} \begin{tabular}{cc} \epsfig{figure=sahfig1a.eps,width=7.cm,height=6cm} & \quad % \epsfig{figure=sahfig1b.eps,width=7.cm,height=6cm}% \end{tabular}% \end{center} \caption{The topological part in the VEV of the field square in the case of a conformally coupled twisted massive scalar ($\protect\xi =\protect\xi _{D}$% ) for $D=3$ dS spacetime with spatial topologies $\mathrm{R}^{2}\times \mathrm{S}^{1}$ (left panel) and $(\mathrm{S}^{1})^{3}$ (right panel) as a function of $L/\protect\eta =Le^{t/\protect\alpha }/\protect\alpha $. In the second case we have taken the lengths for all compactified dimensions being the same: $L_{1}=L_{2}=L_{3}\equiv L$. The numbers near the curves correspond to the values of the parameter $m\protect\alpha $. } \label{fig1} \end{figure} In the case of a twisted scalar field the formulae for the VEV of the energy-momentum tensor are obtained from formulae for the untwisted field given in the previous section (formulae (\ref{DelT00}), (\ref{DelTii})) with $k_{\mathbf{n}_{q-1}}^{2}$ from (\ref{knqtw}) and by making the replacement% \begin{equation} \sum_{n=1}^{\infty }\rightarrow \sum_{n=1}^{\infty }(-1)^{n},\ \label{SumRepl} \end{equation}% and $k_{i}=2\pi (n_{i}+1/2)/L_{i}$ in expression (\ref{fp+1}) for $% f^{(i)}(y) $, $i=p+2,\ldots ,D$. 
In figure \ref{fig2} the topological part in the VEV of the energy density is plotted versus $L/\eta $ for a conformally coupled twisted massive scalar in $D=3$ dS spacetime with spatial topologies $\mathrm{R}^{2}\times \mathrm{S}^{1}$ (left panel) and $(\mathrm{S}^{1})^{3}$ (right panel). In the latter case the lengths of the compactified dimensions are the same. As in figure \ref{fig1}, the numbers near the curves are the values of the parameter $m\alpha $. For $m\alpha =1$ the behavior of the energy density for large values of $L/\eta $ corresponds to damped oscillations. In the case $m\alpha =0.25$ (the parameter $\nu $ is real) for the example on the left panel the topological part of the energy density vanishes for $L/\eta \approx 9.2$, takes the minimum value $\langle T_{0}^{0}\rangle _{c}\approx -3.1\cdot 10^{-6}/\alpha ^{4}$ for $L/\eta \approx 12.9$ and then monotonically goes to zero. For the example on the right panel with $m\alpha =0.25$ the energy density vanishes for $L/\eta \approx 45$, takes the minimum value $\langle T_{0}^{0}\rangle _{c}\approx -1.1\cdot 10^{-8}/\alpha ^{4}$ for $L/\eta \approx 64.4$ and then monotonically goes to zero. For a conformally coupled massless scalar field in the case of topology $(\mathrm{S}^{1})^{3}$ one has $\langle T_{0}^{0}\rangle _{c}=0.1957(\eta /\alpha L)^{4}$. Note that in the case of topology $\mathrm{R}^{D-1}\times \mathrm{S}^{1}$ for a conformally coupled massless scalar we have the formulae (no summation over $l$)
\begin{eqnarray}
\langle T_{l}^{l}\rangle _{c} &=&\frac{1-2^{-D}}{\pi ^{(D+1)/2}}\left( \frac{\eta }{\alpha L}\right) ^{D+1}\zeta _{\mathrm{R}}(D+1)\Gamma \left( \frac{D+1}{2}\right) , \label{TllConfTwS1} \\
\langle T_{D}^{D}\rangle _{c} &=&-D\langle T_{0}^{0}\rangle _{c},\;\xi =\xi _{D},\;m=0, \label{T00ConfTwS1}
\end{eqnarray}
where $l=0,1,\ldots ,D-1$, and $\zeta _{\mathrm{R}}(x)$ is the Riemann zeta function. The corresponding energy density is positive.
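For $D=3$ the coefficient in (\ref{TllConfTwS1}) evaluates in closed form to $(7/8)\,\zeta _{\mathrm{R}}(4)\Gamma (2)/\pi ^{2}=7\pi ^{2}/720\approx 0.0960$, and together with (\ref{T00ConfTwS1}) the topological part of the tensor is traceless. A quick numerical check (a sketch; only the closed forms quoted above are used):

```python
import math

D = 3
zeta4 = math.pi**4 / 90.0  # Riemann zeta(4)
# Coefficient of (eta/(alpha*L))^{D+1} in <T^l_l>_c, l = 0, ..., D-1:
coeff = (1 - 2.0**(-D)) * zeta4 * math.gamma((D + 1) / 2) / math.pi**((D + 1) / 2)
# <T^D_D>_c = -D <T^0_0>_c, so the trace of the topological part vanishes:
trace = D * coeff - D * coeff
```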
\begin{figure}[tbph]
\begin{center}
\begin{tabular}{cc}
\epsfig{figure=sahfig2a.eps,width=7.cm,height=6cm} & \quad \epsfig{figure=sahfig2b.eps,width=7.cm,height=6cm}
\end{tabular}
\end{center}
\caption{The same as in figure \protect\ref{fig1} for the topological part of the energy density. }
\label{fig2}
\end{figure}
\section{Conclusion}
\label{sec:Conc}
In topologically non-trivial spaces the periodicity conditions imposed on possible field configurations change the spectrum of the vacuum fluctuations and lead to Casimir-type contributions to the VEVs of physical observables. Motivated by the fact that dS spacetime naturally arises in a number of contexts, in the present paper we consider the quantum vacuum effects for a massive scalar field with general curvature coupling in $(D+1)$-dimensional dS spacetime having the spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$. Both cases of periodicity and antiperiodicity conditions along the compactified dimensions are discussed. As a first step in the investigation of the vacuum densities we evaluate the positive frequency Wightman function. This function gives comprehensive insight into vacuum fluctuations and determines the response of a particle detector of the Unruh-DeWitt type. Applying the Abel-Plana formula to the corresponding mode-sum, we have derived a recurrence relation which presents the Wightman function for $\mathrm{dS}_{D+1}$ with topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$ in the form of the sum of the Wightman function for the topology $\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$ and the additional part $\Delta _{p+1}G_{p,q}^{+}$ induced by the compactness of the $(p+1)$th spatial dimension. The latter is given by formula (\ref{GxxD2}) for a scalar field with periodicity conditions and by formula (\ref{GxxD2tw}) for a twisted scalar field.
The repeated application of formula (\ref{G1decomp}) allows us to present the Wightman function as the sum of the uncompactified dS and topological parts, formula (\ref{DeltaGtop}). As the toroidal compactification does not change the local geometry, in this way the renormalization of the bilinear field products in the coincidence limit is reduced to that for uncompactified $\mathrm{dS}_{D+1}$. Further, taking the coincidence limit in the formulae for the Wightman function and its derivatives, we evaluate the VEVs of the field square and the energy-momentum tensor. For a scalar field with periodicity conditions the corresponding topological parts are given by formula (\ref{DelPhi2}) for the field square and by formulae (\ref{DelT00}) and (\ref{DelTii}) for the energy density and vacuum stresses, respectively. The trace anomaly is contained in the uncompactified dS part only and the topological part satisfies the standard trace relation (\ref{tracerel}). In particular, this part is traceless for a conformally coupled massless scalar. In this case the problem under consideration is conformally related to the corresponding problem in $(D+1)$-dimensional Minkowski spacetime with the spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$ and the topological parts in the VEVs are related by the formulae $\langle \varphi ^{2}\rangle _{c}=(\eta /\alpha )^{D-1}\langle \varphi ^{2}\rangle _{c}^{\mathrm{(M)}}$ and $\langle T_{i}^{k}\rangle _{c}=(\eta /\alpha )^{D+1}\langle T_{i}^{k}\rangle _{c}^{\mathrm{(M)}}$. Note that for a conformally coupled massless scalar the topological part in the energy density is always negative and is equal to the vacuum stresses along the uncompactified dimensions. For the general case of the curvature coupling, in the limit $L_{p+1}/\eta \ll 1$ the leading terms in the asymptotic expansion of the VEVs coincide with the corresponding expressions for a conformally coupled massless field.
In particular, this limit corresponds to the early stages of the cosmological expansion, $t\rightarrow -\infty $, and the topological parts behave like $e^{-(D-1)t/\alpha }$ for the field square and like $% e^{-(D+1)t/\alpha }$ for the energy-momentum tensor. Taking into account that the uncompactified dS part is time independent, from here we conclude that in the early stages of the cosmological evolution the topological part dominates in the VEVs. In the opposite asymptotic limit corresponding to $% \eta /L_{p+1}\ll 1$, the behavior of the topological parts depends on the value of the parameter $\nu $. For real values of this parameter the leading terms in the corresponding asymptotic expansions are given by formulae (\ref% {DelPhi2Mets}) and (\ref{T00smallEta}) for the field square and the energy-momentum tensor respectively. The corresponding vacuum stresses are isotropic and the topological part of the energy-momentum tensor corresponds to the gravitational source of the barotropic type with the equation of state parameter equal to $-2\nu /D$. In the limit under consideration the topological part in the energy density is positive for a minimally coupled scalar field and for a conformally coupled massive scalar field. In particular, this limit corresponds to the late stages of the cosmological evolution, $t\rightarrow +\infty $, and the topological parts of the VEVs are suppressed by the factor $e^{-(D-2\nu )t/\alpha }$ for both the field square and the energy-momentum tensor. For a conformally coupled massless field the coefficient of the leading term in the asymptotic expansion vanishes and the topological part is suppressed by the factor $% e^{-(D+1)t/\alpha }$. In the limit $\eta /L_{p+1}\ll 1$ and for pure imaginary values of the parameter $\nu $ the asymptotic behavior of the topological parts in the VEVs of the field square and the energy-momentum tensor is described by formulae (\ref{DelPhi2MetsIm1}), (\ref{T00ImEta}), (% \ref{TiiImEta}). 
These formulae present the leading term in the asymptotic expansion of the topological parts at late stages of the cosmological evolution. In this limit the topological terms oscillate with an amplitude going to zero as $e^{-Dt/\alpha }$ for $t\rightarrow +\infty $. The phases of the oscillations for the energy density and vacuum stresses are shifted by $\pi /2$. In section \ref{sec:Twisted} we have considered the case of a scalar field with antiperiodicity conditions along the compactified directions. The Wightman function and the VEVs of the field square and the energy-momentum tensor are evaluated in a way similar to that for the field with periodicity conditions. The corresponding formulae are obtained from the formulae for the untwisted field with $k_{\mathbf{n}_{q-1}}^{2}$ defined by Eq. (\ref{knqtw}) and by making the replacement (\ref{SumRepl}). In this case we have also presented the graphs of the topological parts in the VEVs of the field square and the energy-momentum tensor for $\mathrm{dS}_{4}$ with the spatial topologies $\mathrm{R}^{2}\times \mathrm{S}^{1}$ and $(\mathrm{S}^{1})^{3}$.
\section*{Acknowledgments}
AAS would like to acknowledge the hospitality of the INFN Laboratori Nazionali di Frascati, Frascati, Italy. The work of AAS was supported in part by the Armenian Ministry of Education and Science Grant. The work of SB has been supported in part by the European Community Human Potential Program under contract MRTN-CT-2004-005104 ``Constituents, fundamental forces and symmetries of the Universe'' and by INTAS under contract 05-7928.
\section{Introduction}\label{sec:intro} Very metal-poor stars (hereafter, VMP stars, with [Fe/H] $\le -2.0$)\footnote{[A/B] = $\log(N_{\rm A}/N_{\rm B})- \log(N_{\rm A}/N_{\rm B})_{\odot}$, and $\log \epsilon_{\rm A} = \log(N_{\rm A}/N_{\rm H})+12$ for elements A and B.} belonging to the halo population of the Galaxy are believed to have formed at the earliest epoch of star formation, and preserve at their surfaces the chemical composition produced by the first generations of stars. Studies of the chemical composition of VMP stars have, in the past decade, proven to be very important for understanding individual nucleosynthesis processes \citep[e.g., ][]{mcwilliam95,ryan96,cayrel04}. In particular, the abundances of the neutron-capture elements provide strong constraints on the modeling of explosive nucleosynthesis, and for identifying the likely astrophysical sites in which they are produced. One surprising result found by previous studies is the existence of a large scatter in measured abundance ratios between the neutron-capture elements and other metals (e.g., Ba/Fe). The scatter appears most significant at [Fe/H]$\sim -3$. For instance, the abundance ratio of [Ba/Fe] in stars near this metallicity exhibits a range of about three dex \citep[e.g., ][]{ mcwilliam98}. Some of the Ba-enhanced stars have abundance patterns that can be explained by the slow neutron-capture process (s-process). These stars typically also have high abundances of carbon. Such carbon-enhanced, metal-poor, s-process-rich (hereafter, CEMP-s) stars are believed to belong to binary systems, the presently observed member having been polluted by an Asymptotic Giant Branch (AGB) companion through mass transfer at an earlier epoch. However, even after removing the CEMP-s stars from samples of objects with metallicity near [Fe/H] = $-3.0$, a large scatter in [Ba/Fe] remains. 
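The bracket notation defined in the footnote above is simple logarithmic bookkeeping, which the following minimal sketch makes explicit (the solar iron abundance $\log \epsilon (\mathrm{Fe})_{\odot }=7.5$ used in the example is purely illustrative):

```python
import math

def bracket(n_a, n_b, solar_ratio):
    """[A/B] = log10(N_A/N_B) - log10(N_A/N_B)_sun, from number densities."""
    return math.log10(n_a / n_b) - math.log10(solar_ratio)

def log_eps(n_a, n_h):
    """log eps_A = log10(N_A/N_H) + 12."""
    return math.log10(n_a / n_h) + 12.0

# A star whose Fe/H number ratio is 1000 times below solar has [Fe/H] = -3.
solar_fe_h = 10.0 ** (7.5 - 12.0)  # illustrative solar N_Fe/N_H
feh = bracket(solar_fe_h / 1000.0, 1.0, solar_fe_h)
```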
Even though the abundance ratios between neutron-capture elements and lighter metals like iron exhibit significant scatter, the abundance patterns of heavy neutron-capture elements (Ba--Os) in stars with [Fe/H]$< -2.5$ agree very well with that of the r-process component in solar system material \citep[e.g., ][]{sneden96,westin00,cayrel01}. This fact indicates that the neutron-capture elements in metal-poor, non-CEMP-s stars have originated from the r-process. From the abundance ratios of Ba/Eu or La/Eu, \citet{burris00} and \cite{simmerer04} concluded that significant contributions from the (main) s-process appear at [Fe/H]$\sim -2.3$, though a small s-process effect is also suggested at slightly lower [Fe/H] ($\sim -2.7$). The large scatter in [Ba/Fe] in stars with [Fe/H] $< -2.5$ means that the astrophysical site responsible for this process was different from the site responsible for Fe production, and the elements produced were not well mixed into the interstellar matter from which the low-mass stars we are currently observing were formed. Moreover, the above observational result suggests that the abundance pattern of the solar-system r-process component is not the result of a mixture of yields from individual nucleosynthesis events having very different abundance patterns, but rather is a result of nucleosynthesis processes that yielded very similar abundance patterns for these elements throughout Galactic history. This phenomenon is sometimes referred to as the ``universality'' of the r-process nucleosynthesis, and it is believed to be a key for understanding this process, as well as for the use of the heaviest radioactive nuclei (Th and U) as cosmo-chronometers to place direct limits on the ages of extremely metal-poor (EMP; [Fe/H] $\lesssim -3.0$) and VMP stars.
By way of contrast, measured abundance ratios between {\it light} neutron-capture elements, such as Sr ($Z=38$), and {\it heavy} ones ($Z\geq 56$) in VMP stars exhibit a large dispersion \citep[e.g., ][]{mcwilliam98}. The abundance patterns from Sr to Ba in stars with enhancements of r-process elements do {\it not} agree well with that of the solar-system r-process component \citep{sneden00}. In other words, the universality of the r-process apparently does not apply to the lighter neutron-capture elements. The processes responsible for the synthesis of light neutron-capture elements have recently been studied with growing interest. \citet{truran02} pointed out the existence of a group of stars that exhibit high Sr/Ba ratios but low Ba abundances, and emphasized the contrast to r-process-enhanced stars that have relatively low Sr/Ba ratios. These authors suggested that these elements have an origin quite different from that of the (heavier) elements formed by the ``main'' r-process. \citet{travaglio04} investigated the production of light neutron-capture elements, Sr, Y, and Zr, from the point of view of Galactic chemical evolution. They concluded that there must exist a process that has provided these light neutron-capture elements throughout Galactic history, which they referred to as the ``lighter element primary process (LEPP)'', and estimated its contribution to the abundances of Sr, Y, and Zr in solar-system material. Thus, although the existence of a process that produces light neutron-capture elements without significant contribution to the heavier ones is suggested by several previous studies, its astrophysical site remains unclear. This process is usually distinguished from the weak s-process, which is believed to occur in helium-burning massive stars, but thought to be inefficient at low metallicity.
It is clearly desirable to obtain additional observational constraints on the nature of this unknown process for the production of light neutron-capture elements, such as the elemental abundance patterns produced by the process, and the level of its contribution to stars with different metallicity. Although some observational studies \citep[e.g., ][]{johnson02b} have previously focused on this issue, and have provided important abundance results, more studies of larger samples of stars with very low metallicity and accurately measured abundances are required. Our previous study determined chemical abundances for 22 VMP stars, and discussed the abundance ratios of neutron-capture elements (Honda et al. 2004a, b: hereafter, Paper I and Paper II, respectively). In Paper II, we investigated the correlation between Sr and Ba abundances for stars with [Fe/H]$<-2.5$, and found that the scatter of the Sr abundance ($\log\epsilon$(Sr)) increases with decreasing Ba abundance. The present paper reports the chemical abundances for an additional 16 VMP stars, as well as for two stars that have already been studied in Paper II, based on observations obtained with the Subaru Telescope High Dispersion Spectrograph \citep[HDS; ][]{noguchi02} during its commissioning phase. In Section~\ref{sec:obs}, we describe the sample selection, details of the observations, and measurements of equivalent widths and radial velocities. Elemental abundance measurements are presented in Section~\ref{sec:ana}. In this section we also consider a new star that exhibits a significantly high Mg/Fe abundance ratio compared to other VMP stars in our study. In Section~\ref{sec:disc} we combine the chemical abundances for VMP stars reported by previous work with our new sample, and discuss the production of light neutron-capture elements in the early Galaxy.
\section{Observations and Measurements}\label{sec:obs} \subsection{Sample Selection and Photometry Data} Our sample of stars was originally selected from candidate very metal-poor stars identified in the HK survey of Beers and colleagues (Beers, Preston, \& Shectman 1985; 1992; Beers 1999) whose medium-resolution (1--2~{\AA}) spectra indicated that they possessed metallicities [Fe/H] $\le -2.5$. While in Paper II we selected objects that were likely to have excesses of r-process elements, our new sample has no such explicit bias. Indeed, none of the objects in the new sample exhibit significant excesses of heavy neutron-capture elements (see below). Nevertheless, the abundances of light neutron-capture elements such as Sr are distributed over a very wide range in our new sample. Table 1 provides a listing of the objects considered in our study, including their coordinates, details of the observations conducted, and their measured radial velocities (see below). The neutron-capture element-enhanced star CS~30306--132 was already analyzed in Paper II, but is also included here for comparison purposes. For the same reason, the bright metal-poor giant HD~122563 was also analyzed. Table 2 presents optical $BVRI$ photometry (Johnson--Kron--Cousins system) for our sample stars; with the exception of HD~122563, these data are drawn from photometry reported by Beers et al. (2005, in preparation). Errors in the $BVRI$ magnitudes are typically on the order of 0.01--0.02 magnitudes. Near-infrared $JHK$ photometry was, again with the exception of HD~122563, provided by the Two Micron All Sky Survey (2MASS) Point Source Catalog \citep{skrutskie97}. Estimates of the interstellar reddening, $E(B-V)$, for each object were obtained from the \citet{schlegel98} map; the extinction in each band was obtained from the reddening relation provided in their Table 6.
\subsection{High-Dispersion Spectroscopy} High-dispersion spectroscopy for the purpose of conducting our chemical abundance studies was obtained with Subaru/HDS, using a spectral resolving power $R = 50,000$ (a slit width of 0.72~arcsec), in April 2001, July 2001, and February 2002. The atmospheric dispersion corrector (ADC) was used in all observing runs. Two EEV-CCDs were used with no on-chip binning. The spectra cover the wavelength range 3550--5250 {\AA}, with a small gap in the coverage between 4350 and 4450~{\AA} due to the physical separation between the two CCDs. The object list and observing details are given in Table 1. Standard data reduction procedures (bias subtraction, flat-fielding, background subtraction, extraction, and wavelength calibration) were carried out with the IRAF echelle package\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc. under cooperative agreement with the National Science Foundation.}. In order to remove suspected cosmic-ray hits, we first applied median filtering to the two-dimensional CCD image. When a remarkably high count was found at one pixel in the original image compared to the median-filtered image, the recorded counts of that pixel were replaced by the value obtained in the median-filtered image.\footnote{At the beginning of the data reduction, we subtracted the bias level from each object frame. Since the inter-order regions of the object frame have very low counts, we first added a constant to the object frame, and then applied the IRAF task {\it median} to make the median-filtered image (with the parameter {\it x[y]window} set to 3). The constant added to the object frame is of the same order as the peak count due to photons from the star. We divided the object frame by the median-filtered image, resulting in a frame whose counts are close to unity, except for pixels affected by cosmic rays.
We identified pixels with counts more than 20~\% higher than those of the surrounding pixels as affected by cosmic rays, and replaced their counts by unity (this step is performed more effectively by applying the task {\it lineclean} with fitting in both the column and line directions). We then multiplied the obtained frame by the median-filtered image, and finally subtracted the constant added in the first step. We confirmed that the above parameter choices are quite conservative, i.e., this procedure does not affect pixels that are apparently free of cosmic-ray events, though some pixels that appear to be affected by cosmic rays remain.} Wavelength calibration was performed using Th-Ar spectra obtained a few times during each night of observations. The signal-to-noise (S/N) ratio given in Table~\ref{tab:obs} was estimated from the peak counts of the spectra in the 149th echelle order ($\sim 4000$~{\AA}). We note that the values are given per resolution element (6~km~s$^{-1}$). Since the resolution element is covered by about 6.7 pixels, the S/N ratios per pixel are lower by a factor of $\sim 2.6$ than those listed in the table. \subsection{Equivalent Widths}\label{sec:ew} Equivalent widths were measured for isolated atomic lines by fitting Gaussian profiles \citep{press96}, while a spectrum synthesis technique was applied to CH molecular bands, as well as to atomic lines that are significantly affected by hyperfine splitting. The measured equivalent widths of elements lighter than La ($Z\leq 56$) are given in Table~\ref{tab:ew}. Heavier elements are detected only in stars having relatively high abundances of neutron-capture elements. The measured equivalent widths of these heavier elements are given separately for 11 objects in Table~\ref{tab:ew2}. Comparisons of equivalent widths with those reported in Paper I are shown in Figure 1 for HD~122563 and CS~30306--132.
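As a concrete illustration of Gaussian equivalent-width measurement, the following minimal sketch fits a normalized absorption profile and evaluates the equivalent width analytically from the fit parameters. This is not the software actually used in this work; the line parameters and wavelength grid are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(wl, depth, center, sigma):
    """Normalized-flux model: continuum at unity minus a Gaussian absorption."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def equivalent_width(wl, flux, guess):
    """Fit a Gaussian to a normalized line profile and return the equivalent
    width in the wavelength units of `wl`. For a Gaussian absorption line,
    W = depth * sigma * sqrt(2*pi)."""
    popt, _ = curve_fit(gaussian_line, wl, flux, p0=guess)
    depth, center, sigma = popt
    return depth * abs(sigma) * np.sqrt(2.0 * np.pi)

# Synthetic line at 4554 A with depth 0.4 and sigma 0.05 A (illustrative values)
wl = np.linspace(4553.0, 4555.0, 201)
flux = gaussian_line(wl, 0.4, 4554.0, 0.05)
w = equivalent_width(wl, flux, guess=(0.3, 4554.1, 0.1))
print(round(w * 1000.0, 1))  # equivalent width in mA
```

Because the fitted profile is Gaussian, the integration over the line is analytic; no numerical quadrature of the residual flux is needed.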
While the same data were analyzed for CS~30306--132 in both studies, our data for HD~122563 are different from those in Paper I. Measurements of equivalent widths were made independently by W.A. (this work) and S.H. (Paper I) using different software, although both applied Gaussian fitting procedures. The two measurements show quite good agreement, although small departures appear for the strongest lines; the trends are opposite for the two stars, suggesting these are not systematic in origin. Most likely, they are the result of the two investigators adopting slightly different fitting ranges for strong lines. Comparisons with the measurements by \citet{cayrel04} are also shown in Figure 2 for HD~122563 and CS~30325--094. The agreement is again quite good; there are no obvious systematic differences between the two sets of measurements. The equivalent widths of the two resonance lines of Ba require special attention. Comparisons of the equivalent widths of the two lines are shown in Figure~\ref{fig:ewba}. The equivalent width of the \ion{Ba}{2} 4934~{\AA} line is sometimes larger than that of the 4554~{\AA} line, even though the $gf$-value of the 4934~{\AA} line is smaller, by a factor of two, than that of the 4554~{\AA} line. This apparent discrepancy is not necessarily due to measurement error. First, the wavelength of the former line is larger by $\sim$8\% than that of the latter, and a correction for this factor is required in order to compare the equivalent widths. In addition, the effect of hyperfine splitting is expected to be more significant for the 4934~{\AA} line than for the 4554~{\AA} line, if isotopes with odd mass numbers ($^{135}$Ba and $^{137}$Ba) contribute significantly to the absorption. We simulated these effects by calculating equivalent widths of both lines using a model atmosphere for a metal-poor giant, assuming the Ba isotope fractions of the r-process component in solar-system material \citep{arlandini99}.
The results are shown by the solid line in Figure~\ref{fig:ewba}. Our calculations demonstrate that, after the small correction for the difference of the wavelengths, the equivalent width of the 4554~{\AA} line is twice that of the 4934~{\AA} line in the weak-line limit ($W\lesssim 20$~m{\AA}), but the equivalent widths of the two lines are similar at $W\sim 120$~m{\AA}; the equivalent width of the 4934~{\AA} line is larger than that of the 4554~{\AA} line for even stronger lines. We also show, for comparison purposes, the results of calculations that do not include hyperfine splitting (the dashed line in the figure). The measured equivalent widths of the strongest lines are better explained by the calculations taking the effect of hyperfine splitting into account. This suggests a large contribution from isotopes with odd mass numbers, which are expected as a result of r-process nucleosynthesis. This would be consistent with the fact that the objects having strong Ba lines in our sample show abundance ratios (e.g., Ba/Eu ratios) similar to that of the r-process component in solar-system material (see below). \subsection{Radial velocities} We measured heliocentric radial velocities ($V_{\rm r}$) for each spectrum, as given in Table~\ref{tab:obs}. The measurements were made using clean, isolated \ion{Fe}{1} lines. The standard deviation of the values from individual lines is adopted as the error of the measurements reported in the table. Systematic errors in the measurements are not included in these errors. The wavelength calibration was made using Th-Ar comparison spectra that were obtained during the observing runs, without changing the spectrograph setup. Hence, the systematic error is basically determined by the stability of the spectrograph. The spectrum shift during one night is at most 0.5 pixel (0.45~km~s$^{-1}$), which corresponds to a temperature variation of four degrees centigrade \citep{noguchi02}, if the setup is not changed during the night.
Combining this possible systematic error with the random errors (the 3~$\sigma$ level is typically 0.6--0.8~km~s$^{-1}$), the uncertainties of the reported radial velocity measurements are $0.7\sim 1.0$~km~s$^{-1}$. For three stars in our sample (BS~16934--002, CS~30306--132, and CS~30325--028), two or three spectra were obtained on different observing runs. While no clear variation of $V_{\rm r}$ is found in BS~16934--002 and CS~30306--132, CS~30325--028 exhibits a 2.4~km~s$^{-1}$ difference between the two measurements, suggesting possible binarity of this object. Further monitoring of its radial velocity is required to determine its binary nature, which may be related to its chemical abundance properties. The heliocentric radial velocity of HD~122563 measured in our study is $V_{\rm r}=-25.81$~km~s$^{-1}$. This is similar to the previous measurements reported in Paper I and in \citet{norris96} ($V_{\rm r}=-26.5 \sim -27.2$~km~s$^{-1}$). A radial velocity $V_{\rm r}=-108.46$~km~s$^{-1}$ was obtained for CS~30306--132 from the 2001 July spectrum. This agrees, within the errors, with the results obtained from the independent measurement reported in Paper I using the same spectra.\footnote{Note that the sign of the $V_{\rm r}$ of CS~30306--132 in Paper I is not correct: the correct value is $V_{\rm r}=-109.01$~km~s$^{-1}$.} \section{Chemical Abundance Analysis and Results}\label{sec:ana} A standard analysis using model atmospheres was performed on the measured equivalent widths for most of the detectable elements, while a spectrum synthesis technique was applied for the CH molecular bands and atomic lines strongly affected by hyperfine splitting. For the calculation of synthetic spectra and equivalent widths using model atmospheres, we applied the LTE spectral synthesis code used in \citet{aoki97}. The treatment of van der Waals broadening of \citet{unsold55}, enhanced by a factor of 2.2 in $\gamma$, was used as in \citet{ryan96}.
The polynomial partition function approximations provided by \citet{irwin81} were applied to the heavy elements. In this section we describe the determination of stellar atmospheric parameters (subsection~\ref{sec:param}) and abundance measurements for carbon (subsection~\ref{sec:carbon}), $\alpha$-elements (subsection~\ref{sec:alpha}), and the neutron-capture elements (subsection~\ref{sec:ncap}) in detail. Estimates of uncertainties in abundance measurements are presented in subsection~\ref{sec:error}. \subsection{Atmospheric Parameters}\label{sec:param} Effective temperatures were estimated from the photometry listed in Table~\ref{tab:photometry} using the empirical temperature scale of \citet{alonso99}, after reddening corrections were carried out. The filter-corrections of \citet{fernie83} were applied to convert the Johnson--Kron--Cousins $V-R$, $V-I$, and $R-I$ colors to the Johnson colors used in the temperature scale of \citet{alonso99}. The filter-corrections provided by \citet{carpenter01} and \citet{alonso94} were applied to the 2MASS $J$, $H$, and $K$ data to derive magnitudes in the TCS system (via the CIT system) used by \citet{alonso99}. We have chosen to adopt the values determined from $V-K$, as described in Paper II (Table~\ref{tab:photometry}). For comparison purposes, we also give the difference between the effective temperature from $V-K$ and the average of the effective temperatures from the four optical colors ($B-V$, $V-R$, $V-I$, and $R-I$). The agreement between the effective temperatures derived from $V-K$ and from the average of the optical colors is quite good, with differences of less than $\pm 100$~K. The effective temperatures adopted in the abundance analyses are listed in Table~\ref{tab:param}. Note that near-infrared photometry data were not available for several objects when our first abundance analyses were made.
In these cases, we estimated the effective temperature from the optical colors (e.g., $B-V$), and performed a re-analysis if the effective temperature obtained from $V-K$ was significantly different from that adopted in our first analyses. In other cases, we did not repeat the analysis. The largest difference between the effective temperature from $V-K$ and the adopted one is 79~K (CS~30306--132), which is below the expected error of the effective temperature determination. An LTE abundance analysis was carried out for \ion{Fe}{1} and \ion{Fe}{2} lines using the model atmospheres of \citet{kurucz93}. We performed abundance analyses in the standard manner for the measured equivalent widths. Surface gravities ($\log g$) were determined from the ionization balance between \ion{Fe}{1} and \ion{Fe}{2}; the microturbulent velocity ($v_{\rm tur}$) was determined from the \ion{Fe}{1} lines by demanding no dependence of the derived abundance on equivalent widths. The final atmospheric parameters are reported in Table~\ref{tab:param}. We note that there exists a correlation between the lower excitation potential ($\chi$) of \ion{Fe}{1} lines and the abundance derived from individual lines. The slope is typically $-0.04$~dex~eV$^{-1}$. Such correlations were already reported by \citet{johnson02a}, who also applied the Kurucz model atmosphere grid to analyses of very metal-poor giants. This trend disappears if systematically lower effective temperatures (by about 150~K) are assumed \citep{johnson02a}. Hence, our effective temperatures might be systematically higher than those determined spectroscopically in previous studies. \subsection{Carbon}\label{sec:carbon} Carbon abundances were estimated from the CH molecular band at 4322~{\AA} following the procedures described in Paper II, using the same line list for the CH band. The band was detected in all stars except for CS~30327--038, for which only an upper limit on the carbon abundance could be estimated.
The carbon abundances of HD~122563 and CS~30306--132 were also determined in Paper II. The present work adopts atmospheric parameters similar to those in the previous one, and the agreement between the two measurements is fairly good. The carbon abundances of HD~122563 and CS~30325--094 were also measured by \citet{cayrel04} from the G-band of the CH $A-X$ system. While the agreement of the [C/Fe] values for HD~122563 between the two works is fairly good, the [C/Fe] determined by our present analysis for CS~30325--094 is 0.5~dex higher than that of \citet{cayrel04}. The discrepancy can be partially ($\sim$0.2~dex) explained by the small difference in the adopted effective temperatures (100~K). However, the reason for the remaining discrepancy is not clear. The carbon abundances of giants are expected to be affected by internal processes, i.e., the CNO cycle and dredge-up. Figure~\ref{fig:cl} shows the correlation between the carbon abundance ratio ([C/Fe]) and luminosity ($\log L$/L$_{\odot}$) that is estimated using the relation $L/L_{\odot}\propto (R/R_{\odot})^{2}(T_{\rm eff}/T_{{\rm eff}\odot})^{4}\propto (M/M_{\odot})(g/g_{\odot})^{-1}(T_{\rm eff}/T_{{\rm eff}\odot})^{4}$, assuming the mass of the stars to be 0.8~M$_{\odot}$, for our sample and those of Paper II and \citet{cayrel04}. While the bulk of stars with $\log L$/L$_{\odot} \lesssim 2.5$ have [C/Fe]$\sim +0.4$, [C/Fe] decreases with increasing luminosity.\footnote{A few stars have exceptionally high carbon abundances ([C/Fe]$\sim +1.0$) at high luminosity ($\log L$/L$_{\odot} \sim 2.7$). The well-known carbon-enhanced objects CS~22892--052 and CS~22949--037 are included in this group.} This decreasing trend can be interpreted as a result of internal processes. A similar tendency was already found by \citet{cayrel04}, who investigated the correlation between [C/Fe] and effective temperature for their sample.
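The luminosity estimate described above can be sketched as follows. This is a minimal illustration under the stated assumption of a stellar mass of 0.8~M$_{\odot}$; the solar reference values ($T_{{\rm eff}\odot}=5777$~K, $\log g_{\odot}=4.44$) are conventional choices, and the example stellar parameters are invented.

```python
import math

def log_luminosity(teff, logg, mass=0.8, teff_sun=5777.0, logg_sun=4.44):
    """log(L/Lsun) from L ∝ M g^{-1} Teff^4, i.e.
    log(L/Lsun) = log(M/Msun) + (log g_sun - log g) + 4 log(Teff/Teff_sun).
    Assumes a mass of 0.8 Msun; solar values are conventional assumptions."""
    return (math.log10(mass)
            + (logg_sun - logg)
            + 4.0 * math.log10(teff / teff_sun))

# e.g., a cool metal-poor giant with Teff = 4600 K and log g = 1.1 (invented)
print(round(log_luminosity(4600.0, 1.1), 2))
```

The relation is just the Stefan--Boltzmann law combined with $g \propto M/R^{2}$, so no radius estimate is needed; only $T_{\rm eff}$, $\log g$, and an assumed mass enter.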
The trend is clearer in our figure, where luminosity is adopted as the indicator of evolutionary stage. A more detailed analysis of the Cayrel et al. sample was made by \citet{spite05}, including N and Li abundances. A similar discussion is also found in \citet{carretta00} for more metal-rich stars. \subsection{The $\alpha$ and Iron-Peak Elements}\label{sec:alpha} There are numerous previous studies of the $\alpha$- and iron-peak elements in VMP stars. Our measurements confirm the usual over-abundances of $\alpha$-elements relative to iron that have been found previously for the majority of metal-poor stars. Figure~\ref{fig:mgfe} shows the trend of [Mg/Fe] as a function of [Fe/H]. A Mg over-abundance of about 0.4--0.6~dex is found for most stars in our sample. These values are similar to those of the majority of metal-poor dwarf stars reported by \citet{cohen04} and of the giants studied in Paper II. Note that these two studies found several stars with low abundances of $\alpha$-elements ([Mg/Fe]$\sim0.0$), while no such star is found in the present sample. The [Mg/Fe] values of giants studied by \citet{cayrel04} seem to be slightly (0.10--0.15~dex) lower than those of our stars. Since only two stars are in common between the two studies, this discrepancy may simply be due to differences in the samples under consideration. However, the [Mg/Fe] of HD~122563 determined by us is +0.54, while \citet{cayrel04} derived [Mg/Fe]=+0.36 for the same object ($\Delta$ [Mg/Fe]=0.18~dex). Hence, there seems to exist a systematic difference between the two studies. We note that the [Mg/Fe] of HD~122563 determined in Paper II ([Mg/Fe]=+0.54) agrees with our result. Measurements of Mg/Fe ratios are relatively insensitive to the adopted atmospheric parameters (subsection~\ref{sec:error}). Hence, the difference in model parameters between our study and \citet{cayrel04} for metal-poor giants is not likely to be the source of the discrepancies in the derived [Mg/Fe] values.
However, it should be noted that the Mg lines used in the abundance measurements in these two studies are somewhat different. For instance, the present study uses the \ion{Mg}{1} 4057.5 and 4703.0~{\AA} lines, which were not adopted by \citet{cayrel04}, while they used the \ion{Mg}{1} 4351.9 and 5528.405~{\AA} lines, which are not covered by our spectral range. Moreover, the $gf$-values of some lines are different (e.g., for the \ion{Mg}{1} 4570~{\AA} line, their $\log gf$ value is 0.3~dex higher than ours). These differences may partially explain the discrepancies in the Mg/Fe ratios. We attempted an analysis of the Mg abundance for HD~122563 using the equivalent widths and line data of \citet{cayrel04}, and derived [Mg/Fe]=+0.45. The discrepancy ($\Delta$ [Mg/Fe]=0.09~dex) then becomes much smaller than the original one. Nevertheless, there seems to remain a $\sim$0.1~dex discrepancy in [Mg/Fe] between the two studies for which no clear reason has been found. We call attention to the very large Mg over-abundance in BS~16934--002 ([Mg/Fe]=+1.25). Figure~\ref{fig:sp_mg} shows examples of Mg absorption features in this star, compared to those of HD~122563, which has similar atmospheric parameters and iron abundance. While the strengths of the Fe absorption lines are very similar in the two objects, the Mg absorption lines in BS~16934--002 are clearly much stronger than those in HD~122563. We note that the lower excitation potentials of the two Mg lines shown in this figure are quite different; hence, the stronger Mg absorption features in BS~16934--002 are not due to the (small) differences in atmospheric parameters between the two stars. The derived Ti abundance of BS~16934--002 relative to Fe is slightly higher than in other stars in our sample, including HD~122563. This is also seen in the spectra shown in Figure~\ref{fig:sp_mg}. Another remarkable abundance anomaly found in BS~16934--002, compared to other stars, is its high abundances of Al ([Al/Fe]=+0.03) and Sc ([Sc/Fe]=+0.7).
There are two other well-studied stars with [Fe/H]=$-3.5 \sim -4.0$ having extremely high Mg/Fe ratios (CS~22949--037: McWilliam et al. 1995, Norris et al. 2001, Depagne et al. 2002; CS~29498--043: Aoki et al. 2002a). These two stars also exhibit large over-abundances of C, N, O, and Si; hence they might be more properly interpreted as ``iron-deficient'' stars, perhaps related to supernovae that ejected only a small amount of material from their deepest layers \citep{tsujimoto03,umeda03}. It is thus of some significance that BS~16934--002 has {\it no clear excess} of either C or Si. Its iron abundance is more than five times higher than those of these two stars. Hence, the origin of the Mg excess in BS~16934--002 is perhaps quite different from that in CS~22949--037 and CS~29498--043. Further chemical abundance measurements based on higher-quality spectra are clearly needed to understand the nucleosynthesis processes responsible for this star. In particular, oxygen would be a key element to measure. \subsection{The Neutron-Capture Elements}\label{sec:ncap} The abundances of Sr and Ba were determined by a standard analysis of the \ion{Sr}{2} and \ion{Ba}{2} resonance lines. Because of their large transition probabilities and the relatively high abundances of these two elements in our stars, Sr and Ba are detected in all of our program objects. An exception is Ba in CS~30325--094, for which only an upper limit on its abundance was determined. In the analyses of the Ba lines, the effects of hyperfine splitting and isotope shifts are included, assuming the isotope ratios of the r-process component of solar-system material. The Ba in such VMP stars is expected to have originated from the r-process, except for stars with large excesses of carbon and s-process elements \citep[e.g., ][]{mcwilliam98}.
Since the metallicity range covered by our sample is [Fe/H]$\leq-2.4$, where contributions of the s-process are small in general (see Section 1), and no CEMP-s stars are included in our sample, the above assumption is quite reasonable for our analyses. The effect of hyperfine splitting is clearly seen in stars with strong Ba lines, as mentioned in subsection~\ref{sec:ew}. The light neutron-capture elements Y and Zr are detected in 14 and 11 stars in our sample, respectively. The derived abundances of these four elements (Sr, Y, Zr, and Ba) are listed in Table~\ref{tab:res}. An upper limit on the abundance is given when no absorption line is detected. Our sample includes no stars with significant over-abundances of heavy neutron-capture elements, with the exception of CS~30306--132, which was already studied in Paper II and is re-analyzed here for comparison purposes. For this reason, elements heavier than Ba are detected only in a limited number of objects. The abundances of these heavy elements are given in Table~\ref{tab:heavy}. The abundances were determined by standard analysis techniques, taking into account hyperfine splitting for La \citep{lawler01a} and Yb (Sneden 2003, private communication). For Eu, a spectrum synthesis technique was applied because the three \ion{Eu}{2} lines analyzed in the present work show remarkably strong effects of hyperfine splitting \citep{lawler01b}. An isotope ratio of $^{151}$Eu:$^{153}$Eu=50:50 was assumed in the analysis. Figure~\ref{fig:abpat1} shows the abundance patterns of seven elements from Zn to Eu for BS~16543--097 and BS~16080--054, whose Zn abundances are quite similar. BS~16543--097 is a star having relatively high abundances of heavy neutron-capture elements compared to most stars in our sample, while the abundances of these elements in BS~16080--054 are lower by about 1~dex than those in BS~16543--097. Nevertheless, the abundance patterns from Zn to Zr are very similar in the two stars.
Such large differences in the abundance ratios between light and heavy neutron-capture elements have already been reported in a number of VMP stars \citep[e.g., ][]{truran02}, and suggest the existence of two (or more) processes that produce neutron-capture elements. This point is discussed in detail in Section~\ref{sec:disc}. \subsection{Uncertainties}\label{sec:error} Random abundance errors in the analysis are estimated from the standard deviation of the abundances derived from individual lines for each species. These values are sometimes unrealistically small, however, when only a few lines are detected. For this reason, we adopted the larger of (a) the value for the listed species and (b) that for \ion{Fe}{1} as estimates of the random errors. Typical random errors are on the order of 0.1~dex. We estimated upper limits on the chemical abundances for several elements when no absorption line was detected. The error of equivalent-width measurements is estimated using the relation $\sigma_{W}\sim (\lambda n_{\rm pix}^{1/2}) /(R[S/N])$, where $R$ is the resolving power, $S/N$ is the signal-to-noise ratio per pixel, and $n_{\rm pix}$ is the number of pixels over which the integration is performed \citep{norris01}. The upper limit on equivalent widths, used to estimate the upper limits on abundances, is assumed to be $3\sigma_{W}$. Errors arising from uncertainties of the atmospheric parameters were evaluated for $\sigma (T_{\rm eff})=100$~K, $\sigma (\log g)=0.3$~dex, and $\sigma (v_{\rm tur}) =0.3$~km s$^{-1}$ for HD~122563, CS~30306--132, and CS~29516--041. HD~122563 is a well-known metal-poor giant. CS~30306--132 has high abundances of neutron-capture elements, while these elements in CS~29516--041 are relatively deficient. We applied the errors estimated for elements other than neutron-capture elements for HD~122563 to all other stars. The strengths of absorption lines of neutron-capture elements show a quite large scatter in our sample.
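The equivalent-width uncertainty relation above can be turned into a quick numerical estimate. This is an illustrative sketch only: the S/N value and pixel count below are invented, while $R = 50{,}000$ matches the instrument setup described in Section 2.

```python
import math

def ew_error(wavelength_a, snr_per_pix, npix, resolving_power=50000.0):
    """Equivalent-width uncertainty sigma_W ~ lambda * sqrt(n_pix) / (R * S/N),
    following the relation quoted in the text (Norris et al. 2001).
    `wavelength_a` is in Angstroms; the result is returned in milli-Angstroms."""
    sigma_w = wavelength_a * math.sqrt(npix) / (resolving_power * snr_per_pix)
    return sigma_w * 1000.0  # Angstrom -> mA

# For a line near 4554 A with S/N = 40 per pixel, integrated over 7 pixels:
sigma = ew_error(4554.0, 40.0, 7)
upper_limit = 3.0 * sigma  # 3-sigma upper limit on the equivalent width, in mA
print(round(sigma, 1), round(upper_limit, 1))
```

With these invented inputs the 3$\sigma_W$ limit is a few tens of m{\AA}, which is then converted to an abundance upper limit through the curve of growth.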
Since the errors in abundance measurements are sensitive to the line strengths, we estimated them taking the differences in line strengths into account, as follows. (1) Light neutron-capture elements -- we applied the errors for Sr, Y, and Zr estimated for HD~122563 to most stars. For stars with weak Sr lines, we adopted the errors estimated for CS~29516--041 (in such objects, Y and Zr are not detected). (2) Heavy neutron-capture elements -- we applied the errors estimated for CS~30306--132 to stars with strong Ba lines, while we adopted the errors estimated for HD~122563 for the other stars. Finally, we derived the total uncertainty by adding the individual errors in quadrature; the results are listed in Tables \ref{tab:res} and \ref{tab:heavy}. \section{Discussion}\label{sec:disc} In this section we focus on the elemental abundances of light and heavy neutron-capture elements in metal-poor stars, which are not significantly affected by the (main) s-process, and discuss their possible origins. We first inspect the full sample based on the abundances of Sr and Ba, taken to be representative of the light and heavy neutron-capture elements, respectively, combining our new measurements with the results of previous work (subsection~\ref{sec:srba}). Then, we confirm the similarity of the abundance patterns of light neutron-capture elements in VMP stars with high and low abundances of heavy neutron-capture elements (subsection~\ref{sec:sryzr}). Since the measured abundances of Sr and Ba are unfortunately rather uncertain because of the strengths of the resonance lines, particularly in stars with high abundances of neutron-capture elements, we investigate the abundances of Y, Zr, and Eu in detail for stars having relatively high abundances of neutron-capture elements (subsection~\ref{sec:yzreu}).
\subsection{Sr and Ba abundances}\label{sec:srba} As mentioned in Section~\ref{sec:intro}, Sr and Ba abundances in VMP stars exhibit a remarkably large scatter; even the Sr/Ba ratio has a large scatter. Though the scatter in Sr/Ba ratios appears to be somewhat larger at lower metallicity, the behavior is unclear, in particular at [Fe/H]$<-3.5$, where the sample is still very small. Previous studies \citep[e.g., ][ Paper II]{truran02} have shown that the Sr/Ba ratio exhibits a correlation with the abundance of Ba (i.e., heavy neutron-capture elements), and that the scatter is larger at lower Ba abundance in metal-poor stars. Figure~\ref{fig:srba} shows the abundances of Sr and Ba for our sample and others from previous studies, in which stars with [Fe/H]$>-2.5$ are excluded so as to select only VMP stars, for which contributions of the main s-process are small. The same diagram was shown in Paper II, but our new sample increases the number of stars with low Ba abundances. Here we adopt the values of $\log \epsilon$(X), rather than [X/Fe], because the abundances of neutron-capture elements do not show a clear correlation with Fe abundance. Moreover, abundances of VMP stars are usually expected to be determined by a quite limited number of nucleosynthesis events. If this is true, the abundance ratio relative to Fe is less meaningful, and indeed makes the discussion more complicated. We discuss the correlation with metallicity (Fe abundance) later in this section. As already shown in Paper II, the diagram of Sr and Ba abundances (Figure~\ref{fig:srba}) clearly demonstrates (1) the absence of Ba-enhanced stars with low Sr abundance, and (2) the larger scatter in Sr abundances at lower Ba abundance.\footnote{Some CEMP-s stars exhibit very high Ba abundances with a moderate excess of Sr (e.g., LP~706--7 = CS~31062--012: $\log \epsilon$(Ba)=1.65, and $\log \epsilon$(Sr)=0.67; Aoki et al. 2002b).
These stars are excluded from our sample, as mentioned in the caption of Figure~\ref{fig:srba}.} The former result implies that the process that produces heavy neutron-capture elements like Ba also forms light neutron-capture elements such as Sr. This gives a strong constraint on the modeling of the r-process yielding heavy neutron-capture elements, often referred to as the ``main'' r-process \citep[e.g., ][]{truran02}. The distribution of the Sr and Ba abundances in Figure~\ref{fig:srba} can be naturally explained by assuming two nucleosynthesis processes. One produces both Sr and Ba, while the other produces Sr with little Ba, as already discussed in Paper II. In order to investigate this point in more detail, we show Sr-Ba diagrams separately for three metallicity ranges: [Fe/H]$\leq -3.1$, $-3.1<$[Fe/H]$\leq -2.9$, and [Fe/H] $> -2.9$ (Figure~\ref{fig:srba3}). As can be seen in these figures, the stars with the lowest Fe abundances have very low Ba abundances, while Ba-rich stars appear at around [Fe/H]$\sim -3$, and then all stars with [Fe/H]$>-2.9$ have relatively high Ba abundances. This suggests that the main r-process operates only at [Fe/H] $\gtrsim -3$, and that its effect is more or less seen in all stars with higher metallicity. Similar points have already been made by previous papers \citep[e.g., ][]{qian00} to explain the large scatter of Ba abundances in stars at [Fe/H]$\sim -3$. An important result of the present analysis is that the scatter of Sr abundances persists even in the lowest metallicity regime. In other words, the presumed second process that produces Sr with little Ba operates even at [Fe/H]$<-3$. This clearly distinguishes the second process from the main r-process, which apparently did not significantly affect this metallicity range. We note that the three stars with [Fe/H]$\sim -4$ studied by \citet{francois03} have very low abundances of {\it both} Sr and Ba.
The [Fe/H]$\sim -4$ star CS~22949--037 has a relatively high Sr abundance ($\log \epsilon$(Sr)=$-0.72$, Depagne et al. 2002). However, the high abundances of C, N, O, and $\alpha$-elements relative to Fe found in this star mean that Fe is not a good metallicity indicator for this object. If this object is excluded, all stars with high Sr abundances in the most metal-poor group (top panel of Figure~\ref{fig:srba3}) have [Fe/H]$\gtrsim -3.5$. This may be another constraint on the process producing light neutron-capture elements at very low metallicity. However, the sample is still too small, and further measurements of Sr and Ba abundances at [Fe/H]$<-3.5$ are strongly desired. The above inspection demonstrates that the process forming both light and heavy neutron-capture elements (the main r-process) appears only in stars with [Fe/H]$\gtrsim -3$, while another process producing light neutron-capture elements appears at even lower metallicity. This metallicity dependence is important information for constraining the progenitor stars responsible for such events. However, there is some controversy over the interpretation of the metallicity of these stars. In this metallicity range, no clear age-metallicity relation can be assumed, since the metal enrichment is expected to be strongly dependent on the nature of the previous-generation stars and the formation processes of the low-mass stars we are currently observing. One possibility is that the metallicity traces the mass of the progenitor stars that contributed to the next-generation low-mass stars. \citet{tsujimoto00} suggested that lower-metallicity stars reflect the yields of supernovae whose progenitor mass is lower, on the basis of the theoretical prediction that supernovae from lower-mass progenitors yield smaller amounts of metals.\footnote{\citet{tsujimoto00} adopted [Mg/H] as a metallicity indicator, while our discussion makes use of [Fe/H].
However, the [Mg/Fe] ratio is almost constant in most stars in this metallicity range, as seen in subsection \ref{sec:alpha}.} The high abundances of light neutron-capture elements in some stars in the lowest metallicity range indicate that these elements were provided by supernovae with even lower-mass progenitors, while the main r-process is related to progenitors with 20~M$_{\odot}$, according to \citet{tsujimoto00}. On the other hand, the lower metallicity might result from the higher explosion energy of type II supernovae, which swept up larger amounts of interstellar matter and induced the formation of next-generation stars with lower metallicity. If higher-mass progenitor stars explode with higher energy, the existence of stars with high abundances of light neutron-capture elements at very low metallicity indicates that the process responsible for these elements is related to very massive stars. Because of the difficulties noted above, interpretations of the metallicity dependence of the processes producing light and/or heavy neutron-capture elements are still premature. Although our results provide constraints on such models, further investigation of each process is required. In particular, detailed chemical abundance studies of stars having high abundances of neutron-capture elements would be very useful. Previously, several stars with large overabundances of heavy neutron-capture elements have been studied in great detail \citep[e.g., ][]{sneden96,cayrel01}, providing strong constraints on models of the main r-process. By way of contrast, studies for stars with low Ba and high Sr abundances are still quite limited to date. 
Studies of the detailed abundance patterns of such objects, as has been done for stars with excesses of heavy r-process elements like CS~22892--052 \citep{sneden03} and CS~31082--001 \citep{hill02}, will provide a definitive constraint on modeling the presumed additional process that creates the light neutron-capture elements in the early Galaxy. \subsection{Abundance Ratios of Light Neutron-Capture Elements}\label{sec:sryzr} In the previous section we investigated the correlation between the abundances of Sr and Ba, which are detected in almost all stars in our sample. In this section we investigate the abundance ratios of the three light neutron-capture elements Sr, Y, and Zr ([Sr/Y] and [Y/Zr]). Figure~\ref{fig:sryzr} shows the values of [Sr/Y] and [Y/Zr] as functions of [Ba/H] for stars with [Fe/H] $<-2.5$. Since Y and Zr are detected only in stars with relatively high Sr abundances, stars with very low Sr abundance, which are located at the lower left in Figure~\ref{fig:srba}, are not included in Figure~\ref{fig:sryzr}. Therefore, the stars with high [Ba/H] reflect the results of the main r-process, while the other process, which produces light neutron-capture elements with little of the heavier species, is presumed to be responsible for stars with low [Ba/H] values in this figure. The lower panel of Figure~\ref{fig:sryzr} shows no clear dependence of [Y/Zr] on [Ba/H], indicating that these two processes produce very similar abundance ratios of light neutron-capture elements. The average value of [Y/Zr] for the 7 stars in our sample in which both Y and Zr are detected is $\langle$[Y/Zr]$\rangle=-0.44$, while that of all stars shown in the figure (29 stars), including objects studied in previous work, is $-0.38$. The Y and Zr abundance ratios in VMP stars have been investigated by \citet{johnson02b} in some detail. These authors found no correlation between [Y/Zr] and [Fe/H] in the very low metallicity range.
This means that there is no correlation between [Y/Zr] and the abundances of heavy neutron-capture elements, because their sample includes stars with both high and low abundances of Ba. Figure~\ref{fig:sryzr} shows this result more clearly, by increasing the sample of VMP stars and by adopting the Ba abundance, instead of Fe, as the horizontal axis. Since the main component of the s-process is the dominant contributor to Y and Zr in solar-system material \citep[e.g., ][]{arlandini99}, the [Y/Zr] ratio yielded by this process is similar to the solar-system one ([Y/Zr]$_{\rm main-s} \sim 0$). This is clearly different from the [Y/Zr] value found in the VMP stars shown in Figure~\ref{fig:sryzr}. The weak component of the s-process, which was introduced to explain the excess of light s-process nuclei in the Solar System, is a candidate for the process responsible for the stars with large excesses of light neutron-capture elements relative to the heavier ones. However, the yields of elements produced by this process decrease rapidly with increasing mass number at around $A\sim 90$. Indeed, the [Y/Zr] ratio predicted by \citet{raiteri93} for the weak s-process is [Y/Zr]$_{\rm weak-s}$=+0.3, which cannot explain the observational results for VMP stars. The [Y/Zr] ratio predicted for the r-process component in the Solar System, estimated by \citet{arlandini99}, is [Y/Zr]$_{\rm r-ss}\sim -0.3$, which agrees well with the values found for VMP stars.
This suggests that the light neutron-capture elements in VMP stars, including objects with large excesses of light neutron-capture elements relative to heavy ones, originated from the r-process, although the r-process fraction of these light neutron-capture elements is still quite uncertain.\footnote{Indeed, the decomposition of solar-system abundances by \citet{burris00} indicates a larger fraction of the r-process component for Y, resulting in [Y/Zr]$_{\rm r-ss}$ = 0.17, much higher than that found in r-process element-enhanced stars \citep[see][]{hill02}.} The predictions of this abundance ratio by existing models of the r-process exhibit rather large variations \citep{woosley94,wanajo02,wanajo03}, presumably reflecting the uncertainties of nuclear data and the wide range of model parameters, such as the electron fraction. The small scatter in the Y/Zr ratios found in the VMP stars suggests the existence of some mechanism that regulates this abundance ratio. The abundance ratio of Y/Zr in these stars and its small scatter could provide strong constraints on the modeling of r-process nucleosynthesis. The upper panel of Figure~\ref{fig:sryzr} shows [Sr/Y] as a function of [Ba/H]. The bulk of the stars have $0.0 \le {\rm [Sr/Y]} \le +0.50$, but the scatter is much larger than that observed in [Y/Zr]. This may reflect the large errors in Sr abundance measurements caused by the limited number of lines used for the measurements and the difficulty in the analysis of strong resonance lines. Given such relatively large uncertainties, we can only claim that the [Sr/Y] value is constant within a range of $\sim$0.3~dex. \citet{johnson02b} also investigated the [Sr/Y] ratios in metal-poor stars, and suggested a constant value of [Sr/Y] with a relatively large scatter. In Figure~\ref{fig:baeu}, we show for completeness the [Ba/Eu] ratios, which have been studied in detail in previous work \citep[e.g., ][]{mcwilliam98}.
The average of the [Ba/Eu] values is $\sim -0.6$, agreeing with the ratio of the r-process component in solar-system material. This indicates again that the neutron-capture elements, at least the heavy ones, in these metal-poor stars are dominated by the products of the r-process. \subsection{Ba-Enhanced stars}\label{sec:yzreu} In this section we investigate the correlation between the abundances of light and heavy neutron-capture elements, adopting Y, Zr and Eu abundances as indicators, rather than the Sr and Ba abundances. The first two elements represent the light neutron-capture elements, while Eu represents the heavy neutron-capture elements. Since the absorption lines of \ion{Y}{2}, \ion{Zr}{2}, and \ion{Eu}{2} are much weaker than the resonance lines of \ion{Sr}{2} and \ion{Ba}{2}, the abundances of Y, Zr, and Eu are only determined for stars having relatively high abundances of neutron-capture elements. However, the uncertainties of abundance measurements for these three elements are smaller than those for Sr and Ba in general. Therefore, Y, Zr, and Eu are good probes to investigate the light and heavy neutron-capture elements in neutron-capture-element-rich stars. The upper and lower panels of Figure~\ref{fig:yzreu} show the abundances of Y and Zr, respectively, as functions of the Eu abundance. The typical abundance ratio of $N_{\rm Ba}/N_{\rm Eu}$ in these stars is $\sim 10$, corresponding to [Ba/Eu]$\sim -0.7$. Hence, the distribution of Eu abundance from $\log \epsilon$(Eu) = $-3$ to $-1$ corresponds to that of the Ba abundance from $\log \epsilon$(Ba) = $-2$ to 0 in Figure~\ref{fig:srba}. The Y and Zr abundances show clear correlations with the Eu abundance, in particular in the range $\log \epsilon$(Eu)$>-2$. The scatter seen in these diagrams is much smaller than that in the Sr--Ba diagram (Figure~\ref{fig:srba}), even if we limit the discussion to stars with high Ba abundances. 
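The conversion between linear number ratios and the bracket notation used above can be sketched as follows; this is our own illustration, and the solar reference values below are assumed round numbers, not values adopted in this paper.

```python
import math

# Sketch of the bracket-notation arithmetic used above. The solar
# reference values are illustrative assumptions, not from this paper.
LOGEPS_BA_SUN = 2.13  # assumed solar log eps(Ba)
LOGEPS_EU_SUN = 0.51  # assumed solar log eps(Eu)

def ba_eu_bracket(n_ba_over_n_eu):
    """[Ba/Eu] implied by a linear number ratio N_Ba/N_Eu."""
    return math.log10(n_ba_over_n_eu) - (LOGEPS_BA_SUN - LOGEPS_EU_SUN)

def logeps_ba_from_eu(logeps_eu, n_ba_over_n_eu=10.0):
    """log eps(Ba) corresponding to log eps(Eu) at a fixed number ratio."""
    return logeps_eu + math.log10(n_ba_over_n_eu)
```

With a fixed ratio of 10, $\log \epsilon$(Eu) running from $-3$ to $-1$ maps onto $\log \epsilon$(Ba) running from $-2$ to 0, as quoted above; with the assumed solar values, a ratio of 10 corresponds to [Ba/Eu] of roughly $-0.6$.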
The tight correlation between the Y (Zr) and Eu abundances suggests that the scatter of Sr (and Ba) abundances found in the stars with high Ba abundances in the Sr--Ba diagram is, at least partially, caused by measurement errors in Sr and/or Ba. It should be noted that, even if errors of abundance measurements are included, the very large ($>2$~dex) scatter in the Sr abundances among stars with low Ba abundances remains. The slope of the correlation found in the Y--Eu and Zr--Eu diagrams is very interesting. Since these are diagrams of logarithmic abundances, the increase of both elements with a fixed ratio (on a linear scale) results in a line with a slope of unity, and a change of the ratio appears only as a parallel shift of the line. Since stars with extremely low metallicity ([Fe/H]$<-2.5$) are treated here, their abundances of neutron-capture elements are expected to have been determined by a quite limited number of processes \citep{audouze95}. The large scatter in the abundances ($\log \epsilon$ values) of neutron-capture elements is interpreted as a result of variation in the amount of yields from individual supernovae, and subsequent dilution by interstellar matter. If the interstellar matter contains almost no neutron-capture elements, the dilution does not change the abundance ratios between neutron-capture elements, while it does change their total abundances (relative to hydrogen). If we assume fixed abundance ratios between light and heavy neutron-capture elements, the slope found in Figure~\ref{fig:yzreu} should be unity. However, a correlation with a slope of $\sim 1/2$ is found in the figure. The line with a slope of 1/2 is formed by the increase of abundances with a relation of $y\propto x^{1/2}$ (e.g., $N_{\rm Y} \propto N_{\rm Eu}^{1/2}$).
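To make the slope argument explicit: if the Y and Eu number abundances grow at a fixed linear ratio $r = N_{\rm Y}/N_{\rm Eu}$, then
\begin{equation}
\log \epsilon({\rm Y}) = \log \epsilon({\rm Eu}) + \log_{10} r,
\end{equation}
a line of slope unity whose intercept is set by $r$; dilution by pristine gas lowers both $\log \epsilon$ values equally and therefore moves a star along this line. A relation $N_{\rm Y} \propto N_{\rm Eu}^{1/2}$ instead gives $\log \epsilon({\rm Y}) = \frac{1}{2}\log \epsilon({\rm Eu}) + \mathrm{const}$, i.e., a slope of $1/2$.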
Such behavior is quite unexpected if the abundances of neutron-capture elements relative to hydrogen are determined by the yields from explosive processes like supernovae and the dilution by interstellar matter that contains no neutron-capture elements. Moreover, the scatter is larger at lower Eu abundances. How might we explain the observed distribution of Y and Eu abundances? If we assume two processes, which produce different ratios of Y(Zr)/Eu, some insight can be obtained. The solid lines in the upper panel of Figure~\ref{fig:yzreu} indicate the increase of Eu and Y abundances with a fixed ratio of $\delta N_{\rm Y}/\delta N_{\rm Eu} = 5$, assuming two different initial values ($\log \epsilon$(Y)$= -0.5$ and $-2.8$ at $\log \epsilon$(Eu)$=-3.5$). Most stars in the diagram can be explained by changing the initial Y abundances between these two values. The dotted lines correspond to the case of $\delta N_{\rm Y}/\delta N_{\rm Eu} = 2$ and initial values of $\log \epsilon$(Y)$= -0.2$ and $-3.2$ at $\log \epsilon$(Eu)$=-3.5$. The large cross in the figure indicates the values of the r-process component for these elements in the Solar System \citep{arlandini99}, which can also be explained by the increase of Y and Eu in such a manner, although it must be kept in mind that the r-process contribution to solar-system material for light neutron-capture elements is still somewhat uncertain. One might intuit that the increase of Y abundances with respect to Eu with a fixed ratio corresponds to enrichment by the main r-process, while the initial values assumed above are determined by the process that produced light neutron-capture elements with little accompanying heavy species. If this is true, the neutron-capture elements in stars near the line with a slope of unity are provided by the main r-process, while all other stars are more or less affected by the process that produces light neutron-capture elements.
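As a minimal numerical sketch of this two-component picture (our own illustration; the ratio $r$ and the initial values mirror the parameters assumed for the lines in the figure, not a fit to the data):

```python
import numpy as np

# Two-component sketch: a star's Y abundance is an "initial" component
# set by the light-element process, plus main-r-process material added
# at a fixed linear ratio r = dN_Y / dN_Eu. Parameter values follow the
# assumptions quoted in the text, not a fit to the observations.
def log_eps_y(log_eps_eu, log_eps_y0, r=5.0, log_eps_eu0=-3.5):
    n_eu = 10.0 ** np.asarray(log_eps_eu, dtype=float)
    n_eu0 = 10.0 ** log_eps_eu0
    n_y0 = 10.0 ** log_eps_y0
    return np.log10(n_y0 + r * (n_eu - n_eu0))
```

At $\log \epsilon$(Eu)$=-3.5$ the curve returns the assumed initial value, while once the added r-process material dominates, each decade in Eu produces a decade in Y (a slope of unity in the log--log plane); varying the initial value between the two assumed extremes fills the band between the lines.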
The above comparisons of the solid and dotted lines with the data points indicate that the main r-process produces Y/Eu abundance ratios of $\delta N_{\rm Y}/\delta N_{\rm Eu} = 2$--$5$, while the other process yields these elements with a quite large distribution ($\sim 2$~dex) in the ratio. The solid and dotted lines in the lower panel of Figure~\ref{fig:yzreu} indicate the increase of Zr and Eu abundances with fixed ratios of $\delta N_{\rm Zr}/\delta N_{\rm Eu} = 20$ and 10, respectively, assuming two different initial values. The Zr and Eu abundances of metal-poor stars, as well as those of the r-process component in the Solar System \citep{arlandini99}, indicated by the cross in this figure, can also be explained by changing the initial [Zr/Eu] ratio. In the above discussion, the effect of the nucleosynthesis process producing light neutron-capture elements is regarded as the source of the differences in the ``initial values'' of Y and Zr. However, this does not necessarily mean that these processes operated in advance of the main r-process. The time scale of the contribution of each process depends on the progenitor mass, which is still unclear, as discussed in subsection~\ref{sec:srba}. While a fixed abundance ratio of [Y/Eu] and [Zr/Eu] can be assumed for the main r-process, large distributions of these abundance ratios are required for the other process to explain the large scatter in the Y and Zr abundances at low Eu abundance. Further observational studies to determine the elemental abundance patterns produced by this process, covering a wider atomic number range, are clearly required. \section{Summary and Concluding Remarks} We have measured elemental abundances for 18 very metal-poor stars using spectra obtained with the Subaru Telescope High Dispersion Spectrograph. The metallicity range covered by our sample is $-3.1 \lesssim$ [Fe/H] $\lesssim -2.4$.
While the abundances of carbon, $\alpha$-elements, and iron-peak elements determined for these stars show similar trends to those found by previous work, we found an exceptional star, BS~16934--002, a giant with [Fe/H] = $-2.8$ that has large over-abundances of Mg, Al, and Sc. Further abundance studies for this object are strongly desired. By combining our new results with those of previous studies, we investigated the distribution of neutron-capture elements in very metal-poor stars, focusing on the production of the light neutron-capture elements (Sr, Y, and Zr), and found the following observational results: \noindent (1) A large scatter is found in the abundance ratios between the light and heavy neutron-capture elements (e.g., Sr/Ba) for stars with low abundances of heavy neutron-capture elements. Most of these stars have extremely low metallicity ([Fe/H]$\lesssim -3$). \noindent (2) Stars with high abundances of heavy neutron-capture elements appear in the metallicity range of [Fe/H]$\gtrsim -3$. The observed scatter in the ratios between light and heavy neutron-capture elements is much smaller in these stars. In particular, the Y and Zr abundances exhibit a clear correlation with Eu abundance in stars with high Eu abundances, but the trend is not explained by increases of light and heavy elements with fixed abundance ratios. \noindent (3) The Y/Zr ratio is similar in stars with high and low abundances of heavy neutron-capture elements. The values of [Y/Zr] indicate that these elements are products of neither the main nor the weak component of the s-process, but must have an origin related to the r-process. These observational results indicate the existence of a process that yielded light neutron-capture elements (Sr, Y, and Zr) with little contribution to the heavy ones (e.g., Ba, Eu). This process seems to be different from the weak s-process.
Such a process has been suggested by previous studies \citep[e.g., ][]{truran02,johnson02b,travaglio04}, as mentioned in Section~\ref{sec:intro}. The above results suggest that this process appears even in extremely metal-poor stars ([Fe/H]$\sim -3.5$), but is seen as well in stars with higher metallicity and higher abundances of heavy neutron-capture elements. The ratios of light to heavy neutron-capture elements (e.g., Sr/Ba, Y/Eu) formed by this process have a wide distribution, while the abundance ratios of elements among light neutron-capture elements (e.g., Y/Zr), as well as those among heavy ones (e.g., Ba/Eu), are almost constant. These observational results provide new constraints on modeling r-process nucleosynthesis, and identifying its likely astrophysical sites. However, further observational studies are required. In particular, measurements for lower metallicity stars ([Fe/H]$<-3.5$) are very important to understand the process that produced light neutron-capture elements in the very early Galaxy. More detailed abundance studies for stars showing large excesses of light neutron-capture elements, with low abundances of heavier ones, will provide definitive constraints on the modeling of that process. \acknowledgments T.C.B. acknowledges partial support from a series of grants awarded by the US National Science Foundation, most recently, AST 04-06784, as well as from grant PHY 02-16783; Physics Frontier Center/Joint Institute for Nuclear Astrophysics (JINA).
\section{Introduction} At its core, general relativity is a theory of gravity phrased operationally in terms of measurements of distances and time using classical rulers and clocks. Quantizing these notions has been a major problem of theoretical physics for the past century and, as of today, there is still no complete theory of quantum gravity. Nevertheless, there are multiple effective tools that can be used in order to better understand the relationship of gravity and quantum physics in low energy regimes. In particular, the behaviour of quantum fields in curved spacetimes can be well described using a semiclassical theory, where the background is classical and the matter fields are quantum. Although this approach does not provide a full theory of quantum gravity, it gives important results, such as the Unruh and Hawking effects~\cite{fullingUnruhEffect,HawkingRadiation,Unruh1976,Unruh-Wald} and the model of inflation~\cite{inflation}, which describes the universe a fraction of a second after its creation. A more recent application of this semiclassical theory is to rephrase classical notions of space and time intervals in terms of properties of quantum fields~\cite{achim,achimQGInfoCMB,achim2,mygeometry}. As argued in~\cite{achim2,mygeometry}, this rephrasing might lead to a quantum theory of spacetime, which could redefine the notions of distance and time close to the Planck scale. In order to relate the spacetime geometry with properties of a quantum field theory (QFT), it is necessary to study the specific way that the background spacetime affects a quantum field. The effect of curvature on the correlation function of a QFT has been thoroughly studied in the literature~\cite{Wald2,kayWald,achim}. In fact, it is possible to show that, at short scales, the behaviour of the correlations of a quantum field can be written as the correlations in flat spacetime added to terms that involve corrections due to curvature~\cite{birrell_davies,DeWittExpansion}.
This suggests that if one finds a mechanism to locally probe these correlations, one would then be able to recover the geometry of spacetime. One way of probing quantum fields locally, and recovering their correlation functions, is through the use of particle detector models~\cite{pipo,mygeometry}. Broadly speaking, particle detector models are localized non-relativistic quantum systems that couple to a quantum field. Examples of physical realizations of these range from atoms probing the electromagnetic field~\cite{Pozas2016,Nicho1,richard} to nucleons interacting with the neutrino fields~\cite{neutrinos,antiparticles,fermionicharvesting}. After their first introduction by Unruh and DeWitt in~\cite{Unruh1976,DeWitt}, these models found many different uses for studying a wide range of phenomena of quantum field theories in both flat and curved spacetimes. There are several applications of these models, such as the study of the entanglement in quantum fields~\cite{Valentini1991,Reznik1,reznik2,Retzker2005,Pozas-Kerstjens:2015,Pozas2016}, the Unruh effect~\cite{Unruh-Wald,HawkingRadiation,fullingHadamard,matsasUnruh,mine} and Hawking radiation~\cite{HawkingRadiation,HawkingMain,WaldRadiation,detRadiation,detectorsCavitiesFalling}. Moreover, particle detectors can be used to provide a measurement framework for quantum field theory~\cite{FVdebunked,chicken}, to probe the topology~\cite{topology} and geometry of spacetime~\cite{mygeometry}, among other applications~\cite{pipo,teleportation,adam}. In this manuscript we show how it is in principle possible to recover the curvature of spacetime using smeared particle detectors ultra rapidly coupled to a quantum field~\cite{deltaCoupled,nogo}. Smeared particle detectors have a finite spatial extension, which can be controlled to probe the quantum field in different directions. The effect of the spacetime curvature in the correlation function of the quantum field then affects the transition probabilities of the detector. 
We precisely quantify how curvature affects the response of particle detectors, so that by comparing the response of a detector in curved spacetimes with what would be seen in Minkowski spacetime, one can infer the spacetime curvature. Using particle detectors with different shapes then gives access to the spacetime curvature in different directions, so that it is possible to reconstruct the full Riemann tensor at the center of the detector's trajectory, and all geometrical quantities derived from it. Our results are another instance of rephrasing geometrical properties of spacetime in terms of measurements of observable quantities of quantum fields~\cite{achim,achimQGInfoCMB,achim2,mygeometry}. We argue that such rephrasing is an important step towards understanding the relationship between quantum theory and gravity. This lays the groundwork for future work that might provide a detailed answer to how to define the notions of space and time at scales where the classical notions provided by general relativity fail to work. This paper is organized as follows. In Section \ref{sec:detectors} we describe the coupling and the dynamics of a particle detector ultra rapidly coupled to a massless scalar field. In Section \ref{sec:curvature} we write the excitation probability of an ultra rapidly coupled particle detector as an expansion in the detector size, with coefficients related to the spacetime curvature. In Section \ref{sec:recovery} we provide a protocol such that one can recover the spacetime curvature from the excitation probability of particle detectors. The conclusions of our work can be found in Section \ref{sec:conclusion}. \section{Ultra rapid sampling of quantum fields}\label{sec:detectors} In this section we describe the particle detector model that will be used in this manuscript.
We consider a two-level Unruh-DeWitt (UDW) detector model coupled to a free massless scalar quantum field $\hat{\phi}(\mf x)$ in a \mbox{$D = n+1$} dimensional spacetime $\mathcal{M}$ with metric $g$. The Lagrangian associated with the field can be written as \begin{equation} \mathcal{L} = - \frac{1}{2} \nabla_\mu \phi \nabla^\mu \phi, \end{equation} where $\nabla$ is the Levi-Civita connection. We will not be concerned with the details of the quantization of the field here. However, we will assume that the state of the field is a Hadamard state, for reasons that will become clear in Section \ref{sec:curvature}. Moreover, Hadamard states are those for which it is possible to associate a finite value to the stress-energy tensor of the quantum field~\cite{birrell_davies,fewsterNecessityHadamard}, which makes them appealing from a physical perspective. The detector is modelled by a two-level system undergoing a timelike trajectory $\mf z(\tau)$ in $\mathcal{M}$ with four-velocity $u^\mu(\tau)$ and proper time parameter $\tau$. We pick Fermi normal coordinates $(\tau,\bm{x})$ around $\mf z(\tau)$ (for more details we refer the reader to~\cite{poisson,us,us2}). We assume the proper energy gap of the two-level system to be $\Omega$, such that its free Hamiltonian in its proper frame is given by \mbox{$\hat{H}_D = \Omega \hat{\sigma}^+\hat{\sigma}^-$}, where $\hat{\sigma}^\pm$ are the standard raising/lowering ladder operators. The interaction with the scalar field is prescribed by the scalar interaction Hamiltonian density \begin{equation} \hat{h}_I(\mf x) = \tilde{\lambda} \Lambda(\mf x) \hat{\mu}(\tau)\hat{\phi}(\mf x), \end{equation} where \mbox{$\hat{\mu}(\tau) = e^{- i \Omega \tau} \hat{\sigma}^- +e^{i \Omega \tau} \hat{\sigma}^+ $} is the detector's monopole moment, $\tilde{\lambda}$ is the coupling constant, and $\Lambda(\mf x)$ is a scalar function that defines the spacetime profile of the interaction.
This setup defines the interaction of a UDW detector with a real scalar quantum field, and has been thoroughly studied in the literature~\cite{birrell_davies,pipo,mygeometry,antiparticles,fermionicharvesting,Unruh1976,DeWitt,Pozas-Kerstjens:2015,us,us2}. This model also has a physical appeal, as it has been shown to reproduce realistic models, such as atoms interacting with the electromagnetic field~\cite{Pozas2016,Nicho1,richard} and nucleons with the neutrino fields~\cite{neutrinos,antiparticles,fermionicharvesting}. Under the assumption that the shape of the interaction between the detector and the field is constant in the detector's frame i.e. a rigid detector, we can write the spacetime smearing function as $\Lambda(\mf x) = \chi(\tau)f(\bm x)$, where now $f(\bm x)$ (the smearing function) defines the shape of the interaction and $\chi(\tau)$ (the switching function) controls the strength and the duration of the coupling. This decomposition also allows one to control the proper time duration of the interaction by considering $\chi(\tau) = \eta\, \varphi(\tau/T)/T$ for a positive compactly supported function $\varphi$ that is $L^1(\mathbb{R})$ normalized and symmetric with respect to the origin. Here $\eta$ and $T$ are parameters with units of time, which ensure that $\chi(\tau)$ is dimensionless. In this manuscript we will be particularly interested in an ultra rapid coupling~\cite{deltaCoupled,nogo}, which is obtained when $T\longrightarrow 0$ and $\chi(\tau) \longrightarrow \eta\, \delta(\tau)$. The evolution of the system after an interaction is implemented by the time evolution operator \begin{equation}\label{U1} \hat{U} = \mathcal{T}_\tau \exp\left(-i \int_\mathcal{M} \dd V \hat{h}_I(\mf x)\right), \end{equation} where $\mathcal{T}_\tau$ denotes time ordering with respect to $\tau$ and $\dd V = \sqrt{-g} \,\dd^{D}x$ is the invariant spacetime volume element. In the case of ultra rapid coupling, where \mbox{$\chi(\tau) = \eta \, \delta(\tau)$}, Eq. 
\eqref{U1} simplifies to \begin{align} \hat{U} &= e^{-i \hat{\mu}\, \hat{Y}(f)} = \cos(\hat{Y}) -i \hat{\mu}\, \sin(\hat{Y}), \end{align} where \begin{align} \hat{Y}(f)&= \lambda \int \dd^3\bm x \sqrt{-g} f(\bm x) \hat{\phi}(\bm x), \end{align} with $\hat{\mu} = \hat{\mu}(0)= \hat{\sigma}^+ + \hat{\sigma}^- $, $\lambda =\tilde{\lambda}\eta$ and \mbox{$\hat{\phi}(\bm x) = \hat{\phi}(\tau = 0,\bm x)$} is the field evaluated at the rest space associated to the interaction time, $\tau = 0$. In the equation above the integral is performed over the rest space of the system at $\tau = 0$. We consider a setup where the detector starts in the ground state $\hat{\rho}_{\textrm{d},0}= \hat{\sigma}^-\hat{\sigma}^{+}$ and the field starts in a given Hadamard state $\omega$. The final state of the detector after the interaction, $\hat{\rho}_{\textrm{d}}$, will then be given by the partial trace over the field degrees of freedom. It can be written as \begin{align} \hat{\rho}_{\textrm{d}} =& \omega(\hat{U}\hat{\rho}_{\textrm{d},0} \hat{U}^\dagger) \nonumber\\ =& \omega(\cos^2(\hat{Y})) \hat{\sigma}^-\hat{\sigma}^+ + \omega(\sin^2(\hat{Y})) \hat{\sigma}^+ \hat{\sigma}^-\nonumber\\ =& \frac{1}{2}\left(\openone + \omega(\cos(2\hat{Y}))\hat{\sigma}_z\right), \end{align} where we used $\hat{\mu} \hat{\rho}_{\text{d},0} \hat{\mu} = \hat{\sigma}^+ \hat{\sigma}^-$ and $\omega(\sin(\hat{Y})\cos(\hat{Y})) = 0$ due to the fact that this operator is odd in the field $\hat{\phi}$, and $\omega$ is a Hadamard state, so that it is quasifree~\cite{kayWald} and all of its odd point-functions vanish. In particular, notice that $\tr(\hat{\rho}_{\textrm{d}}) = \omega(\cos^2(\hat{Y})+\sin^2(\hat{Y})) = 1$, as expected.
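The splitting of the time-evolution operator into cosine and sine parts used above is a consequence of the monopole operator squaring to the identity, $\hat{\mu}^2 = (\hat{\sigma}^+ + \hat{\sigma}^-)^2 = \openone$, together with the fact that $\hat{\mu}$ commutes with $\hat{Y}(f)$, since they act on the detector and field degrees of freedom, respectively. Separating the exponential series into even and odd powers then gives
\begin{equation}
e^{-i \hat{\mu}\, \hat{Y}} = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\, \hat{Y}^{2k} - i \hat{\mu} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\, \hat{Y}^{2k+1} = \cos(\hat{Y}) - i \hat{\mu}\, \sin(\hat{Y}).
\end{equation}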
The excitation probability of the detector is then given by \begin{equation} P = \tr(\hat{\rho}_{\text{d}}\sigma^+ \sigma^-) = \omega(\sin^2(\hat{Y})) = \frac{1}{2}\left(1 - \omega(e^{2 i \hat{Y}})\right), \end{equation} where we used $\sin^2\theta = \frac{1}{2}\left(1 - \cos(2\theta)\right)$ and the fact that $\omega(\cos{}(2\hat{Y})) = \omega(\text{exp}({2 i \hat{Y}}))$, because only the even part of the exponential contributes. Moreover, there is a simple expression for the excitation probability in terms of a smeared integral of the field's Wightman function $W(\mf x,\mf x') = \omega(\hat{\phi}(\mf x)\hat{\phi}(\mf x'))$. Let \begin{equation}\label{eq:L} \mathcal{L} = \lambda^2 \int \dd^n\bm x \dd^n \bm x'\,\sqrt{-g}\sqrt{-g'}\, f(\bm x) f(\bm x') W(\bm x,\bm x'), \end{equation} where $W(\bm x, \bm x') = W(\tau\! =\! 0, \bm x,\tau'\!=\! 0, \bm x')$. Then, we show in Appendix \ref{app:L} that if $\omega$ is a quasifree state, \mbox{$\omega(e^{2 i \hat{Y}}) = e^{-2 \mathcal{L}}$}, so that the excitation probability of the delta coupled detector can be written as \begin{equation}\label{eq:P} P = \frac{1}{2}\left(1 - e^{-2 \mathcal{L}}\right). \end{equation} Notice that in the pointlike limit the detector is essentially sampling the field correlator at a single point. In this case, $\hat{\rho}_{\text{d}}\rightarrow \frac{1}{2}\openone$ and no information about the quantum field can be obtained. By considering finite sized detectors, it is then possible to sample the field in local regions, allowing one to recover information about both the field and its background spacetime. 
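As a concrete flat-space illustration of this point (our own numerical sketch, assuming an isotropic Gaussian smearing of width $L$ and the equal-time Minkowski vacuum correlator $W_0(\bm x,\bm x') = 1/(4\pi^2|\bm x - \bm x'|^2)$ for a massless field), the double integral defining $\mathcal{L}$ collapses to a one-dimensional radial quadrature, and the excitation probability saturates at $1/2$ as $L \to 0$:

```python
import numpy as np

# Hedged flat-space sketch (not from the paper): isotropic Gaussian
# smearing of width L, equal-time massless vacuum Wightman function
# W0 = 1/(4 pi^2 u^2) with u = |x - x'|. For two independent Gaussian
# points, u has radial density 4 pi u^2 (4 pi L^2)^(-3/2) exp(-u^2/4L^2),
# so the double integral reduces to a 1D quadrature over u.
def curly_l(lam, width, n=200_000):
    u = np.linspace(1e-8, 40.0 * width, n)
    dens = 4.0 * np.pi * u**2 * (4.0 * np.pi * width**2) ** -1.5 \
        * np.exp(-(u**2) / (4.0 * width**2))
    w0 = 1.0 / (4.0 * np.pi**2 * u**2)
    y = dens * w0
    # trapezoidal rule (the u^2 factors cancel, so the integrand is smooth)
    return lam**2 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u))

def excitation_probability(lam, width):
    # P = (1 - exp(-2 L)) / 2, as derived in the text
    return 0.5 * (1.0 - np.exp(-2.0 * curly_l(lam, width)))
```

Under these assumptions the quadrature reproduces $\mathcal{L} \approx \lambda^2/(8\pi^2 L^2)$, which diverges in the pointlike limit, so $P \to 1/2$ and the detector state carries no information about the field, as stated above.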
\section{The effect of curvature on the excitation probability}\label{sec:curvature} In this section we will derive an expansion for the excitation probability of a particle detector rapidly interacting with a quantum field in curved spacetimes. From now on, we will focus on the case of (3+1) dimensions. Our expansion will relate $P$ in Eq. \eqref{eq:P} with the excitation probability of a delta-coupled particle detector in Minkowski spacetime. By comparing these results, we will later be able to rewrite the components of the Riemann curvature tensor as a function of the excitation probability of the detector. Notice that the detector's excitation probability in Eq. \eqref{eq:P} is entirely determined by $\mathcal{L}$ in \mbox{Eq. \eqref{eq:L}}, so that in order to obtain an expansion for the excitation probability, it is enough to expand $\mathcal{L}$. The first step in performing our expansion is to write the Wightman function in curved spacetimes as its flat spacetime analog plus an expansion in terms of curvature. Assuming the field state $\omega$ to be a Hadamard state, it can be shown that the correlation function of a quantum field can be written \mbox{as~\cite{fullingHadamard,fullingHadamard2,kayWald,equivalenceHadamard,fewsterNecessityHadamard}} \begin{align} W(\mf x,\mf x') \!=\! \frac{1}{8\pi^2}\! \frac{\Delta^{1/2}( \mf x,\mf x')}{\sigma( \mf x, \mf x')} \!+\! v( \mf x,\mf x') \text{ln}|\sigma(\mf x, \mf x')|\!+\!w(\mf x,\mf x'),\label{Whada} \end{align} where $v(\mf x,\mf x')$ and $w(\mf x,\mf x')$ are regular functions in the limit $\mf x'\rightarrow \mf x$, $\Delta(\mf x,\mf x')$ is the Van-Vleck determinant (see~\cite{poisson}) and $\sigma(\mf x,\mf x')$ is Synge's world function, corresponding to one half the geodesic separation between the events $\mf x$ and $\mf x'$. In Eq. \eqref{Whada}, the function $w(\mf x,\mf x')$ contains the state dependence, while $v(\mf x,\mf x')$ is fully determined by the properties of both the field and the spacetime. We can then write \begin{align} W(\mf x,\mf x') = \frac{1}{8\pi^2\sigma} \bigg[\Delta^{1/2} \!+ \!
8 \pi^2v_0\,\sigma\,&\text{ln}|\sigma|+8\pi^2 w_0\,\sigma \label{Wexpsigma}\\ &\:\:\:\:\:\:\:\:\:\:+ \mathcal{O}(\sigma^2\,\text{ln}|\sigma|)\bigg],\nonumber \end{align} where $v_0 = v_0(\mf x,\mf x')$ and $w_0 = w_0(\mf x,\mf x')$ are the first order terms of an expansion of $v$ and $w$ in powers of $\sigma$~\cite{DeWittExpansion}. Notice that we have factored the Minkowski spacetime Wightman function for a massless field, $W_0(\mf x,\mf x') = \frac{1}{8\pi^2\sigma}$ in Eq. \eqref{Wexpsigma}. In~\cite{DeWittExpansion,poisson} it was shown that for a massless field, $v_0(\mf x,\mf x') = R(\mf x)/6 + \mathcal{O}(\sqrt{\sigma})$, so that the leading order contribution for the expansion is given by the Ricci scalar. The same is true for the state dependent part of the Wightman function, $w(\mf x,\mf x') = \omega_0(\mf x) + \mathcal{O}(\sqrt{\sigma})$ for a given function $\omega_0(\mf x)$ which determines the state contribution to $W(\mf x,\mf x')$ to leading order in $\sigma$. Moreover, the Van-Vleck determinant admits the following expansion \begin{equation} \Delta^{\frac{1}{2}}(\mf x,\mf x') = 1 + \frac{1}{12}R_{\alpha \beta}(\mf x) \sigma^\alpha(\mf x,\mf x') \sigma^\beta(\mf x,\mf x'), \end{equation} where $\sigma^{\alpha}(\mf x,\mf x')$ denotes the tangent vector to the geodesic that connects $\mf x$ and $\mf x'$ such that its length corresponds to the spacetime separation between $\mf x$ and $\mf x'$. $\sigma_\alpha$ also corresponds to $\partial_\alpha \sigma$~\cite{poisson}. 
Combining the results above, we find that the Wightman function of a quantum field in a Hadamard state can be approximated as \begin{align} W(\mf x,\mf x') \approx W_0(\mf x,\mf x')\Big(1 +& \frac{1}{12}R_{\alpha\beta}(\mf x)\sigma^\alpha(\mf x,\mf x') \sigma^\beta(\mf x,\mf x') \nonumber\\ &\!\!\!+\frac{4\pi^2}{3}R(\mf x)\,\sigma(\mf x,\mf x')\,\text{ln}|\sigma(\mf x,\mf x')|\nonumber\\ &\:\:\:\:\:\:+8 \pi^2\omega_0(\mf x) \sigma(\mf x,\mf x')\Big),\label{WW0partial} \end{align} where $W_0(\mf x,\mf x')$ is the Wightman function in Minkowski spacetime. Equation \eqref{WW0partial} allows one to locally relate the Wightman function in curved spacetimes with its Minkowski counterpart. However, we wish to have an expansion in terms of the proper distance from the center of the interaction, $\mf z$. This proper distance can be expressed naturally in terms of Synge's world function in Fermi normal coordinates due to the fact that $x^i = \sigma^i(\mf z,\mf x)$, so $\sigma(\mf x,\mf x') = \frac{1}{2}\sigma^i(\mf z,\mf x)\sigma_i(\mf z,\mf x) = \frac{1}{2}x^ix_i$. Thus, considering $\mf x$ and $\mf x'$ sufficiently close to the point $\mf z$, we can use the following approximations \begin{align} \sigma(\mf x,\mf x') &\approx \sigma(\mf z,\mf x) - \sigma_\alpha(\mf z,\mf x) \sigma^\alpha(\mf z,\mf x') + \sigma(\mf z,\mf x')\nonumber,\\ \sigma^\alpha(\mf x',\mf x) &\approx \sigma^\alpha(\mf z,\mf x) - \sigma^\alpha(\mf z,\mf x') = (\mf x-\mf x')^\alpha. \end{align} It is also possible to expand the Ricci scalar and the Ricci tensor according to $R(\mf x) = R(\mf z) + \mathcal{O}(r)$ and \mbox{$R_{\alpha\beta}(\mf x) = R_{\alpha\beta}(\mf z) + \mathcal{O}(r)$}, where $\mathcal{O}(r)$ denotes terms of order $r = \sqrt{x^i x_i}$. Analogously, we can expand the state dependent term as $\omega_0(\mf x) = \omega_0(\mf z)+\mathcal{O}(r)$. 
In the end we obtain an expression that relates $W(\mf x,\mf x')$ with $W_0(\mf x,\mf x')$, tensors evaluated at $\mf z$, and the effective separation vector between $\mf z$ and $\mf x$/$\mf x'$: \begin{align} W(\mf x,\mf x') \approx W_0(\mf x,\mf x')\!\bigg[1&\! + \!\frac{1}{12}R_{ij}(x-x')^i(x-x')^j \label{WW0} \\ &\!\!\!+\frac{2\pi^2}{3}R\,(\mf x-\mf x')^2\,\text{ln}\left|\tfrac{1}{2}(\mf x-\mf x')^2\right|\nonumber\\ &\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:+4\pi^2\omega_0(\mf z)(\mf x-\mf x')^2 \bigg],\nonumber \end{align} where the curvature tensors are all evaluated at the center point of the interaction, $\mf z$. In Eq. \eqref{WW0}, \mbox{$(x-x')^i = x^i - x'{}^i$} denotes the difference in Fermi normal coordinates of the points $\mf x$ and $\mf x'$ and \mbox{$(\mf x-\mf x')^2 = (x-x')^i(x-x')_i = r^2$}. The last ingredient not yet considered in our expansion is the $\sqrt{-g}$ factors that show up in the definition of $\mathcal{L}$ in Eq. \eqref{eq:L}. If the detector size is small enough compared to the radius of curvature of spacetime, we can employ the expansion of the metric determinant detailed in Appendix \ref{ap:fermi} around the center of the interaction $\mf z$. We then have \begin{equation}\label{sqrtg} \sqrt{-g} = 1 + a_i x^i +\tfrac{1}{2}M_{ij} x^i x^j + \mathcal{O}(r^3), \end{equation} where $r = \sqrt{\delta_{ij}x^i x^j}$ corresponds to the proper distance from a point to $\mf z(0)$, $a_i$ is the four-acceleration of the trajectory at $\tau = 0$ and the tensor $M_{ij}$ is evaluated at $\mf z$. This tensor is explicitly given by \begin{equation} M_{ij} = \tfrac{2}{3}R_{\tau i \tau j} - \tfrac{1}{3} R_{ij}. \end{equation} At this stage, we have all the tools required to expand the excitation probability. Combining the results of Eqs.
\eqref{WW0} and \eqref{sqrtg}, we can write the excitation probability of a smeared delta-coupled particle detector in a curved spacetime as the following short scale expansion \begin{widetext} \begin{align}\label{PcsPflat} P \approx P_0 + e^{-2 \mathcal{L}_0}\left( M_{ij}\mathcal{Q}^{ij}+2a_i\mathcal{D}^i+\frac{1}{12}R_{ij}\mathcal{L}^{ij} +\frac{2\pi^2}{3} R\, \mathcal{L}_R +4 \pi^2 \omega_0 \mathcal{L}_{\omega}\right), \end{align} \end{widetext} where $P_0 = \frac{1}{2}\left(1 -e^{-2 \mathcal{L}_0}\right)$ and we have defined: \begin{align} \mathcal{L}_0 \!&=\! \lambda^2 \!\!\!\int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x'),\nonumber\\ \mathcal{Q}^{ij} \!&= \!\lambda^2 \!\!\!\int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x')x^ix^j,\nonumber\\ \mathcal{D}^{i} \!&= \!\lambda^2 \!\!\!\int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x')x^i,\nonumber\\ \mathcal{L}^{ij} \!&= \!\lambda^2 \!\!\!\int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x')(x\!-\!x')^i(x\!-\!x')^j,\nonumber\\ \mathcal{L}_R \!&= \!\lambda^2 \!\!\! \int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x')\nonumber\\ &\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\times (\bm x-\bm x')^2\:\text{ln}\left|\tfrac{1}{2}(\bm x-\bm x')^2\right|,\nonumber\\ \mathcal{L}_{\omega}\!&= \!\lambda^2 \!\!\!\int\!\!\dd^3 \bm x \dd^3 \bm x' \!f(\bm x) f(\bm x') W_0(\bm x,\bm x')(\bm x - \bm x')^2.\label{Ls} \end{align} Notice that $P_0$ corresponds to the excitation probability of the detector if it were interacting with the vacuum of Minkowski spacetime. Eq. \eqref{PcsPflat} contains all corrections to the excitation probability of the detector up to second order in the detector size, as we have considered all terms of this order or lower in our computations. In Eq. \eqref{PcsPflat} we see corrections arising from five different fronts. 
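As a bookkeeping aid, the contraction structure of Eq. \eqref{PcsPflat} can be evaluated numerically once the geometric tensors and smearing integrals are known; the following sketch (names and test values are ours) simply implements the quoted expansion:

```python
import math

def corrected_probability(L0, M, Q, a, D, Ric, Lij, R, LR, omega0, Lw):
    """Short-scale expansion of the excitation probability:
    P ~ P0 + e^{-2 L0} ( M_ij Q^ij + 2 a_i D^i + (1/12) R_ij L^ij
                         + (2 pi^2/3) R L_R + 4 pi^2 omega0 L_w ),
    with M, Q, Ric, Lij given as 3x3 nested lists and a, D as 3-vectors."""
    P0 = 0.5 * (1.0 - math.exp(-2.0 * L0))
    MQ = sum(M[i][j] * Q[i][j] for i in range(3) for j in range(3))
    RicL = sum(Ric[i][j] * Lij[i][j] for i in range(3) for j in range(3))
    aD = sum(a[i] * D[i] for i in range(3))
    corr = (MQ + 2.0 * aD + RicL / 12.0
            + (2.0 * math.pi ** 2 / 3.0) * R * LR
            + 4.0 * math.pi ** 2 * omega0 * Lw)
    return P0 + math.exp(-2.0 * L0) * corr

# With all geometric corrections switched off, the flat-space value P0 is recovered.
Z3 = [[0.0] * 3 for _ in range(3)]
z3 = [0.0, 0.0, 0.0]
P_flat = corrected_probability(0.3, Z3, Z3, z3, z3, Z3, Z3, 0.0, 0.0, 0.0, 0.0)
```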
The $M_{ij}\mathcal{Q}^{ij}$ term is associated with the spacetime volume element in the rest frame of the trajectory where the detector interacts with the quantum field. The $a_i\mathcal{D}^i$ term is the effect that the instantaneous acceleration of the detector has on the shape of its rest surface. The $R_{ij}\mathcal{L}^{ij}$ term is related to the corrections to the correlation function due to the Van-Vleck determinant, associated with the determinant of the parallel propagator. The $R \,\mathcal{L}_R$ term is associated with the corrections to the correlation function due to spacetime curvature. Finally, the $\omega_0 \,\mathcal{L}_\omega$ term is associated with the state of the quantum field. We highlight that this is the only term in Eq. \eqref{PcsPflat} whose coefficient is not independent of the other ones, given that we can write $\mathcal{L}_\omega = \delta_{ij}\mathcal{L}^{ij}$. The expansion of Eq. \eqref{PcsPflat} contains the effect of the curvature of spacetime on the excitation probability of a smeared delta-coupled UDW detector. Moreover, this expansion works for a large class of spacetimes under weak assumptions for the quantum field, provided that the detector size is small compared to the curvature radius of spacetime. It is also important to mention that the integral for $\mathcal{L}$ in Eq. \eqref{eq:L} is not solvable analytically in most spacetimes, and can demand great computational power to be performed numerically. However, the expressions for $\mathcal{L}_0$, $\mathcal{Q}^{ij}$, $\mathcal{D}^i$, $\mathcal{L}^{ij}$, $\mathcal{L}_R$ and $\mathcal{L}_\omega$ can be computed analytically for a large class of smearing functions (see, for instance, Appendix \ref{app:L-terms}). In this sense, the expansion presented in this section can be used to simplify the study of sufficiently small particle detectors in curved spacetimes. Overall, the expansion in Eq.
\eqref{PcsPflat} shows the different ways that the background geometry manifests itself in ultra rapid localized measurements of a quantum field. \section{The curvature of spacetime in terms of the excitation probability}\label{sec:recovery} In this section we will use the results of Section \ref{sec:curvature} in order to build a protocol by which one can obtain the curvature of spacetime from the excitation probability of delta-coupled particle detectors. In order to do so, we will consider explicit shapes for detectors, so that we can explicitly compute the $\mathcal{L}_0$, $\mathcal{Q}^{ij}$, $\mathcal{D}^{i}$, $\mathcal{L}^{ij}$, $\mathcal{L}_R$ and $\mathcal{L}_{\omega}$ terms of Eq. \eqref{PcsPflat}, and obtain the curvature-dependent terms $M_{ij}$, $R_{ij}$ and $R$ from the excitation probabilities. Before outlining the operational protocol which will allow us to recover the spacetime curvature, it is important to discuss the effect that the detector size has on the excitation probability in flat spacetimes. Consider a pointlike detector in Minkowski spacetime. After the ultra rapid coupling with the quantum field, this detector will be in a maximally mixed state, with excitation probability equal to $1/2$. The physical reason behind the detector ending up in a maximally mixed state is that it instantaneously probes all of the field modes. This generates a great amount of noise, which results in the detector state containing no information about the field. Overall, the size of the detector determines the smallest wavelength (largest energy modes) that it is sensitive to. Thus, increasing the size of the detector makes it sensitive to less energetic modes, which then decreases the excitation probability, according to Eq. \eqref{eq:P}. This allows one to obtain information about the field modes up to a cutoff determined by the inverse of the detector's size. The discussion of the last paragraph can also be extended to curved spacetimes.
In particular, the fact that a point-like detector delta-coupled to a quantum field ends up in a maximally mixed state also holds in general spacetimes. In fact, in the pointlike limit one ends up sampling smaller and smaller regions that are locally flat, and too small to be affected by curvature. This can be explicitly seen from Eq. \eqref{PcsPflat}, where all the correction terms are proportional to some power of the detector size (Eq. \eqref{Ls}). Similarly, as discussed in the case of flat spacetimes, a finite-sized particle detector will then couple to field modes of finite wavelengths, and the effect of these modes on the particle detector will change its excitation probability. Moreover, the curvature in different directions will affect the modes that propagate in these directions differently. This implies that probing the quantum field with smeared delta-coupled particle detectors with different shapes should allow one to recover the spacetime curvature in different directions. We are now in a position to formulate an explicit protocol by which spacetime curvature can be recovered from ultra rapid local measurements of a quantum field. In order to do this, we will first have to make assumptions about the spacetime $\mathcal{M}$ and the events where we sample the field. As one would expect, in order to recover the classical curvature of spacetime in terms of expected values of quantum systems, one would require many samplings of the quantum field in similar conditions. Thus, we will require our spacetime to be locally stationary for the duration of the experiment\footnote{This is a strong condition that could be relaxed, as we only need spacetime not to vary too much in the frame of one timelike curve during the experiments, but we will assume this stronger version in order to build an explicit protocol.}, so that it contains a local timelike Killing field $\xi$ localized in the region where the experiments take place.
Moreover, we will assume that the center points of the interactions of the particle detectors with the quantum field can all be connected by the flow of $\xi$. This will ensure that the curvature tensor $R_{\mu\nu\alpha\beta}$ and all other tensors derived from it are the same for all interactions considered, so that the expansion of Eq. \eqref{PcsPflat} has constant coefficients $M_{ij}$, $a_i$, $R_{ij}$ and $R$. The final assumption for our setup is that the different centers of the interactions are sufficiently separated in time so that the backreaction that each coupling of the detectors has on the field can dissipate away. This is a key assumption, which implies that the field state being probed remains approximately the same throughout the interaction. Equivalently, this implies that the state dependent part of the Wightman function expansion in Eq. \eqref{PcsPflat}, $\omega_0$, will remain approximately constant within the detectors' smearings. We note that we are considering a massless field, so that field excitations propagate at light-speed. Thus, the assumption that $\omega_0$ is approximately constant translates into the different interactions being separated in time by more than the detectors' light-crossing time. Overall, this is a reasonable assumption for any experimental setup. In order to build an explicit protocol, we will consider the detectors' smearing functions to be given by ellipsoidal Gaussians in their respective Fermi normal frames. By considering ellipsoidal Gaussians as the shape of the detectors, we will then be able to select the modes that they are sensitive to in each spatial direction. Explicitly, we consider smearing functions of the form \begin{equation}\label{eq:f} f(\bm x) = \frac{\sqrt{\det(a_{ij})}}{(2\pi)^\frac{3}{2}}e^{- \frac{1}{2}{a_{ij} x^i x^j}}, \end{equation} where $a_{ij}$ is a positive-definite symmetric bilinear form.
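As a quick sanity check on the normalization of the smearing in Eq. \eqref{eq:f}: for a diagonal $a_{ij}$ the Gaussian factorizes into three one-dimensional factors, each integrating to one. A short pure-Python sketch (function names are ours) verifies one such factor numerically:

```python
import math

def gauss_factor(a, x):
    # One Cartesian factor of the smearing for diagonal a_ij = diag(a1, a2, a3):
    # f(x) = prod_k sqrt(a_k / (2 pi)) exp(-a_k (x^k)^2 / 2).
    return math.sqrt(a / (2.0 * math.pi)) * math.exp(-0.5 * a * x * x)

def integral_1d(a, half_width=15.0, n=40_000):
    # Trapezoid rule on [-half_width, half_width]; each 1D factor should
    # integrate to 1, hence the full 3D smearing integrates to 1 as well.
    h = 2.0 * half_width / n
    s = 0.5 * (gauss_factor(a, -half_width) + gauss_factor(a, half_width))
    s += sum(gauss_factor(a, -half_width + i * h) for i in range(1, n))
    return s * h
```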
We assume $\sqrt{\det(a_{ij})} = \mathcal{O}(L^{-3})$, where $L$ is a constant with units of length that determines the approximate size of the detectors and dictates the smallest wavelengths that they are sensitive to. The smearing function is prescribed in the detector's rest space in terms of the Fermi normal coordinates $\bm x = (x^1,x^2,x^3)$. With the explicit choice of detector shapes in Eq. \eqref{eq:f}, it is possible to compute most coefficients from the expansion of Eq. \eqref{PcsPflat} analytically. In fact, in Appendix \ref{app:L-terms}, we show that with the choice of an ellipsoidal Gaussian for $f(\bm x)$, $\mathcal{L}_0$, $\mathcal{Q}^{ij}$, $\mathcal{D}^i$, $\mathcal{L}^{ij}$ and $\mathcal{L}_\omega$ can be computed analytically. Moreover, $\mathcal{D}^i = 0$ in this case, so that the expansion in Eq. \eqref{PcsPflat} can be written as \begin{equation} P = P_0 + e^{- 2 \mathcal{L}_0}\left(M_{ij}\mathcal{Q}^{ij}+N_{ij}\mathcal{L}^{ij} + \frac{2\pi^2}{3} R \,\mathcal{L}_R \right),\label{PP0} \end{equation} where $M_{ij} = \frac{2}{3} R_{\tau i \tau j} - \frac{1}{3}R_{ij}$ and $N_{ij} = \frac{1}{12}R_{ij} + 4\pi^2 \omega_0 \delta_{ij}$. In Appendix \ref{app:L-terms} we also show that $\mathcal{Q}^{ij}$, $\mathcal{L}^{ij}$ and $\mathcal{L}_R$ can all be varied independently due to their different non-linear dependence on $a_{ij}$ (or equivalently, on the shape of the detector). In this sense, Eq. \eqref{PP0} particularizes the expansion in Eq. \eqref{PcsPflat} for this specific setup and explicitly shows the independent coefficients $\mathcal{Q}^{ij}$, $\mathcal{L}^{ij}$ and $\mathcal{L}_R$ determined by the detectors' shape. We are now at a stage where we can pick different detector sizes and shapes in order to recover information about the curvature of spacetime from their excitation probabilities. First, we consider the case where the detector's trajectory $\mf z(\tau)$ is the flow of the Killing vector field $\xi$.
In this case, we expect to recover the tensors $M_{ij}$ and $N_{ij}$ and the scalar $R$ by sampling the probability $P$ in Eq. \eqref{PcsPflat} for different shapes of detectors (or, correspondingly, for different values of $a_{ij}$). That is, we perform measurements using different detectors with different shapes placed in different orientations, so that we ``sample the effect of curvature'' in each direction. In order to fully recover these tensors, it is necessary to sample the field using at least $13$ different values of $a_{ij}$ which give a set of $13$ \emph{linearly independent} coefficients $\mathcal{Q}^{ij}$ (with $6$ independent components), $\mathcal{L}^{ij}$ (with $6$ independent components) and $\mathcal{L}_R$. We then need a total of $13 = 6+6+1$ measurements in order to be able to write $M_{ij}$, $N_{ij}$ and $R$ in terms of the different probabilities. From the tensors $M_{ij}$ and $N_{ij}$ and the scalar $R$ it is possible to recover $R_{ij}$, $R_{\tau i \tau j}$ and $\omega_0$. In fact, using \mbox{$M_{i}{}^i = \frac{2}{3} R_{\tau \tau} - \frac{1}{3}R_{i}{}^i$} and \mbox{$R = -R_{\tau\tau} + R_{i}{}^i$}, we obtain \mbox{$R_{i}{}^i = 2R + 3M_i{}^i$}. We can then obtain the state dependent term, \mbox{$\omega_0 = \frac{1}{12\pi^2}\left(N_{i}{}^i - \frac{1}{12}R_{i}{}^i\right)$}. Finally, the curvature tensors can be written as \mbox{$R_{ij} = 12(N_{ij} - 4\pi^2 \omega_0 \delta_{ij})$} and \mbox{$R_{\tau i \tau j} = \frac{3}{2}M_{ij} + \frac{1}{2} R_{ij}$}. This protocol then allows one to recover $13$ independent terms: we recover all the space components of the Ricci tensor $R_{ij}$, all components of the Riemann tensor of the form $R_{\tau i \tau j}$ and the state dependent term $\omega_0$. In particular, from $R_{ij}$ and $R_{\tau i \tau j}$, it is possible to obtain $R_{\tau \tau}$ and the Ricci scalar $R$. The protocol outlined above then allows one to recover information about the spacetime geometry using only $13$ different couplings of detectors with the field.
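The chain of inversions can be checked with a short round-trip computation: starting from arbitrary symmetric test values of $R_{ij}$, $R_{\tau i \tau j}$ and $\omega_0$ (not physical data), build $M_{ij}$, $N_{ij}$ and $R$ from their definitions and then recover the inputs. Note that inverting the definition $M_{ij} = \frac{2}{3}R_{\tau i\tau j} - \frac{1}{3}R_{ij}$ gives $R_{\tau i \tau j} = \frac{3}{2}M_{ij} + \frac{1}{2}R_{ij}$, which is what the sketch uses:

```python
import math, random

random.seed(0)
PI2 = math.pi ** 2

def sym3():
    # Arbitrary symmetric 3x3 test tensor.
    A = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
    return [[0.5 * (A[i][j] + A[j][i]) for j in range(3)] for i in range(3)]

# "True" data at z: arbitrary symmetric test values.
Ric = sym3()        # R_ij
Rtitj = sym3()      # R_{tau i tau j}
omega0 = 0.123      # state-dependent term

# Measured combinations entering the expansion.
M = [[(2.0 / 3.0) * Rtitj[i][j] - (1.0 / 3.0) * Ric[i][j] for j in range(3)] for i in range(3)]
N = [[Ric[i][j] / 12.0 + 4.0 * PI2 * omega0 * (i == j) for j in range(3)] for i in range(3)]
R_tautau = sum(Rtitj[i][i] for i in range(3))         # delta^{ij} R_{tau i tau j}
R = -R_tautau + sum(Ric[i][i] for i in range(3))      # Ricci scalar

# Inversion chain from the text: traces first, then omega0, then the tensors.
tr_M = sum(M[i][i] for i in range(3))
tr_N = sum(N[i][i] for i in range(3))
tr_Ric = 2.0 * R + 3.0 * tr_M
omega0_rec = (tr_N - tr_Ric / 12.0) / (12.0 * PI2)
Ric_rec = [[12.0 * (N[i][j] - 4.0 * PI2 * omega0_rec * (i == j)) for j in range(3)] for i in range(3)]
Rtitj_rec = [[1.5 * M[i][j] + 0.5 * Ric_rec[i][j] for j in range(3)] for i in range(3)]
```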
Moreover, if the spacetime whose geometry we wish to recover has known symmetries, it might be possible to require fewer than $13$ samplings by exploiting these symmetries. At this stage it should be clear that it is possible to recover some information about the spacetime geometry from the excitation probability of ultra rapid coupled particle detectors. However, it is still not possible to recover the full Ricci tensor, or the full Riemann curvature tensor from the setup described so far. In fact, it is not possible to write the components $R_{\tau i}$, $R_{\tau i j k}$ or $R_{ijkl}$ in terms of $M^{ij}$, $N^{ij}$ and $R$. However, it is possible to recover these tensors by considering detectors in different states of motion such that the centers of their interactions with the quantum field are at events that still lie along the same flow of the Killing field $\xi$. For concreteness, consider a second detector which has a relative velocity $v$ in a (Fermi normal) coordinate direction $x^i$ with respect to the previous setup. In this case, it is possible to write the instantaneous four-velocity of the second detector at the point of the interaction as $\mf u' = \gamma(\mf u+v \mf e_i)$, where $\mf u$ is the four-velocity of the flow of the $\xi$ trajectory, $\mf e_i$ is the frame vector associated with the Fermi normal coordinates in the direction $i$ and $v$ is the magnitude of the instantaneous relative three-velocity between the trajectories. Then, performing the same protocol described above for detectors with relative velocity $v$ at the interaction points, we will obtain the tensors $R_{i'j'}$, $R_{\tau'i'\tau'j'}$ and the scalar $R_{\tau'\tau'}$, where the primed coordinates are associated with the components with respect to the Fermi frame of the trajectory $\mf u'$.
Using the standard Lorentz coordinate transformation between these frames at the interaction points, it is possible to write \mbox{$R_{\tau\tau} = \gamma^2(R_{\tau'\tau'} - 2 v R_{\tau' i'} + v^2 R_{i'i'})$}. This expression now allows us to write the components $R_{\tau'i'}$ in terms of other previously obtained tensor components. An analogous procedure can also be carried out for the Riemann curvature tensor, allowing one to obtain $R_{\tau'i'j'k'}$ and $R_{i'j'k'l'}$ by considering frames with relative motion with respect to the flow of $\xi$. With this protocol, we are then able to recover all components of the Riemann curvature tensor. We have particularized this protocol for specific ellipsoidal Gaussian detector shapes, so that their proper acceleration did not play any role in the expansion of Eq. \eqref{PcsPflat}. That is, this choice allows one to recover the geometry of spacetime regardless of the instantaneous proper acceleration of the detectors. However, it is possible to generalize this procedure using general detector shapes, provided that one finds linearly independent coefficients for the terms $\mathcal{Q}^{ij}$, $\mathcal{D}^i$, $\mathcal{L}^{ij}$ and $\mathcal{L}_R$. In fact, if we had considered detectors with nontrivial $\mathcal{D}^i$ terms, the acceleration of the detector would also play a role in the expansion of Eq. \eqref{PcsPflat}. Then, with $16$ couplings it would be possible to recover $M_{ij}$, $a_i$, $N_{ij}$ and $R$. An analogous protocol could then be performed in order to recover the full Riemann curvature tensor of spacetime. Overall, we have shown that it is possible to write the components of the curvature tensors in terms of the excitation probabilities of smeared delta-coupled particle detectors of different shapes in different states of motion. In order to do so, we assume that the spacetime geometry is approximately unchanged for the duration of the experiments.
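The boost relation quoted above can be verified directly by transforming both covariant indices of a symmetric test tensor with the Lorentz matrix relating the two frames; the sketch below works in the two-dimensional $(\tau', i')$ block with arbitrary test components (all values are ours, for illustration only):

```python
import math

v = 0.6                                  # arbitrary test three-velocity
gamma = 1.0 / math.sqrt(1.0 - v * v)

# Symmetric test components of a (0,2)-tensor in the primed frame,
# restricted to the (tau', i') plane.
T = [[2.0, -0.7],
     [-0.7, 1.3]]

# Unprimed basis vectors in primed components, inverting u' = gamma (u + v e_i):
# e_tau = gamma (e_tau' - v e_i'),  e_i = gamma (e_i' - v e_tau').
Lam = [[gamma, -gamma * v],
       [-gamma * v, gamma]]

# Transform both indices: T_{mu nu} = Lam[mu][a] Lam[nu][b] T[a][b].
T_unprimed = [[sum(Lam[m][a] * Lam[n][b] * T[a][b]
                   for a in range(2) for b in range(2))
               for n in range(2)]
              for m in range(2)]

# Closed form quoted in the text for the (tau, tau) component.
closed_form = gamma ** 2 * (T[0][0] - 2.0 * v * T[0][1] + v * v * T[1][1])
```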
Intuitively, by varying the shape of the detector in different directions, the detector will couple to different field modes, which are affected by curvature in specific ways according to Eq. \eqref{PcsPflat}. Having the specific dependence of these modes on curvature then allows one to associate the excitation probability of the particle detectors with the geometry of spacetime. \section{Conclusion}\label{sec:conclusion} We have expressed the spacetime curvature in terms of the excitation probability of smeared particle detectors delta coupled to a quantum field. Specifically, we devised a protocol in which one considers particle detectors of specific shapes and with specific states of motion which repeatedly interact with the quantum field. Under the assumption that the background geometry is approximately unchanged during these measurements, one can then recover the components of the Riemann curvature tensor associated with the directions in which each detector is more smeared. With the protocol we have devised, it is then possible to recover all components of the Riemann curvature tensor, and thus all information about the spacetime geometry, from measurable quantities of particle detectors. Overall, we have devised a protocol by which one can write the geometry of spacetime in terms of the expectation values of quantum observables.
This represents yet another step towards obtaining a theory of spacetime and gravity which is compatible with quantum theory and rephrases classical notions of spacetime and curvature entirely in terms of properties of quantum fields. \section*{Acknowledgements} The authors thank Bruno de S. L. Torres and Barbara \v{S}oda for insightful discussions and Erickson Tjoa for reviewing the manuscript. A.S. thanks Prof. Robert Mann for his supervision. T. R. P. thanks Profs. David Kubiz\v{n}\'ak and Eduardo Mart\'in-Mart\'inez for funding through their NSERC Discovery grants. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Industry Canada and by the Province of Ontario through the Ministry of Colleges and Universities.
\section{Decoupling methods} \label{decoupling_methods} Even though Problem \ref{semi_discrete_problem} is discretized in time, it is still coupled across the interface. That makes solving the subproblems independently impossible. To deal with this obstacle, we chose to use an iterative approach on each of the subintervals $I_n$ and introduce decoupling strategies. For a fixed time interval $I_n$ every iteration of a decoupling method consists of the following steps: \begin{enumerate} \item Using the solution of the solid subproblem from the previous iteration $\vec{U}_{s, k}^{(i - 1)}$, we set the boundary conditions on the interface at the time $t_n$, solve the fluid problem and get the solution $\vec{U}_{f, k}^{(i)}$. \item Similarly, we use the solution $\vec{U}_{f, k}^{(i)}$ for setting the boundary conditions of the solid problem and obtain an intermediate solution $\widetilde{\vec{U}}_{s, k}^{(i)}$. \item We apply a decoupling function to the intermediate solution $\widetilde{\vec{U}}_{s, k}^{(i)}$ and acquire $\vec{U}_{s, k}^{(i)}$. \end{enumerate} This procedure is visualized by \[ \vec{U}_{s, k}^{(i - 1)} \xrightarrow[\text{subproblem}]{\text{fluid}} \vec{U}_{f, k}^{(i)} \xrightarrow[\text{subproblem}]{\text{solid}} \widetilde{\vec{U}}_{s, k}^{(i)} \xrightarrow[\text{function}]{\text{decoupling}} \vec{U}_{s, k}^{(i)}. \] The main challenge emerges from the transition between $\widetilde{\vec{U}}_{s, k}^{(i)}$ and $\vec{U}_{s, k}^{(i)}$. In the next subsections, we will present two techniques. The first one is the relaxation method described in Section \ref{relaxation}. The second one, in Section \ref{shooting}, is the shooting method. We clarify how the intermediate solution $\widetilde{\vec{U}}_{s,k}^{(i)}$ is obtained from $\vec{U}_{s, k}^{(i - 1)}$ by the definition of Problem~\ref{decoupled_problem}. 
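In pseudocode form, one pass of this loop can be sketched as follows; the subproblem solvers, the decoupling function, the scalar interface value and the tolerance are all placeholders for illustration, not part of the discretization:

```python
def decoupling_iteration(U_s_prev, solve_fluid, solve_solid, decouple):
    """One sweep on I_n: fluid solve, solid solve, then the decoupling function."""
    U_f = solve_fluid(U_s_prev)          # step 1: fluid problem with interface data from U_s_prev
    U_s_tilde = solve_solid(U_f)         # step 2: solid problem with interface data from U_f
    U_s = decouple(U_s_tilde, U_s_prev)  # step 3: relaxation or shooting update
    return U_f, U_s

def solve_interval(U_s0, solve_fluid, solve_solid, decouple, tol=1e-10, max_iter=100):
    """Iterate the sweep until successive interface values stagnate."""
    U_s = U_s0
    for _ in range(max_iter):
        U_f, U_s_new = decoupling_iteration(U_s, solve_fluid, solve_solid, decouple)
        if abs(U_s_new - U_s) < tol:
            return U_f, U_s_new
        U_s = U_s_new
    return U_f, U_s

# Toy scalar stand-ins for the subproblem solves, with a relaxation
# decoupling function (damping parameter tau = 1/2).
U_f, U_s = solve_interval(
    0.0,
    solve_fluid=lambda s: 1.0 + 0.3 * s,
    solve_solid=lambda f: 0.5 * f,
    decouple=lambda new, old: 0.5 * new + 0.5 * old,
)
```

In the toy example the iteration converges to the coupled fixed point $U_s = \tfrac{1}{2}(1 + 0.3\,U_s)$, mimicking the interface consistency sought on each $I_n$.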
\begin{problem} For a given $\vec{U}_{s, k}^{(i - 1)} \in X_{s, k}^n$, find $\vec{U}_{f, k}^{(i)} \in X_{f, k}^n$ and $\widetilde{\vec{U}}_{s, k}^{(i)} \in X_{s, k}^n$ such that: \begin{flalign*} B_f^n &\left(\begin{array}{l} \vec{U}_{f, k}^{(i)} \\ \vec{U}_{s, k}^{(i - 1)} \end{array}\right)(\boldsymbol{\Phi}_{f, k}) = F_f^n(\boldsymbol{\Phi}_{f, k}) \\ B_s^n & \left(\begin{array}{l} \vec{U}_{f, k}^{(i)} \\ \widetilde{\vec{U}}_{s, k}^{(i)} \end{array}\right)(\boldsymbol{\Phi}_{s, k}) = F_s^n(\boldsymbol{\Phi}_{s, k}) \end{flalign*} for all $\boldsymbol{\Phi}_{f, k} \in Y_{f, k}^n$ and $\boldsymbol{\Phi}_{s, k} \in Y_{s, k}^n$. \label{decoupled_problem} \end{problem} \begin{remark} \normalfont Even though in Problem \ref{decoupled_problem} we demand $\vec{U}_{s, k}^{(i - 1)} \in X_{s, k}^n$, in fact, assuming we already know $\vec{U}_{s, k}(t_{n - 1})$, it is sufficient to set $\left(\vec{U}_{s,k}^{(i - 1)}(t_n)\right)\Big|_{\Gamma}$. The semi-discrete fluid operator (\ref{b_f^n}) is coupled with the solid operator (\ref{b_s^n}) only across the interface~$\Gamma$. Additionally, the interpolation operator (\ref{interpolation_operator_primal}) constructs values over the whole time interval $I_n$ based only on values at the points $t_{n - 1}$ and $t_n$. \label{boundary_values} \end{remark} \subsection{Relaxation method} \label{relaxation} The first of the presented methods is based on a simple interpolation operator and is an example of a fixed-point method. It consists of the iterated solution of each of the two subproblems, taking the interface values from the last iteration of the other problem. For reasons of stability, such explicit partitioned iteration usually requires the introduction of a damping parameter. Here, we only consider fixed damping parameters. \begin{definition}[Relaxation Function] Let $\vec{U}_{s, k}^{(i -1)} \in X_{s, k}^n$ and $\widetilde{\vec{U}}_{s, k}^{(i)} \in X_{s, k}^n$ be the solid solution of Problem \ref{decoupled_problem}.
Then, for $\tau \in [0, 1]$, the relaxation function $R: X_{s, k}^n \to X_{s, k}^n$ is defined as \begin{equation*} R(\vec{U}_{s, k}^{(i -1)})\coloneqq \tau \widetilde{\vec{U}}_{s, k}^{(i)} + (1 - \tau)\vec{U}_{s, k}^{(i -1)}. \end{equation*} \end{definition} Assuming that we already know the value $\vec{U}_{s, k}(t_{n -1})$, we set \begin{equation*}\left\{ \begin{aligned} \vec{U}_{s, k}^{(0)}(t_n)&\coloneqq \vec{U}_{s, k}(t_{n - 1}), \\ \vec{U}_{s, k}^{(i)}(t_n)&\coloneqq R(\vec{U}_{s,k}^{(i-1)})(t_n). \end{aligned}\right. \end{equation*} The stopping criterion checks how far the computed solution is from the fixed point: we evaluate the $l^{\infty}$ norm of $\left( \widetilde{\vec{U}}_{s, k}^{(i + 1)}(t_n) - \vec{U}_{s, k}^{(i)}(t_n) \right)\Big|_{\Gamma}$ and, once this norm is sufficiently small for some index $i_{\text{stop}}$, we set $$\vec{U}_{k}(t_n) \coloneqq \left(\begin{matrix} \vec{U}_{f, k}^{(i_{\text{stop}})} \\ \vec{U}_{s, k}^{(i_{\text{stop}})} \end{matrix}\right)(t_n).$$ \subsection{Shooting method} \label{shooting} The second iterative method reformulates the coupling condition as a root-finding problem on the interface. We solve it with Newton's method, using a matrix-free GMRES method to approximate the action of the inverse of the Jacobian. \begin{definition}[Shooting Function] Let $\vec{U}_{s, k}^{(i -1)} \in X_{s, k}^n$ be given and let $\widetilde{\vec{U}}_{s, k}^{(i)} \in X_{s, k}^n$ be the corresponding solid solution of Problem \ref{decoupled_problem}. Then the shooting function $S: X_{s, k}^n \to (L^2(\Gamma))^2$ is defined as \begin{equation} S(\vec{U}_{s, k}^{(i -1)})\coloneqq \left(\vec{U}_{s,k}^{(i - 1)}(t_n) - \widetilde{\vec{U}}_{s,k}^{(i)}(t_n) \right)\Big|_{\Gamma}. \label{shooting_function} \end{equation} \end{definition} Our aim is to find a root of the function (\ref{shooting_function}). To this end, we employ Newton's method \begin{equation*} S'(\vec{U}_{s,k}^{(i - 1)})\vec{d} = -S(\vec{U}_{s,k}^{(i - 1)}). 
\end{equation*} In each iteration of the Newton method, the main difficulty lies in computing and inverting the Jacobian $S'(\vec{U}_{s,k}^{(i - 1)})$. Instead of approximating all entries of the Jacobian matrix, we only approximate its matrix-vector products. Since a Jacobian matrix-vector product can be interpreted as a directional derivative, one can use the finite-difference approximation \begin{equation} S'(\vec{U}_{s,k}^{(i - 1)})\vec{d} \approx \frac{S(\vec{U}_{s,k}^{(i - 1)} + \varepsilon \vec{d} ) - S(\vec{U}_{s,k}^{(i - 1)} )}{\varepsilon}. \label{jacobian_operator} \end{equation} In principle, the vector $\vec{d}$ is not known, so the formula above cannot be used to solve the system directly. However, this technique can be combined with iterative solvers that only require the computation of matrix-vector products. Because we do not want to assume any particular structure of the operator (\ref{jacobian_operator}), we chose the matrix-free GMRES method. Such matrix-free Newton-Krylov methods are frequently used when the Jacobian is not available or too costly to evaluate~\cite{KnollKeyes2004}. Once $\vec{d}$ is computed, we set \begin{equation} \begin{cases} \vec{U}_{s, k}^{(0)}(t_n)\big|_{\Gamma}\coloneqq \vec{U}_{s, k}(t_{n - 1})\big|_{\Gamma}, \\ \vec{U}_{s, k}^{(i)}(t_n)\big|_{\Gamma}\coloneqq \vec{U}^{(i - 1)}_{s, k}(t_n)\big|_{\Gamma} + \vec{d}. \end{cases} \end{equation} We stop iterating when the $l^{\infty}$ norm of $S(\vec{U}_{s, k}^{(i)})$ is sufficiently small and then take $$\vec{U}_{k}(t_n)\big|_{\Gamma} \coloneqq \left(\begin{matrix} \vec{U}_{f, k}^{(i_{\text{stop}})} \\ \widetilde{\vec{U}}_{s, k}^{(i_{\text{stop}})} \end{matrix}\right)(t_n)\big|_{\Gamma}.$$ We note that this method is similar to the one presented in \cite{Degroote2009}, where the authors also introduced a root-finding problem on the interface and solved it with a quasi-Newton method. The main difference lies in the approximation of the inverse of the Jacobian. 
Instead of using a matrix-free linear solver, there the Jacobian is approximated by solving a least-squares problem. \subsection{Numerical comparison of the performance} \label{comparison} \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textcolor{white}{adjust to the next line}} \\ \large{\textbf{No micro time-stepping}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=14, ymin=1.6782839494474103e-18, ymax=7.2335262037580485e-06, xtick={0,5,10}, ytick={1.0e-16, 1.0e-12, 1.0e-8}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 5.063468341658501e-06) (2.0, 1.5164739991123107e-06) (3.0, 4.541735599230415e-07) (4.0, 1.360218687206842e-07) (5.0, 4.073761844344486e-08) (6.0, 1.2200638101940217e-08) (7.0, 3.6540077493352834e-09) (8.0, 1.09435037943872e-09) (9.0, 3.2775047633182807e-10) (10.0, 9.815903496973628e-11) (11.0, 2.9397962946332215e-11) (12.0, 8.804490045335847e-12) (13.0, 2.636884962462999e-12) (14.0, 7.897291452087574e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 7.2335262037580485e-06) (6.0, 1.6782839494474103e-18) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \begin{multicols}{2} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the fluid subdomain}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=14, ymin=1.6143131874927976e-18, ymax=7.2336889009813875e-06, xtick={0,5,10}, ytick={1.0e-16, 1.0e-12, 1.0e-8}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 5.063582229714357e-06) (2.0, 
1.516508024155365e-06) (3.0, 4.5418372513637254e-07) (4.0, 1.3602490562847677e-07) (5.0, 4.073852572897223e-08) (6.0, 1.2200909154588915e-08) (7.0, 3.654088726135697e-09) (8.0, 1.0943745710675214e-09) (9.0, 3.277577034094575e-10) (10.0, 9.816119403036871e-11) (11.0, 2.939860797104603e-11) (12.0, 8.804682734489964e-12) (13.0, 2.6369425288130414e-12) (14.0, 7.897463161900205e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 7.2336889009813875e-06) (6.0, 1.6143131874927976e-18) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the solid subdomain}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=14, ymin=3.1299108171965233e-17, ymax=7.234364854672545e-06, xtick={0,5,10}, ytick={1.0e-16, 1.0e-12, 1.0e-8}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 5.0640553973980905e-06) (2.0, 1.516686459736926e-06) (3.0, 4.5424819164532013e-07) (4.0, 1.3604752339998525e-07) (5.0, 4.074629352738889e-08) (6.0, 1.2203533968598318e-08) (7.0, 3.6549644338047178e-09) (8.0, 1.0946637381482126e-09) (9.0, 3.27852382732407e-10) (10.0, 9.819197494872247e-11) (11.0, 2.9408554182419644e-11) (12.0, 8.807880058075925e-12) (13.0, 2.637965706152789e-12) (14.0, 7.900728285347777e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 7.234364854672545e-06) (6.0, 3.1299108171965233e-17) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \end{multicols} \caption{Performance of decoupling methods for Configuration \ref{configuration_1} in one macro time-step in the case of $M_{n} = 1$ and $L_{n} = 1$ (top), $M_{n} = 10$ and $L_{ n} = 1$ (left), $M_{n} = 1$ and $L_{n} = 
10$ (right).} \label{comparison_one_timestep_configuration_1} \end{figure} \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textcolor{white}{adjust to the next line}} \\ \large{\textbf{No micro time-stepping}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=20, ymin=2.5981947607924254e-17, ymax=0.007656308303176444, xtick={0, 10, 20}, ytick={1.0e-16, 1.0e-12, 1.0e-8, 1.0e-4}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 0.005359415811535416) (2.0, 0.0016050925565451867) (3.0, 0.00048070951497492957) (4.0, 0.00014396779939445696) (5.0, 4.311694922066177e-05) (6.0, 1.2913105301192662e-05) (7.0, 3.8673490456006905e-06) (8.0, 1.1582333328392495e-06) (9.0, 3.468795986062422e-07) (10.0, 1.0388706259737299e-07) (11.0, 3.111316455267489e-08) (12.0, 9.318090293869872e-09) (13.0, 2.790677516931567e-09) (14.0, 8.35780819315297e-10) (15.0, 2.503082492867421e-10) (16.0, 7.496490104927928e-11) (17.0, 2.2451248778689066e-11) (18.0, 6.723954703037041e-12) (19.0, 2.0137587739115056e-12) (20.0, 6.030839615136963e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 0.007656308303176444) (6.0, 1.3362380439487633e-12) (11.0, 2.5981947607924254e-17) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \begin{multicols}{2} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the fluid subdomain}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=20, ymin=3.8236446003307725e-17, ymax=0.007656309743571194, xtick={0, 10, 20}, ytick={1.0e-16, 1.0e-12, 1.0e-8, 1.0e-4}, legend pos=north east, ymajorgrids=true, 
grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 0.005359416820499795) (2.0, 0.0016050927552661936) (3.0, 0.0004807095435034251) (4.0, 0.00014396779865734116) (5.0, 4.3116946220042164e-05) (6.0, 1.2913103569907667e-05) (7.0, 3.867348277715173e-06) (8.0, 1.1582330281757546e-06) (9.0, 3.468794850001077e-07) (10.0, 1.0388702187616845e-07) (11.0, 3.111315033098777e-08) (12.0, 9.318085433252181e-09) (13.0, 2.7906758946635684e-09) (14.0, 8.357802788856557e-10) (15.0, 2.503080787801899e-10) (16.0, 7.4964826958197e-11) (17.0, 2.245124566513367e-11) (18.0, 6.723933459764385e-12) (19.0, 2.0137405319378455e-12) (20.0, 6.030960440037066e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 0.007656309743571194) (6.0, 1.3908079328133397e-12) (11.0, 3.8236446003307725e-17) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the solid subdomain}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=20, ymin=5.225568472448191e-16, ymax=0.008832788810731775, xtick={0, 10, 20}, ytick={1.0e-16, 1.0e-12, 1.0e-8, 1.0e-4}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 0.006182952167512061) (2.0, 0.001851894094775634) (3.0, 0.0005546722347904078) (4.0, 0.00016613331350273555) (5.0, 4.9759619095894025e-05) (6.0, 1.4903812813475945e-05) (7.0, 4.463933820745542e-06) (8.0, 1.3370206885870224e-06) (9.0, 4.004594325296193e-07) (10.0, 1.1994411584002287e-07) (11.0, 3.5925215834573944e-08) (12.0, 1.0760187730531687e-08) (13.0, 3.2228516215462864e-09) (14.0, 9.652967270027322e-10) (15.0, 2.8912230173214463e-10) (16.0, 8.659680184574186e-11) (17.0, 2.5937321917881572e-11) (18.0, 7.768221735936033e-12) (19.0, 
2.3266147157063255e-12) (20.0, 6.973162229222618e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 0.008832788810731775) (6.0, 2.2599031490117177e-11) (11.0, 5.225568472448191e-16) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \end{multicols} \caption{Performance of decoupling methods for Configuration \ref{configuration_2} in one macro time-step in the case of $M_{n} = 1$ and $L_{n} = 1$ (top), $M_{n} = 10$ and $L_{n} = 1$ (left), $M_{n} = 1$ and $L_{n} = 10$ (right).} \label{comparison_one_timestep_configuration_2} \end{figure} In Figures~\ref{comparison_one_timestep_configuration_1} and \ref{comparison_one_timestep_configuration_2} we compare the performance of both methods for different numbers of micro time-steps. The micro time-steps are assumed to be of uniform size. We performed the simulations in the cases of no micro time-stepping ($L_n = 1$, $M_n = 1$), micro time-stepping in the fluid subdomain ($M_n = 10$, $L_n = 1$), and micro time-stepping in the solid subdomain ($M_n = 1$, $L_n = 10$). Figure~\ref{comparison_one_timestep_configuration_1} shows results for the right-hand side according to Configuration~\ref{configuration_1}, while Figure~\ref{comparison_one_timestep_configuration_2} corresponds to Configuration~\ref{configuration_2}. We investigated one macro time-step, $I_2 = [0.02, 0.04]$, and set the relaxation parameter to $\tau = 0.7$. Both methods are very robust with respect to the number of micro time-steps. The relaxation method, as expected, has a linear convergence rate. In both cases, despite the nested GMRES method, the shooting method performs much better. For Configuration~\ref{configuration_1}, the relaxation method needs 13 iterations to converge. 
The shooting method needs only 2 iterations of the Newton method (which is the reason why each of the graphs in Figure~\ref{comparison_one_timestep_configuration_1} displays only two evaluations of the error) and overall requires 6 evaluations of the decoupling function. In the case of Configuration~\ref{configuration_2}, both methods need more iterations to reach the same level of accuracy. The number of iterations of the relaxation method increases to 20 while the shooting method needs 3 iterations of the Newton method and 11 evaluations of the decoupling function. \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textcolor{white}{adjust to the next line}} \\ \large{\textbf{No micro time-stepping}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 15) (2, 14) (3, 14) (4, 14) (5, 14) (6, 15) (7, 15) (8, 14) (9, 15) (10, 15) (11, 15) (12, 14) (13, 14) (14, 15) (15, 14) (16, 15) (17, 15) (18, 14) (19, 14) (20, 14) (21, 15) (22, 15) (23, 15) (24, 15) (25, 14) (26, 14) (27, 14) (28, 15) (29, 15) (30, 15) (31, 14) (32, 14) (33, 14) (34, 15) (35, 15) (36, 15) (37, 14) (38, 15) (39, 14) (40, 15) (41, 15) (42, 15) (43, 15) (44, 14) (45, 14) (46, 14) (47, 15) (48, 15) (49, 14) (50, 15) }; \addplot[ color=black, mark=square, ] coordinates { (1, 6) (2, 6) (3, 6) (4, 6) (5, 6) (6, 6) (7, 6) (8, 6) (9, 6) (10, 6) (11, 5) (12, 6) (13, 6) (14, 6) (15, 6) (16, 6) (17, 6) (18, 6) (19, 6) (20, 6) (21, 6) (22, 5) (23, 6) (24, 6) (25, 6) (26, 6) (27, 6) (28, 6) (29, 6) (30, 6) (31, 6) (32, 6) (33, 6) (34, 6) (35, 6) (36, 6) (37, 6) (38, 6) (39, 6) (40, 6) (41, 6) (42, 5) (43, 6) (44, 6) (45, 6) (46, 6) (47, 6) (48, 6) (49, 6) (50, 5) }; \legend{\large{Relaxation}, 
\large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \begin{multicols}{2} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the fluid subdomain}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 15) (2, 14) (3, 14) (4, 14) (5, 14) (6, 15) (7, 15) (8, 14) (9, 15) (10, 15) (11, 15) (12, 14) (13, 14) (14, 15) (15, 14) (16, 15) (17, 15) (18, 14) (19, 14) (20, 14) (21, 15) (22, 15) (23, 15) (24, 15) (25, 14) (26, 14) (27, 14) (28, 15) (29, 15) (30, 15) (31, 14) (32, 14) (33, 14) (34, 15) (35, 15) (36, 15) (37, 14) (38, 15) (39, 14) (40, 15) (41, 15) (42, 15) (43, 15) (44, 14) (45, 14) (46, 14) (47, 15) (48, 15) (49, 14) (50, 15) }; \addplot[ color=black, mark=square, ] coordinates { (1, 6) (2, 6) (3, 6) (4, 6) (5, 6) (6, 6) (7, 6) (8, 6) (9, 6) (10, 6) (11, 5) (12, 6) (13, 6) (14, 6) (15, 6) (16, 6) (17, 6) (18, 6) (19, 6) (20, 6) (21, 6) (22, 5) (23, 6) (24, 6) (25, 6) (26, 6) (27, 6) (28, 6) (29, 6) (30, 6) (31, 6) (32, 6) (33, 6) (34, 6) (35, 6) (36, 6) (37, 6) (38, 6) (39, 6) (40, 6) (41, 6) (42, 5) (43, 6) (44, 6) (45, 6) (46, 6) (47, 6) (48, 6) (49, 6) (50, 5) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the solid subdomain}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 15) 
(2, 14) (3, 14) (4, 14) (5, 14) (6, 15) (7, 14) (8, 14) (9, 15) (10, 15) (11, 14) (12, 15) (13, 14) (14, 15) (15, 15) (16, 15) (17, 15) (18, 14) (19, 14) (20, 14) (21, 15) (22, 15) (23, 14) (24, 14) (25, 14) (26, 15) (27, 14) (28, 15) (29, 15) (30, 14) (31, 14) (32, 14) (33, 14) (34, 15) (35, 15) (36, 15) (37, 15) (38, 14) (39, 14) (40, 14) (41, 15) (42, 15) (43, 14) (44, 14) (45, 14) (46, 15) (47, 15) (48, 14) (49, 15) (50, 14) }; \addplot[ color=black, mark=square, ] coordinates { (1, 6) (2, 6) (3, 6) (4, 5) (5, 6) (6, 6) (7, 6) (8, 6) (9, 6) (10, 6) (11, 6) (12, 6) (13, 6) (14, 6) (15, 6) (16, 6) (17, 6) (18, 6) (19, 6) (20, 6) (21, 6) (22, 6) (23, 6) (24, 6) (25, 6) (26, 6) (27, 6) (28, 6) (29, 5) (30, 6) (31, 6) (32, 6) (33, 6) (34, 6) (35, 6) (36, 6) (37, 6) (38, 6) (39, 6) (40, 6) (41, 5) (42, 5) (43, 6) (44, 6) (45, 6) (46, 6) (47, 6) (48, 5) (49, 5) (50, 5) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \end{multicols} \caption{Number of evaluations of the decoupling functions for Configuration \ref{configuration_1} needed for convergence on the time interval $I = [0, 1]$ for $N = 50$ in the case of $M_{n} = 1$ and $L_{n} = 1$ (top), $M_{n} = 10$ and $L_{ n} = 1$ (left), $M_{n} = 1$ and $L_{n} = 10$ (right).} \label{comparison_whole_timeline_configuration_1} \end{figure} \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textcolor{white}{adjust to the next line}} \\ \large{\textbf{No micro time-stepping}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=south east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 21) (2, 20) (3, 20) (4, 20) (5, 20) (6, 21) (7, 21) (8, 21) (9, 20) (10, 20) (11, 21) (12, 21) (13, 21) (14, 21) (15, 21) (16, 20) (17, 20) (18, 21) 
(19, 21) (20, 21) (21, 21) (22, 21) (23, 20) (24, 21) (25, 21) (26, 21) (27, 20) (28, 21) (29, 20) (30, 20) (31, 21) (32, 21) (33, 21) (34, 20) (35, 21) (36, 21) (37, 20) (38, 21) (39, 21) (40, 21) (41, 20) (42, 21) (43, 21) (44, 21) (45, 21) (46, 21) (47, 20) (48, 20) (49, 20) (50, 21) }; \addplot[ color=black, mark=square, ] coordinates { (1, 10) (2, 11) (3, 11) (4, 11) (5, 11) (6, 11) (7, 11) (8, 11) (9, 11) (10, 11) (11, 11) (12, 11) (13, 11) (14, 11) (15, 11) (16, 11) (17, 11) (18, 11) (19, 11) (20, 11) (21, 11) (22, 11) (23, 11) (24, 11) (25, 11) (26, 11) (27, 11) (28, 11) (29, 11) (30, 11) (31, 11) (32, 11) (33, 11) (34, 11) (35, 11) (36, 11) (37, 11) (38, 11) (39, 11) (40, 11) (41, 11) (42, 11) (43, 11) (44, 11) (45, 11) (46, 11) (47, 11) (48, 11) (49, 11) (50, 11) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \begin{multicols}{2} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the fluid subdomain}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=south east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 21) (2, 20) (3, 20) (4, 20) (5, 20) (6, 21) (7, 21) (8, 21) (9, 20) (10, 20) (11, 21) (12, 21) (13, 21) (14, 21) (15, 21) (16, 20) (17, 20) (18, 21) (19, 21) (20, 21) (21, 21) (22, 21) (23, 20) (24, 21) (25, 21) (26, 21) (27, 20) (28, 21) (29, 20) (30, 20) (31, 21) (32, 21) (33, 21) (34, 20) (35, 21) (36, 21) (37, 20) (38, 21) (39, 21) (40, 21) (41, 20) (42, 21) (43, 21) (44, 21) (45, 21) (46, 21) (47, 20) (48, 20) (49, 20) (50, 21) }; \addplot[ color=black, mark=square, ] coordinates { (1, 10) (2, 11) (3, 11) (4, 11) (5, 11) (6, 11) (7, 11) (8, 11) (9, 11) (10, 11) (11, 11) (12, 11) (13, 11) (14, 11) (15, 11) (16, 11) (17, 
11) (18, 11) (19, 11) (20, 11) (21, 11) (22, 11) (23, 11) (24, 11) (25, 11) (26, 11) (27, 11) (28, 11) (29, 11) (30, 11) (31, 11) (32, 11) (33, 11) (34, 11) (35, 11) (36, 11) (37, 11) (38, 11) (39, 11) (40, 11) (41, 11) (42, 11) (43, 11) (44, 11) (45, 11) (46, 11) (47, 11) (48, 11) (49, 11) (50, 11) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the solid subdomain}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=south east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 21) (2, 20) (3, 20) (4, 20) (5, 20) (6, 21) (7, 21) (8, 21) (9, 20) (10, 21) (11, 21) (12, 21) (13, 21) (14, 21) (15, 21) (16, 20) (17, 21) (18, 21) (19, 21) (20, 21) (21, 21) (22, 18) (23, 21) (24, 21) (25, 21) (26, 21) (27, 21) (28, 20) (29, 21) (30, 21) (31, 21) (32, 21) (33, 21) (34, 21) (35, 20) (36, 21) (37, 21) (38, 21) (39, 21) (40, 21) (41, 18) (42, 21) (43, 21) (44, 21) (45, 21) (46, 21) (47, 20) (48, 21) (49, 21) (50, 21) }; \addplot[ color=black, mark=square, ] coordinates { (1, 11) (2, 11) (3, 11) (4, 11) (5, 11) (6, 11) (7, 11) (8, 11) (9, 11) (10, 11) (11, 11) (12, 11) (13, 11) (14, 11) (15, 11) (16, 11) (17, 11) (18, 11) (19, 11) (20, 11) (21, 11) (22, 11) (23, 11) (24, 11) (25, 11) (26, 11) (27, 11) (28, 11) (29, 11) (30, 11) (31, 11) (32, 11) (33, 11) (34, 11) (35, 11) (36, 11) (37, 11) (38, 11) (39, 11) (40, 11) (41, 11) (42, 11) (43, 11) (44, 11) (45, 11) (46, 11) (47, 11) (48, 11) (49, 11) (50, 11) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \end{multicols} \caption{Number of evaluations of the decoupling functions for Configuration 
\ref{configuration_2} needed for convergence on the time interval $I = [0, 1]$ for $N = 50$ in the case of $M_{n} = 1$ and $L_{n} = 1$ (top), $M_{n} = 10$ and $L_{n} = 1$ (left), $M_{n} = 1$ and $L_{n} = 10$ (right).} \label{comparison_whole_timeline_configuration_2} \end{figure} In Figures~\ref{comparison_whole_timeline_configuration_1} and \ref{comparison_whole_timeline_configuration_2} we show the number of evaluations of the decoupling function needed to reach the stopping criterion over the complete time interval $I = [0, 1]$ for $N = 50$. As before, we performed the simulations in the cases of no micro time-stepping and of micro time-stepping in the fluid and in the solid subdomain, for both Configuration~\ref{configuration_1} and Configuration~\ref{configuration_2}. In the case of Configuration~\ref{configuration_1}, the number of evaluations of the decoupling function using the relaxation method varied between 14 and 15. For the shooting method, this value was mostly equal to 6, with a few exceptions where only 5 evaluations were needed. For Configuration~\ref{configuration_2}, the relaxation method needed between 18 and 21 iterations, while the shooting method required an almost constant 11 evaluations. For each configuration, the graphs corresponding to no micro time-stepping and to micro time-stepping in the fluid subdomain are identical, while introducing micro time-stepping in the solid subdomain resulted in slight variations. For both decoupling methods, the independence of the performance from the number of micro time-steps thus extends to the whole time interval $I$. \section{Goal-oriented estimation} \label{goal_oriented_estimation} We have formulated a semi-discrete problem enabling the use of different time-step sizes in the fluid and solid subdomains, and in Section~\ref{decoupling_methods} we presented methods designed to solve such problems efficiently. So far, however, the choice of the step sizes was purely arbitrary. 
In this section, we present an easily localized error estimator, which can be used as a criterion for the adaptive choice of the time-step size. For the construction of the error estimator, we use the dual weighted residual (DWR) method~\cite{BeckerRannacher2001}. Given a differentiable functional $J: X \to \mathbb{R}$, our aim is to approximate the error $J(\vec{U}) - J(\vec{U}_k)$, where $\vec{U}$ is the solution to Problem~\ref{continuous_problem} and $\vec{U}_k$ is the solution to Problem~\ref{semi_discrete_problem}. The goal functional $J: X \to \mathbb{R}$ is split into two parts, $J_f: X_f \to \mathbb{R}$ and $J_s:X_s \to \mathbb{R}$, which refer to the fluid and solid subdomains, respectively: $$J(\vec{U}) \coloneqq J_f(\vec{U}_f) + J_s(\vec{U}_s). $$ The DWR method embeds the computation of $J$ into an optimal control framework: evaluating $J(\vec{U})$ is equivalent to solving the optimization problem \begin{equation*} J(\vec{U}) = \min !, \quad B(\vec{U})(\boldsymbol{\Phi}) = F(\boldsymbol{\Phi}) \textnormal{ for all }\boldsymbol{\Phi} \in X, \end{equation*} where \begin{flalign*} B(\vec{U})(\boldsymbol{\Phi}) & \coloneqq B_f(\vec{U})(\boldsymbol{\Phi}_f) + B_s(\vec{U})(\boldsymbol{\Phi}_s), \\ F(\boldsymbol{\Phi}) & \coloneqq F_f(\boldsymbol{\Phi}_f) + F_s(\boldsymbol{\Phi}_s). \end{flalign*} Solving this problem corresponds to finding stationary points of the Lagrangian $\mathcal{L}: X \times (X \oplus Y_k) \to \mathbb{R}$, \begin{equation*} \mathcal{L}(\vec{U}, \vec{Z}) \coloneqq J(\vec{U}) + F(\vec{Z}) - B(\vec{U})(\vec{Z}). \end{equation*} We cannot take $X \times X$ as the domain of $\mathcal{L}$ because we operate in a nonconforming setup, that is, $Y_k \not\subset X$. 
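To make the construction explicit, note that stationarity of $\mathcal{L}$ means that both G\^{a}teaux derivatives vanish; since $B$ is linear in its first argument, a standard computation (stated here formally, ignoring the nonconforming test space) yields

```latex
\begin{aligned}
\mathcal{L}'_{\vec{Z}}(\vec{U}, \vec{Z})(\boldsymbol{\Phi})
  &= F(\boldsymbol{\Phi}) - B(\vec{U})(\boldsymbol{\Phi}) = 0
  &&\textnormal{for all } \boldsymbol{\Phi}, \\
\mathcal{L}'_{\vec{U}}(\vec{U}, \vec{Z})(\boldsymbol{\Xi})
  &= J'_{\vec{U}}(\boldsymbol{\Xi}) - B(\boldsymbol{\Xi})(\vec{Z}) = 0
  &&\textnormal{for all } \boldsymbol{\Xi} \in X.
\end{aligned}
```

The first condition recovers the state equation of Problem~\ref{continuous_problem}, while the second one is the adjoint equation studied next.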
Because the form $B$ describes a linear problem, finding stationary points of $\mathcal{L}$ is equivalent to solving the following problem: \begin{problem} For a given $\vec{U} \in X$ being the solution of Problem \ref{continuous_problem}, find $\vec{Z} \in X$ such that: \begin{flalign*} B(\boldsymbol{\Xi})(\vec{Z}) = J'_{\vec{U}}(\boldsymbol{\Xi}) \end{flalign*} for all $\boldsymbol{\Xi} \in X$. \label{lagrangian_problem} \end{problem} The solution $\vec{Z}$ is called an \textit{adjoint solution}. By $J'_{\vec U}(\boldsymbol{\Xi})$ we denote the G\^{a}teaux derivative of $J(\cdot)$ at $\vec U$ in the direction of the test function $\boldsymbol{\Xi}$. \subsection{Adjoint problem} \label{adjoint_problem} \subsubsection{Continuous variational formulation} \label{adjoint_continuous_variational_formulation} As a first step towards decoupling Problem~\ref{lagrangian_problem}, we would like to split the form $B$ into forms corresponding to the fluid and solid subproblems. However, we cannot fully reuse the forms (\ref{a_f}) and (\ref{a_s}) because of the interface terms: the forms have to be sorted with respect to the test functions. 
Thus, after defining abbreviations, \[ \begin{aligned} \boldsymbol{\Xi}_f &\coloneqq \left(\begin{matrix} \xi_f \\ \eta_f \end{matrix}\right), \quad& \boldsymbol{\Xi}_s &\coloneqq \left(\begin{matrix} \xi_s \\ \eta_s \end{matrix}\right), \quad& \boldsymbol{\Xi} &\coloneqq \left(\begin{matrix} \boldsymbol{\Xi}_f \\ \boldsymbol{\Xi}_s \end{matrix}\right), \\ \vec{Z}_f &\coloneqq \left(\begin{matrix} z_f \\ y_f \end{matrix}\right), & \vec{Z}_s& \coloneqq \left(\begin{matrix} z_s \\ y_s \end{matrix}\right), & \vec{Z} &\coloneqq \left(\begin{matrix} \vec{Z}_f \\ \vec{Z}_s \end{matrix}\right) \end{aligned} \] we choose the splitting \begin{equation*} B(\boldsymbol{\Xi})(\vec{Z})\coloneqq \widetilde{B}_f(\boldsymbol{\Xi}_f)(\vec{Z}) + \widetilde{B}_s(\boldsymbol{\Xi}_s)(\vec{Z}), \end{equation*} where \begin{subequations} \begin{flalign*} \widetilde{B}_f(\boldsymbol{\Xi}_f)(\vec{Z}) \coloneqq & - \int_I \langle \eta_f, \partial_t z_f \rangle_f \diff t + \int_I \widetilde{a}_f(\boldsymbol{\Xi}_f)(\vec{Z}) \diff t + (\eta_f(T), z_f(T))_f, \\ \widetilde{B}_s(\boldsymbol{\Xi}_s)(\vec{Z}) \coloneqq & - \int_I \langle \eta_s, \partial_t z_s \rangle_s \diff t - \int_I \langle \xi_s, \partial_t y_s \rangle_s \diff t + \int_I \widetilde{a}_s(\boldsymbol{\Xi}_s)(\vec{Z}) \diff t \\ & \qquad + (\eta_s(T), z_s(T))_s + (\xi_s(T), y_s(T))_s \nonumber \end{flalign*} \end{subequations} and \begin{subequations} \begin{flalign*} \widetilde{a}_f(\boldsymbol{\Xi}_f)(\vec{Z}) \coloneqq & \; (\nu \nabla \eta_f, \nabla z_f)_f + (\beta \cdot \nabla \eta_f, z_f)_f + (\nabla \xi_f, \nabla y_f)_f \\ &\qquad - \langle \partial_{\vec{n}_f} \xi_f, y_f \rangle_{\Gamma} + \frac{\gamma}{h} \langle \xi_f, y_f \rangle_{\Gamma} - \langle \nu \partial_{\vec{n}_f} \eta_f, z_f \rangle_{\Gamma} + \frac{\gamma}{h}\langle \eta_f, z_f \rangle_{\Gamma} \nonumber \\ & \qquad + \langle \nu \partial_{\vec{n}_f} \eta_f, z_s \rangle_{\Gamma}, \nonumber \\ \widetilde{a}_s(\boldsymbol{\Xi}_s)(\vec{Z}) \coloneqq & \; 
(\lambda \nabla \xi_s, \nabla z_s)_s + (\delta \nabla \eta_s, \nabla z_s)_s - (\eta_s, y_s)_s \\ &\qquad - \frac{\gamma}{h} \langle \xi_s, y_f \rangle_{\Gamma} - \frac{\gamma}{h} \langle \eta_s, z_f \rangle_{\Gamma} - \langle \delta \partial_{\vec{n}_s} \eta_s, z_s \rangle_{\Gamma}. \nonumber \end{flalign*} \end{subequations} We have applied integration by parts in time, which reveals that the adjoint problem runs backward in time. This leads to the formulation of the continuous adjoint variational problem: \begin{problem} For a given $\vec{U} \in X$ being the solution of Problem \ref{continuous_problem}, find $\vec{Z} \in X$ such that: \[ \begin{aligned} \widetilde{B}_f(\boldsymbol{\Xi}_f)(\vec{Z}) &= (J_f)'_{\vec{U}}(\boldsymbol{\Xi}_f) \\ \widetilde{B}_s(\boldsymbol{\Xi}_s)(\vec{Z}) &= (J_s)'_{\vec{U}}(\boldsymbol{\Xi}_s) \end{aligned} \] for all $\boldsymbol{\Xi}_f \in X_f$ and $\boldsymbol{\Xi}_s \in X_s$. \label{adjoint_continuous_problem} \end{problem} \subsubsection{Semi-discrete Petrov-Galerkin formulation} \label{adjoint_semi_discrete_petrov_galerkin_formulation} The semi-discrete formulation of the adjoint problem is similar to that of the primal problem. The main difference is that this time the trial functions are piecewise constant in time, $\vec{Z}_k \in Y_k$, while the test functions are piecewise linear in time, $\boldsymbol{\Xi}_f \in X_{f, k}$, $\boldsymbol{\Xi}_s \in X_{s, k}$. 
After rearranging the terms in accordance with the test functions on every interval $I_n$, we arrive at the scheme \begin{subequations} \begin{flalign*} \widetilde{B}_f^n(\boldsymbol{\Xi}_{f, k})(\vec{Z}_k) = & \ \frac{k_{f, n}^{M_{n}}}{2}\widetilde{a}_f(\boldsymbol{\Xi}_{f, k}(t_n))(i_n^f\vec{Z}_{k}(t_n)) \\ & \quad+ \sum_{m = 1}^{M_{n} - 1} \bigg\{ (\eta_{f, k}(t^m_{f, n}), z_{f, k}(t^m_{f, n}) - z_{f, k}(t^{m + 1}_{f, n}))_f \nonumber \\ & \qquad \qquad +\frac{k^m_{f, n}}{2}\widetilde{a}_f(\boldsymbol{\Xi}_{f, k}(t_{f, n}^m))(i_n^f\vec{Z}_{k}(t_{f, n}^{m})) \nonumber\\ & \qquad \qquad + \frac{k^{m + 1}_{f, n}}{2}\widetilde{a}_f(\boldsymbol{\Xi}_{f, k}(t_{f, n}^m))(i_n^f\vec{Z}_{k}(t_{f, n}^{m + 1})) \bigg\} \nonumber \\ & \quad+ (\eta_{f, k}(t_{n - 1}), z_{f, k}(t_{n - 1}) - z_{f, k}(t_{f, n}^{1}))_f \nonumber\\ &\quad+ \frac{k^1_{f, n}}{2}\widetilde{a}_f(\boldsymbol{\Xi}_{f, k}(t_{n - 1}))(i_n^f\vec{Z}_{k}(t_{f, n}^{1})), \nonumber \\ \widetilde{B}^n_s(\boldsymbol{\Xi}_{s, k})(\vec{Z}_k) = & \ \frac{k_{s, n}^{L_n}}{2}\widetilde{a}_s(\boldsymbol{\Xi}_{s, k}(t_n))(i_n^s\vec{Z}_{k}(t_n)) \\ & \quad+ \sum_{l = 1}^{L_n - 1} \bigg\{ (\eta_{s, k}(t_{s, n}^l), z_{s, k}(t_{s, n}^l) - z_{s, k}(t_{s, n}^{l + 1}))_s \nonumber \\ & \qquad \qquad + (\xi_{s,k}(t_{s, n}^l), y_{s, k}(t_{s, n}^l) - y_{s, k}(t_{s, n}^{l + 1}))_s \nonumber \\ & \qquad \qquad + \frac{k^l_{s, n}}{2}\widetilde{a}_s(\boldsymbol{\Xi}_{s, k}(t_{s, n}^l))(i_n^s\vec{Z}_{k}(t_{s, n}^l)) \nonumber\\ & \qquad \qquad + \frac{k^{l + 1}_{s, n}}{2}\widetilde{a}_s(\boldsymbol{\Xi}_{s, k}(t^l_{s, n}))(i_n^s\vec{Z}_{k}(t^{l + 1}_{s, n})) \bigg\} \nonumber \\ & \quad+ (\eta_{s, k}(t_{n - 1}), z_{s, k}(t_{n - 1}) - z_{s, k}(t^1_{s, n}))_s \nonumber \\ & \quad + (\xi_{s, k}(t_{n - 1}), y_{s, k}(t_{n - 1}) - y_{s, k}(t_{s, n}^1))_s \nonumber \\ & \quad + \frac{k^1_{s, n}}{2}\widetilde{a}_s(\boldsymbol{\Xi}_{s, k}(t_{n - 1}))(i_n^s\vec{Z}_{k}(t_{s, n}^1)). 
\nonumber \end{flalign*} \end{subequations} Note that the adjoint problem does not have a designated initial value at the final time $T$. Instead, the starting value is implicitly defined by the variational formulation. The final schemes are constructed as sums over the macro time intervals $I_n$ plus contributions at the final time $T$ \begin{subequations} \begin{flalign*} \widetilde{B}_f(\boldsymbol{\Xi}_{f, k})(\vec{Z}_k) = & \sum_{n = 1}^{N} \widetilde{B}_f^n(\boldsymbol{\Xi}_{f, k})(\vec{Z}_k) + (\eta_{f, k}(T), z_{f, k}(T))_f,\\ \widetilde{B}_s(\boldsymbol{\Xi}_{s, k})(\vec{Z}_{k}) = & \sum_{n = 1}^{N}\widetilde{B}_s^n(\boldsymbol{\Xi}_{s, k})(\vec{Z}_{k}) + (\eta_{s, k}(T), z_{s, k}(T))_s + (\xi_{s, k}(T), y_{s,k}(T))_s. \end{flalign*} \end{subequations} With that at our disposal, we can formulate a semi-discrete adjoint variational problem: \begin{problem} For a given $\vec{U} \in X$ being the solution of Problem \ref{continuous_problem}, find $\vec{Z}_k \in Y_k$ such that: \[ \begin{aligned} \widetilde{B}_f(\boldsymbol{\Xi}_{f, k})(\vec{Z}_{k}) &= (J_f)'_{\vec{U}}(\boldsymbol{\Xi}_{f, k}) \\ \widetilde{B}_s(\boldsymbol{\Xi}_{s,k})(\vec{Z}_k) &= (J_s)'_{\vec{U}}(\boldsymbol{\Xi}_{s,k}) \end{aligned} \] for all $\boldsymbol{\Xi}_{f, k} \in X_{f, k}$ and $\boldsymbol{\Xi}_{s, k} \in X_{s, k}$. \label{adjoint_semi_discrete_problem} \end{problem} After formulating the problem in a semi-discrete manner, the decoupling methods from Section~\ref{decoupling_methods} can be applied.
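As a preview of the decoupling idea referenced above, the following toy sketch (the matrices, right-hand sides and relaxation parameter $\omega$ are our own illustrative choices, not the paper's discretization) alternates subproblem solves with relaxation on a symmetric $2\times 2$ block system and converges to the monolithic solution.

```python
import numpy as np

# Toy partitioned relaxation (our own illustration): the coupled system
# [A, C; C^T, B] [x_f; x_s] = [b_f; b_s] is solved by alternating
# subproblem solves with a relaxation parameter omega.
A, B = np.array([[4.0]]), np.array([[3.0]])
C = np.array([[1.0]])
b_f, b_s = np.array([1.0]), np.array([2.0])

x_f, x_s, omega = np.zeros(1), np.zeros(1), 0.7
for _ in range(50):
    x_f_new = np.linalg.solve(A, b_f - C @ x_s)        # fluid step, solid frozen
    x_s_new = np.linalg.solve(B, b_s - C.T @ x_f_new)  # solid step, fluid updated
    x_f = (1 - omega) * x_f + omega * x_f_new          # relaxed updates
    x_s = (1 - omega) * x_s + omega * x_s_new

# Reference: the monolithic (strongly coupled) solution of the block system.
monolithic = np.linalg.solve(np.block([[A, C], [C.T, B]]),
                             np.concatenate([b_f, b_s]))
```

At a fixed point of the iteration, both subproblem equations hold simultaneously, so the limit coincides with the monolithic solution; the shooting method replaces this fixed-point iteration by a Newton-Krylov solve of the interface equation.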
\subsection{A posteriori error estimate} \label{aposteriori_error} We define the primal residual, split into parts corresponding to the fluid and solid subproblems \begin{equation*} \rho(\vec{U})(\boldsymbol{\Phi}) \coloneqq \rho_f(\vec{U})(\boldsymbol{\Phi}_f) + \rho_s(\vec{U})(\boldsymbol{\Phi}_s), \end{equation*} where \begin{flalign*} \rho_f(\vec{U})(\boldsymbol{\Phi}_f) & \coloneqq F_f(\boldsymbol{\Phi}_f) - B_f(\vec{U})(\boldsymbol{\Phi}_f), \\ \rho_s(\vec{U})(\boldsymbol{\Phi}_s) & \coloneqq F_s(\boldsymbol{\Phi}_s) - B_s(\vec{U})(\boldsymbol{\Phi}_s). \nonumber \end{flalign*} Similarly, we establish the adjoint residual resulting from the adjoint problem \begin{equation*} \rho^*(\vec{Z})(\boldsymbol{\Xi})\coloneqq \rho_f^*(\vec{Z})(\boldsymbol{\Xi}_f) + \rho_s^*(\vec{Z})(\boldsymbol{\Xi}_s) \end{equation*} with \begin{flalign*} \rho_f^*(\vec{Z})(\boldsymbol{\Xi}_f) & \coloneqq (J_f)'_{\vec{U}}(\boldsymbol{\Xi}_f) - \widetilde{B}_f(\boldsymbol{\Xi}_f)(\vec{Z}) \\ \rho_s^*(\vec{Z})(\boldsymbol{\Xi}_s) & \coloneqq (J_s)'_{\vec{U}}(\boldsymbol{\Xi}_s)- \widetilde{B}_s(\boldsymbol{\Xi}_s)(\vec{Z}). \nonumber \end{flalign*} Becker and Rannacher \cite{BeckerRannacher2001} introduced the a posteriori error representation: \begin{multline} J(\vec{U}) - J(\vec{U}_k) = \frac{1}{2} \min_{\boldsymbol{\Phi}_k \in Y_{ k}}\rho(\vec{U}_k)(\vec{Z} - \boldsymbol{\Phi}_k) + \frac{1}{2}\min_{\boldsymbol{\Xi}_k \in X_k}\rho^*(\vec{Z}_k)(\vec{U} - \boldsymbol{\Xi}_k)\\ + \mathcal{O}(|\vec{U} - \vec{U}_k|^3, |\vec{Z} - \vec{Z}_k|^3) \label{estimator} \end{multline} This identity can be used to derive an a posteriori error estimate. 
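For a linear problem with a linear goal functional, the mechanism behind this representation can be checked directly: with $Au=b$, $J(u)=j^\top u$ and the dual solution $A^\top z = j$, one has $J(u)-J(u_k) = z^\top(b-Au_k)$ exactly, for any approximation $u_k$. A minimal numerical check on a random toy system (our own illustration):

```python
import numpy as np

# Dual-weighted residual identity for a linear model problem (toy example):
# J(u) - J(u_k) = z^T (b - A u_k), with A^T z = j the adjoint solution.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) + 5.0 * np.eye(5)   # well-conditioned random system
b, j = rng.normal(size=5), rng.normal(size=5)

u = np.linalg.solve(A, b)        # exact primal solution
z = np.linalg.solve(A.T, j)      # adjoint (dual) solution
u_k = u + rng.normal(size=5)     # an arbitrary "discrete" approximation

error = j @ u - j @ u_k          # true functional error
estimate = z @ (b - A @ u_k)     # dual-weighted residual
```

For nonlinear problems or functionals this identity only holds up to the third-order remainder in (\ref{estimator}), which is why the weights must be approximated.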
Two steps of approximation are required: first, the third order remainder is neglected and second, the approximation errors $\vec Z-\boldsymbol{\Phi}_k$ and $\vec U-\boldsymbol{\Xi}_k$, the \emph{weights}, are replaced by interpolation errors $\vec Z-i_k\vec Z$ and $\vec U-i_k \vec U$, which are then replaced by discrete reconstructions, since the exact solutions $\vec U, \vec Z\in X$ are not available. See \cite{MeidnerRichter2014} and \cite{SchmichVexler2008} for a discussion of different reconstruction schemes. Due to these approximation steps, this estimator is not precise and it does not result in rigorous bounds. The estimator consists of a primal and adjoint component. Each of them is split again into a fluid and a solid counterpart \begin{equation} \sigma_k \coloneqq \theta_{f, k} + \theta_{s, k} + \vartheta_{f, k} + \vartheta_{s, k}. \label{residualds_formula} \end{equation} The primal estimators are derived from the primal residuals using $\vec{U}_k$ and $\vec{Z}_k$ being the solutions to Problems~\ref{semi_discrete_problem} and \ref{adjoint_semi_discrete_problem}, respectively \begin{flalign*} \theta_{f,k} & \coloneqq \frac{1}{2} \rho_f(\vec{U}_k)(\vec{Z}_{f, k}^{(1)} - \vec{Z}_{f, k}), \\ \theta_{s,k} & \coloneqq \frac{1}{2} \rho_s(\vec{U}_k)(\vec{Z}_{s, k}^{(1)} - \vec{Z}_{s, k}). 
\nonumber \end{flalign*} The adjoint reconstructions $\vec{Z}_{f, k}^{(1)}$ and $\vec{Z}_{s, k}^{(1)}$ approximating the exact solution are constructed from $\vec{Z}_k$ using linear extrapolation (see Figure~\ref{reconstruction}, right) \begin{flalign*} \vec{Z}_{f, k}^{(1)}\big|_{I_{f, n}^m} \coloneqq & \frac{t - \bar{t}^{m + 1}_{f, n}}{\bar{t}^{m - 1}_{f, n} - \bar{t}^{m + 1}_{f, n}}\vec{Z}_{f, k}(t^{m - 1}_{f, n}) + \frac{t - \bar{t}^{m - 1}_{f, n}}{\bar{t}^{m + 1}_{f, n} - \bar{t}^{m - 1}_{f, n}}\vec{Z}_{f, k}(t^{m + 1}_{f, n}), \\ \vec{Z}_{s, k}^{(1)}\big|_{I_{s, n}^m} \coloneqq & \frac{t - \bar{t}^{m + 1}_{s, n}}{\bar{t}^{m - 1}_{s, n} - \bar{t}^{m + 1}_{s, n}}\vec{Z}_{s, k}(t^{m - 1}_{s, n}) + \frac{t - \bar{t}^{m - 1}_{s, n}}{\bar{t}^{m + 1}_{s, n} - \bar{t}^{m - 1}_{s, n}}\vec{Z}_{s, k}(t^{m + 1}_{s, n}), \nonumber \end{flalign*} with the interval midpoints \begin{equation}\label{midpoints} \bar{t}^m_{f, n} = \frac{t^m_{f, n} + t^{m - 1}_{f, n}}{2},\qquad \bar{t}^m_{s, n} = \frac{t^m_{s, n} + t^{m - 1}_{s, n}}{2}. \end{equation} The adjoint estimators are based on the adjoint residuals \begin{flalign*} \vartheta_{f,k} \coloneqq \frac{1}{2}\rho_f^*(\vec{Z}_k)(\vec{U}_{f, k}^{(2)} - \vec{U}_{f, k}), \\ \vartheta_{s,k} \coloneqq \frac{1}{2}\rho_s^*(\vec{Z}_k)(\vec{U}_{s, k}^{(2)} - \vec{U}_{s, k}). \nonumber \end{flalign*} The primal reconstructions $\vec{U}_{f, k}^{(2)}$ and $\vec{U}_{s, k}^{(2)}$ are extracted from $\vec{U}_k$ using quadratic reconstruction. The reconstruction is performed on the micro time mesh level on local patches consisting of two neighboring micro time-steps (see Figure~\ref{reconstruction}, left). In general, the patch structure does not have to coincide with the micro and macro time mesh structure - two micro time-steps belonging to the same local patch do not have to belong to the same macro time-step. Additionally, we demand that two micro time-steps from the same local patch have the same length.
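Both reconstructions can be sanity-checked on a single patch of two equal micro time-steps (a minimal sketch with our own sample data): the linear extrapolation reproduces linear functions from midpoint values, and the quadratic patch reconstruction reproduces quadratics from nodal values.

```python
import numpy as np

# Patch t^{m-1} = 0, t^m = 1, t^{m+1} = 2 with equal micro steps (toy data).
t = np.array([0.0, 1.0, 2.0])
tbar = 0.5 * (t[:-1] + t[1:])     # interval midpoints, cf. the midpoint formula

# Adjoint reconstruction Z^(1): the piecewise constant dG(0) values, read at
# the midpoints, are extended linearly.  Sample a linear function 2*t + 1.
z = 2.0 * tbar + 1.0
slope = (z[1] - z[0]) / (tbar[1] - tbar[0])

def z1(s):
    """Linear extrapolation through the two midpoint values."""
    return z[0] + slope * (s - tbar[0])

# Primal reconstruction U^(2): nodal values of the piecewise linear solution
# are lifted to the interpolating quadratic on the patch.  Sample u = t^2.
u = t**2
u2 = np.polynomial.Polynomial.fit(t, u, 2)
```

By construction, `z1` recovers $2t+1$ everywhere on the patch and `u2` recovers $t^2$, which is the accuracy gain the weights in the estimator rely on.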
\begin{figure}[t] \begin{center} \begin{tikzpicture}[scale=0.9] \draw (-2.75, -0.75) -- (2.75, -0.75); \draw[ultra thick] (0,0) parabola (2,2); \draw[ultra thick] (0,0) parabola (-2,2); \draw[fill = black] (0,0) circle (0.075cm); \draw[fill = black] (2,2) circle (0.075cm); \draw[fill = black] (-2,2) circle (0.075cm); \draw (-2, -0.65) -- (-2, -0.85); \draw (0, -0.65) -- (0, -0.85); \draw (2, -0.65) -- (2, -0.85); \node at (0, - 1.25) {$t^{m}_{f, n}$}; \node at (-2, - 1.25) {$t^{m - 1}_{f, n}$}; \node at (2, - 1.25) {$t^{m + 1}_{f, n}$}; \draw[dashed] (-2,-0.75) -- (-2,2); \draw[dashed] (0,-0.75) -- (0,0); \draw[dashed] (2,-0.75) -- (2,2); \draw (-2,2) -- (0,0) -- (2,2); \draw (-2.75, 1.5) -- (-2, 2); \draw (2, 2) -- (2.75, 0.5); \node at (2,2.5) {$\vec{U}_{f, k}(t^{m + 1}_{f, n})$}; \node at (-2,2.5) {$\vec{U}_{f, k}(t^{m - 1}_{f, n})$}; \node at (0,1.15) {$\vec{U}_{f, k}(t^{m}_{f, n})$}; \end{tikzpicture}\hspace{0.5cm} \begin{tikzpicture}[scale=0.9] \draw (0.5, 0.0) -- (7.5, 0.0); \draw (1.0, -0.1) -- (1.0, 0.1); \draw (3.0, -0.1) -- (3.0, 0.1); \draw (5.25, -0.1) -- (5.25, 0.1); \draw (7.25, -0.1) -- (7.25, 0.1); \draw (2.0, -0.05) -- (2.0, 0.05); \draw (6.25, -0.05) -- (6.25, 0.05); \node at (1.0, - 0.5) {$ t^{m - 2}_{f, n}$}; \node at (2.0, - 0.5) {$\bar{t}^{m - 1}_{f, n}$}; \node at (3.0, - 0.5) {$ t^{m - 1}_{f, n}$}; \node at (5.25, - 0.5) {$ t^{m}_{f, n}$}; \node at (6.25, - 0.5) {$\bar{t}^{m + 1}_{f, n}$}; \node at (7.25, - 0.5) {$t^{m + 1}_{f, n}$}; \draw[fill = black] (3,0.75) circle (0.075cm); \draw[fill = black] (2,0.75) circle (0.075cm); \draw[fill = black] (6.25,2.875) circle (0.075cm); \draw[fill = black] (7.25,2.875) circle (0.075cm); \draw (2, 0.75) -- (3, 1.25); \draw (5.25, 2.375) -- (6.25, 2.875); \draw[ultra thick] (3, 1.25) -- (5.25, 2.375); \draw[ultra thick] (3, 1.15) -- (3, 1.35); \draw[ultra thick] (5.25, 2.275) -- (5.25, 2.475); \draw[dashed] (3, 1.25) -- (3, 0); \draw (1, 0.75) -- (3, 0.75); \draw[dashed] (2, 0.75) -- (2, 0); 
\draw[dashed] (6.25,2.875) -- (6.25, 0); \draw[dashed] (7.25,2.875) -- (7.25, 0); \draw (5.25,2.875) -- (7.25,2.875); \draw[dashed] (5.25, 2.375) -- (5.25, 0); \node at (8.4,2.85) {$\vec{Z}_{f, k}(t^{m + 1}_{f, n})$}; \node at (1.5,1.35) {$\vec{Z}^{(1)}_{f, k}(\bar{t}^{m - 1}_{f, n})$}; \node at (4.15,0.75) {$\vec{Z}_{f, k}(t^{m - 1}_{f, n})$}; \node at (6,3.45) {$\vec{Z}^{(1)}_{f, k}(\bar{t}^{m + 1}_{f,n})$}; \end{tikzpicture} \end{center} \caption{Reconstruction of the primal solution $\vec{U}_{f, k}^{(2)}$ (left) and the adjoint solution $\vec{Z}_{f, k}^{(1)}$ (right).} \label{reconstruction} \end{figure} We compute the effectivity of the error estimate using \begin{equation*} \textnormal{eff}_k \coloneqq \frac{\sigma_k}{J(\vec{U}_{\textnormal{exact}}) - J(\vec{U}_k)} , \end{equation*} where $J(\vec{U}_{\textnormal{exact}})$ can be approximated by extrapolation in time. \subsection{Adaptivity} \label{adaptivity} The residuals (\ref{residualds_formula}) can be easily localised by restricting them to a specific subinterval \begin{alignat*}{2} & \theta_{f, k}^{n, m}\coloneqq \theta_{f, k}|_{I_{f, n}^{m}}, \qquad && \theta_{s, k}^{n, m}\coloneqq \theta_{s, k}|_{I_{s, n}^{m}}, \\ & \vartheta_{f, k}^{n, m}\coloneqq \vartheta_{f, k}|_{I_{f, n}^{m}}, \qquad && \vartheta_{s, k}^{n, m}\coloneqq \vartheta_{s, k}|_{I_{s, n}^{m}}. \end{alignat*} After defining global numbers of subintervals $M\coloneqq \sum_{n = 1}^N M_n$ and $L\coloneqq \sum_{n = 1}^N L_n$ we can compute an average for each of the components \begin{equation} \bar{\sigma}_{k}\coloneqq \frac{1}{2M} \sum_{n = 1}^{N} \sum_{m = 1}^{M_n} \left( |\theta_{f, k}^{n, m}| + |\vartheta_{f, k}^{n, m}| \right) + \frac{1}{2L} \sum_{n = 1}^{N} \sum_{l = 1}^{L_n} \left( | \theta_{s, k}^{n, l}| + |\vartheta_{s, k}^{n, l}|\right). 
\label{partial_average} \end{equation} In this way we obtain the refinement criteria \begin{flalign} & \left( \left|\theta_{f, k}^{n, m} \right| \geq \bar{\sigma}_k \textnormal{ or } \left|\vartheta_{f, k}^{n, m} \right| \geq \bar{\sigma}_k \right) \Longrightarrow \textnormal{ refine } I_{f, n}^m, \label{refine_criterium} \\ & \left( \left|\theta_{s, k}^{n, l} \right| \geq \bar{\sigma}_k \textnormal{ or } \left|\vartheta_{s, k}^{n, l} \right| \geq \bar{\sigma}_k \right) \Longrightarrow \textnormal{ refine } I_{s, n}^l. \nonumber \end{flalign} Taking into account the time interval partitioning structure, we arrive at the following algorithm: \begin{enumerate} \item Mark subintervals using the refinement criteria (\ref{refine_criterium}). \item Adjust the local patch structure - in case only one subinterval from a specific patch is marked, mark the other one as well (see Figure~\ref{local_patches}). \begin{figure}[t] \centering \begin{tikzpicture}[scale = 0.8] \draw (-2.75, -0.75) -- (2.75, -0.75); \draw[ultra thick] (0,0) parabola (2,2); \draw[ultra thick] (0,0) parabola (-2,2); \draw[fill = black] (0,0) circle (0.075cm); \draw[fill = black] (2,2) circle (0.075cm); \draw[fill = black] (-2,2) circle (0.075cm); \draw (-2, -0.65) -- (-2, -0.85); \draw (0, -0.65) -- (0, -0.85); \draw (2, -0.65) -- (2, -0.85); \node at (0, - 1.25) {$ t^{m}_{f, n}$}; \node at (-2, - 1.25) {$t^{m - 1}_{f, n}$}; \node at (2, - 1.25) {$ t^{m + 2}_{f, n}$}; \node at (1, - 1.25) {$ t^{m + 1}_{f, n}$}; \draw[dashed] (-2,-0.75) -- (-2,2); \draw[dashed] (0,-0.75) -- (0,0); \draw[dashed] (2,-0.75) -- (2,2); \draw (1, -0.65) -- (1, -0.85); \draw[->] (1.0, -2.1) -- (1.0, -1.65); \node at (1.0, -2.35) {refine}; \draw[thick, ->] (2.75,0.75) .. controls (3.25,1.25) and (4.25,1.25) ..
(4.75,0.75); \draw (4.75, -0.75) -- (9.75, -0.75); \draw[ultra thick] (7.5,0) parabola (9.5,2); \draw[ultra thick] (7.5,0) parabola (5.5,2); \draw[fill = black] (7.5,0) circle (0.075cm); \draw[fill = black] (9.5,2) circle (0.075cm); \draw[fill = black] (5.5,2) circle (0.075cm); \draw (5.5, -0.65) -- (5.5, -0.85); \draw (7.5, -0.65) -- (7.5, -0.85); \draw (9.5, -0.65) -- (9.5, -0.85); \node at (7.5, - 1.25) {$ t^{m + 1}_{f, n}$}; \node at (5.5, - 1.25) {$t^{m - 1}_{f, n}$}; \node at (9.5, - 1.25) {$t^{m + 3}_{f, n}$}; \draw[dashed] (5.5,-0.75) -- (5.5,2); \draw[dashed] (7.5,-0.75) -- (7.5,0); \draw[dashed] (9.5,-0.75) -- (9.5,2); \draw (6.5, -0.65) -- (6.5, -0.85); \draw (8.5, -0.65) -- (8.5, -0.85); \node at (6.5, - 1.25) {$ t^{m}_{f, n}$}; \draw[->] (8.5, -2.1) -- (8.5, -1.65); \node at (8.5, -2.35) {refine}; \node at (8.5, - 1.25) {$ t^{m + 2}_{f, n}$}; \draw[->] (6.5, -2.1) -- (6.5, -1.65); \node at (6.5, -2.35) {refine}; \end{tikzpicture} \caption{An example of preserving the local patch structure during the marking procedure: if the time step is refined, the other time step belonging to the same patch will also be refined.} \label{local_patches} \end{figure} \item Perform time refining. \item Adjust the macro time-step structure - in case within one macro time-step there exist a fluid and a solid micro time-step that coincide, split the macro time-step into two macro time-steps at this point (see Figure~\ref{splitting}). \begin{figure}[t] \centering \includegraphics[width=\textwidth]{Figure1.pdf} \caption{An example of a splitting mechanism of macro time-steps. On the left, we show the mesh before refinement: middle (in black) the macro nodes, top (in blue) the fluid nodes and bottom (in red) the solid nodes with subcycling. In the center sketch, we refine the first macro interval once within the fluid domain. Since one node is shared between fluid and solid, we refine the macro mesh to resolve subcycling. 
This final configuration is shown on the right.} \label{splitting} \end{figure} \end{enumerate} \section{Numerical results} \label{numerical_results} \subsection{Fluid subdomain functional} \label{fluid_functional} For the first example, we chose to test the derived error estimator on a goal functional concentrated in the fluid subproblem \begin{equation*} J_f(\vec{U})\coloneqq \int_{0}^T \nu\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x})\nabla v_f, \nabla v_f\right)_f \diff t, \quad J_s(\vec{U}) \coloneqq 0, \end{equation*} where $\widetilde{\Omega}_f = (2, 4) \times (0, 1)$ is the right half of the fluid subdomain. For this example, we also took the right-hand side concentrated in the fluid subdomain, presented in Configuration~\ref{configuration_1}. As the time interval, we chose $I = [0, 1]$. Then we have $$(J_f)'_{\vec{U}}(\boldsymbol{\Xi}_f) = \int_{0}^T 2\nu\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x})\nabla v_f, \nabla \eta_f\right)_f \diff t.$$ Since the functional is nonlinear, we use a 2-point Gaussian quadrature for integration in time. With~(\ref{midpoints}), the quadrature points read as \[ g_{f, n}^{m, 1} \coloneqq \bar t_{f,n}^m + \frac{t_{f, n}^m - t_{f, n}^{m - 1}}{2 \sqrt{3}},\quad g_{f, n}^{m, 2} \coloneqq \bar t_{f,n}^m - \frac{t_{f, n}^m - t_{f, n}^{m - 1}}{2 \sqrt{3}}.
\] With that at hand, we can formulate the discretization of the functional \begin{equation}\label{functional:fluid} \begin{aligned} (J_f)'_{\vec{U}}&(\boldsymbol{\Xi}_{f,k}) = \sum_{n = 1}^N\sum_{m = 1}^{M_n}\sum_{q=1}^2 \nu\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x}) j_{f,n}^m\nabla v_{f, k}(g_{f, n}^{m, q}), j_{f,n}^m\nabla\eta_{f, k}(g_{f, n}^{m, q})\right)_f \\ &= \sum_{n = 1}^N\sum_{q=1}^2 \bigg\{\nu(-g_{f, n}^{1, q} + t_{f, n}^1)\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x}) j_{f,n}^1\nabla v_{f, k}(g_{f, n}^{1, q}), \nabla\eta_{f, k}(t_{f, n}^0)\right)_f \\ & + \sum_{m = 1}^{M_n - 1} \Big\{\nu(-g_{f, n}^{m + 1, q} + t_{f, n}^{m + 1})\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x}) j_{f,n}^{m + 1}\nabla v_{f, k}(g_{f, n}^{m + 1, q}), \nabla\eta_{f, k}(t_{f, n}^m)\right)_f \\ & \qquad \qquad+ \nu(g_{f, n}^{m, q} - t_{f, n}^{m - 1})\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x}) j_{f,n}^{m}\nabla v_{f, k}(g_{f, n}^{m, q}), \nabla\eta_{f, k}(t_{f, n}^m)\right)_f\Big\} \\ &+ \nu(g_{f, n}^{M_n, q} - t_{f, n}^{M_n - 1})\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x}) j_{f,n}^{M_n}\nabla v_{f, k}(g_{f, n}^{M_n, q}), \nabla\eta_{f, k}(t_{f, n}^{M_n})\right)_f \bigg\}, \end{aligned} \end{equation} where the nodal interpolation is defined as \[ j_{f, n}^m\nabla v_{f, k}(t) \coloneqq \frac{t_{f, n}^m - t}{k_{f, n}^m} \nabla v_{f, k}(t_{f, n}^{m - 1}) + \frac{t - t_{f, n}^{m - 1}}{k_{f, n}^m}\nabla v_{f, k}(t_{f, n}^m). \] In Table~\ref{fluid_residuals_uniform_equal} we show results of the a posteriori error estimator on a sequence of uniform time meshes. Here, we considered the case without any micro time-stepping, that is, the time-step sizes in both the fluid and the solid subdomain are equal. This gives $N$ time-steps in the fluid domain and $N$ in the solid domain.
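The quadrature rule behind these points can be verified in isolation (our own check with generic interval endpoints): with weights $k^m_{f,n}/2$, the 2-point Gauss rule integrates polynomials up to degree three exactly, which is sufficient for the quadratic integrand of the functional.

```python
import numpy as np

# 2-point Gauss quadrature on [t0, t1] (toy endpoints): the points are the
# midpoint +/- (t1 - t0)/(2*sqrt(3)), the weights are both (t1 - t0)/2.
t0, t1 = 0.3, 0.7
tbar, h = 0.5 * (t0 + t1), t1 - t0
g1 = tbar + h / (2.0 * np.sqrt(3.0))
g2 = tbar - h / (2.0 * np.sqrt(3.0))

f = lambda t: t**3 - 2.0 * t + 1.0            # an arbitrary cubic test integrand
quad = 0.5 * h * (f(g1) + f(g2))              # quadrature value
exact = (t1**4 - t0**4) / 4.0 - (t1**2 - t0**2) + (t1 - t0)
```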
Table~\ref{fluid_residuals_uniform_equal} consists of partial residuals $\theta_{f,k},\theta_{s,k},\vartheta_{f,k}$ and $\vartheta_{s,k}$, overall estimate $\sigma_k$, extrapolated errors $\widetilde{J}-J(\vec U_k)$ and effectivities $\textnormal{eff}_k$. The values of the goal functional on the three finest meshes were used for extrapolation in time. As a result, we got the reference value $\widetilde{J} = 6.029469 \cdot 10^{-5}$. Except for the coarsest mesh, the estimator is very accurate and the effectivities are almost 1. On finer meshes, values of $\theta_{f,k}$ and $\vartheta_{f, k}$ are very close to each other which is due to the linearity of the coupled problem~\cite{BeckerRannacher2001}. A similar phenomenon happens for $\theta_{s, k}$ and $\vartheta_{s, k}$. The residuals are concentrated in the fluid subdomain, which suggests the usage of smaller time-step sizes in this space domain. \begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{% \begin{tabular}{c|cccc|ccc} \toprule $N$ & $\theta_{f, k}$ & $\theta_{s, k}$ & $\vartheta_{f, k}$ & $\vartheta_{s, k}$ & $\sigma_{k}$ & $\widetilde{J} - J(\vec{U}_k)$ & $\textnormal{eff}_k$ \\ \midrule 50 & $3.62\cdot 10^{-8}$ & $5.01 \cdot 10^{-10}$ & $1.05 \cdot 10^{-7}$ &$5.03 \cdot 10^{-10}$ & $1.42 \cdot 10^{-7}$ & $8.06 \cdot 10^{-8}$& 1.76 \\ 100 & $9.66 \cdot 10^{-9}$ & $1.37 \cdot 10^{-10}$ & $9.96 \cdot 10^{-9}$ & $1.40 \cdot 10^{-10}$ & $1.99 \cdot 10^{-8}$ & $2.05 \cdot 10^{-8}$ & 0.97 \\ 200 & $2.48 \cdot 10^{-9}$ & $3.00 \cdot 10^{-11}$ & $2.52 \cdot 10^{-9}$ & $3.02 \cdot 10^{-11}$ & $5.07 \cdot 10^{-9}$ & $5.22 \cdot 10^{-9}$ & 0.97 \\ 400 & $6.28 \cdot 10^{-10}$ & $9.44 \cdot 10^{-12}$ & $6.33 \cdot 10^{-10}$ & $9.56 \cdot 10^{-12}$& $1.28 \cdot 10^{-9}$ & $1.31 \cdot 10^{-9}$ & 0.98 \\ 800 & $1.58 \cdot 10^{-10}$ & $2.02 \cdot 10^{-12}$ & $1.58 \cdot 10^{-10}$ & $2.06 \cdot 10^{-12}$& $3.20 \cdot 10^{-10}$ & $3.28 \cdot 10^{-10}$ & 0.98 \\ \bottomrule \end{tabular} } \caption{Residuals and 
effectivities for fluid subdomain functional in case of uniform time-stepping in case $M_n, L_n = 1$ for all $n$.} \label{fluid_residuals_uniform_equal} \end{center} \end{table} Table~\ref{fluid_residuals_uniform_refined} collects results for another sequence of uniform time meshes. In this case, each of the macro time-steps in the fluid domain is split into two micro time-steps of the same size. That results in $2N$ time-steps in the fluid domain and $N$ in the solid domain. The performance is still highly satisfactory. The residuals remain mostly concentrated in the fluid subdomain. Additionally, after comparing Tables~\ref{fluid_residuals_uniform_equal} and~\ref{fluid_residuals_uniform_refined}, one can see that corresponding values of $\theta_{f, k}$ and $\vartheta_{f, k}$ are the same (value for $N = 800$ in Table~\ref{fluid_residuals_uniform_equal} and $N = 400$ in Table~\ref{fluid_residuals_uniform_refined}, etc.). Overall, introducing micro time-stepping improves performance and reduces extrapolated error $\widetilde{J} - J(\vec{U}_k)$ more efficiently. 
\begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{% \begin{tabular}{c|cccc|ccc} \toprule $N$ &$\theta_{f, k}$ & $\theta_{s, k}$ & $\vartheta_{f, k}$ & $\vartheta_{s, k}$ & $\sigma_{k}$ & $\widetilde{J} - J(\vec{U}_k)$& $\textnormal{eff}_k$ \\ \midrule 50 & $9.66 \cdot 10^{-9}$ & $4.99 \cdot 10^{-10}$ & $9.96 \cdot 10^{-9}$ &$5.01 \cdot 10^{-10}$ &$2.06 \cdot 10^{-8}$ & $2.17 \cdot 10^{-8}$& 0.95\\ 100 & $2.48 \cdot 10^{-9}$ & $1.37 \cdot 10^{-10}$ & $2.52 \cdot 10^{-9}$ & $1.39 \cdot 10^{-10}$&$5.28 \cdot 10^{-9}$ & $5.45 \cdot 10^{-9}$ & 0.97 \\ 200 & $6.28 \cdot 10^{-10}$ & $2.99 \cdot 10^{-11}$ & $6.33 \cdot 10^{-10}$ & $3.01 \cdot 10^{-11}$&$1.32 \cdot 10^{-9}$ & $1.43 \cdot 10^{-9}$ & 0.92 \\ 400 & $1.58 \cdot 10^{-10}$ & $9.44 \cdot 10^{-12}$ & $1.58 \cdot 10^{-10}$ & $9.56 \cdot 10^{-12}$&$3.35 \cdot 10^{-10}$ & $3.58 \cdot 10^{-10}$ & 0.94 \\ \bottomrule \end{tabular} } \caption{Residuals and effectivities for fluid subdomain functional in case of uniform time-stepping in case $M_n = 2$ and $L_n = 1$ for all $n$.} \label{fluid_residuals_uniform_refined} \end{center} \end{table} \begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{% \begin{tabular}{ccc|cccc|ccc} \toprule $N$ & $M$ & $L$ & $\theta_{f, k}$ & $\theta_{s, k}$ & $\vartheta_{f, k}$ & $\vartheta_{s, k}$& $\sigma_{k}$ & $\widetilde{J} - J(\vec{U}_k)$ &$\textnormal{eff}_k$\\ \midrule 50 & 56 & 50 & $3.08 \cdot 10^{-8}$ & $5.01 \cdot 10^{-10}$ & $3.16 \cdot 10^{-8}$ & $5.04 \cdot 10^{-10}$ & $6.34 \cdot 10^{-8}$ & $6.64 \cdot 10^{-8}$ & 0.95\\ 50 & 100 & 50 & $9.66 \cdot 10^{-9}$ & $4.99 \cdot 10^{-10}$ & $9.96 \cdot 10^{-9}$ & $5.01 \cdot 10^{-10}$& $2.06 \cdot 10^{-8}$ & $2.17 \cdot 10^{-8}$ & 0.95\\ 50 & 110 & 50 & $8.21 \cdot 10^{-9}$ & $4.99 \cdot 10^{-10}$ & $8.32 \cdot 10^{-9}$ & $5.02 \cdot 10^{-10}$& $1.75 \cdot 10^{-8}$ & $1.84 \cdot 10^{-8}$ & 0.95\\ 50 & 156 & 50 & $5.08 \cdot 10^{-9}$ & $4.99 \cdot 10^{-10}$ & $5.18 \cdot 10^{-9}$ & $4.97 \cdot 10^{-10}$ & $1.13 \cdot 
10^{-8}$ & $1.20 \cdot 10^{-8}$ & 0.94\\ \bottomrule \end{tabular}} \caption{Residuals and effectivities for fluid subdomain functional in case of adaptive time-stepping.} \label{fluid_residuals_adaptive} \end{center} \end{table} In Table \ref{fluid_residuals_adaptive} we present findings in the case of adaptive time mesh refinement. We chose an initial configuration of uniform time-stepping without micro time-stepping for $N = 50$ and applied a sequence of adaptive refinements. On every level of refinement, the total number of time-steps is $M + L$. One can see that since the error is concentrated in the fluid domain, only time-steps corresponding to this subdomain were refined. Again, the effectivity is very good. The extrapolated error $\widetilde{J} - J(\vec{U}_k)$ is even more efficiently reduced. \subsection{Solid subdomain functional} \label{solid_functional} For the sake of symmetry, for the second example, we chose a functional concentrated in the solid subdomain \begin{equation*} J_f(\vec{U}) = 0, \quad J_s(\vec{U}) = \int_{0}^T \lambda\left(\mathbbm{1}_{\widetilde{\Omega}_s}(\vec{x})\nabla u_s, \nabla u_s\right)_s \diff t, \end{equation*} where $\widetilde{\Omega}_s = (2, 4) \times (-1, 0)$ is the right half of the solid subdomain. This time we set the right-hand side according to Configuration~\ref{configuration_2}. Again, $I = [0, 1]$. The derivative reads as \[ (J_s)'_{\vec{U}}(\boldsymbol{\Xi}_s) = \int_{0}^T 2\lambda\left(\mathbbm{1}_{\widetilde{\Omega}_s}(\vec{x})\nabla u_s, \nabla \xi_s\right)_s \diff t, \] and allows for a discretization according to~(\ref{functional:fluid}). Similarly, Table~\ref{solid_residuals_uniform_equal} gathers results for a sequence of uniform meshes without any micro time-stepping ($N + N$ micro time-steps). The last three solutions are used for extrapolation in time, which gives $\widetilde{J} = 3.458826 \cdot 10^{-4}$. Also for this example, the effectivity is very satisfactory.
On the finest discretization, the effectivity slightly declines. This might come from the limited accuracy of the reference value. Once more, on finer meshes, fluid residuals $\theta_{f, k}$, $\vartheta_{f, k}$ and solid residuals $\theta_{s, k}$ $\vartheta_{s, k}$ have similar values. This time, the residuals are concentrated in the solid subdomain and, in this case, the discrepancy is a bit bigger. \begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{c|cccc|ccc} \toprule $N$ & $\theta_{f, k}$ & $\theta_{s, k}$ &$\vartheta_{f, k}$ & $\vartheta_{s, k}$&$\sigma_{k}$ &$\widetilde{J} - J(\vec{U}_k)$ &$\textnormal{eff}_k$ \\ \midrule 50 & $2.03 \cdot 10^{-10}$ & $2.66 \cdot 10^{-6}$ & $1.93 \cdot 10^{-10}$ &$1.03 \cdot 10^{-5}$ & $1.30 \cdot 10^{-5}$ & $2.49 \cdot 10^{-5}$& 0.52 \\ 100 & $4.53 \cdot 10^{-11}$ & $2.59 \cdot 10^{-6}$ & $4.26 \cdot 10^{-11}$ & $2.67 \cdot 10^{-6}$& $5.26 \cdot 10^{-6}$ & $4.77 \cdot 10^{-6}$ & 1.10 \\ 200 & $1.28 \cdot 10^{-11}$ & $5.18 \cdot 10^{-7}$ & $1.26 \cdot 10^{-11}$ & $5.21 \cdot 10^{-7}$& $1.04 \cdot 10^{-6}$ & $9.80 \cdot 10^{-7}$ & 1.06 \\ 400 & $3.30 \cdot 10^{-12}$ & $1.17 \cdot 10^{-7}$ & $3.29 \cdot 10^{-12}$ & $1.17 \cdot 10^{-7}$& $2.34 \cdot 10^{-7}$ & $2.23 \cdot 10^{-7}$ & 1.05 \\ 800 & $8.32 \cdot 10^{-13}$ & $2.82 \cdot 10^{-8}$ & $8.32 \cdot 10^{-13}$ & $2.80 \cdot 10^{-8}$& $5.62 \cdot 10^{-8}$ & $5.07 \cdot 10^{-8}$ & 1.11 \\ \bottomrule \end{tabular}} \caption{Residuals and effectivities for solid subdomain functional in case of uniform time-stepping in case $M_n, L_n = 1$ for all $n$.} \label{solid_residuals_uniform_equal} \end{center} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=\textwidth]{Figure2.pdf} \end{center} \caption{Adaptive meshes for the solid functional. Top: uniform initial mesh; middle: 2 steps of adaptive refinement; bottom: 4 steps. Each plot shows the macro mesh (middle), the fluid mesh (top, in blue) and the solid mesh (bottom, in red). 
} \label{fig:refine} \end{figure} In Table~\ref{solid_residuals_uniform_refined} we display outcomes for a sequence of uniform meshes where each of the macro time-steps in the solid subdomain is split into two micro time-steps. That gives $N + 2N$ time-steps. Introducing micro time-stepping does not have a negative impact on the effectivity and significantly saves computational effort. Corresponding values of $\theta_{s,k}$ and $\vartheta_{s, k}$ in Tables \ref{solid_residuals_uniform_equal} and \ref{solid_residuals_uniform_refined} are almost the same. Residuals remain mostly concentrated in the solid subdomain. \begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{% \begin{tabular}{c|cccc|ccc} \toprule $N$ &$\theta_{f, k}$ &$\theta_{s, k}$ & $\vartheta_{f, k}$ &$\vartheta_{s, k}$&$\sigma_{k}$ & $\widetilde{J} - J(\vec{U}_k)$ & $\textnormal{eff}_k$ \\ \midrule 50 & $4.13 \cdot 10^{-10}$ & $2.61 \cdot 10^{-6}$ & $1.91 \cdot 10^{-9}$ &$2.68 \cdot 10^{-6}$ & $5.29 \cdot 10^{-6}$ & $4.68 \cdot 10^{-6}$& 1.13\\ 100 & $8.69 \cdot 10^{-11}$ & $5.20 \cdot 10^{-7}$ & $-3.72 \cdot 10^{-11}$ & $5.23 \cdot 10^{-7}$& $1.04 \cdot 10^{-6}$ & $9.54 \cdot 10^{-7}$ & 1.09 \\ 200 & $1.80 \cdot 10^{-11}$ & $1.17 \cdot 10^{-7}$ & $1.40 \cdot 10^{-12}$ & $1.17 \cdot 10^{-7}$ & $2.34 \cdot 10^{-7}$ & $2.16 \cdot 10^{-7}$ & 1.08 \\ 400 & $3.94 \cdot 10^{-12}$ & $2.82 \cdot 10^{-8}$ & $1.87 \cdot 10^{-12}$ & $2.80 \cdot 10^{-8}$ & $5.62 \cdot 10^{-8}$ & $4.90 \cdot 10^{-8}$ & 1.15 \\ \bottomrule \end{tabular}} \caption{Residuals and effectivities for solid subdomain functional in case of uniform time-stepping in case $M_n = 1$ and $L_n = 2$ for all $n$.} \label{solid_residuals_uniform_refined} \end{center} \end{table} Following the fluid example, in Table~\ref{solid_residuals_adaptive} we show calculation results in the case of adaptive time mesh refinement. 
Here as well we took the uniform time-stepping without micro time-stepping for $N = 50$ as the initial configuration and the total number of time-steps is $M + L$. Except for the last entry, only the time-steps corresponding to the solid domain were refined. On the finest mesh, the effectivity deteriorates. However, adaptive time-stepping is still the most effective in reducing the extrapolated error $\widetilde{J} - J(\vec{U}_k)$. \begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{% \begin{tabular}{ccc|cccc|ccc} \toprule $N$ & $M$ & $L$ & $\theta_{f, k}$ &$\theta_{s, k}$ &$\vartheta_{f, k}$ &$\vartheta_{s, k}$& $\sigma_{k}$ &$\widetilde{J} - J(\vec{U}_k)$ & $\textnormal{eff}_k$\\ \midrule 50 & 50 & 88 & $3.77 \cdot 10^{-10}$ & $6.57 \cdot 10^{-6}$ & $6.72 \cdot 10^{-8}$ & $6.91 \cdot 10^{-6}$ & $1.35 \cdot 10^{-5}$ & $1.06 \cdot 10^{-5}$ & 1.28\\ 50 & 50 & 166 & $5.17 \cdot 10^{-10}$ & $1.35 \cdot 10^{-6}$ & $7.16 \cdot 10^{-8}$ & $1.38 \cdot 10^{-6}$ & $2.80 \cdot 10^{-6}$ & $2.52 \cdot 10^{-6}$ & 1.11\\ 50 & 50 & 286 & $5.80 \cdot 10^{-10}$ & $4.54 \cdot 10^{-7}$ & $4.16 \cdot 10^{-8}$ & $4.56 \cdot 10^{-7}$ & $9.52 \cdot 10^{-7}$ & $7.34 \cdot 10^{-7}$ & 1.30\\ 54 & 54 & 400 & $5.70 \cdot 10^{-10}$ & $1.19 \cdot 10^{-7}$ & $4.12 \cdot 10^{-8}$ & $1.19 \cdot 10^{-7}$ & $2.81 \cdot 10^{-7}$ & $1.10 \cdot 10^{-7}$ & 2.55\\ \bottomrule \end{tabular}} \caption{Residuals and effectivities for solid subdomain functional in case of adaptive time-stepping.} \label{solid_residuals_adaptive} \end{center} \end{table} Finally, we show in Figure~\ref{fig:refine} a sequence of adaptive meshes that result from this adaptive refinement strategy. In the top row, we show the initial mesh with 50 macros steps and no further splitting in fluid and solid. For a better presentation, we only show a small subset of the temporal interval $[0.1,0.4]$. 
In the middle plot, we show the mesh after 2 steps of adaptive refinement and in the bottom row after 4 steps of adaptive refinement. Each plot shows the macro mesh, the fluid mesh (above) and the solid mesh (below). As expected, this example leads to subcycling within the solid domain. For a finer approximation, the fluid problem also requires some local refinement. Whenever possible, we avoid excessive subcycling by refining the macro mesh as described in Section~\ref{adaptivity}. \section*{Abstract} \label{abstract} We consider the dynamics of a parabolic and a hyperbolic equation coupled on a common interface and develop time-stepping schemes that can use different time-step sizes for each of the subproblems. The problem is formulated in a strongly coupled (monolithic) space-time framework. Coupling two different step sizes monolithically gives rise to large algebraic systems of equations where multiple states of the subproblems must be solved at once. For efficiently solving these algebraic systems, we borrow ideas from the partitioned regime and present two decoupling methods, namely a partitioned relaxation scheme and a shooting method. Furthermore, we develop an a posteriori error estimator serving as a means for an adaptive time-stepping procedure. The goal is to optimally balance the time step sizes of the two subproblems. The error estimator is based on the dual weighted residual method and relies on the space-time Galerkin formulation of the coupled problem. As an example, we take a linear set-up coupling the heat equation to the wave equation, formulated monolithically in the space-time framework. In numerical test cases, we demonstrate the efficiency of the solution process and we also validate the accuracy of the a posteriori error estimator and its use for controlling the time step sizes.
\section{Introduction} \label{introduction} In this work, we consider surface-coupled multiphysics problems that are inspired by fluid-structure interaction (FSI) problems~\cite{Richter2017}. We couple the heat equation with the wave equation through an interface, on which coupling conditions of Dirichlet-Neumann type, typical for FSI, act. Despite its simplicity, this setting retains a key feature of FSI: each of the subproblems exhibits different temporal dynamics. The solution of the heat equation, as a parabolic problem, has smoothing properties; thus it can be characterized as a problem with slow temporal dynamics. The wave equation, on the other hand, is an example of a hyperbolic equation with highly oscillatory properties. FSI problems are characterized by two specific difficulties: the coupling of an equation of parabolic type with one of hyperbolic type gives rise to regularity problems at the interface. Further, the added mass effect~\cite{CausinGerbeauNobile2005}, which is present for problems coupling materials of a similar density, calls for discretization and solution schemes which are strongly coupled. This is the monolithic approach for modeling FSI, in contrast to partitioned approaches, where each of the subproblems is treated and solved as a separate system. While the monolithic approach allows for a more rigorous mathematical setting and the use of large time steps, the partitioned approach allows using fully optimized separate techniques for both of the subproblems. Most realizations for FSI, such as the technique described here, have to be regarded as a blend of both philosophies: while the formulation and discretization are monolithic, ideas of partitioned approaches are borrowed for solving the algebraic problems. Since the two problems feature distinct time scales, the use of multirate time-stepping schemes with adapted step sizes for fluid and solid suggests itself.
For parabolic problems, the concept of multirate time-stepping was discussed in \cite{Dawson1991}, \cite{Blum1992} and \cite{Faille2009}. In the hyperbolic setting, it was considered in~\cite{BergerMarsha1985}, \cite{Collino2003part1}, \cite{Collino2003part2} and \cite{Piperno2006}. In the context of fluid-structure interactions, such subcycling methods are used in aeroelasticity~\cite{Piperno1997}, where explicit time integration schemes are used for the flow problem and implicit schemes for the solid problem~\cite{DeMoerlooseetal2018}. In the low Reynolds number regime, common in hemodynamics, the situation is different. Here, implicit and strongly coupled schemes are required by the added mass effect. Hence, large time steps can be applied for the flow problem, but smaller time steps might be required within the solid. A study on benchmark problems in fluid dynamics (Sch\"afer, Turek '96~\cite{SchaeferTurek1996}) and FSI presented in~\cite{HronTurek2006} shows that the FSI problem demands a much smaller step size, although the problem configurations and the resulting nonstationary dynamics are very similar, with oscillating solutions of nearly the same period~\cite{RichterWick2015_time}. We will derive a monolithic variational formulation for FSI-like problems that can handle different time-step sizes in the two subproblems. Implicit coupling of two problems with different step sizes gives rise to very large systems where multiple states must be solved at once. In Section~\ref{decoupling_methods} we study different approaches for an efficient solution of these coupled systems, a simple partitioned relaxation scheme and a shooting-like approach. Next, in Section~\ref{goal_oriented_estimation} we present a posteriori error estimators based on the dual weighted residual method~\cite{BeckerRannacher2001} for automatically identifying optimal step sizes for the two subproblems.
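The two decoupling strategies can be illustrated on a scalar toy problem. The sketch below is purely illustrative and hypothetical — the two "solves" are simple stand-ins, not the heat/wave discretization of this paper: partitioned relaxation is a damped fixed-point iteration on the interface value, while the shooting approach seeks a root of the interface residual; a finite-difference Newton step stands in here for the matrix-free Newton-Krylov method.

```python
# Toy stand-ins for the subproblem solves (hypothetical, not the paper's
# discretization): each maps an interface value to the value it produces.
def fluid_solve(w):
    return 0.6 * w + 1.0

def solid_solve(w):
    return 0.5 * w + 0.2

def residual(w):
    # Shooting residual r(w) = S(F(w)) - w; its zero is the coupled solution.
    return solid_solve(fluid_solve(w)) - w

def relaxation(w=0.0, omega=0.7, tol=1e-12, max_iter=200):
    # Partitioned relaxation: damped fixed-point iteration on the interface.
    for it in range(1, max_iter + 1):
        w_new = solid_solve(fluid_solve(w))
        if abs(w_new - w) < tol:
            return w_new, it
        w = (1.0 - omega) * w + omega * w_new
    return w, max_iter

def shooting(w=0.0, tol=1e-12, max_iter=50, eps=1e-7):
    # Newton iteration on the interface residual; the derivative is
    # approximated by finite differences, mimicking a matrix-free method.
    for it in range(1, max_iter + 1):
        r = residual(w)
        if abs(r) < tol:
            return w, it
        drdw = (residual(w + eps) - r) / eps
        w -= r / drdw
    return w, max_iter

w_relax, n_relax = relaxation()
w_shoot, n_shoot = shooting()
```

Both iterations converge to the same coupled interface value; for this linear toy the Newton-based shooting converges in far fewer iterations than the relaxation scheme, mirroring the behavior reported in the conclusion.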
Numerical studies on the efficiency of the time adaptation procedure are presented in Section \ref{numerical_results}. \section{Conclusion} In this paper, we have developed a multirate scheme and a temporal error estimate for a coupled problem that is inspired by fluid-structure interactions. The two subproblems, the heat equation and the wave equation, feature different temporal dynamics such that balanced approximation properties and stability demands ask for different step sizes. We introduced a monolithic variational Galerkin formulation for the coupled problem and then used a partitioned framework for solving the algebraic systems. Having different time-step sizes for each of the subproblems couples multiple states in each time step, which would require an enormous computational effort. To solve this, we discussed two different decoupling methods: first, a simple relaxation scheme that alternates between the fluid and the solid problem, and second, a shooting-like method, where we defined a root-finding problem on the interface and used a matrix-free Newton-Krylov method for quickly approximating its zero. Both methods were able to successfully decouple our specific example and showed good robustness with respect to different subcycling of the multirate scheme in the fluid or the solid domain. However, the convergence of the shooting method was faster and it required fewer evaluations of the variational formulation. As the next step, we introduced a goal-oriented error estimate based on the dual weighted residual method to estimate errors with regard to functional evaluations. The monolithic space-time Galerkin formulation allowed us to split the residual errors into contributions from the fluid and solid problems. Several numerical results for two different goal functionals show very good effectivity of the error estimate. Finally, we established the localization of the error estimator.
This allowed us to derive an adaptive refinement scheme for choosing optimal distinct time meshes for each problem. In future work, it remains to extend the methodology to nonlinear problems, in particular, to fully coupled fluid-structure interactions. \section{Acknowledgements} Both authors acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 314838170, GRK 2297 MathCoRe. TR further acknowledges support by the Federal Ministry of Education and Research of Germany (project number 05M16NMA). \bibliographystyle{ieeetr} \section{Presentation of the model problem} \label{presentation_of_the_problem} Let us consider the time interval $I = [0, T]$ and two rectangular domains $\Omega_f = (0, 4) \times (0, 1)$, $\Omega_s = (0, 4) \times (-1, 0)$. The interface is defined as $\Gamma \coloneqq \overline{\Omega}_f \cap \overline{\Omega}_s = (0,4)\times \{0\}$. The remaining boundaries are denoted by $\Gamma_f^1 \coloneqq \{0 \} \times (0, 1)$, $\Gamma_f^2 \coloneqq (0, 4) \times \{ 1\}$, $\Gamma_f^3 \coloneqq \{4 \} \times (0, 1)$ and $\Gamma_s^1 \coloneqq \{0 \} \times (-1, 0)$, $\Gamma_s^2 \coloneqq (0, 4) \times \{ -1\}$, $\Gamma_s^3 \coloneqq \{4 \} \times (-1, 0)$. The domain is illustrated in Figure \ref{domain}. In the domain $\Omega_f$ we pose the heat equation parameterized by the diffusion parameter $\nu > 0$ with an additional transport term controlled by $\beta \in \mathds{R}^2$. In the domain $\Omega_s$ we pose the wave equation. By $\sqrt{\lambda}$ we denote the propagation speed and by $\delta \geq 0$ a damping parameter. On the interface, we set both kinematic and dynamic coupling conditions. The former guarantees the continuity of displacement and velocity along the interface. The latter establishes the balance of normal stresses.
The exact values of the parameters read as \[ \nu = 0.001,\quad \beta = \left(\begin{matrix} 2 \\ 0 \end{matrix}\right),\quad \lambda = 1000,\quad \delta = 0.1 \] and the complete set of equations is given by \begin{equation*} \begin{cases} \partial_t v_f - \nu \Delta v_f + \beta \cdot \nabla v_f = g_f,\quad - \Delta u_f = 0 & \textnormal{in } I \times \Omega_f, \\ \partial_t v_s - \lambda \Delta u_s - \delta \Delta v_s = g_s, \quad \partial_t u_s = v_s & \textnormal{in } I \times \Omega_s, \\ u_f = u_s,\quad v_f = v_s,\quad \lambda \partial_{\vec{n}_s} u_s = -\nu \partial_{\vec{n}_f}v_f & \textnormal{on } I \times \Gamma, \\ u_f = v_f = 0 & \textnormal{on } I \times \Gamma_f^2, \\ u_s = v_s = 0 & \textnormal{on } I \times \Gamma_s^1\cup \Gamma_s^3, \\ u_f(0) = v_f(0) = 0 & \textnormal{in } \Omega_f, \\ u_s(0) = v_s(0) = 0 & \textnormal{in } \Omega_s \\ \end{cases} \end{equation*} We use symbols $\vec{n}_f$ and $\vec{n}_s$ to distinguish between normal vectors for different space domains. The external forces are set to be products of functions of space and time ${g_f(\vec{x}, t) \coloneqq g_f^1(\vec{x})g^2(t)}$ and ${g_s(\vec{x}, t) \coloneqq g_s^1(\vec{x})g^2(t)}$ where $g_f^1$, $g_s^1$ are space components and $g^2$ is a time component. We will consider two configurations of the right hand side. In Configuration \ref{configuration_1}, the right hand side is concentrated in $\Omega_f$ where the space component consists of an exponential function centered around $\left(\frac{1}{2}, \frac{1}{2} \right)$. For Configuration \ref{configuration_2} we take a space component concentrated in $\Omega_s$ with an exponential function centered around $\left(\frac{1}{2}, -\frac{1}{2} \right)$. 
\begin{configuration}\label{configuration_1} \begin{alignat*}{2} g_f^1(\vec{x})\coloneqq &e^{-\left((x_1 - \frac{1}{2})^2 + (x_2 - \frac{1}{2})^2\right)}, \quad && \vec{x} \in \Omega_f \\ g_s^1(\vec{x})\coloneqq &0, \quad && \vec{x} \in \Omega_s \end{alignat*} \end{configuration} \begin{configuration}\label{configuration_2} \begin{alignat*}{2} g_f^1(\vec{x})\coloneqq &0,\quad && \vec{x} \in \Omega_f \\ g_s^1(\vec{x})\coloneqq &e^{-\left((x_1 - \frac{1}{2})^2 + (x_2 + \frac{1}{2})^2\right)}, \quad && \vec{x} \in \Omega_s \end{alignat*} \end{configuration} For both cases, we chose the same time component $g^2(t) \coloneqq \mathbbm{1}_{\big[\lfloor t \rfloor, \lfloor t \rfloor + \frac{1}{10}\big)}(t)$ for $t \in I$ illustrated in Figure \ref{g_2}. \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 1.0] \draw (0.0, 1.0) -- (4.0, 1.0) -- (4.0, -1.0); \draw (4.0, -1.0) -- (0.0, -1.0) -- (0.0, 1.0); \draw[dashed] (0.0, 0.0) -- (4.0, 0.0); \node at (0.35, 0.25) {$\Omega_f$}; \node at (-0.35, 0.5) {$\Gamma_f^1$}; \node at (-0.35, -0.5) {$\Gamma_s^1$}; \node at (4.35, 0.5) {$\Gamma_f^3$}; \node at (4.35, -0.5) {$\Gamma_s^3$}; \node at (0.35, -0.75) {$\Omega_s$}; \node at (2.0, 1.35) {$\Gamma_f^2$}; \node at (2.0, 0.35) {$\Gamma$}; \node at (2.0, - 1.35) {$\Gamma_s^2$}; \end{tikzpicture} \caption{View of the domain split into ``fluid'' $\Omega_f$ and ``solid'' $\Omega_s$ along the common interface~$\Gamma$. 
} \label{domain} \end{center} \end{figure} \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.85] \draw (-0.5, 0) -- (5.5, 0); \draw[ultra thick] (0,0.5) -- (0.5, 0.5); \draw[ultra thick] (0.5,0) -- (5, 0); \draw[fill = black] (0, 0.5) circle (0.075cm); \draw[fill = white] (0.5, 0.5) circle (0.075cm); \draw[fill = black] (0.5, 0) circle (0.075cm); \draw[fill = white] (5, 0) circle (0.075cm); \node at (0, -0.3) {0}; \node at (0.5, -0.3) {0.1}; \node at (5, -0.3) {1}; \draw[white] (0.0, 1.0) -- (1.0, 1.0); \end{tikzpicture} \caption{Function $g^2$ on $I=[0,T]$ for $T = 1$. } \label{g_2} \end{center} \end{figure} Since our example may be treated as a simplified case of an FSI problem, we will use the corresponding nomenclature in the text. We will refer to the domain $\Omega_f$ as the \textit{fluid domain} and the problem defined there as the \textit{fluid problem}. Similarly, we will use the phrases \textit{solid domain} and \textit{solid problem}. \subsection{Continuous variational formulation} As the first step, let us introduce a family of Hilbert spaces, which will later be used as the trial and test spaces for our variational problems \begin{equation*} X(V) = \left\{ v \in L^2(I, V) |\; \partial_t v \in L^2(I, V^*) \right\}. \end{equation*} Because we would like to incorporate the Dirichlet boundary conditions on $\Gamma_f^2$ and $\Gamma_s^1$, $\Gamma_s^3$ into the solution spaces, for $\Upsilon \subset \partial \Omega$, we define \begin{equation*} H^1_0(\Omega; \Upsilon) = \left\{v \in H^1(\Omega)|\; v_{|\Upsilon} = 0 \right\}. \end{equation*} Note that $\left(H^1_0(\Omega;\Upsilon)\right)^* = H^{-1}(\Omega)$. For our example, we choose $H_f \coloneqq H^1_0(\Omega_f; \Gamma_f^2)$ and $H_s \coloneqq H^1_0(\Omega_s;\Gamma_s^1\cup\Gamma_s^3)$ as spatial function spaces. We take $X_f \coloneqq (X(H_f))^2$, $X_s\coloneqq (X(H_s))^2$ and $X = X_f \times X_s$ as space-time trial and test function spaces.
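Returning briefly to the data of the problem: the time component $g^2$ used in both configurations is the indicator of $\big[\lfloor t \rfloor, \lfloor t \rfloor + \tfrac{1}{10}\big)$ and can be evaluated directly. A minimal sketch (the function name is ours):

```python
import math

def g2(t):
    # Indicator of [floor(t), floor(t) + 0.1): the source is switched on
    # during the first tenth of every unit time interval.
    return 1.0 if (t - math.floor(t)) < 0.1 else 0.0
```

For $T = 1$ this reproduces the profile of Figure \ref{g_2}: the source is on over $[0, 0.1)$ and off for the rest of the interval.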
Below we present notations for inner products and duality pairings: \begin{alignat*}{2} & (u, \varphi)_f\coloneqq (u, \varphi)_{L^2(\Omega_f)}, \quad && \langle u, \varphi \rangle_f \coloneqq \langle u, \varphi \rangle_{H^{-1}(\Omega_f) \times H_f}, \\ & (u, \varphi)_s\coloneqq (u, \varphi)_{L^2(\Omega_s)}, && \langle u, \varphi \rangle_s \coloneqq \langle u, \varphi \rangle_{H^{-1}(\Omega_s) \times H_s}, \\ & && \langle u, \varphi \rangle_{\Gamma} \coloneqq \langle u, \varphi \rangle_{H^{-\frac{1}{2}}(\Gamma) \times H^{\frac{1}{2}}(\Gamma)} \end{alignat*} To shorten the notation, we introduce the abbreviations \[ \begin{aligned} \vec{U}_f &\coloneqq \left(\begin{matrix} u_f \\ v_f \end{matrix}\right),& \vec{U}_s &\coloneqq \left(\begin{matrix} u_s \\ v_s \end{matrix}\right), & \vec{U} &\coloneqq \left(\begin{matrix} \vec{U}_f \\ \vec{U}_s \end{matrix}\right),\\ \boldsymbol{\Phi}_f &\coloneqq \left(\begin{matrix} \varphi_f \\ \psi_f \end{matrix}\right),& \boldsymbol{\Phi}_s &\coloneqq \left(\begin{matrix} \varphi_s \\ \psi_s \end{matrix}\right), & \boldsymbol{\Phi} &\coloneqq \left(\begin{matrix} \boldsymbol{\Phi}_f \\ \boldsymbol{\Phi}_s \end{matrix}\right). \end{aligned} \] After these preliminaries, we are ready to construct a continuous variational formulation of the problem. 
We define operators describing the fluid and the solid problem \begin{subequations} \begin{flalign} B_f(\vec{U})(\boldsymbol{\Phi}_f) \coloneqq &\int_I \langle \partial_t v_f, \varphi_f \rangle_f \diff t + \int_I a_f(\vec{U})(\boldsymbol{\Phi}_f) \diff t + (v_f(0), \varphi_f(0))_f, \label{b_f} \\ B_s(\vec{U})(\boldsymbol{\Phi}_s) \coloneqq &\int_I \langle \partial_t v_s, \varphi_s \rangle_s \diff t + \int_I \langle \partial_t u_s, \psi_s \rangle_s \diff t + \int_I a_s(\vec{U})(\boldsymbol{\Phi}_s) \diff t \label{b_s} \\ &\qquad + (v_s(0), \varphi_s(0))_s + (u_s(0), \psi_s(0))_s, \nonumber\\ F_f(\boldsymbol{\Phi}_f) \coloneqq &\int_I ( g_f, \varphi_f )_f \diff t, \nonumber\\ F_s(\boldsymbol{\Phi}_s) \coloneqq &\int_I ( g_s, \varphi_s )_s \diff t \nonumber \end{flalign} \end{subequations} with \begin{subequations} \begin{flalign} a_f(\vec{U})(\boldsymbol{\Phi}_f) & \coloneqq (\nu \nabla v_f, \nabla \varphi_f)_f + (\beta \cdot \nabla v_f, \varphi_f)_f + (\nabla u_f, \nabla \psi_f )_f \label{a_f} \\ &\qquad - \langle \partial_{\vec{n}_f} u_f, \psi_f \rangle_{\Gamma} + \frac{\gamma}{h}\langle u_f - u_s, \psi_f \rangle_{\Gamma} \nonumber \\ &\qquad - \langle \nu \partial_{\vec{n}_f} v_f, \varphi_f \rangle_{\Gamma} + \frac{\gamma}{h} \langle v_f - v_s, \varphi_f \rangle_{\Gamma}, \nonumber \\ a_s(\vec{U})(\boldsymbol{\Phi}_s) & \coloneqq (\lambda \nabla u_s, \nabla \varphi_s)_s + (\delta \nabla v_s, \nabla \varphi_s)_s - (v_s, \psi_s )_s \label{a_s} \\ & \qquad + \langle \nu \partial_{\vec{n}_f} v_f, \varphi_s \rangle_{\Gamma} - \langle \delta \partial_{\vec{n}_s} v_s, \varphi_s \rangle_{\Gamma}. \nonumber \end{flalign} \end{subequations} All the Laplacian terms were integrated by parts and the dynamic coupling condition was added. The kinematic coupling condition was incorporated into the fluid problem, while the dynamic condition became a part of the solid problem. 
The Dirichlet boundary conditions over the interface $\Gamma$ were formulated in a weak sense using Nitsche's method \cite{Nitsche1971}. We arbitrarily set $\gamma = 1000$, while $h$ is the mesh size. The compact version of the variational problem presents itself as: \begin{problem} Find $\vec{U} \in X$ such that \begin{flalign*} & B_f(\vec{U})(\boldsymbol{\Phi}_f) = F_f(\boldsymbol{\Phi}_f) \\ & B_s(\vec{U})(\boldsymbol{\Phi}_s) = F_s(\boldsymbol{\Phi}_s) \end{flalign*} for all $\boldsymbol{\Phi}_f \in X_f $ and $\boldsymbol{\Phi}_s \in X_s $. \label{continuous_problem} \end{problem} \subsection{Semi-discrete Petrov-Galerkin formulation} \label{semi-discrete_formulation} One of the main challenges emerging from the discretization of Problem \ref{continuous_problem} is the construction of a satisfactory time interval partitioning. Our main objectives include: \begin{enumerate} \item \textbf{Handling coupling conditions} \\ For the time interval $I = [0, T]$ we introduce a coarse time-mesh which is shared by both of the subproblems \[ 0 = t_0 < t_1 < ... < t_N = T,\quad k_n = t_n - t_{n - 1}, \quad I_n = (t_{n - 1}, t_n]. \] We will refer to this mesh as a \textit{macro time mesh}. \item \textbf{Allowing for different time-step sizes (possibly non-uniform) in both subproblems} \\ For each of the subintervals $I_n = (t_{n - 1}, t_{n}]$ we create two distinct submeshes corresponding to each of the subproblems $$ t_{n - 1} = t_{f, n}^0 < t_{f, n}^1 < ... < t_{f, n}^{M_n} = t_n, \quad k_{f, n}^m = t_{f, n}^{m} - t_{f, n}^{m - 1}, \quad I_{f, n}^m = (t_{f, n}^{m - 1}, t_{f, n}^{m}],$$ $$ t_{n - 1} = t_{s, n}^0 < t_{s, n}^1 < ... < t_{s, n}^{L_n} = t_n, \quad k_{s, n}^l = t_{s, n}^{l} - t_{s, n}^{l - 1}, \quad I_{s, n}^l = (t_{s, n}^{l - 1}, t_{s, n}^{l}].$$ We will refer to these meshes as \textit{micro time meshes}. 
\end{enumerate} We define the grid sizes as $$k_f \coloneqq \max_{n = 1,...,N} \max_{m = 1,...,M_{n}}k_{f, n}^m,\quad k_s \coloneqq \max_{n = 1,...,N} \max_{l = 1,...,L_{n}}k_{s, n}^l,$$ $$k \coloneqq \max \{k_f, k_s \}.$$ As trial spaces, we choose spaces consisting of piecewise linear functions in time, \[ \begin{aligned} X^{1, n}_{f, k}& = \left\{ v \in C(\bar{I_n}, L^2(\Omega_f))|\; v|_{I_{f, n}^m} \in \mathcal{P}_1(I_{f, n}^m, H_f)\text{ for } m = 1,...,M_{n}\right\}, \\ X^1_{f, k}& = \left\{ v \in C(\bar{I}, L^2(\Omega_f))|\; v|_{I_n} \in X^{1, n}_{f, k} \text{ for } n = 1,...,N\right\}, \\ X^{1, n}_{s, k}& = \left\{ v \in C(\bar{I_n}, L^2(\Omega_s))|\; v|_{I_{s, n}^l} \in \mathcal{P}_1(I_{s, n}^l, H_s)\text{ for } l = 1,...,L_{n}\right\}, \\ X^1_{s, k} &= \left\{ v \in C(\bar{I}, L^2(\Omega_s))|\; v|_{I_n} \in X^{1, n}_{s, k} \text{ for } n = 1,...,N \right\}, \end{aligned} \] whereas we take spaces of piecewise constant functions as test spaces \[ \begin{aligned} Y^{0, n}_{f, k} &= \left\{ v \in L^2(I_n, L^2(\Omega_f))|\; v|_{I_{f, n}^m} \in \mathcal{P}_0(I_{f, n}^m, H_f) \text{ for } m = 1,...,M_{n} \right. \\ &\hspace{6cm} \left. \text{ and }v(t_{n - 1}) \in L^2(\Omega_f)\right\}, \\ Y^{0}_{f, k}& = \left\{ v \in L^2(I, L^2(\Omega_f))|\; v|_{I_{n}} \in Y^{0, n}_{f, k}\text{ for } n = 1,...,N\right\}, \\ Y^{0, n}_{s, k} &= \left\{ v \in L^2(I_n, L^2(\Omega_s))|\; v|_{I_{s, n}^l} \in \mathcal{P}_0(I_{s, n}^l, H_s)\text{ for }l = 1,...,L_{n} \right.\\ &\hspace{6cm} \left. \text{ and }v(t_{n - 1}) \in L^2(\Omega_s)\right\}, \\ Y^{0}_{s, k} &= \left\{ v \in L^2(I, L^2(\Omega_s))|\; v|_{I_{n}} \in Y^{0, n}_{s, k}\text{ for } n = 1,...,N\right\}. \end{aligned} \] By $\mathcal{P}_r(I,H)$ we denote the space of polynomials of degree at most $r$ with values in $H$.
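The macro/micro mesh structure just introduced can be sketched in a few lines; the subdivision counts below are hypothetical, chosen only to illustrate sub-cycling of the solid problem:

```python
import numpy as np

# Hypothetical sizes for illustration: T = 1, N = 4 macro steps, and
# uniform micro meshes with M_n = 2 fluid and L_n = 5 solid steps per
# macro interval (sub-cycling the faster hyperbolic problem).
T, N, M, L = 1.0, 4, 2, 5
macro = np.linspace(0.0, T, N + 1)

# Both micro meshes share the macro nodes but have different interior nodes.
fluid_micro = [np.linspace(macro[n], macro[n + 1], M + 1) for n in range(N)]
solid_micro = [np.linspace(macro[n], macro[n + 1], L + 1) for n in range(N)]

# Grid sizes k_f, k_s and k as defined above.
k_f = max(np.diff(t_n).max() for t_n in fluid_micro)
k_s = max(np.diff(t_n).max() for t_n in solid_micro)
k = max(k_f, k_s)
```

With these (made-up) counts, $k_f = 1/8$, $k_s = 1/20$ and $k = k_f$; in the adaptive procedure of the paper the counts $M_n$, $L_n$ vary per macro interval.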
To shorten the notation, we set \[ \begin{aligned} X_{f, k}^n&\coloneqq \left(X_{f, k}^{1, n} \right)^2,\quad& X_{s, k}^n &\coloneqq \left(X_{s, k}^{1, n} \right)^2, \quad& X_k^n &\coloneqq X_{f, k}^n \times X_{s, k}^n, \\ X_{f, k} &\coloneqq \left(X_{f, k}^1 \right)^2,& X_{s, k}&\coloneqq \left(X_{s, k}^1 \right)^2,& X_k & \coloneqq X_{f, k} \times X_{s, k}, \\ Y_{f, k}^n&\coloneqq \left(Y_{f, k}^{0, n} \right)^2,& Y_{s, k}^n& \coloneqq \left(Y_{s, k}^{0, n} \right)^2,& Y_k^n& \coloneqq Y_{f, k}^n \times Y_{s, k}^n, \\ Y_{f, k}& \coloneqq \left(Y_{f, k}^0 \right)^2,& Y_{s, k}& \coloneqq \left(Y_{s, k}^0 \right)^2,& Y_k& \coloneqq Y_{f, k} \times Y_{s, k}. \end{aligned} \] In general, the interior points of the fluid and solid micro time-meshes do not coincide, i.~e. for every $n = 1, ..., N$, $m = 1, ..., M_{n} - 1$, $l = 1,...,L_{n} - 1$ we may have $t_{f, n}^{m} \neq t_{s, n}^{l}$. Because of this, a function defined on the fluid micro time-mesh cannot be directly evaluated at the points of the solid micro time-mesh, and vice versa. To solve this problem, we introduce nodal interpolation operators \[ i_n^f:X^n \to X_f^n \times \mathcal{P}_1(I_n, X_s^n),\quad i_n^s:X^n \to \mathcal{P}_1(I_n, X_f^n) \times X_s^n, \] where $X^n \coloneqq X\Big|_{I_n}$, $X_f^n \coloneqq X_f\Big|_{I_n}$, $X_s^n \coloneqq X_s\Big|_{I_n}$ and \begin{equation} \label{interpolation_operator_primal} \begin{aligned} i_n^f \vec{U}(t) &\coloneqq \left(\begin{matrix} \vec{U}_f(t) \\ \frac{t_n - t}{k_n}\vec{U}_s(t_{n - 1}) + \frac{t - t_{n - 1}}{k_n}\vec{U}_s(t_n) \end{matrix}\right), \\ i_n^s \vec{U}(t) &\coloneqq \left(\begin{matrix} \frac{t_n - t}{k_n}\vec{U}_f(t_{n - 1}) + \frac{t - t_{n - 1}}{k_n}\vec{U}_f(t_n) \\ \vec{U}_s(t)\end{matrix}\right).
\end{aligned} \end{equation} Since the operators $B_f$ and $B_s$ are linear, the resulting scheme is equivalent to the Crank-Nicolson scheme up to the numerical quadrature of $F_f$, see also~\cite{ErikssonEstepHansboJohnson1995,Thomee1997}. Taking trial functions piecewise linear in time, $\vec{U}_k \in X_k$, and test functions piecewise constant in time, $\boldsymbol{\Phi}_{f, k} \in Y_{f, k}$, $\boldsymbol{\Phi}_{s, k} \in Y_{s, k}$, we can construct operators on each of the macro time-steps $I_n = (t_{n - 1}, t_n]$ \begin{equation}\label{b_f^n} \begin{aligned} B_f^n(\vec{U}_k)(\boldsymbol{\Phi}_{f, k}) \coloneqq & \sum_{m = 1}^{M_{n}} \bigg\{ (v_{f, k}(t_{f, n}^m) - v_{f, k}(t_{f, n}^{m - 1}), \varphi_{f, k}(t_{f, n}^{m}))_f \\ & \qquad + \frac{k_{f, n}^m}{2}a_f(i_n^f \vec{U}_{k}(t_{f, n}^m))(\boldsymbol{\Phi}_{f, k}(t_{f, n}^m)) \\ & \qquad + \frac{k_{f, n}^m}{2}a_f(i_n^f\vec{U}_{k}(t_{f, n}^{m - 1}))(\boldsymbol{\Phi}_{f, k}(t_{f, n}^m))\bigg\}, \end{aligned} \end{equation} \begin{equation}\label{b_s^n} \begin{aligned} B_s^n(\vec{U}_k)(\boldsymbol{\Phi}_{s, k}) \coloneqq & \sum_{l = 1}^{L_{n}} \bigg\{ (v_{s, k}(t_{s, n}^{l}) - v_{s, k}(t_{s, n}^{l - 1}), \varphi_{s, k}(t_{s, n}^{l}))_s \\ & \qquad + (u_{s, k}(t_{s, n}^{l}) - u_{s, k}(t_{s, n}^{l - 1}), \psi_{s, k}(t_{s, n}^{l}))_s \\ & \qquad + \frac{k_{s, n}^l}{2}a_s(i_n^s\vec{U}_{k}(t_{s, n}^l))(\boldsymbol{\Phi}_{s, k}(t_{s, n}^l)) \\ & \qquad + \frac{k_{s, n}^l}{2}a_s(i_n^s\vec{U}_{k}(t_{s, n}^{l - 1}))(\boldsymbol{\Phi}_{s, k}(t_{s, n}^l)) \bigg\}, \end{aligned} \end{equation} \begin{equation*} \begin{aligned} F_f^n(\boldsymbol{\Phi}_{f, k}) \coloneqq & \sum_{m = 1}^{M_{n}} \left(\int_{I_{f, n}^m}g_f(t) \diff t, \varphi_{f, k}(t_{f, n}^m) \right)_f,\\ F_s^n(\boldsymbol{\Phi}_{s, k}) \coloneqq & \sum_{l = 1}^{L_{n}} \left(\int_{I_{s, n}^l}g_s(t) \diff t, \varphi_{s, k}(t_{s, n}^l) \right)_s. \end{aligned} \end{equation*} Then, the forms on the whole time interval $I= [0, T]$ are just sums of the operators over
the subintervals and initial conditions: \begin{subequations} \begin{flalign*} B_f(\vec{U}_k)(\boldsymbol{\Phi}_{f, k}) = & \sum_{n = 1}^{N} B_f^n(\vec{U}_k)(\boldsymbol{\Phi}_{f, k}) + (v_{f, k}(t_0), \varphi_{f, k}(t_0))_f,\\ B_s(\vec{U}_k)(\boldsymbol{\Phi}_{s, k}) = & \sum_{n = 1}^{N}B_s^n(\vec{U}_k)(\boldsymbol{\Phi}_{s, k}) + (v_{s, k}(t_0), \varphi_{s, k}(t_0))_s + (u_{s, k}(t_0), \psi_{s, k}(t_0))_s, \\ F_f(\boldsymbol{\Phi}_{f, k}) = & \sum_{n = 1}^{N} F_f^n(\boldsymbol{\Phi}_{f, k}), \\ F_s(\boldsymbol{\Phi}_{s, k}) = & \sum_{n = 1}^{N} F_s^n(\boldsymbol{\Phi}_{s, k}) \end{flalign*} \end{subequations} With that at hand, we can pose a semi-discrete variational problem: \begin{problem} Find $\vec{U}_k \in X_k$ such that: \begin{flalign*} & B_f(\vec{U}_k)(\boldsymbol{\Phi}_{f, k}) = F_f(\boldsymbol{\Phi}_{f, k}) \\ & B_s(\vec{U}_k)(\boldsymbol{\Phi}_{s, k}) = F_s(\boldsymbol{\Phi}_{s, k}) \end{flalign*} for all $\boldsymbol{\Phi}_{f, k} \in Y_{f, k}$ and $\boldsymbol{\Phi}_{s, k} \in Y_{s, k}$. \label{semi_discrete_problem} \end{problem}
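The per-step structure of the semi-discrete scheme can be illustrated on a scalar toy problem: Crank-Nicolson on each micro mesh, with the other field entering only through the linear interpolant of its macro-node values (the operator $i_n^s$ above). The sketch below is hypothetical and one-way coupled, unlike the fully coupled heat/wave system of the paper:

```python
import math

def cn_step(u, k, lam, f0, f1):
    # One Crank-Nicolson step for u' = lam*u + f(t):
    # (u1 - u0)/k = lam*(u0 + u1)/2 + (f0 + f1)/2.
    return (u * (1.0 + 0.5 * k * lam) + 0.5 * k * (f0 + f1)) / (1.0 - 0.5 * k * lam)

# One-way-coupled scalar toy (hypothetical stand-in):
#   u_f' = -u_f                 "slow" fluid-like variable
#   u_s' = -5 u_s + u_f(t)      "fast" solid-like variable, sub-cycled
T, N, M, L = 1.0, 10, 1, 5      # N macro steps, M fluid / L solid micro steps
kn = T / N
uf, us = 1.0, 0.0
for n in range(N):
    uf_old = uf
    for m in range(M):                      # fluid micro stepping
        uf = cn_step(uf, kn / M, -1.0, 0.0, 0.0)
    for l in range(L):                      # solid micro stepping (sub-cycling)
        s0, s1 = l / L, (l + 1) / L         # local coordinates in I_n
        f0 = (1 - s0) * uf_old + s0 * uf    # i_n^s: the solid only sees the
        f1 = (1 - s1) * uf_old + s1 * uf    # linear interpolant of u_f
        us = cn_step(us, kn / L, -5.0, f0, f1)

# Compare with the exact solution u_s(1) = (e^{-1} - e^{-5})/4.
err = abs(us - (math.exp(-1.0) - math.exp(-5.0)) / 4.0)
```

Despite the solid seeing the fluid only through the macro-step interpolant, the sub-cycled Crank-Nicolson iteration stays close to the exact solution, which is the rationale for the multirate construction.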
\section{Introduction} Westerlund (1961) discovered that the 41-day Cepheid RS Pup was surrounded by a remarkable nebulosity. This is in the shape of rudimentary rings but with much distorted structure and condensations. Havlen (1972) showed that portions of the nebulosity varied with the period of the Cepheid but with various phase lags. A very beautiful set of measurements of phase lags at various points in the nebula has recently been obtained by Kervella et al. (2008) (hereafter Kervella et al.). In general, the expected phase lag at a point $i$ may be written: \begin{equation} (N_{i} +\Delta \phi_{i}) = 5.7755\times 10^{-3}\, D\theta_{i} (1+ \sin \alpha_{i})/(P\cos \alpha_{i}) \end{equation} Here $\Delta \phi_{i}$ is the fractional phase lag, $N_{i}$ the whole number of pulsation periods elapsed, $D$ is the distance to RS Pup in parsecs, $\theta_{i}$ is the angular distance of $i$ from the star in arcsec, $P$ is the pulsation period in days and $\alpha_{i}$ is the angle between the line joining the star to $i$ and the plane of the sky (positive if $i$ is further away than the star, negative if it is nearer). The measured quantities are $\Delta \phi_{i}$ and $\theta_{i}$. $P$ is assumed known and here it is taken as 41.4389 days (Kervella et al.). In an attempt to determine $D$, Kervella et al. assume $\alpha_{i} = \rm {constant} =0$. That is, they assume that all the features measured by them lie in the plane of the sky, and the values of $N_{i}$ are then chosen to obtain the best fit to this model. The justification for this assumption is that if the nebulosity consisted of a series of thin, uniform, spherical shells centred on the star, then the deviation of all measured points from the plane of the sky would be small. However an examination of the structure of the nebulosity (for instance from the figures in Kervella et al.) shows that it is far from corresponding to this idealized model. There is much distortion and density variation in the rudimentary rings.
Kervella et al. place special emphasis on the ten condensations or blobs shown in their fig 7. The existence of such blobs is not consistent with the idealized model and leaves open the question of whether they or other features are actually in, or near, the plane of the sky. In view of these uncertainties it cannot be claimed that a definitive distance to RS Pup can be found based on the ``in-the-plane" assumption. In the next section this assumption is dropped and it is shown that a simple and astrophysically interesting model for the nebulosity is found if a distance for RS Pup is adopted from a period-luminosity relation. \section{An equatorial disc model} van Leeuwen et al. (2007) established a reddening-free period-luminosity relation in $V,I$ based on HST (Benedict et al. 2007) and revised Hipparcos parallaxes. This together with the data in table A1 of that paper leads to a predicted distance of 1728pc for RS Pup \footnote{The distance, $1830^{+109}_{-94}$pc derived from the pulsational parallax by Fouqu\'{e} et al. (2007) is not significantly different from this.}. Adopting this distance it is possible to use eq. 1 to study the three dimensional structure of the nebulosity. In principle the values of $N_{i}$ can be arbitrarily assigned. However they should obviously be chosen to account for apparent continuities in the structure and to conform to some simple, physically reasonable model. It was quickly found by trial and error that there is a consistent set of values of $N_{i}$ in which the points measured by Kervella et al. are further away than the star on the south side and nearer on the north, i.e. an inclined disc model is indicated. This is indeed the simplest model, if the uniform spherical shell model is rejected. In such a model the values of $N_{i}$ have to be chosen such that $(N_{i} +\Delta \phi_{i})/\theta_{i}$ values are as near constant as possible in a given direction from the star and vary smoothly with direction.
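Given adopted values of $D$, $P$ and $N_i$, eq. 1 can be inverted for $\alpha_i$ in closed form, since $(1+\sin\alpha)/\cos\alpha = \tan(\pi/4+\alpha/2)$. A small sketch (the function name is ours; the numerical constants are those of eq. 1 and the adopted distance):

```python
import math

def alpha_from_phase(N, dphi, theta, D, P=41.4389):
    # Invert eq. (1): N + dphi = 5.7755e-3 * D * theta * (1 + sin a)/(P cos a).
    # Using (1 + sin a)/cos a = tan(pi/4 + a/2) gives a = 2*atan(c) - pi/2.
    c = (N + dphi) * P / (5.7755e-3 * D * theta)
    return 2.0 * math.atan(c) - math.pi / 2.0
```

For example, for point 9 of Table 1 ($\theta = 19.42''$, $\Delta\phi = 0.697$, $N = 4$) at $D = 1728$ pc this gives $\alpha \approx 0.24^{\circ}$, i.e. a point lying almost exactly in the plane of the sky.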
The details are given in Table 1. This contains data on the 31 points observed by Kervella et al. and I am greatly indebted to them for supplying details of their observations that were not given in the original paper. The Table lists:\\ 1. Position number, $i$.\\ 2. Angular distance from the star, $\theta_{i}$, in arcsec.\\ 3. Azimuth of the point relative to the star, $\beta_{i}$, measured from north through east, in degrees.\\ 4. $\Delta \phi_{i}$ and its standard error.\\ 5. The value of $N_{i}$ ($=N_{i}^{K}$) adopted by Kervella et al. to fit their model assumptions.\\ 6. The distance, $d_{y}^{K}$, behind (positive) or in front (negative) of the plane of the sky through the star. This is found by using eq. 1 together with adopted values of $N_{i}$ and $D$ to derive $\alpha_{i}$ in each case. Then, \begin{equation} d_{y} = 4.848\times 10^{-2}\, \theta_{i} D \tan\alpha_{i} \end{equation} where $d_{y}$ is in units of $10^{-4}$pc. For $d_{y}^{K}$ the value of $D$ estimated by Kervella et al. (1992 pc) was combined with their $N_{i}^{K}$ values.\\ 7. The value of $N_{i}$ adopted in the present paper.\\ 8. The distance, $d_{y}$, behind or in front of the plane of the sky through the star assumed to be at its PL distance (1728pc) and adopting the revised values of $N_{i}$. The units are also $10^{-4}$pc.\\ 9. The perpendicular distance $d_{x}$ of the point from the intersection of the disc with the plane of the sky and projected onto the plane of the sky. In the same units as $d_{y}$. This is given by: \begin{equation} d_{x} = 83.77\theta_{i} \sin(\beta_{i} -\gamma) \end{equation} where $\gamma$ is the angle (azimuth) at which the plane of the disc cuts the plane of the sky. In this test of the model this is taken as close to the line from the star to point 9 (i.e. $\gamma = 80^{\circ}$).\\ \begin{table*} \centering \caption{The phase lag observations of Kervella et al.
with derived linear positions} \begin{tabular}{rrrrrrrrr} \hline $i$ & $\theta_{i}$ & $\beta_{i}$ & $\Delta \phi_{i} \pm s.e.\;\;\;\;$ & $N_{i}^{K}$ & $d_{y}^{K}$ & $N_{i}$ & $d_{y}$ & $d_{x}$\\ \hline 1 & 21.10 & 139 & $0.983\pm 0.020$ & 5 & 43 & 5 & 290 & 1515\\ 2 & 21.16 & 170 & $0.809\pm 0.012$ & 5 & --24 & 5 & 233 & 1773\\ 3 & 21.65 & 196 & $0.989\pm 0.013$ & 5 & --7 & 5 & 252 & 1630\\ 4 & 16.58 & 186 & $0.576\pm 0.009$ & 4 & --10 & 4 & 190 & 1335\\ 5 & 16.03 & 167 & $0.504\pm 0.083$ & 4 & 19 & 4 & 210 & 1341\\ 6 & 12.89 & 213 & $0.102\pm 0.023$ & 3 & --179& 3 &--1 & 790\\ 7 & 10.96 & 249 & $0.098\pm 0.023$ & 3 & 19 & 3 & 148 & 175\\ 8 & 29.28 & 312 & $0.207\pm 0.036$ & 8 & 25 & 6 &--313 & --1933\\ 9 & 19.42 & 80 & $0.697\pm 0.016$ & 5 & 103 & 4 & 7 & 0 \\ 10 & 17.28 & 149 & $0.602\pm 0.014$ & 4 & --70 & 4 & 146 & 1351\\ 11 & 11.03 & 252 & $0.108\pm 0.004$ & 3 & 16 & 3 & 146 & 129\\ 12 & 11.84 & 201 & $0.692\pm 0.010$ & 3 & 133 & 3 & 259 & 850\\ 13 & 16.26 & 166 & $0.462\pm 0.001$ & 4 & --17 & 4 & 179 & 1359\\ 14 & 16.45 & 67 & $0.526\pm 0.001$ & 4 & --14 & 3 & --161 & --310\\ 15 & 16.98 & 148 & $0.572\pm 0.033$ & 4 & --49 & 4 & 159 & 1319\\ 16 & 17.04 & 187 & $0.589\pm 0.006$ & 4 & --49 & 4 & 160 & 1365\\ 17 & 17.58 & 86 & $0.394\pm 0.009$ & 4 & --178& 4 & 55 & 154\\ 18 & 18.84 & 247 & $0.543\pm 0.005$ & 5 & 108 & 4 & 2 & 355\\ 19 & 20.59 & 170 & $0.770\pm 0.015$ & 5 & 20 & 5 & 263& 1725\\ 20 & 20.99 & 140 & $0.965\pm 0.021$ & 5 & 48 & 5 & 293 & 1523\\ 21 & 24.43 & 255 & $0.925\pm 0.002$ & 6 & 50 & 5 & 15 & 178\\ 22 & 24.75 & 84 & $0.182\pm 0.025$ & 6 & --252& 6 & 76 & 145\\ 23 & 26.76 & 200 & $0.460\pm 0.003$ & 7 & 12& 7 & 329 & 1941\\ 24 & 28.35 & 273 & $0.081\pm 0.004$ & 7 & --289& 7 & 87 & --534\\ 25 & 28.74 & 156 & $0.717\pm 0.010$ & 8 & 247& 7 & 263 & 2336\\ 26 & 29.61 & 312 & $0.138\pm 0.001$ & 8 & --27 & 6 & --373 & --1955\\ 27 & 30.06 & 70 & $0.158\pm 0.015$ & 8 & --65& 7 & --28 & --437\\ 28 & 33.25 & 214 & $0.570\pm 0.055$ & 9 & 117 & 8 & 190 & 
2004\\ 29 & 34.81 & 83 & $0.454\pm 0.006$ & 9 & --72 & 8 & 25 & 153\\ 30 & 37.76 & 275 & $0.956\pm 0.004$ &10 & 163 & 8 & --48 & --819\\ 31 & 39.14 & 283 & $0.938\pm 0.008$ &10 & 26 & 8 & --174 & --1281\\ \hline \end{tabular} \end{table*} Fig. 1 shows a plot of $d_{y}$ against $d_{x}$. This indicates a clear, apparently linear relation between the two quantities. That is, the points lie in a tilted plane, presumably an equatorial disc. The line shown is a least squares fit through the origin and is given by: \begin{equation} d_{y} = 0.143(\pm 0.010)d_{x} \end{equation} The tilt of the disc to the plane of the sky is $\tan^{-1} 0.143 = 8^{\circ}.1\pm0^{\circ}.6$. The rms scatter about the line in $d_{y}$ is only $73\times10^{-4}$pc, much smaller than the diameter of the disc, which is $\sim 6800\times10^{-4}$pc out to the limits of the phase-lag survey. This rms scatter may be compared to the rms scatter of $d_{y}^{K}$, which is $111\times10^{-4}$pc. No attempt has been made to optimize the disc model by, for instance, varying $\gamma$ to find a better fit. This might reduce the rms scatter slightly. However, the scatter is already small compared to the diameter of the part of the disc surveyed, indicating a relatively thin disc. A realistic disc will have some significant depth perpendicular to its axis and, indeed, Kervella et al. note that their observations at some positions suggest smoothing attributed to a non-zero depth in the line of sight. An inclined disc model broadly similar to the one just discussed could probably be derived for other distances, if there were good evidence for these. However it should be noted that to take the distance as a free parameter in an attempt to reduce the rms scatter in the model is to assume that the disc must conform as closely as possible to an idealized model which is perfectly flat and of negligible thickness. There is no a priori justification for such an assumption.
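Under the adopted model, eqs. 1-3 turn each measured point into coordinates $(d_x, d_y)$, and eq. 4 gives the tilt. A minimal sketch (the helper names are ours; the check uses point 9 of Table 1, which lies on the chosen node line $\gamma = 80^{\circ}$ and hence has $d_x = 0$):

```python
import math

D, P, GAMMA = 1728.0, 41.4389, 80.0   # adopted distance [pc], period [d], node azimuth [deg]

def alpha(N, dphi, theta):
    # Invert eq. (1) via the identity (1 + sin a)/cos a = tan(pi/4 + a/2).
    c = (N + dphi) * P / (5.7755e-3 * D * theta)
    return 2.0 * math.atan(c) - math.pi / 2.0

def d_y(N, dphi, theta):
    # Eq. (2): distance behind (+) / in front (-) of the sky plane, in 1e-4 pc.
    return 4.848e-2 * theta * D * math.tan(alpha(N, dphi, theta))

def d_x(theta, beta):
    # Eq. (3): the factor 83.77 = 4.848e-6 * 1728 * 1e4 already contains D.
    return 83.77 * theta * math.sin(math.radians(beta - GAMMA))

dy9 = d_y(4, 0.697, 19.42)            # Table 1, point 9: ~7 (units of 1e-4 pc)
dx9 = d_x(19.42, 80.0)                # 0: this point defines the node line

tilt = math.degrees(math.atan(0.143)) # eq. (4): disc tilt ~8.1 degrees
```

Repeating the $d_y$, $d_x$ computation over all 31 points of Table 1 and fitting the slope through the origin reproduces eq. 4 and the quoted tilt of the disc.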
\begin{figure} \includegraphics[width=8.5cm]{RSnewfig4.ps} \caption{A plot of the distances, $d_{y}$, from the plane of the sky through the star against $d_{x}$, the perpendicular distance, in the plane of the sky, to a line of azimuth $\gamma =80^{\circ}$. The units are $10^{-4}$pc. Note the expanded $d_{y}$ scale.} \end{figure} An equatorial disc model for RS Pup is particularly interesting from the astrophysical point of view. Whilst it has seemed possible, for instance, that Cepheids might have ejected shells in a previous evolutionary phase, it has been puzzling that only for RS Pup is such a prominent structure found. The interpretation of the nebulosity as a disc at a small angle to the plane of the sky opens up the possibility of a deeper understanding of this phenomenon and of its rarity. Obvious possibilities are loss of mass in the equatorial plane through unusually rapid rotation and/or binary interaction at an earlier evolutionary stage. \section{Conclusion} The structure seen in the RS Pup nebulosity makes questionable the assumption that phase-lag observations all refer to points close to the plane of the sky, and this makes distance estimates based on that assumption questionable. An inclined disc model at a distance predicted by a period-luminosity relation gives a good fit to the data and opens new possibilities for understanding the system, including a possible interacting binary model. \section*{Acknowledgments} I am grateful to Dr Pierre Kervella for a very helpful exchange of correspondence and to him and his colleagues for sending me details of their beautiful work not given in their paper. I am also grateful to Dr Kurt van der Heyden for his help and to the referee for suggestions.
\section{Introduction} \IEEEPARstart{T}{erahertz} (THz) and millimeter-wave (mmWave) are key technologies for beyond-5G and next-generation (6G) wireless communications, which can enable ultra-high data-rate communications and improve data security \cite{mmwaveSurvey,NatureShuping}. However, the severe propagation loss and blockage-prone nature of such high-frequency bands also pose greater challenges to information security, such as short secure propagation distances and unreliable secure communications \cite{Ma2018Nature,Ela2020Open}. These phenomena are more serious for THz bands due to the higher frequency. Thus, a new approach for secure information transmission in high-frequency bands is urgently needed. Recently, the intelligent reflecting surface (IRS) \cite{Renzo2019J,Nad2020open,Nad2020TWC} has emerged as an invaluable way of widening signal coverage and overcoming the high path loss of mmWave and THz systems \cite{Wang201908arXiv,Chen2019ICCC,Pan2020TWC}, and has drawn increasing attention in secure communications. In IRS-assisted secure systems, the IRS intelligently adjusts its phase shifts to steer the signal power to the desired user and reduce information leakage \cite{RuiZhang2019IEEEWCL}. To maximize the secrecy rate, the active transmit beamforming and passive reflecting beamforming were jointly designed in \cite{Shen2019CL,Dong2020WCL,Robert2019GLOBECOM}. However, the above works mainly focus on microwave systems, while IRS-assisted secure mmWave/THz systems remain unexplored. Moreover, the extremely narrow beams of mmWave/THz systems can cause serious information leakage under beam misalignment, and continuous phase control is costly to implement. Besides, the blockage-prone nature of high-frequency bands may lead to a serious secrecy performance loss, since eavesdroppers can not only intercept but also block legitimate communications. Motivated by the aforementioned problems, this letter investigates IRS-assisted secure transmission in mmWave/THz bands.
Under the discrete phase-shift assumption, a joint optimization problem of the transmit beamforming and the reflecting matrix is formulated to maximize the secrecy rate. It is proved that under the rank-one channel model, the transmit beamforming design is independent of the reflecting matrix design. Thus, the formulated non-convex problem is solved by converting it into two subproblems. The closed-form transmit beamforming is derived, and the hybrid beamforming structure is designed by adopting the orthogonal matching pursuit (OMP) method. Meanwhile, the semidefinite programming (SDP)-based method and the element-wise block coordinate descent (BCD) method are proposed to obtain the optimal discrete phase shifts. Simulation results demonstrate that the proposed methods can achieve near-optimal secrecy rate performance with discrete phase shifts. \section{System Model and Problem Formulation} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{figure/THz_IRS_systemmodel_vNoDT3} \caption{System model for the IRS-assisted secure mmWave/THz system, where the BS communicates with the desired user Bob in the presence of an eavesdropper.} \label{fig_1} \end{figure} The IRS-assisted secure mmWave/THz system is considered in this letter. As shown in \figref{fig_1}, a base station (BS) with $M$ antennas communicates with a single-antenna user Bob in the presence of a single-antenna eavesdropper Eve, which is an active user located near Bob. To protect confidential signals from eavesdropping, an IRS with a smart controller is adopted to help secure the transmission. Note that due to deep path loss or obstacle blockage, there is no direct link between the source and the destination or the eavesdropper. \subsection{Channel Model} It is assumed that the IRS with $N$ reflecting elements is installed on a high-rise building around the desired receiver Bob.
Thus, the LoS path is dominant for the BS-IRS channel and the rank-one channel model is adopted \begin{equation} \mathbf{H}_{BI}^{H}=\sqrt{MN}\alpha_B G_{r}G_{t}\mathbf{a}\mathbf{b}^{H}, \end{equation} where $\alpha_B$ is the complex channel gain \cite{Bar2017TVT}, and $G_{r}$ and $G_{t}$ are the receive and transmit antenna gains. $\mathbf{a}\in \mathbb{C}^{N\times 1}$ and $\mathbf{b}\in \mathbb{C}^{M\times 1}$ denote the array steering vectors at the IRS and the BS, respectively. The IRS-Bob/Eve channel is assumed as\footnote{All channels are assumed to be perfectly known at the BS, and the derived results can be considered as a performance upper bound.} \begin{equation} \mathbf{g}_k=\sqrt{\frac{N}{L}}\sum^L_{i=1}\alpha_{k,i}G_{r}^kG_{I}^{k}\mathbf{a}_{k,I}, \end{equation} where $k=\{D,E\}$, $L$ is the number of paths from the IRS to node $k$, and $\mathbf{a}_{k,I}$ denotes the transmit array steering vector at the IRS. \subsection{Signal Model} In the IRS-assisted secure mmWave/THz system, the BS transmits a signal $s$ with power $P_s$ to the IRS, and the IRS adjusts the phase shift of each reflecting element to help reflect the signal to Bob. We denote by $\mathbf{\Theta}\!=\!\text{diag}\{ \!e^{j\theta_{1}}\!, e^{j\theta_{2}}\!,\dots,e^{j\theta_{N}}\!\}$ the reflecting matrix, where $\theta_{i}$ is the phase shift of the $i$-th element. Different from traditional IRS-assisted strategies with continuous phase shifts, discrete phase shifts are considered here in view of the IRS hardware implementation. Specifically, $\theta_{i}$ can only be chosen from a finite set of discrete values $\mathcal{F}\!=\!\{0,\Delta\theta,..., (L_P\!-\!1)\Delta\theta\}$, where $L_P$ is the number of discrete values and $\Delta\theta=2\pi/L_P$. The received signal at user Bob can be expressed as \begin{equation}\label{eq_3} y_D=\mathbf{g}_{D}^{H}\mathbf{\Theta}\mathbf{H}_{BI}^{H}\mathbf{w}s+n_D, \end{equation} where $\mathbb{E}\{|s|^2\}\!\!=\!\!1$ and $n_D\!\sim\!\mathcal{CN}(0, \sigma_D^2)$ is the noise at the destination.
$\mathbf{w}=\mathbf{F}_{RF}\mathbf{f}_{BB}$ is the transmit beamforming at the BS, where $\mathbf{F}_{RF}\!\in\!\mathbb{C}^{M\times R}$ is the analog beamformer, $\mathbf{f}_{BB}\!\in\! \mathbb{C}^{R\times 1}$ is the digital beamformer, and $R$ is the number of radio frequency (RF) chains. Similarly, the received signal at the eavesdropper is written as \begin{equation}\label{eq_4} y_E=\mathbf{g}_E^{H}\mathbf{\Theta}\mathbf{H}_{BI}^{H}\mathbf{w}s+n_E, \end{equation} where $n_E \sim~\mathcal{CN}(0, \sigma_E^2)$ denotes the noise at the eavesdropper. Then the system secrecy rate can be written as \begin{equation}\label{eq_Rs} R_s=\left[\log_2\left(\frac{1+\frac{1}{\sigma_D^2}|\mathbf{g}_{D}^{H}\mathbf{\Theta}\mathbf{H}_{BI}^{H}\mathbf{w}|^2} {1+\frac{1}{\sigma_E^2}|\mathbf{g}_{E}^{H}\mathbf{\Theta}\mathbf{H}_{BI}^{H}\mathbf{w}|^2}\right)\right]^+, \end{equation} where $[x]^+=\max\{0,x\}$. \vspace{-0.2cm} \subsection{Problem Formulation} To maximize the system secrecy rate, the joint optimization problem of the transmit beamforming and the reflecting matrix is formulated as \begin{equation*} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{3pt} (\mathrm{P1}):~ \max_{\mathbf{w},\mathbf{\Theta}} \frac{\sigma_D^2+|\mathbf{g}_{D}^{H}\mathbf{\Theta}\mathbf{H}_{BI}^{H}\mathbf{w}|^2} {\sigma_E^{2}+|\mathbf{g}_{E}^{H}\mathbf{\Theta}\mathbf{H}_{BI}^{H}\mathbf{w}|^2} \end{equation*} \begin{equation} \begin{array}{ll} \text{s.t.} &\|\mathbf{w}\|^2\leq P_s,\\ & \theta_i\in \mathcal {F}, \forall i.\\ \end{array} \end{equation} The formulated problem is non-convex and quite challenging to solve directly, because of the coupled variables $(\mathbf{w},\mathbf{\Theta})$ and the non-convex constraint on $\theta_i$. To cope with this difficulty, the original problem $\mathrm{P1}$ is converted into two subproblems, which can be solved separately. \section{Secrecy Rate Maximization} To find a solution, we propose to convert problem $\mathrm{P1}$ into two subproblems.
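As a numerical sanity check, the secrecy rate in (\ref{eq_Rs}) can be evaluated directly. The sketch below uses synthetic random draws (rank-one BS-IRS channel, unit noise powers, $L_P=8$ phase levels); it is an illustration of the signal model, not the letter's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 16, 4                       # BS antennas, IRS elements
sigma_D2 = sigma_E2 = 1.0          # noise powers (illustrative)

def cvec(n):
    """Random complex Gaussian vector, a synthetic stand-in for a channel."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

a, b = cvec(N), cvec(M)
H_BI_H = np.outer(a, b.conj())     # rank-one BS-IRS channel a b^H, as in eq. (1)
g_D, g_E = cvec(N), cvec(N)        # IRS-Bob and IRS-Eve channels

# discrete phase shifts drawn from F with L_P = 8 levels
Theta = np.diag(np.exp(1j * rng.integers(0, 8, N) * 2 * np.pi / 8))
w = b / np.linalg.norm(b)          # unit-power beamformer aligned with b

snr_D = abs(g_D.conj() @ Theta @ H_BI_H @ w) ** 2 / sigma_D2
snr_E = abs(g_E.conj() @ Theta @ H_BI_H @ w) ** 2 / sigma_E2
R_s = max(0.0, np.log2((1 + snr_D) / (1 + snr_E)))   # eq. (5) with [x]^+
```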
This idea is based on the fact that the two subproblems are independent of each other. Specifically, the closed-form solution of $\mathbf{w}$ is first derived. Then the SDP-based method and the element-wise BCD method are proposed to obtain the solution of the reflecting matrix. \vspace{-0.2cm} \subsection{Transmit Beamforming Design} Since the rank-one channel is assumed from the BS to the IRS, the subproblem with respect to the beamformer $\mathbf{w}$ is expressed as \begin{align} (\mathrm{P2.1}):~& \max_{\mathbf{w}} \frac{\sigma_D^2+|\alpha_{B}G_{r}G_{t}\mathbf{g}_{D}^{H}\mathbf{\Theta}\mathbf{a}|^2|\mathbf{b}^{H}\mathbf{w}|^2} {\sigma_E^{2}+|\alpha_{B}G_{r}G_{t}\mathbf{g}_{E}^{H}\mathbf{\Theta}\mathbf{a}|^2|\mathbf{b}^{H}\mathbf{w}|^2}\nonumber\\ &~\text{s.t.} ~~\|\mathbf{w}\|^2\leq P_s. \end{align} \begin{prop}\label{prop_1} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} Under the positive secrecy rate constraint, the subproblem for $\mathbf{w}$ is equivalent to \begin{align} (\mathrm{P2.1}^{'}):~ \max_{\mathbf{w}}~ |\mathbf{b}^{H}\mathbf{w}|^2,\nonumber\\ \text{s.t.} ~\|\mathbf{w}\|^2\leq P_s. \end{align} It is independent of the reflecting matrix design. For any value of $\mathbf{\Theta}$, the transmit beamformer solution is $\mathbf{w}^{opt}=\sqrt{P_s}\frac{\mathbf{b}}{\|\mathbf{b}\|}$. \end{prop} \begin{proof} See Appendix A. \end{proof} After obtaining $\mathbf{w}^{opt}$ from Proposition 1, $(\mathbf{F}_{RF}^{opt}, \mathbf{f}_{BB}^{opt})$ with a fully-connected architecture can be easily derived by typical hybrid precoding methods, such as the OMP algorithm \cite{Tropp2007TIT}. \subsection{Reflecting Matrix Design} To facilitate the following mathematical operations, we first define $\hat{\pmb{\theta}}=[e^{j\theta_1},e^{j\theta_2},..., e^{j\theta_N}]^{T}$, so that $\mathbf{\Theta}=\text{diag}\{\hat{\pmb{\theta}}\}$.
The subproblem of the reflecting matrix can be rewritten as \vspace{-0.2cm} \begin{align} (\mathrm{P2.2}):~& \max_{\hat{\pmb{\theta}}} \frac{1+\frac{1}{\sigma_D^2}|\hat{\pmb{\theta}}^{T}\text{diag}\{\mathbf{g}_{D}^{H}\}\mathbf{H}_{BI}^{H}\mathbf{w}|^2} {1+\frac{1}{\sigma_E^2}|\hat{\pmb{\theta}}^{T}\text{diag}\{\mathbf{g}_{E}^{H}\}\mathbf{H}_{BI}^{H}\mathbf{w}|^2},\nonumber\\ &\text{s.t.} ~ \hat{\pmb{\theta}}=[e^{j\theta_1},e^{j\theta_2},...,e^{j\theta_N}]^{T}, \theta_i\in \mathcal {F}, \forall i.\nonumber \end{align} Obviously, each variable $\theta_i$ only takes a finite number of values from $\mathcal{F}$, so problem $\mathrm{P2.2}$ can in principle be solved by exhaustive search. However, due to the large feasible set of $\hat{\pmb{\theta}}$ ($L_P^{N}$ possibilities), the complexity of such a method is considerably high. To cope with this, the SDP-based algorithm and the element-wise BCD algorithm are proposed. \subsubsection{SDP-based Algorithm} Define $\mathbf{\Phi}=\hat{\pmb{\theta}}^{*}(\hat{\pmb{\theta}}^{*})^{H}$ and relax the discrete variables $\theta_i$ into continuous ones, $\theta_i\in[0,2\pi)$, i.e., $|e^{j\theta_i}|=1, \forall i$; then we can rewrite problem $\mathrm{P2.2}$ as $\max_{\mathbf{\Phi}\succeq 0} \frac{\text{tr}(\hat{\mathbf{R}}_{RD}\mathbf{\Phi})}{\text{tr}(\hat{\mathbf{R}}_{RE}\mathbf{\Phi})}$ with the rank-one constraint $\text{rank}(\mathbf{\Phi})=1$. Since $\text{rank}(\mathbf{\Phi})=1$ is a non-convex constraint, semidefinite relaxation (SDR) is adopted to relax it. Using the Charnes-Cooper transformation approach, we define $\mathbf{X}=\mu\mathbf{\Phi}$ and $\mu=1/\text{tr}(\hat{\mathbf{R}}_{RE}\mathbf{\Phi})$.
Then problem $\mathrm{P2.2}$ is rewritten as \begin{align}\label{eq_11} (\mathrm{P2.2^{'}}):~& \max_{\mu\geq 0,\mathbf{X}\succeq 0} \text{tr}(\hat{\mathbf{R}}_{RD}\mathbf{X}) \nonumber\\ &\begin{array}{ll} \text{s.t.}& \text{tr}(\hat{\mathbf{R}}_{RE}\mathbf{X})=1,\\ &\text{tr}(\mathbf{E}_{n}\mathbf{X})=\mu, \forall n,\\ \end{array} \end{align} where $\hat{\mathbf{R}}_{RD}=\frac{1}{N}\mathbf{I}_{N}+\frac{1}{\sigma_{D}^2}\text{diag}\{\mathbf{g}_{D}^{*}\} \mathbf{H}_{BI}^{H}\mathbf{w}\mathbf{w}^{H}\mathbf{H}_{BI} \text{diag}\{\mathbf{g}_{D}\}$, $\hat{\mathbf{R}}_{RE}\!=\!\!\frac{1}{N}\mathbf{I}_{N}\!+\!\frac{1}{\sigma_{E}^2}\text{diag}\{\mathbf{g}_{E}^{*}\} \mathbf{H}_{BI}^{H}\mathbf{w}\mathbf{w}^{H}\mathbf{H}_{BI} \text{diag}\{\mathbf{g}_{E}\}$, and $\mathbf{E}_n$ is the matrix whose $(n,n)$ element is $1$ and all other elements are $0$. Problem $\mathrm{P2.2^{'}}$ is a standard SDP problem and can be solved by interior-point methods or CVX tools \cite{QingWu2019TCOM}. Then the rank-one solution $\hat{\pmb{\theta}}^{opt}\!\!=\![ e^{j\tilde{\theta}_1^{opt}},e^{j\tilde{\theta}_2^{opt}}, ...,e^{j\tilde{\theta}_N^{opt}}]^{T}$ can be obtained by the Gaussian randomization method. To obtain the discrete solution of problem $\mathrm{P2.2}$, we quantize the continuous solution $\hat{\pmb{\theta}}^{opt}$ to the nearest discrete values in the set $\mathcal{F}$, adopting the following principle: \begin{equation}\label{eq_12} \theta_{i}^{opt}=\arg\min_{\theta_i\in \mathcal{F}} |e^{j\tilde{\theta}_i^{opt}}-e^{j\theta_i}|, ~~\forall i. \end{equation} Note that since a closed-form solution of $\theta_i$ is not obtained, the entire SDP-based procedure must be repeated for each transmission block, which leads to high complexity. \subsubsection{Element-Wise BCD Algorithm} To obtain a closed-form solution of the reflecting matrix, the element-wise BCD method \cite{Robert2019GLOBECOM} is employed in this section.
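Both methods share the quantization rule (\ref{eq_12}), which simply picks the point of $\mathcal{F}$ nearest on the unit circle; a minimal sketch:

```python
import numpy as np

def quantize_phase(theta_tilde, L_P):
    """Nearest point of F = {0, dtheta, ..., (L_P-1)*dtheta} in the sense of
    eq. (12): minimize |exp(j*theta_tilde) - exp(j*theta)| over theta in F."""
    dtheta = 2 * np.pi / L_P
    grid = dtheta * np.arange(L_P)
    i = np.argmin(np.abs(np.exp(1j * theta_tilde) - np.exp(1j * grid)))
    return grid[i]
```

For example, with $L_P=8$ the continuous phase $0.5$ rad is mapped to $\pi/4$, and a phase just below $2\pi$ wraps around to $0$.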
Taking each $\theta_i$ as one block in the BCD, we can iteratively derive the continuous solutions of the phase shifts using Proposition \ref{prop_2}. \begin{prop}\label{prop_2} There exists one and only one optimal $\theta_i^{opt}$ maximizing the secrecy rate, given by \begin{equation}\label{eq_18} \small \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \theta_i^{opt}\!\!=\!\! \left\{\begin{array}{ll} \!\!\tilde{\theta}_i^{opt},\!\!&\!\!\! c_{D,i}d_{E,i}\cos(p_{E,i})\!<\!c_{E,i}d_{D,i}\!\cos(p_{D,i})\\ \!\!\tilde{\theta}_i^{opt}\!+\!\pi,\!\!&\!\!\! \text{otherwise} \!\!\end{array}\right. \end{equation} where $\tilde{\theta}_i^{opt}$ is shown in (\ref{eq_13}) at the top of the next page, and \newcounter{mytempeqncnt} \begin{figure*}[!t] \setcounter{mytempeqncnt}{\value{equation}} \setcounter{equation}{12} \begin{equation}\label{eq_13} \tilde{\theta}_i^{opt}=-\arctan\!\left(\frac{c_{D,i}d_{E,i}\sin(p_{E,i})\!-\!c_{E,i}d_{D,i}\sin(p_{D,i})}{c_{D,i}d_{E,i}\cos(p_{E,i})\!-\!c_{E,i}d_{D,i}\cos(p_{D,i})}\right) -\arcsin\left(\!\frac{-d_{D,i}d_{E,i}\sin(p_{E,i}-p_{D,i})}{\sqrt{c_{D,i}^{2}d_{E,i}^{2}+\!c_{E,i}^{2}d_{D,i}^{2} \!-\!2c_{D,i}c_{E,i}d_{D,i}d_{E,i}\cos(p_{E,i}\!-\!p_{D,i})}}\!\right) \end{equation} \setcounter{equation}{\value{mytempeqncnt}} \hrulefill \vspace*{-15pt} \end{figure*} \setcounter{equation}{13} \begin{align} &c_{k,i}=1+\frac{1}{\sigma_k^2}\left|g_{k,i}^{*}\mathbf{h}_{BI,i}^{H}\mathbf{w}\right|^2+\frac{1}{\sigma_k^2}\left|\sum_{m\neq i}e^{j\theta_{m}}g_{k,m}^{*}\mathbf{h}_{BI,m}^{H}\mathbf{w}\right|^2,\nonumber\\ &d_{k,i}=\frac{2}{\sigma_k^2}\left|\left(g_{k,i}^{*}\mathbf{h}_{BI,i}^{H}\mathbf{w}\right) \cdot \left(\sum_{m\neq i}e^{-j\theta_{m}}g_{k,m}\mathbf{w}^{H}\mathbf{h}_{BI,m}\right)\right|,\nonumber\\ &p_{k,i}=\angle\left(g_{k,i}^{*}\mathbf{h}_{BI,i}^{H}\mathbf{w}\sum_{m\neq i} e^{-j\theta_{m}}g_{k,m}\mathbf{w}^{H}\mathbf{h}_{BI,m}\right),\nonumber \end{align} in which $k\in\{D,E\}$. \end{prop} \begin{proof} See Appendix B.
\end{proof} With the above proposition, the optimal discrete solution of $\theta_i$ can be chosen using the same quantization principle as in (\ref{eq_12}), and the entire element-wise BCD-based secrecy rate maximization procedure can be summarized as Algorithm \ref{alg_1}. Since the objective function in $\mathrm{P2.2}$ is non-decreasing after each iteration and is upper-bounded by the finite optimal value of a generalized eigenvalue problem \cite{Qiao2018TVT}, convergence is guaranteed. \begin{algorithm} \caption{E-BCD based Secrecy Rate Maximization } \label{alg_1} \begin{algorithmic}[1] \STATE Initialize $\mathbf{\Theta}^{0}=\text{diag}\{e^{j\theta_{1}^{0}},e^{j\theta_{2}^{0}},...,e^{j\theta_{N}^{0}}\}$, $\varepsilon$, and set $n=0$. \STATE Find $\mathbf{w}^{opt}$ using Proposition \ref{prop_1}, then calculate the optimal $\mathbf{F}_{RF}^{opt}$ and $\mathbf{f}_{BB}^{opt}$ using the OMP method. \REPEAT \STATE $n=n+1$ \STATE \textbf{for} $i=1,2,...,N$ \textbf{do} \STATE calculate $\theta_{i}^{n}$ using Proposition 2, and quantize it to a discrete solution. \STATE \textbf{end for} \UNTIL {$\|\mathbf{\Theta}^{n}-\mathbf{\Theta}^{n-1}\|\leq \varepsilon$}. \RETURN $(\mathbf{w}^{opt})^{'}=\mathbf{F}_{RF}^{opt}\mathbf{f}_{BB}^{opt}$ and $\mathbf{\Theta}^{opt}=\mathbf{\Theta}^{n}$ \end{algorithmic} \end{algorithm} \subsection{Complexity Analysis} The total complexity of the SDP-based method is about $\mathcal{O}(N_{gaus}N^8)$, which is mainly determined by the complexity of solving the SDP problem and the number `$N_{gaus}$' of rank-one solutions used to construct the feasible set of the Gaussian randomization method. The complexity of the proposed element-wise BCD method is about $\mathcal{O}(N(NM\!+\!L_P)N_{iter})$, which relies on the complexity of the $\theta_i$ calculation and the iteration number $N_{iter}$ for $\mathbf{\Theta}^{opt}$.
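The element-wise loop of Algorithm 1 can be sketched compactly. For simplicity, each block update below maximizes the objective by direct search over $\mathcal{F}$ instead of the closed form of Proposition 2 (same monotone behaviour); `v_D` and `v_E` are synthetic stand-ins for the cascaded vectors $\text{diag}\{\mathbf{g}_k^{H}\}\mathbf{H}_{BI}^{H}\mathbf{w}$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L_P = 6, 8
F = 2 * np.pi * np.arange(L_P) / L_P            # discrete phase set F

# synthetic effective channel vectors (illustrative draws only)
v_D = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
v_E = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def objective(theta):
    """Ratio maximized in P2.2, as a function of the phase vector."""
    t = np.exp(1j * theta)
    return (1 + abs(t @ v_D) ** 2) / (1 + abs(t @ v_E) ** 2)

theta = np.zeros(N)                              # Theta^0: all phases zero
f0 = objective(theta)
for sweep in range(50):                          # outer REPEAT loop
    prev = theta.copy()
    for i in range(N):                           # one BCD block per element
        cand = []
        for p in F:                              # direct search over F
            theta[i] = p
            cand.append(objective(theta))
        theta[i] = F[int(np.argmax(cand))]
    if np.array_equal(theta, prev):              # convergence check
        break
f_final = objective(theta)
```

Because the current value of each $\theta_i$ is always among the candidates, the objective is non-decreasing across sweeps, mirroring the convergence argument above.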
Intuitively, these methods have lower complexity than the exhaustive search method, $\mathcal{O}(L_P^{N}(N^2\!+\!NM))$, and the complexity of the element-wise BCD method is the lowest. \section{Simulation Results and Analysis} In this section, simulation results are presented to validate the secrecy rate performance in the mmWave and THz bands. All results are obtained by averaging over $1000$ independent trials. We consider a scenario where the BS employs a uniform linear array (ULA) and the IRS is a uniform rectangular array (URA). Unless otherwise specified, the carrier frequency is $f=0.3$~THz, the transmit power is $P_s=25$~dBm, and $M=16$. Due to the fully-connected hybrid beamforming structure at the BS, the number of RF chains is less than $M$: $M_{RF}=10$. The number of discrete values in $\mathcal{F}$ is $L_P=2^3$, and $\Delta \theta=\frac{\pi}{4}$. Additionally, the antenna gain is set to $12$~dBi. The complex channel gains $\alpha_B$ and $\alpha_{k,i}$ are obtained on the basis of \cite{Bar2017TVT}.
\begin{figure}[!t] \centering \subfigure[$R_s$ versus $L_P$.]{ \begin{minipage}{0.2\textwidth} \centering \includegraphics[width=1\textwidth]{figure/fig_1_1_THz_Lp}\\ \label{fig_2} \end{minipage}} \subfigure[$R_s$ versus $P_s$.]{ \begin{minipage}{0.2\textwidth} \centering \includegraphics[width=1\textwidth]{figure/fig_1_2_THz_Ps}\\ \label{fig_3} \end{minipage}} \caption{Secrecy rate performance for the IRS-assisted mmWave and THz system, in which $N=4$, the BS-to-IRS and IRS-to-Bob distances are $d_{sr}=d_{rd}=5$~m, and Eve is located near Bob with IRS-Eve distance $d_{re}=5$~m.} \end{figure}
The secrecy rate for different numbers of discrete values in the set $\mathcal{F}$ is shown in \figref{fig_2}. As $L_P$ increases, the secrecy rate of the proposed methods approaches that of the exhaustive search method (`Exh')\footnote{The solution of the exhaustive search method is the optimal solution of the discrete case, since all feasible solutions of the phase shifts are searched.}.
This is rather intuitive, since an increased number of discrete values leads to a reduction of the quantization error in (\ref{eq_12}), thereby enhancing the secrecy rate. Besides, it is obvious that the discrete schemes are upper-bounded by the continuous solution (`Cont'). The secrecy rate versus the transmit power $P_s$ is shown in \figref{fig_3}. Based on the optimal beamformer obtained from Proposition \ref{prop_1}, it is easily proved that the secrecy rate is an increasing function of $P_s$. Thus, as $P_s$ increases, the secrecy rate increases monotonically. Similarly to \figref{fig_2}, \figref{fig_3} reveals that our proposed methods achieve near-optimal performance compared with the exhaustive search method. Besides, the results also demonstrate that the more antennas are equipped at the BS, the higher the achieved secrecy rate. The effect of the number of reflecting elements on the secrecy rate of IRS-based and BS-based interception is also investigated.\footnote{Here, the optimal $(\mathbf{w}, \mathbf{\Theta})$ for BS-based interception can be derived iteratively.} As shown in \figref{fig_4}, as $N$ increases from $10$ to $100$, the secrecy rate increases monotonically. This is because more reflecting elements result in sharper reflecting beams, thereby enhancing information security. In particular, when Eve is located within the reflecting/transmit beam of the IRS/BS, we assume that it can intercept and block a portion $\rho$ of the confidential signals. Intuitively, the more information is blocked, the worse the achievable secrecy rate. Thus, compared with interception without blocking, this case is more serious for mmWave and THz communications. However, since the IRS-assisted secure transmission scheme is designed, the secrecy rate is significantly improved compared with the security-oblivious approach (which only maximizes the information rate at the legitimate user). \begin{figure}[!t] \centering \subfigure[Eve intercepts IRS.]
{\begin{minipage}{0.2\textwidth} \centering \includegraphics[width=1\textwidth]{figure/fig_resp_4_2IRS}\\%fig_4_1THz_N_nearr
\label{fig_4a} \end{minipage}} \subfigure[Eve intercepts BS.]{ \begin{minipage}{0.2\textwidth} \centering \includegraphics[width=1\textwidth]{figure/fig_resp_4_1BS}\\%fig_4_2THz_N_blockrr2
\label{fig_4b} \end{minipage}} \caption{Secrecy rate versus the number of reflecting elements, from $10$ to $100$. (a) Eve intercepts the IRS: $d_{re}\!=\!5$~m for non-blocking, $d_{re}\!=\!2$~m for blocking. (b) Eve intercepts the BS: $d_{se}\!=\!5$~m for non-blocking, $d_{se}\!=\!2$~m for blocking. } \label{fig_4} \end{figure} \section{Conclusion} This letter investigated the secrecy performance of IRS-assisted mmWave/THz systems. Considering the hardware limitation at the IRS, the transmit beamforming at the BS and the discrete phase shifts at the IRS have been jointly designed to maximize the system secrecy rate. To deal with the formulated non-convex problem, the original problem was divided into two subproblems under the rank-one channel assumption. Then the closed-form beamforming solution was derived, and the reflecting matrix was obtained by the proposed SDP-based method and element-wise BCD method. Simulations demonstrated that the proposed methods can achieve near-optimal secrecy performance and can combat eavesdropping occurring at the BS and the IRS. \appendices \section{Proof of Proposition 1} In $\mathrm{P2.1}$, the beamforming vector $\mathbf{w}$ is always coupled with $\mathbf{b}^{H}$; thus the secrecy rate can be seen as a function of $|\mathbf{b}^{H}\!\mathbf{w}|^2$. Under the positive secrecy rate constraint, i.e., $\scriptsize{|\mathbf{g}_{D}^{H}\!\mathbf{\Theta}\mathbf{a}|^2\!/\sigma_D^2\!\!>\!\!|\mathbf{g}_{E}^{H}\!\mathbf{\Theta}\mathbf{a}|^2\!/\sigma_E^2}$, it is easily proved that $R_s$ is an increasing function of $|\mathbf{b}^{H}\!\mathbf{w}|^2$.
That is, $\mathrm{P2.1}$ is equivalent to $\max|\mathbf{b}^{H}\mathbf{w}|^2$ and is independent of $\mathbf{\Theta}$. \section{Proof of Proposition 2} Choosing each IRS phase shift $\theta_i$ as one block of the BCD, the objective function of $\mathrm{P2.2}$ can be reformulated as \begin{equation} f(\theta_i)=\frac{c_{D,i}+d_{D,i}\cos(\theta_i+p_{D,i})}{c_{E,i}+d_{E,i}\cos(\theta_i+p_{E,i})}, \end{equation} in which all parameters are defined in Section III-B, and $c_{D,i},c_{E,i}\geq 1$, $c_{D,i}>d_{D,i}>0$, $c_{E,i}>d_{E,i}>0$. The sign of the derivative of $f(\theta_i)$ is determined by\footnote{To reduce complexity, a $\sin(x)$-based expression is used instead of $\cos(x)$.} \begin{equation} h(x)=\sqrt{A_i^2+B_i^2}\sin(x)+C_i, \end{equation} where $x\!=\!\theta_i\!+\!\phi$, $A_{i}\!=\!c_{D,i}d_{E,i}\cos(p_{E,i})\!-\!c_{E,i}d_{D,i}\cos(p_{D,i})$, $B_{i}\!= \!c_{D,i}d_{E,i}\sin(p_{E,i})\!-c_{E,i}d_{D,i}\sin(p_{D,i})$, and $ C_i=d_{D,i}d_{E,i}\sin(p_{E,i}-p_{D,i})$, $\sin(\phi)=B_i\!/\!\sqrt{A_i^2\!+\!B_i^2}$, $\cos(\phi)=A_i/\sqrt{A_i^2+B_i^2}$. Then the main problem in deriving the unique optimal solution is to determine the value of $\phi$. \begin{itemize} \item For $A_i>0$, we have $\cos(\phi)>0$ and $\phi=\arctan(\frac{B_i}{A_i})$. Thus, $\theta_i^{opt}=\pi-\arctan(\frac{B_i}{A_i})-\arcsin(\frac{-C_i}{\sqrt{A_i^2+B_i^2}})$. \item For $A_i\!<\!0$, we have $\cos(\phi)\!<\!0$ and $\phi=\pi+\arctan(\frac{B_i}{A_i})$. Thus $\theta_i^{opt}=-\arctan(\frac{B_i}{A_i})-\arcsin(\frac{-C_i}{\sqrt{A_i^2+B_i^2}})$. \end{itemize} \vspace{-0.2cm} \scriptsize \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro} Prime factorization is a problem in the complexity class NP of problems that can be solved in polynomial time by a nondeterministic machine. Indeed, the prime factors can be verified efficiently by multiplication. At present, it is not known whether the problem has polynomial computational complexity and, thus, is in the complexity class P. Nonetheless, the most common cryptographic algorithms rely on the assumed hardness of factorization. Website certificates and bitcoin wallets are examples of resources depending on that assumption. Since no strict lower bound on the computational complexity is actually known, many critical services are potentially subject to future security breaches. Consequently, cryptographic keys have gradually increased in length to adapt to new findings. For example, the general number-field sieve~\cite{nfsieve} can break keys that would have been considered secure against previous factoring methods. Prime factorization is also important for its relation with quantum computing, since an efficient quantum algorithm for factorization is known. This algorithm is considered a main argument supporting the supremacy of quantum over classical computing. Thus, the search for faster classical algorithms is relevant for better understanding the actual gap between the classical and quantum realms. Many of the known factoring methods share the use of the ring of integers modulo $c$, where $c$ is the number to be factorized. Examples are Pollard's $\rho$ and $p-1$ algorithms~\cite{pollard1,pollard2}, Williams' $p+1$ algorithm~\cite{williams}, their generalization with cyclotomic polynomials~\cite{bach}, Lenstra's elliptic-curve factorization~\cite{lenstra}, and the quadratic sieve~\cite{qsieve}. These methods end up generating a number, say $m$, sharing a factor with $c$. Once $m$ is obtained, the common factor can be efficiently computed by the Euclidean algorithm.
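As a concrete instance of this "generate $m$, then take a gcd" pattern, a minimal Pollard $p-1$ sketch (the smoothness bound $B$ and base $a=2$ are arbitrary illustrative choices):

```python
from math import gcd

def pollard_p_minus_1(c, B=10, a=2):
    """Compute m = a^(B!) - 1 mod c and return gcd(m, c).  By Fermat's
    little theorem this is a multiple of every prime p | c whose p - 1 is
    B-smooth, so the gcd is a nontrivial factor whenever some other prime
    factor of c is not captured."""
    for k in range(2, B + 1):
        a = pow(a, k, c)          # after the loop, a = a0^(B!) mod c
    return gcd(a - 1, c)
```

For example, $c = 13\cdot 107$: $13-1 = 12$ divides $10!$, while $107-1 = 2\cdot 53$ does not, so `pollard_p_minus_1(13 * 107, B=10)` returns $13$.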
Some of these methods use only operations defined in the ring. Others, such as the elliptic-curve method, also perform division operations by pretending that $c$ is prime. If such an operation fails at some point, the divisor is taken as the outcome $m$. In other words, the purpose of these methods is to compute a zero over the field of integers modulo some prime factor of $c$, possibly starting from some random initial state. Thus, the general scheme is summarized by a map $X\mapsto m$ from a random state $X$ in some set $\Omega$ to an integer $m$ modulo $c$. Different states may be tried until $m$ is equal to zero modulo some prime factor of $c$. The complexity of the algorithm depends on the computational complexity of generating $X$ in $\Omega$, the computational complexity of evaluating the map, and the average number of trials required to find a zero. In this paper, we employ this general scheme by focusing on a class $\Theta$ of maps defined as multivariate rational functions over prime fields $\mathbb{Z}_p$ of order $p$ and, more generally, over a finite field $\text{GF}(q)$ of order $q=p^k$ and degree $k$. The set $\Omega$ of inputs is taken equal to the domain of definition of the maps. More precisely, the maps are first defined over some algebraic number field $\mathbb{Q}(\alpha)$ of degree $k_0$, where $\alpha$ is an algebraic number, that is, a solution of some irreducible polynomial $P_I$ of degree $k_0$. Then, the maps are reinterpreted over a finite field.
Using the general scheme of the other methods, the class $\Theta$ leads to a factoring algorithm with polynomial complexity if \begin{enumerate} \item[(a)] the number of distinct zeros, say $N_P$, of the maps in $\Theta$ is arbitrarily large over $\mathbb{Q}(\alpha)$; \item[(b)] a large fraction of the zeros remain distinct when reinterpreted over a finite field whose order is greater than about $N_P^{1/M}$; \item[(c)] \label{cond_c} the product of the number of parameters and the field degree is upper-bounded by a sublinear power function of $\log N_P$; \item[(d)] the computational complexity of evaluating the map on any input is upper-bounded by a polynomial function of $\log N_P$. \end{enumerate} A subexponential factoring complexity is achieved with weaker scaling conditions on the map complexity, as discussed later in Sec.~\ref{sec_complex}. Later, this approach to factorization will be reduced to the search for rational points of a variety having an arbitrarily large number of rational intersection points with a hypersurface. The scheme employing rational functions resembles some existing methods, such as Pollard's $p-1$ algorithm. The main difference is that those algorithms generally rely on algebraic properties over finite fields, whereas the present scheme relies on algebraic properties over the field $\mathbb{Q}(\alpha)$. For example, Pollard's method ends up building a polynomial $x^n-1$ with $p-1$ roots over the finite field $\mathbb{Z}_p$, where $p$ is some prime factor of $c$. This feature of the polynomial comes from Fermat's little theorem and holds if the integer $n$ has $p-1$ as a factor. Thus, the existence of a large number of zeros of $x^n-1$ strictly depends on the field. Indeed, the polynomial does not have more than $2$ roots over the rationals. In our scheme, the main task is to find rational functions having a sufficiently large number of zeros over an algebraic number field.
This feature is then inherited by the functions over finite fields. Some specific properties of finite fields can eventually be useful, such as the reducibility of $P_I$ over $\mathbb{Z}_p$. This will be mentioned later. Let us illustrate the general idea with an example. Suppose that the input of the factorization problem is $c=p p'$, with $p$ and $p'$ prime numbers and $p<p'$. Let the map be a univariate polynomial of the form \be\label{Eq1} P(x)=\prod_{i=1}^{N_P}(x-x_i), \ee where the $x_i$ are integers randomly distributed in an interval between $1$ and $i_{max}\gg p'$. More generally, the $x_i$ can be rational numbers $n_i/m_i$ with $n_i$ and/or $m_i$ in $\{1,\dots,i_{max}\}$. When reinterpreted modulo $p$ or modulo $p'$, the numbers $x_i$ take random values over the finite fields. If $N_P<p$, we expect the polynomial to have about $N_P$ distinct roots over the finite fields. Thus, the probability that $P(x)\mod p=0$ or $P(x)\mod p'=0$ is about $N_P/p$ or $N_P/p'$, respectively, which are the ratios between the number of zeros and the size of the input space $\Omega$ over the finite fields. The probability that $P(x)\mod c$ contains a nontrivial factor of $c$ is about $\frac{N_P}{p}\left(1-\frac{N_P}{p'}\right)+\frac{N_P}{p'}\left(1-\frac{N_P}{p}\right)$. Thus, if $N_P$ is of the order of $p$, we can get a nontrivial factor by the Euclidean algorithm in a few trials. More specifically, if $p\simeq p'$ and $N_P\simeq \sqrt{c}/2$, then the probability of getting a nontrivial factor is roughly $1/2$. It is clear that a computational complexity of the map scaling subexponentially or polynomially in $\log N_P$ leads to a subexponential or polynomial complexity of the factoring algorithm. Thus, the central problem is to build a polynomial $P(x)$, or a rational function, with friendly computational properties with respect to the number of zeros. The scheme can be generalized by taking multivariate maps with $M$ input parameters.
In this case, the number of zeros needs to be of the order of $p^M$, which is the size of the input space over the field $\mathbb{Z}_p$. As a further generalization, the rational field can be replaced by an algebraic number field $\mathbb{Q}(\alpha)$ of degree $k_0$. A number in this field is represented as a $k_0$-dimensional vector over the rationals. Reinterpreting the components of the vector over a finite field $\mathbb{Z}_p$, the size of the sampling space is $p^{k_0 M}$, so that we should have $N_P\sim p^{k_0 M}$ in order to get nontrivial factors in a few trials. Actually, this is the worst-case scenario, since the reinterpretation of $\mathbb{Q}(\alpha)$ modulo $p$ can lead to a degree of the finite field much smaller than $k_0$. For example, if $\alpha$ is the root $e^{2\pi i/n}$ of the polynomial $x^n-1$, the degree of the corresponding finite field with characteristic $p$ collapses to $1$ if $n$ is a divisor of $p-1$. On the one hand, it is trivial to build polynomials with an arbitrarily large number $N_P$ of roots over the rationals if a computational cost growing linearly in $N_P$ is acceptable. On the other hand, it is also simple to build polynomials with friendly computational complexity with respect to $\log N_P$ if the roots are taken over algebraically closed fields. The simplest example is the previously mentioned polynomial $P(x)=x^n-1$, which has $n$ distinct complex roots and a computational complexity scaling as $\log n$. However, over the rationals, this polynomial has at most $2$ roots. We can include other roots by extending the rational field to an algebraic number field, but the extension would have a degree proportional to the number of roots, so that the computational complexity of evaluating $P(x)$ would grow polynomially in the number of roots over the extension.
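The dependence of the number of roots of $x^n-1$ on the field can be checked directly; the values $p=13$, $n=4$ below are hypothetical and serve only as an illustration.

```python
p, n = 13, 4   # hypothetical values with n dividing p - 1 = 12
# over Z_p the multiplicative group is cyclic of order p - 1, so x^n = 1
# has n solutions whenever n divides p - 1
roots_mod_p = [x for x in range(1, p) if pow(x, n, p) == 1]
print(len(roots_mod_p))    # 4 roots over Z_13
# over Q the only candidates are +/- 1
roots_over_Q = [x for x in (-1, 1) if x ** n == 1]
print(len(roots_over_Q))   # 2 rational roots
```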
\subsection{Algebraic-geometry rephrasing of the problem} It is clear that an explicit definition of each root of polynomial~(\ref{Eq1}) leads to an amount of memory allocation growing exponentially in $\log c$, so that the resulting factoring algorithm is exponential in time and space. Thus, the roots have to be defined implicitly by some simple rules. Considering a purely algebraic definition, we associate the roots with the rational solutions of a set of $n$ non-linear polynomial equations in $n$ variables ${\bf x}=(x_1,\dots,x_n)$, \be P_k({\bf x})=0,\;\;\;\; k\in\{0,\dots,n-1\}. \ee The solutions are intersection points of $n$ hypersurfaces. The roots of $P(x)$ are defined as the values of some coordinate, say $x_n$, at the intersection points. By eliminating the $n-1$ variables $x_1,\dots,x_{n-1}$, we end up with a polynomial $P(x_n)$ whose number of roots generally grows exponentially in $n$. This solves the problem of space complexity in the definition of $P(x)$. There are two remaining problems. First, we have to choose the polynomials $P_0,\dots,P_{n-1}$ such that an exponentially large fraction of the intersection points are rational. Second, the variable elimination, given a value of $x_n$, has to be performed as efficiently as possible over a finite field. If the elimination has polynomial complexity, then factorization turns out to have polynomial complexity. Note that the elimination of $n-1$ variables given $x_n$ is equivalent to a consistency test of the $n$ polynomials. The first problem can be solved in a simple way by defining the polynomials as elements of the ideal generated by products of linear polynomials. Let us denote linear polynomials by symbols with a hat. For example, the quadratic polynomials \be\label{quadr_polys} G_i=\hat a_i\hat b_i, \;\;\;\; i\in\{1,\dots,n\} \ee generally have $2^n$ rational common zeros, provided that the coefficients of $\hat a_i$ and $\hat b_i$ are rational.
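A minimal sketch of this count, assuming the degenerate choice in which each linear form involves a single variable, $\hat a_i = x_i - a_i$ and $\hat b_i = x_i - b_i$ with hypothetical rational constants $a_i$, $b_i$: the common zero set is then a grid of $2^n$ points.

```python
from itertools import product

n = 3
a = [1, 2, 3]   # hypothetical constants; a_i != b_i keeps the points distinct
b = [4, 5, 6]
# each G_i = (x_i - a_i)(x_i - b_i) vanishes iff x_i equals a_i or b_i,
# so the common zeros form a grid of 2**n rational points
zeros = list(product(*zip(a, b)))
assert all((x - a[i]) * (x - b[i]) == 0 for pt in zeros for i, x in enumerate(pt))
print(len(zeros))  # 2**3 = 8
```

For general linear forms $\hat a_i$, $\hat b_i$ in all the variables, each of the $2^n$ choices of one factor per generator gives a linear system whose solution is generically a single rational point, reproducing the same count.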
Identifying the polynomials $P_0,\dots,P_{n-1}$ with elements of the ideal generated by $G_1,\dots,G_n$, we have \begin{equation} \label{poly_eqs_simple} P_k=\sum_i c_{k,i}({\bf x})\hat a_i \hat b_i,\;\;\; k\in\{0,\dots,n-1\}, \end{equation} whose set of common zeros contains the $2^n$ rational points of the generators $G_i$. In particular, if the polynomials $c_{k,i}$ are set equal to constants, then the system $P_0=\dots=P_{n-1}=0$ is equivalent to the system $G_1=\dots=G_n=0$. At this point, the variable elimination is the final problem. A working method is to compute a Gr\"obner basis. For the purpose of factorizing $c=p p'$, the task is to evaluate a Gr\"obner basis to check whether the $n$ polynomials, for a given $x_n$, are consistent modulo $c$. If they are consistent modulo some non-trivial factor $p$ of $c$, we end up at some point with an integer equal to zero modulo $p$. However, the complexity of this computation is doubly exponential in the worst case. Thus, we have to search for a suitable set of polynomials with a large set of rational zeros such that there is an efficient algorithm for eliminating $n-1$ variables. The variable elimination is efficient if $n-1$ out of the $n$ polynomial equations $P_k=0$ form a suitable triangular system for some set of low-degree polynomials $c_{k,i}$. Let us assume that the last $n-1$ polynomials have the triangular form $$ \left. \begin{array}{r} P_{n-1}(x_{n-1},x_n) \\ P_{n-2}(x_{n-2},x_{n-1},x_n) \\ \dots \\ P_1(x_1,\dots,x_{n-2},x_{n-1},x_n) \end{array} \right\}, $$ such that the $k$-th polynomial is linear in $x_k$.
Thus, the corresponding polynomial equations can be sequentially solved for the first $n-1$ variables through the system \be \label{rational_system} \begin{array}{l} x_{n-1}=\frac{{\cal N}_{n-1}(x_n)}{{\cal D}_{n-1}(x_n)} \\ x_{n-2}=\frac{{\cal N}_{n-2}(x_n,x_{n-1})}{{\cal D}_{n-2}(x_n,x_{n-1})} \\ \dots \\ x_1=\frac{{\cal N}_1(x_n,x_{n-1},\dots,x_2)}{{\cal D}_1(x_n,x_{n-1},\dots,x_2)}, \end{array} \ee where ${\cal D}_k\equiv \partial P_k/\partial x_k$ and ${\cal N}_k\equiv P_k|_{x_k=0}$. This system defines a parametrizable curve, say $\cal V$, in the algebraic set defined by the polynomials $P_1,\dots,P_{n-1}$, the variable $x_n$ being the parameter. Recall that a curve is parametrizable if and only if its geometric genus is equal to zero. The overall set of variables can be efficiently computed over a finite field, provided that the polynomial coefficients $c_{k,i}$ are not too complex. Once the variables are determined, the remaining polynomial $P_0$ turns out to be equal to zero if ${\bf x}$ is an intersection point. Provided that the rational intersection points have distinct values of $x_n$ (which is essentially equivalent to stating that the points are distinct and lie in the variety $\cal V$), the procedure generates a value $P_0({\bf x})$ which is zero modulo $p$ with high probability if $p$ is of the order of the number of rational intersection points. For this inference, it is pivotal to assume that a large fraction of the rational points remain distinct when reinterpreted over the finite field. This algebraic-geometry rephrasing of the problem can be stated in a more general form. Let $\cal V$ and $\cal H$ be some irreducible curve in an $n$-dimensional space and a hypersurface, respectively. The curve $\cal V$ is not necessarily parametrizable, thus its genus may take strictly positive values. The points of $\cal H$ are the zero locus of some polynomial $P_0$.
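A toy sketch of this sequential elimination over $\mathbb{Z}/c\mathbb{Z}$, with hypothetical polynomials for $n=3$ and the sign convention $x_k=-{\cal N}_k/{\cal D}_k$ made explicit (pretending that $c$ is prime, so that a non-invertible denominator immediately certifies a factor of $c$):

```python
import math

def back_substitute(xn, polys, c):
    """Solve a triangular system P_k = D_k * x_k + N_k = 0 sequentially
    over Z/cZ, pretending c is prime.  Each entry of `polys` is a pair of
    callables (D_k, N_k) of the already-known coordinates (toy setup).
    A non-invertible denominator is itself a certificate for a factor of c."""
    known = [xn]
    for D, N in polys:
        d = D(*known) % c
        g = math.gcd(d, c)
        if g != 1:
            raise ValueError(f"denominator reveals the factor {g}")
        # x_k = -N_k / D_k  (the sign is absorbed into N_k in the text)
        known.append(-N(*known) * pow(d, -1, c) % c)
    return known

# hypothetical triangular system for n = 3 over a toy semiprime c:
#   P_2 = x_3 * x_2 - (x_3 + 1)          =>  x_2 = (x_3 + 1) / x_3
#   P_1 = (x_2 + x_3) * x_1 - x_2 * x_3  =>  x_1 = x_2 * x_3 / (x_2 + x_3)
c = 101 * 103
polys = [(lambda x3: x3, lambda x3: -(x3 + 1)),
         (lambda x3, x2: x2 + x3, lambda x3, x2: -(x2 * x3))]
x3, x2, x1 = back_substitute(7, polys, c)
print(x3, x2, x1)
```

Evaluating the remaining polynomial $P_0$ at the computed point and taking a GCD with $c$ is then exactly step~(3) of the procedure below.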
Let $N_P$ be the number of distinct intersection points between $\cal V$ and $\cal H$ over the rational field $\mathbb{Q}$. Over a finite field $\text{GF}(q)$, Weil's theorem states that the number of rational points, say $N_1$, of a smooth curve is bounded by the inequalities \be\label{weil_bounds} -2 g \sqrt{q} \le N_1-(q+1)\le 2 g\sqrt{q}, \ee where $g$ is the geometric genus of the curve. Generalizing to singular curves~\cite{aubry}, we have \be\label{aubry_eq} -2 g \sqrt{q}-\delta \le N_1-(q+1)\le 2 g\sqrt{q}+\delta, \ee where $\delta$ is the number of singularities, properly counted. These inequalities have the following geometric interpretation. For the sake of simplicity, let us assume that the singularities are ordinary double points. A singular curve, say $\cal S$, with genus $g$ is birationally equivalent to a smooth curve, say $\cal R$, with the same genus, for which Weil's theorem holds. That is, the rational points of $\cal R$ are bijectively mapped to the non-singular rational points of $\cal S$, apart from a possible finite set $\Omega$ of $2m$ points mapping to $m$ singular points of $\cal S$. The cardinality of $\Omega$ is at most $2\delta$ (attained when the $\delta$ singularities have tangent vectors over the finite field). We have two extremal cases. In one case, $\#\Omega=2\delta$, so that $\cal S$ has $\delta$ fewer points than $\cal R$ (each pair of points in $\Omega$ is merged into a singularity of $\cal S$). This gives the lower bound in~(\ref{aubry_eq}). In the second case, $\Omega$ is empty and the singular points of $\cal S$ are rational points. Thus, $\cal S$ has $\delta$ more rational points. Given this interpretation, Weil's upper bound still holds for the number of non-singular rational points, say $N_1'$, \be\label{weil2} N_1'\le (q+1)+2 g \sqrt{q}. \ee Thus, if the genus is much smaller than $\sqrt{q}$, then $N_1'$ is upper-bounded by a number close to the order of the field.
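Weil's bound~(\ref{weil_bounds}) can be verified numerically on a toy smooth curve of genus $1$ (a hypothetical elliptic curve, nonsingular modulo the chosen $p$):

```python
import math

p = 101  # hypothetical prime; y^2 = x^3 + x + 1 is smooth mod 101
# brute-force count of the affine points, plus the point at infinity
N1 = 1 + sum(1 for x in range(p) for y in range(p)
             if (y * y - (x ** 3 + x + 1)) % p == 0)
# genus-1 case of Weil's inequality (Hasse's bound)
assert abs(N1 - (p + 1)) <= 2 * math.sqrt(p)
print(N1)
```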
Now, let us assume that most of the $N_P$ rational points in ${\cal V}\cap{\cal H}$ over $\mathbb{Q}$ remain distinct when reinterpreted over $\mathbb{Z}_p$, with $p\simeq a N_P$, where $a$ is a number slightly greater than $1$, say $a=2$. We also assume that these points are not singularities of $\cal V$. Weil's inequality~(\ref{weil2}) implies that the curve does not have more than about $p$ points over $\mathbb{Z}_p$. Since $p\gtrsim N_1'\gtrsim N_P\sim p/2$, the number of non-singular rational points of the curve is of the order of the number of intersection points over $\mathbb{Z}_p$. This implies that a large fraction of the points ${\bf x}\in \cal V$ over the finite field are also points of $\cal H$. We have the following. \begin{claim} \label{claim_rat_curve} Let ${\cal V}$ and ${\cal H}$ be an algebraic curve with genus $g$ and a hypersurface, respectively. The hypersurface is the zero locus of the polynomial $P_0$. Their intersection has $N_P$ distinct points over the rationals, which are not singularities of $\cal V$. Let us also assume that $g\ll \sqrt{N_P}$ and that most of the $N_P$ rational points remain distinct over $\mathbb{Z}_p$ with $p\gtrsim 2 N_P$. If we pick a point of $\cal V$ at random, then \be P_0({\bf x})=0 \mod p \ee with probability about equal to the ratio $N_P/p$. \end{claim} If there are pairs $({\cal V},{\cal H})$ satisfying the premises of this claim for every $N_P$, then prime factorization is reduced to the search for rational points of a curve. Actually, these pairs always exist, as shown later in this section. Assuming that $c=p p'$ with $p\sim p'$, the procedure for factorizing $c$ is as follows. \begin{enumerate} \item[(1)] Take a pair $({\cal V},{\cal H})$ with $N_P\sim c^{1/2}$ such that the premises of Claim~\ref{claim_rat_curve} hold. \item[(2)] Search for a rational point ${\bf x}\in\cal V$ over $\mathbb{Z}_p$. \item[(3)] Compute $\text{GCD}[P_0({\bf x}),c]$, the greatest common divisor of $P_0({\bf x})$ and $c$.
\end{enumerate} The last step gives $\text{GCD}[P_0({\bf x}),c]$ equal to $1$, $c$ or one of the factors of $c$. The probability of getting a nontrivial factor can be made close to $1/2$ with a suitable tuning of $N_P$ (as shown later in Sec.~\ref{sec_algo}). Finding rational points of a general curve with genus greater than $2$ is an exceptionally complex problem. For example, just proving that the plane curve $x^h+y^h=1$ with $h>2$ has no nontrivial zeros over the rationals took more than three centuries after Fermat stated it. Curves with genus $1$ are elliptic curves, which play an important role in prime factorization (see Lenstra's algorithm~\cite{lenstra}). Here, we will focus on parametrizable curves, which have genus $0$. In particular, we will consider parametrizations generated by the sequential equations~(\ref{rational_system}). It is interesting to note that it is always possible to find a curve $\cal V$ with parametrization~(\ref{rational_system}) and a hypersurface $\cal H$ such that their intersection contains a given set of rational points. In particular, there is a set of polynomials $P_0,\dots,P_{n-1}$ of the form~(\ref{poly_eqs_simple}), such that the zero locus of the last $n-1$ polynomials contains a parametrizable curve with parametrization~(\ref{rational_system}), whose intersection with the hypersurface $P_0=0$ contains $2^n$ distinct rational points. Let the intersection points be defined by the polynomials~(\ref{quadr_polys}). Provided that $x_n$ is a separating variable, the set of intersection points admits the rational univariate representation~\cite{rouillier} \be \left\{ \begin{array}{l} x_{n-1}=\frac{\bar{\cal N}_{n-1}(x_n)}{{\bar{\cal D}}_{n-1}(x_n)} \\ x_{n-2}=\frac{{\bar{\cal N}}_{n-2}(x_n)}{\bar{\cal D}_{n-2}(x_n)} \\ \dots \\ x_1=\frac{\bar{\cal N}_1(x_n)}{\bar{\cal D}_1(x_n)} \\ {\bar{\cal N}}_0(x_n)=0 \end{array} \right.
\ee The first $n-1$ equations are a particular form of equation~(\ref{rational_system}) and define a parametrizable curve with $x_n$ as parameter. The last equation can be replaced by some linear combination of the polynomials $G_i$. It is also interesting to note that the rational univariate representation is unique once the separating variable is chosen. This means that the parametrizable curve is uniquely determined by the set of intersection points and the variable chosen as parameter. It is clear that the curve and hypersurface obtained through this construction with a general set of polynomials $G_i$ satisfy the premises of Claim~\ref{claim_rat_curve}. Indeed, a large fraction of the common zeros of the polynomials $G_i$ are generally distinct over a finite field $\mathbb{Z}_p$ with $p\simeq N_P$. For example, the point $\hat a_1=\dots=\hat a_n=0$ is distinct from the other points if and only if $\hat b_i\ne 0$ at that point for every $i\in\{1,\dots,n\}$. Thus, the probability that a given point is not distinct over $\mathbb{Z}_p$ is of the order of $p^{-1}\sim N_P^{-1}$, hence a large fraction of the points remain distinct over the finite field. There is an apparent paradox. With a suitable choice of the linear functions $\hat a_i$ and $\hat b_i$, the intersection points can be made distinct over a field $\mathbb{Z}_p$ with $p\ll N_P$, which would contradict Weil's inequality~(\ref{weil2}). The contradiction is explained by the fact that the curve becomes reducible over the finite field, breaking into a union of components. In other words, some denominator ${\cal D}_k$ turns out to be equal to zero modulo $p$ at some intersection points. This may also happen with $p\sim N_P$, but it is not a concern. Indeed, possible zero denominators can be used to find factors of $c$.
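A minimal numeric illustration of such collisions, with hypothetical rationals $n_i/m_i$: two points distinct over $\mathbb{Q}$ that coincide modulo $p$ hand the factor to the greatest common divisor.

```python
import math

c = 101 * 103            # toy semiprime
n1, m1 = 5, 7            # the rational value 5/7
n2, m2 = 5 + 3 * 101, 7  # distinct over Q, but equal to 5/7 modulo 101
# n1/m1 = n2/m2 (mod p)  <=>  p divides n1*m2 - n2*m1
diff_numerator = n1 * m2 - n2 * m1
g = math.gcd(abs(diff_numerator), c)
print(g)  # 101
```

The same mechanism applies to a vanishing denominator ${\cal D}_k$: its value modulo $c$ shares the factor $p$ with $c$, so the "failure" is as useful as a zero of $P_0$.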
\subsection{Contents} In Section~\ref{sec_algo}, we introduce the general scheme of the factoring algorithm based on rational maps and discuss its computational complexity in terms of the complexity of the maps, the number of parameters and the field degree. In Section~\ref{sec_arg_geo}, the factorization problem is reduced to the search for rational points of parametrizable algebraic varieties $\cal V$ having an arbitrarily large number $N_P$ of rational intersection points with a hypersurface $\cal H$. Provided that $N_P$ grows exponentially in the space dimension, the factorization algorithm has polynomial complexity if the number of parameters and the complexity of evaluating a point in $\cal V$ over a finite field grow sublinearly and polynomially in the space dimension, respectively. Thus, the varieties $\cal V$ and $\cal H$ have to satisfy two requirements. On one side, their intersection has to contain a large set of rational points. On the other side, $\cal V$ has to be parametrizable and its points have to be computed efficiently given the values of the parameters. The first requirement is fulfilled with a generalization of the construction given by Eq.~(\ref{poly_eqs_simple}). First, we define an ideal $I$ generated by products of linear polynomials such that the associated algebraic set contains $N_P$ rational points. The relevant information on this ideal is encoded in a satisfiability (SAT) formula in conjunctive normal form (CNF) and a linear matroid. Then, we define $\cal V$ and $\cal H$ as elements of the ideal. By construction, ${\cal V}\cap{\cal H}$ contains the $N_P$ rational points. The ideal $I$ and the polynomials defining the varieties contain free coefficients. The second requirement is tackled in Sec.~\ref{build_up}. By imposing the parametrization of $\cal V$, we get a set of polynomial equations for the coefficients.
These equations always admit a solution, provided that the only constraint on $\cal V$ and $\cal H$ is being elements of $I$. The task is to find an ideal $I$ and a set of coefficients such that the computation of points in $\cal V$ is as efficient as possible, given a number of parameters scaling sublinearly in the space dimension. In this general form, the problem of building the varieties $\cal V$ and $\cal H$ is quite intricate. A good strategy is to start with simple ideals and search for varieties in a subset of these ideals, so that the polynomial constraints on the unknown coefficients can be handled with little effort. With these restrictions, it is not guaranteed that the required varieties exist, but we can get hints on how to proceed. This strategy is employed in Sec.~\ref{sec_quadr_poly}, where we consider an ideal generated by the polynomials~(\ref{quadr_polys}). The varieties are defined by linear combinations of these generators with constant coefficients, that is, $\cal H$ and $\cal V$ are in the zero locus of $P_0$ and of $P_1,\dots,P_{n-1}$, respectively, defined by Eq.~(\ref{poly_eqs_simple}). The $2^n$ rational points associated with the ideal are taken to be distinct over $\mathbb{Q}$. First, we prove that there is no solution with one parameter ($M=1$) in dimensions greater than $4$. We give an explicit numerical example of a curve and hypersurface in dimension $4$, whose intersection has $16$ rational points. We also give a solution with about $n/2$ parameters. Suggestively, this solution resembles a kind of retro-causal model. Retro-causality is considered one possible explanation of some strange aspects of quantum theory, such as non-locality and wave-function collapse after a measurement. Finally, we close the section by proving that there is a solution with $2\le M \le (n-1)/3$. This is shown by explicitly building a variety $\cal V$ with $(n-1)/3$ parameters.
Whether it is possible to drop the number of parameters below this upper bound is left as an open problem. If $M$ grows sublinearly in $n$, then there is automatically a factoring algorithm with polynomial complexity, provided that the coefficients defining the polynomials $P_k$ are in $\mathbb{Q}$ and can be computed efficiently over a finite field. The conclusions and perspectives are drawn in Sec.~\ref{conclusion}. \section{General scheme and complexity analysis} \label{sec_algo} At a low level, the central object of the factoring algorithm under study is a class $\Theta$ of maps ${\vec\tau}\mapsto {\cal R}(\vec\tau)$ from a set $\vec\tau\equiv(\tau_1,\dots,\tau_M)$ of $M$ parameters over the field $\mathbb{Q}(\alpha)$ to a number in the same field, where $\cal R$ is a rational function, that is, a ratio of two polynomials. Let us write it as $$ {\cal R}(\vec\tau)\equiv \frac{{\cal N}(\vec\tau)}{{\cal D}(\vec\tau)}. $$ This function may be defined indirectly by consecutively applying simpler rational functions, as done in Sec.~\ref{sec_arg_geo}. Note that the computational complexity of evaluating ${\cal R}(\vec\tau)$ can be lower than the complexity of evaluating the numerator ${\cal N}(\vec\tau)$. For this reason, we consider general rational functions rather than just polynomials. Neither $M$ nor $\alpha$ is necessarily fixed within the class $\Theta$. We denote by $N_P$ the number of zeros of the polynomial $\cal N$ over $\mathbb{Q}(\alpha)$. The number $N_P$ is supposed to be finite; we will come back to this assumption later in Sec.~\ref{sec_infinite_points}. For the sake of simplicity, we first introduce the general scheme of the algorithm over the rational field. Then, we outline its extension to algebraic number fields. We mainly consider the case of semiprime input, that is, $c$ is taken equal to the product of two prime numbers $p$ and $p'$. This case is the most relevant in cryptography.
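The remark about ${\cal R}$ versus ${\cal N}$ can be illustrated with a toy iterated map (not one of the maps constructed later): iterating $x\mapsto x+1/x$ builds a rational function whose numerator degree doubles at every step, while the evaluation cost grows only linearly in the number of steps.

```python
def iterate_map(x, k, c):
    """Evaluate k steps of x -> x + 1/x over Z/cZ.  After k steps this is a
    rational function N/D with deg(N) = 2**k, yet the cost is k inversions.
    Here c = 10007 is prime and c = 3 (mod 4), so x^2 = -1 has no solution
    and the iterate can never reach zero; with a composite c, a failed
    inversion would instead expose a factor."""
    for _ in range(k):
        x = (x + pow(x, -1, c)) % c
    return x

c = 10007
print(iterate_map(3, 20, c))
```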
If the rational points are somehow randomly distributed when reinterpreted over $\mathbb{Z}_p$, then the polynomial ${\cal N}$ has at least about $N_P$ distinct zeros over the finite field, provided that $N_P$ is sufficiently smaller than the size $p^{M}$ of the input space $\Omega$. We could have additional zeros in the finite field, but we conservatively assume that $N_P$ is a good estimate for the total number. For $N_P$ close to $p^M$, two different roots in the rational field may collapse to the same number in the finite field. We will account for that later in Sec.~\ref{sec_complex}. Given the class $\Theta$, the factorization procedure has the same general scheme as other methods using finite fields. Again, the value $m={\cal R}(\tau_1,\tau_2,\dots)$ is computed by pretending that $c$ is prime and $\mathbb{Z}/c\mathbb{Z}$ is a field. If an algebraic division leads to a contradiction during the computation of $\cal R$, the divisor is taken as the outcome $m$. For the sake of simplicity, we neglect the zeros of $\cal D$ and consider only the zeros of $\cal N$. In Section~\ref{sec_arg_geo}, we will see that this simplification is irrelevant for the complexity analysis. It is clear that the outcome $m$ is zero modulo some divisor $p$ of $c$ with high probability if the number of zeros is about or greater than the number of inputs $p^{M}$. Furthermore, if $p'>p$ and $N_P$ is sufficiently smaller than $(p')^{M}$, then the outcome $m$ contains the nontrivial factor $p$ of $c$ with high probability. This is guaranteed if $N_P$ is taken equal to about $c^{M/2}$, which is almost optimal if $p\simeq p'$, as we will see later in Sec.~\ref{sec_complex}. Thus, we have the following. \begin{algorithm} \label{gen_algo0} Factoring algorithm with input $c=p p'$, $p$ and $p'$ being prime numbers.
\begin{enumerate} \item[(1)] \label{algo0_1} Choose a map in $\Theta$ with $M$ input parameters and $N_P$ zeros over the rationals such that $N_P\simeq c^{M/2}$ (see Sec.~\ref{sec_complex} for an optimal choice of $N_P$); \item[(2)] generate a set of $M$ random numbers $\tau_1,\dots,\tau_M$ over $\mathbb{Z}/c\mathbb{Z}$; \item[(3)] \label{algo0_3} compute the value $m={\cal R}(\tau_1,\dots,\tau_M)$ over $\mathbb{Z}/c\mathbb{Z}$ (by pretending that $c$ is prime); \item[(4)] \label{algo0_4} compute the greatest common divisor between $m$ and $c$; \item[(5)] if a nontrivial factor of $c$ is not obtained, repeat from point (2). \end{enumerate} \end{algorithm} The number $M$ of parameters may depend on the map picked from $\Theta$. Let $M_{min}(N_P)$ be the minimum of $M$ in $\Theta$ for given $N_P$. The setting at point~(\ref{algo0_1}) is possible only if $M_{min}$ grows less than linearly in $\log N_P$, which is condition~(c) enumerated in the introduction. A tighter condition is necessary if the computational complexity of evaluating the map scales subexponentially, but not polynomially. This will be discussed in more detail in Sec.~\ref{sec_complex}. If $c$ has more than two prime factors, $N_P$ must be chosen equal to about $p^{M}$, where $p$ is an estimate of one prime factor. If there is no knowledge about the factors, the algorithm can be executed by trying different orders of magnitude of $N_P$ from $2$ to $c^{1/2}$. For example, we can increase the guessed $N_P$ by a factor of $2$, so that the overall number of executions grows polynomially in $\log_2 p$. However, better strategies are available. A map with too large an $N_P$ ends up producing zero modulo $p$ for every factor $p$ of $c$ and, thus, the algorithm always generates the trivial factor $c$. Conversely, too small an $N_P$ gives too small a probability of getting a factor. Thus, we can employ a kind of bisection search. A sketch of the search algorithm is as follows.
\begin{enumerate} \item set $a_d=1$ and $a_u=c^M$; \item set $N_P=\sqrt{a_d a_u}$ and choose a map in $\Theta$ with $N_P$ zeros; \item execute Algorithm~\ref{gen_algo0} from point (2) and break the loop after a certain number of iterations; \item if a nontrivial factor is found, return it as the outcome; \item if the algorithm found only the trivial divisor $c$, set $a_u=N_P$, otherwise set $a_d=N_P$; \item go back to point (2). \end{enumerate} This kind of search can reduce the number of executions of Algorithm~\ref{gen_algo0}. In the following, we will not discuss these optimizations for multiple prime factors; we will mainly consider semiprime integers $c=p p'$. \subsection{Extension to algebraic number fields} Before outlining how the algorithm can be extended to algebraic number fields, let us briefly recall what a number field is. The number field $\mathbb{Q}(\alpha)$ is a rational field extension obtained by adjoining an algebraic number $\alpha$ to the field $\mathbb{Q}$. The number $\alpha$ is a root of some irreducible polynomial $P_I$ of degree $k_0$, which is also called the degree of $\mathbb{Q}(\alpha)$. The extension field includes all the elements of the form $\sum_{i=0}^{k_0-1} r_i \alpha^i$, where the $r_i$ are rational numbers. Every power $\alpha^h$ with $h\ge k_0$ can be reduced to that form through the equation $P_I(\alpha)=0$. Thus, an element of $\mathbb{Q}(\alpha)$ can be represented as a $k_0$-dimensional vector over $\mathbb{Q}$. Formally, the extension field is defined as the quotient ring $\mathbb{Q}[X]/P_I$, the polynomial ring over $\mathbb{Q}$ modulo $P_I$. The quotient ring is also a field as long as $P_I$ is irreducible. Reinterpreting the rational function $\cal R$ over a finite field $\text{GF}(p^k)$ means reinterpreting the $r_i$ and the coefficients of $P_I$ as integers modulo a prime number $p$.
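A minimal sketch of this vector representation for the hypothetical choice $\alpha=\sqrt{2}$, $P_I=x^2-2$, $k_0=2$ (reinterpreting the entries modulo $p$ would replace the rational arithmetic below with arithmetic in $\mathbb{Z}_p$):

```python
from fractions import Fraction as F

def mul_sqrt2(u, v):
    """(u0 + u1*a) * (v0 + v1*a) with a^2 reduced to 2 via P_I = x^2 - 2."""
    return (u[0] * v[0] + 2 * u[1] * v[1], u[0] * v[1] + u[1] * v[0])

u = (F(1, 2), F(3))     # 1/2 + 3*sqrt(2)
v = (F(2), F(-1, 3))    # 2  - (1/3)*sqrt(2)
print(mul_sqrt2(u, v))  # -1 + (35/6)*sqrt(2)
```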
Since the polynomial $P_I$ may be reducible over $\mathbb{Z}_p$, the degree $k$ of the finite field is some value between $1$ and $k_0$, equal to the degree of one of the irreducible factors of $P_I$. Let $D_1,\dots,D_f$ be the irreducible factors of $P_I$ over $\mathbb{Z}_p$. Each $D_i$ is associated with a finite field $\mathbb{Z}_p[X]/D_i\cong \text{GF}(p^{k_i})$, where $k_i$ is the degree of $D_i$. Smaller values of $k$ lead to a computational advantage, as the size $p^{k M}$ of the input space $\Omega$ is smaller and the probability, about $N_P/p^{k M}$, of getting the factor $p$ is higher. For example, the cyclotomic number field with $\alpha=e^{2\pi i/n}$ has degree equal to $\phi(n)$, where $\phi$ is the Euler totient function, which is asymptotically lower-bounded by $K n/\log\log n$ for every constant $K<e^{-\gamma}$, $\gamma$ being the Euler constant. In other words, the highest degree of the prime polynomial factors of $x^n-1$ is equal to $\phi(n)$. Let $P_I$ be the factor with $e^{2\pi i/n}$ as a root. If $n$ is a divisor of $p-1$ for some prime number $p$, then $P_I$ turns out to have linear factors over $\mathbb{Z}_p$. Thus, the degree of the number field collapses to $1$ when mapped to a finite field with characteristic $p$. Hence, the bound $k_0$ represents a worst case. For general number fields, the equality $k=k_0$ is more the exception than the rule, apart from the case of the rational field, for which $k=k_0=1$. For the sake of simplicity, let us assume for the moment that $k=k_0$ for one of the two factors of $c$, say $p'$. Algorithm~\ref{gen_algo0} is modified as follows. The map is chosen at point~(1) of Algorithm~\ref{gen_algo0} such that $N_P\simeq c^{k_0 M/2}$; the value $m$ computed at point~(3) is a polynomial over $\mathbb{Z}/c\mathbb{Z}$ of degree $k_0-1$; the greatest common divisor at point~(4) is computed between one of the coefficients of the polynomial $m$ and $c$.
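The modified GCD step can be sketched numerically for the reducible case. The toy values below are hypothetical: $c=13\cdot 17$ and $P_I=x^2+1$, which splits modulo $13$ since $5^2\equiv-1$; a Euclidean-style pass on $m$ and $P_I$ over $\mathbb{Z}/c\mathbb{Z}$, pretending $c$ is prime, exposes the factor through a non-invertible coefficient.

```python
import math

def poly_gcd_factor(A, B, c):
    """One Euclidean-style pass on polynomials over Z/cZ (coefficient lists,
    lowest degree first), pretending c is prime; a non-invertible leading
    coefficient of a remainder exposes a factor of c.  Toy sketch only."""
    while B and any(B):
        g = math.gcd(B[-1] % c, c)
        if 1 < g < c:
            return g
        inv = pow(B[-1] % c, -1, c)
        while len(A) >= len(B) and any(A):
            q = A[-1] * inv % c
            shift = len(A) - len(B)
            A = [(x - q * (B[i - shift] if i >= shift else 0)) % c
                 for i, x in enumerate(A)]
            while A and A[-1] == 0:
                A.pop()
        A, B = B, A
    return None

c = 13 * 17          # toy semiprime
P_I = [1, 0, 1]      # x^2 + 1, reducible mod 13 because 5^2 = -1 (mod 13)
m = [-5 % c, 1]      # m = x - 5, i.e. the value zero in Z_13[x]/(x - 5)
print(poly_gcd_factor(P_I, m, c))  # 13
```

Dividing $x^2+1$ by $x-5$ over $\mathbb{Z}/221\mathbb{Z}$ leaves the remainder $26\equiv 0 \pmod{13}$, whose GCD with $c$ is the factor $13$.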
If the degree $k$ of the finite field of characteristic $p$ turns out to be smaller than $k_0$, we have to compute the polynomial greatest common divisor between $m$ and $P_I$ by pretending again that $\mathbb{Z}/c\mathbb{Z}$ is a field. If $m$ is zero over $\text{GF}(p^{k})$, then the Euclidean algorithm generates at some point a residual polynomial whose leading coefficient has $p$ as a factor (generally, all the coefficients turn out to have $p$ as a factor). If $k\ne k_0$ for both factors and most of the maps, then the algorithm ends up generating the trivial factor $c$, so that we need to decrease $N_P$ until a non-trivial factor is found. \subsection{Complexity analysis} \label{sec_complex} The computational cost of the algorithm grows linearly with the product of the computational cost of the map, say ${\bf C}_0({\cal R})$, and the average number of trials, which is roughly $p^{k_0 M}/N_P$ provided that $N_P\ll p^{k_0 M}$ and $P_I$ is irreducible over $\mathbb{Z}_p$. The class $\Theta$ may contain many maps with a given number $N_P$ of zeros over some number field. We can choose the optimal map for each $N_P$, so that we express $k_0$, $M$ and $\cal R$ as functions of $\log N_P\equiv \xi$. The computational cost ${\bf C}_0({\cal R})$ is then written as a function of $\xi$, ${\bf C}_0(\xi)$. Let us evaluate the computational complexity of the algorithm in terms of the scaling properties of $k_0(\xi)$, $M(\xi)$ and ${\bf C}_0({\xi})$ as functions of $\xi=\log N_P$. The complexity ${\bf C}_0(\xi)$ is expected to be a monotonically increasing function. If the functions $k_0(\xi)$ and $M(\xi)$ were decreasing, they would asymptotically tend to a constant, since they are not less than $1$. Thus, we assume that these two functions are monotonically increasing or constant. As previously said, the polynomial $\cal N$ typically has about $N_P$ distinct roots over $\text{GF}(p^{k_0})$, provided that $N_P$ is sufficiently smaller than $p^{k_0 M}$.
If $N_P$ is much greater than $p^{k_0 M}$, then almost every value of $\vec\tau$ is a zero of the polynomial. Assuming that the zeros are somehow randomly distributed, the probability that a point picked at random differs from every zero over $\text{GF}(p^{k_0})$ is equal to $(1-p^{-k_0 M})^{N_P}$. Thus, the number of roots over $\text{GF}(p^{k_0})$ is expected to be of the order of $p^{k_0 M} [1-(1-p^{-k_0 M})^{N_P}]$, which is about $N_P$ for $N_P\ll p^{k_0 M}$. Thus, the average number of trials required for getting a zero is \be N_{trials}\equiv \frac{1}{1-(1-p^{-k_0 M})^{N_P}}. \ee A trial is successful if it gives zero modulo some nontrivial factor of $c$, thus the number of required trials can be greater than $N_{trials}$ if some factors are close to each other. Let us consider the worst case with $c=p p'$, where $p$ and $p'$ are two primes with $p'\simeq p$ such that $(p')^{k_0 M}\simeq p^{k_0 M}$. Assuming again that the roots are randomly distributed, the probability of a successful trial is $\text{Pr}_\text{succ}\equiv 2 [1-(1-p^{-k_0 M})^{N_P}](1-p^{-k_0 M})^{N_P}$. The probability has a maximum equal to $1/2$ for $\xi$ equal to the value \be \xi_0\equiv\log\left[-\frac{\log 2}{\log(1-p^{-k_0 M})}\right]. \ee Expanding in a Taylor series around $p=\infty$, we have \be \xi_0=k_0 M\log p+\log\log 2-\frac{1}{2 p^{k_0 M}}+O(p^{-2 k_0 M}). \ee The first two terms give a very good approximation of $\xi_0$. At the maximum, the ratio between the number of zeros and the number of states $p^{k_0 M}$ of the sampling space is about $\log 2$. It is worth noting that, for the same value of $\xi$, the probability of getting an isolated factor with $p\ll p'$ is again exactly $1/2$. Thus, we have in general \be N_P\simeq 0.69 p^{k_0 M} \Rightarrow \text{Pr}_\text{succ}=1/2. \ee Since the maximal probability is independent of $k_0$ and $M$, this value is also maximal if $k_0$ and $M$ are taken as functions of $\xi$.
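The location and height of this maximum can be checked numerically (hypothetical values $p=10007$, $k_0=M=1$):

```python
import math

p, k0, M = 10007, 1, 1                     # hypothetical values
q = p ** (k0 * M)                          # number of states of the space
N_P = -math.log(2) / math.log(1 - 1 / q)   # the optimal N_P = exp(xi_0)
u = 1 - (1 - 1 / q) ** N_P                 # fraction of inputs hitting a zero
pr_succ = 2 * u * (1 - u)                  # probability of a successful trial
print(N_P / q, pr_succ)                    # ratio ~ log(2) = 0.69..., probability 1/2
```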
The maximum point $\xi_0$ is the solution of the equation \be \label{eq_xi0} \xi_0=\log\left[-\frac{\log 2}{\log(1-p^{-f(\xi_0)})}\right], \ee where $f(\xi)\equiv k_0(\xi) M(\xi)$. If the equation has no positive solution, then the probability is maximal for $\xi=0$. That is, the optimal map in the considered class is the one with $N_P=1$. This means that the number of states $p^{k_0 M}$ of the sampling space grows faster than the number of zeros. In particular, there is no solution for $\log p$ sufficiently large if $f(\xi)$ grows at least linearly (keep in mind that $f(\xi)\ge1$). Thus, the function $f(\xi)$ has to grow sublinearly, as previously said. The computational cost of the algorithm for a given map ${\cal R}(\xi)$ is \be {\bf C}(p,\xi)\equiv \frac{{\bf C}_0(\xi)}{2 [1-(1-p^{-f(\xi)})^{\exp\xi}](1-p^{-f(\xi)})^{\exp\xi }}. \ee The optimal map for given $p$ is obtained by minimizing ${\bf C}(p,\xi)$ with respect to $\xi$. The computational complexity of the algorithm is \be\label{comp_complexity} {\bf C}(p)=\min_{\xi>0} {\bf C}(p,\xi)\equiv {\bf C}(p,\xi_m), \ee which satisfies the bounds \begin{equation} \label{bounds} {\bf C}_0(\xi_m)\le {\bf C}(p)\le 2{\bf C}_0(\xi_0). \end{equation} The upper bound in Eq.~(\ref{bounds}) is the value of ${\bf C}(p,\xi)$ at $\xi=\xi_0$, whereas the lower bound is the computational complexity of the map at the minimum $\xi_m$. It is intuitive that the complexity ${\bf C}_0(\xi)$ must be subexponential in order to have ${\bf C}(p)$ subexponential in $\log p$. This can be shown by contradiction. Suppose that the complexity ${\bf C}(p)$ is subexponential in $\log p$ and ${\bf C}_0(\xi)=\exp(a \xi)$ for some positive $a$. The lower bound in Eq.~(\ref{bounds}) implies that the optimal $\xi_m$ grows more slowly than $\log p$. Asymptotically, \be \left. \frac{p^{k_0 M}}{N_P}\right|_{\xi=\xi_m}\sim e^{f(\xi_m)\log p-\xi_m}\ge K p^{1/2}, \ee for some constant $K$. 
Thus, the average number of trials grows exponentially in $\log p$, implying that the computational complexity is exponential, in contradiction with the premise. Since $f(\xi)$ and $\log {\bf C}_0(\xi)$ must grow less than linearly, we may assume that they are concave. \begin{property} \label{concave} The functions $f(\xi)$ and $\log {\bf C}_0(\xi)$ are concave, that is, \be \frac{d^2}{d\xi^2}f(\xi)\le 0, \;\;\; \frac{d^2}{d\xi^2}\log {\bf C}_0(\xi)\le 0. \ee \end{property} The lower bound in Eq.~(\ref{bounds}) depends on $\xi_m$, which depends on the function ${\bf C}_0(\xi)$. A tighter bound which is also simpler to evaluate can be derived from Property~\ref{concave} and the inequality \be {\bf C}(p,\xi)\ge \frac{1}{2}e^{f(\xi)\log p-\xi}{\bf C}_0(\xi). \ee \begin{lemma} If Property~\ref{concave} holds and ${\bf C}(p)$ is asymptotically sublinear in $p$, then there is an integer $\bar p$ such that the complexity ${\bf C}(p)$ is bounded from below by $\frac{{\bf C}_0(\xi_0)}{2\log2}$ for $p>\bar p$. \end{lemma} {\it Proof}. The minimum $\xi_m$ is smaller than $\xi_0$, since the function ${\bf C}_0(\xi)$ is monotonically increasing. Thus, we have \be {\bf C}(p)=\min_{\xi\in[0,\xi_0]}{\bf C}(p,\xi)\ge \min_{\xi\in[0,\xi_0]} e^{f(\xi)\log p-\xi+\log {\bf C}_0(\xi)}/2. \ee Since the exponential is monotonic and the exponent is concave, the objective function has a maximum and two local minima, at $\xi=0$ and $\xi=\xi_0$. Keeping in mind that $f(\xi)\ge 1$, the first local minimum is not less than $p\, {\bf C}_0(0)/2$. The second minimum is $e^{f(\xi_0)\log p-\xi_0} {\bf C}_0(\xi_0)/2$, which is greater than or equal to ${\bf C}_0(\xi_0)/(2 \log2)$. This can be proved by eliminating $p$ through Eq.~(\ref{eq_xi0}) and minimizing in $\xi_0$. Since ${\bf C}(p)$ is sublinear in $p$, there is an integer $\bar p$ such that the second minimum is the global one for $p>\bar p$. 
$\square$ Summarizing, we have \begin{equation} \label{bounds2} 0.72\,{\bf C}_0(\xi_0)\le {\bf C}(p)\le 2{\bf C}_0(\xi_0) \end{equation} for $p$ greater than some integer. Thus, the complexity analysis of the algorithm is reduced to studying the asymptotic behavior of ${\bf C}_0(\xi_0)$. The upper bound is asymptotically tight, that is, $\xi=\xi_0$ is asymptotically optimal. Taking $$ f(\xi)=b \xi^\beta \text{ with } \beta\in[0,1), $$ the optimal value of $\xi$ is $$ \xi_0=(b \log p)^\frac{1}{1-\beta}+O(1). $$ The function $f(\xi)$ cannot be linear, but we can take it very close to a linear function, \be f(\xi)=b \frac{\xi}{(\log\xi)^\beta}, \;\;\;\; \beta>1. \ee In this case, the optimal $\xi$ is $$ \xi_0=e^{(b\log p)^{1/\beta}}+O\left[(\log p)^{1/\beta} \right]. $$ There are three scenarios leading to subexponential or polynomial complexity. \begin{enumerate} \item[(a)] The functions ${\bf C}_0(\xi)$ and $f(\xi)$ scale polynomially as $\xi^\alpha$ and $\xi^\beta$, respectively, with $\beta\in[0,1)$. Then, the computational complexity ${\bf C}(p)$ scales polynomially in $\log p$ as $(\log p)^\frac{\alpha}{1-\beta}$. \item[(b)] The function ${\bf C}_0(\xi)$ is polynomial and $f(\xi)\sim\xi/(\log\xi)^\beta$ with $\beta>1$. Then the computational complexity ${\bf C}(p)$ scales subexponentially in $\log p$ as $\exp\left[b (\log p)^{1/\beta}\right]$. \item[(c)] The functions ${\bf C}_0(\xi)$ and $f(\xi)$ are superpolynomial and polynomial, respectively, with ${\bf C}_0(\xi)\sim\exp\left[b \xi^\alpha\right]$ and $f(\xi)\sim\xi^\beta$. If $\alpha+\beta<1$, then the complexity ${\bf C}(p)$ is subexponential in $\log p$ and scales as $\exp\left[b (\log p)^\frac{\alpha}{1-\beta}\right]$. \end{enumerate} The algorithm has polynomial complexity in the first scenario. The other cases are subexponential. This is also implied by the following. 
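The scaling of $\xi_0$ in scenario (a) can be checked numerically by iterating the first two Taylor terms of Eq.~(\ref{eq_xi0}), $\xi\mapsto f(\xi)\log p+\log\log 2$. This is an illustrative sketch of ours, not part of the algorithm:

```python
import math

def xi0(logp, f, x0=1.0, iters=300):
    """Fixed-point iteration for the leading-order optimum
    xi = f(xi) * log p + log log 2 (the first two Taylor terms of xi_0).
    For sublinear f the map is a contraction and converges quickly."""
    x = x0
    for _ in range(iters):
        x = f(x) * logp + math.log(math.log(2))
    return x

# scenario (a) with b = 1, beta = 1/2: predicted xi_0 ~ (log p)^{1/(1-beta)}
logp = math.log(1e9)
x = xi0(logp, lambda t: math.sqrt(t))
```

With $\beta=1/2$ the computed fixed point agrees with the predicted value $(\log p)^{2}$ up to the $O(1)$ correction, a fraction of a percent for $p\sim 10^9$.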
\begin{lemma} \label{litmus} The computational complexity ${\bf C}(p)$ is subexponential or polynomial in $\log p$ if the function ${\bf C}_0(\xi)^{f(\xi)}$ grows less than exponentially, that is, if $$ \lim_{\xi\rightarrow\infty}\frac{f(\xi)\log {\bf C}_0(\xi)}{\xi}=0. $$ In particular, the complexity is polynomial if ${\bf C}_0(\xi)$ is polynomial and $f(\xi)$ scales sublinearly. \end{lemma} This lemma can be easily proved directly from Eq.~(\ref{eq_xi0}) and the upper bound in Eq.~(\ref{bounds}), the former implying the inequality $\xi_0\le f(\xi_0)\log p+\log\log 2$. Let us prove the first statement. $$ \lim_{p\rightarrow\infty}\frac{\log{\bf C}(p)}{\log p}\le \lim_{\xi\rightarrow\infty}\frac{f(\xi)\log\left[2 {\bf C}_0(\xi)\right]}{\xi-\log\log 2}= \lim_{\xi\rightarrow\infty}\frac{f(\xi)\log{\bf C}_0(\xi)}{\xi}=0. $$ Using the lower bound in Eq.~(\ref{bounds2}), the lemma can be strengthened by adding the inferences in the other direction (\emph{if} replaced by \emph{if and only if}). Summarizing, we have the following. \begin{claim} \label{claim1} The factoring algorithm~\ref{gen_algo0} has subexponential (\emph{polynomial}) complexity if, for every $\xi=\log N_P>0$ with $N_P$ a positive integer, there are rational functions ${\cal R}(\vec\tau)=\frac{{\cal N}(\vec\tau)}{{\cal D}(\vec\tau)}$ of the parameters $\vec\tau=(\tau_1,\dots,\tau_{M(\xi)})$ over an algebraic number field $\mathbb{Q}(\alpha)$ of degree $k_0(\xi)$, with $\cal N$ and $\cal D$ coprime polynomials, such that \begin{enumerate} \item the number of distinct roots of $\cal N$ in $\mathbb{Q}(\alpha)$ is equal to about $N_P$. 
Most of the roots remain distinct when interpreted over finite fields of order equal to about $N_P^{1/M}$; \item given any value $\vec\tau$, the computation of ${\cal R}(\vec\tau)$ takes a number ${\bf C}_0(\xi)$ of arithmetic operations growing less than exponentially (\emph{polynomially}) in $\xi$; \item the function ${\bf C}_0(\xi)^{k_0(\xi) M(\xi)}$ is subexponential (\emph{the function $k_0(\xi) M(\xi)$ scales sublinearly}). \end{enumerate} \end{claim} Let us stress that the asymptotic complexity is less than exponential if and only if ${\bf C}_0(\xi)^{f(\xi)}$ is less than exponential. Thus, the latter condition is a litmus test for a given class of rational functions. However, the function ${\bf C}_0(\xi)^{f(\xi)}$ does not provide sufficient information on the asymptotic computational complexity of the factoring algorithm. The general number-field sieve is the algorithm with the best asymptotic complexity, which scales as $e^{a (\log p)^{1/3}}$. Thus, algorithm~\ref{gen_algo0} is asymptotically more efficient than the general number-field sieve if ${\bf C}_0(\xi)$ and $f(\xi)$ are asymptotically upper-bounded by a subexponential function $e^{b(\log p)^\alpha}$ and a power function $c \xi^\beta$, respectively, such that $\alpha<(1-\beta)/3$. In the limit case of $\beta\rightarrow 1$ and polynomial complexity of the map, the function $f(\xi)$ must be asymptotically upper-bounded by $b \xi/(\log\xi)^3$. \subsection{Number of rational zeros versus polynomial degree} Previously, we set upper bounds on the required computational complexity of the rational function $\cal R={\cal N}/{\cal D}$ in terms of the number of its rational zeros. For a polynomial (subexponential) complexity of prime factorization, the computational complexity ${\bf C}_0$ of $\cal R$ must scale polynomially (subexponentially) in the logarithm of the number of rational zeros. 
Thus, for a univariate rational function, it is clear that ${\bf C}_0$ has to scale polynomially (subexponentially) in the logarithm of the degree $d$ of $\cal N$, since the number of rational zeros is upper-bounded by the degree (fundamental theorem of algebra). An extension of this inference to multivariate functions is more elaborate, as upper bounds on the number of rational zeros are unknown. However, more precisely, we are interested in a set of $N_P$ rational zeros that largely remain distinct when reinterpreted over a finite field whose order is greater than about $N_P^{1/M}$. Under this restriction, let us show that the number of rational zeros of a polynomial of degree $d$ in $M$ variables is upper-bounded by $K d^{2 M}$ for some constant $K>0$. This bound allows us to extend the previous inference on ${\bf C}_0$ to the case of multivariate functions. Assuming that the $N_P$ rational zeros over $\mathbb{Q}$ are randomly distributed when reinterpreted over $\text{GF}(q)$, their number over the finite field is about $q^{M}\left[1-(1-q^{-M})^{N_P}\right]$, as shown previously. Since an upper bound on the number of zeros $N(q)$ of a smooth hypersurface over a finite field of order $q$ is known, we can evaluate an upper bound on $N_P$. Given the inequality~\cite{katz} \be\label{gen_weil} N(q)\le \frac{q^M-1}{q-1} +\left[(d-1)^M-(-1)^M\right]\left(1-d^{-1}\right)q^{(M-1)/2} \ee and \be \label{bound_ff} q^M\left[1-(1-q^{-M})^{N_P}\right]\le N(q), \ee we get an upper bound on $N_P$ for each $q$. Requiring that Eq.~(\ref{bound_ff}) be satisfied for every $q>N_P^{1/M}$, we get \be\label{up_bound} N_P< K d^\frac{2 M^2}{M+1}< K d^{2 M} \ee for some constant $K$ (the same result is obtained by assuming that Eq.~(\ref{bound_ff}) holds for every $q$). 
Note that a slight violation of bound~(\ref{up_bound}), with $N_P$ growing as $d_0^{M^a}$ in $M$ for some particular $d=d_0$ and $a>1$, would make the complexity of prime factorization polynomial, provided that the computational complexity of evaluating the function ${\cal R}$ is polynomial in $M$. This latter condition can actually be fulfilled, as shown with an example later. Ineq.~(\ref{gen_weil}) holds for smooth irreducible hypersurfaces. However, dropping these conditions is not expected to affect the bound~(\ref{up_bound}). For example, if $M=2$, then Ineq.~(\ref{gen_weil}) gives \be\label{weil_plane} N_P\le q+1+(d-1)(d-2)\sqrt{q}, \ee which is Weil's upper bound~(\ref{weil_bounds}) for a smooth plane curve, whose geometric genus $g$ is equal to $(d-1)(d-2)/2$. This inequality also holds for singular curves~\cite{aubry}. Indeed, this follows from the upper bound~(\ref{aubry_eq}) and the equality $g=(d-1)(d-2)/2-\delta$. Reducibility also does not affect Ineq.~(\ref{up_bound}). It is simple to find examples of multivariate functions with a number of rational zeros quite close to the bound $K d^{2 M}$. Trivially, there are polynomials ${\cal N}(\tau_1,\dots,\tau_M)$ of degree $d$ with a number of rational zeros at least equal to the number of coefficients minus $1$, that is, equal to $\bar N_P\equiv M!^{-1}\prod_{k=1}^M(d+k)-1\sim d^M/M!+O(d^{M-1})$. For $M=1$, this corresponds to taking the univariate polynomial \be\label{univ_poly} {\cal N}(\tau)=(\tau-x_1)(\tau-x_2)\dots(\tau-x_d). \ee A better construction of a multivariate polynomial is a generalization of the univariate polynomial in Eq.~(\ref{univ_poly}). Given linear functions $L_{i,s}(\vec\tau)$, the polynomial $$ \tilde P=\sum_{i=1}^M\prod_{s=1}^d L_{i,s}(\vec\tau) $$ generally has a number of rational zeros $N_P$ at least equal to $d^M$, which is the square root of the upper bound, up to a constant. 
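The $d^M$ lower bound of the construction $\tilde P=\sum_i\prod_s L_{i,s}$ can be checked on a toy instance. Here we pick, for illustration only, the simplest linear forms $L_{1,s}=x-s$ and $L_{2,s}=y-s$ with $M=2$, so that the grid points $(a,b)$ with $a,b\in\{0,\dots,d-1\}$ are $d^2$ guaranteed zeros, and we count zeros over a small prime field:

```python
def count_zeros(p, d):
    """Zeros of P(x, y) = prod_s(x - s) + prod_s(y - s) over GF(p).
    Both products vanish on the d x d grid, so P has at least d^2 zeros,
    matching the d^M lower bound of the construction in the text."""
    def uni(v):                      # prod_{s=0}^{d-1} (v - s) mod p
        r = 1
        for s in range(d):
            r = r * (v - s) % p
        return r
    cache = [uni(x) for x in range(p)]
    return sum(1 for x in range(p) for y in range(p)
               if (cache[x] + cache[y]) % p == 0)
```

For $p=31$ and $d=3$ the count is at least the $9$ grid zeros; the extra zeros come from cancellations between the two products, which only helps the construction.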
For $d<4$ and $M=2$, the number of rational zeros is generally infinite, since the genus is smaller than $2$ (see Sec.~\ref{sec_infinite_points} for the case of infinitely many rational points). A naive computation of $\tilde P(\vec\tau)$ takes $d M^2$ arithmetic operations, that is, its complexity is polynomial in $M$. This example provides an illustration of the complexity test described previously in Claim~\ref{claim1}. Expressing $d$ in terms of $M$ and $\xi=\log N_P$ and assuming that ${\bf C}_0\sim d M^2$, we have $$ {\bf C}_0(\xi)=M^2 e^{M^{-1}\xi}, $$ which is subexponential in $\xi$ (provided that $M$ is a growing function of $\xi$), a necessary condition for a subexponential algorithm. However, the polynomial does not pass the litmus test, as ${\bf C}_0(\xi)^M$ grows exponentially. \subsubsection{The case of infinitely many rational zeros} \label{sec_infinite_points} Until now, we have assumed that the rational function has a finite number of zeros over the rationals. However, in the multivariate case, there can be non-zero functions with an infinite number of zeros. For example, this is the case for bivariate polynomials with genus equal to zero or one, which correspond to parametrizable curves and elliptic curves, respectively. We can also have functions whose zero locus contains linear subspaces of positive dimension, which can contain infinitely many rational points. Since the probability of having ${\cal R}$ equal to zero modulo $p$ increases with the number of zeros over the rationals, this would seem to imply that the probability is equal to $1$ if the number of zeros is infinite. This is evidently not the case. For example, if ${\cal R}$ is zero for $x_1=0$ and $M>1$, the function evidently has infinitely many rational zeros over $\mathbb{Q}$, but the number of points with $x_1=0$ over $\mathbb{Z}_p$ is $p^{M-1}$, which is smaller by a factor of $p$ than the total number of points in the space. 
Once again, more precisely, we are interested in sets of $N_P$ rational zeros over $\mathbb{Q}$ such that a large fraction of them remain distinct over finite fields whose order is greater than about $N_P^{1/M}$. Under this condition, $N_P$ cannot be infinite and is constrained by Ineq.~(\ref{up_bound}). If there are linear subspaces of dimension $h>0$ in the zero locus of $\cal R$, we may fix some of the parameters $\vec\tau$, so that these subspaces become points. In the next sections, we will build rational functions having isolated rational points and possibly linear subspaces in the zero locus. If there are subspaces of dimension $h>0$ giving a dominant contribution to factorization, we can transform them into isolated rational points by fixing some parameters, without changing the asymptotic complexity of the algorithm. Isolated rational points are the only relevant points for an asymptotic study of the complexity of the factoring algorithm, up to a dimension reduction. Thus, we will consider only them and will disregard the other linear subspaces. \section{Setting the problem in the framework of algebraic geometry} \label{sec_arg_geo} Since the number of zeros $N_P$ is constrained by Ineq.~(\ref{up_bound}), the complexity of computing the rational function ${\cal R}(\vec\tau)$ must be subexponential or polynomial in $\log d$ in order to have ${\bf C}_0(\xi)$ subexponential or polynomial. This complexity scaling is attained if, for example, $\cal R$ is a polynomial with few monomials. The univariate polynomial $P=\tau^d-1$, which is pivotal in Pollard's $p-1$ algorithm, can be evaluated with a number of arithmetic operations scaling polynomially in $\log d$. This is achieved by consecutively applying polynomial maps. For example, if $d=2^g$, then $\tau^d$ is computed through $g$ applications of the map $x\rightarrow x^2$, starting with $x=\tau$. However, polynomials with few terms generally have few zeros over $\mathbb{Q}$. 
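The repeated-squaring evaluation of $\tau^d-1$ with $d=2^g$ can be sketched as follows (the function name is ours, for illustration):

```python
def eval_pollard_poly(tau, g, c):
    """Evaluate tau^(2^g) - 1 mod c with g squarings, as in the text:
    g consecutive applications of the map x -> x^2, starting from x = tau."""
    x = tau % c
    for _ in range(g):
        x = x * x % c
    return (x - 1) % c
```

The cost is $g=\log_2 d$ multiplications, i.e., polynomial in $\log d$, which is the point of the example in the text.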
More general polynomials and rational functions with tractable computational complexity are obviously available and are obtained by consecutive applications of simple functions, as done for $\tau^d-1$. This leads us to formulate the factorization problem in the framework of algebraic geometry as an intersection problem. \subsection{Intersection points between a parametrizable variety and a hypersurface} \label{sec_intersection} Considering only the operations defined in the field, the most general rational functions ${\cal R}(\vec\tau)$ with low complexity can be evaluated through the consecutive application of a small set of simple rational equations of the form \be\label{ratio_eqs} \begin{array}{c} x_{n-M}=\frac{{\cal N}_{n-M}(x_{n-M+1},\dots,x_n)}{{\cal D}_{n-M}(x_{n-M+1},\dots,x_n)} \\ x_{n-M-1}=\frac{{\cal N}_{n-M-1}(x_{n-M},\dots,x_n)}{{\cal D}_{n-M-1}(x_{n-M},\dots,x_n)} \\ \vdots \\ x_1=\frac{{\cal N}_1(x_2,\dots,x_n)}{{\cal D}_1(x_2,\dots,x_n)} \\ {\cal R}=P_0(x_1,\dots,x_n), \end{array} \ee where $P_0$ is a polynomial. If the numerators and denominators ${\cal N}_k$ and ${\cal D}_k$ do not contain too many monomials, then the computation of ${\cal N}_k/{\cal D}_k$ can be performed efficiently. Assuming that the computational complexity of these rational functions is polynomial in $n$, the complexity of $\cal R$ is polynomial in $n$. The computation of ${\cal R}(\vec\tau)$ is performed by setting the last $M$ components $x_{n-M+1},\dots,x_n$ equal to $\tau_1,\dots,\tau_M$ and generating the sequence $x_{n-M},x_{n-M-1},\dots,x_1,{\cal R}$ according to Eqs.~(\ref{ratio_eqs}), which ends with the value of ${\cal R}$. The procedure may fail to compute the right value of ${\cal R}(\vec\tau)$ if some denominator ${\cal D}_k(\vec\tau)\equiv {\cal D}_k[x_{k+1}(\vec\tau),\dots,x_n(\vec\tau)]$ turns out to be equal to zero during the computation. 
However, since our only purpose is to generate the zero of the field, we can take a zero divisor as the outcome and stop the computation of the sequence. In this way, the algorithm generates a modified function ${\cal R}'(\vec\tau)$. Defining $\bar{\cal N}_k(\vec\tau)$ as the numerator of the rational function ${\cal D}_k(\vec\tau)$, we have \be {\cal R}'(\vec\tau)=\left\{ \begin{array}{lr} {\cal R}(\vec\tau) & \;\; \text{if}\;\; \bar{\cal N}_1(\vec\tau)\dots \bar{\cal N}_{n-M}(\vec\tau)\ne0 \\ 0 & \text{otherwise} \end{array} \right. \ee The function ${\cal R}'(\vec\tau)$ has the zeros of ${\cal N}(\vec\tau)\bar{\cal N}_1(\vec\tau)\dots \bar{\cal N}_{n-M}(\vec\tau)$. For later reference, let us define the following. \begin{algorithm} \label{algo1} Computation of ${\cal R}'(\vec\tau)$. \begin{enumerate} \item set $(x_{n-M+1},\dots,x_n)=(\tau_1,\dots,\tau_M)$; \item set $k=n-M>0$; \item\label{attr} set $x_k=\frac{{\cal N}_k(x_{k+1},\dots,x_n)}{{\cal D}_k(x_{k+1},\dots,x_n)}$. If the division fails, return the denominator as the outcome; \item set $k=k-1$; \item\label{last_step} if $k=0$, return $P_0(x_1,\dots,x_n)$, otherwise go back to \ref{attr}. \end{enumerate} \end{algorithm} The zeros of the denominators are not expected to give an effective contribution to the asymptotic complexity of the factoring algorithm; otherwise, it would be more convenient to shorten the sequence by one step and replace the last function $P_0$ with the denominator ${\cal D}_1$. Let us show this. Let us denote by $N_1$ the number of rational zeros of ${\cal R}'$ over some finite field with all the denominators different from zero. They are the zeros returned at step~\ref{last_step} of Algorithm~\ref{algo1}. Let $N_T$ be the total number of zeros. Recall that the factoring complexity is about $p^{k M}$ times the ratio between the complexity of $\cal R$ and the number of zeros. 
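Algorithm~\ref{algo1} admits a compact rendering modulo $c$. In the sketch below, the single-step chain $x_1=(x_2^2+1)/x_2$ with $P_0=x_1x_2-x_2^2-1$ (which vanishes identically on the variety) is a deliberately trivial example of ours, chosen only to exercise both exits of the algorithm:

```python
from math import gcd

def eval_R_prime(taus, steps, P0, c):
    """Sketch of Algorithm 1 over Z/cZ.  `steps` lists (numerator,
    denominator) callables on the current tuple, for k = n-M down to 1;
    if a denominator is not invertible mod c it is returned as the
    outcome (step 3), and gcd(outcome, c) may expose a factor of c."""
    xs = list(taus)                        # (x_{n-M+1},...,x_n) = (tau_1,...,tau_M)
    for num, den in steps:
        d = den(xs) % c
        if gcd(d, c) != 1:                 # division fails over Z/cZ
            return d
        xs.insert(0, num(xs) * pow(d, -1, c) % c)
    return P0(xs) % c                      # step 5
```

With $c=35$ and $\tau=7$, the denominator $x_2=7$ is a zero divisor and is returned at step 3; for invertible $\tau$ the chain completes and returns $P_0$ evaluated on the generated tuple.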
If the algorithm is more effective than the one with one step less and $P_0$ replaced with ${\cal D}_1$, then $\frac{C_T}{N_T}<\frac{C_T-C_1}{N_T-N_1}$, where $C_1$ and $C_T$ are the number of arithmetic operations of the last step and of the whole algorithm, respectively. Since $C_1\ge1$, we have $$ N_1>N_T C_T^{-1}. $$ In order to have a subexponential factoring algorithm, $C_T$ must scale subexponentially in $\log N_T$. Thus, $$ N_1>N_T e^{-\alpha (\log N_T)^\beta} $$ for some positive $\alpha$ and $0<\beta<1$. That is, $$ \log N_1>\log N_T -\alpha (\log N_T)^\beta. $$ If we assume polynomial complexity, we get the tighter bound $$ \log N_1>\log N_T -\alpha \log\log N_T. $$ These inequalities imply that the asymptotic complexity of the factoring algorithm does not change if we discard the zero divisors at Step~\ref{attr} in Algorithm~\ref{algo1}. Thus, for a complexity analysis, we can consider only the zeros of ${\cal R}'(\vec\tau)$ with all the denominators ${\cal D}_k(\vec\tau)$ different from zero. This will simplify the subsequent discussion. Each of these zeros is associated with an $n$-tuple $(x_1,\dots,x_n)$, generated by Algorithm~\ref{algo1}, which is a solution of Eqs.~(\ref{ratio_eqs}). Let us denote by ${\cal Z}_P$ the set of these $n$-tuples. By definition, an element in ${\cal Z}_P$ is a zero of the set of $n-M+1$ polynomials \be \label{poly_affine} \begin{array}{l} P_0(x_1,\dots,x_n), \\ P_k(x_1,\dots,x_n)=x_k {\cal D}_k(x_{k+1},\dots,x_n)-{\cal N}_k(x_{k+1},\dots,x_n), \;\;\; k\in\{1,\dots,n-M\}. \end{array} \ee The last $n-M$ polynomials define an algebraic set of points, say $\cal A$, having one irreducible branch parametrizable by Eqs.~(\ref{ratio_eqs}). This branch defines an algebraic variety, which we denote by $\cal V$. The algebraic set may have other irreducible components, which we do not care about. The polynomial $P_0$ defines a hypersurface, say $\cal H$. Thus, the set ${\cal Z}_P$ is contained in the intersection between $\cal V$ and $\cal H$. 
This intersection may contain singular points of $\cal V$ with ${\cal D}_k(x_{k+1},\dots,x_n)=0$ for some $k$, which are not relevant for a complexity analysis, as shown previously. Thus, the factorization problem is reduced to searching for non-singular rational points of a parametrizable variety $\cal V$ whose intersection with a hypersurface $\cal H$ contains an arbitrarily large number $N_P$ of rational points. If $N_P$ and ${\bf C}_0$ scale exponentially and polynomially in the space dimension $n$, respectively, then the complexity of factorization is polynomial, provided that the number of parameters $M$ scales sublinearly as a power of $n$. In the limit case of \be\label{subexp_cond} M\sim n/(\log n)^\beta \ee with $\beta>1$, the complexity scales subexponentially as $e^{b (\log p)^{1/\beta}}$. Thus, if $M$ has the scaling property~(\ref{subexp_cond}) with $\beta>3$, then there is an algorithm asymptotically outperforming the general number-field sieve. A subexponential computational complexity is also obtained if the complexity of evaluating a point in $\cal V$ is subexponential. The parametrization of $\cal V$ is a particular case of a rational parametrization of a variety. We call it a \emph{Gaussian parametrization}, since the triangular form of the polynomials $P_1,\dots,P_{n-M}$ resembles Gaussian elimination. Note that this form is invariant under the transformation \be\label{poly_replacement} P_k\rightarrow P_k+\sum_{k'=k+1}^{n-M}\omega_{k,k'} P_{k'}. \ee The form is also invariant under the variable transformation \be\label{invar_trans} x_k\rightarrow x_k+\sum_{k'=k+1}^{n+1} \eta_{k,k'} x_{k'} \ee with $x_{n+1}=1$. It is interesting to note that if $N_P'$ out of the $N_P$ points in ${\cal Z}_P$ are collinear, then it is possible to build another variety with a Gaussian parametrization and a hypersurface over an $(n-1)$-dimensional subspace such that their intersection contains the $N_P'$ points. For later reference, let us state the following. 
\begin{lemma} \label{lemma_dim_red} Let ${\cal Z}_P$ be the set of common zeros of the polynomials~(\ref{poly_affine}) with ${\cal D}_k(x_{k+1},\dots,x_n)\ne 0$ over ${\cal Z}_P$ for $k\in\{1,\dots,n-M\}$. If $N_P'$ points in ${\cal Z}_P$ are solutions of the linear equation $L(x_1,\dots,x_n)=0$, then there is a variety with a Gaussian parametrization and a hypersurface over the $(n-1)$-dimensional subspace defined by $L(x_1,\dots,x_n)=0$ such that their intersection contains the $N_P'$ points. \end{lemma} {\it Proof.} Given the linear function $L(x_1,\dots,x_n)\equiv l_{n+1}+\sum_{k=1}^n l_k x_k$, let us first consider the case with $l_k=0$ for $k\in\{1,\dots,n-M\}$. Using the constraint $L=0$, we can set one of the $M$ variables $x_{n-M+1},\dots,x_n$ as a linear function of the remaining $M-1$ variables. Thus, we get a new set of polynomials retaining the original triangular form. The new parametrizable variety, say ${\cal V}'$, has $M-1$ parameters. The intersection of ${\cal V}'$ with the new hypersurface over the $(n-1)$-dimensional space contains the $N_P'$ points. Let us now consider the case with $l_k=0$ for $k\in\{1,\dots,\bar k\}$, where $\bar k$ is some integer between $0$ and $n-M-1$ such that $l_{\bar k+1}\ne 0$. We can use the constraint $L=0$ to set $x_{\bar k+1}$ as a linear function of $x_{\bar k+2},\dots,x_n$. We discard the polynomial $P_{\bar k+1}$ and eliminate the $(\bar k+1)$-th variable from the remaining polynomials. We get $n-M$ polynomials retaining the original triangular form in the $n-1$ variables $x_1,\dots,x_{\bar k},x_{{\bar k}+2},\dots,x_n$. The intersection between the new parametrizable variety and the new hypersurface contains the $N_P'$ points. $\square$ \newline This simple lemma will turn out to be a useful tool in different parts of the paper. 
In Section~\ref{sec_common_zeros}, we show how to build a set of polynomials $P_k$ with a given number $N_P$ of common rational zeros by using some tools of algebraic geometry described in Appendix~\ref{alg_geom}. In Sec.~\ref{build_up}, we close the circle by imposing the form~(\ref{poly_affine}) on the polynomials $P_k$, with the constraint that ${\cal D}_k(x_{k+1},\dots,x_n)\ne 0$ for $k\in\{1,\dots,n-M\}$ over the set of $N_P$ points. \subsection{Sets of polynomials with a given number of zeros over a number field} \label{sec_common_zeros} In this subsection, we build polynomials with $N_P$ given common rational zeros as elements of an ideal $I$ generated by products of linear functions. This construction is the most general one. The relevant information on the ideal $I$ is summarized by a satisfiability formula in conjunctive normal form without negations and by a linear matroid. The formula and the matroid uniquely determine the number $N_P$ of rational common zeros of the ideal. Incidentally, we also show that the information can be encoded in a more general formula with negations by a suitable choice of the matroid. Every finite set of points in an $n$-dimensional space is an algebraic set, that is, its points are all the zeros of some set of polynomials. More generally, the union of every finite set of linear subspaces is an algebraic set. In the following, we will denote linear polynomials by a symbol with a hat; namely, $\hat a$ stands for $a_{n+1}+\sum_{i=1}^n a_i x_i$. Let us denote by $\vec x$ the $(n+1)$-dimensional vector $(x_1,\dots,x_n,x_{n+1})$, where $x_{n+1}$ is an extra component that is set equal to $1$. A linear polynomial $\hat a$ is written in the form $\vec a\cdot\vec x$. Let $V_1,\dots, V_{L}$ be a set of linear subspaces and $I_1,\dots,I_L$ their associated radical ideals. The codimension of the $k$-th subspace is denoted by $n_k$. 
The minimal set of generators of the $k$-th ideal contains $n_k$ independent linear polynomials, say $\hat a_{k,1},\dots,\hat a_{k,n_k}$, so that \be \vec x\in V_k \Leftrightarrow\vec a_{k,i}\cdot\vec x=0\;\; \forall i\in\{1,\dots,n_k\}. \ee If the codimension $n_k$ is equal to $n$, then $V_k$ contains one point. We are mainly interested in these points, whose number is taken to be $N_P$. The contribution of higher-dimensional subspaces to the asymptotic complexity of the factoring algorithm is irrelevant, up to a dimension reduction (see also Sec.~\ref{sec_infinite_points} and the remark at the end of this section). Since only isolated points are relevant, we could just consider ideals whose zero loci contain only isolated points. However, we allow for the possible presence of subspaces with positive dimension, since they may simplify the set of generators or the form of the polynomials $P_k$ that we eventually want to build. Let $\cal Z$ be the union of the subspaces $V_k$. The product $I_1\cdot I_2\cdots I_L\equiv \tilde I$ is associated with $\cal Z$, that is, ${\cal Z}={\bf V}(\tilde I)$. A set of generators of the ideal $\tilde I$ is \be \prod_{k=1}^{L}\hat a_{k,i_k}\equiv G_{i_1,\dots,i_L}(\vec x), \;\;\; i_r\in\{1,\dots,n_r\}, r\in\{1,\dots,L\}. \ee Thus, we have that \be \vec x\in {\cal Z} \Leftrightarrow G_{i_1,\dots,i_L}(\vec x)=0 \;\;\;\; i_r\in\{1,\dots,n_r\}, r\in\{1,\dots,L\}. \ee Polynomials in the ideal $\tilde I$ are zero on the set $\cal Z$. This construction is not the most general one, as $\tilde I$ is not radical. Thus, there are polynomials that are not in $\tilde I$ whose zero locus contains $\cal Z$. Furthermore, the number of generators and the number of their factors grow polynomially and linearly in $N_P$, respectively. This makes it hard to build polynomials in $\tilde I$ whose complexity is polynomial in $\log N_P$. 
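The generators $G_{i_1,\dots,i_L}$ can be materialized for a toy configuration of two isolated points in the plane (the coordinates below are our illustrative choice):

```python
from itertools import product
from math import prod

def generators(ideals):
    """All products G_{i_1..i_L} = a_{1,i_1} * ... * a_{L,i_L}, picking one
    linear form per ideal.  A linear form is a coefficient tuple
    (a_1, ..., a_n, a_{n+1}), evaluated as a dot product with (x_1, ..., x_n, 1)."""
    return list(product(*ideals))

def ev(form, point):
    x = tuple(point) + (1,)                 # append the extra component x_{n+1} = 1
    return sum(a * xi for a, xi in zip(form, x))

# two isolated points in the plane: V1 = (0, 0) and V2 = (1, 2)
I1 = [(1, 0, 0), (0, 1, 0)]                 # generators x, y of the ideal of V1
I2 = [(1, 0, -1), (0, 1, -2)]               # generators x - 1, y - 2 of the ideal of V2
```

Each of the $n_1 n_2 = 4$ product generators contains one factor from each ideal, so it vanishes on both points, i.e., on the whole algebraic set ${\cal Z}=V_1\cup V_2$.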
The radicalization of the ideal and the assumption of special arrangements of the subspaces in $\cal Z$ can drastically reduce both the degree of the generators and their number. For example, let us assume that $V_1$ and $V_2$ are two isolated points in the $n$-dimensional space and, thus, $n_1=n_2=n$. The overall number of generators $\hat a_{1,1},\dots,\hat a_{1,n}$ and $\hat a_{2,1},\dots,\hat a_{2,n}$ is equal to $2 n$. Thus, there are $n-1$ linear constraints among the generators. Using linear transformations, we can write these constraints as $$ \hat a_{1,i}=\hat a_{2,i}\equiv\hat a_i \;\;\;\forall i\in\{2,\dots,n\}. $$ Every generator $G_{i,i,i_3,\dots,i_L}$ with $i\ne 1$ is equal to $\bar G_{i,i_3,\dots,i_L}=\hat a_i^2 \prod_{k=3}^L \hat a_{k,i_k}$. The polynomial $\bar G'_{i,i_3,\dots,i_L}\equiv \hat a_i \prod_{k=3}^L \hat a_{k,i_k}$ is not an element of the ideal $\tilde I$, but it is an element of its radical. Indeed, it is zero on the algebraic set $\cal Z$. Thus, we extend the ideal by adding these new elements. This extension allows us to eliminate all the generators $G_{i_1,i_2,\dots,i_L}$ with $i_1\ne1$ or $i_2\ne1$, since they are generated by the $\bar G'_{i,i_3,\dots,i_L}$. Thus, the new ideal has the generators \begin{equation} \left. \begin{array}{l} \hat a_{1,1}\hat a_{2,1}\prod_{k=3}^L \hat a_{k,i_k} \\ \hat a_i\prod_{k=3}^L \hat a_{k,i_k},\;\;\; i\in\{2,\dots,n\} \end{array}\right\} i_r\in\{1,\dots,n_r\}, r\in\{3,\dots,L\}. \end{equation} Initially, we had $n^2\prod_{k=3}^{L}n_k$ generators. Now, their number is $n\prod_{k=3}^{L}n_k$. A large fraction of them has its degree reduced by one. We can proceed with the other points and further reduce both the degrees and the number of generators. Evidently, this procedure cannot lead to a drastic simplification of the generators if the points in $\cal Z$ are in general position, since the generators must contain the information about these positions. 
A simplification is possible if the points have special arrangements leading to the contraction of a large number of factors in the generators. Namely, coplanarity of points is the key feature that can lead to a drastic simplification of the generators. In an $n$-dimensional space, there are at most $n$ coplanar points in general position. Let us consider algebraic sets containing larger groups of coplanar points. For example, let us assume that the first $m$ sets $V_1,\dots,V_m$ are distinct coplanar points, with $m\gg n$. Then, there is a vector $\vec a_1$ such that $\vec a_1\cdot\vec x=0$ for every $\vec x$ in the union of the first $m$ linear spaces. It is convenient to choose the linear polynomial $\vec a_1\cdot\vec x$ as a common generator of the first $m$ ideals $I_1,\dots,I_m$. Let us set $\hat a_{k,1}=\hat a_1$ for $k\in\{1,\dots,m\}$. Every generator $G_{i_1,\dots,i_L}$ with $i_k=1$ for some $k\in\{1,\dots,m\}$ is contracted to a generator of the form $\hat a_1\prod_{k=m+1}^L \hat a_{k,i_k}$. If there are other groups of coplanar points, we can perform other contractions. \begin{definition} Given an integer ${\bar n}>n$, we define $\Gamma_s$ with $s\in\{1,\dots,{\bar n}\}$ as a set of $s$-tuples $(i_1,\dots,i_s)\in\{1,\dots,{\bar n}\}^s$ with $i_k < i_{k'}$ for $k < k'$. That is, \begin{equation} \forall s\in\{1,\dots,{\bar n}\},\;\;\; \Gamma_s\subseteq \{(i_1,\dots,i_s)\in\{1,\dots,{\bar n}\}^s| i_k < i_{k'},\forall k, k' \text{ s.t. } k<k' \}. 
\end{equation} \end{definition} The final result of the inclusion of elements of the radical ideal is another ideal, say $I$, with generators of the form \be\label{generators} \begin{array}{l} \hat a_{i_1} \;\;\;\forall i_1\in\Gamma_1 \\ \hat a_{i_1}\hat a_{i_2} \;\;\;\forall (i_1,i_2)\in\Gamma_2 \\ \dots \\ \hat a_{i_1}\hat a_{i_2}\dots\hat a_{i_{{\bar n}}} \;\;\;\forall (i_1,i_2,\dots,i_{{\bar n}})\in\Gamma_{{\bar n}}, \end{array} \ee where $\{\hat a_1,\dots,\hat a_{{\bar n}}\}\equiv\Phi$ is a set of ${\bar n}$ linear polynomials. Polynomials in this form generate the most general ideals whose zero loci contain a given finite set of points. This is formalized by the following. \begin{lemma} Every radical ideal associated with a finite set ${\cal Z}_P$ of points is generated by a set of polynomials of form~(\ref{generators}) for some $\bar n$. \end{lemma} {\it Proof}. This can be shown with a naive construction. Given a set $\bar S$ of $N_P$ points associated with the ideals $I_1,\dots,I_{N_P}$, the product $I_1\cdots I_{N_P}$ is an ideal associated with the set $\bar S$, which can be radicalized by adding a certain number of univariate square-free polynomials as generators~\cite{seidenberg}. The resulting ideal is generated by a set of polynomials of form~(\ref{generators}). $\square$ \newline With the construction used in the proof, $\bar n$ ends up being equal to the number of points in ${\cal Z}_P$, which is not optimal for our purposes. We are interested in keeping $\bar n$ sufficiently small, possibly scaling polynomially in the dimension $n$. This is possible only if the points in the zero locus have a `high degree' of collinearity. Thus, a bound on $\bar n$ sets a restriction on ${\cal Z}$. The minimal information on $\Phi$ that is relevant for determining the number $N_P$ of points in $\cal Z$ is encoded in a linear \emph{matroid}, of which $\Phi$ is one linear representation. Thus, the sets $\Gamma_s$ and the matroid determine $N_P$.
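To see how collinearity keeps $\bar n$ small, here is a minimal sketch (with points chosen by hand for illustration): three collinear points in the plane are cut out by $\bar n=4$ linear polynomials, with $\Gamma_1$ holding the common linear form and $\Gamma_3$ a single triple.

```python
from itertools import product

# Hypothetical example in the plane: the three collinear points
# (0, 0), (1, 1), (2, 2). Take Phi = {x - y, x, x - 1, x - 2} with
#   Gamma_1 = {(1)}     -> the linear generator   x - y
#   Gamma_3 = {(2,3,4)} -> the cubic generator    x (x - 1)(x - 2),
# so n_bar = 4 suffices, exploiting the collinearity of the points.
gen1 = lambda x, y: x - y
gen3 = lambda x, y: x * (x - 1) * (x - 2)

grid = list(product(range(-4, 5), repeat=2))
zeros = {p for p in grid if gen1(*p) == 0 and gen3(*p) == 0}
assert zeros == {(0, 0), (1, 1), (2, 2)}
print("zero locus:", sorted(zeros))
```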
Note that the last set $\Gamma_{{\bar n}}$ has at most one element. The linear generators can be eliminated by reducing the dimension of the affine space, see Lemma~\ref{lemma_dim_red}. Thus, we can set $\Gamma_1=\emptyset$. Every subset $\Phi_{sub}$ of $\Phi$ is associated with a linear space $V_{sub}$ whose points are the common zeros of the linear functions in $\Phi_{sub}$. That is, $V_{sub}={\bf V}(I_{sub})$, where $I_{sub}$ is the ideal generated by $\Phi_{sub}$. Let us briefly denote the linear space ${\bf V}(I_{sub})$ by ${\bf V}(\Phi_{sub})$. This mapping from subsets of $\Phi$ to subspaces is not generally injective. If $\hat a'\in\Phi\setminus \Phi_{sub}$ is a linear combination of the functions in $\Phi_{sub}$, then $\Phi_{sub}$ and $\Phi_{sub}\cup\{\hat a'\}$ represent the same linear space. An injective mapping is obtained by considering only the maximal subsets associated with a linear subspace. These maximal subsets are called \emph{flats} in matroid theory. \begin{definition} \emph{Flats} of the linear matroid $\Phi$ are defined as subsets $\Phi_{sub}\subseteq \Phi$ such that no function in $\Phi\setminus \Phi_{sub}$ is linearly dependent on the functions in $\Phi_{sub}$. \end{definition} Let us also define the closure of a subset of $\Phi$. \begin{definition} Given a subset $\Phi_{sub}$ of $\Phi$ associated with a subspace $V$, its closure $\text{cl}(\Phi_{sub})$ is the flat associated with $V$. \end{definition} The number of independent functions in a flat is called the \emph{rank} of the flat. The whole set $\Phi$ is a flat of rank $n+1$, which is associated with the empty space. Flats of rank $n$ define points of the $n$-dimensional affine space (with $x_{n+1}=1$). More generally, flats of rank $k$ define linear spaces of dimension $n-k$. The dimension of a flat $\Phi_{flat}$ is defined as the dimension of the space ${\bf V}(\Phi_{flat})$.
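For small representations, flats and closures can be computed by brute force. The following sketch (with a made-up representation $\Phi$ of five affine functions on the plane) computes ranks by Gaussian elimination over the rationals and recovers a rank-2 flat with three elements, corresponding to a point where three of the functions vanish.

```python
from fractions import Fraction
from itertools import combinations

def rank(vectors):
    """Rank over the rationals via Gaussian elimination."""
    rows = [[Fraction(v) for v in vec] for vec in vectors]
    r = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def closure(phi, subset):
    """Indices of all functions of phi lying in the span of the subset."""
    base = [phi[i] for i in subset]
    rb = rank(base)
    return frozenset(i for i in range(len(phi)) if rank(base + [phi[i]]) == rb)

# Affine functions on the plane as coefficient vectors (c_x, c_y, c_1):
# x, y, x - 1, y - 1, x + y - 1  (a hypothetical representation).
phi = [(1, 0, 0), (0, 1, 0), (1, 0, -1), (0, 1, -1), (1, 1, -1)]

# The flat of the point (1, 0): y, x - 1 and x + y - 1 all vanish there,
# and indeed x + y - 1 lies in the span of the other two.
assert closure(phi, (1, 2)) == frozenset({1, 2, 4})

# Taking closures of all subsets enumerates the flats of the matroid.
flats = {closure(phi, s) for k in range(len(phi) + 1)
         for s in combinations(range(len(phi)), k)}
print(len(flats), "flats")
```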
The structure of the generators~(\ref{generators}) resembles a Boolean satisfiability problem (SAT) in conjunctive normal form without negations. Let us interpret $\hat a_i$ as a logical variable $a_i$, which is $\true$ if the function is equal to zero and $\false$ otherwise. Every subset $\Phi_{sub}\subseteq\Phi$ is identified with a string $(a_1,\dots,a_{{\bar n}})$ such that $a_i=\true$ if and only if $\hat a_i\in\Phi_{sub}$. The SAT formula associated with the generators~(\ref{generators}) is \be\label{SAT} \bigwedge\limits_{k=2}^{{\bar n}}\; \bigwedge\limits_{(i_1,\dots,i_k)\in \Gamma_k } \left( a_{i_1}\lor\dots\lor a_{i_k} \right). \ee Given a flat $\Phi_{flat}$, the linear space ${\bf V}(\Phi_{flat})$ is a subset of $\cal Z$ if and only if $\Phi_{flat}$ is a solution of the SAT formula. If a set $\Phi_{sub}\subseteq\Phi$ is a solution of the SAT formula, then the flat $\text{cl}(\Phi_{sub})$ is also a solution of the formula. Thus, satisfiability implies that there are flats among the solutions of the formula. This does not mean that satisfiability implies that $\cal Z$ is non-empty. Indeed, if the dimension of ${\bf V}(\Phi_{sub})$ is negative for every solution $\Phi_{sub}$, then the set $\Phi$ is the only flat solution of the formula. We are interested in the isolated points of $\cal Z$. A point $p\in{\cal Z}$ is \emph{isolated} if there is a SAT solution $\Phi_{flat}$ with zero dimension such that $p\in{\bf V}(\Phi_{flat})$ and no flat $\Phi_{flat}'\subset\Phi_{flat}$ is a solution of the Boolean formula. We denote by ${\cal Z}_P$ the subset of $\cal Z$ containing the isolated points. Since the number $N_P$ of isolated points is completely determined by the SAT formula and the linear matroid, the information on these two objects is the most relevant. Once they are given, the linear functions $\hat a_i$ still have some free coefficients. {\bf Remark}.
In general, we do not rule out sets $\cal Z$ containing subspaces with positive dimension; however, these subspaces are irrelevant for the complexity analysis of the factoring algorithm. For example, if subspaces of dimension $d_s<M$ give a dominant contribution to factorization, then we can generally eliminate $d_s$ out of the $M$ parameters by setting them equal to constants, so that the subspaces are reduced to points. Furthermore, subspaces with dimension greater than $M-1$ are not in the parametrizable variety $\cal V$, whose dimension is $M$. Nor can the overall contribution of all the subspaces with positive dimension provide a significant change in the asymptotic complexity, up to parameter deletions. Thus, only isolated points of $\cal Z$ are counted, without loss of generality. \subsection{Boolean satisfiability and algebraic-set membership} As we said previously, the Boolean formula does not encode all the information about the number of isolated points in ${\cal Z}$, which also depends on the independence relations among the vectors $\vec a_i$, specified by the matroid. A better link between the SAT problem and membership in ${\cal Z}$ can be obtained if we consider sets $\Phi$ with cardinality equal to $2 n$ and interpret half of the functions in $\Phi$ as negations of the others. Let us denote by $\hat a_{0,1},\dots,\hat a_{0,n}$ and $\hat a_{1,1},\dots,\hat a_{1,n}$ the $2n$ linear functions of $\Phi$. For general functions, we have the following. \begin{property} \label{indep_functions} The set of vectors $\{\vec a_{s_1,1},\dots,\vec a_{s_n,n},\vec a_{1-s_k,k}\}$ is independent for every string $\vec s=(s_1,\dots,s_n)\in\{0,1\}^n$ and every $k\in\{1,\dots,n\}$. \end{property} This property generally holds if the functions are picked at random. Let us assume that $\Phi$ satisfies Property~\ref{indep_functions}.
This implies that the functions $\{\hat a_{s_1,1},\dots,\hat a_{s_n,n}\}$ are linearly independent and vanish simultaneously at exactly one point $\vec x_{\vec s}$. Furthermore, Property~\ref{indep_functions} also implies that different strings $\vec s$ are associated with different points $\vec x_{\vec s}$. \begin{lemma} \label{lemma_indep} Let $\{\vec a_{0,1},\dots,\vec a_{0,n},\vec a_{1,1},\dots,\vec a_{1,n}\}$ be a set of $2n$ vectors satisfying Property~\ref{indep_functions}. Let $\vec x_{\vec s}$ be the solution of the equations $\hat a_{s_1,1}=\dots=\hat a_{s_n,n}=0$. If $\vec s\ne\vec r$, then $\vec x_{\vec s}\ne\vec x_{\vec r}$. \end{lemma} {\it Proof}. Let us assume that $\vec x_{\vec s}=\vec x_{\vec r}$ with $\vec s\ne\vec r$. There is an integer $k\in\{1,\dots,n\}$ such that $s_k\ne r_k$. Thus, the vectors $\{\vec a_{s_1,1},\dots,\vec a_{s_n,n},\vec a_{1-s_k,k}\}$ are all orthogonal to $\vec x_{\vec s}$. Since the dimension of the vector space is $n+1$, these $n+1$ vectors are linearly dependent, in contradiction with the hypothesis. $\square$ Now, let us define the set ${\cal Z}$ as the zero locus of the ideal generators \be\label{generators2} \begin{array}{l} \hat a_{0,i} \hat a_{1,i} \;\;\;\forall i\in\{1,\dots,n\} \\ \hat a_{s_1,i_1}\hat a_{s_2,i_2} \;\;\;\forall (s_1,i_1;s_2,i_2)\in\Gamma_2 \\ \dots \\ \hat a_{s_1,i_1}\hat a_{s_2,i_2}\dots\hat a_{s_n,i_n} \;\;\;\forall (s_1,i_1;s_2,i_2,\dots,s_n,i_n)\in\Gamma_n. \end{array} \ee The first $n$ generators provide an interpretation of $\hat a_{1,i}$ as the negation of $\hat a_{0,i}$, as a consequence of Property~\ref{indep_functions}. The $i$-th generator implies that $(a_{0,i},a_{1,i})$ is equal to $(\true,\false)$, $(\false,\true)$ or $(\true,\true)$. However, the last case is forbidden by Property~\ref{indep_functions}. Assume that $(a_{0,i},a_{1,i})$ is equal to $(\true,\true)$ for some $i$.
Then, there would be $n+1$ functions $\hat a_{s_1,1},\dots,\hat a_{s_n,n},\hat a_{1-s_i,i}$ equal to zero, which is impossible since they are independent. Thus, the algebraic set defined by the first $n$ generators contains $2^n$ distinct points, as implied by Lemma~\ref{lemma_indep}, which are associated with all the possible states taken by the logical variables. The remaining generators set further constraints on these variables and define a Boolean formula in conjunctive normal form. With this construction there is a one-to-one correspondence between the points of the algebraic set ${\cal Z}$ and the solutions of the Boolean formula. There is a generalization of the generators~(\ref{generators2}) that allows us to weaken Property~\ref{indep_functions} while retaining the one-to-one correspondence. Let $R_1,\dots,R_m$ be $m$ disjoint non-empty sets such that $\cup_{k=1}^m R_k=\{1,\dots,n\}$. \begin{property} \label{indep_functions2} The set of vectors $\cup_{k=1}^m\{\vec a_{s_k,i}|i\in R_k\}\equiv A_{\vec s}$ is independent for every $\vec s=(s_1,\dots,s_m)\in\{0,1\}^m$. Furthermore, every vector $\vec a_{s,i}\notin A_{\vec s}$ is not in $\text{span}(A_{\vec s})$, with $s\in\{0,1\}$ and $i\in\{1,\dots,n\}$. \end{property} \begin{lemma} \label{lemma_indep2} Let $\{\vec a_{0,1},\dots,\vec a_{0,n},\vec a_{1,1},\dots,\vec a_{1,n}\}$ be a set of $2n$ vectors satisfying Property~\ref{indep_functions2}. Let $\vec x_{\vec s}$ be the solution of the equations $$ \hat a_{s_k,i}=0 \;\;\; i\in R_k, k\in\{1,\dots,m\} $$ for every $\vec s\in\{0,1\}^m$. If $\vec s\ne\vec r$, then $\vec x_{\vec s}\ne\vec x_{\vec r}$. \end{lemma} The generators~(\ref{generators2}) are generalized by replacing the first line with \be\label{block_gen} \hat a_{0,i} \hat a_{1,j}, \;\;\; (i,j)\in \cup_{k=1}^m (R_k \times R_k). 
\ee Provided that Property~\ref{indep_functions2} holds, there is a one-to-one correspondence between the points in the algebraic set and the solutions of a SAT formula built according to the following interpretation. Each set of functions $\{\hat a_{0,i}|i\in R_k\}\equiv a_k$ is interpreted as a Boolean variable, which is $\true$ if all the functions in it are equal to zero. The set $\{\hat a_{1,i}|i\in R_k\}$ is interpreted as the negation of $\{\hat a_{0,i}|i\in R_k\}$. The SAT formula is built in the obvious way from the set of generators. For example, the generator $\hat a_{0,i}\hat a_{0,j}$ with $i\in R_{1}$ and $j\in R_{2}$ induces the clause $a_{1}\lor a_{2}$. Different generators can induce the same clause. Since the total number of solutions depends only on the SAT formula, it is convenient to take the maximal set of generators compatible with a given formula. That is, if $a_{1}\lor a_{2}$ is a clause, then $\hat a_{0,i}\hat a_{0,j}$ is taken as a generator for every $i\in R_1$ and $j\in R_2$. \subsection{$3$SAT-like generators} \label{sec_3SAT} SAT problems have clauses with an arbitrarily large number of literals. Special cases are $2$SAT and $3$SAT, in which clauses have at most $2$ or $3$ literals, respectively. It is known that every SAT problem can be converted to a $3$SAT one by increasing the number of variables and replacing a clause with a certain number of smaller clauses containing the new variables. For example, the clause $a\lor b\lor c\lor d$ can be replaced by $a\lor b\lor x$ and $c\lor d\lor (\lnot x)$. An assignment satisfies the first clause if and only if the other two clauses are satisfied for some $x$. An identical reduction can also be performed on the generators~(\ref{generators}). For example, a generator in $I$ of the form $\hat a_1\hat a_2\hat a_3\hat a_4\equiv G_0$ can be replaced by $\hat a_1\hat a_2 y\equiv G_1$, $\hat a_3\hat a_4(1-y)\equiv G_2$ and $y(1-y)\equiv G_3$, where $y$ is an additional variable.
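A quick numerical sanity check of this splitting (a sketch in which random integers stand in for the values of the linear functions $\hat a_1,\dots,\hat a_4$):

```python
import random

random.seed(0)
for _ in range(1000):
    # Random integer values standing in for the linear functions a_1 ... a_4.
    a1, a2, a3, a4 = (random.randint(-3, 3) for _ in range(4))
    y = random.randint(-5, 5)
    G0 = a1 * a2 * a3 * a4
    G1 = a1 * a2 * y               # split generator
    G2 = a3 * a4 * (1 - y)         # split generator
    # Polynomial identity behind the splitting, valid for every y:
    assert G0 == a3 * a4 * G1 + a1 * a2 * G2
    # Zero-set equivalence: G0 = 0 iff G1 = G2 = 0 for some y in {0, 1},
    # the two values allowed by G3 = y (1 - y).
    split_zero = any(a1 * a2 * y == 0 and a3 * a4 * (1 - y) == 0
                     for y in (0, 1))
    assert (G0 == 0) == split_zero
print("splitting verified")
```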
Also in this case, $G_0$ is equal to zero if and only if $G_1$ and $G_2$ are zero for some $y\in\{0,1\}$, the two values allowed by $G_3$. Furthermore, the new extended ideal contains the old one. Indeed, we have that $G_0=\hat a_3\hat a_4 G_1+\hat a_1\hat a_2 G_2$. Note that all the polynomials in the ideal $I$ are independent of the additional variables used in the reduction. Thus, if we build the polynomials~(\ref{poly_affine}) by using $3$SAT-like generators, then all these polynomials may be independent of some variables. Hence, we can consider generators in a $3$SAT form, \be\label{generators_3SAT} \begin{array}{l} \hat a_{i_1}\hat a_{i_2} \;\;\;\forall (i_1,i_2)\in\Gamma_2 \\ \hat a_{i_1}\hat a_{i_2}\hat a_{i_{3}} \;\;\;\forall (i_1,i_2,i_3)\in\Gamma_{3}. \end{array} \ee There is no loss of generality, provided that all the polynomials $P_k$ are allowed to be independent of $n_I$ variables. The number of isolated points satisfies the inequality \be \label{bound_N0_n} N_P\le 3^n. \ee The actual number can be considerably smaller, depending on the matroid and the number of clauses defining the Boolean formula. The bound is attained if $n_c=3 n$, the generators have the form $\hat a_i \hat b_i \hat c_i$ with $i\in\{1,\dots,n\}$, and the independent sets of the matroid contain $n+1$ elements. If there are only clauses with $2$ literals, then the bound is \be N_P\le 2^n, \ee which is attained if the generators have the form $\hat a_i \hat b_i$ with $i\in\{1,\cdots,n\}$. A consequence of these constraints is that the number $M$ of parameters must scale sublinearly in $n$, \be \label{bound_M_n} M\le K n^\beta, \;\;\;\; 0\le \beta<1 \ee for some $K>0$. \section{Building up the parametrizable variety and the hyperplane} \label{build_up} In this section, we put together the tools introduced previously to tackle our problem of building the rational function ${\cal R}$ with the desired properties of being computationally simple and having a sufficiently large set of zeros.
This problem has been reduced to the search for computationally simple polynomials $P_k$ of the form~(\ref{poly_affine}) with a number of common rational zeros growing sufficiently fast with the space dimension. To build these polynomials, we first choose a set of generators of the form~(\ref{generators}) such that the associated algebraic set $\cal Z$ contains a set of $N_P$ isolated points. Then, we write the polynomials $P_k$ as elements of the ideal associated with $\cal Z$. Finally, we impose that the polynomials $P_k$ have the form of Eqs.~(\ref{poly_affine}). \begin{procedure} \label{procedure} Building up a parametrizable variety $\cal V$ with $M$ parameters and $N_P$ intersection points. \begin{enumerate} \item Take a set of ${\bar n}$ unknown non-homogeneous linear functions in $n$ variables with ${\bar n}>n$, say $\hat a_1,\dots,\hat a_{{\bar n}}$. Additionally, specify which sets of vectors are linearly independent. In other words, a linear matroid with ${\bar n}$ elements is defined. \item Choose an ideal $I$ with generators of the form~(\ref{generators_3SAT}) such that the associated algebraic set $\cal Z$ contains a subset ${\cal Z}_P$ of $N_P$ isolated points over some given number field. \item Set the polynomials $P_s$ equal to elements of the ideal $I$ with $s\in\{0,\dots,n-M\}$. That is, \begin{equation} \label{poly_in_ideal} P_s(\vec x)=\sum_{(i,j)\in \Gamma_2} C_{s,i,j}(\vec x) \hat a_i \hat a_j+ \sum_{(i,j,k)\in \Gamma_3} D_{s,i,j,k}(\vec x) \hat a_i \hat a_j\hat a_k. \end{equation} The polynomials $P_s$ with $s\in\{1,\dots,n-M\}$ define an algebraic set $\cal A$. The polynomial $P_0$ defines a hyperplane $\cal H$. The number of parameters $M$ and the polynomial coefficients $C_{s,i,j}(\vec x)$ and $D_{s,i,j,k}(\vec x)$ are also unknown. \item Search for values of the coefficients such that there is a parametrizable branch $\cal V$ in $\cal A$ with a number of parameters as small as possible.
All the polynomials $P_s$ with $s\in\{0,\dots,n-M\}$ may be independent of some subset of variables (see Sec.~\ref{sec_3SAT}). The polynomials ${\cal D}_k$, as defined in Eq.~(\ref{poly_affine}), must be different from zero in the set ${\cal Z}_P$. \end{enumerate} \end{procedure} More explicitly, the last step leads us to the following. \begin{problem} \label{problem1} Given the sets $\Gamma_2$ and $\Gamma_3$, and polynomials of the form~(\ref{poly_in_ideal}), find linear functions $\hat a_1,\dots,\hat a_{{\bar n}}$ and coefficients $C_{s,i,j}(\vec x)$, $D_{s,i,j,k}(\vec x)$ such that \be \label{gauss_constrs} \begin{array}{l} \frac{\partial P_s}{\partial x_k}=0, \;\;\;\; 1\le k < s\le n-M, \vspace{2mm} \\ \frac{\partial^2 P_s}{\partial x_s^2}=0, \;\;\;\; 1\le s\le n-M, \vspace{2mm} \\ \vec x\in{\cal Z}_P \Rightarrow \frac{\partial P_s}{\partial x_s}\equiv {\cal D}_s(x_{s+1},\dots,x_n)\ne 0, \end{array} \ee under the constraint that $(\hat a_1,\dots,\hat a_{{\bar n}})$ is the representation of a given matroid. \end{problem} {\bf Remark}. If the algebraic set associated with the ideal $I$ is zero-dimensional, this problem always has a solution for any $M$, since a rational univariate representation always exists (see the introduction). Essentially, the task is to find ideals such that there is a solution with the coefficients $C_{s,i,j}(\vec x)$ and $D_{s,i,j,k}(\vec x)$ as simple as possible, so that their computation is efficient, given $\vec x$. Let us recall that the constraints~(\ref{gauss_constrs}) are invariant under the transformations~(\ref{poly_replacement},\ref{invar_trans}). All the polynomials may be independent of a subset of $n_I$ variables, say $\{x_{n-n_I+1},\dots,x_{n}\}$, \begin{equation} \frac{\partial P_s}{\partial x_k}=0, \;\;\;\; \left\{ \begin{array}{l} s\in\{0,\dots,n-M\} \\ k\in\{n-n_I+1,\dots,n\} \end{array}\right.
\end{equation} These $n_I$ variables can be set equal to constants, so that the actual number of significant parameters is $M-n_I$. The input of Problem~\ref{problem1} is given by a 3SAT formula of the form~(\ref{generators_3SAT}) and a linear matroid. \begin{definition} A 3SAT formula of the form~(\ref{generators_3SAT}) together with a linear matroid with $\bar n$ elements is called a \emph{model}. \end{definition} \noindent In the literature, the term `model' is occasionally used with a different meaning and refers to a solution of a SAT formula. Problem~\ref{problem1} in its general form is quite intricate. First, it requires the definition of a linear matroid and a SAT formula with an exponentially large number of solutions associated with isolated points. Whereas it is easy to find examples of matroids and Boolean formulas with this feature, it is not generally simple to characterize models with an exponentially large number of isolated points. Second, Eqs.~(\ref{gauss_constrs}) lead to a large number of polynomial equations in the unknown coefficients. Lemma~\ref{lemma_dim_red} can help to reduce the search space by dimension reduction. This will be shown in Sec.~\ref{sec_reduc_2SAT} with a simple example. A good strategy is to start with simple models and low-degree coefficients in Eq.~(\ref{poly_in_ideal}). In particular, we can take the coefficients constant, as done later in Sec.~\ref{sec_quadr_poly}. This restriction does not guarantee that Problem~\ref{problem1} has a solution for a sufficiently small number of parameters $M$, but it can give us some hints on how to proceed. \subsection{Required number of rational points vs space dimension} Let us assume that the computational complexity ${\bf C}_0$ of $\cal R$ is polynomial in the space dimension $n$, that is, \begin{equation} \label{ident_dim} {\bf C}_0\sim n^{\alpha_0}. \end{equation} The factoring algorithm has polynomial complexity if \be \left.
\begin{array}{l} K_1 n^\alpha\le \log N_P\le K_2 n \;\;\; 0<\alpha\le 1 \\ M\le (\log N_P)^\beta \;\;\; \beta<1 \end{array} \right\} \;\;\; (\text{polynomial complexity}) \ee for sufficiently large $n$, where $K_1$ is some positive constant and $K_2=\log 3$. The upper bound is given by Ineq.~(\ref{bound_N0_n}). The algorithm has subexponential complexity ${\bf C}\sim e^{b (\log N_P)^{\alpha}}$ with $0<\alpha<1$ if \be \left. \begin{array}{r} \log N_P\sim (\log n)^{1/\alpha}\;\;\; 0<\alpha<1, \\ M\sim (\log n)^\frac{\beta}{\alpha}\;\;\; 0\le\beta<1-\alpha. \end{array} \right\} \;\;\;\; (\text{subexponential complexity}) \ee The upper bound on $\beta$ comes from Lemma~\ref{litmus}. Thus, the number of rational points is required to scale much less than exponentially to obtain polynomial or subexponential factoring complexity. Note that a slower increase of $N_P$ induces stricter bounds on $M$ in terms of $n$. \subsection{Reduction of models} \label{sec_reduc_2SAT} In this subsection, we describe an example of model reduction. The model reduction is based on Lemma~\ref{lemma_dim_red} and can be useful for simplifying Problem~\ref{problem1}. The task is to reduce a class of models associated with an efficient factoring algorithm to a class of simpler models leading to another efficient algorithm, so that it is sufficient to search for solutions of Problem~\ref{problem1} over the latter, smaller class. In our example, the matroid contains $2n$ elements and is represented by the functions $\hat a_{0,1},\dots,\hat a_{0,n},\hat a_{1,1},\dots,\hat a_{1,n}$ satisfying Property~\ref{indep_functions}. \newline {\bf Model A.} \newline Matroid with representation $(\hat a_{0,1},\dots,\hat a_{0,n},\hat a_{1,1},\dots,\hat a_{1,n})$ satisfying Property~\ref{indep_functions}. \newline Generators: \be\label{gen_Scheme_A} \begin{array}{l} \hat a_{0,i} \hat a_{1,i} \;\;\; i\in \{1,\dots,n\} \\ \hat a_{0,i} \hat a_{0,j} \;\;\; (i,j)\in \Gamma.
\end{array} \ee \begin{definition} A diagonal model is defined as Model A with $\Gamma=\emptyset$. \end{definition} Clearly, a diagonal model defines an algebraic set with $2^n$ isolated points. Each point satisfies the linear equations \be \hat a_{s_i,i}=0, \;\;\; i\in \{1,\dots,n\} \ee for some $(s_1,\dots,s_n)\in\{0,1\}^n$. If there is an algorithm with polynomial complexity associated with Model~A, then it is possible to prove that there is another algorithm with polynomial complexity associated with a diagonal model. More generally, this formula reduction leads to a subexponential factoring algorithm, provided that the parent algorithm outperforms the quadratic sieve algorithm. If the parent algorithm outperforms the general number field sieve, then the reduced algorithm outperforms the quadratic sieve. Thus, if we are interested in finding a competitive algorithm from Model~A, we only need to search the space of reduced formulas. If there is no algorithm outperforming the quadratic sieve with $\Gamma=\emptyset$, then there is no algorithm outperforming the general number field sieve for $\Gamma\ne \emptyset$. \begin{theorem} If there is a factoring algorithm with subexponential asymptotic complexity $e^{a (\log p)^\gamma}$ and associated with Model~A, then there is another algorithm associated with the diagonal model with computational complexity upper-bounded by the function $e^{\bar a (\log p)^\frac{\gamma}{1-\gamma}}$ for some $\bar a>0$. In particular, if the first algorithm has polynomial complexity, then the latter also has polynomial complexity. \end{theorem} {\it Proof}. Let us assume that the asymptotic computational complexity of the parent algorithm is $e^{a (\log p)^\gamma}$.
For every $N_P$, there is a Model~A with $N_P$ isolated points, generating a rational function ${\cal R}$ with complexity ${\bf C}_0(\xi)$ scaling as $e^{a (\log N_P)^\alpha}$ and a number of parameters $M$ scaling as $(\log N_P)^\beta$, where $\gamma=\alpha/(1-\beta)$ and $0\le\beta<1$ (see Sec.~\ref{sec_complex}). We denote by $\cal Z$ the set of isolated points. Since the complexity ${\bf C}_0$ is lower-bounded by a linear function of the dimension $n$, we have \be\label{ineq_n_N0} \log n\le a (\log N_P)^\alpha+O(1). \ee Let $\hat a_{0,1},\dots,\hat a_{0,n},\hat a_{1,1},\dots,\hat a_{1,n}$ be the set of linear functions representing the matroid and satisfying Property~\ref{indep_functions}. The ideal generators are given by Eq.~(\ref{gen_Scheme_A}). Let $m$ be the maximum number of functions in $\{\hat a_{0,1},\dots,\hat a_{0,n}\}$ which are simultaneously different from zero for $\vec x\in \cal Z$. Thus, we have that \be\label{bound_N0} N_P\le \sum_{j=0}^m\frac{n!}{(n-j)! j!}\le \left(1+n^{-1}\right) {n}^{m} . \ee There is a point $\vec x_2$ in $\cal Z$ such that $\hat a_{0,1},\dots,\hat a_{0,m}$ are different from zero and $\hat a_{0,m+1}=\hat a_{0,m+2}=\dots=\hat a_{0,n}=0$, up to permutations of the indices. Let us set these last $n-m$ functions equal to zero by dimension reduction. The new set of generators is associated with another factoring algorithm (Lemma~\ref{lemma_dim_red}) and contains generators of the form \be \begin{array}{l} \hat a_{0,i} \hat a_{1,i} \;\;\; i\in \{1,\dots,m\} \\ \hat a_{0,i} \hat a_{0,j} \;\;\; (i,j)\in \bar\Gamma\subseteq \{1,\dots,m\} \times \{1,\dots,m\}. \end{array} \ee Since there is a point $\vec x_2$ such that $\hat a_{0,i}\ne 0$ for $i\in\{1,\dots,m\}$, the set $\bar\Gamma$ turns out to be empty, so that the reduced model is diagonal. The number of common zeros of the generators, say $N_1$, is equal to $2^m$. Using Ineq.~(\ref{bound_N0}), we have that \be (\log_2 N_1)(\log n)+\log\left(1+n^{-1}\right) \ge \log N_P.
\ee Ineq.~(\ref{ineq_n_N0}) and this last inequality imply that \be \log N_P \le K (\log N_1)^{\frac{1}{1-\alpha}} \ee for some constant $K$. Since the computational complexity, say $\bar {\bf C}_0$, of the rational function $\cal R$ associated with the reduced model is not greater than ${\bf C}_0$, which scales as $e^{a (\log N_P)^\alpha}$, we have that \be \bar {\bf C}_0\le e^{\bar a (\log N_1)^\frac{\alpha}{1-\alpha}}, \ee for some constant $\bar a$. Similarly, since the number of parameters, say $\bar M$, of the reduced rational function is not greater than $M$, we have that \be \bar M\le \bar K (\log N_1)^\frac{\beta}{1-\alpha} \ee for some constant $\bar K$. Thus, the resulting factoring algorithm has a computational complexity upper-bounded by $$ e^{\bar a(\log p)^\frac{\alpha}{1-\alpha-\beta} }= e^{\bar a(\log p)^\frac{\gamma}{1-\gamma} } $$ up to a constant factor. The last statement of the theorem is proved in a similar fashion. $\square$ The diagonal model with generators \be\label{simple_gene} G_i=\hat a_{0,i} \hat a_{1,i}, \;\;\;\; \forall i\in\{1,\dots,n\} \ee provides the simplest example of polynomials with an exponentially large number of common zeros. The algebraic set ${\cal Z}={\cal Z}_P$ contains $2^n$ points, which are distinct because of Property~\ref{indep_functions}. This guarantees that the generated ideal is radical. Thus, Hilbert's Nullstellensatz implies that every polynomial which is zero in ${\cal Z}$ can be written as $\sum_i F_i(\vec x) G_i(\vec x)$, where $F_1,\dots,F_n$ are polynomials (let us recall that $x_{n+1}=1$). We impose that the polynomials $P_0,\dots,P_{n-M}$ are in the ideal generated by $G_1,\dots,G_n$, that is, \be P_k(\vec x)=\sum_i C_{k,i}(\vec x) \hat a_{0,i} \hat a_{1,i} \;\;\; \forall k\in\{0,\dots,n-M\}. \ee As there is no particular requirement on $P_0$, we can simply set the $C_{0,i}(\vec x)$ equal to constants. In particular, we can take $P_0=\hat a_{0,1}\hat a_{1,1}$.
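The key feature of this construction, that any choice of the coefficients $C_{k,i}$ yields polynomials vanishing on all $2^n$ common zeros, can be checked directly. Below is a minimal sketch using the simplest representation $\hat a_{0,i}=x_i$ and $\hat a_{1,i}=x_i-1$ (our choice for illustration; any representation satisfying Property~\ref{indep_functions} works).

```python
from itertools import product
import random

random.seed(1)
n = 4
# Hypothetical representation: a_{0,i} = x_i and a_{1,i} = x_i - 1,
# so that G_i = x_i (x_i - 1) and the common zeros are the vertices of
# the unit hypercube, one for each string s in {0,1}^n.
points = list(product((0, 1), repeat=n))
assert len(points) == 2 ** n

for _ in range(5):  # a few random constant coefficient rows c_{k,i}
    c = [random.randint(-9, 9) for _ in range(n)]
    P = lambda x: sum(ci * xi * (xi - 1) for ci, xi in zip(c, x))
    # Every element of the ideal generated by the G_i vanishes on all points.
    assert all(P(x) == 0 for x in points)
print("all", 2 ** n, "points are common zeros")
```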
In this case, the unknown variables of Problem~\ref{problem1} are the polynomials $C_{k,i}(\vec x)$ and the linear functions $\hat a_{s,k}$ under the constraints of Property~\ref{indep_functions}. In the following section, we tackle this problem with the $C_{k,i}(\vec x)$ constant. \section{Quadratic polynomials} \label{sec_quadr_poly} In this section, we illustrate the procedure described previously by considering the special case of $n-M+1$ quadratic polynomials in the ideal $I$ generated by the polynomials~(\ref{simple_gene}). Namely, we take the polynomials $P_l$ of the form \begin{equation} \label{quad_poly} P_l(\vec x)=\sum_{i=1}^n c_{l,i}\hat a_{0,i}\hat a_{1,i}, \;\;\;\; l\in\{0,\dots,n-M\}, \end{equation} where the $c_{l,i}$ are rational numbers and the linear functions $\hat a_{s,i}$ satisfy Property~\ref{indep_functions}. Thus, the $n-M+1$ polynomials have $2^n$ common rational zeros, which are also the zeros of the generators~(\ref{simple_gene}). Each rational point is associated with a vector $\vec s\in\{0,1\}^n$ such that the linear equations $\vec a_{s_1,1}\cdot\vec x=0,\dots,\vec a_{s_n,n}\cdot\vec x=0$ are satisfied. First, we consider the case with one parameter ($M=1$). We also assume that all the $2^n$ rational points are in the parametrizable variety. Starting from these assumptions, we end up building a variety $\cal V$ with a number $M$ of parameters equal to $n/2-1$ for even $n\ge4$. Furthermore, we prove that there is no solution with $M=1$ if $n>4$. We give a numerical example for $n=4$, which leads to a rational function $\cal R$ with $16$ zeros. Then, we build a parametrizable variety with a number of parameters equal to $(n-1)/3$. Thus, the minimal number of parameters is some value between $2$ and $(n-1)/3$ for the considered model with polynomials of the form~(\ref{quad_poly}). \subsection{One parameter?
($M=1$)} Given polynomials~(\ref{quad_poly}) and vectors $\vec a_{s,i}$ satisfying Property~\ref{indep_functions}, we search for a solution of Problem~\ref{problem1} under the assumption $M=1$. Let us first introduce some notation and definitions. We define the $(n-1)\times n$ matrices \begin{equation} \label{def_matr_M} {\bf M}^{\vec s}\equiv \left( \begin{array}{ccc} A_{1,1}^{(s_1)} & \dots & A_{1,n}^{(s_n)} \\ \vdots & \ddots & \vdots \\ A_{n-1,1}^{(s_1)} & \dots & A_{n-1,n}^{(s_n)} \end{array} \right), \end{equation} where \begin{equation} A_{k,i}^{(s)}\equiv\frac{\partial\hat a_{s,i}}{\partial x_k}. \end{equation} The square submatrix of ${\bf M}^{\vec s}$ obtained by deleting the $j$-th column is denoted by ${\bf M}_j^{\vec s}$, that is, \begin{equation} {\bf M}_j^{\vec s}= \left( \begin{array}{cccccc} A_{1,1}^{(s_1)} & \dots & A_{1,j-1}^{(s_{j-1})} & A_{1,j+1}^{(s_{j+1})} & \dots &A_{1,n}^{(s_n)} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ A_{n-1,1}^{(s_1)} & \dots & A_{n-1,j-1}^{(s_{j-1})} & A_{n-1,j+1}^{(s_{j+1})} & \dots &A_{n-1,n}^{(s_n)} \end{array} \right). \end{equation} The vectors $\vec a_{0,i}$ and $\vec a_{1,i}$ are also briefly denoted by $\vec a_i$ and $\vec b_i$, respectively. Similarly, we also use the symbols $A_{k,i}$ and $B_{k,i}$ for the derivatives $A_{k,i}^{(0)}$ and $A_{k,i}^{(1)}$. Problem~\ref{problem1} takes the following specific form. \begin{problem} \label{problem2} Find coefficients $c_{l,i}$ and vectors $\vec a_{s,i}$ satisfying Property~\ref{indep_functions} such that \bey \label{prob2_1} \sum_{i=1}^n c_{l,i} \left( A_{k,i}\vec a_i+B_{k,i}\vec b_i\right)=0 \;\;\;\; 1\le k < l\le n-1, \vspace{2mm} \\ \label{prob2_2} \sum_{i=1}^n c_{l,i} A_{l,i} B_{l,i}=0 \;\;\;\; 1\le l\le n-1, \vspace{2mm} \\ \label{prob2_3} \vec x\in{\cal Z}_P \Rightarrow \sum_{i=1}^n c_{l,i} \left( A_{l,i}\hat a_i+B_{l,i}\hat b_i\right)\ne 0 \;\;\;\; 1\le l\le n-1.
\eey \end{problem} Let us stress again that the problem is invariant with respect to the transformations~(\ref{poly_replacement},\ref{invar_trans}), the latter inducing the transformation \be A_{k,i}^{(s)}\rightarrow A_{k,i}^{(s)}+\sum_{l=1}^{k-1}\bar\eta_{k,l}A_{l,i}^{(s)} \ee of the derivatives. We have the following. \begin{lemma} \label{lemma_inde_A} For every $\vec s\in\{0,1\}^n$ and $j\in\{1,\dots,n\}$, the matrix ${\bf M}_j^{\vec s}$ has maximal rank, that is, \begin{equation} \det {\bf M}_j^{\vec s}\ne 0. \end{equation} \end{lemma} {\it Proof}. Let us prove the lemma by contradiction: suppose that there are $j\in\{1,\dots,n\}$, $l\in\{1,\dots,n-1\}$, and $\vec s\in\{0,1\}^n$ such that the $l$-th row of ${\bf M}_j^{{\vec s}}$ is linearly dependent on the first $l-1$ rows. Thus, there are coefficients $\lambda_1,\dots,\lambda_{l-1}$ such that \begin{equation} A_{l,i}^{(s_l)}+\sum_{k=1}^{l-1}\lambda_k A_{k,i}^{(s_k)}=0 \;\;\; \forall i\ne j. \end{equation} With a change of variables of the form of Eq.~(\ref{invar_trans}), this equation can be rewritten in the form \begin{equation} A_{l,i}^{(s_l)}=0 \;\;\; \forall i\ne j. \end{equation} Up to permutations $\hat a_i\leftrightarrow \hat b_i$, we have \begin{equation} \label{lin_dep_lemma} B_{l,i}=0 \;\;\; \forall i\ne j. \end{equation} From Eq.~(\ref{prob2_2}), we have $$ \sum_{i=1}^n c_{l,i} A_{l,i} B_{l,i}=0. $$ From this equation and Eq.~(\ref{lin_dep_lemma}), we get $c_{l,j} A_{l,j} B_{l,j}=0$, implying that $c_{l,j} A_{l,j}=0$ or $c_{l,j} B_{l,j}=0$. Without loss of generality, let us take \be \label{clne0} c_{l,j} B_{l,j}=0. \ee Let $\vec x_0\in{\cal Z}_P$ be the vector orthogonal to $\vec b_1,\dots,\vec b_n$. From Eq.~(\ref{prob2_3}) we have that \be c_{l,j} B_{l,j}(\vec a_j\cdot\vec x_0)\ne 0, \ee which is in contradiction with Eq.~(\ref{clne0}). $\square$ \begin{corollary} \label{corol_c} The coefficients $c_{n-1,i}$ are different from zero for every $i\in\{1,\dots,n\}$.
\end{corollary} {\it Proof}. Let us assume that the statement is false. Up to a permutation, we have that $c_{n-1,1}=0$. Lemma~\ref{lemma_inde_A} implies that there is an integer $i_0\in\{2,\dots,n\}$ such that $B_{n-1,i}=0$ for $i\notin\{1,i_0\}$, up to a transformation of the form of Eq.~(\ref{invar_trans}). Thus, \be 0=\sum_{i=1}^n c_{n-1,i}A_{n-1,i}B_{n-1,i}= c_{n-1,i_0} A_{n-1,i_0}B_{n-1,i_0}, \ee the first equality coming from Eq.~(\ref{prob2_2}). Lemma~\ref{lemma_inde_A} also implies that $A_{n-1,i_0}B_{n-1,i_0}\ne 0$. Thus, on the one hand, we have that $c_{n-1,i_0}=0$. On the other hand, we have \be c_{n-1,i_0} B_{n-1,i_0}(\vec a_{i_0}\cdot\vec x_0)\ne 0 \ee from Eq.~(\ref{prob2_3}), where $\vec x_0$ is the vector orthogonal to $\vec b_1,\dots,\vec b_n$. Thus, we have a contradiction. $\square$ Let us denote by ${\bf M}^{\vec s}_{j_1,\dots,j_m}$ the submatrix of ${\bf M}^{\vec s}$ obtained by deleting the last $m-1$ rows and the columns $j_1,\dots,j_m$. Given the coefficient matrix \be {\bf c}\equiv \left( \begin{array}{ccc} c_{0,1} & \dots & c_{0,n} \\ \vdots & \ddots & \vdots \\ c_{n-1,1} & \dots & c_{n-1,n} \end{array} \right), \ee let us define ${\bf c}_{j_1,\dots,j_m}$ as the $m\times m$ submatrix of ${\bf c}$ obtained by keeping the last $m$ rows and the columns $j_1,\dots,j_m$. Lemma~\ref{lemma_inde_A} and Corollary~\ref{corol_c} are generalized by the following. \begin{theorem} \label{theorem_det} For every $m\in\{1,\dots,n-1\}$, $\vec s\in\{0,1\}^n$, and $m$ distinct integers $j_1,\dots,j_m\in\{1,\dots,n\}$, the matrices ${\bf M}^{\vec s}_{j_1,\dots,j_m}$ and ${\bf c}_{j_1,\dots,j_m}$ have maximal rank, that is, \bey \label{det_reduc} \det {\bf M}^{\vec s}_{j_1,\dots,j_m}\ne 0, \\ \label{det_reduc2} \det {\bf c}_{j_1,\dots,j_m}\ne 0. \eey \end{theorem} {\it Proof}. The proof is by induction on $m$. For $m=1$, the theorem reduces to Lemma~\ref{lemma_inde_A} and Corollary~\ref{corol_c}.
Thus, we just need to prove Eqs.~(\ref{det_reduc},\ref{det_reduc2}) by assuming that \bey \label{recur0} \det {\bf M}^{\vec s}_{j_1,\dots,j_{m-1}}\ne 0, \\ \label{recur1} \det {\bf c}_{j_1,\dots,j_{m-1}}\ne 0. \eey Let us first prove Eq.~(\ref{det_reduc}) by contradiction. If the equation is false, then there is an $\vec s_0\in\{0,1\}^n$ and $m$ distinct integers $i_1,\dots,i_m$ in $\{1,\dots,n\}$ such that $\det {\bf M}^{\vec s_0}_{i_1,\dots,i_m}=0$. By permutations, we can set $i_h=h$. By suitable exchanges of $\hat a_i$ and $\hat b_i$, we can set $s_i=1$ for every $i\in\{1,\dots,n\}$. Then, there is an integer $l\in\{1,\dots,n-m\}$ such that $B_{l,i}=0$ for $i\in\{m+1,\dots,n\}$, up to a transformation of the form of Eq.~(\ref{invar_trans}). From Eqs.~(\ref{prob2_1},\ref{prob2_2}), we have the $m$ equations \be \begin{array}{r} \sum_{i=1}^m c_{l,i}A_{l,i}B_{l,i}=0 \\ \sum_{i=1}^m c_{n+1-m,i}A_{l,i}B_{l,i}=0 \\ \sum_{i=1}^m c_{n+2-m,i}A_{l,i}B_{l,i}=0 \\ \dots \\ \sum_{i=1}^m c_{n-2,i}A_{l,i}B_{l,i}=0 \\ \sum_{i=1}^m c_{n-1,i}A_{l,i}B_{l,i}=0. \end{array} \ee From Eq.~(\ref{recur0}), we have that $A_{l,i}\ne0$ and $B_{l,i}\ne0$ for some $i\in\{1,\dots,m\}$, so that \be \text{rank} \left( \begin{array}{ccc} c_{l,1} & \dots & c_{l,m} \\ c_{n+1-m,1} & \dots & c_{n+1-m,m} \\ c_{n+2-m,1} & \dots & c_{n+2-m,m} \\ \vdots & \ddots & \vdots \\ c_{n-2,1} & \dots & c_{n-2,m} \\ c_{n-1,1} & \dots & c_{n-1,m} \end{array} \right)< m. \ee Up to a transformation of the form of Eq.~(\ref{poly_replacement}), there is an integer $l_0\in\{n+1-m,\dots,n-1\}\cup\{l\}$ such that $c_{l_0,i}=0$ for $i\in\{1,\dots,m\}$. Eq.~(\ref{recur1}) implies that $l_0=l$. Thus, $c_{l,1}=\dots=c_{l,m}=0$, but this contradicts Eq.~(\ref{prob2_3}) with $\vec x\in{\cal Z}_P$ orthogonal to $\vec b_1,\dots,\vec b_n$. Let us now prove Eq.~(\ref{det_reduc2}) by contradiction. If the equation is false, then there are $m$ distinct integers $i_1,\dots,i_m$ in $\{1,\dots,n\}$ such that $\det {\bf c}_{i_1,\dots,i_m}=0$.
Without loss of generality, let us take $i_h=h$. Up to the transformation~(\ref{poly_replacement}), there is an integer $l\in\{n-m,\dots,n-1\}$ such that $c_{l,i}=0$ for $i\in\{1,\dots,m\}$. Eq.~(\ref{recur1}) implies that $l=n-m$. Thus, \be c_{n-m,1}=\dots=c_{n-m,m}=0. \ee Eq.~(\ref{det_reduc}) implies that there is an integer $i_0\in\{m+1,\dots,n\}$ such that $A_{n-m,i}=0$ for $i\in\{m+1,\dots,n\}\bs\{i_0\}$ up to transformation~(\ref{invar_trans}). Thus, we have from Eq.~(\ref{prob2_2}) that \be 0=\sum_{i=1}^n c_{n-m,i}A_{n-m,i}B_{n-m,i}= c_{n-m,i_0}A_{n-m,i_0}B_{n-m,i_0}. \ee Eq.~(\ref{det_reduc}) also implies that $A_{n-m,i_0}B_{n-m,i_0}\ne 0$, so that $c_{n-m,i_0}=0$, which is in contradiction with Eq.~(\ref{prob2_3}) for $\vec x$ orthogonal to $\vec a_1,\dots,\vec a_n$. $\square$ \newline In the following, this theorem will be used with $m\in\{1,2\}$. Since all the coefficients $c_{n-1,i}$ are different from zero, we can set them equal to $1$ by rescaling the vectors $\vec a_i$ or $\vec b_i$. Let us denote by $c_i$ the coefficients $c_{n-2,i}$. Theorem~\ref{theorem_det} with $m=2$ implies that $c_i\ne c_j$ for $i\ne j$. Eq.~(\ref{prob2_1}) with $l=n-1$ and $l=n-2$ takes the form \bey \label{nm2} \frac{\partial }{\partial x_k}P_{n-1}=\sum_{i=1}^n \left( A_{k,i}\vec a_i+B_{k,i}\vec b_i\right)=0 \;\;\;\; 1\le k \le n-2, \\ \label{nm3} \frac{\partial }{\partial x_k}P_{n-2}= \sum_{i=1}^n c_i\left( A_{k,i}\vec a_i+B_{k,i}\vec b_i\right)=0 \;\;\;\; 1\le k \le n-3. \eey These equations impose the form~(\ref{poly_affine}) on the last two polynomials, $P_{n-1}$ and $P_{n-2}$, which must be independent of $n-2$ and $n-3$ variables, respectively. The first $n-2$ vector equations, Eqs.~(\ref{nm2}), are linearly independent. Indeed, if they were not, there would be coefficients $\lambda_1,\dots,\lambda_{n-2}$ such that $\sum_{k=1}^{n-2}\lambda_k (A_{k,i},B_{k,i})=0$, which is impossible because of Property~\ref{indep_functions}; it would also contradict Theorem~\ref{theorem_det}.
The theorem also implies that Eqs.~(\ref{nm3}) are linearly independent. Since the vector space is $(n+1)$-dimensional, the vectors $\vec a_i$ and $\vec b_i$ can satisfy at most $n-1$ independent vector constraints. Thus, at least $n-4$ out of Eqs.~(\ref{nm3}) are linearly dependent on Eqs.~(\ref{nm2}). First, let us show that $n-4$ is the maximal number of dependent equations. Assuming the converse, we would have \be c_i(A_{k,i},B_{k,i})=\sum_{l=1}^{n-2}\lambda_{k,l} (A_{l,i},B_{l,i}) \;\;\;\;\; \forall k\in\{1,\dots,n-3\} \ee for suitable coefficients $\lambda_{k,l}$. Let us define the linear superposition \be (A_i,B_i)\equiv \sum_{k=1}^{n-3} v_k (A_{k,i},B_{k,i}) \ee with the coefficients $v_k$. Let $\bf \Lambda$ be the $(n-2)\times(n-2)$ matrix with ${\bf \Lambda}_{k,n-2}=0$ and ${\bf \Lambda}_{k,l}=\lambda_{l,k}$ for $k\in\{1,\dots,n-2\}$ and $l\in\{1,\dots,n-3\}$. The coefficients $(v_1,\dots,v_{n-3})\equiv\vec v$ are defined by imposing the $n-4$ constraints \be ({\bf\Lambda}^s\vec v)_{n-2}=0 \;\;\;\;\; s\in\{1,\dots,n-4\}. \ee By construction, the pairs \be\label{expo_form} c_i^{k-1}(A_i,B_i) \;\;\;\; k\in\{1,\dots,n-2\} \ee are linear superpositions of the derivatives $(A_{k,i},B_{k,i})$ with $k\in\{1,\dots,n-2\}$. Furthermore, the first $n-3$ pairs are linear superpositions of $(A_{k,i},B_{k,i})$ with $k\in\{1,\dots,n-3\}$. That is, \be \label{expo_deps} \begin{array}{l} c_i^{k-1} (A_i,B_i)=\sum_{l=1}^{n-3}\bar\lambda_{k,l}(A_{l,i},B_{l,i}) \;\;\; k\in\{1,\dots,n-3\} \\ c_i^{n-3} (A_i,B_i)=\sum_{l=1}^{n-2}\bar\lambda_{n-2,l}(A_{l,i},B_{l,i}) \end{array} \ee for some coefficients $\bar\lambda_{k,l}$. From Lemma~\ref{lemma_inde_A} and Corollary~\ref{corol_c} we have that the $n-2$ pairs~(\ref{expo_form}) are linearly independent. Indeed, Corollary~\ref{corol_c} implies that $A_i\ne 0$ and $B_i\ne 0$ for every $i\in\{1,\dots,n\}$, and Lemma~\ref{lemma_inde_A} implies that the power vectors $(c_1^{k-1},\dots,c_n^{k-1})$ are linearly independent for $k\in\{1,\dots,n-2\}$.
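The linear independence of the power vectors $(c_1^{k-1},\dots,c_n^{k-1})$ for pairwise distinct $c_i$ rests on the Vandermonde identity $\det(c_i^{k-1})=\prod_{j>i}(c_j-c_i)$, which also reappears below. A minimal sketch of this identity in exact rational arithmetic; the specific values $c_i$ are arbitrary distinct rationals, chosen only for illustration:

```python
from fractions import Fraction

def det(m):
    # Laplace expansion along the first row (adequate for small matrices).
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

# Arbitrary pairwise-distinct rational values c_1, ..., c_4.
cs = [Fraction(1, 3), Fraction(-1, 2), Fraction(2), Fraction(5, 7)]
n = len(cs)
vandermonde = [[c ** k for c in cs] for k in range(n)]   # row k holds the powers c_i^k
product = Fraction(1)
for i in range(n):
    for j in range(i + 1, n):
        product *= cs[j] - cs[i]                         # prod_{j>i} (c_j - c_i)
assert det(vandermonde) == product and product != 0
```

Since the product is non-zero exactly when the $c_i$ are pairwise distinct, the rows of the Vandermonde matrix, and hence the power vectors, are linearly independent.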
Equations~(\ref{expo_form},\ref{expo_deps}) can also be derived from Jordan's theorem, Lemma~\ref{lemma_inde_A} and Corollary~\ref{corol_c}. See Appendix~\ref{lin_alg_tools}. Thus, by a variable transformation, Eqs.~(\ref{nm2},\ref{nm3}) take the form \be \sum_{i=1}^n c_i^{k-1}\left( A_{i}\vec a_i+B_{i}\vec b_i\right)=0 \;\;\;\;\;\; k\in\{1,\dots,n-2\} \ee and \be \frac{\partial (\hat a_i,\hat b_i)}{\partial x_k}=c_i^{k-1}(A_i, B_i) \;\;\;\;\;\; k\in\{1,\dots,n-2\}. \ee These equations imply that $\sum_{i=1}^n c_i^{k-1} A_i B_i=0$ for $k\in\{1,\dots,2n-5\}$. For $n>4$, we have in particular that \be \label{kill_AB} \sum_{i=1}^n c_i^{k-1} A_i B_i=0 \;\;\;\; k\in\{1,\dots,n\}. \ee Since \be \det\left( \begin{array}{ccc} 1 & \dots & 1 \\ c_1 & \dots & c_n \\ \vdots & \ddots & \vdots \\ c_1^{n-1} & \dots & c_n^{n-1} \end{array} \right)=\prod_{j>i} (c_j-c_i) \ee and $c_i\ne c_j$ for $i\ne j$, Eq.~(\ref{kill_AB}) implies that $A_i B_i=0$ for every $i\in\{1,\dots,n\}$. But this is in contradiction with Theorem~\ref{theorem_det}. Thus, let us take exactly $n-4$ out of Eqs.~(\ref{nm3}) linearly dependent on Eqs.~(\ref{nm2}). Let $\bar k$ be an integer in $\{1,\dots,n-3\}$ such that Eq.~(\ref{nm3}) with $k=\bar k$ is linearly independent of Eqs.~(\ref{nm2}). Thus, $$ c_i(A_{k,i},B_{k,i})=\bar\lambda_k c_i (A_{\bar k,i},B_{\bar k,i})+ \sum_{l=1}^{n-2}\lambda_{k,l}(A_{l,i},B_{l,i}) \;\;\;\; k\in\{1,\dots,n-3\}\bs\{\bar k\}. $$ By a transformation of the first $n-3$ variables, we can rewrite this equation in the form \be c_i(A_{k,i},B_{k,i})=\sum_{l=1}^{n-2}\lambda_{k,l}(A_{l,i},B_{l,i}) \;\;\;\; k\in\{1,\dots,n-4\}. \ee By a suitable transformation of the first $n-2$ variables, the $n-2$ pairs $(A_{k,i},B_{k,i})$ can be split into two groups (see Appendix~\ref{lin_alg_tools}), say, \be \left.
\begin{array}{l} \frac{\partial}{\partial x_k'} (\hat a_i,\hat b_i) \equiv (A_{k,i}', B_{k,i}')=c_i^{k-1} (A_i',B_i') \;\;\;\; k\in\{1,\dots,n_1\} \\ \frac{\partial}{\partial x_k''}(\hat a_i,\hat b_i) \equiv (A_{k,i}'', B_{k,i}'')=c_i^{k-1} (A_i'',B_i'') \;\;\;\; k\in\{1,\dots,n_2\} \end{array} \right\} \;\;\; n_1+n_2=n-2. \ee Equations~(\ref{nm2}) become \begin{equation} \label{nm2_vector_equation} \begin{array}{l} \sum_{i=1}^n c_i^{k-1}\left( A_i' \vec b_i+B_i' \vec a_i\right)=0 \;\;\; k\in\{1,\dots,n_1\} \\ \sum_{i=1}^n c_i^{k-1}\left( A_i'' \vec b_i+ B_i'' \vec a_i\right)=0 \;\;\; k\in\{1,\dots,n_2\}. \end{array} \end{equation} Given these $n-2$ vector constraints, all the $n-2$ derivatives $\partial P_{n-1}/\partial x_1',\dots,\partial P_{n-1}/\partial x_{n_1}'$, $\partial P_{n-1}/\partial x_1'',\dots,\partial P_{n-1}/\partial x_{n_2}''$ are equal to zero. Furthermore, we also have that $$ \begin{array}{l} \frac{\partial}{\partial x_k'}P_{n-2}=0 \;\;\; k\in\{1,\dots,n_1-1\} \\ \frac{\partial}{\partial x_k''}P_{n-2}=0 \;\;\; k\in\{1,\dots,n_2-1\}, \end{array} $$ so that $P_{n-2}$ is independent of $n-4$ out of the $n-2$ variables $x_1',\dots,x_{n_1}'$, $x_1'',\dots,x_{n_2}''$. Thus, we need to add another vector equation such that $\left(w_1 \frac{\partial}{\partial x_{n_1}'}+w_2 \frac{\partial}{\partial x_{n_2}''}\right)P_{n-2}=0$ for some $(w_1,w_2)\ne (0,0)$. Up to a variable transformation, we can set $(w_1,w_2)=(1,0)$, so that the additional vector equation is \be \label{additional_equation} \sum_{i=1}^n c_i^{n_1}\left( A_i' \vec b_i+B_i' \vec a_i\right) =0. \ee Equations~(\ref{prob2_2},\ref{nm2_vector_equation},\ref{additional_equation}) imply that \bey \label{eqs_coef_AB} \sum_{i=1}^n c_i^{k-1} A_i' B_i'=0 \;\;\; k\in\{1,\dots,2 n_1\}, \\ \label{eqs_coef_AbBb} \sum_{i=1}^n c_i^{k-1} A_i'' B_i''=0 \;\;\; k\in\{1,\dots,2 n_2\}, \\ \label{eqs_coef_AbB} \sum_{i=1}^n c_i^{k-1} ( A_i' B_i''+A_i'' B_i')=0 \;\;\; k\in\{1,\dots,n_1+n_2\}.
\eey Since $A_i' B_i'$ and $A_i'' B_i''$ are not identically equal to zero (as a consequence of Theorem~\ref{theorem_det}), the number of Eqs.~(\ref{eqs_coef_AB}) and Eqs.~(\ref{eqs_coef_AbBb}) is smaller than $n$, so that $$ n_1\le \frac{n-1}{2},\;\;\; n_2\le \frac{n-1}{2}. $$ Without loss of generality, we can assume that $n$ is even. Indeed, if Problem~\ref{problem1} can be solved for $n$ odd, then Lemma~\ref{lemma_dim_red} implies that it can be solved for $n$ even, and {\it vice versa}. Since $n_1+n_2=n-2$, we have that \be n_1=n_2=\frac{n-2}{2}. \ee Let $W_1,\dots, W_n$ be $n$ numbers defined by the equations \be \label{def_W} \sum_{i=1}^n c_i^{k-1} W_i=0 \;\;\;\; k\in\{1,\dots,n-1\} \ee up to a constant factor. Equations~(\ref{eqs_coef_AB},\ref{eqs_coef_AbBb},\ref{eqs_coef_AbB}) are equivalent to the equations \bey \label{AB} A_i' B_i'=(k_0+k_1 c_i) W_i, \\ \label{AbBb} A_i'' B_i''=(r_0+r_1 c_i) W_i, \\ \label{AbB} A_i' B_i''+A_i'' B_i'=(s_0+s_1 c_i) W_i. \eey These equations can be solved over the rationals for the coefficients $c_i$, $B_i'$ and $B_i''$ in terms of $A_i'$ and $A_i''$. The coefficients $c_i$ take a form which is independent of $W_i$, \be c_i=-\frac{r_0 A_i'^{\, 2}+k_0 A_i''^{\,2}-s_0 A_i' A_i''}{r_1 A_i'^{\, 2}+k_1 A_i''^{\,2}-s_1 A_i' A_i''}, \ee so that we first evaluate $c_i$, then $W_i$ by Eq.~(\ref{def_W}) and, finally, $B_i'$ and $B_i''$ by Eqs.~(\ref{AB},\ref{AbBb}). It is possible to show that condition~(\ref{prob2_3}) for $l=n-1$ implies that $(k_1,r_1)\ne(0,0)$. Indeed, if $(k_1,r_1)=(0,0)$, then only half of the points in ${\cal Z}_P$ satisfy the inequality in the condition. Up to a variable change, we have $$ k_1\ne 0,\;\;\; s_1=0. $$ Up to now, we have been able to solve all the conditions of Problem~\ref{problem2} which refer to the last two polynomials, that is, for $l=n-2,n-1$. The equations that need to be satisfied are Eqs.~(\ref{nm2_vector_equation},\ref{additional_equation}, \ref{def_W},\ref{AB},\ref{AbBb},\ref{AbB}).
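The scalar part of this procedure can be checked in exact rational arithmetic. The following sketch uses the constants of the numerical example given later ($A_i'=i$, $A_i''=1$, $k_0=k_1=r_0=1$, $r_1=2$, $s_0=3$, $s_1=0$); the sign of $c_i$ is the one obtained by eliminating $B_i'$, $B_i''$ and $W_i$, and $W_i$ is computed as a null vector of the Vandermonde rows $c_i^{k-1}$:

```python
from fractions import Fraction

# Constants of the numerical example (with s_1 = 0): assumptions for this sketch.
k0, k1, r0, r1, s0, s1 = map(Fraction, (1, 1, 1, 2, 3, 0))
n = 4
Ap  = [Fraction(i) for i in range(1, n + 1)]   # A_i'  = i
App = [Fraction(1)] * n                        # A_i'' = 1

# c_i from eliminating B_i', B_i'' and W_i in the three product relations;
# the sign comes out negative in this convention.
c = [-(r0 * a * a + k0 * b * b - s0 * a * b) /
      (r1 * a * a + k1 * b * b - s1 * a * b) for a, b in zip(Ap, App)]
assert len(set(c)) == n                        # the c_i must be pairwise distinct

# W_i spans the one-dimensional kernel of the rows c_i^{k-1}, k = 1..n-1:
# W_i proportional to 1 / prod_{j != i} (c_i - c_j).
W = [Fraction(1)] * n
for i in range(n):
    for j in range(n):
        if j != i:
            W[i] /= c[i] - c[j]
for k in range(1, n):
    assert sum(ci ** (k - 1) * wi for ci, wi in zip(c, W)) == 0

# Back-substitute B_i' and B_i'' and verify all three product relations.
Bp  = [(k0 + k1 * ci) * wi / a for ci, wi, a in zip(c, W, Ap)]
Bpp = [(r0 + r1 * ci) * wi / b for ci, wi, b in zip(c, W, App)]
for a, b, bp, bpp, ci, wi in zip(Ap, App, Bp, Bpp, c, W):
    assert a * bp == (k0 + k1 * ci) * wi
    assert b * bpp == (r0 + r1 * ci) * wi
    assert a * bpp + b * bp == (s0 + s1 * ci) * wi
```

All three product relations and the $n-1$ power-sum constraints then hold identically, which is the consistency needed before imposing the remaining vector equations.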
Let us rewrite them all together. \be \boxed{ \begin{array}{c} A_i' B_i'=(k_0+k_1 c_i) W_i, \;\;\; A_i'' B_i''=(r_0+r_1 c_i) W_i \\ A_i' B_i''+A_i'' B_i'=s_0 W_i, \;\;\; k_1\ne 0 \vspace{1mm} \\ \sum_{i=1}^n c_i^{k-1} W_i=0 \;\;\;\; k\in\{1,\dots,n-1\} \vspace{1mm} \\ \sum_{i=1}^n c_i^{k-1}\left( A_i' \vec b_i+B_i' \vec a_i\right)=0 \;\;\; k\in\{1,\dots,\frac{n}{2}\} \\ \sum_{i=1}^n c_i^{k-1}\left( A_i'' \vec b_i+ B_i'' \vec a_i\right)=0 \;\;\; k\in\{1,\dots, \frac{n}{2}-1\} \end{array} } \ee Given $2n$ vectors $\vec a_1,\dots,\vec a_n,\vec b_1,\dots,\vec b_n$ satisfying these equations, there are $n-1$ directions $\vec u_1,\dots,\vec u_{n-1}$ such that \be \begin{array}{l} \vec u_{2k-1}\cdot \frac{\partial}{\partial\vec x} (\hat a_i,\hat b_i)=c_i^{k-1}(A_i',B_i') \;\;\; k\in\{1,\dots,\frac{n}{2}\} \\ \vec u_{2k}\cdot \frac{\partial}{\partial\vec x} (\hat a_i,\hat b_i)=c_i^{k-1}(A_i'',B_i'') \;\;\; k\in\{1,\dots,\frac{n}{2}-1\}. \end{array} \ee This can be easily verified by substitution. Let us define the coordinate system $(y_1,\dots,y_{n+1})\equiv \vec y$ such that \be \vec u_k\cdot \frac{\partial}{\partial\vec x} =\frac{\partial}{\partial y_k} \;\;\; k\in\{1,\dots,n-1\}. \ee Given the polynomials \begin{equation} \begin{array}{l} P_{n-1}=\sum_{i=1}^n \hat a_i\hat b_i \\ P_{n-2}=\sum_{i=1}^n c_i \hat a_i\hat b_i \end{array} \end{equation} with $\hat a_i=\vec a_i\cdot \vec y$ and $\hat b_i=\vec b_i\cdot \vec y$, it is easy to verify that \begin{equation} \begin{array}{l} \frac{\partial P_{n-1}}{\partial y_k}=0, \;\;\;\; k\in\{1,\dots,n-2\}, \\ \frac{\partial P_{n-2}}{\partial y_k}=0, \;\;\;\; k\in\{1,\dots,n-3\}, \\ \frac{\partial^2 P_{n-2}}{\partial y_{n-2}^2}=0. \end{array} \end{equation} The polynomial $P_{n-1}$ depends on $2$ variables (in the affine space) and the polynomial $P_{n-2}$ depends linearly on an additional variable $y_{n-2}$.
Thus, the algebraic set of the two polynomials admits a Gaussian parametrization, that is, the equations $P_{n-1}=0$ and $P_{n-2}=0$ can be solved \`{a} la Gauss by eliminating two variables. Note that the polynomial $P_{n-1}$ has rational roots by construction. The next step is to satisfy the conditions of Problem~\ref{problem2} for the other polynomials $P_{1},\dots,P_{n-3}$ by setting $c_{k,i}$ and the other remaining free coefficients. It is interesting to note that it is sufficient to take $c_{2s,i}=c_i^{n/2-s}$ with $s\in\{1,\dots,(n-4)/2\}$ to satisfy every condition of Problem~\ref{problem2} for $l$ even. The polynomials $P_2,P_4,\dots,P_{n-2}$ take the form \be\label{poly_even} P_{2s}=\sum_{i=1}^n c_i^{n/2-s} \hat a_i\hat b_i, \;\;\; s\in\{1,\dots,(n-2)/2\}. \ee Furthermore, we can choose $c_{1,i}$ such that $\partial^2 P_1/\partial y_1^2=0$. With this choice, we have that \be \left. \begin{array}{l} \frac{\partial P_l}{\partial y_k}=0, \;\;\;\; k\in\{1,\dots,l-1\} \\ \frac{\partial^2 P_l}{\partial y_l^2}=0 \end{array} \right\} \;\;\; l\in\{2,4,\dots,n-4,n-2\}\cup\{1,n-1\}. \ee Thus, we are halfway to solving Problem~\ref{problem2}: about half of the conditions are satisfied. The hard core of the problem is to solve the conditions for $P_1,P_3,\dots,P_{n-3}$. The form of Polynomials~(\ref{poly_even}) is not necessarily the most general. Thus, let us take a step backward and handle Problem~\ref{problem2} for the polynomial $P_{n-3}$ with the equations derived so far. We will find that this polynomial cannot satisfy the required conditions if $n>4$, so that the number of parameters has to be greater than $1$. Let us denote by $e_i$ the coefficients $c_{n-3,i}$.
Eqs.~(\ref{prob2_1},\ref{prob2_2}) with $l=n-3$ give the equations $$ \sum_{i=1}^n e_i \left(A_{k,i} B_{k',i}+A_{k',i} B_{k,i}\right)=0 \;\;\; k,k'\in\{1,\dots,n-3\}, $$ which imply that $$ \begin{array}{l} \sum_{i=1}^n e_i c_i^{k+k'-2} A_i' B_i'=0\;\;\; k,k'\in\{1,\dots,\frac{n}{2}-1\} \\ \sum_{i=1}^n e_i c_i^{k+k'-2} A_i'' B_i''=0\;\;\; k,k'\in\{1,\dots,\frac{n}{2}-2\} \\ \sum_{i=1}^n e_i c_i^{k+k'-2} \left( A_i' B_i''+ A_i'' B_i'\right) =0\;\;\; \left\{ \begin{array}{l} k\in\{1,\dots,\frac{n}{2}-1\} \\ k'\in\{1,\dots,\frac{n}{2}-2\} \end{array} \right. \end{array} $$ that is, \be \label{eqs_for_e} \begin{array}{l} \sum_{i=1}^n e_i c_i^{k-1} A_i' B_i'=0\;\;\; k\in\{1,\dots,n-3\} \\ \sum_{i=1}^n e_i c_i^{k-1} A_i'' B_i''=0\;\;\; k\in\{1,\dots,n-5\} \\ \sum_{i=1}^n e_i c_i^{k-1} \left( A_i' B_i''+ A_i'' B_i'\right) =0\;\;\; k\in\{1,\dots,n-4\}. \end{array} \ee These equations imply that \be \begin{array}{l} e_i A_i' B_i'=F_{11}(c_i) W_i \\ e_i A_i'' B_i''=F_{22}(c_i) W_i \\ e_i \left(A_i' B_i''+ A_i'' B_i'\right) =F_{12}(c_i) W_i, \end{array} \ee where $F_{11}(x)$, $F_{22}(x)$ and $F_{12}(x)$ are polynomials of degree lower than $3$, $5$ and $4$, respectively. Thus, \be e_i=\frac{F_{11}(c_i)}{k_0+k_1 c_i}=\frac{F_{22}(c_i)}{r_0+r_1 c_i}= \frac{F_{12}(c_i)}{s_0}. \ee The second and third equalities give polynomials of degree lower than $6$ and $5$, respectively. Since $c_i\ne c_j$ for $i\ne j$ and $n$ is even, the coefficients of these polynomials are equal to zero for $n>4$. In particular, $k_0+k_1 c_i$ divides $F_{11}(c_i)$ and, thus, $e_i$ is a linear function of $c_i$. It follows that $P_{n-3}=q_1 P_{n-2}+q_2 P_{n-1}$ for some constants $q_1$ and $q_2$, so that there is no independent polynomial $P_{n-3}$ satisfying the required conditions for $n>4$. In conclusion, we searched for a solution of Problem~\ref{problem2} with one parameter ($M=1$), but we ended up finding a solution with $n/2-1$ parameters.
Let us stress that we have not proved that $M$ cannot be less than $n/2-1$; we have only proved that $P_{n-3}$ cannot satisfy the required conditions, so that solutions with $1<M<n/2-1$ may exist. Furthermore, we employed the condition $M=1$ in some intermediate inferences. Thus, to check the existence of better solutions, we need to consider the case $M\ne 1$ from scratch. For the sake of completeness, let us write down the solution for $n=4$. Eqs.~(\ref{eqs_for_e}) reduce to \be \sum_{i=1}^n e_i A_i' B_i'=0. \ee Up to a replacement $P_{1}\rightarrow \lambda_1 P_1+\lambda_2 P_2+\lambda_3 P_3$ for some constants $\lambda_i$ with $\lambda_1\ne 0$, we have that \be e_i=\frac{1}{k_0+k_1 c_i}. \ee Thus, the $4$ polynomials take the form \be \begin{array}{ll} P_0=\hat a_1\hat b_1, \;\;\;\; & P_1=\sum_{i=1}^{4}\frac{\hat a_i \hat b_i}{k_0+k_1 c_i} \\ P_2=\sum_{i=1}^{4}c_i \hat a_i \hat b_i \;\;\;\; & P_3=\sum_{i=1}^{4} \hat a_i \hat b_i. \end{array} \ee Let us give a numerical example with $4$ polynomials, built by using the derived equations. \subsubsection{Numerical example with $n=4$} Let us set $A_i'=i$, $A_i''=1$, $k_0=k_1=r_0=1$, $r_1=2$, and $s_0=3$.
Up to a linear transformation of $x_3$ and $x_4$, this setting gives the polynomials \be \begin{array}{l} P_3(x_3,x_4)=\\ 5 x_3 \left(8427 x_4+9430\right)-209 \left(3 x_4 \left(393 x_4+880\right)+1478\right) \vspace{1mm} \\ P_2(x_2,x_3,x_4)= \\ 5538425 x_3^2+18810 \left(1445 x_2+5718 x_4+6421\right) x_3-786258 \left(3 x_4 \left(267 x_4+598\right)+1004\right) \vspace{1mm} \\ P_1(x_1,x_2,x_3,x_4)= \\ 2299 [205346285 x_3-38 (63526809 x_4+35594957)]- 5 [-2045057058 x_2^2+ \\ 1630827 (1813 x_3+1254 x_4) x_2+2891872832 x_3^2+495958966272 x_4^2+ \\ 4892481 x_1 (1254 x_2-1429 x_3-418)-87093628743 x_3 x_4] \vspace{1mm}\\ P_0(x_1,x_2,x_3,x_4)= \\ \left(627 x_1+627 x_2-46 x_3+1881 \left(x_4+1\right)\right) \left(5016 x_1+6270 x_2+2555 x_3-3762 \left(4 x_4+5\right)\right) \end{array} \ee Taking $x_4$ as the parameter $\tau$ and solving the equations $P_3=P_2=P_1=0$ with respect to $x_3$, $x_2$ and $x_1$, we substitute the result into $P_0$ and obtain, up to a constant factor, \be {\cal R}(\tau)=\frac{\prod_{k=1}^{16}(\tau-\tau_k)}{Q_1^2(\tau)Q_2^2(\tau)Q_3^2(\tau)}, \ee where \be \begin{array}{l} Q_1(\tau)=8427 \tau +9430, \\ Q_2(\tau)=3 \tau (393 \tau +880)+1478, \\ Q_3(\tau)=3 \tau (9 \tau (7 \tau (5367293625 \tau +24273841402)+288165964484)+1954792734568)+1657527934720, \\ (\tau_1,\dots,\tau_{16})= -\left(\frac{86}{69},\frac{800}{681},\frac{122}{105},\frac{3166}{2775},\frac{140}{123},\frac{718}{633},\frac{2452}{2163},\frac{5558}{4929},\frac{2578}{2289}, \frac{152}{135},\frac{1070}{951},\frac{3932}{3507},\frac{158}{141}, \frac{2072}{1851},\frac{1142}{1023},\frac{218}{201}\right). \end{array} \ee Over a finite field $\mathbb{Z}_p$, one can check that the numerator has about $16$ distinct roots for $p\gg 16$. For $p\simeq 16$, there are fewer roots, because of collisions or because the denominator of some rational root $\tau_k$ is divisible by $p$. \subsubsection{Brief excursus on retro-causality and time loops} Previously, we have built the polynomials~(\ref{poly_even}).
Setting them equal to zero, we have a triangular system of about $n/2$ polynomial equations that can be efficiently solved in $n/2$ variables, say ${\bf x}_1$, given the value of the other variables, say ${\bf x}_2$. This system is more or less symmetric, that is, the variables ${\bf x}_2$ can be efficiently computed given the first block ${\bf x}_1$ (up to a few variables). To determine the overall set of variables, we need the missing $n/2$ polynomials in the ideal $I$. It is possible to choose the coefficients $c_{l,i}$ of these polynomials in such a way that the associated equations again have a triangular form with respect to one of the two blocks ${\bf x}_1$ and ${\bf x}_2$, up to a few variables. Thus, we end up with two independent equations and a boundary condition, \be \begin{array}{l} {\bf x}_2={\cal R}_1({\bf x}_1), \\ {\bf x}_3={\cal R}_2({\bf x}_2), \\ {\bf x}_3={\bf x}_1, \end{array} \ee where ${\cal R}_1$ and ${\cal R}_2$ are vector-valued rational functions. The first two equations can be interpreted as time-forward and time-backward processes. The last equation identifies the initial state of the forward process with the final state of the backward process. The overall process can also be seen as a deterministic process in a time loop. This analogy is suggestive, since retro-causality is considered one possible explanation of quantum weirdness. Can a suitable break of causality allow for a description of quantum processes in a classical framework? To be physically interesting, this break should not lead to a computational power beyond the power of quantum computers; otherwise, a fine tuning of the theory would be necessary to conceal, in a physical process, much of the power allowed by the causality break. A similar fine tuning is necessary if, for example, quantum non-locality is explained with superluminal interactions. These classical non-local theories need an artificial fine tuning to account for the non-signaling property of quantum theory.
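Returning to the numerical example: the collision phenomenon for the roots $\tau_k$ over $\mathbb{Z}_p$ can be checked directly. A small sketch in exact arithmetic (the modular inverse `pow(d, -1, p)` requires Python 3.8 or later):

```python
from fractions import Fraction

# The 16 rational roots tau_k of the numerator of R(tau) (note the overall minus sign).
taus = [-Fraction(a, d) for a, d in [
    (86, 69), (800, 681), (122, 105), (3166, 2775),
    (140, 123), (718, 633), (2452, 2163), (5558, 4929),
    (2578, 2289), (152, 135), (1070, 951), (3932, 3507),
    (158, 141), (2072, 1851), (1142, 1023), (218, 201)]]

def distinct_roots_mod(p):
    """Count the distinct residues of the tau_k that are representable
    modulo p, i.e. whose denominator is not divisible by p."""
    residues = set()
    for t in taus:
        if t.denominator % p != 0:
            residues.add(t.numerator * pow(t.denominator, -1, p) % p)
    return len(residues)
```

For a large prime such as $p=10^9+7$ all $16$ residues are distinct, while for $p=3$, which divides every denominator, no root is representable at all.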
\subsection{$(n-1)/3$ parameters at most} In the previous subsection, we have built a class of curves defined by systems of $n-1$ polynomial equations such that about half of the variables can be efficiently solved over a finite field as functions of the remaining variables. These curves and the polynomial $P_0$ have $2^n$ rational intersection points. From a different perspective (discarding about $n/2$ polynomials), we have found a parametrizable variety with about $n/2$ parameters such that its intersection with some hypersurface has $2^n$ rational points. In this subsection, we show that the number of parameters can be dropped to about $n/3$, so that at least about $2n/3$ variables can be efficiently eliminated. In the following, we consider space dimensions $n$ such that $n-1$ is a multiple of $3$. Let us define the integer \be n_1\equiv\frac{n-1}{3}. \ee Let us define the rational numbers $A_i,B_i,\bar A_i,\bar B_i$, $W_i$, and $c_i$ with $i\in\{1,\dots,n\}$ as a solution of the equations \be \begin{array}{c} A_i B_i=W_i,\;\;\; \bar A_i \bar B_i= W_i, \\ A_i \bar B_i+\bar A_i B_i=2 c_i W_i, \\ \sum_{i=1}^n c_i^{k-1} W_i=0 \;\;\;\; k\in\{1,\dots,n-1\}, \\ i\ne j \Rightarrow c_i\ne c_j. \end{array} \ee The procedure for finding a solution has been given previously. We define the polynomials \be P_s=\sum_{i=1}^n c_i^{s-1} \hat a_i \hat b_i, \;\;\;\; s\in\{1,\dots,n\}. \ee The linear functions $\hat a_i$ and $\hat b_i$ are defined by the $n-1$ linear equations \be \begin{array}{l} \sum_{i=1}^{n} c_i^{k-1}(A_i \hat b_i+ B_i \hat a_i)=0, \;\;\; k\in\{1,\dots,n_1\} \\ \sum_{i=1}^{n} c_i^{k-1}(\bar A_i \hat b_i+\bar B_i \hat a_i)=0, \;\;\; k\in\{1,\dots,2 n_1\}. \end{array} \ee These equations uniquely determine $\hat a_i$ and $\hat b_i$, up to a linear transformation of the variables $x_1,\dots,x_{n+1}$.
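The purely arithmetic step, solving the displayed equations for $A_i,\bar A_i,B_i,\bar B_i,W_i,c_i$, can be sketched in exact rational arithmetic. Eliminating $B_i$, $\bar B_i$ and $W_i$ gives $c_i=(A_i^2+\bar A_i^2)/(2A_i\bar A_i)$, after which $W_i$ spans the kernel of the Vandermonde rows. The choices $A_i=i$ and $\bar A_i=1$ below are arbitrary assumptions made only for illustration:

```python
from fractions import Fraction

def rational_data(n):
    """Build rational A_i, Abar_i, B_i, Bbar_i, W_i, c_i satisfying
    A*B = W, Abar*Bbar = W, A*Bbar + Abar*B = 2*c*W and the n-1
    power-sum constraints sum_i c_i^(k-1) W_i = 0, k = 1..n-1."""
    A = [Fraction(i) for i in range(1, n + 1)]       # free choice (assumption)
    Abar = [Fraction(1)] * n                         # free choice (assumption)
    c = [(a * a + ab * ab) / (2 * a * ab) for a, ab in zip(A, Abar)]
    assert len(set(c)) == n                          # c_i pairwise distinct
    W = [Fraction(1)] * n
    for i in range(n):
        for j in range(n):
            if j != i:
                W[i] /= c[i] - c[j]                  # kernel of the Vandermonde rows
    B = [w / a for w, a in zip(W, A)]
    Bbar = [w / ab for w, ab in zip(W, Abar)]
    return A, Abar, B, Bbar, W, c

A, Abar, B, Bbar, W, c = rational_data(7)            # n = 7, so n_1 = (n-1)/3 = 2
for i in range(7):
    assert A[i] * B[i] == W[i]
    assert Abar[i] * Bbar[i] == W[i]
    assert A[i] * Bbar[i] + Abar[i] * B[i] == 2 * c[i] * W[i]
for k in range(1, 7):                                # k = 1..n-1
    assert sum(ci ** (k - 1) * wi for ci, wi in zip(c, W)) == 0
```

The remaining, geometric, step is then to solve the $n-1$ linear equations above for the linear functions $\hat a_i$ and $\hat b_i$.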
Up to a linear transformation, we have \be \begin{array}{l} \frac{\partial (\hat a_i,\hat b_i)}{\partial x_k}=c_i^{k-1} (\bar A_i,\bar B_i), \;\;\; k\in\{1,\dots,n_1\} \\ \frac{\partial (\hat a_i,\hat b_i)}{\partial x_{k+n_1}}=c_i^{k-1} (A_i,B_i), \;\;\; k\in\{1,\dots,n_1\}. \end{array} \ee Since there are rational points in the curve, there is another variable, say $x_{2n_1+1}$, such that the second derivative $\partial^2 P_1/\partial x_{2n_1+1}^2$ is equal to zero. Using the above equations, we have \be \left. \begin{array}{l} \frac{\partial P_s}{\partial x_k}=0, \;\;\; k\in\{1,\dots,2 n_1-s+1\}, \\ \frac{\partial^2 P_s}{\partial x_k^2}=0, \;\;\; k=2 n_1-s+2. \end{array} \right\} \;\;\; s\in\{1,\dots,2 n_1+1 \}. \ee Thus, the first $2n_1+1=\frac{2n+1}{3}$ polynomials take the triangular form~(\ref{poly_affine}), up to a reordering of the indices. These polynomials define a parametrizable variety with $(n-1)/3$ parameters. Stated in a different way, there is a curve and a hypersurface such that their intersection contains $2^n$ points, and at least $(2n+1)/3$ coordinates of the points in the curve can be evaluated efficiently given the value of the other coordinates. It is possible to show that all the intersection points are in the parametrizable variety, that is, they satisfy the third of Conditions~(\ref{gauss_constrs}). \section{Conclusion and perspectives} \label{conclusion} In this paper, we have reduced prime factorization to the search for rational points of a parametrizable variety $\cal V$ having an arbitrarily large number $N_P$ of rational points in the intersection with a hypersurface $\cal H$. To reach a subexponential factoring complexity, the number of parameters $M$ has to grow sublinearly in the space dimension $n$.
In particular, if $N_P$ grows exponentially in $n$ and $M$ scales as a sublinear power of $n$, then the factoring complexity is polynomial (subexponential) if the computation of a rational point in $\cal V$, given the parameters, requires a number of arithmetic operations growing polynomially (subexponentially) in the space dimension. Here, we have considered a particular kind of rational parametrization. A set of $M$ coordinates, say $x_{n-M+1},\dots,x_n$, of the points in $\cal V$ is identified with the $M$ parameters, so that the first $n-M$ coordinates are taken equal to rational functions of the last $M$ coordinates. In particular, the parametrization is expressed in a triangular form. The $k$-th variable is taken equal to a rational function ${\cal R}_k={\cal N}_k/{\cal D}_k$ of the variables $x_{k+1},\dots,x_{n}$, with $k\in\{1,\dots,n-M\}$. That is, \be\label{triang_par} \begin{array}{l} x_k={\cal R}_k(x_{k+1},\dots,x_n), \;\;\; k\in\{1,\dots,n-M\}, \end{array} \ee which parametrize a variety in the zero locus of the $n-M$ polynomials \be\label{triang_poly_form} P_k={\cal D}_k x_k-{\cal N}_k, \;\;\;\; k\in\{1,\dots,n-M\}. \ee To reach polynomial complexity, there are two requirements on these polynomials. First, they have to contain a number of monomials scaling polynomially in $n$, so that the computation of ${\cal R}_k$ is efficient. For example, we could require that the degree is upper-bounded by some constant. Second, their zero locus has to share an exponentially large number of rational points with some hypersurface $\cal H$ (a superpolynomial scaling $N_P\sim e^{b\,n^\beta}$ with $0<\beta<1$ is actually sufficient, provided that the growth of $M$ is sufficiently slow). The hypersurface is the zero locus of some polynomial $P_0$. The computation of $P_0$ at a point also has to be efficient. We have proposed a procedure for building pairs $\{{\cal V},{\cal H}\}$ satisfying the two requirements. First, we define the set of $N_P$ rational points.
This set can depend on some coefficients. Since $N_P$ has to grow exponentially in the dimension, we need to define the points implicitly, as common zeros of a set of polynomials, say $G_1,G_2,\dots$. The simplest way is to take the $G_k$ as products of linear functions, like the polynomials~(\ref{quadr_polys}). These polynomials generate an ideal $I$. The relevant information on the generators is encoded in a satisfiability formula in conjunctive normal form without negations and a linear matroid. We have called these two objects a model. Second, we search for $n-M$ polynomials in $I$ with the triangular form~(\ref{triang_poly_form}). These polynomials always exist. Thus, the task is to find a solution such that the polynomials contain as few monomials as possible. This procedure is illustrated with the simplest example. The generators are taken equal to reducible quadratic polynomials of the form~(\ref{quadr_polys}), whose associated algebraic set contains $2^n$ rational points. We search for polynomials $P_k$ of the form $\sum_i c_i G_i$ with $c_i$ constant. First, we prove that there is no solution for $M=1$ and space dimension greater than $4$. Then, we find a solution for $M=(n-1)/3$. If there are solutions with $M$ scaling sublinearly in $n$, then a factoring algorithm with polynomial complexity automatically exists, since the computational complexity of the rational functions ${\cal R}_k$ is polynomial by construction. The existence of such solutions is left as an open problem. This work can proceed in different directions. First, it is necessary to investigate whether the studied model admits solutions with a much smaller set of parameters. The search has been performed in a subset of the ideal. Thus, if these solutions do not exist, we can possibly expand this subset (if it is sufficiently large, a solution certainly exists, but the polynomial complexity of ${\cal R}_k$ is no longer guaranteed).
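To fix ideas on the triangular parametrization~(\ref{triang_par}) and the associated polynomials~(\ref{triang_poly_form}), here is a toy instance with $n=3$ and $M=1$; the rational maps are hypothetical, chosen only for illustration:

```python
from fractions import Fraction

# Hypothetical triangular rational maps (illustration only):
#   x2 = R2(x3) = (x3^2 + 1)/x3,   x1 = R1(x2, x3) = (x2 - x3)/(x2 + x3)
def point_on_variety(x3):
    # Back-substitution: the last coordinate is the free parameter.
    x2 = (x3 * x3 + 1) / x3
    x1 = (x2 - x3) / (x2 + x3)
    return x1, x2, x3

# The corresponding polynomials P_k = D_k x_k - N_k vanish on every such point.
P2 = lambda x2, x3: x3 * x2 - (x3 * x3 + 1)
P1 = lambda x1, x2, x3: (x2 + x3) * x1 - (x2 - x3)

x1, x2, x3 = point_on_variety(Fraction(2))
assert P2(x2, x3) == 0 and P1(x1, x2, x3) == 0
```

Evaluating a point thus costs one back-substitution pass through the $n-M$ rational functions, which is efficient whenever each ${\cal R}_k$ has polynomially many monomials.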
We could also relax other hypotheses, such as the distinguishability of each of the $2^n$ rational points and their membership in the parametrizable variety. More general ideals are another option. In this context, we have shown that classes of models can be reduced to smaller classes while preserving the computational complexity of the associated factoring algorithms. This reduction makes the search space smaller. It is interesting to determine the minimal class of models obtained by this reduction. This is another problem left open. Apart from the search for better inputs to the procedure, there is a generalization of the procedure itself. The variety $\cal V$ has the parametrization~(\ref{triang_par}). However, there are more general parametrizable varieties which can be taken into consideration. It is also interesting to investigate whether there is some deeper relation with retro-causality and time loops and, possibly, a connection with Shor's algorithm. Indeed, in an attempt to lower the geometric genus of one of the non-parametrizable curves derived in the previous section, we found a set of solutions for the coefficients over the cyclotomic number field, such that the resulting polynomials have terms taking the form of a Fourier transform. The quantum Fourier transform is a key tool in Shor's algorithm. This solution ends up breaking the curve into the union of an exponentially large number of parametrizable curves, so it is not useful for our purpose. Nonetheless, the Fourier-like forms in the polynomials remain suggestive. Finally, the overall framework has an interesting relation with the satisfiability problem. Using a particular matroid, we have seen that there is a one-to-one correspondence between the points of an algebraic set and the solutions of a satisfiability formula (including also negations). Proving that a formula is satisfiable is equivalent to proving that a certain algebraic set is not empty. 
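This correspondence can be checked on a toy instance (the three-variable formula below is hypothetical, chosen only for illustration): encoding each clause as a product of linear factors that vanishes exactly when the clause is satisfied, the common zeros of the clause polynomials in $\{0,1\}^n$ are precisely the satisfying assignments.

```python
from itertools import product

# Illustrative formula with negations: (x0 OR x1) AND (NOT x0 OR x2).
# A positive literal x contributes the factor (1 - x), a negated one
# contributes x, so the clause polynomial vanishes iff the clause holds.
formula = [[(0, False), (1, False)], [(0, True), (2, False)]]

def clause_poly(clause, p):
    """Product of linear factors; zero exactly on assignments satisfying the clause."""
    val = 1
    for i, neg in clause:
        val *= p[i] if neg else (1 - p[i])
    return val

def is_satisfying(formula, p):
    """Direct Boolean evaluation of the CNF formula at assignment p."""
    return all(any(p[i] == (0 if neg else 1) for i, neg in clause)
               for clause in formula)

cube = list(product((0, 1), repeat=3))
zeros = [p for p in cube if all(clause_poly(c, p) == 0 for c in formula)]
sat = [p for p in cube if is_satisfying(formula, p)]
assert zeros == sat   # points of the algebraic set <-> satisfying assignments
print(len(zeros), "satisfying assignments")
```

The brute-force check over the cube is, of course, exponential; the point is only the one-to-one correspondence between the two sets.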
This mapping of SAT problems to an algebraic-geometry problem turns out to be a generalization of previous works using the finite field $\mathbb{Z}_2$; see, for example, Ref.~\cite{hung}. It would be interesting to investigate whether part of the machinery introduced here can be used to solve some classes of SAT formulae efficiently. \section{Acknowledgments} This work was supported by the Swiss National Science Foundation (SNF) and the Hasler Foundation under project no.~16057.
\section{Introduction} Four-dimensional $SU(N)$ gauge theory at zero temperature is known to be in a confining phase for all values of the bare coupling. A very large amount of work has been performed over the last decade in an effort to isolate the types of configurations in the functional measure responsible for maintaining one confining phase for arbitrarily weak coupling \cite{Rev}, \cite{LAT}. Nevertheless, a direct derivation of this unique feature of $SU(N)$ theories (shared only by non-abelian ferromagnetic spin systems in $2$ dimensions) has remained elusive. The origin of the difficulty is clear. It is the multi-scale nature of the problem: passage from a short distance ordered regime, where weak coupling perturbation theory is applicable, to a long distance strongly coupled disordered regime, where confinement and other collective phenomena emerge. Systems involving such dramatic change in physical behavior over different scales are hard to treat. Hydrodynamic turbulence, involving passage from laminar to turbulent flow, is another well-known example, which, in fact, shares some striking qualitative features with the confining QCD vacuum. The natural framework for addressing the problem from first principles is a Wilsonian renormalization group (RG) block-spinning procedure bridging short to long scales. The use of lattice regularization, i.e. the framework of lattice gauge theory (LGT) \cite{W}, is virtually mandatory in this context. There is no other known usable non-perturbative formulation of gauge theory that gives the path integral in closed form preserving non-perturbative gauge invariance and positivity of the transfer matrix (reflection positivity). Attempts at exact blocking constructions towards the `perfect action' along the Wilsonian renormalized trajectory \cite{H}, however, turn out, not surprisingly, to be exceedingly complicated. 
There are, nonetheless, approximate RG decimation procedures that can provide bounds on judiciously chosen quantities. The basic idea in this paper is to obtain both upper and lower bounds for the partition function and for the partition function in the presence of external center flux. The bounds are obtained by employing approximate decimations of the `potential moving' type \cite{M}, \cite{K}, which can be explicitly computed to any accuracy by simple algebraic operations. This leads to a rather simple construction constraining the behavior of the exact partition functions in the presence and in the absence of center flux; and, through them, the exact vortex free energy order parameter. The latter is the ratio of these two partition functions. It is thus shown to exhibit confining behavior for all values $0 < \beta < \infty$ of the inverse coupling $\beta=4/g^2$ defined at lattice spacing $a$ (UV cutoff). An earlier outline of the argument was given in \cite{T1}. As will become clear in the following, there are two main ingredients here that allow this type of result to be obtained. The first is the use of approximate decimations that can be computed explicitly at every step, while correctly reflecting the nature of RG flow in the exact theory. The second is to consider only partition functions, or (differences of) free energies, rather than the RG evolution of a full effective action that would allow computation of any observable at different scales. This more narrowly focused approach results in a tremendous simplification compared to a general RG blocking construction. The presentation is for the most part quite explicit. Some simple propositions, mostly containing basic bounds, serve as building blocks of the argument. They are enumerated by Roman numerals in the text below. Most proofs have been relegated to a series of appendices so as not to clutter what is essentially a simple construction. 
Only the case of gauge group $SU(2)$ is considered explicitly here. The same development, however, can be applied to other groups, and, most particularly, to $SU(3)$ which exhibits identical behavior under the approximate decimations. It will be helpful at this point to provide an outline of the steps in the argument developed in the rest of the paper. In section \ref{DEC}, starting with the pure $SU(2)$ LGT with partition function defined on a lattice of spacing $a$, we define a class of approximate decimation transformations to a coarser lattice of spacing $ba$. In section \ref{Z} the resulting partition function on this decimated lattice is shown to be an upper bound on the partition function on the original lattice. A similar rule can be devised for obtaining a partition function on the decimated lattice which gives a lower bound on the original partition function. One then interpolates between these bounds. For some appropriate value of the interpolating parameter, one thus obtains an exact integral representation of the original partition function. This representation is in terms of an effective action defined on the decimated lattice of spacing $ba$ plus a bulk free energy contribution resulting from the blocking $a \to ba$. Now, any such interpolation is not unique, and it is indeed expedient to consider different interpolation parametrizations. The resulting partition function representation is then invariant under such parametrization variations in its effective action. The other important ingredient is that the effective action in this representation is constrained between the effective actions corresponding to the upper and lower bound partition functions. Iterating this procedure in successive decimations, a representation of the partition function is obtained on progressively coarser lattices of spacing $a \to ba \to b^2a \to \cdots \to b^na$. In section \ref{TZ} we consider the partition function in the presence of external center flux. 
This is the flux of a center vortex, introduced by a $Z(2)$ `twist' in the action, and rendered topologically stable by winding around the lattice torus. The decimation-interpolation procedure just outlined for the partition function can also be applied in the presence of the external flux. A representation of the twisted partition function on progressively coarser lattices can then be obtained in a completely analogous manner. The ratio of the twisted to the untwisted partition function is the vortex free energy order parameter. Its behavior as a function of the size of the system characterizes the system's possible phases. By known correlation inequalities it can, furthermore, be related to the Wilson and 't~Hooft order parameters. Our representations of the twisted and untwisted partition functions may now be used to represent the ratio (section \ref{Z-/Z}). One may exploit the parametrization invariance of these representations to ensure that the bulk free energy contributions resulting in each decimation step $b^{m-1}a \to b^m a$ explicitly cancel between numerator and denominator in the ratio. One is then left with a representation of the vortex free energy solely in terms of an effective action defined on a lattice of spacing $b^n a$. Now this effective action is constrained by the effective actions corresponding to the upper and lower bounds. The latter are easily computable by straightforward iteration of the potential-moving decimation rules. Under successive transformations they flow, for space-time dimension $d\leq 4$ and any original coupling $g$ defined at spacing $a$, to the strong coupling regime. This is the regime where the coefficients in the character expansion of the exponential of the action become sufficiently small for the strong coupling cluster expansion to converge. Confining behavior is the immediate result for the vortex free energy, and, hence, `area law' behavior for the Wilson loop (section \ref{CONf}). 
As is well known, the theory contains only one free parameter, a physical scale which is conventionally taken to be (some multiple of) the string tension. This fact comes out in a natural way in the context of RG decimations, as we will see in the following. Fixing this scale then determines the dependence $g(a)$. The fact that $g(a)\to 0$ as $a\to 0$ is an essentially qualitative consequence of the flow exhibited by the decimations. Some concluding remarks are given in section \ref{SUM}. \section{Decimations} \label{DEC} \setcounter{equation}{0} \setcounter{Roman}{0} We work on a hypercubic lattice $\Lambda \subset {\rm\bf Z}^d$ of length $L_\mu$ in the $x^\mu$-direction, $\mu=1,\ldots ,d$, in units of the lattice spacing $a$. Individual bonds, plaquettes, 3-cubes, etc.\ are generically denoted by $b$, $p$, $c$, etc. More specific notations such as $b_\mu$ or $p_{\mu\nu}$ are used to indicate elementary $m$-cells of particular orientation. We use the standard framework and common notations of LGT with gauge group $G$. Group elements are generically denoted by $U$, and the bond variables by $U_b \in G$. In this paper we take $G=SU(2)$. We start with some appropriate plaquette action $A_p$ defined on $\Lambda$, which, for definiteness, is taken to be the Wilson action \begin{equation} A_p(U_p,\beta) ={\beta\over 2}\;{\rm Re}\,\chi_{1/2}(U_p) \;, \qquad U_p=\prod_{b\in \partial p} U_b \;,\label{Wilson} \end{equation} with $\beta=4/g^2$ defining the lattice coupling $g$. The character expansion of the exponential of the plaquette action function is given by \begin{equation} \exp \left(A_p(U,\beta)\right) = \sum_j\;d_j\, F_j(\beta)\,\chi_j(U) \label{exp} \end{equation} with Fourier coefficients: \begin{equation} F_j(\beta) = \int\,dU\; \exp \left(A_p(U,\beta)\right) \,{1\over d_j}\,\chi_j(U)\;.\label{Fourier} \end{equation} Here $dU$ denotes Haar measure on $G$, and $\chi_j$ the character of the $j$-th representation of dimension $d_j$. 
For $SU(2)$, the only case considered explicitly here, all characters are real, $j=0, {1\over 2}, 1, {3\over 2}, \ldots$, and $d_j=(2j+1)$. Equation (\ref{Fourier}) implies that $F_0\geq F_j$ for all $j\not=0$. Explicitly, one finds \begin{equation} F_j(\beta) = {2\over \beta}\, I_{d_j}(\beta) \, \end{equation} in terms of the modified Bessel function $I_\nu$. It will be convenient to work in terms of normalized coefficients: \begin{equation} c_j(\beta) = {\displaystyle F_j(\beta) \over \displaystyle F_0(\beta)} \;, \label{ncoeffs} \end{equation} so that \begin{eqnarray} \exp \left(A_p(U,\beta)\right) &=& F_0\,\Big[\, 1 + \sum_{j\not= 0} d_j\,c_j(\beta)\,\chi_j(U)\, \Big] \nonumber \\ & \equiv & F_0\;f_p(U,\beta)\,. \label{nexp} \end{eqnarray} The (normalized) partition function on lattice $\Lambda$ is then \begin{equation} Z_\Lambda(\beta) = \int dU_\Lambda\;\prod_{p\in \Lambda}\,f_p(U_p,\beta)\equiv \int\,d\mu_\Lambda^0 \;,\label{PF1} \end{equation} where $dU_\Lambda\equiv \prod_{b\in \Lambda}dU_b$, and expectations are computed with the measure $d\mu_\Lambda = d\mu_\Lambda^0 / Z_\Lambda(\beta)$. The action (\ref{Wilson}) is such that \begin{equation} F_j (\beta)\geq 0\;, \qquad \mbox{hence}\quad 1\geq c_j(\beta) \geq 0\qquad\quad \mbox{all}\quad j \;, \end{equation} which implies that the measure defined by (\ref{PF1}) satisfies reflection positivity (RP) both in planes without sites and in planes with sites. Note that $\lim_{\beta\to \infty}c_j(\beta)=1$. 
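As a quick numerical illustration (a sketch using the standard power series for $I_\nu$; the value $\beta=2.5$ is arbitrary), the normalized coefficients $c_j(\beta)=I_{2j+1}(\beta)/I_1(\beta)$ can be tabulated and checked against the properties just stated:

```python
import math

def mod_bessel_I(n, x, terms=60):
    """Modified Bessel function I_n(x), integer n >= 0, via its power series."""
    return sum((x / 2) ** (n + 2 * m)
               / (math.factorial(m) * math.factorial(n + m))
               for m in range(terms))

def c_coeff(two_j, beta):
    """Normalized coefficient c_j = F_j/F_0 = I_{2j+1}(beta)/I_1(beta)."""
    return mod_bessel_I(two_j + 1, beta) / mod_bessel_I(1, beta)

beta = 2.5
cs = [c_coeff(tj, beta) for tj in range(1, 6)]   # j = 1/2, 1, ..., 5/2
assert all(0 < c < 1 for c in cs)                # 1 >= c_j >= 0
assert cs == sorted(cs, reverse=True)            # higher j more suppressed
print([round(c, 4) for c in cs])
```

At $\beta=2.5$ one finds $c_{1/2}\approx 0.507$, and every $c_j(\beta)$ grows toward $1$ as $\beta\to\infty$.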
Let $\Lambda^{(n)}$ be the hypercubic lattice of spacing $b^na$, with integer $b\geq 2$, and $Z_{\Lambda^{(n)}}(\{c_j(n)\})$ denote a partition function of the form (\ref{PF1}) defined on $\Lambda^{(n)}$ in terms of some given set of coefficients $\{c_j(n)\}$: \begin{eqnarray} Z_{\Lambda^{(n)}}(\{c_j(n)\}) & = & \int dU_{\Lambda^{(n)}} \prod_{p\in \Lambda^{(n)}} \Big[\, 1 + \sum_{j\not= 0} d_j \, c_j(n)\chi_j(U_p)\,\Big] \nonumber \\ & \equiv & \int dU_{\Lambda^{(n)}} \prod_{p\in \Lambda^{(n)}}\,f_p(U_p,n) \equiv \int\,d\mu_{\Lambda^{(n)}}^0 \,,\label{PF2} \end{eqnarray} where $dU_{\Lambda^{(n)}}\equiv \prod_{b\in \Lambda^{(n)}}dU_b$. We also employ the notations \begin{equation} g_p(U,n) \equiv f_p(U,n) -1 = \sum_{j\not= 0} d_j\, c_j(n) \, \chi_j(U) \;,\label{g} \end{equation} and $\| \cdot\|$ for the $\|\cdot\|_\infty$-norm: \begin{equation} \|g(n)\| = \sum_{j\not=0} d_j^2\, c_j(n) \,.\label{gnorm} \end{equation} One has the simple but basic result: \prop{For $Z_{\Lambda^{(n)}}(\{c_j(n)\})$ given by (\ref{PF2}) with $c_j(n) \geq 0$ for all $j$, and periodic boundary conditions, (i) $\dZ{n}(\{c_j(n)\})$ is an increasing function of each $c_j(n)$: \begin{equation} \partial \dZ{n}(\{c_i(n)\}) / \partial c_j(n) \geq 0 \; ;\label{PFder0} \end{equation} (ii) \begin{equation} Z_{\Lambda^{(n)}}(\{c_j(n)\}) \geq \Big[\,1 + \sum_{j\not=0} d_j^2\, c_j(n)^6 \,\Big]^{|\Lambda^{(n)}|} \; .\label{PFlowerb1} \end{equation} } (\ref{PFder0}) is an immediate consequence of RP in planes without sites. The proof of (\ref{PFlowerb1}), also based on RP, is given in Appendix A. Strict inequality in fact holds in (\ref{PFder0}) and (\ref{PFlowerb1}), with equality only in the trivial case where all $c_j(n)$'s vanish. In particular, one has \begin{equation} Z_{\Lambda^{(n)}}(\{c_j(n)\}) \; > \; 1 \,.\label{PFlowerb2} \end{equation} Simple as (\ref{PFlowerb2}) is, it is not trivial, as it requires non-negativity of $c_j(n)$'s, and will be useful in the following. 
\subsection{Construction of decimation transformations} \label{DEC1} To perform an RG transformation $a \to ba$, the lattice is partitioned into $d$-dimensional decimation cells of side length $ba$. Various approximate decimation transformations may be devised involving the `weakening', i.e. decreasing the $c_j$'s of interior plaquettes, while compensating by `strengthening', i.e. increasing $c_j$'s of boundary plaquettes of each cell. The simplest such scheme \cite{M}, which is adopted in the following, implements complete removal of interior plaquettes. This may be pictured \cite{K} as moving the potentials due to interior plaquette interactions to the boundary. This `potential moving' may be performed as the composition of elementary steps. The elementary potential moving step is defined in terms of a $3$-dimensional cell of side length $ba$ in a given decimation direction, say the $x^\kappa$-direction, and length $a$ in the other two directions $\mu$, $\nu$. Two such $3$-cells adjacent along the $\kappa$-direction are shown in Figure~\ref{Dec1fig}. The $(b -1)$ interior plaquettes in each cell perpendicular to $x^\kappa$ (shaded) are removed, i.e. \begin{equation} A_p(U_p) \to 0 \label{potmove1} \end{equation} for the action at their original location, and displaced (arrows) in the positive $x^\kappa$ direction to the position of the corresponding plaquette (bold) on the cell boundary. 
There the displaced interior plaquettes are combined with the boundary plaquette into one plaquette $p$ with action `renormalized' by some appropriate amount\footnote{One may take this renormalization factor to depend on the move direction, but we need not consider these more general transformations here.} $\zeta_0$: \begin{equation} A_p(U) \to \zeta_0\,A_p(U) \;.\label{potmove2} \end{equation} \begin{figure}[ht] \resizebox{15cm}{!}{\input{Dec1.pstex_t}} \caption{Basic plaquette moving operation, $b=2$ \label{Dec1fig}} \end{figure} A complete transformation consists of performing this elementary operation successively in every lattice direction $\kappa=1, \ldots, d$ in such a way that eventually one is left only with plaquette interactions on a lattice of spacing $ba$. In practice, there is no reason for a choice other than $b=2$, but, for clarity, we keep general (integer) $b$. The result of a complete transformation is given by equations (\ref{RG1})-(\ref{RG5}) below, to which a reader may turn directly. To describe this process in more detail, let the lattice be partitioned into $d$-dimensional hypercubic decimation cells $\sigma^d$ of side length $b a$ in each lattice direction. Plaquettes interior to a $\sigma^d$ are defined as those not wholly contained in its $(d-1)$-dimensional boundary $\partial \sigma^d$. Consider the effect of successive application of the elementary moving operation to plaquettes of fixed orientation, say $[\mu\nu]$. There are $(d-2)$ normal directions $\kappa_i \not= \mu,\nu$, $i=1, \cdots, d-2$, in which a plaquette $p_{\mu\nu}$ can be moved. Interior $p_{\mu\nu}$'s in each $\sigma^d$ are first moved to the cell boundary $\partial \sigma^d$ in groups of $(b-1)$ parallel plaquettes, along, say, the positive $\kappa_1$-direction (as in Figure~\ref{Dec1fig}). They end up in the face $\sigma^{(d-1)}_{\kappa_1} \subset \partial \sigma^d$ perpendicular to the $\kappa_1$-axis. 
There each group is identified with the plaquette present at that location and merged into one plaquette $p_{\mu\nu} \in \sigma^{(d-1)}_{\kappa_1}$ with a `renormalized' action (\ref{potmove2}). Similarly, $p_{\mu\nu}$ plaquettes in each face $\sigma^{(d-1)}_{\kappa_i}\subset \partial \sigma^d$, with $i\not= 1, \mu,\nu$ are moved along the $\kappa_1$-axis in groups of $(b-1)$ to the face $\sigma^{(d-2)}_{\kappa_1\kappa_i} \subset \partial \sigma^{(d-1)}_{\kappa_i}$ normal to the $\kappa_1$ and $\kappa_i$ directions, where they are merged and renormalized. There are now $(d-3)$ directions inside the $(d-1)$-dimensional face $\sigma^{(d-1)}_{\kappa_1}$ in which a $[\mu\nu]$-plaquette can move. Thus, in proceeding to apply the elementary moving operation successively in all directions, the once-moved-renormalized $p_{\mu\nu}$'s in $\sigma^{(d-1)}_{\kappa_1}$ are next moved, in groups of $(b-1)$ plaquettes in the positive $\kappa_2$-direction, to the face $\sigma^{(d-2)}_{\kappa_1\kappa_2} \subset \partial \sigma^{(d-1)}_{\kappa_1}$. Similarly, the once-moved-renormalized $p_{\mu\nu}$'s inside a face $\sigma^{(d-2)}_{\kappa_1\kappa_i}$ are moved provided $\kappa_2$ is among the $(d-4)$ available directions normal to a $[\mu\nu]$-plaquette inside $\sigma^{(d-2)}_{\kappa_1\kappa_i}$. Continuing this process in the remaining directions $\kappa_i$, $i=3,\ldots,(d-2)$, the set of $[\mu\nu]$-plaquettes on the initial lattice ends up in the $2$-dimensional faces $\sigma^2_{\kappa_1\kappa_2\ldots\kappa_{(d-2)}} \subset \partial \sigma^3_{\kappa_1\kappa_2\ldots\kappa_{(d-3)}} \subset \cdots \subset \partial \sigma^{(d-1)}_{\kappa_1}$. The above process, described for plaquettes of one fixed orientation $[\mu\nu]$, is carried out for each of the $d(d-1)/2$ possible choices of plaquette orientation \cite{K}. The end result of the process is then a lattice having elementary $2$-faces of side length $b a$, each tiled by $b^2$ plaquettes of side length $a$. 
The action of each of these $b^2$ plaquettes has been renormalized according to (\ref{potmove2}) by a total factor of \begin{equation} \zeta_0^{(d-2)} \equiv \zeta \;. \label{totalren} \end{equation} This is expressed by (\ref{RG5}) below. The integrations over the bonds interior to each $2$-face of side length $ba$ are now carried out. This merges the $b^2$ tiling plaquettes into a single plaquette of side length $ba$. These integrations are exact and do not change the value of the partition function that resulted after the completion of the plaquette moving operations. We, however, allow a further renormalization of the result of these integrations by introducing, in addition to $\zeta_0$, another parameter, $r$ (cf. (\ref{RG2}) below). This completes the decimation transformation to a hypercubic lattice of spacing $ba$. The important feature of this decimation transformation is that it preserves the original one-plaquette form of the action, so the result can again be represented in the form (\ref{nexp}). The transformation rule for successive decimations \begin{eqnarray} & & a\, \to b\, a \,\to\, b^2 a \to\, \cdots \to\,b^{n-1} a \to \,b^n a \to \cdots\nonumber \\ & & \Lambda \to \Lambda^{(1)} \to \Lambda^{(2)} \to \cdots \to \Lambda^{(n-1)} \to \Lambda^{(n)} \to \cdots\;,\nonumber \end{eqnarray} is then: \begin{equation} f_p(U,n-1)\to F_0(n)\,f_p(U,n) = F_0(n)\,\Big[ 1 + \sum_{j\not= 0} d_j\,c_j(n)\,\chi_j(U) \Big] \,.\label{RG1} \end{equation} The $n$-th step coefficients $F_0(n)$, $c_j(n)$ are obtained from the coefficients $c_j(n-1)$ of the previous step by \begin{equation} c_j(n) = \hat{c}_j(n)^{b^2 r}\; , \label{RG2} \end{equation} \begin{equation} F_0(n) = \hat{F}_0(n)^{b^2} \label{RG3} \end{equation} where \begin{equation} \hat{c}_j(n)\equiv \hat{F}_j(n)/\hat{F}_0(n) \leq 1 \;, \qquad j\not= 0\;, \label{RG4} \end{equation} and \begin{equation} \hat{F}_j(n)= \int\,dU\;\Big[\,f(U,n-1)\,\Big]^\zeta\, {1\over d_j}\,\chi_j(U) \; . 
\label{RG5} \end{equation} The $n=0$ coefficients are the coefficients $c_j(\beta)$ on the original lattice $\Lambda$. Equation (\ref{RG5}) encodes the end result of the plaquette-moving and renormalization operations described above, with $\zeta$ of the form (\ref{totalren}); (\ref{RG2}) and (\ref{RG3}) encode that of the subsequent 2-dimensional integrations, and the further renormalization by the parameter $r$. It is easily seen that $f_p(U, n) \;>\; 0$ given that this holds for $n=0$ (cf. (\ref{exp}), (\ref{nexp})). The effective plaquette action on lattice $\Lambda^{(n)}$ of spacing $b^n a$ is then \begin{eqnarray} f_p(U, n) & = & \Big[\, 1 + \sum_{j\not= 0} d_j\,c_j(n)\, \chi_j(U)\, \Big] \label{nexp1} \\ & \equiv & \exp\Big(\, A_p(U, n)\,\Big) \;, \label{actdef} \end{eqnarray} with effective couplings defined by the character expansion \begin{equation} A_p(U, n) = \beta_0(n) + \sum_{i\not= 0} \beta_i(n)\,d_i\,\chi_i(U) \;. \label{effact} \end{equation} A point on notation. In the above we used the notations $F_0(n)$, $c_j(n)$, $\beta_j(n)$, etc, which do not display the full set of explicit or implicit dependences of these quantities. Thus, a more complete notation is: \begin{eqnarray} c_j(n) &=& c_j(\,n,b,\zeta,r,\{c_j(n-1)\}\,) \nonumber \\ F_0(n) &=& F_0(\,n,b,\zeta,\{c_j(n-1)\}\,) \label{short-hand} \end{eqnarray} Dependence on the original coupling $\beta$ comes, of course, iteratively through the coefficients $\{c_j(n-1)\}$ of the preceding step. Because of the iterative nature of many of the arguments in this paper several explicit and implicit dependences propagate to most of the quantities used in the following. To prevent notation from getting out of hand we generally employ short-hand notations such as those on the l.h.s. of (\ref{short-hand}), unless specific reference to particular dependences is required. 
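Because $f_p$ is a finite sum of characters when one starts from finitely many nonzero coefficients, a decimation step with integer $\zeta$ can be evaluated exactly by character algebra alone. The following sketch (with illustrative values $b=2$, $d=4$, hence $\zeta=b^{d-2}=4$, $r=1$, and a single toy starting coefficient $c_{1/2}=0.2$, none of which are taken from the text) stores a class function as a map from $2j$ to its character coefficient and multiplies using the standard $SU(2)$ product rule $\chi_i\,\chi_j=\sum_{k=|i-j|}^{i+j}\chi_k$:

```python
def cg_multiply(A, B):
    """Multiply character expansions {2j: coeff} with the SU(2) product rule."""
    out = {}
    for ti, ai in A.items():
        for tj, bj in B.items():
            for tk in range(abs(ti - tj), ti + tj + 1, 2):
                out[tk] = out.get(tk, 0.0) + ai * bj
    return out

def char_power(A, zeta):
    """A**zeta in the character basis, for integer zeta >= 0."""
    out = {0: 1.0}
    for _ in range(zeta):
        out = cg_multiply(out, A)
    return out

b, zeta, r = 2, 4, 1                    # zeta = b**(d-2) for d = 4
f = {0: 1.0, 1: 2 * 0.2}                # f = 1 + d_{1/2} c_{1/2} chi_{1/2}
fz = char_power(f, zeta)                # [f]^zeta, whose coefficients give F^_j
Fhat = {tj: a / (tj + 1) for tj, a in fz.items()}    # divide out d_j = 2j+1
F0_new = Fhat[0] ** (b * b)                          # F_0(n) = F^_0 ^ {b^2}
c_new = {tj: (Fhat[tj] / Fhat[0]) ** (b * b * r)     # c_j(n) = c^_j ^ {b^2 r}
         for tj in Fhat if tj != 0}
print(F0_new, c_new)
```

One checks that $F_0(1)\geq 1$ and $0\leq c_j(1)\leq 1$, and that all representations up to $j=2$ are generated even from a single starting coefficient, as stated below for the general case.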
The resulting partition function after $n$ such decimation steps is: \begin{equation} Z_\Lambda(\beta, n) = \prod_{m=1}^n F_0(m)^{|\Lambda|/b^{md}}\; Z_{\Lambda^{(n)}}(\{c_j(n)\}) \; ,\label{PF2a} \end{equation} with $Z_{\Lambda^{(n)}}(\{c_j(n)\})$ of the form (\ref{PF2}) and coefficients (\ref{short-hand}) resulting after $n$ steps according to (\ref{RG2}) - (\ref{RG5}). The bulk free energy density resulting from decimating from scale $a$ to $b^n a$ is then $\sum_{m=1}^n \ln F_0(m) /b^{md}\,$, each term in this sum representing the contribution from $b^{(m-1)}a \to b^m a$ as specified by (\ref{RG1}). The partition function (\ref{PF2a}) is, of course, not equal to the original partition function $Z_\Lambda(\beta)$ of (\ref{PF1}) since the decimation transformation is not exact. How they are related will be addressed below. \subsection{Some properties of the decimation transformations}\label{DEC2} The transformation rule specified by (\ref{RG1})-(\ref{RG5}) is meaningful for real positive $\zeta$. Here, however, a basic distinction can be made. As is clear from (\ref{RG5}), for {\it integer} $\zeta$ the important property of positivity of the Fourier coefficients in (\ref{RG1}) is maintained at each decimation step: \begin{equation} F_0(n)\geq 1\;, \qquad 1\geq c_j(n)\geq 0 \qquad \qquad (\mbox{integer} \ \zeta) \;. \label{+c} \end{equation} This means that reflection positivity is maintained at each decimation step. This clearly is not guaranteed to be the case for non-integer $\zeta$. Thus non-integer $\zeta$ results in transformations that, in general, violate the reflection positivity of the theory (assuming a reflection positive original action). It is important in this connection that, after each decimation step, the resulting action retains the original one-plaquette form, but will generally contain {\it all} representations in (\ref{effact}). Furthermore, negative ones will occur among the effective couplings $\beta_j(m)$. 
These features are present in general, even after a single decimation step $a\to ba$ starting, as we did, with the single (fundamental) representation Wilson action (\ref{Wilson}). For integer $\zeta$, however, the resulting effective action (\ref{effact}), even in the presence of some negative couplings, still defines a reflection positive measure, since, as just noted, the expansion of its exponential (\ref{nexp1}) gives positive coefficients (\ref{+c}). It is also worth noting that, given a set of initial coefficients (\ref{ncoeffs}), the transformation rule (\ref{RG2}) - (\ref{RG5}) with integer $\zeta$ can be explicitly evaluated, to any desired accuracy, by purely algebraic operations, namely repeated application of the Clebsch-Gordan reduction rule \begin{equation} \chi_i\,\chi_j =\sum_{k=|i-j|}^{i+j} \chi_k \label{KG} \end{equation} in (\ref{RG5}) and character orthogonality -- no actual integrations need be carried out. The choice (cf. (\ref{totalren})) \begin{equation} \zeta_0 = 1 +(b -1 ) \qquad \Longrightarrow \qquad \zeta=b^{d-2} \label{MKz} \end{equation} is special. It increases the couplings of receiving plaquettes, at each basic moving step, by an amount exactly equal to that of the corresponding displaced plaquettes. This, together with $r=1$, is essentially the original choice in \cite{M} as reformulated in \cite{K}, and will be referred to as MK decimation. It will be important in the following.\footnote{It is worth noting in this context that in numerical investigations of the standard MK recursions in gauge theories \cite{NT-BGZ} fractional $b$ ($1< b <2$), which by (\ref{MKz}) corresponds to non-integer $\zeta$, has often been used.} There are various other interesting properties of the decimations that can be derived from (\ref{RG2}) - (\ref{RG5}). The following one is particularly important. 
The norm (\ref{gnorm}) of the coefficients obtained by application of (\ref{RG2}) - (\ref{RG5}) with integer $\zeta\geq 1$ and $r=1$ satisfies (Appendix D): \begin{equation} ||g(n+1)|| \leq \Big[\,\zeta\,||g(n)||\,\Big]^{b^2} \Big[\, 1+ ||g(n)||\,\Big]^{(\zeta-1)b^2} \,.\label{gnormrecur} \end{equation} Assume now that \begin{equation} ||g(n)|| \leq \exp (- C_n)\,, \qquad C_n > 0 \,, \label{gnormU1} \end{equation} for some $n$. Then \begin{equation} ||g(n+1)|| \leq \Big[\,\zeta\,||g(n)||\,\Big]^{b^2} \exp \left(\, (\zeta-1)\, b^2\,\right) \leq \exp \Big[ -(\, C_n - k\,) b^2 \Big] \,, \label{gnormrecurU1} \end{equation} where $k= \ln \zeta + (\zeta-1)$. The recursion \begin{equation} C_{n+1} = C_n b^2 - k b^2 \label{gnormrecurU2} \end{equation} gives \begin{equation} C_{n+m} = \Big[\, C_n - {b^2 k\over b^2-1}\,\Big] b^{2m} + {b^2 k\over b^2-1} \,.\label{gnormsoln} \end{equation} \prop{If for some $n$ the norm of coefficients (\ref{gnorm}) obeys (\ref{gnormU1}) with \begin{equation} C_n > {b^2 k\over b^2-1}\, , \label{gnormU2} \end{equation} then, under iteration of the decimation transformation (\ref{RG2}) - (\ref{RG5}), $||g(n+m)||\to 0$ as $m\to \infty$ according to (\ref{gnormsoln}). }\\ This fall-off behavior is immediately recognizable as ``area-law''. If one assumes that $c_j(n)$ are small enough so that the theory is within the strong coupling regime, this behavior can be immediately deduced for the leading coefficient $c_{1/2}(n)$ directly from (\ref{RG2}) - (\ref{RG5}): \begin{equation} c_{1/2}(n+1) = c_{1/2}(n)^{b^2} \exp \Big(\,[\,\ln \zeta + O(c_{1/2}(n))\,]\,b^2\, \Big)\,. \label{RGstrong} \end{equation} The result (\ref{gnormrecurU1}) gives then an estimate of the corrections due to all higher representations. What is noteworthy here, however, is that the condition (\ref{gnormU2}) is rather weaker than the commonly stated conditions for being inside the convergence radius of the strong coupling cluster expansion (cf. section \ref{CONf}). 
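The linear recursion for $C_n$ and its closed-form solution are easy to verify numerically; in this sketch the values $b=2$, $\zeta=4$ (the MK values for $d=4$) and the starting value $C_n=10$ are purely illustrative:

```python
import math

b, zeta = 2, 4
k = math.log(zeta) + (zeta - 1)        # k = ln(zeta) + (zeta - 1)
K = b**2 * k / (b**2 - 1)              # threshold b^2 k / (b^2 - 1)

C0 = 10.0
assert C0 > K                          # condition for runaway growth of C_n

seq = [C0]                             # iterate C_{m+1} = C_m b^2 - k b^2
for _ in range(6):
    seq.append((seq[-1] - k) * b**2)

closed = [(C0 - K) * b**(2 * m) + K for m in range(7)]   # closed form
assert all(abs(x - y) < 1e-6 * max(1.0, abs(y)) for x, y in zip(seq, closed))
assert seq[-1] > seq[0]                # C_m grows like b^{2m}
print([round(C) for C in seq])
```

Since $C_{n+m}$ grows like $b^{2m}$, the bound $\|g(n+m)\|\leq e^{-C_{n+m}}$ decays in proportion to the area $b^{2m}$ of the blocked plaquette, which is the `area-law' behavior noted above.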
We note two further properties of the decimation transformations (\ref{RG1}) - (\ref{RG5}). The first is that with $r=1$ they become exact in space-time dimension $d=2$ since then, from (\ref{totalren}), $\zeta=1$. The second is that, with $\zeta=b^{(d-2)}$, vanishing coupling $g=0$ is a fixed point in any $d$, i.e. MK decimation is exact at zero coupling. This follows simply from the fact that \[ \lim_{\beta\to \infty} \Big[\int d\nu(x)\; e^{\beta f(x)} \Big]^{1/\beta} =\mbox{ess. sup}\ e^{f(x)} \equiv \| e^f\| \] for any normalized measure $d\nu(x)$. Applying this to the result of performing the plaquette moving operation starting from (\ref{nexp}), and with $p^\prime\in \Lambda$ labeling the plaquettes tiling the plaquettes $p\in \Lambda^{(1)}$, one has \begin{eqnarray} & & \lim_{\beta\to \infty} \Bigg[\int dU_\Lambda \, \prod_{p\in \Lambda^{(1)}} \prod_{p^\prime \in p}\exp\Big(\beta b^{(d-2)}\,{1\over 2} \chi_{1/2}(U_{p^\prime})\Big) \Bigg]^{1/\beta} \nonumber \\ & = & \prod_{p\in\Lambda^{(1)}} \Big\|\exp \Big(b^{(d-2)}{1\over2} \chi_{1/2}\Big) \Big\|^{b^2} = e^{|\Lambda|} = \lim_{\beta\to \infty} \Bigg[\int dU_\Lambda\,\prod_{p\in \Lambda} \exp\Big(\beta \,{1\over 2} \chi_{1/2}(U_p)\Big) \Bigg]^{1/\beta} \;.\qquad \end{eqnarray} This clearly holds also for $r\not= 1$, as is evident from the fact that $\lim_{\beta\to \infty} c_j(\beta)=1$. This fixed point is easily seen to be unstable. \section{Partition function} \label{Z} \setcounter{equation}{0} \setcounter{Roman}{0} Since our decimations are not exact RG transformations, the partition function does not in general remain invariant under them. The subsequent development hinges on the following two basic propositions that relate partition functions under such a decimation. \subsection{Upper and lower bounds}\label{u-lPF} Consider a partition function $Z_{\Lambda^{(n-1)}}$ on lattice $\Lambda^{(n-1)}$ of the form (\ref{PF2}) given in terms of some set of coefficients $\{c_j(n-1)\}$. 
Apply a decimation transformation (\ref{RG1}) - (\ref{RG5}) performed with $\zeta=b^{(d-2)}$. Denote the resulting coefficients by $c_j^U$, $F_0^U$, i.e. \begin{eqnarray} c_j^U(n,r) & \equiv & c_j(\, n,b,\,\zeta=b^{(d-2)}, r, \{c_j(n-1)\}\,) \label{upperc}\\ F_0^U(n) & \equiv & F_0( \, n,b,\, \zeta=b^{(d-2)}, \{c_j(n-1)\}\,) \label{upperF} \;. \end{eqnarray} Note that \begin{equation} c_j^U(n,r) = c_j^U(n,1)^r \, . \label{upperc1} \end{equation} \prop{ For $Z_{\Lambda^{(n-1)}}$ of the form (\ref{PF2}), a decimation transformation (\ref{RG1}) - (\ref{RG5}) with $\zeta=b^{d-2}$ and $0 < r\leq 1$ results in an upper bound on $Z_{\Lambda^{(n-1)}}$: \begin{equation} Z_{\Lambda^{(n-1)}}(\{c_j(n-1)\})\, \leq \, F_0^U(n)^{|\Lambda^{(n)}|}\, Z_{\Lambda^{(n)}}(\{c_j^U(n,r)\})\;.\label{U} \end{equation} The r.h.s. in (\ref{U}) is a monotonically decreasing function of $r$ on $0 < r\leq 1$. } Given a partition function $Z_{\Lambda^{(n-1)}}$ on lattice $\Lambda^{(n-1)}$ of the form (\ref{PF2}) in terms of some set of coefficients $\{c_j(n-1)\}$, let \begin{eqnarray} c_j^L(n) & \equiv & c_j(n-1)^6 \label{lowerc1}\\ F_0^L(n) & \equiv & 1 \;.\label{lowerF1} \end{eqnarray} \prop{ For $\dZ{(n-1)}$, $\dZ{n}$ of the form (\ref{PF2}): \begin{equation} Z_{\Lambda^{(n)}}(\{c_j^L(n)\}) \, \leq \, Z_{\Lambda^{(n-1)}}(\{ c_j(n-1)\}) \;. \label{L} \end{equation} } The proof of III.1 is given in Appendix A, where somewhat stronger results than (\ref{U}) are actually obtained. III.2 is a corollary of (\ref{PFlowerb1}) (Appendix A). For the argument in the rest of this paper, the precise form of the lower bound is in fact not important. By II.1(i) a further lower bound is obtained by replacing $c_j^L(n)$ in III.2 by, for example, \begin{equation} c_j^L(n) \equiv c_j(n-1)^6 \,c_j^U(n,r) \label{lowerc2} \end{equation} since $0\leq c_j^U(n,r)\leq 1$. Another choice is to simply set \begin{equation} c_j^L(n)=0 \,,\label{lowerc3} \end{equation} which is a restatement of (\ref{PFlowerb2}).
A related lower bound, which, in analogy to the upper bound in III.1, can be formulated directly in terms of the transformations (\ref{RG1}) - (\ref{RG5}), is obtained by taking $c_j^L(n)$ in III.2 to be given by: \begin{eqnarray} c_j^L(n) & \equiv & c_j(\, n,b,\,\zeta=1, r=1, \{c_j(n-1)\}\,) \nonumber\\ & = & c_j(n-1)^{b^2} \,, \label{lowerc4}\\ F_0^L(n) & \equiv & F_0( \, n,b,\, \zeta=1, \{c_j(n-1)\}\,) \nonumber \\ & = & 1 \; .\label{lowerF2} \end{eqnarray} With this choice of $c_j^L(n)$, note that III.1 - III.2 imply that the decimations (\ref{RG1}) - (\ref{RG4}) become exact for $d=2$ and $r=1$. III.1 says that, after removal of interior plaquettes, modifying the couplings of the remaining plaquettes by taking $\zeta=b^{d-2}$ (and $r\leq 1$) results in overcompensation. III.2 says that decimating plaquettes while leaving the couplings of the remaining plaquettes unaffected ($\zeta=1$, $r=1$) results in undercompensation. The proof of III.2 for $c_j^L(n)$ given by (\ref{lowerc4}) is similar to that of II.1, but need not be given here, since the weaker bounds above will suffice. In the following it will in fact be more convenient to take (\ref{lowerc2}) or (\ref{lowerc3}) for the definition of the lower bound coefficients $c_j^L(m)$. Use of the stronger lower bounds above may be preferable for numerical investigations, but does not contribute anything further to the argument in this paper. III.1 and III.2 give upper and lower bounds on the partition function after a decimation step. It is then natural to interpolate between these bounds. \subsection{Interpolation between upper and lower bounds}\label{interbounds} Introducing a parameter $\alpha \in [0,1]$, we define coefficients $\tilde{c}_j(m,\alpha,r)$ interpolating between $c_j^L$ at $\alpha=0$ and $c_j^U$ (\ref{upperc}) at $\alpha=1$: \begin{equation} \tilde{c}_j(m,\alpha, r) = (1-w(\alpha))\, c_j^L(m) + w(\alpha)\, c_j^U(m,r) \;, \quad \qquad 0 < r \leq 1.
\label{interc1} \end{equation} with \begin{equation} w(0)=0\;, \qquad \quad w(1)=1\;, \quad \qquad w^\prime(\alpha) > 0 \;. \label{interc2} \end{equation} For example, \begin{equation} w(\alpha) = {e^\alpha-1\over e-1} \;. \label{w} \end{equation} There is clearly a variety of other choices than (\ref{interc1}) for these interpolating coefficients. We always require that \begin{equation} \partial\, \tilde{c}_j(m,\alpha,r) /\partial \,\alpha > 0 \;, \label{interc3} \end{equation} which is satisfied by (\ref{interc1}) - (\ref{interc2}). Similarly, we define coefficients interpolating between (\ref{lowerF1}) and (\ref{upperF}). For our purposes it will be convenient to take \begin{equation} \tilde{F}_0(m,h, \alpha,t) = F_0^U(m)^{h_t(\alpha)} \label{interF1} \;, \end{equation} where $h_t$ denotes a family of monotonically increasing smooth functions of $\alpha$, labeled by a parameter $t \in [t_a,t_b]$, and such that \begin{equation} h_t(0)=0\;, \qquad h_t(1)=1 \,. \label{hlimits} \end{equation} We write $h_t(\alpha) \equiv h(\alpha,t)$. Examples are\footnote{Supplementing these definitions at $\alpha=0$ as needed is understood. Thus, $h(\alpha,t)= 0$ on $\alpha \leq 0$ in the first example in (\ref{h}); and standard smoothing in the second example: replace $\alpha$ in $h$ by $g_\epsilon(\alpha)=\int \rho_\epsilon(\alpha-x) g(x)dx$, where $g(x)=x$ for $x>0$, $g(x) =0$ for $x\leq 0$, and $\rho_\epsilon(x)$ is $C^\infty$, has support inside $|x|^2\leq \epsilon^2$ and satisfies $\rho_\epsilon \geq 0$ and $\int \rho_\epsilon =1$. } \begin{eqnarray} h(\alpha,t) & = & \exp \left( -\, \sigma(t)\,{1-\alpha\over \alpha}\right)\;, \qquad h(\alpha,t) =\alpha^{\sigma(t)}\, , \qquad h(\alpha,t) = \tanh\Big({\alpha \over \sigma(t)\,(1-\alpha)}\Big)\,, \nonumber \\ & & \qquad \qquad \qquad \hspace{2cm} 0 < \alpha \leq 1,\qquad 0 < t_a \leq t \leq t_b < \infty \,, \label{h} \end{eqnarray} where $\sigma(t)$ is a smooth monotonically increasing positive function on $[t_a,t_b]$, e.g.
$\sigma(t) = t$. The interpolating partition function on $\Lambda^{(m)}$ constructed from $\tilde{c}_j$ and $\tilde{F}_0$ is now defined by \begin{equation} \tilde{Z}_{\Lambda^{(m)}}(\beta,h,\alpha,t,r) = \tilde{F}_0(m,h,\alpha,t)^{|\Lambda^{(m)}|}\, Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m,\alpha,r)\}) \label{interPF1} \end{equation} where \begin{eqnarray} Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m,\alpha,r)\}) & = & \int dU_{\Lambda^{(m)}}\;\prod_{p\in \Lambda^{(m)}}\,\Big[ 1 + \sum_{j\not= 0} d_j\, \tilde{c}_j(m,\alpha, r) \, \chi_j(U_p) \Big] \nonumber \\ & \equiv & \int dU_{\Lambda^{(m)}}\;\prod_{p\in \Lambda^{(m)}}\, f_p(U_p,m,\alpha,r) \,. \label{interPF2} \end{eqnarray} Combining II.1, (\ref{interc3}), and the fact that $\tilde{F}_0$ is, by definition, also an increasing function of $\alpha$, one has \\ \prop{ The interpolating free energies $\ln Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m,\alpha, r)\})$ and $\ln \tilde{Z}_{\Lambda^{(m)}}(\beta,h, \alpha,t,r)$ are increasing functions of $\alpha$: \begin{equation} \partial \ln Z_{\Lambda^{(m)}}\Big(\{\tilde{c}_j(m,\alpha,r) \}\Big) /\partial \alpha \,>\, 0 \,. \label{interPFder1} \end{equation} \\ } Equality in (\ref{interPFder1}) applies only in the trivial case where all the coefficients $\tilde{c}_j$ vanish. In terms of (\ref{interPF1}), III.1 and III.2 give \begin{equation} \tilde{Z}_{\Lambda^{(m)}}(\beta,h,0,t,r) \leq Z_{\Lambda^{(m-1)}} \leq \tilde{Z}_{\Lambda^{(m)}}(\beta,h,1,t,r) \,. \label{interI1} \end{equation} Now $\tilde{Z}_{\Lambda^{(m)}}(\beta,h, \alpha,t,r)$ is continuous in $\alpha$.
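The defining properties (\ref{interc2}) and (\ref{hlimits}), together with monotonicity in $\alpha$ and $\partial h/\partial t<0$, are straightforward to verify numerically for the examples (\ref{w}) and (\ref{h}); a minimal sketch using $\sigma(t)=t$ and the choice $h(\alpha,t)=\alpha^{\sigma(t)}$ (the evaluation grid is an arbitrary choice):

```python
import math

def w(a):
    # interpolation weight (eq. w): w(0) = 0, w(1) = 1, w'(a) > 0
    return (math.exp(a) - 1.0) / (math.e - 1.0)

def h(a, t):
    # second example in (eq. h) with sigma(t) = t: h(alpha, t) = alpha^t
    return a ** t

grid = [i / 100.0 for i in range(1, 100)]   # interior points of (0, 1)

assert abs(w(0.0)) < 1e-12 and abs(w(1.0) - 1.0) < 1e-12
assert all(w(a2) > w(a1) for a1, a2 in zip(grid, grid[1:]))   # monotone in alpha

assert abs(h(1.0, 2.0) - 1.0) < 1e-12 and h(1e-9, 2.0) < 1e-12
assert all(h(a, 3.0) < h(a, 2.0) for a in grid)               # dh/dt < 0 on 0 < alpha < 1
assert all(h(a2, 2.0) > h(a1, 2.0) for a1, a2 in zip(grid, grid[1:]))
```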
It follows from (\ref{interI1}) that there exists a value of $\alpha$ in $(0,1)$: \begin{equation} \alpha(m, h,t,r,\{c_j(m-1)\},b,\Lambda)\equiv \alpha_{\Lambda,h}^{(m)}(t,r) \label{interI2} \end{equation} such that \begin{equation} \tilde{Z}_{\Lambda^{(m)}}(\beta,h,\alpha_{\Lambda,h}^{(m)}(t,r), t,r) = Z_{\Lambda^{(m-1)}} \,.\label{interI3} \end{equation} In other words, at each given value of $t$, $r$, there exists a value of $\alpha$ at which the partition function on $\Lambda^{(m)}$, resulting from a decimation transformation $\Lambda^{(m-1)} \to \Lambda^{(m)}$, equals the partition function on $\Lambda^{(m-1)}$. This value is unique by III.3. By construction, $\alpha_{\Lambda,h}^{(m)}(t,r)$ is such that (\ref{interI3}) remains invariant under variation of $t$, $r$ in their domain of definition, i.e. $\alpha_{\Lambda,h}^{(m)}(t,r)$ represents the level surface of the function $\tilde{Z}_{\Lambda^{(m)}}(\beta,h,\alpha, t,r)$ fixed by the value $Z_{\Lambda^{(m-1)}}$. The parametrization invariance under varying $t$ will be important later. We now examine the dependence on $t$, $r$ in (\ref{interI2}) more closely. Given $Z_{\Lambda^{(m-1)}}$ and some interpolation $h$, assume that (\ref{interI3}) is satisfied at the point $(t_0, r_0, \alpha=\alpha_{\Lambda,h}^{(m)})$. Then, by the implicit function theorem, applicable by III.3, there is a function $\alpha_{\Lambda,h}^{(m)}(t,r)$ with continuous derivatives such that $\alpha_{\Lambda,h}^{(m)}(t_0,r_0)=\alpha_{\Lambda,h}^{(m)}$, and which uniquely satisfies (\ref{interI3}) in a sufficiently small neighborhood of $ (t_0, r_0, \alpha_{\Lambda,h}^{(m)})$. But since a solution to (\ref{interI3}) exists for each choice of $t,r$ in their domain of definition, this neighborhood can be extended by a standard continuity argument to all points of this domain. $\alpha_{\Lambda,h}^{(m)}(t,r)$ then represents the regular level surface of the function (\ref{interPF1}) fixed by (\ref{interI3}).
Furthermore, \begin{equation} {\partial \alpha_{\Lambda,h}^{(m)}(t,r)\over \partial t} = v(\alpha_{\Lambda,h}^{(m)}(t,r), t, r) \;, \label{alphtder1} \end{equation} where \begin{equation} v(\alpha, t, r) \equiv - { \displaystyle {\partial h(\alpha,t) / \partial t} \over{\displaystyle {\partial h(\alpha,t)\over \partial \alpha} + A_{\Lambda^{(m)}}(\alpha, r)} } \;,\label{alphtder2} \end{equation} with \begin{equation} A_{\Lambda^{(m)}}(\alpha, r) \equiv {1\over \ln F_0^{U}(m) }\, {1\over |\Lambda^{(m)}| }\, {\partial \over \partial\alpha }\ln Z_{\Lambda^{(m)}}\,\Big(\{\tilde{c}_j( m, \alpha, r)\}\Big) > 0\;. \label{alphtder3} \end{equation} We will always assume that $h$ is chosen such that $\partial h/\partial t$ is negative. This is the case with the examples (\ref{h}). Then, from (\ref{alphtder2}), $v>0$ on $0 <\alpha < 1$, with $v=0$ at $\alpha=0$ and $\alpha=1$. It is also useful to equivalently view $\alpha_{\Lambda,h}^{(m)}(t,r)$ as the solution to the ODE \begin{eqnarray} d\alpha/dt & =& v(\alpha, t, r) \,, \qquad \alpha\in (0,1)\;,\label{ODE}\\ \alpha(t_0) & =& \alpha_{\Lambda,h}^{(m)} > 0\;, \qquad t_0 \in [t_a,t_b]\;. \nonumber \end{eqnarray} Then standard results of ODE theory imply the existence of a unique solution in a neighborhood of $\alpha_{\Lambda,h}^{(m)}>0$, which can in fact be extended indefinitely forward for all $t\geq t_0$.\footnote{Indeed, $v$ is differentiable on $\alpha_{\Lambda,h}^{(m)}\leq \alpha\leq 1$ and vanishes at $\alpha=1$.} A short computation using (\ref{alphtder1}) gives \begin{equation} {d h(\alpha_{\Lambda,h}^{(m)}(t,r),t)\over dt} = - {\partial \alpha_{\Lambda,h}^{(m)}(t,r)\over \partial t} \, A_{\Lambda^{(m)}}(\alpha_{\Lambda,h}^{(m)}(t,r), r) \,, \label{htder} \end{equation} as required for consistency with (\ref{interI3}). (\ref{htder}) and (\ref{alphtder1}) make apparent the effect of a parametrization change due to a shift in $t$.
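The behavior of the flow (\ref{ODE}) is easy to illustrate numerically: with $\partial h/\partial t<0$ one has $v>0$ on $(0,1)$ and $v\to 0$ as $\alpha\to 1$, so $\alpha$ increases monotonically in $t$ but never reaches $1$. A forward-Euler sketch using $h(\alpha,t)=\alpha^t$ and a constant stand-in value for $A_{\Lambda^{(m)}}$ (both choices are illustrative assumptions, since the true $A_{\Lambda^{(m)}}$ involves the interpolating partition function):

```python
import math

A = 2.0          # constant stand-in for A_Lambda(alpha, r) > 0 (assumption)

def v(alpha, t):
    # v = -(dh/dt) / (dh/dalpha + A) for h(alpha, t) = alpha^t, eq. (alphtder2)
    dh_dt = alpha ** t * math.log(alpha)          # negative on 0 < alpha < 1
    dh_da = t * alpha ** (t - 1)
    return -dh_dt / (dh_da + A)

def solve(alpha0, t0=1.0, t1=2.0, steps=2000):
    """Forward Euler for d(alpha)/dt = v(alpha, t), eq. (ODE)."""
    alpha, dt = alpha0, (t1 - t0) / steps
    for i in range(steps):
        alpha += dt * v(alpha, t0 + i * dt)
    return alpha

alpha1 = solve(0.5)
assert 0.5 < alpha1 < 1.0   # v > 0 drives alpha up; v -> 0 stops it below 1
```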
Increasing (decreasing) $t$ increases (decreases) the contribution of $\ln Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m, \alpha_{\Lambda,h}^{(m)}(t,r),r)\})$ while decreasing (increasing) by an equal amount the contribution from $\ln F_0^{U}(m)\,h\big(\alpha_{\Lambda,h}^{(m)}(t,r), t \big)\,|\Lambda^{(m)}|$, so that the sum stays constant and equal to $\ln Z_{\Lambda^{(m-1)}}$ in accordance with (\ref{interI3}). The derivative w.r.t. $r$ is similarly given by (\ref{alphrder1}) - (\ref{alphrder2}) in Appendix B. Now, by III.1, the upper bound in (\ref{interI1}) is optimized for $r=1$, which would appear to make consideration of other $r$ values unnecessary. The reason one may nevertheless want to vary $r$ away from unity is the following. The values $\alpha_{\Lambda,h}^{(m)}(t,r)$ lie in the interval $(0,1)$. Suppose one finds that $\alpha_{\Lambda,h}^{(m)}(t_m,1)$ differs from $1$ only by terms that vanish as the lattice size grows. This means that, since $v \geq 0$ in (\ref{alphtder1}), $\alpha_{\Lambda,h}^{(m)}(t,1)$ is, to within such terms, a constant function of $t$ for all $t\geq t_m$. For the purposes of the argument in the following sections we want to exclude this possibility, and ensure that, at least in some neighborhood of a chosen $t$ value, the derivative (\ref{alphtder1}) is non-vanishing by an amount independent of lattice size. We require that \begin{equation} \delta^\prime < \alpha_{\Lambda,h}^{(m)}(t,r)< 1-\delta \;, \label{collar} \end{equation} with $\delta > 0$, $\delta^\prime > 0$ independent of the lattice size $|\Lambda^{(m)}|$. The lower bound requirement is easily shown (Appendix B) to be automatically satisfied by combining II.1 and (\ref{interI3}).
As is also shown in Appendix B, one may always ensure that the upper bound requirement in (\ref{collar}) holds by choosing the decimation parameter $r$ to vary, if necessary, away from unity in the domain \begin{equation} 1 \geq r \geq 1-\epsilon \;, \label{rdomain} \end{equation} where $0 < \epsilon \ll 1$ with $\epsilon$ independent of $|\Lambda^{(m)}|$. With (\ref{collar}) in place, (\ref{alphtder1}) and (\ref{htder}) imply (Appendix B) that \begin{equation} {\displaystyle \partial \alpha_{\Lambda,h}^{(m)}\over \partial t} (t,r) \geq \eta_1(\delta) > 0 \, , \qquad \qquad - {\displaystyle d h\over \displaystyle dt}(\alpha_{\Lambda,h}^{(m)}(t,r),t)\geq \eta_2(\delta)>0 \, , \label{dercollar} \end{equation} where $\eta_1$, $\eta_2$ are lattice-size independent. Furthermore, if (\ref{collar}) already holds for $r=1$, it also holds for any $r$ in (\ref{rdomain}). We may as well then simplify matters in the following by setting the parameter $r$ to the value $r=1-\epsilon$ with some fixed small $\epsilon$. This $\epsilon$ may eventually be taken as small as one pleases after a sufficiently large number of decimations have been performed. This has an obvious meaning in the context of iterating the decimation transformation, as pointed out in subsection \ref{disc1} below. We accordingly simplify notation by dropping explicit reference to $r$, except on occasions when a statement is made for general $r$ values. Thus we write $\alpha_{\Lambda,\,h}^{(m)} (t) \equiv \alpha_{\Lambda,\,h}^{(m)}(t,1-\epsilon)$, $c^U_j(m) \equiv c^U_j(m, 1-\epsilon)$, etc. \subsection{Representation of the partition function on decimated lattices} \label{repZ} So, starting on the original lattice of spacing $a$, with partition function given in terms of coefficients $\{c_j(\beta)\}$, one may iterate the procedure represented by (\ref{interI1}) - (\ref{interI3}). Taking the same interpolation family $h$ in every cycle, an iteration cycle consists of the following steps.
\begin{enumerate} \item[(i)] A decimation transformation $\Lambda^{(m-1)}\to \Lambda^{(m)}$ given by the rules (\ref{RG1}) - (\ref{RG5}) applied to the coefficients in $Z_{\Lambda^{(m-1)}}$, resulting in the upper bound coefficients on $\Lambda^{(m)}$ according to (\ref{upperc}) - (\ref{upperF}) and (\ref{U}). Similarly, a lower bound on $\Lambda^{(m)}$ is obtained according to (\ref{L}) with lower bound coefficients given by (\ref{lowerF1}) and (\ref{lowerc2}) or (\ref{lowerc3}). \item[(ii)] Interpolation between the resulting upper and lower bound partition functions on $\Lambda^{(m)}$ according to (\ref{interc1}), (\ref{interF1}), and (\ref{interPF1}), (\ref{interPF2}). \item[(iii)] Fixing the value $0 < \alpha_{\Lambda,h}^{(m)}(t) < 1$, eq. (\ref{interI2}), so that the $(m-1)$-th step partition function $Z_{\Lambda^{(m-1)}}$ is preserved, eq. (\ref{interI3}). \item[(iv)] Picking a value of the parameter $t=t_m$ to fix the coefficients $\{\tilde{c}_j(m,\alpha_\Lambda^{(m)}(t_m))\}$ of the resulting partition function $Z_{\Lambda^{(m)}}$, and returning to step (i).
\end{enumerate} This scheme for the coefficients in $Z_{\Lambda^{(m)}}$ may be depicted as follows: \begin{equation} \begin{array}{c} \begin{array}{ccc} \hfill & c_j(\beta) & \hfill\\ & \begin{picture}(60,15) \put(60,15){\vector(-4,-1){80}} \end{picture} \begin{picture}(38,8) \put(20,6){\vector(0,-2){20}} \end{picture} \begin{picture}(60,15) \put(1,15){\vector(4,-1){80}} \end{picture} & \\ & & \\ \end{array} \\ \begin{array}{ccccc} \hfill \{c_j^L(1)\} & \leq & \{\tilde{c}_j(1,\alpha_{\Lambda,\,h}^{(1)} (t_1))\} &\leq & \{c_j^U(1)\}\hfill\\ & \begin{picture}(30,10) \put(30,10){\vector(-3,-1){50}} \end{picture} & \begin{picture}(38,8) \put(20,6){\vector(0,-2){20}} \end{picture} & \begin{picture}(30,10) \put(1,10){\vector(3,-1){50}} \end{picture} & \\ & & & & \\ \hfill \{c_j^L(2)\} & \leq & \{\tilde{c}_j(2,\alpha_{\Lambda,\,h}^{(2)} (t_2))\} &\leq & \{c_j^U(2)\}\hfill\\ & \begin{picture}(30,10) \put(30,10){\vector(-3,-1){50}} \end{picture} & \begin{picture}(38,8) \put(20,6){\vector(0,-2){20}} \end{picture} & \begin{picture}(30,10) \put(1,10){\vector(3,-1){50}} \end{picture} & \\ & & & & \\ \vdots & & \vdots & & \vdots \end{array}\\ \end{array} \label{S1} \end{equation} The result after $n$ iterations is then: \begin{eqnarray} Z_\Lambda(\beta) &= & \int dU_\Lambda\;\prod_{p\in \Lambda}\,f_p(U,\beta) \label{O}\\ & =& \left[\,\prod_{m=1}^n \tilde{F}_0(m,h,\alpha_{\Lambda,\,h}^{(m)}(t_m), t_m)^{|\Lambda|/ b^{md}}\,\right]\; \; Z_{\Lambda^{(n)}}\,\Big(\{\tilde{c}_j( n, \alpha_{\Lambda,\,h}^{(n)}(t_n)) \}\Big) \,. \label{A} \end{eqnarray} (\ref{A}) is an {\it exact integral representation} on the decimated lattice $\Lambda^{(n)}$ of the partition function $Z_\Lambda$ originally defined on the undecimated lattice $\Lambda$ by the integral representation (\ref{PF1}) or (\ref{O}). III.3 allows the iterative procedure leading to (\ref{A}) to be implemented in a slightly different manner, one that turns out later to be more convenient for our purposes. 
Since by III.3 \begin{equation} Z_{\Lambda^{(m)}}\,\Big(\{\tilde{c}_j(m, \alpha_{\Lambda,\,h}^{(m)}(t_m))\} \Big) \leq Z_{\Lambda^{(m)}}\,\Big(\{\tilde{c}_j(m, 1)\}\Big) = Z_{\Lambda^{(m)}}\,\Big(\{c_j^U(m)\}\Big) \,,\label{UPF} \end{equation} an upper bound for each successive iteration step is also obtained by applying III.1 to the r.h.s. rather than the l.h.s. of the inequality sign in (\ref{UPF}). The only resulting modification in the above procedure is in step (i): the upper bound coefficients $c^U_j(m)$ and $F_0^U(m)$ on $\Lambda^{(m)}$ are computed according to (\ref{upperc}) and (\ref{upperF}) but now using the set $\{c_j^U(m-1)\}$ rather than the set $\{\tilde{c}_j(m-1, \alpha_{\Lambda,\,h}^{(m-1)}(t_{m-1}))\}$ as the coefficient set of the previous step. The same alternative can be applied to the lower bounds in (\ref{S1}). Since, again by III.3, one has \begin{equation} Z_{\Lambda^{(m)}}\,\Big(\{c_j^L(m)\}\Big) = Z_{\Lambda^{(m)}}\,\Big(\{\tilde{c}_j(m, 0)\}\Big) \leq Z_{\Lambda^{(m)}}\,\Big(\{\tilde{c}_j(m, \alpha_{\Lambda,\,h}^{(m)}(t_m))\} \Big) \,, \label{LPF} \end{equation} a lower bound for each successive iteration step is also obtained by applying III.2 to the l.h.s. rather than the r.h.s. of the inequality sign in (\ref{LPF}). If one adopts (\ref{lowerc3}), this makes no difference since the lower bound coefficients equal zero at every step. If one uses (\ref{lowerc2}), the resulting modification to (\ref{S1}) is that in step (i) the lower bound coefficients $c^L_j(m)$ on $\Lambda^{(m)}$ are now computed using the set $\{c^L_j(m-1)\}$ rather than $\{\tilde{c}_j(m-1, \alpha_{\Lambda,\,h}^{(m-1)}(t_{m-1}))\}$ as the coefficient set of the previous step. One may adopt either or both modifications following from (\ref{UPF}) or (\ref{LPF}). 
Adopting both, the iterative scheme for the coefficients in $Z_{\Lambda^{(m)}}$ replacing (\ref{S1}) is: \begin{equation} \begin{array}{c} \begin{array}{ccc} \hfill & c_j(\beta) & \hfill\\ & \begin{picture}(60,15) \put(60,15){\vector(-4,-1){80}} \end{picture} \begin{picture}(20,10) \put(12,8){\vector(0,-2){20}} \end{picture} \begin{picture}(60,15) \put(1,15){\vector(4,-1){80}} \end{picture} & \\ & & \\ \end{array} \\ \begin{array}{ccccc} \hfill \{ c_j^L(1) \} \qquad & \leq & \{\tilde{c}_j(1,\alpha_\Lambda^{(1)}(t_1))\} &\leq & \qquad \{c_j^U(1)\}\hfill\\ & & & & \\ \begin{picture}(30,10) \put(6,10){\vector(0,-2){20}} \end{picture} & & \begin{picture}(30,10) \put(18,10){\vector(0,-2){20}} \end{picture} & & \qquad \begin{picture}(20,10) \put(1,10){\vector(0,-2){20}} \end{picture} \\ & & & & \\ \hfill \{ c_j^L(2) \} \qquad & \leq & \{\tilde{c}_j(2,\alpha_\Lambda^{(2)}(t_2))\} &\leq & \qquad \{c_j^U(2)\}\hfill \\ & & & & \\ \begin{picture}(30,10) \put(6,10){\vector(0,-2){20}} \end{picture} & & \begin{picture}(30,10) \put(18,10){\vector(0,-2){20}} \end{picture} & & \qquad \begin{picture}(20,10) \put(1,10){\vector(0,-2){20}} \end{picture} \\ & & & & \\ \vdots\quad\; & & \ \vdots & & \ \vdots \end{array}\\ \end{array} \label{S2} \end{equation} This again leads, after $n$ iterations, to the representation (\ref{A}). Note, however, that the actual numerical value of $\alpha_{\Lambda,\,h}^{(m)}(t_m)$ in (\ref{A}), fixed at each step by requiring (\ref{interI3}), will, in general, be different depending on whether scheme (\ref{S1}) or (\ref{S2}) is used for the iteration. Also note that the upper bounds $c^U_j(m)$ in (\ref{S2}) are not optimal compared to those in (\ref{S1}). The scheme (\ref{S2}), however, turns out to be more convenient for our purposes in the following. \subsection{Discussion of the representation (\ref{A})}\label{disc1} As indicated by the notation, on any finite lattice, the $\alpha_{\Lambda,\,h}^{(m)}$ values possess a lattice size dependence. 
This weak dependence enters as a correction that vanishes inversely with lattice size. Indeed, by the standard results on the existence of the thermodynamic limit of lattice systems, for a partition function $Z_{\Lambda^{(m)}}(\{c_j\})$ of the form (\ref{PF2}) on lattice $\Lambda^{(m)}$ with torus topology (periodic boundary conditions): \begin{equation} \ln Z_{\Lambda^{(m)}}(\{c_j\})= |\Lambda^{(m)}|\,\varphi(\{c_j\}) + \delta\varphi_{\Lambda^{(m)}}(\{c_j\}) \, , \end{equation} $\varphi(\{c_j\})$ being the free energy per unit volume in the infinite volume limit, and $\delta\varphi_{\Lambda^{(m)}}(\{c_j\})\leq O(\mbox{constant})$.\footnote{That is, there are no `surface terms' for torus topology. In fact surface terms arising with other, e.g. free, boundary conditions can be precisely defined as the difference in the free energies computed with periodic versus such other boundary conditions \cite{Fi}.} From this and (\ref{interI3}) it is straightforward to show that \begin{equation} \alpha_{\Lambda,h}^{(m)}(t,r) = \alpha_h^{(m)}(t,r) + \delta \alpha_{\Lambda,h}^{(m)}(t,r) \label{alphsplit1} \end{equation} with $\delta \alpha_{\Lambda,h}^{(m)}(t,r) \to 0$ as some inverse power of lattice size in the large volume limit. In fact, we have already established the presence of a lattice-size independent contribution in $\alpha_{\Lambda,h}^{(m)}(t,r)$ in an alternative manner through (\ref{collar}), i.e. the fact that in (\ref{alphsplit1}) one must have \begin{equation} \alpha_h^{(m)}(t,r) > \delta^\prime \;.\label{alphsplit2} \end{equation} An explicit expression for $\delta^\prime$ is given by (\ref{alphlowerb1}), (\ref{alphlowerb2}). At weak and strong coupling the $\alpha_h^{(m)}(t,r)$ values may be estimated analytically by comparison with the weak and strong coupling expansions, respectively. 
In general, starting from (\ref{interI1}), the location of $\alpha_{\Lambda,\,h}^{(m)}$ satisfying (\ref{interI3}) may be formulated as the fixed point of a contraction mapping. This allows in principle its numerical determination, for given values of all other parameters, to any desired accuracy. For our purposes here, however, the actual numerical values of the $\alpha_{\Lambda,\,h}^{(m)}$'s, beyond the fact that they are fixed between $0$ and $1$, will not be directly relevant. The main application of the representation (\ref{A}) in this paper will be to relate the behavior of the exact theory to that of the easily computable approximate decimations bounding it, without explicit knowledge of the actual $\alpha_{\Lambda,\,h}^{(m)}$ values. It is important to be clear about the meaning of (\ref{A}). The partition function $Z_\Lambda(\beta)$ is originally given by its integral representation (\ref{O}) on lattice $\Lambda$ of spacing $a$. (\ref{A}) then gives another integral representation of $Z_\Lambda(\beta)$ in terms of an integrand defined on the coarser lattice $\Lambda^{(n)}$ of spacing $b^na$ plus a total bulk free energy contribution resulting from decimating between scales $a$ and $b^na$. The action $A_p(U,n,\alpha_{\Lambda,h}^{(n)})$ in $\dZ{n}(\{\tilde{c}_j(n, \alpha_{\Lambda,h}^{(n)})\})$ is constructed to reproduce this one physical quantity, i.e. the free energy $\ln Z_\Lambda(\beta)$, nothing more and nothing less. In particular, it is {\it not} implied that this action on $\Lambda^{(n)}$ can also be used to exactly compute any other observable. For that one would need to attempt the previous development from scratch with the corresponding operator inserted in the integrand.
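Since, by III.3, $\ln\tilde{Z}_{\Lambda^{(m)}}$ is continuous and strictly increasing in $\alpha$, the value solving (\ref{interI3}) can equally well be located by bisection on the level equation; a minimal sketch, where the strictly increasing stand-in `lnZ_tilde` and the target value are illustrative assumptions:

```python
import math

def lnZ_tilde(alpha):
    # toy strictly increasing stand-in for the interpolating free energy
    # ln Z~(alpha) (assumption; derivative 2 + 0.5 cos(alpha) > 0)
    return 2.0 * alpha + 0.5 * math.sin(alpha)

# plays the role of ln Z_{Lambda^{(m-1)}}; by (interI1) it must lie
# between lnZ_tilde(0) and lnZ_tilde(1)
lnZ_target = 1.5

lo, hi = 0.0, 1.0
for _ in range(60):                 # bisection, interval halved each pass
    mid = 0.5 * (lo + hi)
    if lnZ_tilde(mid) < lnZ_target:
        lo = mid
    else:
        hi = mid
alpha_star = 0.5 * (lo + hi)

assert 0.0 < alpha_star < 1.0
assert abs(lnZ_tilde(alpha_star) - lnZ_target) < 1e-10
```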
Recall that, by (\ref{interc3}), the coefficients $\tilde{c}_j(n,\alpha,r)$ are increasing in $\alpha$, and $\tilde{c}_j(n,1,r) =c_j^U(n,r)$, $\;\tilde{c}_j(n,0,r) =c_j^L(n)$: \begin{equation} c_j^L(n) < \tilde{c}_j(\,n,\alpha_{\Lambda,h}^{(n)}(t)\,) < c_j^U(n)\; , \qquad \quad \quad 0 < \alpha_{\Lambda,h}^{(n)}(t) < 1 \;.\label{cineq5} \end{equation} Thus, the coefficients $\tilde{c}_j(\,n,\alpha_{\Lambda,h}^{(n)}(t)\,)$ in the representation (\ref{A}) are bounded from above by $c_j^U(n)$ no matter what the actual values of $\alpha_{\Lambda,h}^{(n)}(t)$ are. When considering the implications of this bound under successive decimations the advantage of employing scheme (\ref{S2}), rather than (\ref{S1}), becomes clear. The coefficients $c_j^U(n)$ on the r.h.s. column in (\ref{S2}) are obtained by straightforward iteration of the decimation rules (\ref{RG2})-(\ref{RG5}) with $\zeta=b^{d-2}$; i.e. only knowledge of the $c_j^U(n-1)$, not of the $\tilde{c}_j(n-1, \alpha_{\Lambda,h}^{(n-1)}(t_{n-1}))$, is required to obtain the $c_j^U(n)$ at the $n$-th step. The flow of these $c_j^U(n)$ coefficients then constrains the flow of the exact representation coefficients $\tilde{c}_j(n, \alpha_{\Lambda,h}^{(n)}(t_n))$ according to (\ref{cineq5}) from above. In particular, {\it if the $c_j^U(n)$'s on the r.h.s. column in (\ref{S2}) approach the strong coupling fixed point, i.e. \begin{equation} F_0^U(n)\to 1, \quad \quad c_j^U(n) \to 0, \qquad \mbox{as}\quad n\to \infty \,,\label{scfp} \end{equation} so must the $\tilde{c}_j(n,\alpha_{\Lambda,h}^{(n)})$'s in the representation (\ref{A}).}\footnote{ To strictly draw the same conclusion from the alternative scheme (\ref{S1}) requires an additional step, such as showing that the $c_j^U(n)$'s computed according to the scheme (\ref{S1}) flow to the strong coupling regime if those computed according to (\ref{S2}) do.} Now the coefficients $c_j^U(n,r)$ at $r=1$ are the MK decimation coefficients (cf. section \ref{DEC2}).
As is well known, the MK decimations for $SU(2)$ (and also $SU(3)$) are found by explicit evaluation to indeed flow to the strong coupling fixed point (\ref{scfp}) for all starting $\beta<\infty$ and $d\leq 4$. Above the critical dimension $d=4$, the decimations result in free spin wave behavior ($c_j^U(n,1) \to 1$ as $n\to \infty$) starting from any $\beta > \beta_0$, where $\beta_0 =O(1)$. Here, for reasons discussed at the end of section \ref{interbounds}, we take $r$ in the range (\ref{rdomain}). This may be viewed as fixing the direction from which the point $\zeta=b^{(d-2)}$, $r=1$ in the parameter space of the iteration (\ref{RG2}) - (\ref{RG5}) is approached. This is actually irrelevant for the flow behavior of the $ c_j^U(n,1-\epsilon)\equiv c_j^U(n)$ since, in the case of $SU(2)$ considered here, this point is a structurally stable point of the iteration.\footnote{It is, however, very much relevant in cases where this point is not structurally stable, e.g. in $U(1)$.} Note that zero lattice coupling, $g=0$, is a fixed point, as it is for the MK decimations. This is also evident from $\lim_{\beta\to\infty}c_j(\beta)=1$ and III.2. What does (\ref{scfp}) combined with (\ref{cineq5}) imply about the question of confinement in the exact theory? The fact that the long distance part, $\dZ{n}(\{\tilde{c}_j(n, \alpha_{\Lambda,h}^{(n)})\})$, in (\ref{A}) flows in the strong coupling regime does not suffice to answer the question. It is the combined contributions from all scales between $a$ and $b^na$ in (\ref{A}) that add up to give the exact free energy $\ln Z_\Lambda(\beta)$. Indeed, recall that, by a parametrization change by shifts in $t$ at each decimation step, one can shift the relative amounts assigned to these various contributions keeping the total sum fixed (cf. remarks immediately following (\ref{htder})). This parametrization freedom will in fact be important in the following.
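The super-exponential character of such a flow can be illustrated with the truncated recursion obtained from (\ref{RGstrong}) by dropping the $O(c_{1/2}(n))$ term, i.e. $c(n+1)=(\zeta\,c(n))^{b^2}$; this truncation, and the values of $b$, $d$, and the starting coefficient below, are illustrative assumptions:

```python
b, d = 2, 4            # illustrative: scale factor 2 in four dimensions
zeta = b ** (d - 2)    # zeta = b^(d-2) = 4

def step(c):
    # truncated form of (RGstrong): c -> (zeta * c)^(b^2),
    # valid only when c is small enough that the O(c) term is negligible
    return (zeta * c) ** (b ** 2)

c = 0.05               # assumed starting value inside the strong-coupling domain
history = [c]
for _ in range(4):
    c = step(c)
    history.append(c)

# super-exponential flow toward the strong-coupling fixed point c = 0
assert history[-1] < 1e-30
assert all(c2 < c1 for c1, c2 in zip(history, history[1:]))
```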
On the other hand, the fact that by (\ref{cineq5}) the flow of $\tilde{c}_j(n,\alpha_{\Lambda,h}^{(n)}(t_n))$ to the strong coupling regime is independent of such parametrization changes is strongly suggestive. At any rate, to unambiguously determine the long distance behavior of the theory one needs to consider appropriate long distance order parameters. \section{`Twisted' partition function} \label{TZ} \setcounter{equation}{0} \setcounter{Roman}{0} The above derivation leading to the representation (\ref{A}) for the partition function cannot be applied in the presence of observables without modification. Thus, in the presence of operators involving external sources, such as the Wilson or 't Hooft loop, translation invariance is lost. Reflection positivity is also reduced to hold only in the plane bisecting a rectangular loop. Fortunately, there are other order parameters that can characterize the possible phases of the theory while avoiding most of these complications. They are the well-known vortex free energy and its transform with respect to the center of the gauge group (electric flux free energy). They are in fact the natural order parameters in the present context, since they are constructed out of partition functions in the presence of external fluxes. Let $Z_\Lambda(\tau_{\mu\nu}, \beta)$ denote the partition function with action modified by the `twist' $\tau_{\mu\nu}$, i.e. an element of the group center, for every plaquette on a coclosed set of plaquettes ${\cal V}_{\mu\nu}$ winding through the periodic lattice in the $(d-2)$ directions perpendicular to the $\mu$- and $\nu$-directions, i.e. winding through every $[\mu\nu]$-plane for fixed $\mu, \nu$: \begin{equation} A_p(U_p) \to A_p(\tau_{\mu\nu} U_p) \;, \qquad \mbox{if} \quad p\in {\cal V}_{\mu\nu}\;.
\label{twist1} \end{equation} A nontrivial twist ($\tau_{\mu\nu}\not=1$) represents a discontinuous gauge transformation on the set ${\cal V}_{\mu\nu}$ with multivaluedness in the group center. Thus, for group $SU(N)$, it introduces vortex flux characterized by elements of $\pi_1(SU(N)/Z(N))=Z(N)$. The vortex is rendered topologically stable by being wrapped around the lattice torus. In the case of $SU(2)$ explicitly considered here, there is only one nontrivial element, $\tau_{\mu\nu}=-1$. As indicated by the notation $Z_\Lambda(\tau_{\mu\nu},\beta)$, the twisted partition function depends only on the directions in which ${\cal V}_{\mu\nu}$ winds through the lattice, not the exact shape or location of ${\cal V}_{\mu\nu}$. This expresses the mod 2 conservation of flux. Indeed, a twist $\tau_{\mu\nu}=-1$ on the plaquettes forming a coclosed set ${\cal V}_{\mu\nu}$ can be moved to the plaquettes forming any other homologous coclosed set ${\cal V}^{\ \prime}$ by the change of variables $U_b \to -U_b$ for each bond $b$ in a set of bonds cobounded by ${\cal V} \cup {\cal V}^{\ \prime}$, leaving $Z_\Lambda(\tau_{\mu\nu},\beta)$ invariant. By the same token, $Z_\Lambda(\tau_{\mu\nu},\beta)$ is invariant under changes mod 2 in the number of homologous coclosed sets in $\Lambda$ carrying a twist. In the following, for definiteness, we fix, say, $\mu=1$, $\nu=2$, and drop further explicit reference to the $\mu$, $\nu$ indices. Also, we write $Z_\Lambda(-1,\beta) \equiv Z_\Lambda^{(-)}(\beta)$. (\ref{twist1}) implies that $Z_\Lambda^{(-)}$ is obtained from $Z_\Lambda$ by the replacement \begin{equation} f_p(U_p,a) \to f_p(-U_p,a)= \Big[\, 1 + \sum_{j\not= 0} (-1)^{2j}\,d_j\,c_j(\beta)\, \chi_j(U_p)\,\Big] \label{twist2}\,, \qquad \mbox{for each} \quad p\in {\cal V} \,, \end{equation} in (\ref{PF1}), (\ref{nexp}), i.e. only half-integer representations on plaquettes in ${\cal V}$ are affected. 
In general then, the twisted version of the partition function (\ref{PF2}) on $\Lambda^{(n)}$ is \begin{equation} Z^{(-)}_{\Lambda^{(n)}}(\{c_j(n)\}) = \int dU_{\Lambda^{(n)}}\; \prod_{p\in \Lambda^{(n)}}\,f^{(-)}_p(U_p,n) \; , \label{PF1atwist} \end{equation} with \begin{equation} f^{(-)}_p(U_p,n) = \Big[\, 1 + \sum_{j\not= 0} (-1)^{2j\,S_p[{\cal V}]}\,d_j\,c_j(n)\, \chi_j(U_p)\,\Big]\;.\label{PF1btwist} \end{equation} $S_p[{\cal V}]$ denotes the characteristic function of the plaquette set ${\cal V}$, i.e. $S_p[{\cal V}]=1$ if $p\in {\cal V}$, and $S_p[{\cal V}]=0$ otherwise. A simple result (Appendix A) of obvious physical significance is: \prop{ With $c_j(n) \geq 0$, all $j$, \begin{equation} Z^{(-)}_{\Lambda^{(n)}}(\{c_j(n)\}) \leq Z_{\Lambda^{(n)}}(\{c_j(n)\}) \,. \label{Z>Z-} \end{equation} } Strict inequality holds in fact in (\ref{Z>Z-}) for any nonvanishing $\beta$ on any finite lattice. Application of the decimation operation defined in section \ref{DEC} on some given $Z^{(-)}_{\Lambda^{(m-1)}}$ of the form (\ref{PF1atwist}) results in the rule \begin{equation} f^{(-)}_p(U, m-1) \to F_0(m)\,f^{(-)}_p(U,m) = F_0(m) \, \Big[ 1 + \sum_{j\not= 0} (-1)^{2j\,S_p[{\cal V}]}\,d_j\,c_j(m)\,\chi_j(U) \Big] \, ,\label{RG1twist} \end{equation} with coefficients $F_0(m)$, $c_j(m)$ computed according to the rules (\ref{RG2}) - (\ref{RG5}). Starting on lattice $\Lambda$, the twisted partition function resulting after $n$ such steps is \begin{equation} Z_\Lambda^{(-)}(\beta, n) = \prod_{m=1}^n F_0(m)^{|\Lambda|/b^{md}}\; Z_{\Lambda^{(n)}}^{(-)}(\{c_j(n)\}) \; . \label{PF2twist} \end{equation} Note that the flux is carried entirely in $Z_{\Lambda^{(n)}}^{(-)}$. Indeed, bulk free energy contributions from each $\Lambda^{(m-1)} \to \Lambda^{(m)}$ decimation step arise from local moving-integration operations within cells of side length $b$ on $\Lambda^{(m-1)}$, i.e. topologically trivial subsets, and are thus insensitive to the flux presence. 
The evolution with $n$ of the effective action in $Z_{\Lambda^{(n)}}^{(-)}$ then determines the manner in which flux spreads, which is characteristic of the phase the system is in. \subsection{Upper and lower bounds} In the presence of the flux, the measure in (\ref{PF1atwist}) possesses the property of reflection positivity only in $(d-1)$-dimensional planes perpendicular to any one of the directions $\rho \not= 1,2$ in which ${\cal V}$ winds around the lattice. One way of dealing with this is to simply consider the quantity \begin{equation} Z^+_{\Lambda^{(n)}}(\{c_j(n)\})\equiv {1\over 2} \Big(Z_{\Lambda^{(n)}}(\{c_j(n)\}) + Z_{\Lambda^{(n)}}^{(-)}(\{c_j(n)\})\Big) \label{Zplus} \end{equation} instead of $Z_{\Lambda^{(n)}}^{(-)}$. It is indeed easily checked that reflection positivity holds for the measure in $Z^+_{\Lambda^{(n)}}$ in all planes. A direct consequence of this (Appendix A) is then the analog of II.1: \prop{For $\dZ{n}^+(\{c_j(n)\})$ given by (\ref{Zplus}) with $c_j(n) \geq 0$ for all $j$, and periodic boundary conditions, (i) $\dZ{n}^+(\{c_j(n)\})$ is an increasing function of each $c_j(n)$: \begin{equation} \partial \dZ{n}^+(\{c_i(n)\}) / \partial c_j(n) \geq 0 \; ;\label{PFplusder0} \end{equation} (ii) \begin{equation} \dZ{n}^+(\{c_j(n)\}) \geq \Big[\,1 + \sum_{j\not=0} d_j^2\, c_j(n)^6 \,\Big]^{|\Lambda^{(n)}|} \; .\label{PFpluslowerb1} \end{equation} } Again, in these bounds equality holds only in the trivial case where all $c_j(n)$'s vanish. In particular, one has \begin{equation} \dZ{n}^+(\{c_j(n)\}) \; > \; 1 \,.\label{PFpluslowerb2} \end{equation} Note that these bounds are identical to those in II.1. This signifies the obvious fact that they bound from below by underestimating the bulk free energies proportional to the lattice volume, whereas the lattice size dependence of the free energy discrepancy between $\dZ{n}(\{c_j(n)\})$ and $\dZ{n}^{(-)}(\{c_j(n)\})$ is much weaker. 
Upper and lower bound statements analogous to III.1 and III.2 can be obtained for $Z^+_{\Lambda^{(n)}}$. One has: \prop{ For $Z_{\Lambda^{(n-1)}}^+$ of the form (\ref{Zplus}), a decimation transformation (\ref{RG1twist}), (\ref{RG2}) - (\ref{RG5}) with $\zeta=b^{d-2}$ and $0 < r\leq 1$ results in an upper bound on $Z_{\Lambda^{(n-1)}}^+$: \begin{equation} Z^+_{\Lambda^{(n-1)}}(\{c_j(n-1)\})\, \leq \, F_0^U(n)^{|\Lambda^{(n)}|}\, Z^+_{\Lambda^{(n)}}(\{c_j^U(n,r)\})\;. \label{Uplus} \end{equation} The r.h.s. in (\ref{Uplus}) is a monotonically decreasing function of $r$ on $0 < r\leq 1$. } \prop{ For $Z_{\Lambda^{(n-1)}}^+$ of the form (\ref{Zplus}): \begin{equation} Z^+_{\Lambda^{(n)}}(\{c_j^L(n)\}) \, \leq \, Z^+_{\Lambda^{(n-1)}}(\{ c_j(n-1)\}) \;, \label{Lplus} \end{equation} where the coefficients $c_j^L(n)$ are given by (\ref{lowerc1}). } The proofs of IV.3 and of IV.4, the latter an easy corollary of IV.2, are given in Appendix A. It then follows from (\ref{PFplusder0}) that (\ref{Lplus}) holds also with coefficients $c_j^L(n)$ given by (\ref{lowerc2}) or (\ref{lowerc3}). Again, in analogy to III.2, IV.4 also holds with $c_j^L$ given by (\ref{lowerc4}), but this form will not be used here. \subsection{Representation of $Z_\Lambda + Z_\Lambda^{(-)}$ on decimated lattices} The procedure of section \ref{Z} leading to the representation (\ref{A}) for $Z_\Lambda$ can now be applied to $Z^+_\Lambda= (Z_\Lambda + Z_\Lambda^{(-)})/2$. One introduces the interpolating coefficients $\tilde{c}_j(m,\alpha,r)$ given by eq. (\ref{interc1}), and $\tilde{F}_0(m,h,\alpha,t)$ given by eq. (\ref{interF1}) for some choice of interpolation function $h$ such as given by the examples (\ref{h}). 
The quantity corresponding to (\ref{interPF1}) is then given by \begin{equation} \tilde{Z}^+_{\Lambda^{(m)}}(\beta,h,\alpha,t,r) = \tilde{F}_0(m,h,\alpha,t)^{|\Lambda^{(m)}|}\, Z^+_{\Lambda^{(m)}}(\{\tilde{c}_j(m,\alpha,r)\}) \label{interPF1plus} \end{equation} where \begin{equation} Z^+_{\Lambda^{(m)}}(\{\tilde{c}_j(m,\alpha,r)\}) = {1\over2} \Big( Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m,\alpha,r)\}) + Z^{(-)}_{\Lambda^{(m)}}(\{\tilde{c}_j(m,\alpha,r)\}) \Big) \label{interPF2plus} \end{equation} with $Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m,\alpha,r)\})$ given by (\ref{interPF2}) and $Z^{(-)}_{\Lambda^{(m)}}(\{\tilde{c}_j(m,\alpha,r)\})$ given by (\ref{PF1atwist}) - (\ref{PF1btwist}) with coefficients $\tilde{c}_j(m,\alpha,r)$. We then have the analog of III.3: \prop{ The interpolating free energies $\ln Z^+_{\Lambda^{(m)}}(\{\tilde{c}_j(m,\alpha, r)\})$ and $\ln \tilde{Z}^+_{\Lambda^{(m)}}(\beta,h, \alpha,t,r)$ are increasing functions of $\alpha$: \begin{equation} \partial \ln Z^+_{\Lambda^{(m)}}\Big(\{\tilde{c}_j(m,\alpha, r)\} \Big) /\partial \alpha \, > \, 0 \,. \label{interPFder1plus} \end{equation} } In terms of (\ref{interPF1plus}), IV.3 and IV.4 give \begin{equation} \tilde{Z}^+_{\Lambda^{(m)}}(\beta,h,0,t,r) \leq Z^+_{\Lambda^{(m-1)}} \leq \tilde{Z}^+_{\Lambda^{(m)}}(\beta,h,1,t,r) \, , \label{interI1plus} \end{equation} which implies that there exists a value of $\alpha$ in $(0,1)$: \begin{equation} \alpha^+(m, h,t,r,\{c_j(m-1)\},b,\Lambda)\equiv \alpha_{\Lambda,\, h}^{+(m)}(t,r) \label{interI2plus} \end{equation} such that \begin{equation} \tilde{Z}^+_{\Lambda^{(m)}}(\beta,h,\alpha_{\Lambda,\, h}^{+(m)}(t,r), t,r) = Z^+_{\Lambda^{(m-1)}} \,.\label{interI3plus} \end{equation} This value is unique, for given values of $t, r$, by IV.5. $\alpha_{\Lambda,\, h}^{+(m)}(t,r)$ gives the regular level surface of the function $\tilde{Z}^+_{\Lambda^{(m)}}(\beta,h,\alpha,t,r)$ fixed by the value $Z^+_{\Lambda^{(m-1)}}$. 
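The determination of $\alpha_{\Lambda,\,h}^{+(m)}(t,r)$ through (\ref{interI3plus}) is a one-dimensional level-surface problem: by the strict monotonicity in $\alpha$ (IV.5), the solution bracketed by (\ref{interI1plus}) is unique and can be located, e.g., by bisection. A schematic sketch of this logic (the function $g$ below is a stand-in for the logarithm of the interpolating partition function minus its target value, not the actual free energy):

```python
import math

def bisect_level(g, lo=0.0, hi=1.0, tol=1e-12):
    # Unique root of a continuous, strictly increasing g on [lo, hi],
    # assuming the bracketing g(lo) < 0 < g(hi) (cf. the bounds interI1plus)
    assert g(lo) < 0.0 < g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# stand-in monotone free energy: the level surface solves 1 + 3*alpha = 2
g = lambda a: math.log(1.0 + 3.0 * a) - math.log(2.0)
alpha_plus = bisect_level(g)
```

Monotonicity is what guarantees that the level surface is regular and single-valued in $\alpha$ for each given $t, r$; the bisection is purely illustrative of that uniqueness.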
All the considerations concerning the dependence on the parameters $t, r$ in the previous section carry over directly to $\alpha_{\Lambda,\, h}^{+(m)}(t, r)$. In particular, one has \begin{equation} {\partial \alpha_{\Lambda,\, h}^{+(m)}(t,r)\over \partial t} = v^+(\alpha_{\Lambda,\,h}^{+(m)}(t,r), t, r) \;, \label{alphplustder1} \end{equation} where \begin{equation} v^+(\alpha, t, r) \equiv - { \displaystyle {\partial h(\alpha,t) / \partial t} \over{\displaystyle {\partial h(\alpha,t)\over \partial \alpha} + A^+_{\Lambda^{(m)}}(\alpha, r)} } \;,\label{alphplustder2} \end{equation} with \begin{equation} A^+_{\Lambda^{(m)}}(\alpha, r) \equiv {1\over \ln F_0^{U}(m) }\, {1\over |\Lambda^{(m)}| }\, {\partial \over \partial\alpha } \ln Z^+_{\Lambda^{(m)}}\,\Big(\{\tilde{c}_j( n, \alpha, r)\}\Big) > 0\;. \label{alphplustder3} \end{equation} Again, we always assume that $h$ is chosen such that $\partial h/\partial t$ is negative. Then, from (\ref{alphplustder2}), $v^+>0$ on $0 <\alpha < 1$, with $v^+=0$ at $\alpha=0$ and $\alpha=1$. Also \begin{equation} {d h(\alpha_{\Lambda,h}^{+(m)}(t,r),t)\over dt} = - {\partial \alpha_{\Lambda,h}^{+(m)}(t,r)\over \partial t} \, A^+_{\Lambda^{(m)}}(\alpha_{\Lambda,h}^{+(m)}(t,r), r) \,. \label{hplustder} \end{equation} The derivative w.r.t. $r$ is similarly given by (\ref{alphplusrder1}). The values (\ref{interI2plus}) obey \begin{equation} \delta^{+\,\prime} < \alpha_{\Lambda,\,h}^{+(m)}(t,r)< 1-\delta^+ \label{collarplus} \end{equation} with lattice-size independent, positive $\delta^+$ and $\delta^{+\,\prime}$. Again, the lower bound is automatically satisfied, whereas the upper bound is ensured by letting the parameter $r$ vary, if necessary, in (\ref{rdomain}) (cf. Appendix B). 
From this it follows that the analog of (\ref{dercollar}): \begin{equation} {\displaystyle \partial \alpha_{\Lambda,\,h}^{+(m)}\over \partial t} (t,r) \geq \eta^+_1(\delta^+) > 0 \, , \qquad \qquad - {\displaystyle d h\over \displaystyle dt}(\alpha_{\Lambda,\,h}^{+(m)}(t,r),t) \geq \eta^+_2(\delta^+)>0 \, , \label{dercollarplus} \end{equation} holds for some lattice-size independent $\eta^+_1$, $\eta^+_2$. Since, furthermore, (\ref{collarplus}) holds for any $r$ if it already holds for $r=1$, we may again set $r = 1-\epsilon$, and, according to the convention introduced in the previous section, write $\alpha_{\Lambda,\,h}^{+(m)}(t) \equiv \alpha_{\Lambda,\,h}^{+(m)}(t, 1 -\epsilon)$, etc. As in the last section, one may iterate this procedure of performing a decimation transformation to produce upper and lower bounds according to (\ref{interI1plus}), and then fixing the value (\ref{interI2plus}) of the interpolating parameter $\alpha$ according to (\ref{interI3plus}). Assume that we choose the same interpolation family $h$ at every step. Then starting from the original lattice, after $n$ iterations one obtains \begin{eqnarray} Z^+_\Lambda(\beta) &= & {1\over 2} \Big( Z_\Lambda(\beta) + Z^{(-)}_\Lambda(\beta)\Big) \nonumber \\ & =& \left[\,\prod_{m=1}^n \tilde{F}_0(m, h,\alpha_{\Lambda,\, h}^{+(m)}(t_m), t_m)^{|\Lambda|/ b^{md}}\,\right]\; \; Z_{\Lambda^{(n)}}^+\,\Big(\{\tilde{c}_j( n, \alpha_{\Lambda,\, h}^{+(n)} (t_n))\}\Big) \,. \label{B} \end{eqnarray} The discussion in subsection \ref{disc1} concerning the representation (\ref{A}) of $Z_\Lambda$ applies equally well to (\ref{B}). In particular, note that again the existence of the large volume limit implies that \begin{equation} \alpha_{\Lambda,h}^{+(m)}(t,r) = \alpha_h^{+(m)}(t,r) + \delta \alpha_{\Lambda,h}^{+(m)}(t,r) \label{alphplussplit1} \end{equation} with $\delta \alpha_{\Lambda,h}^{+(m)}(t,r) \to 0$ as some inverse power of lattice size in the $|\Lambda^{(m)}|\to \infty$ limit. 
Alternatively, (\ref{collarplus}) already implies that one must have a lattice-size independent contribution $\alpha_h^{+(m)}(t,r) \,>\, \delta^{+\,\prime}\,>\,0$ in (\ref{alphplussplit1}) (cf. Appendix B). Again, either scheme (\ref{S1}) or (\ref{S2}) may be used to obtain (\ref{B}). For the reasons already noted, however, the latter scheme is more convenient for our considerations. Note, furthermore, that the bounding coefficients $c^U_j(m)$ and $c_j^L(m)$ in this scheme are the same for $Z_\Lambda$ and $Z^+_\Lambda$ since they do not depend on $\alpha_{\Lambda,\,h}^{(m-1)}$ or $\alpha_{\Lambda,\,h}^{+(m-1)}$. We, therefore, adopt it in what follows as the common iteration scheme for $Z_\Lambda$ and $Z^+_\Lambda$: \begin{equation} \begin{array}{c} \begin{array}{ccc} & \quad c_j(\beta) \qquad & \\ & \begin{picture}(60,20) \put(45,20){\vector(-4,-1){100}} \end{picture} \begin{picture}(30,20)\put(8,20){\vector(0,-2){20}}\end{picture} \begin{picture}(60,20) \put(1,20){\vector(4,-1){100}} \end{picture} & \end{array} \\ \begin{array}{ccccc} \hfill \{c_j^L(1)\} \quad & \leq & \{\tilde{c}_j(1,\alpha_{\Lambda,\,h}^{(1)}(t_1))\}, \; \{\tilde{c}_j(1,\alpha_{\Lambda,\,h^+}^{+(1)}(t^+_1))\} &\leq & \quad \{c_j^U(1)\}\hfill\\ \begin{picture}(30,20) \put(8,20){\vector(0,-2){20}} \end{picture} & & \begin{picture}(30,20)\put(10,20){\vector(0,-2){20}}\end{picture} & & \qquad \begin{picture}(30,20) \put(1,20){\vector(0,-2){20}} \end{picture} \\ \hfill \{c_j^L(2)\} \quad & \leq & \{\tilde{c}_j(2,\alpha_{\Lambda,\,h}^{(2)}(t_2))\}, \; \{\tilde{c}_j(2,\alpha_{\Lambda,\,h^+}^{+(2)}(t^+_2))\} &\leq & \quad \{c_j^U(2)\}\hfill \\ \begin{picture}(30,20) \put(8,20){\vector(0,-2){20}} \end{picture} & & \begin{picture}(30,20)\put(10,20){\vector(0,-2){20}}\end{picture} & & \qquad \begin{picture}(30,20) \put(1,20){\vector(0,-2){20}} \end{picture} \\ \vdots \; \quad & & \vdots & & \vdots \; \end{array}\\ \end{array} \label{S3} \end{equation} In (\ref{S3}) and in the following, the more detailed 
notation $h^+$ and $t^+$ is used for the choice of interpolation and $t$-parameter values occurring in (\ref{B}) whenever they need be distinguished from those used in the representation (\ref{A}) for $Z_\Lambda$, which can, of course, be chosen independently. As indicated by the notation, even for common choice of interpolation $h=h^+$ and of all other parameters, the values of $\alpha_{\Lambda,\,h}^{+(m)}(t ,r)$ fixed by the requirement (\ref{interI3plus}) are a priori distinct from those of $\alpha_{\Lambda,\,h}^{(m)}(t,r)$ fixed by (\ref{interI3}). It is easily seen, however, that for sufficiently large lattice volume they must nearly coincide. We examine this difference more precisely below. \section{The ratio $Z_\Lambda^{(-)}/Z_\Lambda$}\label{Z-/Z} \setcounter{equation}{0} \setcounter{Roman}{0} We may now compare $Z_\Lambda$ and $Z_\Lambda + Z^{(-)}_\Lambda$ by means of their representations (\ref{A}) and (\ref{B}) on successively decimated lattices. Consider then the ratio of $Z_\Lambda + Z_\Lambda^{(-)}$ and $Z_\Lambda$ as given by (\ref{B}) and (\ref{A}) with common choice of interpolation $h=h^+$ after one decimation: \begin{eqnarray} \left(\,1+ {Z_\Lambda^{(-)} \over Z_\Lambda }\,\right) & = & { 2 \tilde{Z}^+_{\Lambda^{(1)}}\,(\beta, h, \alpha_{\Lambda,\, h}^{+(1)}(t^+),\, t^+) \over \tilde{Z}_{\Lambda^{(1)}}\,(\beta, h, \alpha_{\Lambda,\,h}^{(1)}(t),\, t) } \label{ratio1a} \\ & = & \left(\,{ \tilde{Z}_{\Lambda^{(1)}}\,(\beta, h, \alpha_{\Lambda,\, h}^{+(1)}(t^+),\, t^+) \over \tilde{Z}_{\Lambda^{(1)}}\,(\beta, h, \alpha_{\Lambda,\,h}^{(1)}(t),\, t) }\,\right) \left(\,1+ { Z_{\Lambda^{(1)}}^{(-)}\,\Big(\{\tilde{c}_j(1,\alpha_{\Lambda,\,h}^{+(1)} (t^+))\}\Big) \over Z_{\Lambda^{(1)}}\,\Big(\{\tilde{c}_j(1,\alpha_{\Lambda,\,h}^{+(1)} (t^+))\}\Big) } \,\right) \label{ratio1} \end{eqnarray} By construction, the r.h.s. is invariant under independent variations of $t$ and $t^+$. 
Now since by IV.1 \begin{equation} 1< \left(\,1+ {Z_\Lambda^{(-)} \over Z_\Lambda }\,\right) < 2 \; \qquad \mbox{and} \qquad 1 < \left(\,1+ { Z_{\Lambda^{(1)}}^{(-)}\,\Big(\{\tilde{c}_j(1,\alpha_{\Lambda,\,h}^{+(1)} (t^+))\}\Big) \over Z_{\Lambda^{(1)}}\,\Big(\{\tilde{c}_j(1,\alpha_{\Lambda,\,h}^{+(1)}(t^+)) \}\Big) } \,\right) <2 \;,\label{ratiobounds} \end{equation} it follows that\footnote{(\ref{ratioconstr}) clearly holds for general values of the parameter $r$, not just for the values (\ref{rdomain}) used in (\ref{ratio1}).} \begin{equation} {1\over 2} < { \tilde{Z}_{\Lambda^{(1)}}\,(\beta, h, \alpha_{\Lambda,\, h}^{+(1)}(t^+),\, t^+) \over \tilde{Z}_{\Lambda^{(1)}}\,(\beta, h, \alpha_{\Lambda,\,h}^{(1)}(t),\, t) } < 2 \;. \label{ratioconstr} \end{equation} Though the bounds (\ref{ratiobounds}) are rather crude, the resulting constraint (\ref{ratioconstr}) is quite informative. First, it says that if in the equality (\ref{interI3}), i.e. \[ \tilde{Z}_{\Lambda^{(1)}}\,(\beta, h, \alpha_{\Lambda,\,h}^{(1)}(t), t) = Z_\Lambda \] one substitutes for $\alpha_{\Lambda,\,h}^{(1)}(t)$ the wrong level surface $\alpha_{\Lambda,\, h}^{+(1)}(t)$, the resulting discrepancy in the free energy per unit volume is at most $O(1/|\Lambda^{(1)}|)$. Furthermore, (\ref{ratioconstr}) constrains by how much $\alpha_{\Lambda,h}^{+(1)}(t)$ can differ from $\alpha_{\Lambda,h}^{+(1)}(t^+)$ at $t=t^+$. From the definition (\ref{interPF1}) and III.3, the change in $\tilde{Z}_{\Lambda^{(1)}}\, (\beta,h, \alpha, t, r)$ under a shift $\delta \alpha$ in $\alpha$ satisfies \begin{equation} |\,\delta \ln \tilde{Z}_{\Lambda^{(1)}}\, (\beta, h, \alpha, t, r)| > |\,\delta \alpha |\,|\Lambda^{(1)}| \ln F_0^U(1)\, {\partial h(\alpha,t)\over \partial \alpha} \;. 
\label{varcon} \end{equation} When combined with (\ref{varcon}), the constraint (\ref{ratioconstr}), taken at general $r$, implies that one must have \begin{equation} | \alpha_{\Lambda,\,h}^{+(1)}(t,r) - \alpha_{\Lambda,\,h}^{(1)}(t,r) | \leq O({1\over |\Lambda^{(1)}|}) \;. \label{alphdiff} \end{equation} This implies that in (\ref{alphsplit1}), (\ref{alphplussplit1}) one has $\alpha_h^{(1)}(t)=\alpha_h^{+(1)}(t)$, i.e. any difference occurs only in the parts $\delta \alpha_{\Lambda,\,h}^{(1)}$, $\delta \alpha_{\Lambda,\,h}^{ +(1)}$ that vary inversely with lattice size. Thus, in the large volume limit, this difference becomes unimportant if one is interested only in the computation of partition functions or bulk free energies. This, however, is not the case for free energy differences such as the ratio (\ref{ratio1a}). Indeed, any discrepancy of the size (\ref{alphdiff}) means that the first factor in (\ref{ratio1}) can contribute as much as the second factor in round brackets. Thus the expression for the ratio of the twisted to the untwisted partition function given by (\ref{ratio1}), though exact, is not immediately useful for extracting this ratio on the coarser lattice. To address this issue one may make use of the $t$-parametrization invariance of (\ref{ratio1}). First, the cancellation of the bulk free energies generated in the integration from scale $a$ to $ba$ is made explicit as follows. For any given $t^+_1$, choose $t_1$ in $\tilde{Z}_{\Lambda^{(1)}} \,(\beta, h, \alpha_{\Lambda,h}^{(1)}(t_1), t_1)$ so that \begin{equation} h(\alpha_{\Lambda,\,h}^{(1)}(t_1), t_1) = h(\alpha_{\Lambda,\,h}^{+(1)}(t^+_1), t^+_1) \;.\label{hequal1} \end{equation} This is clearly always possible by (\ref{dercollar}) and (\ref{dercollarplus}), and by (\ref{alphdiff}); in fact, $t_1-t^+_1 = O(1/|\Lambda^{(1)}|)$. 
Then (\ref{ratio1a}) assumes the form \begin{equation} \left(\,1+ {Z_\Lambda^{(-)} \over Z_\Lambda }\,\right) = { 2 Z^+_{\Lambda^{(1)}}\,\Big(\{\tilde{c}_j(1,\alpha^+_{\Lambda,\,h}(t^+_1)) \}\Big) \over Z_{\Lambda^{(1)}}\,\Big(\{\tilde{c}_j(1,\alpha_{\Lambda,\,h}(t_1)) \}\Big) } \;. \label{ratio2} \end{equation} We may now iterate this procedure performing $(n-1)$ decimation steps according to the scheme (\ref{S3}), at each step choosing $t_m$, $t^+_m$ such that \begin{equation} h(\,\alpha_{\Lambda,\,h}^{(m)}(t_m), t_m) = h(\,\alpha_{\Lambda,\,h}^{+(m)}(t^+_m), t^+_m) \;, \qquad m=1,\ldots (n-1) \;. \label{hequal2} \end{equation} Carrying out a final $n$-th decimation step one obtains \begin{eqnarray} \left(\,1+ {Z_\Lambda^{(-)} \over Z_\Lambda }\,\right) & = & { 2\, \tilde{Z}^+_{\Lambda^{(n)}}\,(\beta, h, \alpha_{\Lambda,\, h}^{+(n)}(t^+),\, t^+) \over \tilde{Z}_{\Lambda^{(n)}}\,(\beta, h, \alpha_{\Lambda,\,h}^{(n)}(t),\, t) } \label{ratio3} \\ & = & { \tilde{Z}_{\Lambda^{(n)}}\,(\beta, h, \alpha_{\Lambda,\,h}^{+(n)}(t^+),\, t^+) \over \tilde{Z}_{\Lambda^{(n)}}\, (\beta, h, \alpha_{\Lambda,\,h}^{(n)}(t),\, t) } \,\left(\,1+ { Z_{\Lambda^{(n)}}^{(-)}\,\Big(\{\,\tilde{c}_j(n,\alpha_{\Lambda,\,h}^{+(n)} (t^+))\,\}\Big) \over Z_{\Lambda^{(n)}}\,\Big(\{\,\tilde{c}_j(n,\alpha_{\Lambda,\,h}^{+(n)} (t^+))\,\}\Big) } \right) \label{ratio4} \end{eqnarray} The argument for $n=1$ (eq. (\ref{ratio1})) above may now be applied to (\ref{ratio4}) to conclude \begin{equation} | \alpha_{\Lambda,h}^{+(n)}(t,r) - \alpha_{\Lambda,h}^{(n)}(t,r) | \leq O({1\over |\Lambda^{(n)}|}) \;. \label{alphdiffN} \end{equation} Any such discrepancy between $\alpha_{\Lambda,h}^{+(n)}(t)$ and $\alpha_{\Lambda,h}^{(n)}(t)$ in (\ref{ratio4}) presents the same problem for extracting the ratio at scale $b^n a$ as at scale $ba$. In this sense (\ref{ratio4}) is not qualitatively different from the $n=1$ case (\ref{ratio1}). 
Transferring the discrepancy to large $n$, however, allows a technical simplification as we see below. Next, consider (\ref{ratio3}) rewritten as \begin{equation} \left(\,1+ {Z_\Lambda^{(-)} \over Z_\Lambda }\,\right) = \left( { Z^+_{\Lambda^{(n-1)}} \over \tilde{Z}^+_{\Lambda^{(n)}}\,(\beta, h, \alpha_{\Lambda,\,h}^{(n)}(t),\, t) } \right)\,\left(\,1+ { Z_{\Lambda^{(n)}}^{(-)}\,\Big(\{\,\tilde{c}_j(n,\alpha_{\Lambda,\,h}^{(n)} (t))\,\}\Big) \over Z_{\Lambda^{(n)}}\,\Big(\{\, \tilde{c}_j(n,\alpha_{\Lambda,\,h}^{(n)} (t))\,\}\Big) } \,\right) \label{ratio5} \end{equation} by use of (\ref{interI3plus}). By construction (cf. (\ref{interI3})), $\alpha_{\Lambda,\,h}^{(n)}(t)$ is such that the r.h.s. in (\ref{ratio3}), hence in (\ref{ratio5}), is invariant under changes in the parameter $t$; but note that the two $\alpha_{\Lambda,\,h}^{(n)}$-dependent factors in round brackets on the r.h.s. in (\ref{ratio5}) are {\it not} separately invariant. If, for some given $t$, $\alpha_{\Lambda,\,h}^{(n)}(t)$ is larger (smaller) than $\alpha_{\Lambda,\,h}^{+(n)}(t)$, then, by IV.5, $\tilde{Z}^+_{\Lambda^{(n)}}\,(\beta, h, \alpha_{\Lambda,\,h}^{(n)}(t),\,t)$ is larger (smaller) than $\tilde{Z}^+_{\Lambda^{(n)}}\,(\beta,h, \alpha_{\Lambda,\,h}^{+(n)}(t), \, t)= Z^+_{\Lambda^{(n-1)}}$, and the second factor in round brackets on the r.h.s. of (\ref{ratio5}) overestimates (underestimates) the ratio $Z_\Lambda^{(-)}/ Z_\Lambda $. It is then natural to ask whether there exists a value $t= t_{\Lambda,h}^{(n)}$ such that \begin{equation} \tilde{Z}^+_{\Lambda^{(n)}}\,\Big(\beta, h,\alpha_{\Lambda, h}^{(n)}( t_{\Lambda,h}^{(n)}),\, t_{\Lambda,h}^{(n)} \Big) = Z^+_{\Lambda^{(n-1)}} \;.\label{interIfixplus} \end{equation} Note that the graphs of $\alpha_{\Lambda, h}^{(n)}(t)$ and $\alpha_{\Lambda, h}^{+(n)}(t)$ must intersect at $t_{\Lambda,\,h}^{(n)}$. 
A unique solution to (\ref{interIfixplus}) indeed exists as shown in Appendix C provided \begin{equation} A_{\Lambda^{(n)}}(\alpha, r) \geq A^+_{\Lambda^{(n)}}(\alpha, r) \label{A>A+} \end{equation} with $r$ in (\ref{rdomain}). An equivalent statement to (\ref{A>A+}) is \begin{equation} A_{\Lambda^{(n)}}(\alpha, r) \geq A^{(-)}_{\Lambda^{(n)}}(\alpha, r) \, , \label{A>A-} \end{equation} where $A^{(-)}_{\Lambda^{(n)}}(\alpha, r)$ is defined by (\ref{alphplustder3}) but with $Z^+_{\Lambda^{(n)}}$ replaced by $Z_{\Lambda^{(n)}}^{(-)}$. Assume now that under successive decimations the coefficients $c^U_j(m)$ in (\ref{S3}) evolve within the convergence radius of the strong coupling cluster expansion. Taking then $n$ in (\ref{ratio3}) sufficiently large, we need to establish inequality (\ref{A>A+})\footnote{For Abelian systems, comparison inequalities of the type (\ref{A>A+}) either follow from Griffiths' inequalities, or can be approached by the same methods. All such known methods fail in the non-Abelian case.} only at strong coupling. Within this expansion it is a straightforward exercise to establish the validity of (\ref{A>A+}), with strict inequality on any finite lattice. We summarize the above development in the following:\\ \prop{ Consider $n$ successive decimation steps performed according to the scheme (\ref{S3}). Assume that there is an $n_0$ such that the upper bound coefficients $c^U_j(n)$ become sufficiently small for $n\geq n_0$. 
Then the ratio of the twisted to the untwisted partition function on lattice $\Lambda$, of spacing $a$, has a representation on lattice $\Lambda^{(n)}$, of spacing $b^na$ and $n \geq n_0$, given by: \begin{equation} {Z_\Lambda^{(-)}(\beta) \over Z_\Lambda(\beta) } = { Z_{\Lambda^{(n)}}^{(-)}\,\Big(\{\,\tilde{c}_j(n,\alpha_\Lambda^{*\,(n)})\,\} \Big) \over Z_{\Lambda^{(n)}}\,\Big(\{\, \tilde{c}_j(n,\alpha_\Lambda^{*\,(n)}) \,\}\Big) } \;, \label{ratio6} \end{equation} where \begin{equation} \alpha_\Lambda^{*\,(n)}\equiv \alpha_{\Lambda, h}^{(n)}(t_{\Lambda,\,h}^{(n)}) \;.\label{alphstar} \end{equation} Here, the function $\alpha_{\Lambda, h}^{(n)}(t)$ is defined by (\ref{interI3}), i.e. is the solution for $\alpha$ to \begin{equation} \tilde{Z}_{\Lambda^{(n)}}\,(\beta, h, \alpha,\, t) = Z_{\Lambda^{(n-1)}}\;, \label{interI3A} \end{equation} and $t_{\Lambda,\,h}^{(n)}$ is defined by (\ref{interIfixplus}), i.e. is the solution for $t$ to the equation \begin{equation} \tilde{Z}^+_{\Lambda^{(n)}}\,(\beta, h,\alpha_{\Lambda, h}^{(n)}(t),\,t) = Z^+_{\Lambda^{(n-1)}} \;.\label{interIfixplusA} \end{equation} } As indicated by the notation in (\ref{alphstar}), any dependence on $h$ must cancel in $\alpha_\Lambda^{*\,(n)}$. Indeed, Cauchy's mean value theorem gives \begin{equation} { \ln Z_{\Lambda^{(n)}}^{(-)}\,\Big(\{\,\tilde{c}_j(n,\alpha)\,\}\Big) - \ln Z_{\Lambda^{(n)}}^{(-)}\,\Big(\{\,\tilde{c}_j(n,\alpha_\Lambda^{*\,(n)}) \,\}\Big) \over \ln Z_{\Lambda^{(n)}}\,\Big(\{\,\tilde{c}_j(n,\alpha)\,\}\Big) - \ln Z_{\Lambda^{(n)}}\,\Big(\{\,\tilde{c}_j(n,\alpha_\Lambda^{*\,(n)})\,\} \Big) } = { A_{\Lambda^{(n)}}^{(-)}(\xi) \over A_{\Lambda^{(n)}}(\xi) } \leq 1 \;, \label{Cauchy} \end{equation} for some $\xi$ between $\alpha_\Lambda^{*\,(n)}$ and $\alpha$, and use of (\ref{A>A-}) was made to obtain the last inequality. 
Setting $\alpha$ equal to $1$ in (\ref{Cauchy}), combining with (\ref{ratio6}), and using III.3, IV.5, gives \begin{equation} {Z_\Lambda^{(-)} \over Z_\Lambda } \geq { Z_{\Lambda^{(n)}}^{(-)}\,\Big(\{\,c^U_j(n)\,\}\Big) \over Z_{\Lambda^{(n)}}\,\Big(\{\,c^U_j(n)\,\}\Big) } \; . \label{ratiolower} \end{equation} The upper bound coefficients in (\ref{S3}) then, which correspond to upper bounds for the partition functions $Z_\Lambda$ and $Z_\Lambda^{(-)}$, give a lower bound for the ratio $Z_\Lambda^{(-)}/Z_\Lambda$.\footnote{This result was first stated a long time ago in \cite{T}.} Setting $\alpha=0$ in (\ref{Cauchy}) similarly yields an upper bound. Thus:\\ \prop{ With the same conditions as in V.1, the ratio of the twisted to the untwisted partition function on lattice $\Lambda$ of spacing $a$ is bounded on lattice $\Lambda^{(n)}$ of spacing $b^na$ by: \begin{equation} { Z_{\Lambda^{(n)}}^{(-)}\,\Big(\{\,c^L_j(n)\,\}\Big) \over Z_{\Lambda^{(n)}}\,\Big(\{\,c^L_j(n)\,\}\Big) } \geq {Z_\Lambda^{(-)} \over Z_\Lambda } \geq { Z_{\Lambda^{(n)}}^{(-)}\,\Big(\{\,c^U_j(n)\,\}\Big) \over Z_{\Lambda^{(n)}}\,\Big(\{\,c^U_j(n)\,\}\Big) } \; . \label{ratiolowerupper} \end{equation} } Now, the ratio of the interpolating partition functions (\ref{interPF2plus}) and (\ref{interPF2}) interpolates monotonically between the upper and lower bounds in (\ref{ratiolowerupper}) since \begin{equation} {d\over d\alpha }\; { Z^{(-)}_{\Lambda^{(n)}}(\{\tilde{c}_j(n,\alpha)\}) \over Z_{\Lambda^{(n)}}(\{\tilde{c}_j(n,\alpha)\}) } < 0 \label{ratioder} \end{equation} by (\ref{A>A-}). It follows that there exists a unique value $\alpha^{*\,(n)}_\Lambda$ of $\alpha$ at which this ratio of the interpolating partition functions equals $Z_\Lambda^{(-)} / Z_\Lambda $. This is a restatement of (\ref{ratio6}), but makes explicit the fact that this value is independent of $h$. In fact, it shows that all dependence on parametrization choices, i.e. 
the choice of parameters $t_m$ made in successive decimations, eventually cancels in $\alpha^{*\,(n)}_\Lambda$. Indeed, the latter can depend only on the number of decimations $n$ and the initial coupling $\beta$, since this is all the upper and lower bounds in (\ref{ratiolowerupper}) depend on. This, in retrospect, is as expected, since all bulk free-energy contributions depending on such choices were canceled in finally arriving at (\ref{ratio6}), but V.2 makes it manifest. (\ref{ratiolowerupper}) was obtained as a corollary of (\ref{ratio6}). An alternative approach would be to proceed in the reverse direction, i.e. establish (\ref{ratiolowerupper}) directly, from which (\ref{ratio6}) would follow by interpolation between the upper and lower bounds as in the previous paragraph. In other words, one would follow, for the ratio of the partition functions, the same approach applied separately to the untwisted and twisted partition functions in the previous sections. This is further discussed in Appendix C. \section{Confinement}\label{CONf} \setcounter{equation}{0} \setcounter{Roman}{0} \subsection{Order parameters}\label{CONFOP} The vortex free energy $F_\Lambda^{(-)}$ is defined by the ratio of partition functions considered in the previous section: \begin{equation} \exp(-F_\Lambda^{(-)}(\beta)) = {Z_\Lambda^{(-)}(\beta) \over Z_\Lambda(\beta)} \;. \label{vfe} \end{equation} It represents the free energy cost for adding a vortex to the vacuum, the $Z(2)$ flux of the inserted vortex being rendered stable by wrapping around the toroidal lattice. As has been discussed in the literature, all possible phases of gauge theory (Higgs, Coulomb, or confinement) can be characterized by the behavior of (\ref{vfe}) as one lets the lattice become large. 
In particular, having taken the vortex to wind through the lattice in the directions $\kappa=3, \ldots, d$, a confining phase is signaled by the asymptotic behavior \begin{equation} F_\Lambda^{(-)}(\beta) \sim L\, \exp(\,-\hat{\sigma}(\beta) |A|\,) \;, \label{vfeconf} \end{equation} where $L\equiv \prod_{\kappa\not= 1,2}\,L_\kappa$, and $A\equiv L_1 L_2$. (\ref{vfeconf}) represents exponential spreading of the flux introduced by the twist on the set ${\cal V}$ in the transverse directions (creation of a mass gap), with $ \hat{\sigma}(\beta)$ giving the exact string tension. Note that, according to (\ref{vfeconf}), $F_\Lambda^{(-)}(\beta)\to 0$ as $|\Lambda|\to \infty $ faster than any inverse power of the lattice size, i.e. one has `condensation' of the vortex flux. The behavior (\ref{vfeconf}) is dictated by physical reasoning \cite{tH}, \cite{MP}, and explicitly realized within the strong coupling expansion. As such free energy differences are notoriously difficult to measure accurately, demonstration of the behavior (\ref{vfeconf}) by numerical simulations at large $\beta$'s has been achieved only relatively recently \cite{KT}, \cite{Fetal}. The $Z(2)$ Fourier transform of (\ref{vfe}) \begin{equation} \exp(-F_\Lambda^{\rm el}(\beta)) = {1\over 2}\Big( \,1 - {Z_\Lambda^{(-)}(\beta) \over Z_\Lambda(\beta)} \,\Big) \label{efe} \end{equation} gives the corresponding dual (w.r.t. the gauge group center) order parameter, the color electric free energy. (\ref{vfe}) and (\ref{efe}) are ideal pure long-range order parameters. They do not suffer from the physically irrelevant but technically quite bothersome complications, such as loss of translational invariance, or mass renormalization and other short range contributions, that arise from the explicit introduction of external sources. Such external current sources are introduced in the definition of the Wilson and 't Hooft loops. Furthermore, the behavior of the latter can be bounded by that of (\ref{vfe}) and (\ref{efe}) \cite{TY}. 
In particular, the following relation holds. Let $C$ be a rectangular loop of minimal area $S$ lying in a $2$-dimensional $[12]$-plane. Then \cite{TY}: \begin{equation} \vev{W[C]}_\Lambda \leq \left[ \exp(-F_\Lambda^{\rm el})\right]^{S/A} \,, \label{W-vfebound} \end{equation} where $W[C]=\chi_{_{1/2}}\,\Big(\prod_{b\in C} U_b\Big)$ is the usual Wilson loop observable. It follows from (\ref{W-vfebound}) that confining behavior (\ref{vfeconf}) of the vortex free energy implies confining behavior (`area-law') for the Wilson loop. \subsection{Strong coupling cluster expansion and confinement} We now return to our considerations at the end of section \ref{Z} regarding the flow of the coefficients $\tilde{c}_j(n, \alpha_{\Lambda,h}^{(n)}(t))$ in our partition function representations (\ref{A}) and (\ref{B}). This flow is bounded from above by that of the MK coefficients $c_j^U(m)$ regardless of the specific value assumed by the $\alpha_{\Lambda,h}^{(m)}(t_m)$'s at each decimation step (cf. (\ref{cineq5})). Furthermore, by explicit evaluation under the iteration rules (\ref{RG2}) - (\ref{RG5}), one finds that $c_j^U(n) \to 0$ as $n\to \infty$ for any initial $\beta$, provided $d\leq 4$. Thus, given any initial $\beta$, one may always take the number of iterations $n$ large enough so that the coefficients $c_j^U(n)$ become small enough to be within the region of convergence of the strong coupling expansion. Then by V.1: \begin{equation} \exp(-F_\Lambda^{(-)}(\beta)) = { Z_{\Lambda^{(n)}}^{(-)}\,\Big(\{\,\tilde{c}_j(n,\alpha_\Lambda^{*\,(n)})\,\} \Big) \over Z_{\Lambda^{(n)}}\,\Big(\{\, \tilde{c}_j(n,\alpha_\Lambda^{*\,(n)}) \,\}\Big) } \;. \label{vfeA} \end{equation} The vortex free energy may then be evaluated in terms of the coefficients $\tilde{c}_j(n,\alpha_\Lambda^{*\,(n)})$ directly on lattice $\Lambda^{(n)}$ of spacing $b^n a$ within a convergent strong coupling polymer expansion. 
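The claimed flow $c_j^U(n) \to 0$ can be illustrated numerically with a generic Migdal-Kadanoff-type recursion for $SU(2)$ character coefficients. The sketch below is not the paper's precise rules (\ref{RG2}) - (\ref{RG5}) (in particular, the parameter $r$ enters there and is omitted here): it simply raises the plaquette function to the power $\zeta=b^{d-2}$, re-expands in characters with the Haar class measure, and applies the two-dimensional area factor $b^2$; this textbook version already exhibits the flow toward strong coupling for $d=4$:

```python
import numpy as np

def chi(j, theta):
    # SU(2) character chi_j via the Weyl formula (theta = class angle)
    return np.sin((2 * j + 1) * theta) / np.sin(theta)

def mk_step(c, js, b=2, d=4, n=4000):
    # One generic Migdal-Kadanoff-type step: raise the plaquette function to
    # zeta = b^(d-2), project back onto characters with the Haar class measure
    # (2/pi) sin^2(theta) dtheta, then apply the 2d area factor b^2.
    zeta = b ** (d - 2)
    theta = (np.arange(n) + 0.5) * np.pi / n              # midpoint grid on (0, pi)
    w = (2.0 / np.pi) * np.sin(theta) ** 2 * (np.pi / n)  # Haar weight x dtheta
    f = 1.0 + sum((2 * j + 1) * c[j] * chi(j, theta) for j in js)
    fz = f ** zeta
    F0 = np.sum(w * fz)
    return {j: (np.sum(w * fz * chi(j, theta)) / ((2 * j + 1) * F0)) ** (b * b)
            for j in js}

js = [0.5, 1.0, 1.5]
c = {0.5: 0.5, 1.0: 0.0, 1.5: 0.0}   # illustrative starting coefficients
history = [c[0.5]]
for _ in range(4):
    c = mk_step(c, js)
    history.append(c[0.5])
# the fundamental coefficient flows monotonically to the strong coupling fixed point
assert all(x > y for x, y in zip(history, history[1:]))
```

The rapid decay of the fundamental coefficient under iteration is the qualitative content of the statement above; the quantitative bounds in the text of course rely on the actual rules with their $\zeta$, $r$ dependence.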
Recall that, in the pure lattice gauge theory context, a polymer is a set $Y$ of connected plaquettes containing no `free' bond, i.e. no bond belonging to only one plaquette in $Y$ (see e.g. \cite{Mu}). The activity of a polymer $Y$ is defined by \begin{equation} z(Y)= \int\;\prod_{b\in Y} dU_b\;\prod_{p\in Y} g_p(U,n) \;, \label{zY} \end{equation} where \begin{equation} g_p(U,n) = \sum_{j\not= 0} d_j\, \tilde{c}_j(n,\alpha_\Lambda^{*\,(n)}) \, \chi_j(U_p) \;.\label{g1} \end{equation} The polymer expansion is then \begin{equation} \ln\dZ{n} = \sum_{X \subset \Lambda^{(n)}} \,a(X)\;\prod_{Y_i \in X} \; z(Y_i)^{n_i} \;, \label{clusterexp1} \end{equation} where the sum is over all linked clusters of polymers in $\Lambda^{(n)}$, each cluster $X$ consisting of a connected set of polymers $Y_i$, $i=1,\ldots,k_X$ with multiplicities $n_i$. The combinatorial factor $a(X)$ is given by \begin{equation} a(X)=\sum_{G(X)} (-1)^{l(G)} \, , \label{combfactor} \end{equation} where the sum is over all connected graphs on $X$ (the full set $\{Y_i\}$, including multiplicities, as vertices, with a line connecting overlapping polymers) and $l(G)$ is the number of lines in the graph. In the case of $\dZ{n}^{(-)}$, the presence of the flux enters the activities (\ref{zY}) through the replacement (\ref{twist2}). We denote the resulting activities by $z^{(-)}(Y)$. This replacement does not affect polymers that are wholly contained in a simply connected part of $\Lambda^{(n)}$, since, in this case, the flux can be removed by a change of variables in the integrals in (\ref{zY}). Only clusters that contain at least one non-simply connected polymer forming a topologically non-trivial closed surface can be affected.
Thus, one has \begin{equation} \ln\dZ{n}^{(-)} - \ln \dZ{n} = \sum_{X \subset \Lambda^{(n)}} \,a(X)\;\left(\,\prod_{Y_i \in X} \; z^{(-)}(Y_i)^{n_i} - \,\prod_{Y_i \in X} \; z(Y_i)^{n_i}\right)\;, \label{clusterexp2} \end{equation} where the sum is only over such topologically nontrivial linked clusters, the contribution of all other clusters canceling in the difference. The minimal cluster of this type consists of a single polymer which is a 2-dimensional plane $\Pi: x_\mu=$const., $\mu=3,\ldots,d$ on $\Lambda^{(n)}$, thus of size $A^{(n)}=L_1^{(n)}L_2^{(n)}$, and activity \begin{equation} z(\Pi)= \sum_{{\rm half-int.}\atop j\geq 1/2} \tilde{c}_j(n,\alpha_\Lambda^{*(n)})^{A^{(n)}} = \tilde{c}_{1/2}(n,\alpha_\Lambda^{*(n)})^{\,A^{(n)}} \,\Big[\, 1+ \sum_{{\rm half-int.}\atop j\geq 3/2} \left({\tilde{c}_j(n,\alpha^{*(n)}) \over \tilde{c}_{1/2}(n,\alpha^{*(n)})} \right)^{A^{(n)}}\,\Big] \;.\label{lead} \end{equation} (Note that the terms from the higher representations in (\ref{lead}) become utterly negligible in the large volume limit.) There are $L^{(n)}=\prod_{\kappa\not=1,2}L_\kappa^{(n)}$ such minimal clusters giving the leading contribution in (\ref{clusterexp2}). This leading contribution is thus seen to give the confining behavior (\ref{vfeconf}). Nonleading contributions come from nonminimal clusters consisting of $\Pi$, with or without `decorations', and additional polymers touching $\Pi$. Such corrections have been evaluated in terms of the character expansion coefficients (the $\tilde{c}_j(n,\alpha_\Lambda^{*(n)})$'s in our case) to quite high order \cite{Mu}.
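The suppression of the higher-representation terms in the bracket of (\ref{lead}) is easy to quantify. For illustrative (hypothetical) coefficient values, not computed from the actual flow, the correction to $z(\Pi)=\tilde{c}_{1/2}^{\,A^{(n)}}$ is already astronomically small at modest transverse areas:

```python
import math

# illustrative stand-ins for the tilde-c_j(n, alpha*) of the decimation flow
c = {0.5: 0.30, 1.5: 0.05, 2.5: 0.01}
A = 100  # transverse area L1*L2 in units of the lattice spacing b^n a

# bracket in (lead): 1 + sum_{j >= 3/2} (c_j / c_{1/2})^A;
# evaluate the sum in logs to avoid premature underflow
correction = sum(math.exp(A * (math.log(c[j]) - math.log(c[0.5])))
                 for j in (1.5, 2.5))
# correction ~ (1/6)^100 ~ 1e-78, so ln z(Pi) ≈ A * ln c_{1/2}
```

This is the sense in which the $j\geq 3/2$ terms are "utterly negligible" in the large volume limit: they are exponentially suppressed in the area $A^{(n)}$ itself.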
They can be shown to exponentiate, so that \begin{equation} {1\over L}\,F_\Lambda^{(-)}(\beta) = \exp(- \hat{\sigma}_\Lambda\,A) \end{equation} with \begin{eqnarray} \hat{\sigma}_\Lambda & = & {1\over b^{2n}}\,\kappa_\Lambda(n,\alpha_\Lambda^{*(n)}) \nonumber\\ & = & {1\over b^{2n}}\, \Big[\, \kappa(n,\alpha_\Lambda^{*(n)}) + O\left( (\tilde{c}_{j+1/2}/\tilde{c}_{1/2})^{L_\mu^{(n)}}\right) + O(n/A^{(n)}) \,\Big] \;, \label{sigma1} \end{eqnarray} where \cite{Mu} \begin{equation} \kappa(n,\alpha_\Lambda^{*(n)}) = \Big[-\ln \tilde{c}_{1/2}(n,\alpha_\Lambda^{*(n)}) - 4\,\tilde{c}_{1/2}(n,\alpha_\Lambda^{*(n)})^4 +8\,\tilde{c}_{1/2}(n, \alpha_\Lambda^{*(n)})^6 + \ldots \Big] \,. \label{sigma2} \end{equation} By the convergence of the expansion \cite{Ca}, the large volume limit exists and is given by $\hat{\sigma}=\kappa(n,\alpha^{*(n)})/b^{2n}$, where $\alpha^{*(n)}$ is the lattice independent part of $\alpha_\Lambda^{*(n)}$ (cf. (\ref{alphsplit1})). The number of iterations $n$ in the above expressions is taken large enough so that, given some initial $\beta$ on $\Lambda$, the resulting $c_j^U(n)$ are within the expansion convergence regime, and one can write the representation (\ref{vfeA}) by V.1. This implies the existence of a scale, a point to which we return below. Otherwise, $n$ is arbitrary. By construction, our procedure is such that the ratio (\ref{vfe}) is reproduced under successive decimations. Thus, given (\ref{vfeA}) at some $n$, suppose one performs one more decimation to lattice $\Lambda^{(n+1)}$. The condition that determines $\alpha_\Lambda^{*(n+1)}$ such that (\ref{vfeA}) is preserved is \begin{equation} \kappa_\Lambda(n,\alpha_\Lambda^{*(n)}) = {1\over b^2}\,\kappa_\Lambda(n+1, \alpha_\Lambda^{*(n+1)}) \;,\label{kappaI} \end{equation} which then results in constant string tension $\hat{\sigma}_\Lambda$ under successive decimations.
Using (\ref{sigma2}) with (\ref{interc1}) and (\ref{RGstrong}), it is an easy exercise to solve (\ref{kappaI}), at least to leading approximation, for $\alpha^{*(n+1)}$. The $t$-parameter value $t_{h}^{(n+1)}$ this $\alpha^{*(n+1)}$ corresponds to can then also be easily obtained, if desired, from \[ h(\alpha^{*(n)},t)\ln F_0^U(n+1)\,|\Lambda^{(n+1)}| + \ln \dZ{n+1}\,\Big(\{\, \tilde{c}_j(n+1,\alpha^{*\,(n+1)}) \,\}\Big) = \ln \dZ{n}\,\Big(\{\, \tilde{c}_j(n,\alpha^{*\,(n)}) \,\}\Big) \,, \] with $\ln\dZ{n}$, $\ln\dZ{n+1}$ given by (\ref{clusterexp1}) - in fact, to leading approximation, $\ln\dZ{n+1}$ can be ignored. Note that this amounts to replacing the set of the two equations (\ref{interI3A}) - (\ref{interIfixplusA}) in V.1 by their ratio and one of them. This is indeed the most convenient procedure once (\ref{vfeA}) has been achieved. \subsection{String tension and asymptotic freedom} $\kappa(n,\alpha^{*(n)})$ is the string tension in lattice units of lattice $\Lambda^{(n)}$. It is a complicated, but well-defined function of the original coupling $\beta= 4/g^2$ defined on lattice $\Lambda$, eq. (\ref{Wilson}) (cf. remarks following (\ref{ratioder})). We write \begin{equation} \kappa(n,\alpha^{*(n)}) \equiv \hat{\sigma}(n,g) \,.\label{sigma3} \end{equation} In dimensional units the asymptotic string tension in $d=4$ (\ref{sigma1}) - (\ref{sigma2}) is then \begin{eqnarray} \sigma & = & {1\over a^2} {1\over b^{2n}}\, \hat{\sigma}(n,g) \label{sigma4a} \\ & = & {1\over a^2} \, \hat{\sigma}(g) \,. \label{sigma4b} \end{eqnarray} Here, as remarked above, $n$ is assumed greater than some required smallest $n(g)$. This (dynamically generated) physical scale, or some chosen multiple of it, is the only parameter in the theory. Fixing it specifies how the coupling $g$ must vary with changes of the (unphysical) lattice spacing $a$. It is convenient, and customary, to introduce a fixed scale $\Lambda_0$ serving as an arbitrary unit of physical scales.
Setting \begin{equation} \Lambda_0^{\:-1} = a b^n \, ,\label{length} \end{equation} determines the lattice spacing $a$ such that it takes $n$ steps to reach length scale $1/\Lambda_0$: \begin{equation} n= {1\over \ln b} \ln {1\over a\, \Lambda_0} \,. \label{n-a} \end{equation} Fixing the string tension, given in units of $\Lambda_0$: \begin{equation} \sigma = k \Lambda_0^2 \;,\label{sigma5} \end{equation} implies \begin{equation} \hat{\sigma}(n,g) = k \label{sigmaI} \end{equation} for some constant $k$. (\ref{sigmaI}) specifies the dependence of the bare coupling $g$ on $n$, hence, through (\ref{n-a}), the dependence on the lattice spacing $a$. It gives then the value $g(a)$ specified by the value of the string tension. (This is, of course, equivalent to fixing (\ref{sigma4b}) directly.) Since \[ \hat{\sigma}(n+1,g+\Delta g)^{1/2} - \hat{\sigma}(n,g)^{1/2} = b\, [\,\hat{\sigma}(n,g+\Delta g)^{1/2} - \hat{\sigma}(n,g)^{1/2}\,] + (b-1) \hat{\sigma}(n,g)^{1/2} \] and $\Delta a = -(b-1)a/b$ for $\Delta n=1$, one has from (\ref{sigmaI}): \begin{equation} {\Delta \sqrt{\hat{\sigma}} \over \Delta g } \Big(a {\Delta g\over \Delta a}\Big) = \sqrt{\hat{\sigma}} \;. \label{sigmaIdiff} \end{equation} If $(a{\Delta g /\Delta a})\equiv\beta(g)$, the `beta-function', is known, (\ref{sigmaIdiff}) can be integrated directly for $\hat{\sigma}(g)$. (This introduces a dimensional integration constant which can serve as the scale $\Lambda_0$.) This is in fact the familiar textbook argument where one {\it assumes} the existence of a string tension so as to get (\ref{sigmaIdiff}), in which the standard weak coupling perturbative expression for the beta function is then used. For us, however, the existence of a non-zero string tension is the outcome of the process of successive decimations to coarser scales as developed above. This process embodies all relevant information in the theory. In particular, it also supplies the specification of the function $g(a)$.
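The algebraic identity preceding (\ref{sigmaIdiff}) follows from the rescaling property $\hat{\sigma}(n+1,g)=b^2\,\hat{\sigma}(n,g)$, which is implicit in the $n$-independence of (\ref{sigma4a}) - (\ref{sigma4b}). A quick numerical check, using an arbitrary stand-in function of this form (not the actual string tension):

```python
import math

b = 2.0

def shat(n, g):
    # any function of the form b^(2n) * f(g) obeys the identity;
    # f(g) here is an arbitrary stand-in, not the actual flow
    return b**(2*n) * (g**4 / (1.0 + g**2))

n, g, dg = 3, 1.1, 0.07
lhs = math.sqrt(shat(n + 1, g + dg)) - math.sqrt(shat(n, g))
rhs = (b * (math.sqrt(shat(n, g + dg)) - math.sqrt(shat(n, g)))
       + (b - 1) * math.sqrt(shat(n, g)))
# lhs == rhs, since sqrt(shat(n+1, g')) = b * sqrt(shat(n, g'))
```

Combining this identity with $\hat{\sigma}(n+1,g+\Delta g)=\hat{\sigma}(n,g)=k$ and $\Delta a=-(b-1)a/b$ then yields (\ref{sigmaIdiff}) directly.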
One can indeed construct the function $g(a)$ directly as follows:\\ (i) Starting with some initial value of $\beta=4/g^2$, perform successive decimations following the flow into the strong coupling regime with resulting string tension $\kappa(n,\alpha^{*(n)})$, eq. (\ref{sigma2}), at some $n=n_0$. Let $k$ denote the value of this string tension. The corresponding value of the lattice spacing $a_0$ is given by (\ref{n-a}), and $g=g(a_0)$. \\ (ii) Fix the string tension as in (\ref{sigmaI}). This is then satisfied at $n_0$, $g(a_0)$. \\ (iii) Vary $g$ away from $g(a_0)$ to determine $g$ such that, under successive decimations following the flow into the strong coupling regime, the resulting string tension satisfies (\ref{sigmaI}) for $n=n_0+1$.\\ (iv) Repeat (iii) for $n=n_0+2, n_0+3,\cdots$, and similarly for $n=n_0-1, n_0-2, \cdots$. \\ This provides the functional relation $g(a)$. In particular, for $b=2$, it gives the sequence of values $g(a_0/2^l)$, $l=1,2,\ldots$, starting from some value $g(a_0)$.\footnote{This is the analog in the present context of the `staircase' procedure in \cite{Cr}.} Note that, according to (i) above, the number of decimations $n_0$ at which one chooses to apply V.1 to obtain (\ref{vfeA}), (\ref{sigma2}) amounts to fixing the string tension. This is the only physical parameter in the theory. A specification of $\Lambda_0$ is a specification of the value $g(a_0)$ at spacing $a_0$, which is a convention of no physical import. One then has in principle a constructive method for obtaining $g(a)$ by a sequence of simple algebraic operations. This is the coupling $g(a)$ as defined in the physical non-perturbative renormalization scheme specified by keeping the string tension fixed. A straightforward illustration of the method is provided by setting all $\alpha_\Lambda^{(n)}=1$, i.e. applying it to the flow according to the upper bound coefficients $c_j^U$ in (\ref{S1}). This yields $g(a)$ as given by MK decimations.
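Schematically, steps (i) - (iv) amount to a one-dimensional root-finding loop in $g$ at each $n$. The function $\hat\sigma(n,g)$ below is a toy stand-in, chosen only so that solving $\hat\sigma=k$ has a known closed form; in the actual procedure it is produced by the decimation flow:

```python
import math

b0, k = 11 / (24 * math.pi**2), 0.5   # illustrative values only

def shat_toy(n, g):
    # TOY model, not the real flow: exp(2*b0*ln2*n - 1/g^2),
    # chosen so that shat = k reproduces 1/g^2 = 2*b0*ln2*n - ln(k)
    return math.exp(2 * b0 * math.log(2) * n - 1.0 / g**2)

def solve_g(n, lo=1e-3, hi=10.0):
    # step (iii): bisect for the g at which the string tension equals k;
    # shat_toy is monotone increasing in g, so the bracket shrinks cleanly
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if shat_toy(n, mid) < k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# step (iv): sweep n = n0, n0+1, ...; each step halves the spacing a (b = 2)
gs = [solve_g(n) for n in range(50, 55)]
# g decreases monotonically as a -> 0: asymptotic freedom
```

The structure of the loop is what matters here: one bisection per decimation step, so $g(a)$ is obtained by a sequence of simple algebraic operations, as stated in the text.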
We cannot apply it explicitly to the case of interest, i.e. the flow following the middle column coefficients in (\ref{S2}), since we do not determine them explicitly in this paper. The qualitative features at strong and weak coupling, however, are readily discernible. At strong coupling, i.e. small initial $\beta$, the number of decimations needed to reach a given string tension is of order unity, i.e. the lattice spacing $a$ is large: $a = O(\Lambda_0^{\,-1})$, and one is very far from any continuum limit. Successive decimations, by construction, reproduce the behavior seen within the strong coupling expansion, and the familiar strong coupling variation given by $\beta(g) \sim g \ln g$ is the result, as can be checked by a short computation. The opposite limit of large initial $\beta$ corresponds to a large number of decimations, hence $a \ll \Lambda_0^{\,-1}$. Indeed, recall that $g=0$ is a fixed point of the decimations. Hence, for $\beta\to \infty$, one necessarily has $n\to \infty$ in order for, say, the leading upper bound coefficient $c_{1/2}^U(n)$ to reach any prescribed value $ < 1$. Thus, $a\Lambda_0 \to 0$. Note that this limit is well-defined by construction since everything is bounded and continuous under successive decimations. Asymptotic freedom, i.e. the statement that $g(a)\to 0$ as $a\to 0$, is then a direct qualitative consequence of the flow produced by the decimations. It is instructive to examine the actual manner in which $g(a)\to 0$ under the upper bound decimations, i.e. the $c_j^U(m)$'s in (\ref{S2}). Comparing two $g$ values that differ by one decimation step ($b=2$), one finds \begin{equation} {1\over g^2(a)} ={1\over g^2(2a)} + 2b_0 \,\ln2 + O(g^2) \end{equation} for sufficiently small $g(a)$. The constant $b_0=(1-1/b^2)/(24\ln b)$ underestimates the value $11/(24\pi^2)$ obtained in a continuum perturbative calculation by only about $3\%$. The actual flow (middle column in (\ref{S2})) is faster, corresponding to somewhat larger $b_0$.
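The quoted $3\%$ figure is simple arithmetic to verify:

```python
import math

b = 2.0
b0_decim = (1 - 1 / b**2) / (24 * math.log(b))  # from the upper bound decimations
b0_cont = 11 / (24 * math.pi**2)                # continuum one-loop value
deficit = 1 - b0_decim / b0_cont                # ~ 0.03, i.e. about 3%
```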
According to RG lore, a beta-function defined by other means, such as fixing some renormalized coupling within weak coupling perturbation theory, should coincide, in its universal first two terms, with that defined by the above physical non-perturbative scheme. This, however, is outside the scope of, and not of direct relevance for, the main argument in this paper. To reiterate, the above procedure completely specifies the dependence $g(a)$ in the physical renormalization scheme defined by keeping the string tension fixed, and this dependence is necessarily such that $g(a)\to 0$ as $a\to 0$. \section{Concluding remarks}\label{SUM} In summary, we obtained a representation of the vortex free energy, originally defined on a lattice of spacing $a$, in terms of partition functions on a lattice of spacing $ab^n$. The effective action in this representation is bounded by the corresponding effective action resulting from potential moving decimations (MK decimations) from spacing $a$ to spacing $ab^n$. The latter are explicitly computable. Confining behavior is the result, starting from any initial coupling $g$ on spacing $a$, by taking the number of decimations $n$ large enough. It is worth remarking again that in an approach based on RG decimations the fact that the only parameter in the theory is a physical scale emerges in a natural way. Picking a number of decimations can be related to fixing the string tension. That this can be done only after flowing into the strong coupling regime reflects the fact that this dynamically generated scale is an `IR effect'. The coupling $g(a)$ is completely determined in its dependence on $a$ once the string tension is fixed. In particular, $g(a) \to 0$ as $a\to 0$. Note that this implies that there is no physically meaningful or unambiguous way of non-perturbatively viewing the short distance regime independently of the long distance regime.
Computation of all physical observable quantities in the theory must then give a multiple of the string tension or a pure number. In the absence of other interactions, this scale provides the unit of length; there are in fact no free parameters.\footnote{This is part of the meaning of the common saying ``QCD is the perfect theory''.} There is a variety of other results related to the approach in this paper that could not be included here. We note, in particular, that the same procedure can be immediately transcribed to the Heisenberg $SU(2)$ spin model. Also, apart from analytical results, the considerations in this paper may be combined with Monte Carlo RG techniques to constrain the numerical construction of improved actions at different scales, a subject of perennial interest to the practicing lattice gauge theorist. We hope to report on these matters elsewhere. This research was partially supported by NSF grant NSF-PHY-0555693.
\section{Introduction} \label{sec:Intro} The fractional quantum Hall effect (FQHE) at filling factor $\nu = 1/3$ in the lowest Landau level (LLL) was the first to be experimentally observed~\cite{Tsui82} and subsequently understood, as a result of Laughlin's trial wave function~\cite{Laughlin83}. The FQHE has since proven to be an incredibly rich platform for exploring the physics of strongly correlated electron systems. After the initial observation, many additional fractions were observed in the LLL at $\nu = n/(2pn\pm1)$ with $n$ and $p$ positive integers. These fractions are understood as an integer quantum Hall effect (IQHE) of composite fermions, where a composite fermion (CF) is an emergent particle consisting of an electron bound to an even number of vortices~\cite{Jain89, Jain07, Halperin20}. In contrast, the FQHE in the second LL (SLL) of GaAs is less well understood. Interestingly, even the physical origin of the $\nu=7/3$ FQHE, which corresponds to a 1/3-filled SLL, has not been conclusively established. Exact diagonalization studies~\cite{Ambrumenil88, Peterson08, Peterson08b, Balram13b, Kusmierz18, Balram20} have convincingly shown that the actual state at $\nu=7/3$ for a zero width system is an incompressible FQHE state. However, the overlap of the exact ground state with the Laughlin state is not large, typically less than $60\%$ for systems accessible to exact diagonalization studies~\cite{Ambrumenil88, Peterson08, Peterson08b, Balram13b, Kusmierz18, Balram20}. [In contrast, the overlap of the Coulomb ground state at $\nu=1/3$ in the LLL with the Laughlin state is greater than $98\%$ for up to $N=15$ electrons~\cite{Balram20}.]
Furthermore, the excitations of the 7/3 FQHE in exact diagonalization studies are qualitatively different from those at 1/3, and exact diagonalization studies also do not show a clearly identifiable branch of low-energy excitations, called the magnetoroton or the CF-exciton mode, as observed in the LLL~\cite{Girvin85, Girvin86, Dev92, Scarola00, Jolicoeur17}. As a result, the precise nature of the state at $\nu=7/3$ has remained a topic of debate~\cite{Read99, Balram13b, Johri14, Zaletel15, Peterson15, Jeong16, Balram19a, Balram20}. In a recent work, Balram \emph{et al}.\cite{Balram19a} have proposed, inspired by the parton paradigm for the FQHE~\cite{Jain89b}, that the $\nu=7/3$ FQHE is a $\mathbb{Z}_n$ topological superconductor, wherein bound states of $n$ composite bosons~\cite{Zhang89} undergo Bose-Einstein condensation. This generalizes the Zhang-Hansson-Kivelson theory of the 1/3 Laughlin state as a Bose-Einstein condensate of composite bosons~\cite{Zhang89}, with $\mathbb{Z}_1$ corresponding to the Laughlin wave function. While the different $\mathbb{Z}_n$ states share many topological quantum numbers, a key distinction between them is that the elementary quasiparticle has a charge of $-e/(3n)$, where $-e$ is the charge of the electron. Variational calculations in Ref.~\cite{Balram19a} suggest that the best candidate is the $\mathbb{Z}_3$ state, which has lower energy than the Laughlin state in the thermodynamic limit, and also a higher overlap with the exact SLL Coulomb ground state for systems where such a calculation is possible. The experimental observations at $\nu = 7/3$ are, however, largely consistent with the Laughlin state. In particular, shot noise~\cite{Dolev08, Venkatachalam11,Dolev11} and scanning single-electron transistor~\cite{Venkatachalam11} experiments at $\nu = 7/3$ have measured quasiparticles of charge $-e/3$. 
This raises the question: Why are experimental measurements consistent with the Laughlin state while theory suggests that better variational states exist? This question has motivated the present study. There can be several reasons for the discrepancy between theory and experiment. The theoretical calculations mentioned above do not include the effects of finite width, Landau level mixing, screening, and disorder, which can affect the variational comparisons. We consider in this article the competition between the different $\mathbb{Z}_n$ states as a function of the quantum well width. Our primary result is the prediction of a phase transition from the $\mathbb{Z}_4$ state at small widths into the Laughlin ($\mathbb{Z}_1$) state when the quantum well width exceeds approximately 1.5 magnetic lengths. We also predict a similar phase transition at $\nu=1/3$ in the zeroth Landau level of bilayer graphene as a function of the magnetic field. \section{$\mathbb{Z}_n$ parton wave function at $\nu=1/3$} \label{sec:wftns} The parton theory generalizes the Jain CF states~\cite{Jain07} to a larger class of candidate wave functions~\cite{Jain89b}. In the parton theory, one considers fractionalizing electrons into a set of fictitious particles called partons. The partons are fractionally charged, have the same density as electrons, and have filling factor $\nu_\alpha$, where $\alpha$ labels the parton species. An incompressible state is achieved when each parton species is in an IQHE state, i.e. $\nu_\alpha = n_\alpha$, with $n_\alpha$ an integer. (More generally, we can place the partons in any known incompressible states.) The partons are of course unphysical and must be combined back into physical electrons, which is equivalent to setting the parton coordinates $z_j^\alpha$ equal to the parent electron coordinates $z_j$, i.e. $z_j^\alpha = z_j$ for all $\alpha$. (The quantity $z_j = x_j - iy_j$ is the complex coordinate of the $j$th electron.) 
The resulting wave functions, labeled ``$n_1n_2n_3...$," are given by \begin{equation} \Psi^{n_1n_2n_3...}_\nu = \mathcal{P}_{\rm LLL} \prod_{\alpha} \Phi_{n_\alpha}(\{z_j\}), \end{equation} where $\Phi_n$ is the Slater determinant wave function for the state with $n$ filled Landau levels, and $\mathcal{P}_{\rm LLL}$ denotes projection into the LLL, as appropriate in the high field limit. The partons can also experience magnetic fields anti-parallel to the field experienced by electrons; these correspond to negative filling factors, which we denote as $\bar{n}$, with $\Phi_{\bar{n}}=\Phi_{-n}=\Phi_n^*$. To ensure that each parton species has the same density as the electron density, the charge of each parton species is given by $e_\alpha = -\nu e / \nu_\alpha$. The relation $\sum_\alpha e_\alpha=-e$ implies that the electron filling factor is given by $\nu = [\sum_\alpha \nu_\alpha^{-1}]^{-1}$. The Laughlin wave function at $\nu=1/3$ can be interpreted as the $111$ parton state. The Jain $n/(2pn+1)$ states appear as the $n11...$ states and the Jain $n/(2pn-1)$ states as $\bar{n}11...$; these correspond to the wave function $\Psi_{n/(2pn\pm1)} = \mathcal{P}_{\rm LLL} \Phi_{\pm n} \Phi_1^{2p}$. Many other parton states have recently been shown to be plausible for SLL and other FQHE~\cite{Wu17, Balram18, Bandyopadhyay18, Balram18a, Faugno19, Balram19, Kim19, Balram19a, Balram20, Faugno20a, Balram20a}. These states often have exotic properties, such as non-Abelian anyonic excitations~\cite{Wen91}. For $\nu = 1/3$, Balram \emph{et al}. proposed the $\mathbb{Z}_n$ parton states described by the wave function \begin{equation} \Psi_{1/3}^{\mathbb{Z}_n} = \mathcal{P}_{\rm LLL}\Phi_n\Phi_{\bar{n}}\Phi_1^3 \sim \Psi_{n/(2n+1)}\Psi_{n/(2n-1)} \Phi_1^{-1}, \end{equation} where in the last step we redefine the wave function as $[\mathcal{P}_{\rm LLL}\Phi_n \Phi_1^2] [\mathcal{P}_{\rm LLL}\Phi_{\bar{n}} \Phi_1^2]\Phi_1^{-1}$.
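The counting rules above are elementary to check with exact rational arithmetic: the Laughlin ($111$), Jain ($n11$, $\bar{n}11$), and $\mathbb{Z}_n$ ($n\,\bar{n}\,111$) parton assignments land at the stated fillings, and the parton charges always sum to $-e$.

```python
from fractions import Fraction as F

def nu(parton_fillings):
    # electron filling factor: nu = [sum_alpha 1/nu_alpha]^(-1)
    return 1 / sum(F(1, n) for n in parton_fillings)

assert nu([1, 1, 1]) == F(1, 3)         # Laughlin 111 state
assert nu([2, 1, 1]) == F(2, 5)         # Jain n11 with n=2: n/(2n+1)
assert nu([-2, 1, 1]) == F(2, 3)        # Jain nbar-11 with n=2: n/(2n-1)
assert nu([4, -4, 1, 1, 1]) == F(1, 3)  # Z_4 parton state: also at nu = 1/3

# parton charges e_alpha = -nu*e/nu_alpha sum to the electron charge -e
assert sum(-nu([4, -4, 1, 1, 1]) / F(n) for n in [4, -4, 1, 1, 1]) == -1
```

In particular, the $\pm n$ factors cancel in the filling-factor sum, which is why every $\mathbb{Z}_n$ state sits at $\nu=1/3$.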
(This grouping is chosen to facilitate the use of the Jain-Kamilla projection method in our numerical simulations \cite{Jain07,Jain97, Jain97b, Davenport12, Moller05, Balram15a}. It is accepted and has been shown for many cases that the topological properties of the state do not depend on the details of the projection method \cite{Balram16b}.) Because the factor $\Phi_n\Phi_{\bar{n}}$ is real, all the $\mathbb{Z}_n$ states occur at the same ``shift"~\cite{Wen92} $\mathcal{S}=3$ in the spherical geometry. The physical interpretation of the wave function as a superconductor of composite bosons arises from the fact that $\Phi_n\Phi_{\bar{n}}$ represents a $\mathbb{Z}_n$ superconductor of electrons~\cite{Barkeshli13,Balram19a}, and the factor $\Phi_1^3$ attaches three vortices to each electron to convert it into a composite boson. The elementary excitation corresponds to an excitation in the factor $\Phi_n$ or $\Phi_{\bar{n}}$ and has a charge of magnitude $e/(3n)$. \section{Finite width phase diagram for $\nu=7/3$ in GaAs} We will use the spherical geometry~\cite{Haldane83} in our calculations, which considers $N$ electrons on the surface of a sphere subjected to a total flux $2Q\phi_0$, with $\phi_0=hc/e$ and $2Q$ a positive integer. The radius of the sphere is $R=\sqrt{Q}\ell$, where $\ell=\sqrt{\hbar c/eB}$ is the magnetic length. The state with $n$ filled Landau levels can only be constructed for particle number $N$ divisible by $n$ and $N\geq n^2$. The same is true of the $\mathbb{Z}_n$ state. We approximate the confining potential as an infinite square well of width $w$ that results in a transverse wave function given by a sine function. The problem of electrons in the SLL interacting with the Coulomb interaction is equivalent to that of electrons in the LLL interacting with an effective interaction; the effective interaction for this system is given in Ref.~\cite{Toke08}. We consider well widths up to 5 magnetic lengths.
(For convenience, we use the disk pseudopotentials for our calculations in the spherical geometry; this should not introduce any corrections because we will perform the calculation for large systems and take the thermodynamic limit.) \begin{figure}[tbhp] \includegraphics[width=0.49\textwidth]{SLLTL} \caption{Thermodynamic extrapolations for the energy differences $\Delta E=E(\mathbb{Z}_n)-E_{\rm Laughlin}$ for various $\mathbb{Z}_n$ states at $\nu=7/3$. The results are for zero quantum well width. The $\mathbb{Z}_4$ state has the lowest energy in the thermodynamic limit.} \label{fig: TLs} \end{figure} \begin{figure}[tbhp] \includegraphics[width=0.49\textwidth]{SLLFW} \caption{Thermodynamic energies of various $\mathbb{Z}_n$ states at $\nu=7/3$ as a function of the quantum well width $w$. All energies are quoted relative to the Laughlin state. The $x$-axis is the quantum well width in units of magnetic length. A phase transition from the $\mathbb{Z}_4$ to the Laughlin state is seen to occur at $w\sim1.5 \ell$.} \label{fig: FWSLL} \end{figure} At zero width, the $\mathbb{Z}_4$ state has the lowest energy in the thermodynamic limit, as seen in Fig.~\ref{fig: TLs}. Here and below, all energies are quoted in units of $e^2/(\epsilon \ell)$ where $\epsilon$ is the dielectric constant of the material. (We note that the $\mathbb{Z}_4$ and $\mathbb{Z}_5$ states were not studied in Ref.~\cite{Balram19a}. Also, we cannot definitively rule out the $\mathbb{Z}_3$ state within the numerical accuracy of our calculations.) We have similarly determined the thermodynamic energies for quantum wells of various widths [see Appendix~\ref{sec: ThermoLim}]. In Fig.~\ref{fig: FWSLL}, we show how the energies of several $\mathbb{Z}_n$ states, measured relative to the energy of the Laughlin state, evolve as we increase the well width. For $w<1.5\ell$, the $\mathbb{Z}_4$ state has the lowest variational energy in the thermodynamic limit.
For $w>1.5\ell$, the Laughlin state is preferred in our calculation, suggesting that the $\mathbb{Z}_4$ state should only be observed in samples with sufficiently low quantum well widths and/or low density. We add here that because of the numerical uncertainty in the thermodynamic energy differences and our simple model for the finite width, the critical value of $1.5\ell$ should be taken only as a first estimate. \section{$\mathbb{Z}_n$ state in bilayer graphene} We next ask if similar physics can appear elsewhere. We expect to find a transition in the zeroth Landau level of bilayer graphene (BLG) as a function of the magnetic field. The zeroth Landau level of BLG is exactly equivalent to the LLL of GaAs when the magnetic field is infinite and continuously interpolates to the SLL of GaAs as the magnetic field is decreased. As such, we expect that a $\mathbb{Z}_n$ state is stabilized below a critical field and the Laughlin state is favored above the critical field. The Coulomb interaction between electrons can be parameterized using Haldane pseudopotentials $V_m$~\cite{Haldane83}, where $V_m$ is the energy of two electrons in a state of relative angular momentum $m$ in the disk geometry. The pseudopotentials in the zeroth LL of bilayer graphene are given by \begin{equation} V^{0-{\rm BLG}}_m(\theta) = \int_{0}^{\infty} dq~F^{0-{\rm BLG}}(\theta,q)e^{-q^2}L_m(q^2),\label{BLGVm} \end{equation} where the Fourier transformed form factor is \cite{Apalkov11} \begin{equation} F^{0-{\rm BLG}}(\theta,q) = \Bigg[\sin^{2}(\theta)L_{1}\Big(\frac{q^2}{2}\Big)+\cos^{2}(\theta)L_{0}\Big(\frac{q^2}{2}\Big) \Bigg]^2. \label{eq:ZerothLL_BLG_form_factor} \end{equation} Here we have set the magnetic length to unity, $L_r(x)$ is the $r$th order Laguerre polynomial, and $\theta$ is a parameter that varies between $0$ and $\pi/2$ to control the relative proportion of the $n=0$ and $n=1$ LLs in the two-component wave function.
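Eq.~(\ref{BLGVm}) is straightforward to evaluate numerically. The sketch below uses a plain trapezoidal quadrature and the standard Laguerre recurrence; at $\theta=0$ the form factor reduces to $1$ and the integrals reproduce the analytic LLL disk values $V_0=\sqrt{\pi}/2$ and $V_1=\sqrt{\pi}/4$ (in units of $e^2/\epsilon\ell$), while at $\theta=\pi/2$ one obtains the SLL value $V_1=15\sqrt{\pi}/64$.

```python
import math

def laguerre(m, x):
    # L_m(x) via the three-term recurrence (k+1)L_{k+1} = (2k+1-x)L_k - k L_{k-1}
    if m == 0:
        return 1.0
    lm1, l = 1.0, 1.0 - x
    for k in range(1, m):
        lm1, l = l, ((2 * k + 1 - x) * l - k * lm1) / (k + 1)
    return l

def form_factor(theta, q):
    s2, c2 = math.sin(theta)**2, math.cos(theta)**2
    return (s2 * laguerre(1, q * q / 2) + c2 * laguerre(0, q * q / 2))**2

def V(m, theta, qmax=12.0, steps=24000):
    # trapezoidal evaluation of (BLGVm); the integrand decays like exp(-q^2)
    h = qmax / steps
    f = lambda q: form_factor(theta, q) * math.exp(-q * q) * laguerre(m, q * q)
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, steps)) + 0.5 * f(qmax))

V0, V1 = V(0, 0.0), V(1, 0.0)        # theta=0 (LLL): sqrt(pi)/2, sqrt(pi)/4
V1_SLL = V(1, math.pi / 2)           # theta=pi/2 (SLL): 15*sqrt(pi)/64
```

The $\theta=\pi/2$ value illustrates the softening of the short-range pseudopotentials in the SLL relative to the LLL ($V_1$ drops from $0.443$ to $0.415$), which is what drives the different physics of the two levels.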
At $\theta=0$ the form factor is that of the LLL of GaAs, while for $\theta = \pi/2$, it is exactly the form factor for the SLL in GaAs. At the mid-way point, $\theta =\pi/4$, the form factor is that of the $n=1$ LL in monolayer graphene~\cite{Balram15c}. The value of $\theta$ is related to the magnetic field by $\tan{(\theta)}\propto \ell /(\hbar v_F)\propto 1/\sqrt{B}$, where $v_F$ is the Fermi velocity. For very large magnetic fields, we anticipate the physics of the LLL of GaAs. As the magnetic field is lowered, we first expect to see the physics of monolayer graphene appear (which has been shown to be well described by the composite fermion theory~\cite{Balram15c}) that eventually gives way to states exhibiting the physics of the SLL of GaAs at very small magnetic fields. \begin{figure}[tbhp] \includegraphics[width=0.4\textwidth]{BiGrNoSpinEdit} \caption{Energies of $\mathbb{Z}_n$ states in bilayer graphene, measured relative to the energy of the Laughlin state, as a function of the tangent of the mixing angle $\theta$. All energies represent thermodynamic limits. The $\mathbb{Z}_4$ state is seen to be favored for $\theta \gtrapprox 1.45$. The top axis shows $\hbar v_F/\ell$ in units of meV (see text for relation between $\theta$ and $\hbar v_F/\ell$). Energies are shown only in the vicinity of the transition. } \label{fig:BiGr} \end{figure} We construct an effective interaction as shown in Appendix~\ref{sec: effInt}, and obtain the thermodynamic energies of various candidate states as a function of $\theta$. The angle $\theta$ is related to measurable quantities through $\tan{\theta}= t \ell/(\sqrt{2}\hbar v_F)$, where $t$ is the hopping integral and $v_F$ is the Fermi velocity~\cite{Apalkov11}. Taking $t\sim 350~\text{meV}$, as obtained from DFT calculations at zero magnetic field~\cite{Jung14}, we obtain $\hbar v_F/\ell = 350~\text{meV}/(\sqrt{2}\tan{\theta})$. The top axis in Fig.~\ref{fig:BiGr} shows $\hbar v_F/\ell$ in units of meV.
We find that the transition from the $\mathbb{Z}_{4}$ to the Laughlin state occurs approximately at $\hbar v_F/\ell\sim30$ meV. For graphene, with a typical Fermi velocity of $10^6~{\rm m/s}$, this corresponds to a magnetic field strength of $B\approx 1.4$~T. \section{Landau level mixing and spin} It is natural to ask if $\mathbb{Z}_n$ parton states can be relevant for $\nu=1/3$ in the LLL. We have performed extensive calculations as a function of quantum well width and density, also including LL mixing. We use a self-consistent LDA calculation to determine the transverse electron density at zero magnetic field at several electron densities and quantum well widths. We further include LL mixing through the so-called fixed phase diffusion Monte Carlo method~\cite{Ortiz93, Zhang16, Zhao18}. For all parameters we have considered, the Laughlin state remains the lowest energy state. The detailed results are given in Appendix~\ref{sec: ThermoLim}. Recent experiments have mapped out the spin polarization of the SLL in GaAs quantum wells~\cite{Yoo19}. They observe an anomalous spin depolarization between filling factors 1/5 and 1/3. The Laughlin state is fully spin polarized, but $\mathbb{Z}_n$ states with $n>1$ allow for the possibility of spin-unpolarized or partially spin-polarized states. The generalization is analogous to that for Jain CF states to spin-singlet or partially spin-polarized states~\cite{Wu93, Park98, Jain07, Balram15a, Balram17}.
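The conversion from the critical $\hbar v_F/\ell \approx 30$ meV to a field strength follows directly from $\ell=\sqrt{\hbar/eB}$ (SI units):

```python
import math

hbar = 1.054571817e-34  # J s
e = 1.602176634e-19     # C
vF = 1.0e6              # m/s, typical graphene Fermi velocity
E = 30e-3 * e           # 30 meV converted to joules

# hbar*vF/ell = E with ell = sqrt(hbar/(e*B))  =>  B = hbar/(e*ell^2)
ell = hbar * vF / E       # magnetic length at the transition, ~22 nm
B = hbar / (e * ell**2)   # transition field, ~1.4 T
```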
Specifically, the $\mathbb{Z}_n$ state can be generalized to include spin as \begin{eqnarray} \Psi_{1/m}^{\mathbb{Z}_n(n,0;\bar{n}_\uparrow,\bar{n}_\downarrow)} &=& \mathcal{P}_{\rm LLL}\Phi_n\Phi_{\bar{n}_\uparrow,\bar{n}_\downarrow}\Phi_1^m \nonumber \\ &\sim & \Psi_{n/(2n+1)} \Phi_1^{m-4} \mathcal{P}_{\rm LLL} \Phi^*_{n_\uparrow,n_\downarrow} \Phi_1^2 \label{Znspin} \end{eqnarray} where $n =n_\uparrow + n_\downarrow$, and $\Phi_{n_\uparrow,n_\downarrow}$ represents the state with $n_\uparrow$ spin-up and $n_\downarrow$ spin-down filled Landau levels. These wave functions can be shown to satisfy the Fock cyclic conditions~\cite{Jain07}. In the above wave function, we have made the $\bar{n}$-parton spinful. An analogous wave function $\Psi_{1/m}^{\mathbb{Z}_n(n_\uparrow,n_\downarrow;\bar{n},0)}$ can be written where the $n$-parton is endowed with spin. Which configuration is preferred depends on the interaction. Our detailed calculations, shown in Appendix~\ref{sec: ThermoLim}, demonstrate that the fully spin-polarized states have better variational energies for all interactions considered in this article. \section{Discussion} \label{sec: discussion} Our work was motivated by an apparent discrepancy between theory and experiment for the FQHE at $\nu=7/3$: while theory finds the $\mathbb{Z}_4$ parton state to have lower energy than the Laughlin state, experiments are consistent with the latter. We find that when we take into account finite width corrections, there is a transition from the $\mathbb{Z}_4$ state into the Laughlin state at width $\sim 1.5 \ell$. All experimental observations of the 7/3 state appear to be for larger widths and thus fall in the region where the Laughlin state is favored. (Large mobilities, necessary for an observation of the 7/3 state, are typically obtained for relatively wide quantum wells because that minimizes the effect of interface roughening. One may alternatively go to low densities. 
The 7/3 state has been observed at very low densities~\cite{Pan14, Samkharadze17}, but even there, with 7/3 state occurring at $B\approx 0.9$T, the width of 65 nm translates approximately into 2.5 $\ell$.) It may be possible to decrease both the quantum well width and the density to get into the regime where the $\mathbb{Z}_4$ state is predicted. If a phase transition is observed (for example, by gap closing and reopening as a function of the density), it would provide evidence in favor of an unconventional $\mathbb{Z}_n$ state at small widths, and also of the role of large width in stabilizing the Laughlin state at 7/3. We note that a previous exact diagonalization calculation has also shown that the magnetoroton branch, absent at zero width, appears by the time the quantum well width is three magnetic lengths~\cite{Jolicoeur17}. Another exact diagonalization study has shown that finite width stabilizes the 7/3 Laughlin state~\cite{Balram20}. An additional experimental quantity, namely the chiral central charge, can be measured in thermal Hall conductance measurements and can sometimes distinguish between states with different topological content~\cite{Kane97, Cappelli02}. For the $\mathbb{Z}_n$ states, the chiral central charge is independent of the value of $n$ and so the thermal Hall conductance is predicted to be the same for all of these states. The measured value of the thermal Hall conductance at $7/3$ is consistent with all of these states \cite{Banerjee17b}. The Hall viscosity of all $\mathbb{Z}_n$ states is identical because they all have the same shift. These states are, however, not topologically equivalent as they have different topological entanglement entropies~\cite{Balram19a}. The clearest experimental signature distinguishing the states will be the charge of the fundamental quasiparticles. The Laughlin state has charge $-e/3$ quasiparticles while the $\mathbb{Z}_n$ state has charge $-e/(3n)$ quasiparticles. 
These quasiparticles can, in principle, be detected through scanning electron transistor experiments~\cite{Venkatachalam11}. The situation for shot noise experiments is more subtle. As Balram \emph{et al}. argued~\cite{Balram19a}, the $-e/(3n)$ quasiparticles are gapped at the edge and only the $-e/3$ quasiparticles can be excited at arbitrarily low temperatures. It may be possible, however, that the $-e/(3n)$ quasiparticles become relevant in shot noise experiments at somewhat elevated temperatures (or voltage bias). We note that the so-called anti-Read-Rezayi $4$-cluster (aRR$4$) state~\cite{Read99} also provides a plausible candidate wave function for the $7/3$ FQHE~\cite{Peterson15}. The energy of the aRR$4$ state is equal to the energy of the Laughlin state within numerical uncertainty~\cite{Peterson15}, in contrast to our $\mathbb{Z}_4$ state which has lower energy than Laughlin's. Furthermore, the aRR$4$ state has overlaps of 0.77 and 0.59 with the exact ground state for 10~\cite{Kusmierz18} and 12 particles, whereas the $\mathbb{Z}_2$ state has an overlap of 0.87 for 10 particles and $\mathbb{Z}_3$ has an overlap of 0.93 for 9 particles~\cite{Balram19a}. (The $\mathbb{Z}_4$ state requires a minimum of 16 particles, for which we cannot obtain overlaps.) Finally, assuming equilibration of all edge modes, the thermal Hall measurements at 7/3 are inconsistent with the chiral central charge of the aRR$4$ state~\cite{Banerjee17b}. \begin{acknowledgments} The work at Penn State was supported by the U. S. Department of Energy, Office of Basic Energy Sciences, under Grant no. DE-SC0005042. Some portions of this research were conducted with Advanced CyberInfrastructure computational resources provided by The Institute for CyberScience at The Pennsylvania State University. 
Some of the numerical calculations reported in this work were carried out on the Nandadevi supercomputer, which is maintained and supported by the Institute of Mathematical Sciences' High Performance Computing center. One of us (Th.J.) acknowledges CEA-DRF for providing CPU time on the supercomputer COBALT at GENCI-CCRT. \end{acknowledgments}
\section{Introduction} The cosmological constant problem, with the constant interpreted as a vacuum energy, is that the vacuum energy obtained from the equations of general relativity (GR) is much smaller than the vacuum energy predicted by particle theory (the standard model). This discrepancy can be avoided only if the initial conditions are chosen with extreme accuracy, which leads to the so-called fine-tuning problem. Several models have been proposed to resolve these problems. One of them modifies GR on distances larger than the size of the present universe [1]; in this model the extra dimensions have to remain infinite (non-compact) in order to obtain a consistent theory. On the other hand, the fine-tuning problem is addressed by the statistical approach to the different vacua of superstring theory with compactified extra dimensions [2]. Each vacuum realizes a 4-dimensional particle theory with a hidden sector, whose parameters determine, among other things, the vacuum energy of the 4-dimensional universe. Thus, among the huge number of superstring vacua, some fraction can realize the observed small value of the cosmological constant. The $p$-brane solutions of the low-energy supergravity of type IIA/IIB string theory and the discovery of D-branes in open string theory give a new view on cosmological models ([3], [4]). These branes are extended objects and interact with each other, and each brane with itself, through gravity. We consider the energy-momentum tensor induced on a D-brane by the non-trivial background given by $p$-brane solutions. This tensor, projected on the D-brane world-volume, has an interpretation as a cosmological constant. We present an explicit form of the cosmological constant as a function of the directions transverse to the D-brane. \section{Gravity generated by $p$-branes} The form of gravity when the fundamental constituents of matter are $p$-branes is considered, e.g.,
in [3], [4], [5] and [10]. Let us recall the form of the action and the solutions for the system consisting of a dilaton $\phi $, a graviton $g_{MN}$, and an antisymmetric tensor $A_{M_{1}...M_{d}}$ of arbitrary rank $d$ in a $D$-dimensional space-time $R^{D}$ coupled to an extended object. The action $I_{D}\left( d\right) $ for $\phi ,g_{MN},A_{M_{1}...M_{d}}$ has the form [10]:
\begin{equation}
I_{D}\left( d\right) =\frac{1}{2\kappa ^{2}}\int_{R^{D}}d^{D}x\sqrt{-g}\left( R\left( g\right) -\frac{1}{2}\left\vert d\phi \right\vert ^{2}-\frac{1}{2\left( d+1\right) !}e^{-\phi a\left( d\right) }F^{2}\right) , \tag{2.1}
\end{equation}
where:
\begin{gather*}
F^{2}=F_{M_{1}...M_{d+1}}F^{M_{1}...M_{d+1}}, \\
F_{M_{1}...M_{d+1}}=\left( dA\right) _{M_{1}...M_{d+1}}.
\end{gather*}
The above fields are coupled to an elementary $d$-dimensional extended object (a $\left( d-1\right) $-brane) $M$ with a world-volume metric $\gamma _{\mu \nu }$. This brane is embedded into $R^{D}$:
\begin{equation*}
X:M\rightarrow R^{D}.
\end{equation*}
The action $S_{d}$ for this brane is given by:
\begin{gather}
S_{d}=T_{d}\int_{M\times \mathbf{R}}d^{d}\xi \lbrack -\frac{1}{2}\sqrt{-\gamma }\gamma ^{\mu \nu }\partial _{\mu }X^{M}\partial _{\nu }X^{N}g_{MN}e^{a\phi /d}+ \notag \\
\frac{d-2}{2}\sqrt{-\gamma } \notag \\
-\frac{1}{d!}\varepsilon ^{\mu _{1}...\mu _{d}}\partial _{\mu _{1}}X^{M_{1}}...\partial _{\mu _{d}}X^{M_{d}}A_{M_{1}...M_{d}}], \tag{2.2}
\end{gather}
where $\mu ,\nu =0,1,...,d-1$. Thus the action $I\left( D,d\right) $ for the system consists of the sum of the actions (2.1) and (2.2):
\begin{equation}
I\left( D,d\right) =I_{D}\left( d\right) +S_{d}. \tag{2.3}
\end{equation}
In the action (2.3) there are five independent fields:
\begin{enumerate}
\item an antisymmetric field $A_{M_{1}...M_{d}}$,
\item a metric $g_{MN}$ on $R^{D}$,
\item a dilaton field $\phi $,
\item a vector field $X$ which embeds the brane $M$ into $R^{D}$,
\item a metric $\gamma _{\mu \nu }$ on $M$.
\end{enumerate} The equations of motion with respect to the above fields are:
\begin{itemize}
\item The condition $\frac{\delta I\left( D,d\right) }{\delta A}=0$ gives (the Maxwell equations with sources):
\begin{equation}
d\ast \left( e^{-a\phi }F\right) =2\kappa ^{2}\ast J, \tag{2.4}
\end{equation}
where the current $J$ is given by:
\begin{gather}
J^{M_{1}...M_{d}}\left( x\right) = \notag \\
T_{d}\int_{M\times \mathbf{R}}d^{d}\xi \varepsilon ^{\mu _{1}...\mu _{d}}\partial _{\mu _{1}}X^{M_{1}}...\partial _{\mu _{d}}X^{M_{d}}\chi , \tag{2.5}
\end{gather}
and $\chi =\delta ^{D}\left( x-X\left( \xi \right) \right) /\sqrt{-g}$.
\item The Einstein equations $\frac{\delta I\left( D,d\right) }{\delta g}=0$:
\begin{gather}
R_{MN}-\frac{1}{2}g_{MN}R=\frac{1}{2}\left( \partial _{M}\phi \partial _{N}\phi -\frac{1}{2}g_{MN}\left\vert d\phi \right\vert ^{2}\right) \notag \\
+\kappa ^{2}T_{MN} \notag \\
+\frac{e^{-a\phi }}{2d!}\left( F_{M}^{M_{1}...M_{d}}F_{NM_{1}...M_{d}}-\frac{1}{2\left( d+1\right) }g_{MN}F^{2}\right) , \tag{2.6}
\end{gather}
where $T_{MN}=g_{MA}g_{NB}T^{AB}$ is the energy-momentum tensor of the brane $M$:
\begin{equation}
T^{AB}\left( x\right) =T_{d}\int_{M\times \mathbf{R}}d^{d}\xi \sqrt{-\gamma }\gamma ^{\mu \nu }\partial _{\mu }X^{A}\partial _{\nu }X^{B}e^{a\phi /d}\chi , \tag{2.7}
\end{equation}
and $\chi =\delta ^{D}\left( x-X\left( \xi \right) \right) /\sqrt{-g}$.
\item The dilaton equation $\frac{\delta I\left( D,d\right) }{\delta \phi }=0$:
\begin{gather}
\partial _{M}\left( \sqrt{-g}g^{MN}\partial _{N}\phi \right) +\frac{a\sqrt{-g}}{2\left( d+1\right) }e^{-a\phi }F^{2}= \notag \\
=-\kappa ^{2}\frac{a}{d}T_{d}\int_{M}d^{d}\xi \sqrt{-\gamma }\gamma ^{ij}\partial _{i}X^{M}\partial _{j}X^{N}g_{MN}e^{a\phi /d}\chi \sqrt{-g}.
\tag{2.8}
\end{gather}
\item The brane equations $\frac{\delta I\left( D,d\right) }{\delta X}=0$:
\begin{gather}
\partial _{\mu }\left( \sqrt{-\gamma }\gamma ^{\mu \nu }\partial _{\nu }X^{N}g_{MN}e^{a\phi /d}\right) + \notag \\
-\frac{1}{2}\sqrt{-\gamma }\gamma ^{\mu \nu }\partial _{\mu }X^{N}\partial _{\nu }X^{P}\partial _{M}\left( g_{NP}e^{a\phi /d}\right) = \notag \\
\frac{1}{d!}\varepsilon ^{\mu _{1}...\mu _{d}}\partial _{\mu _{1}}X^{M_{1}}...\partial _{\mu _{d}}X^{M_{d}}F_{MM_{1}...M_{d}}. \tag{2.9}
\end{gather}
\item The equations of motion $\frac{\delta I\left( D,d\right) }{\delta \gamma }=0$ for the world-volume metric $\gamma $:
\begin{equation}
\gamma _{\mu \nu }=\partial _{\mu }X^{M}\partial _{\nu }X^{N}g_{MN}e^{a\phi /d}. \tag{2.10}
\end{equation}
\end{itemize}
In order to solve the above coupled system of equations (2.4)-(2.10), it is assumed that $R^{D}$ has the topology of the Cartesian product [10]:
\begin{equation}
R^{D}=M\times N, \tag{2.11}
\end{equation}
where $M$ is a $d$-dimensional manifold (a ($d-1$)-brane) with the Poincar\'{e} symmetry group $P\left( d\right) $ and $N$ is an isotropic manifold with the symmetry group $SO\left( D-d\right) $. The coordinates on $R^{D}$ are split accordingly:
\begin{equation*}
X^{M}=(x^{\mu },y^{m}),
\end{equation*}
where $x^{\mu }$ and $y^{m}$ refer to $M$ and $N$, respectively. The indices $\mu $ and $m$ have the ranges $\mu =0,1,...,d-1$ and $m=1,...,D-d$. In this topology the ansatz for the metric $g_{MN}$ has the form:
\begin{equation}
ds^{2}=g_{MN}dx^{M}dx^{N}=e^{2A\left( y\right) }\eta _{\mu \nu }dx^{\mu }dx^{\nu }+e^{2B\left( y\right) }\delta _{mn}dy^{m}dy^{n}, \tag{2.12}
\end{equation}
where the metric $\eta $ is diagonal: $\left( \eta _{\mu \nu }\right) =diag(+1,-1,...,-1)$. The functions $A$ and $B$ depend only on $y=\left( \mathbf{y\cdot y}\right) ^{1/2}$.
The form of the antisymmetric field $A$ is assumed to be:
\begin{equation}
A_{\mu _{1}...\mu _{d}}=-\frac{1}{\det \left( g_{\mu \nu }\right) }\varepsilon _{\mu _{1}...\mu _{d}}e^{C\left( y\right) }, \tag{2.13}
\end{equation}
where:
\begin{equation*}
\varepsilon _{\mu _{1}...\mu _{d}}=g_{\mu _{1}\nu _{1}}...g_{\mu _{d}\nu _{d}}\varepsilon ^{\nu _{1}...\nu _{d}},
\end{equation*}
with $\varepsilon ^{01...d-1}=+1$; the other components of $A$ are set to zero, and $\det \left( g_{\mu \nu }\right) =\left( -1\right) ^{d-1}e^{2Ad}$. Thus the field $F$ has the form:
\begin{equation}
F_{m\mu _{1}...\mu _{d}}=-\frac{1}{\det \left( g_{\mu \nu }\right) }\varepsilon _{\mu _{1}...\mu _{d}}\partial _{m}\left( e^{C\left( y\right) }\right) . \tag{2.14}
\end{equation}
The dilaton field $\phi $ depends only on $y$ since $N$ is isotropic:
\begin{equation*}
\phi =\phi \left( y\right) .
\end{equation*}
A static gauge choice for the vector field $X$ is also assumed:
\begin{equation*}
X^{\mu }=\xi ^{\mu },
\end{equation*}
where $\xi ^{\mu }$ are coordinates on the brane $M$. In this static gauge the field $X$ is equal to:
\begin{equation*}
X^{M}=\left( \xi ^{\mu },Y^{m}\right) .
\end{equation*}
The directions $Y$ transverse to the brane $M$ are constant: $Y^{m}=const.$ This means that the brane does not move in this particular coordinate system. Under the above conditions the metric $\gamma $ (Eq. (2.10)) takes the form:
\begin{equation*}
\gamma _{\mu \nu }=\eta _{\mu \nu }e^{2A+a\phi /d}.
\end{equation*}% One of the solutions for the above system with the flat asymptotic condition ( $g_{MN}\rightarrow \eta _{MN}$ ) is given by [10]: \begin{eqnarray} A\left( y\right) &=&\frac{\widetilde{d}}{2\left( d+\widetilde{d}\right) }% \left( C\left( y\right) -C_{0}\right) , \TCItag{2.15} \\ B\left( y\right) &=&-\frac{d}{2\left( d+\widetilde{d}\right) }\left( C\left( y\right) -C_{0}\right) , \TCItag{2.16} \\ e^{-C\left( y\right) } &=&\left\{ \begin{array}{cc} e^{-C_{0}}+\frac{k_{d}}{y^{\widetilde{d}}} & \text{for }\widetilde{d}>0 \\ e^{-C_{0}}+\frac{\kappa ^{2}T_{d}}{\pi }\ln y & \text{for }\widetilde{d}=0% \end{array}% \right. , \TCItag{2.17} \\ \frac{a}{d}\phi \left( y\right) &=&\frac{a^{2}}{4}\left( C\left( y\right) -C_{0}\right) +C_{0}, \TCItag{2.18} \\ a^{2}\left( d\right) &=&4-\frac{2\widetilde{d}d}{d+\widetilde{d}}, \TCItag{2.19} \end{eqnarray}% where: \begin{equation} k_{d}=\frac{2\kappa ^{2}T_{d}}{\Omega _{\widetilde{d}+1}\widetilde{d}} \tag{2.20} \end{equation}% and $\widetilde{d}=D-d-2$, $\Omega _{\widetilde{d}+1}$ is the volume of a ($% \widetilde{d}+1$)-dimensional sphere $S^{\widetilde{d}+1}$. Thus the metric $% g_{MN}$ is given by: \begin{gather} g_{MN}dX^{M}dX^{N}=\left( 1+\frac{k_{d}}{y^{\widetilde{d}}}e^{-C_{0}}\right) ^{-\frac{\widetilde{d}}{d+\widetilde{d}}}\eta _{\mu \nu }dx^{\mu }dx^{\nu }+ \notag \\ \left( 1+\frac{k_{d}}{y^{\widetilde{d}}}e^{-C_{0}}\right) ^{\frac{d}{d+% \widetilde{d}}}\delta _{mn}dy^{m}dy^{n}. \tag{2.21} \end{gather} The other solution for this system is given by a ($d+2$)-dimensional black-brane with the symmetry group: \begin{equation*} \mathbf{R\times }SO\left( d+1\right) \times SO\left( \widetilde{d}-1\right) . 
\end{equation*}
The metric for this system has the form:
\begin{gather}
ds^{2}=-\Delta _{+}\Delta _{-}^{-\frac{\widetilde{d}}{d+\widetilde{d}}}dt^{2}+\Delta _{+}^{-1}\Delta _{-}^{\frac{a^{2}}{2d}-1}dr^{2}+r^{2}\Delta _{-}^{\frac{a^{2}}{2d}}d\Omega _{d+1}^{2} \notag \\
+\Delta _{-}^{\frac{d}{d+\widetilde{d}}}dX_{i}dX^{i}, \tag{2.22}
\end{gather}
where $i=1,...,\widetilde{d}-1$ and:
\begin{eqnarray}
e^{-2\phi } &=&\Delta _{-}^{a}, \TCItag{2.23} \\
\Delta _{\pm } &=&1-\left( \frac{r_{\pm }}{r}\right) ^{d}, \TCItag{2.24} \\
F_{d+1} &=&\left( r_{+}r_{-}\right) ^{d/2}\varepsilon _{d+1}d, \TCItag{2.25}
\end{eqnarray}
where $\varepsilon _{d+1}$ is the volume form of the ($d+1$)-dimensional sphere $S^{d+1}$ with the metric $d\Omega _{d+1}^{2}=h_{ab}d\varphi ^{a}d\varphi ^{b}$. The radii $r_{+}$ and $r_{-}$ are related to the mass $\emph{M}_{\widetilde{d}}$ per unit ($\widetilde{d}-1$)-volume and to the magnetic charge $g_{\widetilde{d}}$:
\begin{eqnarray}
\emph{M}_{\widetilde{d}} &=&\int d^{D-d}\Theta _{00}=\frac{\Omega _{d+1}}{2\kappa ^{2}}[\left( d+1\right) r_{+}^{d}-r_{-}^{d}], \TCItag{2.26} \\
g_{\widetilde{d}} &=&\frac{1}{\sqrt{2}\kappa }\int_{S^{d+1}}e^{-a\phi }\ast F=\frac{\Omega _{d+1}}{\sqrt{2}\kappa }d\left( r_{+}r_{-}\right) ^{d/2}, \TCItag{2.27}
\end{eqnarray}
where $\Theta _{MN}$ is the total energy-momentum tensor of the system and $\ast $ is the Hodge duality operator with respect to the metric (2.22). In the case when $r_{+}=r_{-}=r_{0}$, the mass and charge are related by:
\begin{equation}
\emph{M}_{\widetilde{d}}=\sqrt{2}\kappa g_{\widetilde{d}}. \tag{2.28}
\end{equation}
This means that the brane becomes an extremal $p$-brane (a BPS state).
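As a quick consistency check of Eq. (2.19), the coupling $a^{2}(d)$ is symmetric under the interchange $d\leftrightarrow \widetilde{d}$, as expected for a brane and its magnetic dual, and reproduces the familiar values in $D=10$ (a minimal sketch; the function name is ours):

```python
# a^2(d) = 4 - 2*d*dt/(d + dt) with dt = D - d - 2, cf. Eq. (2.19)
def a_squared(d, D):
    dt = D - d - 2   # worldvolume dimension of the dual brane
    return 4 - 2 * d * dt / (d + dt)

D = 10
# symmetric under d <-> dt, i.e. under electric-magnetic duality
for d in range(1, D - 2):
    assert a_squared(d, D) == a_squared(D - d - 2, D)

print(a_squared(2, D))  # d=2 (string): a^2 = 1
print(a_squared(4, D))  # d=4 (self-dual three-brane): a^2 = 0
```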
In this extremal case the metric (2.22) takes the form:
\begin{equation}
ds^{2}=\Delta ^{\frac{d}{d+\widetilde{d}}}\left( -dt^{2}+dX_{i}dX^{i}\right) +\Delta ^{-\frac{\widetilde{d}}{d+\widetilde{d}}}\left( d\rho ^{2}+\rho ^{2}d\Omega _{d+1}^{2}\right) , \tag{2.29}
\end{equation}
where $\rho ^{d}=r^{d}-r_{0}^{d}$ and $\Delta =1+\left( r_{0}/\rho \right) ^{d}$. \section{D-brane motion in the field of the black-brane} We consider a $D_{d^{\prime }-1}$-brane $M$ embedded in the background of the ($d+2$)-blackbrane $N$ in the $D$-dimensional space-time $R^{D}$. This ($d+2$)-blackbrane wraps a $\left( d+1\right) $-dimensional sphere. The metric of $R^{D}$ in the presence of the blackbrane is given by Eq. (2.22). Thus the metric $\gamma _{\alpha \beta }$ induced on $M$ by $g_{MN}$ has the form:
\begin{equation}
\gamma _{\alpha \beta }=g_{MN}\frac{\partial X^{M}}{\partial \xi ^{\alpha }}\frac{\partial X^{N}}{\partial \xi ^{\beta }}, \tag{3.1}
\end{equation}
where $X$ is an embedding of $M$ in $R^{D}$:
\begin{eqnarray*}
X &:&M\times \mathbf{R}^{1}\rightarrow R^{D}, \\
X^{M} &=&X^{M}\left( \xi ^{0},\xi ^{a}\right)
\end{eqnarray*}
and $\alpha ,\beta =0,1,...,d^{\prime }-1$. We assume that the time coordinate in $R^{D}$ coincides with the world-volume time of $M$ and that $d^{\prime }-1$ spatial directions of $N$ are parallel to $M$. Thus the embedding $X^{M}$ has the form:
\begin{equation}
X^{M}\left( \xi ^{0},\xi ^{a}\right) =\left( \xi ^{0},\xi ^{a},X^{m}\left( \xi ^{0}\right) \right) , \tag{3.2}
\end{equation}
where $a=1,...,d^{\prime }-1$ and $m=1,...,D-d^{\prime }$. The coordinates on $M$ and $R^{D}$ selected in this way form the static gauge.
For the metric $g_{MN}$ (produced by the ($d+2$)-blackbrane wrapped on $S^{d+1}$) equal to
\begin{equation}
ds^{2}=\lambda _{0}dt^{2}+\lambda _{1}\sum_{i=1}^{\widetilde{d}-1}dX_{i}^{2}+\lambda _{2}dr^{2}+r^{2}\lambda _{3}d\Omega _{d+1}^{2}, \tag{3.3}
\end{equation}
the metric $\gamma _{\alpha \beta }$ induced by the embedding (3.2) takes the form:
\begin{eqnarray}
\gamma _{00} &=&\lambda _{0}+\lambda _{1}\sum_{i=d^{\prime }}^{\widetilde{d}-1}\overset{\cdot }{X}_{i}^{2}+\lambda _{2}\overset{\cdot }{r}^{2}+r^{2}\lambda _{3}\overset{\cdot }{\mathbf{\varphi }}^{2}, \TCItag{3.4} \\
\gamma _{ab} &=&\lambda _{1}\delta _{ab},\text{ for }d^{\prime }-1\leq \widetilde{d}-1, \TCItag{3.5}
\end{eqnarray}
and $\gamma _{a0}=0$, where:
\begin{equation*}
\overset{\cdot }{\mathbf{\varphi }}^{2}=h_{rs}\overset{\cdot }{\varphi }^{r}\overset{\cdot }{\varphi }^{s},
\end{equation*}
and $h_{rs}=h_{rs}\left( \varphi \right) $ ($r,s=1,...,d+1$) is the metric on $S^{d+1}$. The coordinates $X^{m}$ in the metric (3.3) are as follows:
\begin{equation*}
X^{m}=\left( X^{i},r,\varphi ^{s}\right) ,
\end{equation*}
where $i=d^{\prime },...,\widetilde{d}-1$. In the case when the background metric has the form:
\begin{equation}
ds^{2}=\lambda _{0}dt^{2}+\lambda _{1}\sum_{i=1}^{\widetilde{d}-1}dX_{i}^{2}+\lambda _{2}\sum_{m=1}^{d+2}dX_{m}^{2}, \tag{3.6}
\end{equation}
the induced metric takes the form:
\begin{eqnarray}
\gamma _{00} &=&\lambda _{0}+\lambda _{1}\sum_{i=d^{\prime }}^{\widetilde{d}-1}\overset{\cdot }{X}_{i}^{2}+\lambda _{2}\sum_{m=1}^{d+2}\overset{\cdot }{X}_{m}^{2}, \TCItag{3.7} \\
\gamma _{0a} &=&0, \TCItag{3.8} \\
\gamma _{ab} &=&\lambda _{1}\delta _{ab}\text{, for }d^{\prime }-1\leq \widetilde{d}-1.
\TCItag{3.9}
\end{eqnarray}
If $d^{\prime }-1\geq \widetilde{d}-1$, the metric $\gamma $ is given by:
\begin{eqnarray}
\gamma _{00} &=&\lambda _{0}+\lambda _{2}\sum_{m=1}^{d+2}\overset{\cdot }{X}_{m}^{2}, \TCItag{3.10} \\
\gamma _{a_{1}b_{1}} &=&\lambda _{1}\delta _{a_{1}b_{1}}\text{ for }a_{1},b_{1}=1,...,\widetilde{d}-1, \TCItag{3.11a} \\
\gamma _{a_{2}b_{2}} &=&\lambda _{2}\delta _{a_{2}b_{2}}\text{ for }a_{2},b_{2}=\widetilde{d},...,d^{\prime }-1. \TCItag{3.11b}
\end{eqnarray}
Thus in the gauge (3.2) the metric $\gamma $ induced on $M$ by the blackbrane $N$ (the latter producing the background metric (2.22)) has the form:
\begin{eqnarray}
\gamma _{00} &=&-\Delta _{+}\Delta _{-}^{-\frac{\widetilde{d}}{d+\widetilde{d}}}+\Delta _{+}^{-1}\Delta _{-}^{\frac{a^{2}}{2d}-1}\overset{\cdot }{r}^{2}+r^{2}\Delta _{-}^{\frac{a^{2}}{2d}}\overset{\cdot }{\mathbf{\varphi }}^{2}+\Delta _{-}^{\frac{d}{d+\widetilde{d}}}\overset{\cdot }{X}_{i}\overset{\cdot }{X}^{i}, \TCItag{3.12} \\
\gamma _{0a} &=&0, \TCItag{3.13} \\
\gamma _{ab} &=&\Delta _{-}^{\frac{d}{d+\widetilde{d}}}\delta _{ab}. \TCItag{3.14}
\end{eqnarray}
For the static case $\overset{\cdot }{X}_{i}=0$ ($i=1,...,\widetilde{d}-1$), $\gamma _{00}$ takes the form:
\begin{equation}
\gamma _{00}=-\Delta _{+}\Delta _{-}^{-\frac{\widetilde{d}}{d+\widetilde{d}}}+\Delta _{+}^{-1}\Delta _{-}^{\frac{a^{2}}{2d}-1}\overset{\cdot }{r}^{2}+r^{2}\Delta _{-}^{\frac{a^{2}}{2d}}\overset{\cdot }{\mathbf{\varphi }}^{2}. \tag{3.15}
\end{equation}
\section{The energy-momentum tensor for a D-brane} The energy-momentum tensor of the ($d^{\prime }-1$)-brane $M$ in the background $g_{MN}$ is given by Eq. (2.7) and is expressed by the matrix:
\begin{equation}
\left( T^{MN}\right) =\left(
\begin{array}{ccc}
T^{00} & T^{0a} & T^{0m} \\
T^{a0} & T^{ab} & T^{am} \\
T^{m0} & T^{ma} & T^{mn}
\end{array}
\right) , \tag{4.1}
\end{equation}
where $a,b=1,...,d^{\prime }-1$ and $m,n=1,...,D-d^{\prime }$.
The generic form of the background metric $g_{MN}$ produced by a ($d-1$)-brane is given by (see (3.3) and (3.6)):
\begin{equation}
\left( g_{MN}\right) =\left(
\begin{array}{cc}
\left(
\begin{array}{cc}
\lambda _{0} & \left( 0\right) \\
\left( 0\right) & \lambda _{1}I_{\widetilde{d}-1}
\end{array}
\right) & 0 \\
0 & \left( g_{rs}\right)
\end{array}
\right) , \tag{4.2}
\end{equation}
where $I_{\widetilde{d}-1}$ is the ($\widetilde{d}-1$)-dimensional unit matrix and $r,s=1,...,d+2$. Thus the induced metric $\gamma _{\mu \nu }$ on the brane $M$ for the embedding (3.2) has the form given either by Eqs. (3.4)-(3.5) or by Eqs. (3.7)-(3.11). The components of the energy-momentum tensor $T^{MN}$ for the metric (4.2) in the embedding (3.2) take the form:
\begin{eqnarray}
T^{\mu \nu } &=&T_{d}\sqrt{\frac{\gamma }{g}}\gamma ^{\mu \nu }e^{a\phi /d}\widehat{\delta }, \TCItag{4.3} \\
T^{m0} &=&T^{0m}=T_{d}\sqrt{\frac{\gamma }{g}}\gamma ^{00}\overset{\cdot }{X}^{m}e^{a\phi /d}\widehat{\delta }, \TCItag{4.4} \\
T^{mn} &=&T_{d}\sqrt{\frac{\gamma }{g}}\gamma ^{00}\overset{\cdot }{X}^{m}\overset{\cdot }{X}^{n}e^{a\phi /d}\widehat{\delta }, \TCItag{4.5}
\end{eqnarray}
where $\widehat{\delta }=\delta ^{D-d}\left( x^{m}-X^{m}\left( \xi ^{0}\right) \right) $. The other components ($T^{a0}$, $T^{am}$) vanish, and $\gamma =\det \left( \gamma _{\mu \nu }\right) $, $g=\det \left( g_{MN}\right) $.
\subsection{Cosmological constant induced by the blackbranes} The ratio of the determinants $\gamma /g$ for the metrics (3.3), (3.4) and (3.5) is given by:
\begin{equation}
\gamma /g=\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}}\Gamma }{r^{2\left( d+1\right) }\lambda _{2}\lambda _{3}^{d+1}\det h}, \tag{4.6}
\end{equation}
where:
\begin{equation}
\Gamma =1+\frac{\lambda _{1}}{\lambda _{0}}\sum_{i=d^{\prime }}^{\widetilde{d}-1}\overset{\cdot }{X}_{i}^{2}+\frac{\lambda _{2}}{\lambda _{0}}\overset{\cdot }{r}^{2}+r^{2}\frac{\lambda _{3}}{\lambda _{0}}\overset{\cdot }{\mathbf{\varphi }}^{2}, \tag{4.7}
\end{equation}
and $\det h$ is the determinant of the metric $h_{rs}$ on the sphere $S^{d+1}$. For the embedding (3.2) and the metric (3.3), the time-dependent components $\overset{\cdot }{X}^{m}$ in Eqs. (4.4)-(4.5) have the form:
\begin{equation*}
\overset{\cdot }{X}^{m}=\left( \overset{\cdot }{X}^{i},\overset{\cdot }{r},\overset{\cdot }{\varphi }^{s}\right) ,
\end{equation*}
where $i=d^{\prime },...,\widetilde{d}-1$ and $s=1,...,d+1$ ($\varphi ^{s}$ are coordinates on $S^{d+1}$).
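Since both metrics are block diagonal, the ratio (4.6) is just the ratio of the products of the block determinants; the following spot-check with arbitrary illustrative values (all names are ours, not from the text) confirms the bookkeeping of the exponents:

```python
# Spot-check of Eq. (4.6):
#   gamma/g = lambda1^(d'-dt) * Gamma / (r^(2(d+1)) * lambda2 * lambda3^(d+1) * det h)
# for g     = diag(lambda0, lambda1*I_{dt-1}, lambda2, r^2*lambda3*h)   (Eq. (3.3))
# and gamma = diag(lambda0*Gamma, lambda1*I_{d'-1})                     (Eqs. (3.4)-(3.5)).
D, dp, d = 10, 4, 1                    # dp = d'; here dt = 7, so d'-1 <= dt-1 holds
dt = D - d - 2
l0, l1, l2, l3 = -2.0, 3.0, 5.0, 7.0   # lambda_0 < 0 (timelike direction), the others > 0
r, det_h, Gamma = 2.0, 11.0, 13.0      # arbitrary illustrative values

# determinant of a block-diagonal matrix = product of the block determinants;
# the sphere block r^2*lambda3*h is (d+1)x(d+1), so its determinant is (r^2*l3)^(d+1)*det_h
det_g   = l0 * l1**(dt - 1) * l2 * (r**2 * l3)**(d + 1) * det_h
det_gam = (l0 * Gamma) * l1**(dp - 1)

ratio   = det_gam / det_g
formula = l1**(dp - dt) * Gamma / (r**(2 * (d + 1)) * l2 * l3**(d + 1) * det_h)
assert abs(ratio - formula) < 1e-12 * abs(formula)
print(ratio)
```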
In this way we obtain the explicit form of $T^{MN}$:
\begin{eqnarray}
T^{\mu \nu } &=&\frac{T_{d^{\prime }}}{r^{d+1}}\sqrt{\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}}\Gamma }{\lambda _{2}\lambda _{3}^{d+1}\det h}}\gamma ^{\mu \nu }e^{a\phi /d}\widehat{\delta }, \TCItag{4.8} \\
T^{m0} &=&\frac{T_{d^{\prime }}}{r^{d+1}}\sqrt{\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}}\Gamma }{\lambda _{2}\lambda _{3}^{d+1}\det h}}\frac{\overset{\cdot }{X}^{m}e^{a\phi /d}}{\lambda _{0}\Gamma }\widehat{\delta }, \TCItag{4.9} \\
T^{mn} &=&\frac{T_{d^{\prime }}}{r^{d+1}}\sqrt{\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}}\Gamma }{\lambda _{2}\lambda _{3}^{d+1}\det h}}\frac{\overset{\cdot }{X}^{m}\overset{\cdot }{X}^{n}e^{a\phi /d}}{\lambda _{0}\Gamma }\widehat{\delta }, \TCItag{4.10}
\end{eqnarray}
since $\gamma _{00}=\lambda _{0}\Gamma $ and $\gamma ^{00}=\left( \lambda _{0}\Gamma \right) ^{-1}$. The $D$-brane tension $T_{d^{\prime }}$ is given by [12, 13]:
\begin{equation*}
T_{d^{\prime }}^{2}=\frac{\pi }{\kappa _{\left( 10\right) }^{2}}\left( 4\pi ^{2}\alpha ^{\prime }\right) ^{4-d^{\prime }}.
\end{equation*}
The pull-back of $T^{MN}$ by the embedding $X$ gives the energy-momentum tensor $\widetilde{T}_{\mu \nu }$ on the D-brane:
\begin{equation}
\widetilde{T}_{\mu \nu }=T^{AB}g_{AM}g_{BN}\frac{\partial X^{M}}{\partial \xi ^{\mu }}\frac{\partial X^{N}}{\partial \xi ^{\nu }}. \tag{4.11}
\end{equation}
Thus we obtain from (4.11):
\begin{gather*}
\widetilde{T}_{00}=T^{00}g_{00}^{2}+T^{m_{1}n_{1}}g_{m_{1}m}g_{n_{1}n}\overset{\cdot }{X}^{m}\overset{\cdot }{X}^{n}, \\
\widetilde{T}_{0a}=0, \\
\widetilde{T}_{ab}=T^{cd}g_{ca}g_{bd},
\end{gather*}
where
\begin{eqnarray*}
g_{00} &=&\lambda _{0}, \\
\left( g_{m_{1}m}\right) &=&\left(
\begin{array}{ccc}
\lambda _{1}I_{\widetilde{d}-d^{\prime }} & 0 & 0 \\
0 & \lambda _{2} & 0 \\
0 & 0 & r^{2}\lambda _{3}\left( h_{ps}\right)
\end{array}
\right) , \\
\left( g_{ac}\right) &=&\lambda _{1}I_{d^{\prime }-1}.
\end{eqnarray*}
Because $g_{mn}\overset{\cdot }{X}^{m}\overset{\cdot }{X}^{n}=\lambda _{0}\left( \Gamma -1\right) $, we get from (4.8)-(4.10):
\begin{gather}
\widetilde{T}_{00}=\frac{T_{d^{\prime }}}{r^{d+1}}\sqrt{\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}}}{\lambda _{2}\lambda _{3}^{d+1}\Gamma \det h}}e^{a\phi /d}\lambda _{0}\left[ \Gamma ^{2}-2\Gamma +2\right] , \tag{4.12} \\
\widetilde{T}_{ab}=\frac{T_{d^{\prime }}}{r^{d+1}}\sqrt{\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}}\Gamma }{\lambda _{2}\lambda _{3}^{d+1}\det h}}e^{a\phi /d}\lambda _{1}\delta _{ab}, \tag{4.13}
\end{gather}
modulo delta functions. In the static case ($\overset{\cdot }{X}^{m}=0$), $\Gamma =1$, so Eqs. (4.12)-(4.13) take the form:
\begin{equation}
\widetilde{T}_{\mu \nu }=\frac{T_{d^{\prime }}}{r^{d+1}}\sqrt{\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}}}{\lambda _{2}\lambda _{3}^{d+1}\det h}}e^{a\phi /d}\gamma _{\mu \nu }. \tag{4.14}
\end{equation}
This tensor contains the factor
\begin{equation}
\Lambda _{b}\left( r;d^{\prime },d\right) =\frac{T_{d^{\prime }}}{r^{d+1}}\sqrt{\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}}}{\lambda _{2}\lambda _{3}^{d+1}\det h}}e^{a\phi /d}, \tag{4.15}
\end{equation}
which depends only on the direction $r$ transverse to the brane $M$. Thus for a fixed position of $M$ in the ambient space $R^{D}$ the quantity $\Lambda _{b}$ has a constant value. The equations of gravity on $M$ take the form:
\begin{equation}
R_{\mu \nu }-\frac{1}{2}\gamma _{\mu \nu }R=t_{\mu \nu }+\widetilde{T}_{\mu \nu }, \tag{4.16}
\end{equation}
where $R_{\mu \nu }$ is the Ricci tensor and $R$ is the scalar curvature with respect to the metric $\gamma _{\mu \nu }$, and $t_{\mu \nu }$ is the energy-momentum tensor for the matter and fields on the D-brane.
Because $\widetilde{T}_{\mu \nu }$ is the product of $\Lambda _{b}$ (which is constant on $M$) and the metric $\gamma _{\mu \nu }$, Eq. (4.16) takes the form:
\begin{equation*}
R_{\mu \nu }-\frac{1}{2}\gamma _{\mu \nu }R=t_{\mu \nu }+\Lambda _{b}\gamma _{\mu \nu }.
\end{equation*}
Thus $\Lambda _{b}$ can be identified as a cosmological constant produced by the other branes. For the metric (2.22) one obtains:
\begin{equation}
\Lambda _{b}\left( r;d^{\prime },d\right) =\frac{T_{d^{\prime }}}{r^{2\left( d+1\right) }}\sqrt{\frac{\Delta _{+}}{\det h}}\Delta _{-}^{\sigma }, \tag{4.17}
\end{equation}
where:
\begin{equation}
\sigma \left( d^{\prime },d;D\right) =\frac{\widetilde{d}\left( 3-d\right) +d^{\prime }d}{2\left( D-2\right) }-\frac{3}{d}+\frac{1}{2}. \tag{4.18}
\end{equation}
In the static case induced by the ($d-1$)-dimensional blackbrane, the term $\Lambda _{b}$ on the $D_{d^{\prime }-1}$-brane takes the form:
\begin{equation}
\Lambda _{b}\left( r;d^{\prime },d\right) =\frac{T_{d^{\prime }}}{r^{2\left( d+1\right) }}\sqrt{\frac{1}{\det h}}\left( 1-\frac{r_{+}^{d}}{r^{d}}\right) ^{1/2}\left( 1-\frac{r_{-}^{d}}{r^{d}}\right) ^{\sigma }, \tag{4.19}
\end{equation}
where the radial coordinate $r$ is interpreted as the distance from the center of the blackbrane wrapped on $S^{d+1}$ to the center of the ($d^{\prime }-1$)-brane. The worldvolume dimensions $d$ of the blackbranes range from $1$ to $D-1$. Thus the total term $\Lambda _{b}$ induced by a set of blackbranes of different dimensions can be expressed by the following sum:
\begin{equation}
\Lambda _{b}\left( r_{1},...,r_{D-1};d^{\prime }\right) =\sum_{d=1}^{D-1}\Lambda _{b}\left( r_{d};d^{\prime },d\right) , \tag{4.20}
\end{equation}
where $r_{d}$ is the distance from the ($d-1$)-dimensional brane to the ($d^{\prime }-1$)-brane. In the case when $D=10$ and $d^{\prime }=4$,
\begin{equation}
\sigma \left( 4,d;10\right) =2-\frac{3}{d}+\frac{1}{16}\left( d-7\right) d.
\tag{4.21}
\end{equation}
Thus
\begin{equation}
\Lambda _{b}\left( r;4,d\right) =\frac{T_{4}}{r^{2\left( d+1\right) }}\sqrt{\frac{1}{\det h}}\left( 1-\frac{r_{+}^{d}}{r^{d}}\right) ^{1/2}\left( 1-\frac{r_{-}^{d}}{r^{d}}\right) ^{2-\frac{3}{d}+\frac{1}{16}\left( d-7\right) d} \tag{4.22}
\end{equation}
and the total cosmological constant is given by:
\begin{equation}
\Lambda _{b}\left( r_{1},...,r_{9};4\right) =\sum_{d=1}^{9}\Lambda _{b}\left( r_{d};4,d\right) . \tag{4.23}
\end{equation}
In this way we showed that the induced cosmological constant is a function of the dimensions of the blackbranes and of their distances to the 4-dimensional brane. In the non-static case ($\Gamma \neq 1$) we introduce a scalar field $\phi $ which is related to the transverse coordinates of the blackbrane:
\begin{equation}
\phi ^{2}=\frac{\lambda _{1}}{\left\vert \lambda _{0}\right\vert }\sum_{i=d^{\prime }}^{\widetilde{d}-1}\overset{\cdot }{X}_{i}^{2}+\frac{\lambda _{2}}{\left\vert \lambda _{0}\right\vert }\overset{\cdot }{r}^{2}+r^{2}\frac{\lambda _{3}}{\left\vert \lambda _{0}\right\vert }\overset{\cdot }{\mathbf{\varphi }}^{2}. \tag{4.24}
\end{equation}
Thus:
\begin{equation}
\Gamma =1-\phi ^{2}, \tag{4.25}
\end{equation}
since $\lambda _{0}$ is negative. The Eqs. (4.12-13) take the forms:
\begin{eqnarray}
\widetilde{T}_{00} &=&\Lambda _{b}\frac{1+\phi ^{4}}{\sqrt{1-\phi ^{2}}}\gamma _{00}, \TCItag{4.26} \\
\widetilde{T}_{ab} &=&\Lambda _{b}\sqrt{1-\phi ^{2}}\gamma _{ab}, \TCItag{4.27}
\end{eqnarray}
where $\Lambda _{b}$ is given by (4.19) or by (4.22) for $d^{\prime }=4$. Let us compare, in the comoving frame ($u_{a}=0$), this induced energy-momentum tensor with the energy-momentum tensor of a perfect fluid $T_{\mu \nu }$ with energy density $\varepsilon $ and pressure $p$:
\begin{equation}
T_{\mu \nu }=\left( \varepsilon +p\right) u_{\mu }u_{\nu }-p\gamma _{\mu \nu }.
\tag{4.28}
\end{equation}
As a result of this comparison one obtains:
\begin{eqnarray}
\varepsilon &=&\Lambda _{b}\frac{1+\phi ^{4}}{\sqrt{1-\phi ^{2}}}, \TCItag{4.29} \\
p &=&-\Lambda _{b}\sqrt{1-\phi ^{2}}. \TCItag{4.30}
\end{eqnarray}
The corresponding equation of state has the form:
\begin{equation}
w=p/\varepsilon =-\frac{1-\phi ^{2}}{1+\phi ^{4}}. \tag{4.31}
\end{equation}
For the variety of blackbranes we get a set of fields $\phi _{d}$. Thus the effective energy and pressure have the form:
\begin{eqnarray}
\varepsilon &=&\sum_{d=1}^{9}\Lambda _{b}\left( r_{d};4,d\right) \frac{1+\phi _{d}^{4}}{\sqrt{1-\phi _{d}^{2}}}, \TCItag{4.32} \\
p &=&-\sum_{d=1}^{9}\Lambda _{b}\left( r_{d};4,d\right) \sqrt{1-\phi _{d}^{2}}. \TCItag{4.33}
\end{eqnarray}
In this case the equation of state is:
\begin{equation}
w=-\frac{\sum_{d=1}^{9}\Lambda _{b}\left( r_{d};4,d\right) \sqrt{1-\phi _{d}^{2}}}{\sum_{d=1}^{9}\Lambda _{b}\left( r_{d};4,d\right) \frac{1+\phi _{d}^{4}}{\sqrt{1-\phi _{d}^{2}}}}. \tag{4.34}
\end{equation}
One can see from the above that for certain values of the fields $\phi _{d}$ the equation of state satisfies $w\leq -1/3$, which corresponds to the exotic matter interpretation on the D$3$-brane.
\subsection{Cosmological constant induced by the branes without horizon}
In this case the background metric is given by (2.21) and the induced metric $\gamma $ is given by (3.7-9) for $d^{\prime }-1\leq \widetilde{d}-1$ and by (3.10-11a,b) for $d^{\prime }-1\geq \widetilde{d}-1$. Thus the ratio of the corresponding determinants has the form:
\begin{equation}
\frac{\det \gamma }{\det g}=\left\{
\begin{array}{c}
\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}}}{\lambda _{2}^{d+2}}\Gamma \text{ for }d^{\prime }-1\leq \widetilde{d}-1, \\
\lambda _{2}^{d^{\prime }+2d+4-D}\Omega \text{ for }d^{\prime }-1\geq \widetilde{d}-1,
\end{array}
\right.
\tag{4.35} \end{equation}% where:% \begin{eqnarray} \Gamma &=&1+\frac{\lambda _{1}}{\lambda _{0}}\sum_{i=d^{\prime }}^{% \widetilde{d}-1}\overset{\cdot }{y}_{i}^{2}+\frac{\lambda _{2}}{\lambda _{0}}% \sum_{m=1}^{d+2}\overset{\cdot }{y}_{m}^{2}, \TCItag{4.36} \\ \Omega &=&1+\frac{\lambda _{2}}{\lambda _{0}}\sum_{m=1}^{d+2}\overset{\cdot }% {y}_{m}^{2}. \TCItag{4.37} \end{eqnarray}% Let us consider first the case when $d^{\prime }-1\leq \widetilde{d}-1$. Thus:% \begin{eqnarray} T^{\mu \nu } &=&T_{d^{\prime }}\sqrt{\frac{\lambda _{1}^{d^{\prime }-% \widetilde{d}}\Gamma }{\lambda _{2}^{d+1}}}\gamma ^{\mu \nu }e^{a\phi /d}% \widehat{\delta }, \TCItag{4.38} \\ T^{m0} &=&T_{d^{\prime }}\sqrt{\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}% }\Gamma }{\lambda _{2}^{d+1}}}\gamma ^{00}\overset{\cdot }{y}^{m}e^{a\phi /d}% \widehat{\delta }, \TCItag{4.39} \\ T^{mn} &=&T_{d^{\prime }}\sqrt{\frac{\lambda _{1}^{d^{\prime }-\widetilde{d}% }\Gamma }{\lambda _{2}^{d+1}}}\gamma ^{00}\overset{\cdot }{y}^{m}\overset{% \cdot }{y}^{n}e^{a\phi /d}\widehat{\delta }. \TCItag{4.40} \end{eqnarray}% Proceeding as before one obtains the following pull-back of this tensor on the $D_{d^{\prime }-1}$-brane:% \begin{gather} \widetilde{T}_{00}=T^{00}g_{00}^{2}+T^{m_{1}n_{1}}g_{m_{1}m}g_{n_{1}n}% \overset{\cdot }{y}^{m}\overset{\cdot }{y}^{n} \tag{4.41} \\ \widetilde{T}_{0a}=0, \tag{4.42} \\ \widetilde{T}_{ab}=T^{cd}g_{ca}g_{bd}, \tag{4.43} \end{gather}% where:\ \ \begin{eqnarray} g_{00} &=&\lambda _{0,} \TCItag{4.44} \\ \left( g_{m_{1}m}\right) &=&\left( \begin{array}{cc} \lambda _{1}I_{\widetilde{d}-d^{\prime }} & \\ & \lambda _{2}I_{d+2}% \end{array}% \right) , \TCItag{4.45} \\ \left( g_{ac}\right) &=&\lambda _{1}I_{d^{\prime }-1}. 
\TCItag{4.46} \end{eqnarray}% Thus:% \begin{eqnarray} \widetilde{T}_{00} &=&T_{d^{\prime }}\sqrt{\frac{\lambda _{1}^{d^{\prime }-% \widetilde{d}}}{\lambda _{2}^{d+1}\Gamma }}e^{a\phi /d}\lambda _{0}\left[ \Gamma ^{2}-2\Gamma +2\right] , \TCItag{4.47} \\ \widetilde{T}_{ab} &=&T_{d^{\prime }}\sqrt{\frac{\lambda _{1}^{d^{\prime }-% \widetilde{d}}\Gamma }{\lambda _{2}^{d+1}}}e^{a\phi /d}\lambda _{1}\delta _{ab}. \TCItag{4.48} \end{eqnarray}% In the static case ($\Gamma =1$) this energy-momentum tensor has the form:% \begin{equation} \widetilde{T}_{\mu \nu }=\Lambda _{o}^{\prime }\gamma _{\mu \nu }, \tag{4.49} \end{equation}% where:% \begin{equation} \Lambda _{o}^{\prime }\left( d^{\prime },d,D;y\right) =T_{d^{\prime }}\left( 1+\frac{k_{d}}{y^{\widetilde{d}}}e^{-C_{0}}\right) ^{\sigma } \tag{4.50} \end{equation}% and \begin{equation} \sigma =\frac{d}{2\left( d+\widetilde{d}\right) }\left( d^{\prime }+2% \widetilde{d}-2\right) . \tag{4.51} \end{equation}% In the superstring regime $d+\widetilde{d}=8$ and for $d^{\prime }=4$ the exponent $\sigma $ is equal to:% \begin{equation} \sigma =\frac{d\left( 18-d\right) }{16} \tag{4.52} \end{equation}% and $d\leq 4$. Thus the term $\Lambda _{o}^{\prime }$ takes the form:% \begin{equation} \Lambda _{o}^{\prime }\left( 4,d,10;y\right) =T_{4}\left( 1+\frac{k_{d}}{y^{% \widetilde{d}}}e^{-C_{0}}\right) ^{\frac{d\left( 18-d\right) }{16}} \tag{4.53} \end{equation}% and for $y\rightarrow \infty $ tends to $T_{4}$. The second case is for $d^{\prime }-1\geq \widetilde{d}-1$. The induced energy-momentum tensor is given by:% \begin{eqnarray} \widetilde{T}_{00} &=&T_{d^{\prime }}\sqrt{\frac{\lambda _{2}^{d^{\prime }+2d+4-D}}{\Omega }}e^{a\phi /d}\lambda _{0}\left[ \Omega ^{2}-2\Omega +2% \right] , \TCItag{4.54} \\ \widetilde{T}_{ab} &=&T_{d^{\prime }}\sqrt{\lambda _{2}^{d^{\prime }+2d+4-D}\Omega }e^{a\phi /d}\lambda _{1}\delta _{ab}. 
\TCItag{4.55} \end{eqnarray}% Thus as before in the static case ($\Omega =1$) we obtain:% \begin{equation} \widetilde{T}_{\mu \nu }=\Lambda _{o}^{\prime \prime }\gamma _{\mu \nu }, \tag{4.56} \end{equation}% where:% \begin{equation} \Lambda _{o}^{\prime \prime }\left( d^{\prime },d,D;y\right) =T_{d^{\prime }}\left( 1+\frac{k_{d}}{y^{\widetilde{d}}}e^{-C_{0}}\right) ^{-\sigma }, \tag{4.57} \end{equation}% and% \begin{equation} \sigma =\frac{\widetilde{d}}{2\left( d+\widetilde{d}\right) }\left( D+d^{\prime }+d\right) . \tag{4.58} \end{equation}% In the superstring regime $d+\widetilde{d}=8$ and for $d^{\prime }=4$ the exponent $\sigma $ is equal to:% \begin{equation} \sigma =\frac{\left( d+14\right) \left( 10-d\right) }{16} \tag{4.59} \end{equation}% and $d>4$ . Thus the term $\Lambda _{o}^{\prime \prime }$ takes the form:% \begin{equation} \Lambda _{o}^{\prime \prime }\left( 4,d,10;y\right) =T_{4}\left( 1+\frac{% k_{d}}{y^{\widetilde{d}}}e^{-C_{0}}\right) ^{-\frac{\left( d+14\right) \left( 10-d\right) }{16}}. \tag{4.60} \end{equation} In the presence of the variety of branes with different dimensions the total induced cosmological constant is given by:% \begin{equation} \Lambda \left( y_{1},...,y_{9}\right) =\sum\limits_{d=1}^{4}\Lambda _{o}^{\prime }\left( 4,d,10;y_{d}\right) +\sum_{d=5}^{9}\Lambda _{o}^{\prime \prime }\left( 4,d,10;y_{d}\right) . \tag{4.61} \end{equation} In the non-static case we introduce as before a field $\phi $ defined by:% \begin{equation} \phi ^{2}=\frac{\lambda _{1}}{\left\vert \lambda _{0}\right\vert }% \sum_{i=d^{\prime }}^{\widetilde{d}-1}\overset{\cdot }{y}_{i}^{2}+\frac{% \lambda _{2}}{\left\vert \lambda _{0}\right\vert }\sum_{m=1}^{d+2}\overset{% \cdot }{y}_{m}^{2}. 
\tag{4.62}
\end{equation}
Thus:
\begin{eqnarray}
\widetilde{T}_{00} &=&\Lambda \frac{1+\phi ^{4}}{\sqrt{1-\phi ^{2}}}\gamma _{00}, \TCItag{4.63} \\
\widetilde{T}_{ab} &=&\Lambda \sqrt{1-\phi ^{2}}\gamma _{ab}, \TCItag{4.64}
\end{eqnarray}
where $\Lambda $ is expressed by (4.53) for $d\leq 4$ and by (4.60) for $d>4$. Proceeding as at the end of the previous section, one gets the energy, the pressure and the equation of state for the exotic matter induced by a non-blackbrane on the D$3$-brane:
\begin{eqnarray}
\varepsilon &=&\Lambda \frac{1+\phi ^{4}}{\sqrt{1-\phi ^{2}}}, \TCItag{4.65} \\
p &=&-\Lambda \sqrt{1-\phi ^{2}}, \TCItag{4.66}
\end{eqnarray}
\begin{equation}
w=p/\varepsilon =-\frac{1-\phi ^{2}}{1+\phi ^{4}}, \tag{4.67}
\end{equation}
where $\Lambda $ is the same as in (4.63). For the variety of branes one gets:
\begin{eqnarray}
\varepsilon &=&\sum_{d=1}^{4}\Lambda _{o}^{\prime }\left( r_{d};4,d\right) \frac{1+\phi _{d}^{4}}{\sqrt{1-\phi _{d}^{2}}}+\sum_{d=5}^{9}\Lambda _{o}^{\prime \prime }\left( r_{d};4,d\right) \frac{1+\phi _{d}^{4}}{\sqrt{1-\phi _{d}^{2}}}, \TCItag{4.68} \\
p &=&-\sum_{d=1}^{4}\Lambda _{o}^{\prime }\left( r_{d};4,d\right) \sqrt{1-\phi _{d}^{2}}-\sum_{d=5}^{9}\Lambda _{o}^{\prime \prime }\left( r_{d};4,d\right) \sqrt{1-\phi _{d}^{2}}. \TCItag{4.69}
\end{eqnarray}
In this case the equation of state is:
\begin{equation}
w=-\frac{\sum_{d=1}^{4}\Lambda _{o}^{\prime }\left( r_{d};4,d\right) \sqrt{1-\phi _{d}^{2}}+\sum_{d=5}^{9}\Lambda _{o}^{\prime \prime }\left( r_{d};4,d\right) \sqrt{1-\phi _{d}^{2}}}{\sum_{d=1}^{4}\Lambda _{o}^{\prime }\left( r_{d};4,d\right) \frac{1+\phi _{d}^{4}}{\sqrt{1-\phi _{d}^{2}}}+\sum_{d=5}^{9}\Lambda _{o}^{\prime \prime }\left( r_{d};4,d\right) \frac{1+\phi _{d}^{4}}{\sqrt{1-\phi _{d}^{2}}}}.
\tag{4.70}
\end{equation}
\subsection{The cosmological constant}
In order to get the total induced cosmological constant on the D3-brane we should take into account all kinds of p-branes with different dimensions and different distances from their centers to the center of the D3-brane. Thus, collecting the results (4.23) and (4.61), one obtains:
\begin{equation}
\Lambda =\sum_{d=1}^{9}\Lambda _{b}\left( 4,d,10;r_{d}\right) +\sum\limits_{d=1}^{4}\Lambda _{o}^{\prime }\left( 4,d,10;y_{d}\right) +\sum_{d=5}^{9}\Lambda _{o}^{\prime \prime }\left( 4,d,10;y_{d}\right) . \tag{4.71}
\end{equation}
The dynamic case is described on the D$3$-brane by a set of fields $\{\phi _{S}\}$ (where $S=0,1,...,9$ is the dimension of the p-brane in the 10-dimensional ambient space). The global energy-momentum tensor for the perfect fluid on the D$3$-brane is produced by the configurations of branes of different kinds in the 10-dimensional ambient space-time. Thus the cosmological constant induced on the D3-brane can be fitted to the observed one by an appropriate choice of the parameters appearing in (4.71). As is well known, the evolution of the universe depends on the value of the cosmological constant. At the present time the expansion of the universe is accelerating. This phenomenon is usually explained by assuming the existence of exotic matter described by the equation of state $w<-1/3$. Such exotic matter produces negative pressure, which acts against gravitation. The approach presented above allows us to obtain such an equation of state by an appropriate choice of the values of certain parameters in (4.71). The right-hand side of Eq. (4.71) changes with time, which means that the evolution of the D3-brane depends not only on the D3-brane contents but also on the p-brane configuration in the ambient space.
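As a quick numerical illustration, the total black-brane contribution (4.23) can be evaluated by summing the terms (4.22). The tension $T_{4}$, the horizon radii $r_{\pm}$, the value of $\det h$ and the distances $r_{d}$ in the sketch below are illustrative placeholder values, not values fixed by the paper.

```python
import math

def sigma(d):
    # Exponent from Eq. (4.21) for D = 10, d' = 4
    return 2 - 3 / d + (d - 7) * d / 16

def lambda_b(r, d, T4=1.0, r_plus=0.5, r_minus=0.3, det_h=1.0):
    # Single black-brane contribution, Eq. (4.22); valid outside
    # the horizon, r > r_plus (all parameter values are placeholders)
    f_plus = 1 - (r_plus / r) ** d
    f_minus = 1 - (r_minus / r) ** d
    return (T4 / r ** (2 * (d + 1))) * math.sqrt(1 / det_h) \
        * f_plus ** 0.5 * f_minus ** sigma(d)

def total_lambda(radii):
    # Eq. (4.23): sum over black branes of dimensions d - 1 = 0..8;
    # radii[d - 1] is the distance r_d to the (d-1)-brane
    return sum(lambda_b(radii[d - 1], d) for d in range(1, 10))

print(total_lambda([2.0] * 9))
```

Because of the $r^{-2(d+1)}$ prefactor, the contribution of each black brane falls off steeply with distance, so the nearest branes dominate the sum.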
\section{Conclusions}
The form of the cosmological constant $\Lambda $ on the D3-brane $M$ has been derived as a pull-back of the energy-momentum tensor, the latter tensor being taken for the background produced by the different p-branes. Only the contributions coming from the gravity solutions for the p-branes have been taken into account; both the gauge fields on $M$ and the RR charges of the p-branes have been ignored. In this way the dependence of the cosmological constant on both the dimensions of the p-branes and their distances to $M$ has been obtained. In the dynamic case, when $\phi \neq 0$, one obtains an energy-momentum tensor on $M$ which can be identified with the energy-momentum tensor of a perfect fluid on $M$. This perfect fluid, representing some kind of exotic matter, has the equation of state given either by Eq. (4.34) in the case of the blackbranes or by Eq. (4.70) in the case of the p-branes without horizon. The energy and pressure are given by (4.32) and (4.33) in the first case and by (4.68) and (4.69) in the second case. The pressure produced by this perfect fluid is negative and acts against gravitation. One can then say that the perfect fluid is one of the factors determining the cosmological evolution of $M$. Thus the evolution of $M$ depends on its position in the ambient space with respect to the other branes. As one can see, the speed of sound $c_{s}$, as a function of the field $\phi $, is real for $\phi \in \left( -1,1\right) $ (here only one brane is considered):
\begin{equation*}
c_{s}=\sqrt{\frac{dp}{d\varepsilon }}=\sqrt{\frac{1-\phi ^{2}}{1+4\phi ^{2}+\phi ^{3}-3\phi ^{4}}},
\end{equation*}
where $\varepsilon $ is given by Eq. (4.29) and $p$ is given by Eq. (4.30).
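A minimal numerical check of the sound-speed expression above (with the denominator taken exactly as printed):

```python
import math

def sound_speed(phi):
    # c_s = sqrt(dp / d(epsilon)) for the p and epsilon of
    # Eqs. (4.29)-(4.30), with the expression as quoted in the text
    return math.sqrt((1 - phi**2) / (1 + 4*phi**2 + phi**3 - 3*phi**4))

# c_s is real on the whole interval phi in (-1, 1) and equals 1 at phi = 0
for phi in [-0.9, -0.5, 0.0, 0.5, 0.9]:
    print(f"{phi:+.1f}  {sound_speed(phi):.4f}")
```

The denominator stays positive on $(-1,1)$, so the square root is real there, and $c_s\to 0$ as $\phi\to\pm 1$.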
The $\phi $-dependence of $c_{s}$ is given by the plot of $\sqrt{\frac{1-\phi ^{2}}{1+4\phi ^{2}+\phi ^{3}-3\phi ^{4}}}$ over $\phi \in (-1,1)$. The model presented above can be interpreted as one of the models explaining the origin of the dark energy: in this model the dark energy is generated by the perfect fluid of the exotic matter. In [14] the dark energy is associated with p-branes in the light-cone parametrization.

\section{References}

[1] G. R. Dvali, G. Gabadadze, M. Porrati, Phys. Lett. \textbf{B485} (2000) 208, hep-th/0005016; G. R. Dvali, G. Gabadadze, Phys. Rev. \textbf{D63} (2001) 065007, hep-th/0008054

[2] M. R. Douglas, \textit{Basic results in Vacuum Statistics}, hep-th/0409207

[3] A. Kehagias, E. Kiritsis, \textit{Mirage Cosmology}, hep-th/9910174

[4] E. Kiritsis, \textit{D-branes in Standard Model building, Gravity and Cosmology}, hep-th/0310001

[5] C. Park, S. J. Sin, \textit{p-brane cosmology and phases of Brans-Dicke theory with matter}, Phys. Rev. \textbf{D57} (1998) 4620-4628

[6] C. P. Bachas, P. Bain, M. B. Green, \textit{Curvature terms in D-brane actions and their M-theory origin}, JHEP 9905 (1999) 011, hep-th/9903210

[7] M. R. Douglas, D. Kabat, P. Pouliot, S. H.
Shenker, \textit{D-branes and Short Distances in String Theory}, Nucl. Phys. \textbf{B485} (1997) 85, hep-th/9608024

[8] S. Weinberg, \textit{Gravitation and Cosmology}, John Wiley and Sons, Inc., New York, 1972

[9] A. Besse, \textit{Einstein Manifolds}, Springer Verlag, Berlin, Heidelberg, 1987

[10] M. J. Duff, R. R. Khuri, J. X. Lu, \textit{String solitons}, hep-th/9412184; M. J. Duff, \textit{Supermembranes}, hep-th/9611203

[11] D. Garfinkle, G. T. Horowitz, A. Strominger, \textit{Charged black holes in string theory}, Phys. Rev. \textbf{D43} (1991) 3140-3143

[12] J. Polchinski, Phys. Rev. Lett. \textbf{75} (1995) 4724

[13] J. Polchinski, S. Chaudhuri, C. V. Johnson, \textit{Notes on D-Branes}, hep-th/9602052

[14] R. Jackiw, \textit{A particle field theorist's lectures on supersymmetric, non-abelian fluid mechanics and d-branes}, physics/0010042

\end{document}
\section{Introduction} The unexpected accelerated expansion of the universe, inferred from a series of recent observations, is interpreted by cosmologists as a smooth transition from a decelerated era in the recent past \cite{Riess:1998cb,Perlmutter:1998np,Spergel:2003cb,Allen:2004cd,Riess:2004nr}. Cosmologists are divided in opinion about the cause of this transition: one group favours a modification of the gravity theory, while the other favours introducing an exotic matter component. Due to two severe drawbacks \cite{RevModPhys.61.1} of the cosmological constant as a DE candidate, dynamical DE models, namely the quintessence field (canonical scalar field), the phantom field \cite{Caldwell:2003vq,Vikman:2004dc,Nojiri:2005sr,Saridakis:2009pj,Setare:2008mb} (ghost scalar field) or a unified model named quintom \cite{Feng:2004ad,Guo:2004fq,Feng:2004ff}, are popular in the literature.\par However, a new cosmological problem arises due to the dynamical nature of the DE: although vacuum energy and DM scale independently during the cosmic evolution, their energy densities are nearly equal today. To resolve this coincidence problem cosmologists introduce an interaction between the DE and DM. As the choice of this interaction is purely phenomenological, various models appear to match the observational predictions. Although these models may resolve the above coincidence problem, a non-trivial, almost tuned sequence of cosmological eras \cite{Amendola:2006qi} appears as a result. Further, the interacting phantom DE models \cite{Chen:2008ft,Nunes:2004wn,Clifton:2007tn,Xu:2012jf,Zhang:2005jj,Fadragas:2014mra,Gonzalez:2007ht} deal with some special coupling forms, alleviating the coincidence problem.\par Alternatively, cosmologists have put forward a special type of interaction between DE and DM in which the DM particles have a variable mass depending on the scalar field that represents the DE \cite{Anderson:1997un}.
Such an interacting model is physically better motivated, since scalar-field-dependent varying mass models appear in string theory or scalar-tensor theory \cite{PhysRevLett.64.123}. This type of interacting model in cosmology considers the mass variation as linear \cite{Farrar:2003uw,Anderson:1997un,Hoffman:2003ru}, power-law \cite{Zhang:2005rg} or exponential \cite{Berger:2006db,PhysRevD.66.043528,PhysRevD.67.103523,PhysRevD.75.083506,Amendola:1999er,Comelli:2003cv,PhysRevD.69.063517} in the scalar field. Among these, the exponential dependence is the most suitable, as it not only solves the coincidence problem but also gives stable scaling behaviour.\par In the present work, a varying mass interacting DE/DM model is considered in the background of a homogeneous and isotropic space-time. Due to the highly coupled nonlinear nature of the Einstein field equations it is not possible to obtain any analytic solution. So, by using suitable dimensionless variables, the field equations are converted to an autonomous system. The phase space analysis of the non-hyperbolic equilibrium points has been done by center manifold theory (CMT) for various choices of the mass functions and the scalar field potentials. The paper is organized as follows: Section \ref{BES} deals with the basic equations for the varying mass interacting dark energy and dark matter cosmological model. The autonomous system is formed and the critical points are determined in Section \ref{FASC}; the stability analysis of all critical points for various choices of the parameters involved is also shown in this section. Possible bifurcation scenarios \cite{10.1140/epjc/s10052-019-6839-8, 1950261, 1812.01975} by Poincar\'{e} index theory and the global cosmological evolution are examined in Section \ref{BAPGCE}. Finally, a brief discussion and the important concluding remarks of the present work are given in Section \ref{conclusion}.
\section{Varying mass interacting dark energy and dark matter cosmological model : Basic Equations\label{BES}} Throughout this paper, we assume a homogeneous and isotropic universe with the flat Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) metric as follows: \begin{equation} ds^2=-dt^2+a^2(t)~d{\Sigma}^2, \end{equation} where `$t$' is the comoving time; $a(t)$ is the scale factor; $d{\Sigma}^2$ is the 3D flat space line element.\\ The Friedmann equations in the background of the flat FLRW metric can be expressed as \begin{eqnarray} 3H^2&=&\kappa^2(\rho_\phi +\rho_{_{DM}}),\label{equn2}\\ 2\dot{H}&=&-\kappa^2(\rho_\phi +p_\phi +\rho_{_{DM}}),\label{equn3} \end{eqnarray} where `$\cdot $' denotes the derivative with respect to $t$; $\kappa~(=\sqrt{8\pi G})$ is the gravitational coupling; $\{\rho_\phi,p_\phi\}$ are the energy density and thermodynamic pressure of the phantom scalar field $\phi$ (considered as DE) having the expressions \begin{align} \begin{split} \rho_{\phi}&=-\frac{1}{2}\dot{\phi}^2+V(\phi),\\ p_\phi&=-\frac{1}{2}\dot{\phi}^2-V(\phi),\label{equn4} \end{split} \end{align} and $\rho_{_{DM}}$ is the energy density of the dark matter in the form of dust, having the expression \begin{align} \rho_{_{DM}}=M_{_{DM}}(\phi)n_{_{DM}},\label{equn5} \end{align} where $n_{_{DM}}$, the number density \cite{Leon:2009dt} for DM, satisfies the number conservation equation \begin{align} \dot{n}_{_{DM}}+3H n_{_{DM}}=0.\label{equn6} \end{align} Now, differentiating (\ref{equn5}) and using (\ref{equn6}), one has the DM conservation equation \begin{align} \dot{\rho}_{_{DM}}+3H\rho_{_{DM}}=\frac{d}{d\phi}\left\{\ln M_{_{DM}}(\phi)\right\}\dot{\phi}\rho_{_{DM}},\label{equn7} \end{align} which shows that mass varying DM (in the form of dust) can be interpreted as a barotropic fluid with the variable equation of state $\omega_{_{DM}}=\frac{d}{d\phi}\left\{\ln M_{_{DM}}(\phi)\right\}\dot{\phi}$.
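The derivation of the DM conservation equation above, from $\rho_{_{DM}}=M_{_{DM}}(\phi)n_{_{DM}}$ and the number conservation equation, can be checked symbolically. A short sketch using SymPy, with $H$, $\phi$ and $M_{_{DM}}$ left as arbitrary functions:

```python
import sympy as sp

t = sp.symbols('t')
H = sp.Function('H')(t)
phi = sp.Function('phi')(t)
M = sp.Function('M')          # dark-matter particle mass M_DM(phi)
n = sp.Function('n')(t)       # DM number density

# Number conservation: n' = -3 H n
n_dot = -3 * H * n

# rho_DM = M(phi) n; differentiate and substitute n'
rho = M(phi) * n
rho_dot = sp.diff(M(phi), t) * n + M(phi) * n_dot

# Claimed conservation law: rho' + 3 H rho = (d ln M/d phi) phi' rho,
# where (d ln M/d phi) phi' = d/dt ln M(phi) by the chain rule
lhs = rho_dot + 3 * H * rho
rhs = sp.diff(M(phi), t) / M(phi) * rho
print(sp.simplify(lhs - rhs))  # 0
```

The residual vanishes identically, confirming that the whole interaction term comes from the $\phi$-dependence of the particle mass.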
Now, due to the Bianchi identity, using the Einstein field equations (\ref{equn2}) and (\ref{equn3}), the conservation equation for DE takes the form \begin{align} \dot{\rho}_{\phi}+ 3H(\rho_{\phi}+p_{\phi})=-\frac{d}{d\phi}\left\{\ln M_{_{DM}}(\phi)~\right\}\dot{\phi}\rho_{_{DM}},\label{equn8} \end{align} or, using (\ref{equn4}), \begin{align} \ddot{\phi}+3H\dot{\phi}-\frac{\partial V}{\partial \phi}=\frac{d}{d\phi}\left\{\ln M_{_{DM}}(\phi)\right\}\rho_{_{DM}}.\label{equn9} \end{align} The combination of the conservation equations (\ref{equn7}) and (\ref{equn8}) for DM (dust) and phantom DE (scalar) shows that the interaction between these two matter components depends purely on the mass variation, i.e., $Q=\frac{d}{d\phi}\left\{\ln M_{_{DM}}(\phi)\right\}\rho_{_{DM}}$. So, if $M_{_{DM}}$ is an increasing function of $\phi$, i.e., $Q>0$, then energy is transferred from DE to DM, while the transfer goes the opposite way if $M_{_{DM}}$ is a decreasing function of $\phi$. Further, combining equations (\ref{equn7}) and (\ref{equn8}), the total matter $\rho_{tot}=\rho_{_{DM}}+\rho_{\phi}$ satisfies \begin{align} \dot{\rho}_{tot}+3H(\rho_{tot}+p_{tot})=0 \end{align} with \begin{align} \omega_{tot}=\frac{p_{\phi}}{\rho_{\phi}+\rho_{_{DM}}}=\omega_{\phi}\Omega_{\phi}. \end{align} Here $\omega_{\phi}=\frac{p_{\phi}}{\rho_{\phi}}$ is the equation of state parameter for the phantom field and $\Omega_{\phi}=\frac{\rho_{\phi}}{\frac{3H^2}{\kappa^2}}$ is the density parameter for DE.
\section{Formation of Autonomous System : Critical point and stability analysis\label{FASC}} In the present work the dimensionless variables can be taken as \cite{Leon:2009dt} \begin{eqnarray} x:&=&\frac{\kappa\dot{\phi}}{\sqrt{6}H}, \\ y:&=&\frac{\kappa\sqrt{V(\phi)}}{\sqrt{3}H}, \\ z:&=&\frac{\sqrt{6}}{\kappa \phi} \end{eqnarray} together with $N=\ln a$, and the expressions of the cosmological parameters can be written as \begin{align} \Omega_{\phi}\equiv \frac{{\kappa}^2\rho_{\phi}}{3H^2}&=-x^2+y^2,\label{eq4} \end{align} \begin{equation} \omega_{\phi}= \frac{-x^2-y^2}{-x^2+y^2} \end{equation} and \begin{equation} \omega_{tot}=-x^2-y^2. \end{equation} For the scalar field potential we consider two well studied cases in the literature, namely the power law \begin{equation} V(\phi)=V_0 \phi^{-\lambda} \end{equation} and the exponential dependence \begin{equation} V(\phi)=V_1 e^{-\kappa\lambda \phi}. \end{equation} For the dark matter particle mass we also consider a power law \begin{eqnarray} M_{_{DM}}(\phi)&=& M_0 {\phi}^{-\mu} \end{eqnarray} and an exponential dependence \begin{eqnarray} M_{_{DM}}(\phi)&=& M_1 e^{-\kappa\mu \phi}, \end{eqnarray} where $V_0,V_1,M_0,M_1 (>0)$ and $\lambda,\mu$ are constant parameters. Here we carry out the dynamical analysis of this cosmological system for five possible models. In Model $1$ (\ref{M1}) we consider $V(\phi)=V_0\phi^{-\lambda}, M_{_{DM}}(\phi)=M_0\phi^{-\mu}$; in Model $2$ (\ref{M2}) we consider $V(\phi)=V_0\phi^{-\lambda}, M_{_{DM}}(\phi)=M_1e^{-\kappa\mu\phi}$; in Model $3$ (\ref{M3}) we consider $V(\phi)=V_1e^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_0\phi^{-\mu}$; in Model $4$ (\ref{M4}) we consider $V(\phi)=V_1 e ^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_1e^{-\kappa\mu\phi}$; and lastly in Model $5$ (\ref{M5}) we consider $V(\phi)=V_2\phi^{-\lambda} e ^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_2\phi^{-\mu}e^{-\kappa\mu\phi}$, where $V_2=V_0V_1$ and $M_2=M_0M_1$.
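As a sanity check on the dimensionless variables, one can verify numerically that $-x^{2}+y^{2}$ reproduces $\Omega_{\phi}$ for the phantom field, and similarly for $\omega_{\phi}$ and $\omega_{tot}$. The sample values of $\dot{\phi}$, $V$ and $H$ below are illustrative only.

```python
import math

kappa = 1.0
phi_dot, V, H = 0.3, 1.2, 1.0   # illustrative sample values

x = kappa * phi_dot / (math.sqrt(6) * H)
y = kappa * math.sqrt(V) / (math.sqrt(3) * H)

# Phantom-field energy density and pressure
rho_phi = -phi_dot**2 / 2 + V
p_phi = -phi_dot**2 / 2 - V

Omega_phi = kappa**2 * rho_phi / (3 * H**2)
print(Omega_phi, -x**2 + y**2)                            # equal
print(p_phi / rho_phi, (-x**2 - y**2) / (-x**2 + y**2))   # omega_phi, equal
# omega_tot uses the Friedmann constraint 3H^2 = kappa^2 rho_tot
print(kappa**2 * p_phi / (3 * H**2), -x**2 - y**2)        # equal
```

Note the sign flip relative to quintessence: the kinetic term enters $\Omega_{\phi}$ as $-x^{2}$ because of the phantom kinetic energy.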
\subsection{Model 1: Power-law potential and power-law-dependent dark-matter particle mass \label{M1}} With these choices the evolution equations of Section \ref{BES} can be converted to the autonomous system \begin{eqnarray} x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\lambda y^2 z}{2}-\frac{\mu}{2}z(1+x^2-y^2),\label{eq9} \\ y'&=&\frac{3}{2}y(1-x^2-y^2)-\frac{\lambda xyz}{2},\label{eq10} \\ z'&=&-xz^2,\label{eq11} \end{eqnarray} where a $\lq$dash' over a variable denotes differentiation with respect to $ N=\ln a $.\bigbreak To carry out the stability analysis of the critical points of the autonomous system $(\ref{eq9}-\ref{eq11})$, we consider four possible choices of $\mu$ and $\lambda$: $(i)$ $\mu\neq0$ and $\lambda\neq0$, $(ii)$ $\mu\neq0$ and $\lambda=0$, $(iii)$ $\mu=0$ and $\lambda\neq0$, $(iv)$ $\mu=0$ and $\lambda=0$. \subsubsection*{Case-(i)$~$\underline{$\mu\neq0$ and $\lambda\neq0$}} In this case we have three real and physically meaningful critical points, $A_1(0, 0, 0)$, $A_2(0, 1, 0)$ and $A_3(0, -1, 0)$. First we determine the Jacobian matrix of the autonomous system $(\ref{eq9}-\ref{eq11})$ at these critical points. Then we find the eigenvalues and the corresponding eigenvectors of the Jacobian matrix, and from them the nature of the vector field near each critical point. If the critical point is hyperbolic we use the Hartman-Grobman theorem, and if the critical point is non-hyperbolic we use Center Manifold Theory \cite{Chakraborty:2020vkp}. For each critical point, the eigenvalues of the Jacobian matrix of the autonomous system $(\ref{eq9}-\ref{eq11})$, the values of the cosmological parameters and the nature of the critical point are shown in Table \ref{TI}.
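The autonomous system $(\ref{eq9}-\ref{eq11})$ can also be integrated numerically. The sketch below (using SciPy) takes the sample values $\lambda=\mu=1$ of Case (i) and an arbitrary initial point; the trajectory is driven towards the de Sitter critical point $A_2(0,1,0)$.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 1.0, 1.0   # sample parameter values (Case (i))

def rhs(N, s):
    # Right-hand side of the autonomous system (9)-(11); N = ln a
    x, y, z = s
    dx = (-3 * x + 1.5 * x * (1 - x**2 - y**2)
          - 0.5 * lam * y**2 * z - 0.5 * mu * z * (1 + x**2 - y**2))
    dy = 1.5 * y * (1 - x**2 - y**2) - 0.5 * lam * x * y * z
    dz = -x * z**2
    return [dx, dy, dz]

# Arbitrary initial conditions, evolved over 20 e-folds
sol = solve_ivp(rhs, [0.0, 20.0], [0.1, 0.9, 0.1], rtol=1e-8, atol=1e-10)
x, y, z = sol.y[:, -1]
print(x, y, z)   # close to the de Sitter point A_2(0, 1, 0)
```

At the end point $\omega_{tot}=-x^{2}-y^{2}\approx -1$, consistent with the de Sitter character of $A_2$; the residual drift along $z$ reflects the zero eigenvalue analyzed below via CMT.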
\begin{table}[h]
\caption{\label{TI}Eigenvalues, cosmological parameters and nature of the critical points $(A_1-A_3)$.}
\begin{tabular}{|c|c c c|c|c|c| c|c|}
\hline \hline
\begin{tabular}{@{}c@{}}$~~$\\$~Critical~ Points$\\$~~$\end{tabular} ~~ & $ \lambda_1 $ ~~ & $\lambda_2$ ~~ & $\lambda_3$& $~\Omega_\phi~$&$~\omega_\phi~$ &$~\omega_{tot}~$& $~q~$ & $Nature~of~critical~points$ \\
\hline\hline
$A_1(0,0,0)$ & $-\frac{3}{2}$ & $\frac{3}{2}$ & $0$ & $0$ & Undetermined & $0$ &$\frac{1}{2}$& Non-hyperbolic\\
\hline
$A_2(0,1,0)$ & $-3$ & $-3$ & $0$ & $1$ & $-1$ & $-1$&$-1$& Non-hyperbolic\\
\hline
$A_3(0,-1,0)$ & $-3$ & $-3$ & $0$ & $1$ & $-1$ & $-1$&$-1$& Non-hyperbolic\\
\hline
\end{tabular}
\end{table}
\begin{center}
$1.~Critical~Point~A_1$
\end{center}
The Jacobian matrix at the critical point $A_1$ can be put as \begin{equation}\renewcommand{\arraystretch}{1.5} J(A_1)=\begin{bmatrix} -\frac{3}{2} & 0 & -\frac{\mu}{2}\\ ~~0 & \frac{3}{2} & ~~0\\ ~~0 & 0 & ~~0 \end{bmatrix}.\label{eq12} \end{equation} The eigenvalues of $J(A_1)$ are $-\frac{3}{2}$, $\frac{3}{2}$ and $0$, with corresponding eigenvectors $[1, 0, 0]^T$, $[0, 1, 0]^T$ and $[-\frac{\mu}{3}, 0, 1]^T$ respectively. Since the critical point $A_1$ is non-hyperbolic, we use Center Manifold Theory to analyze its stability. From the entries of the Jacobian matrix we can see that there is a linear term in $z$ in eqn.(\ref{eq9}) of the autonomous system $(\ref{eq9}-\ref{eq11})$, while the zero eigenvalue of the Jacobian matrix (\ref{eq12}) corresponds to (\ref{eq11}). So we have to introduce another coordinate system $(X,~Y,~Z)$ in terms of $(x,~y,~z)$.
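The spectrum of $J(A_1)$ and its center-direction eigenvector can be confirmed numerically; the value of $\mu$ below is an arbitrary sample.

```python
import numpy as np

mu = 2.0   # arbitrary sample value
J = np.array([[-1.5, 0.0, -mu / 2],
              [ 0.0, 1.5,  0.0],
              [ 0.0, 0.0,  0.0]])

# Eigenvalues: -3/2, 3/2 and 0, independently of mu
print(np.sort(np.linalg.eigvals(J).real))

# [-mu/3, 0, 1]^T spans the kernel, i.e. the center subspace
v = np.array([-mu / 3, 0.0, 1.0])
print(J @ v)   # the zero vector
```

Since the matrix is upper triangular, the eigenvalues sit on the diagonal; the nontrivial content is the $\mu$-dependent tilt of the center direction, which motivates the coordinate change that follows.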
By using the eigenvectors of the Jacobian matrix (\ref{eq12}), we introduce the following coordinate system \begin{equation}\renewcommand{\arraystretch}{1.5} \begin{bmatrix} X\\ Y\\ Z \end{bmatrix}\renewcommand{\arraystretch}{1.5} =\begin{bmatrix} 1 & 0 & \frac{\mu}{3} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\renewcommand{\arraystretch}{1.5} \begin{bmatrix} x\\ y\\ z \end{bmatrix}\label{eq15} \end{equation} and in this new coordinate system the equations $(\ref{eq9}-\ref{eq11})$ are transformed into \begin{equation}\renewcommand{\arraystretch}{1.5} \begin{bmatrix} X'\\ Y'\\ Z' \end{bmatrix}\renewcommand{\arraystretch}{1.5} =\begin{bmatrix} -\frac{3}{2} & 0 & 0 \\ ~~0 & \frac{3}{2} & 0 \\ ~~0 & 0 & 0 \end{bmatrix} \begin{bmatrix} X\\ Y\\ Z \end{bmatrix} +\renewcommand{\arraystretch}{1.5} \begin{bmatrix} non\\ linear\\ terms \end{bmatrix}. \end{equation} By Center Manifold Theory there exists a continuously differentiable function $h$:$\mathbb{R}$$\rightarrow$$\mathbb{R}^2$ such that \begin{align}\renewcommand{\arraystretch}{1.5} h(Z)=\begin{bmatrix} X \\ Y \\ \end{bmatrix} =\begin{bmatrix} a_1Z^2+a_2Z^3+a_3Z^4 +\mathcal{O}(Z^5)\\ b_1Z^2+b_2Z^3+b_3Z^4 +\mathcal{O}(Z^5) \end{bmatrix}. \end{align} Differentiating both sides with respect to $N$, we get \begin{eqnarray} X'&=&(2a_1Z+3a_2Z^2+4a_3Z^3)Z',\\ Y'&=&(2b_1Z+3b_2Z^2+4b_3Z^3)Z', \end{eqnarray} where $a_i$, $b_i$ $\in\mathbb{R}$. We are only concerned with the non-zero coefficients of the lowest-order terms, as the center manifold analysis is restricted to an arbitrarily small neighbourhood of the origin. Comparing the coefficients of the powers of $Z$ we get $a_1=0$, $a_2=\frac{2\mu^2}{27}$, $a_3=0$ and $b_i=0$ for all $i$.
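This coefficient matching can be reproduced with a short symbolic computation: substitute the ansatz $X=a_1Z^2+a_2Z^3$, $Y=0$ into the invariance condition for the center manifold of $A_1$ and solve for the low-order coefficients.

```python
import sympy as sp

Z, lam, mu, a1, a2 = sp.symbols('Z lambda mu a1 a2')

# Center-manifold ansatz for A_1: X = a1 Z^2 + a2 Z^3, Y = 0
X = a1 * Z**2 + a2 * Z**3
x = X - mu * Z / 3        # invert the shift X = x + (mu/3) z
y = sp.Integer(0)
z = Z

# Right-hand sides of the system (9)-(11), restricted to the manifold
xp = (-3 * x + sp.Rational(3, 2) * x * (1 - x**2 - y**2)
      - lam * y**2 * z / 2 - mu * z * (1 + x**2 - y**2) / 2)
zp = -x * z**2

# Invariance: X' = x' + (mu/3) z' must equal (dX/dZ) Z'
residual = sp.expand(xp + mu * zp / 3 - sp.diff(X, Z) * zp)

sol = sp.solve([residual.coeff(Z, 2), residual.coeff(Z, 3)], [a1, a2])
print(sol)   # {a1: 0, a2: 2*mu**2/27}
```

The same substitution gives the flow $Z'=-xZ^2=(\mu/3)Z^3+\mathcal{O}(Z^5)$ on the manifold, matching the sign analysis that follows.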
So, the center manifold is given by \begin{eqnarray} X&=&\frac{2\mu^2}{27}Z^3,\label{eq18}\\ Y&=&0\label{eq19} \end{eqnarray} and the flow on the center manifold is determined by \begin{eqnarray} Z'&=&\frac{\mu}{3}Z^3+\mathcal{O}(Z^5).\label{eq20} \end{eqnarray} \begin{figure}[h] \includegraphics[width=1\textwidth]{A11} \caption{Vector field near the origin for the critical point $A_1$ in the $XZ$-plane. The L.H.S. figure is for $\mu>0$ and the R.H.S. figure is for $\mu<0$. } \label{A_1} \end{figure} The flow on the center manifold depends on the sign of $\mu$. If $\mu>0$ then $Z'>0$ for $Z>0$ and $Z'<0$ for $Z<0$. Hence, for $\mu>0$ the origin is a saddle node, i.e., unstable (FIG.\ref{A_1}(a)). Again, if $\mu<0$ then $Z'<0$ for $Z>0$ and $Z'>0$ for $Z<0$. So, for $\mu<0$ the origin is a stable node (FIG.\ref{A_1}(b)). \bigbreak \begin{center} $2.~Critical~Point~A_2$ \end{center} The Jacobian matrix at $A_2$ can be written as \begin{equation}\renewcommand{\arraystretch}{1.5} J(A_2)=\begin{bmatrix} -3 & ~~0 & -\frac{\lambda}{2}\\ ~~0 & -3 & ~~0\\ ~~0 & ~~0 & ~~0 \end{bmatrix}\label{eq21}. \end{equation} The eigenvalues of the above matrix are $-3$, $-3$ and $0$; $[1, 0, 0]^T$ and $[0, 1, 0]^T$ are the eigenvectors corresponding to the eigenvalue $-3$, and $\left[-\frac{\lambda}{6}, 0, 1\right]^T$ is the eigenvector corresponding to the eigenvalue $0$. Since the critical point $A_2$ is non-hyperbolic, we use Center Manifold Theory to analyze its stability. We first transform the coordinates into a new system $x=X,~ y=Y+1,~ z=Z$, so that the critical point $A_2$ moves to the origin.
By using the eigenvectors of the Jacobian matrix $J(A_2)$, we introduce another set of new coordinates $(u,~v,~w)$ in terms of $(X,~Y,~Z)$ as \begin{equation}\renewcommand{\arraystretch}{1.5} \begin{bmatrix} u\\ v\\ w \end{bmatrix}\renewcommand{\arraystretch}{1.5} =\begin{bmatrix} 1 & 0 & \frac{\lambda}{6} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\renewcommand{\arraystretch}{1.5} \begin{bmatrix} X\\ Y\\ Z \end{bmatrix}\label{eq24} \end{equation} and in these new coordinates the equations $(\ref{eq9}-\ref{eq11})$ are transformed into \begin{equation} \renewcommand{\arraystretch}{1.5} \begin{bmatrix} u'\\ v'\\ w' \end{bmatrix} =\begin{bmatrix} -3 & ~~0 & 0 \\ ~~0 & -3 & 0 \\ ~~0 & ~~0 & 0 \end{bmatrix} \begin{bmatrix} u\\ v\\ w \end{bmatrix} + \begin{bmatrix} non\\ linear\\ terms \end{bmatrix}. \end{equation} By center manifold theory there exists a continuously differentiable function $h:\mathbb{R}\rightarrow\mathbb{R}^2$ such that \begin{align}\renewcommand{\arraystretch}{1.5} h(w)=\begin{bmatrix} u \\ v \\ \end{bmatrix} =\begin{bmatrix} a_1w^2+a_2w^3 +\mathcal{O}(w^4)\\ b_1w^2+b_2w^3 +\mathcal{O}(w^4) \end{bmatrix}. \end{align} Differentiating both sides with respect to $N$, we get \begin{eqnarray} u'&=&(2a_1w+3a_2w^2)w'+\mathcal{O}(w^3)\label{eq25}\\ v'&=&(2b_1w+3b_2w^2)w'+\mathcal{O}(w^3)\label{eq26} \end{eqnarray} where $a_i,~b_i\in\mathbb{R}$. In CMT we are only concerned with the non-zero coefficients of the lowest-order terms, since we analyze an arbitrarily small neighbourhood of the origin. Comparing the coefficients of each power of $w$ on both sides of (\ref{eq25}) and (\ref{eq26}), we get $a_1=0$, $a_2=\frac{\lambda^2}{108}$ and $b_1=\frac{\lambda^2}{72}$, $b_2=0$. So, the center manifold can be written as \begin{eqnarray} u&=&\frac{\lambda^2}{108}w^3,\label{eqn27}\\ v&=&\frac{\lambda^2}{72}w^2\label{eqn28} \end{eqnarray} \begin{figure} \includegraphics[width=1\textwidth]{A12} \caption{Vector field near the origin for the critical point $A_2$ in the $(uw)$-plane.
L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$. } \label{19} \end{figure} \begin{figure} \includegraphics[width=1\textwidth]{A22} \caption{Vector field near the origin for the critical point $A_2$ in the $(vw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$.} \label{20} \end{figure} and the flow on the center manifold is determined by \begin{eqnarray} w'&=&\frac{\lambda}{6}w^3+\mathcal{O}(w^4) .\label{eq29} \end{eqnarray} We see that the center manifold and the flow on it are exactly the same as those determined in \cite{1111.6247}, and the stability of the vector field near the origin depends on the sign of $\lambda$. If $\lambda<0$ then $w'<0$ for $w>0$ and $w'>0$ for $w<0$. So, for $\lambda<0$ the origin is a stable node. Again, if $\lambda>0$ then $w'>0$ for $w>0$ and $w'<0$ for $w<0$. So, for $\lambda>0$ the origin is a saddle node, i.e., unstable. The vector field near the origin is shown in FIG.\ref{19} and FIG.\ref{20} for the $(uw)$-plane and the $(vw)$-plane respectively. Since the new coordinate system $(u,~v,~w)$ is topologically equivalent to the old one, the origin in the new coordinate system, i.e., the critical point $A_2$ in the old coordinate system $(x,~y,~z)$, is a stable node for $\lambda<0$ and a saddle node for $\lambda>0$. \begin{center} $3.~Critical~Point~A_3$ \end{center} The Jacobian matrix at the critical point $A_3$ is the same as (\ref{eq21}), so the eigenvalues and corresponding eigenvectors are also the same as above. We now transform the coordinates into a new system $x=X,~ y=Y-1,~ z=Z$, so that the critical point is at the origin.
Then, by using the matrix transformation (\ref{eq24}) and arguing as above, the expressions of the center manifold can be written as \begin{eqnarray} u&=&-\frac{\lambda^2}{108}w^3\label{eqn30},\\ v&=&-\frac{\lambda^2}{72}w^2\label{eqn31} \end{eqnarray} and the flow on the center manifold is determined by \begin{eqnarray} w'&=&\frac{\lambda}{6}w^3+\mathcal{O}(w^4) .\label{eqn32} \end{eqnarray} Here also the stability of the vector field near the origin depends on the sign of $\lambda$. Since the flow on the center manifold is the same as (\ref{eq29}), we can conclude as above that for $\lambda<0$ the origin is a stable node, and for $\lambda>0$ the origin is unstable due to its saddle nature. The vector fields near the origin in the $(uw)$-plane and $(vw)$-plane are shown in FIG.\ref{24} and FIG.\ref{25} respectively. Hence, the critical point $A_3$ is a stable node for $\lambda<0$ and a saddle node for $\lambda>0$.\bigbreak \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{A21} \caption{Vector field near the origin for the critical point $A_3$ in the $(uw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$.} \label{24} \end{figure} \begin{figure}[h] \includegraphics[width=1\textwidth]{A31} \caption{Vector field near the origin for the critical point $A_3$ in the $(vw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$.} \label{25} \end{figure} \newpage \subsubsection*{Case-(ii)$~$\underline{$\mu\neq0$ and $\lambda=0$}} In this case the autonomous system $(\ref{eq9}-\ref{eq11})$ changes into \begin{eqnarray} x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\mu}{2}z(1+x^2-y^2),\label{eq33} \\ y'&=&\frac{3}{2}y(1-x^2-y^2),\label{eq34} \\ z'&=&-xz^2.\label{eq35} \end{eqnarray} This autonomous system also has three critical points, of which two are spaces of critical points.
The critical points for this autonomous system are $C_1(0, 0, 0)$, $C_2(0,1, z_c)$ and $C_3(0,-1,z_c)$, where $z_c$ is any real number. For the critical points $C_1$, $C_2$ and $C_3$ the eigenvalues of the Jacobian matrix, the values of the cosmological parameters and the nature of the critical points are the same as for $A_1$, $A_2$ and $A_3$ respectively. \begin{center} $1.~Critical~Point~C_1$ \end{center} The Jacobian matrix $J(C_1)$ for the autonomous system $(\ref{eq33}-\ref{eq35})$ at this critical point is the same as (\ref{eq12}), so all the eigenvalues and the corresponding eigenvectors are also the same as for $J(A_1)$. Arguing as in the stability analysis of the critical point $A_1$, the center manifold can be expressed as $(\ref{eq18}-\ref{eq19})$ and the flow on the center manifold is determined by $(\ref{eq20})$. So the stability of the vector field near the origin is the same as for the critical point $A_1$. \begin{center} $2.~Critical~Point~C_2$ \end{center} The Jacobian matrix at the critical point $C_2$ can be written as \begin{equation}\renewcommand{\arraystretch}{1.5} J(C_2)=\begin{bmatrix} -3 & ~~\mu z_c & 0\\ ~~0 & -3 & 0\\ -z_c^2 & ~~0 & 0 \end{bmatrix}.\label{eq36} \end{equation} The eigenvalues of the above matrix are $-3$, $-3$ and $0$; $\left[1, 0, \frac{z_c^2}{3}\right]^T$ is an eigenvector corresponding to the eigenvalue $-3$, and $[0, 0, 1]^T$ is the eigenvector corresponding to the eigenvalue $0$.
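These spectral statements about $J(C_2)$ can be cross-checked symbolically; the following sympy snippet is illustrative only and is not part of the paper's analysis:

```python
import sympy as sp

# Illustrative cross-check (not from the paper) of J(C_2) in eqn. (36),
# with mu and z_c kept symbolic.
mu, zc = sp.symbols('mu z_c')
J_C2 = sp.Matrix([[-3, mu * zc, 0],
                  [0, -3, 0],
                  [-zc**2, 0, 0]])

# Eigenvalue -3 with multiplicity 2 and a simple zero eigenvalue.
assert J_C2.eigenvals() == {sp.Integer(-3): 2, sp.Integer(0): 1}

# The stated eigenvector [1, 0, z_c^2/3]^T for the eigenvalue -3.
v = sp.Matrix([1, 0, zc**2 / 3])
assert sp.simplify(J_C2 * v + 3 * v) == sp.zeros(3, 1)
```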
To apply CMT for a fixed $z_c$, we first transform the coordinates into a new system $x=X,~ y=Y+1,~ z=Z+z_c$, so that the critical point is at the origin. Arguing as above to determine the center manifold, we find that the center manifold can be written as \begin{eqnarray} X&=&0,\label{eq37}\\ Y&=&0\label{eq38} \end{eqnarray} and the flow on the center manifold is determined by \begin{eqnarray} Z'&=&0.\label{eq39} \end{eqnarray} So the center manifold lies on the $Z$-axis, and the flow on the center manifold cannot be determined from (\ref{eq39}). Now, if we project the vector field onto a plane parallel to the $XY$-plane, i.e., a plane $Z=constant$ (say), the vector field is as shown in FIG.\ref{z_c}. So every point on the $Z$-axis is a stable star. \begin{center} $3.~Critical~Point~C_3$ \end{center} Arguing as above to obtain the center manifold and the flow on it, we get the same center manifold $(\ref{eq37}-\ref{eq38})$, with the flow on the center manifold determined by (\ref{eq39}); in this case also we get the same vector field as in FIG.\ref{z_c}.\bigbreak From the above discussion we have seen that the spaces of critical points $C_2$ and $C_3$ are non-hyperbolic, but using CMT we could determine neither the vector field near those critical points nor the flow on it; in this case the last eqn.~(\ref{eq35}) of the autonomous system $(\ref{eq33}-\ref{eq35})$ does not provide any special behaviour. For this reason, and since the expressions for $\Omega_\phi$, $\omega_\phi$ and $\omega_{total}$ depend only on the $x$ and $y$ coordinates, we take only the first two equations of the autonomous system $(\ref{eq33}-\ref{eq35})$ and analyze the stability of the critical points lying on a plane parallel to the $xy$-plane, i.e., the plane $z=constant=c$ (say).
In the $z=c$ plane the first two equations in $(\ref{eq33}-\ref{eq35})$ can be written as \begin{eqnarray} x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\mu}{2}c(1+x^2-y^2),\label{eqn40} \\ y'&=&\frac{3}{2}y(1-x^2-y^2).\label{eqn41} \end{eqnarray} In this case we have five critical points corresponding to the autonomous system $(\ref{eqn40}-\ref{eqn41})$. The critical points, their existence and the values of the cosmological parameters are shown in Table \ref{T3}, and the eigenvalues and the nature of the critical points are shown in Table \ref{T4}. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{stable_z_c} \caption{Vector field near every point on the $Z$-axis for the critical points $C_2$ and $C_3$.} \label{z_c} \end{figure} \begin{table}[!] \caption{\label{T3}The critical points, their existence, and the values of the cosmological parameters for the autonomous system $(\ref{eqn40}-\ref{eqn41})$. } \begin{tabular}{|c|c|c c |c c c c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$ CPs $\\$~$\end{tabular} ~~ & $ Existence $ ~~ & ~~$x$ ~~&~~ $y$& $\Omega_\phi$&~~$\omega_{\phi}$~~ &$\omega_{tot}$ & $~~~~q$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\$ E_1 $\\$~$\end{tabular} ~~ & $For~all~\mu~and~c $&$0$&$~~~1$&$1$&$-1$&$-1$&$~~-1$ \\ \hline \begin{tabular}{@{}c@{}}$~~$\\$ E_2 $\\$~$\end{tabular} ~~ & $For~all~\mu ~and~c $ ~~ & ~~$0$ &$~~-1$~~&$1$& $~-1$ ~~& $-1$ & $~~-1$\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$ E_3 $\\$~$\end{tabular} ~~ & $For~all ~~\mu ~and~c $ ~~ & ~~$-\frac{\mu c}{3}$ ~~&~~$~~0$~&$-\frac{\mu^2 c^2}{9}$ & $~~-1$ ~~&~~ $-\frac{\mu^2 c^2}{9}$ &~~ $\frac{1}{2}\left(1-\frac{\mu^2 c^2}{3}\right)$\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$ E_4 $\\$~$\end{tabular} ~~ & \begin{tabular}{@{}c@{}}$ For~c\neq 0~ and~$\\$for~all~\mu\in \left(-\infty,-\frac{3}{c}\right]\cup\left[\frac{3}{c},\infty\right)$ \end{tabular} ~~ & ~~$-\frac{3}{\mu c}$ ~~&~~ $\sqrt{1-\frac{9}{\mu^2
c^2}}$~~&~~$\left(1-\frac{18}{\mu^2 c^2}\right)$&$~~~~\frac{\mu^2 c^2}{18-\mu^2c^2}$~~&~~$-1$ ~~& ~~$~-1$\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$ E_5 $\\$~$\end{tabular} ~~ & \begin{tabular}{@{}c@{}}$ For~c\neq 0~ and~$\\$for~all~\mu\in \left(-\infty,-\frac{3}{c}\right]\cup\left[\frac{3}{c},\infty\right)$ \end{tabular} ~~ & ~~$-\frac{3}{\mu c}$ ~~&~~ $-\sqrt{1-\frac{9}{\mu^2 c^2}}$~~&$\left(1-\frac{18}{\mu^2 c^2}\right)$ &~~$~~\frac{\mu^2 c^2}{18-\mu^2c^2}$~~&~~$-1$ ~~& ~~$~-1$\\ \hline \end{tabular} \end{table} \begin{table}[!] \caption{\label{T4}Table shows the eigenvalues $(\lambda_1, \lambda_2)$ of the Jacobian matrix corresponding to the critical points and the nature of all critical points $(E_1-E_5)$.} \begin{tabular}{|c|c c|c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$ Critical~Points $\\$~$\end{tabular} &$ ~~\lambda_1 $ & $~~\lambda_2$ & $ Nature~~ of~~ Critical~~ points$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\ $E_1$ \\$~$\end{tabular} & $-3$ & $ -3 $&Hyperbolic\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$ E_2 $\\$~$\end{tabular} & $-3$ & $ -3 $& Hyperbolic\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$ E_3 $\\$~$\end{tabular} & $-\frac{3}{2}\left(1+\frac{\mu^2c^2}{9}\right)$ & $\frac{3}{2}\left(1-\frac{\mu^2c^2}{9}\right)$& \begin{tabular}{@{}c@{}}$~~$\\Non-hyperbolic for $\mu c=\pm3$\\and\\hyperbolic for $\mu c\neq\pm3$\\$~~$\end{tabular}\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$ E_4 $\\$~$\end{tabular} & \begin{tabular}{@{}c@{}}$~~$\\$\frac{-3+\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$ \\$~~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\ $\frac{-3-\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$\\$~~$\end{tabular} &\begin{tabular}{@{}c@{}}$~~$\\Non-hyperbolic for $\mu c=\pm3$\\and\\hyperbolic for $\mu c\neq\pm3$\\$~~$\end{tabular}\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$ E_5 $\\$~$\end{tabular} & \begin{tabular}{@{}c@{}}$~~$\\$\frac{-3+\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$ \\$~~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\ $\frac{-3-\sqrt{45-\frac{324}{\mu^2 
c^2}}}{2}$\\$~~$\end{tabular} &\begin{tabular}{@{}c@{}}Non-hyperbolic for $\mu c=\pm3$\\and\\hyperbolic for $\mu c\neq\pm3$\end{tabular}\\ \hline \end{tabular} \end{table} \newpage To avoid repeating the arguments used above, we only state the stability of these critical points, together with the reasoning behind it, in Table \ref{T_stability}. \begin{table}[h] \caption{\label{T_stability}Stability of the critical points $(E_1-E_5)$ and the reasoning behind it.} \begin{tabular}{|c|c|c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$ CPs $\\$~$\end{tabular} &$Stability$& $Reason~behind~the~stability$ \\ \hline\hline $E_1,~E_2$& Both are stable stars & \begin{tabular}{@{}c@{}}$~~$\\Both eigenvalues $\lambda_1$ and $\lambda_2$ are negative and equal, so by the Hartman-\\Grobman theorem the critical points $E_1$ and $E_2$\\are both stable stars.\\$~~$\end{tabular}\\ \hline $E_3$&\begin{tabular}{@{}c@{}}$~~$\\ Stable node for $\mu c=-3$,\\saddle node for $\mu c=3$,\\ stable node for $\mu c>3$ or $\mu c<-3$,\\saddle node for $-3<\mu c<3$ \\$~~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\For $\mu c=-3:$\\After shifting this critical point to the origin by the\\ transformation $x= X-\frac{\mu c}{3}$, $y= Y$, and by using CMT, the CM \\is given by $X=Y^2+\mathcal{O}(Y^4) $ and the flow on the CM is determined \\by $ Y'=-\frac{3}{2}Y^3+\mathcal{O}(Y^5)$. $Y'<0$ for $Y>0$ and $Y'>0$ for $Y<0$.\\ So, the critical point $E_3$ is a stable node (FIG.\ref{mu_c_3}(a)).\\$~~$\\ For $\mu c=3:$\\ The center manifold is given by $X=-Y^2+\mathcal{O}(Y^4) $ and the flow on\\ the center manifold is determined by $ Y'=\frac{3}{2}Y^3+\mathcal{O}(Y^5)$. $Y'<0$\\ for $Y<0$ and $Y'>0$ for $Y>0$.
So, the critical point $E_3$ is a\\ saddle node (FIG.\ref{mu_c_3}(b)).\\$~~$\\For $\mu c>3$ or $\mu c<-3$:\\ Both eigenvalues $\lambda_1$ and $\lambda_2$ are negative and unequal, so by the\\ Hartman-Grobman theorem the critical point $E_3$ is a stable node.\\ $~~$\\ For $-3<\mu c<3:$\\$\lambda_1$ is negative and $\lambda_2$ is positive, so by the Hartman-Grobman theorem\\ the critical point $E_3$ is a saddle point, i.e., unstable.\\$~~$\end{tabular} \\ \hline $E_4,~E_5$ &\begin{tabular}{@{}c@{}}$~~$\\Both are stable nodes for $\mu c=-3$,\\ saddle nodes for $\mu c=3$,\\ saddle points for $\mu c>3$ or $\mu c<-3$\\$~~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\For $\mu c=3$ and $\mu c=-3$:\\ The expressions of the center manifold and the flow on the center\\ manifold are the same as those of the $\mu c=3$ and $\mu c=-3$\\ cases respectively for $E_3$.\\ $~~$\\ For $\mu c>3$ or $\mu c<-3$:\\ By the eigenvalues in Table \ref{T4}, $\lambda_1$ is positive and $\lambda_2$ is negative.\\ Hence, by the Hartman-Grobman theorem, the critical points $E_4$ and $E_5$\\ are both unstable due to their saddle nature.\\$~~$ \end{tabular}\\ \hline \end{tabular} \end{table} Note that $\mu c\geq3$ and $\mu c\leq-3$ constitute the domain of existence of the critical points $E_4$ and $E_5$; for this reason we did not carry out the stability analysis of $E_4$ and $E_5$ for $\mu c\in (-3,3)$. \begin{figure}[!] \centering \includegraphics[width=1\textwidth]{mu_c_3} \caption{Vector field near the origin for the critical point $E_3$. L.H.S. for $\mu c=3$ and R.H.S. for $\mu c=-3$.} \label{mu_c_3} \end{figure} \newpage \subsubsection*{Case-(iii)$~$\underline{$\mu=0$ and $\lambda\neq 0$}} In this case the autonomous system $(\ref{eq9}-\ref{eq11})$ changes into \begin{eqnarray} x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\lambda y^2 z}{2},\label{eq40} \\ y'&=&\frac{3}{2}y(1-x^2-y^2)-\frac{\lambda xyz}{2},\label{eq41} \\ z'&=&-xz^2.
\label{eq42} \end{eqnarray} Corresponding to the above autonomous system we have three spaces of critical points $P_1(0,0,z_c)$, $P_2(0,1,0)$ and $P_3(0,-1,0)$, where $z_c$ is any real number. The values of the cosmological parameters, the eigenvalues of the Jacobian matrix at those critical points for the autonomous system $(\ref{eq40}-\ref{eq42})$, and the nature of the critical points $P_1$, $P_2$ and $P_3$ are the same as for the critical points $A_1$, $A_2$ and $A_3$ respectively, as shown in Table \ref{TI}. \newpage \begin{center} $1.~Critical~Point~P_1$ \end{center} The Jacobian matrix at the critical point $P_1$ can be written as \begin{equation} \renewcommand{\arraystretch}{1.5} J(P_1)=\begin{bmatrix} -\frac{3}{2} & 0 & 0\\ ~~0 & \frac{3}{2}& 0\\ -z_c^2 & 0 & 0 \end{bmatrix}.\label{eq45} \end{equation} The eigenvalues of the above matrix are $-\frac{3}{2}$, $\frac{3}{2}$ and $0$, and $\left[1, 0, \frac{2}{3}z_c^2\right]^T$, $[0, 1, 0]^T$ and $[0, 0, 1]^T$ are the corresponding eigenvectors respectively. For a fixed $z_c$, we first shift the critical point $P_1$ to the origin by the coordinate transformation $x=X$, $y=Y$ and $z=Z+z_c$. Arguing as above for non-hyperbolic critical points, the center manifold can be written as $(\ref{eq37}-\ref{eq38})$ and the flow on the center manifold is determined by (\ref{eq39}). As in the discussion of stability for the critical point $C_2$, we conclude that the center manifold for the critical point $P_1$ also lies on the $Z$-axis but the flow on the center manifold cannot be determined. Now, if we project the vector field onto a plane parallel to the $XY$-plane, i.e., a plane $Z=constant$ (say), the vector field is as shown in FIG.\ref{saddle_z_c}.
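The eigenstructure of $J(P_1)$ can be verified in the same illustrative way (again a sympy sketch, not part of the paper's analysis):

```python
import sympy as sp

# Illustrative cross-check (not from the paper) of J(P_1) in eqn. (45),
# with z_c kept symbolic.
zc = sp.symbols('z_c')
J_P1 = sp.Matrix([[sp.Rational(-3, 2), 0, 0],
                  [0, sp.Rational(3, 2), 0],
                  [-zc**2, 0, 0]])

assert J_P1.eigenvals() == {sp.Rational(-3, 2): 1, sp.Rational(3, 2): 1, 0: 1}

# The stated eigenvector [1, 0, 2*z_c^2/3]^T for the eigenvalue -3/2.
v = sp.Matrix([1, 0, 2 * zc**2 / 3])
assert sp.simplify(J_P1 * v + sp.Rational(3, 2) * v) == sp.zeros(3, 1)
```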
So every point on the $Z$-axis is a saddle node.\bigbreak \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{saddle_z_c} \caption{Vector field near every point on the $Z$-axis for the critical point $P_1$.} \label{saddle_z_c} \end{figure} Again, if we want to obtain the stability of the critical points in a plane parallel to the $xy$-plane, i.e., $z=constant=c$ (say), we take only the first two equations (\ref{eq40}) and (\ref{eq41}) of the autonomous system $(\ref{eq40}-\ref{eq42})$ and replace $z$ by $c$ in those two equations. We then see that there exist three real and physically meaningful hyperbolic critical points $B_1(0,0)$, $B_2\left(-\frac{\lambda c}{6}, \sqrt{1+\frac{\lambda^2 c^2}{36}}\right)$ and $B_3\left(-\frac{\lambda c}{6}, -\sqrt{1+\frac{\lambda^2 c^2}{36}}\right)$. So, by obtaining the eigenvalues of the Jacobian matrix of the autonomous system at those critical points and using the Hartman-Grobman theorem, we state the stability of all the critical points, together with the values of the cosmological parameters, in Table \ref{TB}.\bigbreak For the critical points $P_2$ and $P_3$ we have the same Jacobian matrix (\ref{eq21}); taking the similar transformations (shifting and matrix) and using the same arguments as for $A_2$ and $A_3$ respectively, we conclude that the stability of $P_2$ and $P_3$ is the same as that of $A_2$ and $A_3$ respectively. \begin{table}[!]
\caption{\label{TB}Eigenvalues $(\lambda_1, \lambda_2)$ of the Jacobian matrix, stability, and values of the cosmological parameters for the critical points $(B_1-B_3)$.} \begin{tabular}{|c|c c|c|c c c c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$ Critical~Points $\\$~$\end{tabular} &$ ~~\lambda_1 $ & $~~\lambda_2$ & $ Stability$&$~\Omega_\phi~$& ~~$\omega_{\phi}$~~ &$\omega_{tot}$ & ~~$q$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\$B_1$\\$~$\end{tabular} &$-\frac{3}{2}$ & $\frac{3}{2}$&Saddle point (unstable)&$0$&Undetermined&$0$& $\frac{1}{2}$\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$B_2,~B_3$\\$~$\end{tabular}&$-3\left(1+\frac{\lambda^2 c^2}{18}\right)$&$-3\left(1+\frac{\lambda^2 c^2}{36}\right)$& \begin{tabular}{@{}c@{}}$~~$\\Stable star for $\lambda c=0$\\and\\stable node for $\lambda c\neq 0$\\$~~$\end{tabular}& $1$&$-\left(1+\frac{\lambda^2 c^2}{18}\right)$&$-\left(1+\frac{\lambda^2 c^2}{18}\right)$&$-\left(1+\frac{\lambda^2 c^2}{12}\right)$\\ \hline \end{tabular} \end{table} \newpage \subsubsection*{Case-(iv)$~$\underline{$\mu=0$ and $\lambda=0$}} In this case the autonomous system $(\ref{eq9}-\ref{eq11})$ changes into \begin{eqnarray} x'&=&-3x+\frac{3}{2}x(1-x^2-y^2),\label{eq49} \\ y'&=&\frac{3}{2}y(1-x^2-y^2),\label{eq50} \\ z'&=&-xz^2\label{eq51}. \end{eqnarray} Corresponding to the above autonomous system we have three spaces of critical points $S_1(0,0,z_c)$, $S_2(0,1,z_c)$ and $S_3(0,-1,z_c)$, where $z_c$ is any real number; these are analogous to $C_1$, $C_2$ and $C_3$. In this case also all the critical points are non-hyperbolic.
By taking the appropriate shifting transformations (for $S_1~(x=X,y=Y,z=Z+z_c)$, for $S_2~(x=X,y=Y+1,z=Z+z_c)$ and for $S_3~(x=X,y=Y-1,z=Z+z_c)$) as above, we can conclude that for all critical points the center manifold is given by $(\ref{eq37}-\ref{eq38})$ and the flow on the center manifold is determined by (\ref{eq39}), i.e., for all critical points the center manifold lies on the $Z$-axis. Again, if we plot the vector field in a $Z=constant$ plane, we see that for the critical point $S_1$ every point on the $Z$-axis is a saddle node (as in FIG.\ref{saddle_z_c}), and for $S_2$ and $S_3$ every point on the $Z$-axis is a stable star (as in FIG.\ref{z_c}). \subsection{Model 2: Power-law potential and exponentially-dependent dark-matter particle mass \label{M2}} In this case the evolution equations of Section \ref{BES} can be converted to the following autonomous system: \begin{eqnarray} x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\lambda y^2 z}{2}-\sqrt{\frac{3}{2}}\mu(1+x^2-y^2),\label{eq54} \\ y'&=&\frac{3}{2}y(1-x^2-y^2)-\frac{\lambda xyz}{2},\label{eq55} \\ z'&=&-xz^2.\label{eq56} \end{eqnarray} We have five critical points $L_1$, $L_2$, $L_3$, $L_4$ and $L_5$ corresponding to the above autonomous system. The critical points, their existence, and the values of the cosmological parameters at those points are shown in Table \ref{TPLE}, while the eigenvalues of the Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at those critical points, together with their nature, are shown in Table \ref{TNE}.\par Here we only consider the stability of the critical points for $\mu\neq 0$ and $\lambda\neq 0$, because in the other possible cases we obtain results similar to those for Model $1$. \begin{table}[h] \caption{\label{TPLE}The critical points, their existence, and the values of the cosmological parameters at those critical points.
} \begin{tabular}{|c|c|c c c|c|c|c| c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$Existence$&$x$&$y$&$z~~$& $~\Omega_\phi~$&$~\omega_\phi~$ &$~\omega_{tot}~$& $~q~$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\ $L_1$\\$~~$\end{tabular}& For all $\mu$ and $\lambda$&0&1&0 & 1 & $-1$ & $-1$&$-1$\\ \hline \begin{tabular}{@{}c@{}}$~~$\\ $L_2$\\$~~$\end{tabular}& For all $\mu$ and $\lambda$&0&$-1$&0 & 1 & $-1$ & $-1$&$-1$\\ \hline $L_3$ & \begin{tabular}{@{}c@{}}$~~$\\For all \\$\mu\in\left(-\infty,-\sqrt{\frac{3}{2}}\right]\cup\left[\sqrt{\frac{3}{2}},\infty\right)$\\and all $\lambda$\\$~~$\end{tabular}&$-\frac{1}{\mu}\sqrt{\frac{3}{2}}$&$\sqrt{1-\frac{3}{2\mu^2}}$&0&$1-\frac{3}{\mu^2}$&$\frac{\mu^2}{3-\mu^2}$&$-1$&$-1$\\ \hline $L_4$ & \begin{tabular}{@{}c@{}}$~~$\\For all \\$\mu\in\left(-\infty,-\sqrt{\frac{3}{2}}\right]\cup\left[\sqrt{\frac{3}{2}},\infty\right)$\\and all $\lambda$\\$~~$\end{tabular}&$-\frac{1}{\mu}\sqrt{\frac{3}{2}}$&$-\sqrt{1-\frac{3}{2\mu^2}}$&0&$1-\frac{3}{\mu^2}$&$\frac{\mu^2}{3-\mu^2}$&$-1$&$-1$\\ \hline \begin{tabular}{@{}c@{}}$~~$\\ $L_5$\\$~~$\end{tabular}& For all $\mu$ and $\lambda$&$-\sqrt{\frac{2}{3}}\mu$&$0$&$0$&$-\frac{2}{3}\mu^2$ & $1$&$-\frac{2}{3}\mu^2$&$\frac{1}{2}\left(1-2\mu^2\right)$ \\ \hline \end{tabular} \end{table} \begin{table}[h] \caption{\label{TNE}The eigenvalues $(\lambda_1,\lambda_2,\lambda_3)$ of the Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at those critical points $(L_1-L_5)$ and the nature of the critical points} \begin{tabular}{|c|c c c|c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$\lambda_1$&$\lambda_2$&$\lambda_3$&$Nature~ of~ critical~ Points$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\$L_1$\\$~~$\end{tabular}&$-3$&$-3$&$0$&Non-hyperbolic\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$L_2$\\$~~$\end{tabular}&$-3$&$-3$&$0$&Non-hyperbolic\\ \hline 
\begin{tabular}{@{}c@{}}$~~$\\$L_3$\\$~~$\end{tabular}&$-\frac{3}{2}\left(1+\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$-\frac{3}{2}\left(1-\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$0$&Non-hyperbolic\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$L_4$\\$~~$\end{tabular}&$-\frac{3}{2}\left(1+\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$-\frac{3}{2}\left(1-\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$0$&Non-hyperbolic\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$L_5$\\$~~$\end{tabular}&$-\frac{3}{2}$&$\frac{3}{2}$&$0$&Non-hyperbolic \\ \hline \end{tabular} \end{table} \begin{center} $1.~Critical~Point~L_1$ \end{center} The Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at the critical point $L_1$ can be written as \begin{equation} \renewcommand{\arraystretch}{1.5} J(L_1)=\begin{bmatrix} -3&\sqrt{6}\mu&-\frac{\lambda}{2}\\ ~~0&-3&~~0\\ ~~0 & ~~0& ~~0 \end{bmatrix}. \end{equation} The eigenvalues of $J(L_1)$ are $-3$, $-3$ and $0$; $[1,0,0]^T$ and $\left[-\frac{\lambda}{6}, 0,1\right]^T$ are the eigenvectors corresponding to the eigenvalues $-3$ and $0$ respectively. The algebraic multiplicity of the eigenvalue $-3$ is $2$ but the dimension of the corresponding eigenspace is $1$, i.e., the algebraic and geometric multiplicities of this eigenvalue are not equal, so the Jacobian matrix $J(L_1)$ is not diagonalizable. In determining the center manifold for this critical point, the only obstacle is the nonzero entry at the top of the third column of the Jacobian matrix. First we take the coordinate transformation $x=X,y=Y+1,z=Z$, which shifts the critical point $L_1$ to the origin. Now we introduce another coordinate system which removes this entry. Since there are only two linearly independent eigenvectors, we have to choose another linearly independent column vector to construct the new coordinate system.
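The multiplicity argument above can be confirmed for sample parameter values ($\mu=\lambda=1$, chosen purely for illustration; the structure is the same for any nonzero $\mu$):

```python
import sympy as sp

# Sample-value check (mu = lambda = 1, for illustration only) that J(L_1)
# has eigenvalue -3 with algebraic multiplicity 2 but a one-dimensional
# eigenspace, hence is not diagonalizable.
mu_val, lam_val = 1, 1
J_L1 = sp.Matrix([[-3, sp.sqrt(6) * mu_val, -sp.Rational(lam_val, 2)],
                  [0, -3, 0],
                  [0, 0, 0]])

assert J_L1.eigenvals()[-3] == 2            # algebraic multiplicity 2
assert (J_L1 + 3 * sp.eye(3)).rank() == 2   # nullity 1 => geometric multiplicity 1
assert not J_L1.is_diagonalizable()
```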
We take $[0,1,0]^T$, a column vector linearly independent of the eigenvectors of $J(L_1)$. The new coordinate system $(u,v,w)$ can then be written in terms of $(X,Y,Z)$ as (\ref{eq24}), and in this new coordinate system the equations $(\ref{eq54}-\ref{eq56})$ are transformed into \begin{equation}\renewcommand{\arraystretch}{1.5} \begin{bmatrix} u'\\ v'\\ w' \end{bmatrix}\renewcommand{\arraystretch}{1.5} =\begin{bmatrix} -3&\sqrt{6}\mu&0\\ ~~0&-3&~~0\\ ~~0 & ~~0& ~~0 \end{bmatrix} \begin{bmatrix} u\\ v\\ w \end{bmatrix} +\renewcommand{\arraystretch}{1.5} \begin{bmatrix} non\\ linear\\ terms \end{bmatrix}. \end{equation} By arguments similar to those in the stability analysis of the critical point $A_2$, the center manifold can be written as (\ref{eqn27}-\ref{eqn28}) and the flow on the center manifold is determined by (\ref{eq29}). Since the expressions for the center manifold and the flow are the same as for the critical point $A_2$, the stability of the critical point $L_1$ is the same as that of $A_2$. \begin{center} $2.~Critical~Point~L_2$ \end{center} After shifting the critical point to the origin (by the shifting transformation $(x=X,y=Y-1,z=Z)$ and the matrix transformation (\ref{eq24})) and arguing as in the analysis of $L_1$, the center manifold can be expressed as $(\ref{eqn30}-\ref{eqn31})$ and the flow on the center manifold is determined by (\ref{eqn32}). So the stability of the critical point $L_2$ is the same as that of $A_3$.
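One can check symbolically that the linear change of coordinates (\ref{eq24}) indeed removes the offending entry from the linear part; an illustrative sympy sketch (not the paper's code):

```python
import sympy as sp

# Illustrative check that the coordinate change (24), u = X + (lambda/6) Z,
# removes the z-entry from the linear part of J(L_1): the transformed linear
# part is P * J * P^(-1).
mu, lam = sp.symbols('mu lambda')
J = sp.Matrix([[-3, sp.sqrt(6) * mu, -lam / 2],
               [0, -3, 0],
               [0, 0, 0]])
P = sp.Matrix([[1, 0, lam / 6],
               [0, 1, 0],
               [0, 0, 1]])

expected = sp.Matrix([[-3, sp.sqrt(6) * mu, 0],
                      [0, -3, 0],
                      [0, 0, 0]])
assert sp.simplify(P * J * P.inv()) == expected
```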
\begin{center} $3.~Critical~Point~L_3$ \end{center} The Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at the critical point $L_3$ can be written as \begin{equation} \renewcommand{\arraystretch}{3} J(L_3)=\begin{bmatrix} -\frac{9}{2\mu^2}&\sqrt{1-\frac{3}{2\mu^2}}\left(\frac{3}{\mu}\sqrt{\frac{3}{2}}+\sqrt{6}\mu\right)&-\frac{\lambda}{2}\left(1-\frac{3}{2\mu^2}\right)\\ \frac{3}{\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}&-3\left(1-\frac{3}{2\mu^2}\right)&\frac{\lambda}{2\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}\\ ~~0 & ~~0& ~~0 \end{bmatrix}. \end{equation} The eigenvalues of the Jacobian matrix $J(L_3)$ are shown in Table \ref{TNE}. From the existence condition of the critical point $L_3$ we can conclude that the eigenvalues of $J(L_3)$ are always real. Since the critical point $L_3$ exists for $\mu\leq -\sqrt{\frac{3}{2}}$ or $\mu\geq \sqrt{\frac{3}{2}}$, our aim is to determine the stability in every possible region of $\mu$, for at least one choice of $\mu$ in each region. For this reason we determine the stability at four choices of $\mu$. We first determine the stability of this critical point at $\mu=\pm\sqrt{\frac{3}{2}}$. Then for $\mu< -\sqrt{\frac{3}{2}}$ we determine the stability of $L_3$ at $\mu=-\sqrt{3}$, and for $\mu>\sqrt{\frac{3}{2}}$ we determine the stability of $L_3$ at $\mu=\sqrt{3}$.\par For $\mu=\pm\sqrt{\frac{3}{2}}$, the Jacobian matrix $J(L_3)$ reduces to $$ \begin{bmatrix} -3&0&0\\~~0&0&0\\~~0&0&0 \end{bmatrix} $$ and the critical point $L_3$ becomes $(\mp 1,0,0)$, so we first take the transformation $x=X\mp 1,~ y= Y,~ z=Z$ so that $L_3$ moves to the origin. As the critical point is non-hyperbolic, we use CMT to determine its stability.
By center manifold theory there exists a continuously differentiable function $h:\mathbb{R}^2\rightarrow\mathbb{R}$ such that $X=h(Y,Z)=aY^2+bYZ+cZ^2+higher~order~terms,$ where $a,~b,~c\in\mathbb{R}$. \\ Now differentiating both sides with respect to $N$, we get \begin{eqnarray} \frac{dX}{dN}=[2aY+bZ ~~~~ bY+2cZ]\begin{bmatrix} \frac{dY}{dN}\\ ~\\ \frac{dZ}{dN}\\ \end{bmatrix}\label{equn52} \end{eqnarray} Comparing the L.H.S. and R.H.S. of (\ref{equn52}) we get $a=1$, $b=0$ and $c=0$, i.e., the center manifold can be written as \begin{eqnarray} X&=&\pm Y^2+higher~order~terms\label{eq65} \end{eqnarray} and the flow on the center manifold is determined by \begin{eqnarray} \frac{dY}{dN}&=&\pm\frac{\lambda}{2}YZ+higher~order~terms,\label{eq66}\\ \frac{dZ}{dN}&=&\pm Z^2+higher~order~terms\label{eq67}. \end{eqnarray} In CMT we are concerned only with the non-zero coefficients of the lowest-power terms, since we analyze an arbitrarily small neighborhood of the origin, and here the lowest-power term of the center manifold expression depends only on $Y$. So we draw the vector field near the origin only on the $XY$-plane, i.e., the nature of the vector field depends on $Z$ implicitly rather than explicitly. We now write the flow equations $(\ref{eq66}-\ref{eq67})$ in terms of $Y$ only. To this end, we divide the corresponding sides of (\ref{eq66}) by those of (\ref{eq67}) and obtain \begin{align*} &\frac{dY}{dZ}=\frac{\lambda}{2}\frac{Y}{Z}\\ \implies& Z=\left(\frac{Y}{C}\right)^{2/\lambda},~~\mbox{where $C$ is a positive arbitrary constant.} \end{align*} Substituting this into either of $(\ref{eq66})$ or $(\ref{eq67})$, we get \begin{align} \frac{dY}{dN}=\frac{\lambda}{2C^{2/\lambda}}Y^{1+2/\lambda} \end{align} As the power of $Y$ cannot be negative or fractional, we have only two admissible choices: $\lambda=1$ or $\lambda=2$.
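The relation $Z=(Y/C)^{2/\lambda}$, i.e.\ the conservation of $Y/Z^{\lambda/2}$ along the flow, can be verified by integrating the leading-order flow numerically. This is an illustrative sketch only (hand-rolled RK4, sample initial data, $\lambda=2$, the '+' branch), not part of the proof.

```python
import numpy as np

lam = 2.0  # one of the two admissible values
def f(s):
    # leading terms of (eq66)-(eq67), '+' branch
    Y, Z = s
    return np.array([lam/2*Y*Z, Z**2])

# classical RK4 integration; Y/Z^{lam/2} should be conserved along the flow,
# reflecting the relation Z = (Y/C)^{2/lam} derived above
s = np.array([0.01, 0.02])
h = 1e-3
ratios = []
for _ in range(5000):
    k1 = f(s); k2 = f(s + h/2*k1); k3 = f(s + h/2*k2); k4 = f(s + h*k3)
    s = s + h/6*(k1 + 2*k2 + 2*k3 + k4)
    ratios.append(s[0] / s[1]**(lam/2))
print(max(ratios) - min(ratios))   # ~ 0 up to integration error
```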
For both $\lambda=1$ and $\lambda=2$ the origin is a saddle node, i.e., unstable in nature (FIG.\ref{L_21} is for $\mu=\sqrt{\frac{3}{2}}$ and FIG.\ref{L_2_1_1} is for $\mu=-\sqrt{\frac{3}{2}}$). Hence, for $\mu=\pm \sqrt{\frac{3}{2}}$, in the old coordinate system the critical point $L_3$ is unstable due to its saddle nature.\bigbreak \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{L21} \caption{Vector field near the origin when $\mu=\sqrt{\frac{3}{2}}$, for the critical point $L_3$. The L.H.S. phase plot is for $\lambda=1$ and the R.H.S. phase plot is for $\lambda=2$.} \label{L_21} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{L211} \caption{Vector field near the origin when $\mu=-\sqrt{\frac{3}{2}}$, for the critical point $L_3$. The L.H.S. phase plot is for $\lambda=1$ and the R.H.S. phase plot is for $\lambda=2$.} \label{L_2_1_1} \end{figure} For $\mu=\sqrt{3}$, the Jacobian matrix $J(L_3)$ reduces to $$ \renewcommand{\arraystretch}{1.5} \begin{bmatrix} -\frac{3}{2}&~~\frac{9}{2}&-\frac{\lambda}{4}\\~~\frac{3}{2}&-\frac{3}{2}&~~\frac{\lambda}{4}\\~~0&~~0&~~0 \end{bmatrix}. $$ The eigenvalues of the above Jacobian matrix are $-\frac{3}{2}(1+\sqrt{3})$, $-\frac{3}{2}(1-\sqrt{3})$ and $0$, and the corresponding eigenvectors are $[-\sqrt{3},1,0]^T$, $[\sqrt{3},1,0]^T$ and $\left[-\frac{\lambda}{6},0,1\right]^T$ respectively. As for $\mu=\sqrt{3}$ the critical point $L_3$ becomes $\left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}},0\right)$, we first take the transformations $x= X-\frac{1}{\sqrt{2}}$, $y= Y+\frac{1}{\sqrt{2}}$ and $z= Z$, which shift the critical point to the origin.
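A short numpy check (sample value $\lambda=1$, illustrative only) confirms that the matrix whose columns are the eigenvectors just listed diagonalizes this Jacobian, with the stated eigenvalues on the diagonal.

```python
import numpy as np

lam = 1.0  # arbitrary sample value
J = np.array([[-1.5,  4.5, -lam/4],
              [ 1.5, -1.5,  lam/4],
              [ 0.0,  0.0,  0.0 ]])        # J(L3) at mu = sqrt(3)
r3 = np.sqrt(3)
P = np.array([[-r3,  r3, -lam/6],
              [1.0, 1.0,  0.0  ],
              [0.0, 0.0,  1.0  ]])          # eigenvectors as columns
D = np.linalg.inv(P) @ J @ P
print(np.round(D, 12))
# diagonal entries: -(3/2)(1+sqrt(3)), -(3/2)(1-sqrt(3)), 0
```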
By using the eigenvectors of the above Jacobian matrix, we introduce a new coordinate system $(u,v,w)$ in terms of $(X,Y,Z)$ as \begin{equation}\renewcommand{\arraystretch}{1.5} \begin{bmatrix} u\\ v\\ w \end{bmatrix}\renewcommand{\arraystretch}{1.5} =\begin{bmatrix} -\frac{1}{2\sqrt{3}} & \frac{1}{2} & -\frac{\lambda}{12\sqrt{3}} \\ \frac{1}{2\sqrt{3}} & \frac{1}{2} & \frac{\lambda}{12\sqrt{3}}\\ 0 & 0 & 1 \end{bmatrix}\renewcommand{\arraystretch}{1.5} \begin{bmatrix} X\\ Y\\ Z \end{bmatrix} \end{equation} and in these new coordinates the equations $(\ref{eq54}-\ref{eq56})$ are transformed into \begin{equation} \renewcommand{\arraystretch}{1.5} \begin{bmatrix} -u'+v'\\ u'+v'\\ w' \end{bmatrix} =\begin{bmatrix} \frac{3}{2}(1+\sqrt{3})& -\frac{3}{2}(1-\sqrt{3}) & 0 \\ -\frac{3}{2}(1+\sqrt{3}) & -\frac{3}{2}(1-\sqrt{3}) & 0 \\ ~~0 & ~~0 & 0 \end{bmatrix} \begin{bmatrix} u\\ v\\ w \end{bmatrix} + \begin{bmatrix} non\\ linear\\ terms \end{bmatrix}. \end{equation} Now if we add the $1$st and $2$nd equations of the above matrix equation and divide both sides by $2$, we get $v'$; if we subtract the $1$st equation from the $2$nd and divide both sides by $2$, we get $u'$. Finally, in the new coordinate system the autonomous system can be written in matrix form as \begin{equation} \renewcommand{\arraystretch}{1.5} \begin{bmatrix} u'\\ v'\\ w' \end{bmatrix} =\begin{bmatrix} -\frac{3}{2}(1+\sqrt{3})& 0 & 0 \\ 0 & -\frac{3}{2}(1-\sqrt{3}) & 0 \\ 0 & ~~0 & 0 \end{bmatrix} \begin{bmatrix} u\\ v\\ w \end{bmatrix} + \begin{bmatrix} non\\ linear\\ terms \end{bmatrix}. 
\end{equation} Applying the same arguments as in the analysis of $A_2$, the center manifold can be expressed as \begin{align} u&=\frac{2}{3(1+\sqrt{3})}\left\{\frac{(\sqrt{3}-1)\lambda^2-4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3),\label{eqn72}\\ v&=-\frac{2}{3(\sqrt{3}-1)}\left\{\frac{(\sqrt{3}+1)\lambda^2+4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3)\label{eqn73} \end{align} and the flow on the center manifold is determined by \begin{align} w'&=\frac{1}{\sqrt{2}}w^2+\mathcal{O}(w^3)\label{eqn74}. \end{align} From the flow equation we readily conclude that the origin is a saddle node, unstable in nature. The vector field near the origin in the $uw$-plane is shown in FIG.\ref{L_22} and in the $vw$-plane in FIG.\ref{L_2_2}. Hence, in the old coordinate system $(x,y,z)$, for $\mu=\sqrt{3}$ the critical point $L_3$ is unstable due to its saddle nature. \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{L222} \caption{Vector field near the origin in $uw$-plane when $\mu=\sqrt{3}$, for the critical points $L_3$ and $L_4$. For the critical point $L_3$, the phase plot (a) is for $\lambda<0$ or $\lambda>\frac{4}{\sqrt{3}-1}$ and the phase plot (b) is for $0<\lambda<\frac{4}{\sqrt{3}-1}$. For the critical point $L_4$, the phase plot (a) is for $0<\lambda<\frac{4}{\sqrt{3}-1}$ and the phase plot (b) is for $\lambda<0$ or $\lambda>\frac{4}{\sqrt{3}-1}$.} \label{L_22} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{L22} \caption{Vector field near the origin in $vw$-plane when $\mu=\sqrt{3}$, for the critical points $L_3$ and $L_4$. For the critical point $L_3$, the phase plot (a) is for $\lambda<-\frac{4}{\sqrt{3}+1}$ or $\lambda>0$ and the phase plot (b) is for $-\frac{4}{\sqrt{3}+1}<\lambda<0$. 
For the critical point $L_4$, the phase plot (a) is for $-\frac{4}{\sqrt{3}+1}<\lambda<0$ and the phase plot (b) is for $\lambda<-\frac{4}{\sqrt{3}+1}$ or $\lambda>0$.} \label{L_2_2} \end{figure} Lastly, for $\mu=-\sqrt{3}$, $J(L_3)$ has the same eigenvalues $-\frac{3}{2}(1+\sqrt{3})$, $-\frac{3}{2}(1-\sqrt{3})$ and $0$, with corresponding eigenvectors $[\sqrt{3},1,0]^T$, $[-\sqrt{3},1,0]^T$ and $\left[-\frac{\lambda}{6},0,1\right]^T$ respectively. Applying the corresponding arguments of the $\mu=\sqrt{3}$ case, we obtain the same expressions $(\ref{eqn72}-\ref{eqn73})$ for the center manifold and (\ref{eqn74}) for the flow on the center manifold. So in this case also we conclude that the critical point $L_3$ is a saddle node and unstable in nature. \newpage \begin{center} $4.~Critical~Point~L_4$ \end{center} The Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at the critical point $L_4$ is given by \begin{equation} \renewcommand{\arraystretch}{3} J(L_4)=\begin{bmatrix} -\frac{9}{2\mu^2}&-\sqrt{1-\frac{3}{2\mu^2}}\left(\frac{3}{\mu}\sqrt{\frac{3}{2}}+\sqrt{6}\mu\right)&-\frac{\lambda}{2}\left(1-\frac{3}{2\mu^2}\right)\\ -\frac{3}{\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}&-3\left(1-\frac{3}{2\mu^2}\right)&-\frac{\lambda}{2\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}\\ ~~0 & ~~0& ~~0 \end{bmatrix}. \end{equation} For this critical point we also analyze the stability for the above four choices of $\mu$, i.e., $\mu=\pm\sqrt{\frac{3}{2}}$, $\mu=\sqrt{3}$ and $\mu=-\sqrt{3}$. \par For $\mu=\pm\sqrt{\frac{3}{2}}$, we obtain the same expression (\ref{eq65}) for the center manifold and $(\ref{eq66}-\ref{eq67})$ for the flow on the center manifold. So in this case the critical point $L_4$ is unstable due to its saddle nature.
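Although $J(L_4)$ differs from $J(L_3)$ in the signs of its off-diagonal entries, at $\mu=\sqrt{3}$ it has the same spectrum; a quick numerical sketch (sample value $\lambda=1$, illustrative only):

```python
import numpy as np

def J_L4(mu, lam):
    # J(L4) transcribed from the matrix above
    p = 1 - 3/(2*mu**2)
    s = np.sqrt(p)
    k = np.sqrt(1.5)
    return np.array([
        [-9/(2*mu**2), -s*(3/mu*k + np.sqrt(6)*mu), -lam/2*p],
        [-3/mu*k*s,    -3*p,                        -lam/(2*mu)*k*s],
        [0.0, 0.0, 0.0],
    ])

lam = 1.0  # arbitrary sample value
ev = np.sort(np.linalg.eigvals(J_L4(np.sqrt(3.0), lam)).real)
print(ev)  # -(3/2)(1+sqrt(3)), 0, (3/2)(sqrt(3)-1): same spectrum as J(L3)
```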
\par For $\mu=\sqrt{3}$, applying the corresponding arguments as for $L_3$, the center manifold can be written as \begin{align} u&=\frac{2}{3(1+\sqrt{3})}\left\{\frac{(1-\sqrt{3})\lambda^2+4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3),\label{eqn76}\\ v&=\frac{2}{3(\sqrt{3}-1)}\left\{\frac{(\sqrt{3}+1)\lambda^2+4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3)\label{eqn77} \end{align} and the flow on the center manifold is determined by \begin{align} w'&=\frac{1}{\sqrt{2}}w^2+\mathcal{O}(w^3)\label{eqn78}. \end{align} From the flow equation we conclude that the origin is a saddle node, and hence in the old coordinate system $L_4$ is a saddle node, i.e., unstable in nature. The vector field near the origin in the $uw$-plane is shown in FIG.\ref{L_22} and in the $vw$-plane in FIG.\ref{L_2_2}.\par For $\mu=-\sqrt{3}$ we obtain the same expressions for the center manifold and the flow equation as in the $\mu=\sqrt{3}$ case. \begin{center} $5.~Critical~Point~L_5$ \end{center} First we shift the critical point $L_5$ to the origin by the transformation $x= X-\sqrt{\frac{2}{3}}\mu$, $y=Y$ and $z= Z$. To avoid repeating the arguments given for the previous critical points, we only state the main results, namely the center manifold and the flow equation, for this critical point. The center manifold can be written as \begin{align} X&=0,\\Y&=0 \end{align} and the flow on the center manifold can be obtained as \begin{align} \frac{dZ}{dN}=\sqrt{\frac{2}{3}}\mu Z^2 +\mathcal{O}(Z^3). \end{align} From the expressions of the center manifold we conclude that the center manifold lies on the $Z$-axis. From the flow on the center manifold (FIG.\ref{z_center_manifold}) we conclude that the origin is unstable both for $\mu>0$ and for $\mu<0$. \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{z_center_manifold} \caption{Flow on the center manifold near the origin for the critical point $L_5$. 
(a) is for $\mu>0$ and (b) is for $\mu<0$.} \label{z_center_manifold} \end{figure} \subsection{Model 3: Exponential potential and power-law-dependent dark-matter particle mass \label{M3}} In this case the evolution equations of Section \ref{BES} can be written as the following autonomous system: \begin{eqnarray} x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda y^2-\frac{\mu}{2}z(1+x^2-y^2),\label{eq82} \\ y'&=&\frac{3}{2}y(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda xy,\label{eq83} \\ z'&=&-xz^2.\label{eq84} \end{eqnarray} We have three physically meaningful critical points $R_1$, $R_2$ and $R_3$ for the above autonomous system. The critical points, their existence conditions and the values of the cosmological parameters at these points are shown in Table \ref{TPRE}, while the eigenvalues of the Jacobian matrix of the autonomous system $(\ref{eq82}-\ref{eq84})$ at these points, together with the nature of the critical points, are shown in Table \ref{TNRE}.\par Here we again restrict attention to the stability of the critical points for $\mu\neq 0$ and $\lambda\neq 0$, because in the other possible cases we obtain results of the same type as for Model $1$. \begin{table}[h] \caption{\label{TPRE}Critical points, their existence conditions, and the values of the cosmological parameters at these points. 
} \begin{tabular}{|c|c|c c c|c|c|c| c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$Existence$&$x$&$y$&$z~~$& $~\Omega_X~$&$~\omega_X~$ &$~\omega_{tot}~$& $~q~$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\$R_1$\\$~~$\end{tabular}& For all $\mu$ and $\lambda$&$0$&$0$&$0$&$0$&Undetermined&$0$&$\frac{1}{2}$\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$R_2$\\$~~$\end{tabular}&For all $\mu$ and $\lambda$&$-\frac{\lambda}{\sqrt{6}}$&$\sqrt{1+\frac{\lambda^2}{6}}$&$0$&$1$&$-1-\frac{\lambda^2}{3}$&$-1-\frac{\lambda^2}{3}$&$-\frac{1}{2}\left(2+\lambda^2\right)$\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$R_3$\\$~~$\end{tabular}&For all $\mu$ and $\lambda$&$-\frac{\lambda}{\sqrt{6}}$&$-\sqrt{1+\frac{\lambda^2}{6}}$&$0$&$1$&$-1-\frac{\lambda^2}{3}$&$-1-\frac{\lambda^2}{3}$&$-\frac{1}{2}\left(2+\lambda^2\right)$\\ \hline \end{tabular} \end{table} \begin{table}[h] \caption{\label{TNRE}The eigenvalues $(\lambda_1,\lambda_2,\lambda_3)$ of the Jacobian matrix corresponding to the autonomous system $(\ref{eq82}-\ref{eq84})$ at the critical points $(R_1-R_3)$ and the nature of the critical points.} \begin{tabular}{|c|c c c|c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$\lambda_1$&$\lambda_2$&$\lambda_3$&$Nature~ of~ critical~ Points$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\$R_1$\\$~~$\end{tabular}&$-\frac{3}{2}$&$\frac{3}{2}$&$0$&Non-hyperbolic\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$R_2$\\$~~$\end{tabular}&$-(3+\lambda^2)$&$-\left(3+\frac{\lambda^2}{2}\right)$&$0$&Non-hyperbolic\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$R_3$\\$~~$\end{tabular}&$-(3+\lambda^2)$&$-\left(3+\frac{\lambda^2}{2}\right)$&$0$&Non-hyperbolic\\ \hline \end{tabular} \end{table} To avoid repeating similar arguments, we only state the stability of each critical point $(R_1-R_3)$, together with the reasoning behind it, in Table \ref{T_R_stability}. \begin{table}[!] 
\caption{\label{T_R_stability}Table shows the stability and the reason behind the stability of the critical points $(R_1-R_3)$.} \begin{tabular}{|c|c|c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$ CPs $\\$~$\end{tabular} &$Stability$& $Reason~behind~the~stability$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\$R_1$\\$~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\For $\mu>0$, $R_1$ is a saddle node\\$~~$\\ and \\$~~$\\for $\mu<0$, $R_1$ is a stable node\\$~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\After introducing the coordinate transformation (\ref{eq15}),\\ we get the same expressions of the center manifold\\ $(\ref{eq18}-\ref{eq19})$ and the flow on the center manifold is\\ determined by $(\ref{eq20})$ (FIG.\ref{A_1}).\\$~$\end{tabular}\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$R_2,R_3$\\$~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\For $\lambda>0$ or $\lambda<0$, \\$~~$\\$R_2$ and $R_3$ are both unstable\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\After shifting $R_2$ and $R_3$ to the origin by using the coordinate\\ transformations $\left(x=X-\frac{\lambda}{\sqrt{6}},y=Y+\sqrt{1+\frac{\lambda^2}{6}},z=Z\right)$ and\\ $\left(x=X-\frac{\lambda}{\sqrt{6}},y=Y-\sqrt{1+\frac{\lambda^2}{6}},z=Z \right)$ respectively,\\ we conclude that the center manifold lies on the $Z$-axis\\ and the flow on the center manifold is determined by\\ $\frac{dZ}{dN}=\frac{\lambda}{\sqrt{6}}Z^2+\mathcal{O}(Z^3)$.\\$~~$\\ The origin is unstable in both cases $\lambda>0$\\ (same as FIG.\ref{z_center_manifold}\textbf{(a)}) and $\lambda<0$ (same as FIG.\ref{z_center_manifold}\textbf{(b)}).\\$~$\end{tabular}\\ \hline \end{tabular} \end{table} \subsection{Model 4: Exponential potential and exponentially-dependent dark-matter particle mass \label{M4}} In this case the evolution equations of Section \ref{BES} can be written as the following autonomous system: \begin{eqnarray} x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda 
y^2-\sqrt{\frac{3}{2}}\mu(1+x^2-y^2),\label{eq85} \\ y'&=&\frac{3}{2}y(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda xy.\label{eq86} \end{eqnarray} We ignore the equation corresponding to the auxiliary variable $z$ in the above autonomous system because the R.H.S. expressions of $x'$ and $y'$ do not depend on $z$.\par \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{M_1} \caption{Vector field near the origin for the critical point $M_1$. L.H.S. for $\mu >0$ and R.H.S. for $\mu<0$.} \label{M_1} \end{figure} Corresponding to the above autonomous system we have four critical points $M_1$, $M_2$, $M_3$ and $M_4$. The critical points, their existence conditions and the values of the cosmological parameters at these points for the autonomous system $(\ref{eq85}-\ref{eq86})$ are shown in Table \ref{TPME}, while the eigenvalues of the Jacobian matrix at these points, together with the nature of the critical points, are shown in Table \ref{TNME}.\bigbreak \begin{table}[h] \caption{\label{TPME}Critical points, their existence conditions, and the values of the cosmological parameters at these points. 
} \begin{tabular}{|c|c|c c|c|c|c| c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$Existence$&$x$&$y$& $~\Omega_X~$&$~\omega_X~$ &$~\omega_{tot}~$& $~q~$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\ $M_1$\\$~~$\end{tabular}& For all $\mu$ and $\lambda$&$-\sqrt{\frac{2}{3}}\mu$&$0$&$-\frac{2}{3}\mu^2$ & $1$&$-\frac{2}{3}\mu^2$&$\frac{1}{2}\left(1-2\mu^2\right)$ \\ \hline \begin{tabular}{@{}c@{}}$~~$\\$M_2$\\$~~$\end{tabular}&For all $\mu$ and $\lambda$&$-\frac{\lambda}{\sqrt{6}}$&$\sqrt{1+\frac{\lambda^2}{6}}$&$1$&$-1-\frac{\lambda^2}{3}$&$-1-\frac{\lambda^2}{3}$&$-\frac{1}{2}\left(2+\lambda^2\right)$\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$M_3$\\$~~$\end{tabular}&For all $\mu$ and $\lambda$&$-\frac{\lambda}{\sqrt{6}}$&$-\sqrt{1+\frac{\lambda^2}{6}}$&$1$&$-1-\frac{\lambda^2}{3}$&$-1-\frac{\lambda^2}{3}$&$-\frac{1}{2}\left(2+\lambda^2\right)$\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$M_4$\\$~~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\For $\mu\neq\lambda$\\and\\ $\min\{\mu^2-\frac{3}{2},\lambda^2+3\}\geq\lambda\mu$\\$~~$\end{tabular}&$\frac{\sqrt{\frac{3}{2}}}{\lambda-\mu}$&$\frac{\sqrt{-\frac{3}{2}-\mu(\lambda-\mu)}}{|\lambda-\mu|}$&$\frac{\mu^2-\lambda\mu-3}{(\lambda-\mu)^2}$&$\frac{\mu(\lambda-\mu)}{\mu^2-\lambda\mu-3}$&$\frac{\mu}{\lambda-\mu}$&$\frac{1}{2}\left(\frac{\lambda+2\mu}{\lambda-\mu}\right)$\\ \hline \end{tabular} \end{table} \begin{table}[h] \caption{\label{TNME}The eigenvalues $(\lambda_1,\lambda_2)$ of the Jacobian matrix corresponding to the autonomous system $(\ref{eq85}-\ref{eq86})$ at those critical points $(M_1-M_4)$ and the nature of the critical points.} \begin{tabular}{|c|c c|c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$\lambda_1$&$\lambda_2$&$Nature~ of~ critical~ Points$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\$M_1$\\$~~$\end{tabular}&$-\left(\frac{3}{2}+\mu^2\right)$$~~$&$~~$$-\left(\mu^2-\frac{3}{2}\right)+\lambda\mu$& 
\begin{tabular}{@{}c@{}}$~~$\\Hyperbolic if $\left(\mu^2-\frac{3}{2}\right)\neq\lambda\mu$,\\$~~$ \\ non-hyperbolic if $\left(\mu^2-\frac{3}{2}\right)=\lambda\mu$\\$~~$\end{tabular}\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$M_2$\\$~~$\end{tabular}&$-(3+\lambda^2)+\lambda\mu$&$-\left(3+\frac{\lambda^2}{2}\right)$&\begin{tabular}{@{}c@{}}$~~$\\Hyperbolic if $(\lambda^2+3)\neq\lambda\mu$,\\$~~$ \\ non-hyperbolic if $\left(\lambda^2+3\right)=\lambda\mu$\\$~~$\end{tabular}\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$M_3$\\$~~$\end{tabular}&$-(3+\lambda^2)+\lambda\mu$&$-\left(3+\frac{\lambda^2}{2}\right)$&\begin{tabular}{@{}c@{}}$~~$\\Hyperbolic if $(\lambda^2+3)\neq\lambda\mu$,\\$~~$ \\ non-hyperbolic if $\left(\lambda^2+3\right)=\lambda\mu$\\$~~$\end{tabular}\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$M_4$\\$~~$\end{tabular}&$\frac{a+d+\sqrt{(a-d)^2+4bc}}{2}$&$\frac{a+d-\sqrt{(a-d)^2+4bc}}{2}$&\begin{tabular}{@{}c@{}}$~~$\\Hyperbolic when $\mu^2-\frac{3}{2}>\lambda\mu$\\ and $\lambda^2+3>\lambda\mu$,\\$~~$\\non-hyperbolic when $\mu^2-\frac{3}{2}=\lambda\mu$\\ or $\lambda^2+3=\lambda\mu$\\$~~$\end{tabular}\\ \hline \end{tabular} \end{table} Note that for the critical point $M_4$ we have written the eigenvalues in terms of $a$, $b$, $c$ and $d$, where $a=-\frac{3}{2(\lambda-\mu)^2}(\lambda^2+3-\lambda\mu)$, $b=\mp\sqrt{\frac{3}{2}}\left(\frac{3}{(\lambda-\mu)^2}+2\right)\sqrt{-\frac{3}{2}-\mu(\lambda-\mu)}$, $c=\mp\sqrt{\frac{3}{2}}\left\{\frac{\lambda^2+3-\lambda\mu}{(\lambda-\mu)^2}\right\} \sqrt{-\frac{3}{2}-\mu(\lambda-\mu)}$, $d=-\frac{3}{(\lambda-\mu)^2}\left\{\left(\mu^2-\frac{3}{2}\right)-\lambda\mu\right\}$.\par Again, here we only state the stability of each critical point $(M_1-M_4)$, together with the reasoning behind it, in Table \ref{T_M_stability}. \begin{table}[!] 
\caption{\label{T_M_stability}Table shows the stability and the reason behind the stability of the critical points $(M_1-M_4)$} \begin{tabular}{|c|c|c|} \hline \hline \begin{tabular}{@{}c@{}}$~~$\\$ CPs $\\$~$\end{tabular} &$Stability$& $Reason~behind~the~stability$ \\ \hline\hline \begin{tabular}{@{}c@{}}$~~$\\$ M_1 $\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\Stable node for $\left(\mu^2-\frac{3}{2}\right)>\lambda\mu$\\ $~~$\\and\\$~~$\\ saddle node for $\left(\mu^2-\frac{3}{2}\right)\leq\lambda\mu$\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\For $\left(\mu^2-\frac{3}{2}\right)>\lambda\mu$, as both eigenvalues \\of the Jacobian matrix at $M_1$ are negative, so by\\ Hartman-Grobman theorem we can conclude that\\ the critical point $M_1$ is a stable node.\\$~~$\\ For $\left(\mu^2-\frac{3}{2}\right)<\lambda\mu$, as one eigenvalue is positive\\ and another is negative, so by Hartman-Grobman theorem\\ we can conclude that the critical point $M_1$ is a saddle node.\\$~~$\\ For $\left(\mu^2-\frac{3}{2}\right)=\lambda\mu$, after shifting the critical point\\ $M_1$ to the origin by the coordinate transformation\\ $\left(x=X-\sqrt{\frac{2}{3}}\mu,y=Y\right)$, the center manifold can be written as \\$X=\frac{1}{\mu}\sqrt{\frac{3}{2}}Y^2+\mathcal{O}(Y^3)$\\ and the flow on the center manifold can be determined as\\ $\frac{dY}{dN}=\frac{9}{4\mu^2}Y^3+\mathcal{O}(Y^4)$.\\ Hence, for both of the cases $\mu>0$ and $\mu<0$ the origin\\ is a saddle node and unstable in nature (FIG.\ref{M_1}).\\$~~$\end{tabular}\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$ M_2,M_3 $\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\Stable node for $\left(\lambda^2+3\right)>\lambda\mu$\\$~~$\\ and\\$~~$\\ saddle node for $\left(\lambda^2+3\right)\leq\lambda\mu$\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\For $\left(\lambda^2+3\right)>\lambda\mu$, as both eigenvalues \\of the Jacobian matrix at $M_2$ are negative, so by\\ Hartman-Grobman theorem we can conclude that\\ the critical point 
$M_2$ is a stable node.\\$~~$\\ For $\left(\lambda^2+3\right)<\lambda\mu$, as one eigenvalue is positive\\ and another is negative, so by the Hartman-Grobman theorem\\ we can conclude that the critical point $M_2$ is a saddle node.\\$~~$\\ For $\left(\lambda^2+3\right)=\lambda\mu$, after shifting the critical points\\ $M_2$ and $M_3$ to the origin by the coordinate transformation\\ $\left(x=X-\frac{\lambda}{\sqrt{6}},y=Y\pm\sqrt{1+\frac{\lambda^2}{6}}\right)$, the center manifold can be\\ written as $~~Y=\mp\frac{1}{2\sqrt{1+\frac{\lambda^2}{6}}}X^2+\mathcal{O}(X^3)$\\ and the flow on the center manifold can be determined as\\ $\frac{dX}{dN}=\frac{\lambda}{2}\sqrt{\frac{3}{2}}\left\{1-\frac{6}{\lambda^2}\pm\frac{12}{\lambda^2}\left(1+\frac{\lambda^2}{6}\right)^{\frac{3}{2}}\right\}X^2+\mathcal{O}(X^4)$.\\ Hence, for all possible values of $\lambda$, due to the even power of $X$\\ in the R.H.S. of the flow equation, the origin is\\ a saddle node and unstable in nature.\\$~~$\end{tabular}\\ \hline \begin{tabular}{@{}c@{}}$~~$\\$ M_4 $\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\Saddle node in both cases, i.e.,\\ $\mu^2-\frac{3}{2}=\lambda\mu$ or $\lambda^2+3=\lambda\mu$\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\ For $\mu^2-\frac{3}{2}=\lambda\mu$, as $M_4$ converts into\\ $M_1$, we get the same stability as for $M_1$.\\$~~$\\ For $\lambda^2+3=\lambda\mu$, as $M_4$ converts into $M_2$ and $M_3$, \\we get the same stability as for $M_2$ and $M_3$.\\$~$\end{tabular}\\\hline \end{tabular} \end{table} Also note that in the hyperbolic case of $M_4$ the components $a,b,c$ and $d$ of the Jacobian matrix are very complicated, and from the eigenvalues it is very difficult to draw any conclusion about the stability; for this reason we skip the stability analysis of this case.
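The eigenvalue expressions for $M_4$ can at least be sanity-checked numerically against a direct computation on the $2\times 2$ block $[[a,b],[c,d]]$. In this sketch $\lambda=1$, $\mu=-2$ are arbitrary sample values satisfying the existence conditions of $M_4$, and the upper sign of the $\mp$ branches is taken (the product $bc$, and hence the eigenvalues, are the same for either sign).

```python
import numpy as np

lam, mu = 1.0, -2.0  # sample values satisfying the existence conditions of M4
d2 = (lam - mu)**2
s = np.sqrt(-1.5 - mu*(lam - mu))
a = -1.5/d2*(lam**2 + 3 - lam*mu)
b = -np.sqrt(1.5)*(3/d2 + 2)*s                   # upper sign of the -/+ branch
c = -np.sqrt(1.5)*((lam**2 + 3 - lam*mu)/d2)*s   # upper sign of the -/+ branch
d = -3/d2*((mu**2 - 1.5) - lam*mu)

disc = np.sqrt((a - d)**2 + 4*b*c)
formula = np.sort([(a + d - disc)/2, (a + d + disc)/2])
direct = np.sort(np.linalg.eigvals(np.array([[a, b], [c, d]])).real)
print(formula, direct)  # the two computations agree; for this sample one
                        # eigenvalue of each sign, i.e. a saddle
```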
\subsection{Model 5: Product of exponential and power-law potential and product of exponentially-dependent and power-law-dependent dark-matter particle mass \label{M5}} In this case the evolution equations of Section \ref{BES} can be written as the following autonomous system: \begin{eqnarray} x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda y^2-\frac{\lambda}{2}y^2z-\sqrt{\frac{3}{2}}\mu(1+x^2-y^2)-\frac{\mu}{2}z(1+x^2-y^2),\label{eqn80} \\ y'&=&\frac{3}{2}y(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda xy-\frac{\lambda}{2}xyz,\label{eqn81}\\ z'&=&-xz^2\label{eqn82} \end{eqnarray} To determine the critical points of the above autonomous system, we first equate the R.H.S. of (\ref{eqn82}) to $0$, which gives either $x=0$ or $z=0$. For $z=0$ the above autonomous system reduces to the autonomous system of Model 4, and we obtain results of the same type as for Model 4. When $x=0$, we have three physically meaningful critical points for $\mu\neq 0$ and $\lambda\neq 0$; for the other choices of $\mu$ and $\lambda$ we obtain, as in Model 1, results of similar type. The critical points are $N_1(0,0,-\sqrt{6})$, $N_2(0,1,-\sqrt{6})$ and $N_3(0,-1,-\sqrt{6})$, and all are hyperbolic in nature. As the $x$ and $y$ coordinates of these critical points are the same as those of $A_1$, $A_2$ and $A_3$, and the values of the cosmological parameters do not depend on the $z$ coordinate, we get the same values of the cosmological parameters as for $A_1$, $A_2$ and $A_3$ respectively, as presented in Table \ref{TI}.
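That $N_1$, $N_2$ and $N_3$ are indeed equilibria of $(\ref{eqn80}-\ref{eqn82})$ is easy to confirm numerically; the sketch below (with arbitrary sample values $\lambda=\mu=1$) evaluates the right-hand side at each point.

```python
import numpy as np

def rhs5(s, lam, mu):
    # right-hand side of the autonomous system (eqn80)-(eqn82) of Model 5
    x, y, z = s
    k = np.sqrt(1.5)
    xp = (-3*x + 1.5*x*(1 - x**2 - y**2) - k*lam*y**2 - lam/2*y**2*z
          - k*mu*(1 + x**2 - y**2) - mu/2*z*(1 + x**2 - y**2))
    yp = 1.5*y*(1 - x**2 - y**2) - k*lam*x*y - lam/2*x*y*z
    zp = -x*z**2
    return np.array([xp, yp, zp])

lam, mu = 1.0, 1.0  # arbitrary nonzero sample values
points = [np.array([0.0,  0.0, -np.sqrt(6)]),   # N1
          np.array([0.0,  1.0, -np.sqrt(6)]),   # N2
          np.array([0.0, -1.0, -np.sqrt(6)])]   # N3
residuals = [np.max(np.abs(rhs5(p, lam, mu))) for p in points]
print(residuals)   # each ~ 0, confirming the critical points
```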
\begin{center} $1.~Critical~Point~N_1$ \end{center} The Jacobian matrix corresponding to the autonomous system (\ref{eqn80}-\ref{eqn82}) at the critical point $N_1$ has three eigenvalues $\frac{3}{2}$, $-\frac{1}{4}\left(3+\sqrt{9+48\mu}\right)$ and $-\frac{1}{4}\left(3-\sqrt{9+48\mu}\right)$, and the corresponding eigenvectors are $[0,1,0]^T$, $\left[\frac{1}{24}\left(3+\sqrt{9+48\mu}\right),0,1\right]^T$ and $\left[\frac{1}{24}\left(3-\sqrt{9+48\mu}\right),0,1\right]^T$ respectively. As the critical point is hyperbolic in nature, we use the Hartman-Grobman theorem to analyze its stability. From the eigenvalues we conclude that the stability of the critical point $N_1$ depends on $\mu$: for $\mu<-\frac{9}{48}$ the last two eigenvalues are complex conjugates with negative real parts, while for $\mu\geq-\frac{9}{48}$ all eigenvalues are real.\par For $\mu<-\frac{9}{48}$, due to the negative real parts of the last two eigenvalues, the $xz$-plane is the stable subspace, and as the first eigenvalue is positive, the $y$-axis is the unstable subspace. Hence the critical point $N_1$ is a saddle-focus, i.e., unstable in nature. The phase portrait in the $xyz$ coordinate system is shown in FIG.\ref{focus_1}.\par \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{focus11} \caption{Phase portrait near the origin for the critical point $N_1$ in the $xyz$ coordinate system. This phase portrait is drawn for $\mu=-1$.} \label{focus_1} \end{figure} For $\mu\geq-\frac{9}{48}$, we always have at least one positive and at least one negative eigenvalue, and hence we conclude that the critical point $N_1$ is unstable due to its saddle nature.
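The saddle-focus behaviour can be cross-checked by differentiating the system numerically at $N_1$; the sketch below (sample values $\lambda=1$, $\mu=-1$, illustrative only) recovers the real eigenvalue $\frac{3}{2}$ together with the complex pair $-\frac{3}{4}\pm i\frac{\sqrt{39}}{4}$.

```python
import numpy as np

def rhs5(s, lam, mu):
    # right-hand side of the autonomous system (eqn80)-(eqn82) of Model 5
    x, y, z = s
    k = np.sqrt(1.5)
    xp = (-3*x + 1.5*x*(1 - x**2 - y**2) - k*lam*y**2 - lam/2*y**2*z
          - k*mu*(1 + x**2 - y**2) - mu/2*z*(1 + x**2 - y**2))
    yp = 1.5*y*(1 - x**2 - y**2) - k*lam*x*y - lam/2*x*y*z
    zp = -x*z**2
    return np.array([xp, yp, zp])

def num_jac(f, s, eps=1e-6):
    # central-difference approximation of the Jacobian of f at s
    n = len(s)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (f(s + e) - f(s - e)) / (2*eps)
    return J

lam, mu = 1.0, -1.0  # mu < -9/48: the saddle-focus regime
N1 = np.array([0.0, 0.0, -np.sqrt(6)])
ev = np.linalg.eigvals(num_jac(lambda s: rhs5(s, lam, mu), N1))
print(np.sort_complex(ev))
# one real eigenvalue 3/2 and a complex pair -3/4 +/- i sqrt(39)/4
```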
\begin{center} $2.~Critical~Points~N_2~\&~ N_3$ \end{center} The Jacobian matrix corresponding to the autonomous system $(\ref{eqn80}-\ref{eqn82})$ at the critical points $N_2$ and $N_3$ has three eigenvalues $-3$, $-\frac{1}{2}\left(3+\sqrt{9+12\lambda}\right)$ and $-\frac{1}{2}\left(3-\sqrt{9+12\lambda}\right)$, and the corresponding eigenvectors are $[0,1,0]^T$, $\left[\frac{1}{12}\left(3+\sqrt{9+12\lambda}\right),0,1\right]^T$ and $\left[\frac{1}{12}\left(3-\sqrt{9+12\lambda}\right),0,1\right]^T$ respectively. From the eigenvalues we conclude that the last two are complex conjugates when $\lambda<-\frac{3}{4}$, and all eigenvalues are real when $\lambda\geq-\frac{3}{4}$.\par For $\lambda<-\frac{3}{4}$, the last two eigenvalues are complex with negative real parts and the first eigenvalue is always negative. Hence, by the Hartman-Grobman theorem we conclude that the critical points $N_2$ and $N_3$ are both stable focus-nodes in this case. The phase portrait in the $xyz$-coordinate system is shown in FIG.\ref{focus_2}.\par \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{focus2} \caption{Phase portrait near the origin for the critical points $N_2$ and $N_3$ in the $xyz$ coordinate system. This phase portrait is drawn for $\lambda=-1$.} \label{focus_2} \end{figure} For $-\frac{3}{4}\leq\lambda<0$, all eigenvalues are negative. So, by the Hartman-Grobman theorem we conclude that the critical points $N_2$ and $N_3$ are both stable nodes in this case.\par For $\lambda>0$, we have two negative eigenvalues and one positive eigenvalue. Hence, by the Hartman-Grobman theorem we conclude that the critical points $N_2$ and $N_3$ are both saddle nodes and unstable in nature.\bigskip \section{Bifurcation Analysis by Poincar\'{e} index and Global Cosmological evolution \label{BAPGCE}} The flat potential plays a crucial role in obtaining the bouncing solution.
After the bounce, the flat potential naturally allows the universe to enter the slow-roll inflation regime, thereby making the bouncing universe compatible with observations.\par In Model 1 (\ref{M1}), for the inflationary scenario, we consider $\lambda$ and $\mu$ to be very small positive numbers, so that $V(\phi) \approx V_0$ and $M_{DM} \approx M_0$. Eqn. (\ref{eq11}) mainly regulates the flow along the $Z$-axis. Due to Eqn. (\ref{eq11}) the overall 3-dimensional phase space splits into two compartments, with the $ZY$-plane as the separatrix. In the right compartment, where $x>0$, we have $z' <0$, and in the left compartment $z'>0$; on the $ZY$-plane $z' \approx 0$. For $\lambda \neq 0$ and $\mu \neq 0$, all critical points are located on the $Y$-axis. As all cosmological parameters can be expressed in terms of $x$ and $y$, we inspect the vector field on the $XY$-plane closely. Due to Eqn. (\ref{eq4}), the viable phase-space region (say $S$) satisfies $y^2-x^2 \leqslant 1$, which is the region inside a hyperbola centered at the origin (FIG.\ref{hyperbola}). On the $XY$-plane $z' \approx 0$. So on the $XY$-plane, by the Hartman-Grobman theorem we can conclude that there are four hyperbolic sectors around $A_1$ ($\alpha$-limit set) and one parabolic sector around each of $A_2$ and $A_3$ ($\omega$-limit sets). Hence, by the Bendixson theorem, the index of $A_1|_{XY}$ is $-1$ and the index of $A_2|_{XY}$ and $A_3|_{XY}$ is $1$. If the initial position of the universe is in the left compartment and near the $\alpha$-limit set, then the universe remains in the left compartment and moves towards the $\omega$-limit set asymptotically at late times. A similar phenomenon happens in the right compartment. The universe experiences a fluid-dominated non-generic evolution near $A_1$ for $\mu>0$ and a generic evolution for $\mu<0$.
For a sufficiently flat potential, near $A_2$ and $A_3$, a scalar field dominated non-generic and generic evolution occurs for $\lambda>0$ and $\lambda<0$ respectively (see FIG. \ref{Model1}). \begin{figure}[h] \centering \includegraphics[width=.4\textwidth]{hyperbolic.pdf} \caption{Vector field on the projective plane obtained by identifying antipodal points of the disk.} \label{hyperbola} \end{figure} \begin{figure}[htbp!] \begin{subfigure}{0.34\textwidth} \includegraphics[width=.9\linewidth]{A1.png} \caption{} \label{fig:A1} \end{subfigure}% \begin{subfigure}{0.34\textwidth} \includegraphics[width=.9\linewidth]{A2.png} \caption{} \label{fig:A2} \end{subfigure}% \begin{subfigure}{.34\textwidth} \includegraphics[width=.9\linewidth]{A3.png} \caption{} \label{fig:A3} \end{subfigure} \caption{\label{Model1}\textit{Model 1}: Qualitative evolution of the physical variables $\omega_{total}$, $\omega_{\phi}$ and $q$ under perturbation of the parameters ($\lambda$ \& $\mu$) near the bifurcation values, for three sets of initial conditions. (a) Initial condition near the point $A_1$. (b) Initial condition near the point $A_2$. (c) Initial condition near the point $A_3$. We observe that $\omega_{total}\rightarrow -1$. At early or present times the scalar field may be in the phantom phase, but it is attracted to the de-Sitter phase.} \end{figure} The Poincar\'{e} index theorem \cite{0-387-95116-4} helps us determine the Euler--Poincar\'{e} characteristic, $\chi(S)=n+f-s$, where $n$, $f$, $s$ are the numbers of nodes, foci and saddles on $S$. Henceforward, ``index'' means the Poincar\'{e} index. So for the vector field of case-(i)$|_{XY\text{-plane}}$, $\chi(S)=1$.
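The index bookkeeping used throughout this section follows Bendixson's sector formula together with the Poincar\'{e} index theorem. A minimal sketch (function names are ours; nodes and foci carry index $+1$, saddles $-1$):

```python
def bendixson_index(elliptic, hyperbolic):
    """Bendixson's formula for the index of an isolated critical point
    of a planar vector field: i = 1 + (e - h)/2, where e and h are the
    numbers of elliptic and hyperbolic sectors."""
    return 1 + (elliptic - hyperbolic) / 2

def euler_characteristic(nodes, foci, saddles):
    """Poincare index theorem: the indices of the critical points
    (+1 for nodes and foci, -1 for saddles) sum to chi(S)."""
    return nodes + foci - saddles
```

For case (i), a critical point with four hyperbolic sectors has index $1+(0-4)/2=-1$, one with purely parabolic sectors has index $1$, and the indices sum to $\chi(S)=1$.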
This vector field induces a vector field on the projective plane: in the 3-dimensional phase space, if we consider the closed disk of radius one in the $XY$-plane centered at the origin, then identifying antipodal points yields the same vector field on the projective plane.\par For a $z=constant~(\neq 0)$ plane, the above characterization changes, as the vertical flow along the $Z$-axis regulates the character of the vector field. Using the Bendixson theorem \cite{0-387-95116-4} we can find the index of a nonhyperbolic critical point by restricting the vector field to a suitable two-dimensional subspace.\par If we restrict ourselves to the $XZ$-plane, $A_1$ is a saddle for $\mu > 0$. On the $XZ$-plane the index of $A_1$ is $-1$ for $\mu>0$, as four hyperbolic sectors are separated by two separatrices around $A_1$. For $\mu<0$, there is only one parabolic sector and the index is zero (FIG.\ref{A_1}). On the $YZ$-plane, $A_1$ swaps its index with that on the $XZ$-plane, depending on the sign of $\mu$.\par On the $uw$-plane, $A_2$ and $A_3$ have index $-1$ for $\lambda>0$ and $1$ for $\lambda < 0$. At $\lambda=0$, the index of $A_2$ is $0$ but the index of $A_3$ is $1$. On the $uv$-plane the index of $A_2$ or $A_3$ is $1$ and does not depend on $\lambda$. Indeed, on the $uw$-plane around $A_2$ the number of hyperbolic sectors is four and there is no elliptic sector, so the index of $A_2$ and $A_3$ $(origin)|_{uw~plane}/ _{vw~plane}$ is $-1$ for $\lambda>0$, while for $\lambda<0$ the index is $1$ as there are no hyperbolic or elliptic sectors.\par A set of non-isolated equilibrium points is said to be normally hyperbolic if the only eigenvalues with zero real parts are those whose corresponding eigenvectors are tangent to the set.
For cases (ii) to (iv), we get normally hyperbolic critical points, as the eigenvector $[0~ 0~ 1]^T$ (in the new $(u,v,w)$ coordinate system), corresponding to the only zero eigenvalue, is tangent to the line of critical points. The stability of a normally hyperbolic set can be completely classified by considering the signs of the eigenvalues in the remaining directions. So the character of the flow of the phase space for each $z=constant$ plane is identical to that on the $XY$-plane in the previous case. Thus the system (\ref{eq9}-\ref{eq11}) is structurally unstable \cite{0-387-95116-4} at $\lambda=0$ or $\mu=0$ or both. On the other hand, the potential changes its character from runaway to non-runaway as $\lambda$ crosses zero from positive to negative. Thus $\lambda=0$ and $\mu=0$ are the bifurcation values \cite{1950261}.\bigbreak Model 2 (\ref{M2}) contains five critical points $L_1-L_5$. For $\lambda>0$, the flow is unstable, and for $\lambda<0$ the flow on the center manifold is stable. Around $L_2$, the character of the vector field is the same as around $L_1$. For $\mu=\pm \sqrt{\frac{3}{2}}$, the flow on the center manifold at $L_3$ or $L_4$ depends on the sign of $\lambda$ (FIG.\ref{L_21} \& FIG.\ref{L_2_1_1}). On the other hand, for $\mu>\sqrt{\frac{3}{2}}$ or $\mu< -\sqrt{\frac{3}{2}}$, the flow on the center manifold does not depend on $\lambda$. For $\mu >0$, the flow on the center manifold at $L_5$ moves in the direction of increasing $z$; for $\mu <0$, it moves in the direction of decreasing $z$. The index of $L_1$ is the same as that of $A_2$. For $\mu=\pm \sqrt{\frac{3}{2}}$ and $\lambda=1$, the index of $L_2|_{XY~plane}$ is $-1$, as there are only four hyperbolic sectors. But for $\lambda=2$, there are two hyperbolic and one parabolic sectors, so the index is zero. The index of $L_3$ is the same as that of $L_2$. The index of $L_4$ on the $ZX$ or $XY$ plane is zero, as there are two hyperbolic and one parabolic sectors for each of $\mu>0$ and $\mu<0$.
So it is to be noted that, for $\lambda=0$ and $\mu=0, \pm \sqrt{\frac{3}{2}}$, the system is structurally unstable. \begin{figure}[htbp!] \begin{subfigure}{0.34\textwidth} \includegraphics[width=.9\linewidth]{L1.png} \caption{} \label{fig:L1} \end{subfigure}% \begin{subfigure}{0.34\textwidth} \includegraphics[width=.9\linewidth]{L2.png} \caption{} \label{fig:L2} \end{subfigure}% \begin{subfigure}{.34\textwidth} \includegraphics[width=.9\linewidth]{L3mun.png} \caption{} \label{fig:L3n} \end{subfigure} \begin{subfigure}{0.34\textwidth} \includegraphics[width=.9\linewidth]{L3mup.png} \caption{} \label{fig:L3p} \end{subfigure}% \begin{subfigure}{0.34\textwidth} \includegraphics[width=.9\linewidth]{L4mup.png} \caption{} \label{fig:L4p} \end{subfigure}% \begin{subfigure}{.34\textwidth} \includegraphics[width=.9\linewidth]{L5mup.png} \caption{} \label{fig:L5} \end{subfigure} \caption{\label{Model2}\textit{Model 2}: Some interesting qualitative evolution of the physical variables $\omega_{total}$, $\omega_{\phi}$ and $q$ under perturbation of the parameters ($\lambda$ \& $\mu$) near the bifurcation values, for six sets of initial conditions. (a) Initial position near the point $L_1$. (b) Initial position near the point $L_2$. (c) Initial position near the point $L_3$ with $\mu<-\sqrt{\frac{3}{2}}$. (d) Initial position near the point $L_3$ with $\mu>\sqrt{\frac{3}{2}}$. (e) Initial position near the point $L_4$ with $\mu>\sqrt{\frac{3}{2}}$. (f) Initial position near the point $L_5$ with $\mu>0$. We observe that $\omega_{total}\rightarrow -1$. At early or present times the scalar field may be in the phantom phase, but it is attracted to the de-Sitter phase except in (b) and (e).
In (e) the scalar field crosses the phantom boundary line and enters the phantom phase at late time, which would cause a Big Rip.} \end{figure} The universe experiences a scalar field dominated non-generic evolution near $L_1$ and $L_2$ for $\lambda>0$, and a scalar field dominated generic evolution for $\lambda<0$ or on the $z$-nullcline. Near $L_3$ and $L_4$, a scalar field dominated non-generic evolution of the universe occurs at $\mu \approx \pm \sqrt{\frac{3}{2}}$. At $\mu \approx 0$, a scaling non-generic evolution occurs near $L_5$ (see FIG.\ref{Model2}). \bigbreak Model 3 (\ref{M3}) contains three critical points $R_1-R_3$. $R_1$ is a saddle for all values of $\mu$. On the $xy$-plane the index of $R_1$ is the same as that of $A_1$. On the projection onto the $xy$-plane, $R_2$ and $R_3$ are stable nodes for all values of $\lambda$. On the center manifold at $R_2$ or $R_3$, the flow is in the direction of increasing $z$ for $\lambda>0$ and of decreasing $z$ for $\lambda<0$. On the $XZ$ or $YZ$ plane, the index of $R_2$ or $R_3$ is zero, as around each of them there are two hyperbolic and one parabolic sectors. Thus we note that, for $\mu=0$ and $\lambda=0$, the stability of the system bifurcates.\\ We observe that no scaling or tracking solutions exist in this specific model, unlike in quintessence theory. However, the critical points which describe the de Sitter solution do not exist in the case of quintessence with an exponential potential; the universe experiences a fluid dominated non-generic evolution near the critical point $R_1$ and a scalar field dominated non-generic evolution near the critical points $R_2$ and $R_3$. For a sufficiently flat potential, an early or present phantom/non-phantom universe is attracted to the $\Lambda$CDM cosmological model (see FIG. \ref{fig:Model3}).\bigbreak Model 4 (\ref{M4}) contains four critical points $M_1-M_4$.
$M_1-M_3$ are stable nodes for $\left(\lambda^2+3\right)>\lambda\mu$ (index 1) and saddle nodes (index zero) for $\left(\lambda^2+3\right)\leq\lambda\mu$; i.e., the stability of the system bifurcates at $\left(\lambda^2+3\right)=\lambda\mu$. Thus we find a generic evolution for $\left(\lambda^2+3\right)\neq \lambda\mu$ and a non-generic one otherwise. The kinetic dominated solution ($M_1$) and the scalar field dominated solutions ($M_2$ and $M_3$) are stable for $\left(\lambda^2+3\right)>\lambda\mu$. For the energy density near $M_2$ and $M_3$, we observe that at late times the scalar field dominates, $\Omega_X=\Omega_\phi \rightarrow 1$ and $\Omega_m \rightarrow 0$, while the equation of state parameter has the limit $\omega_{tot} \rightarrow -1$ for a sufficiently flat potential.\bigbreak Model 5 (\ref{M5}) contains three critical points $N_1$, $N_2$, $N_3$. For $\mu< -\frac{3}{16}$, the Shilnikov saddle index \cite{Shilnikov} of $N_1$ is $\nu_{N_1}=\frac{\rho_{N_1}}{\gamma_{N_1}}=0.5$ and the saddle value is $\sigma_{N_1}=-\rho_{N_1}+\gamma_{N_1}=0.75$. So the Shilnikov condition \cite{Shilnikov} is satisfied, as $\nu_{N_1}<1$ and $\sigma_{N_1}>0$. The second Shilnikov saddle value is $\sigma^{(2)}_{N_1}=-2\rho_{N_1}+\gamma_{N_1}=0$. So, by Shilnikov's theorem (Shilnikov, 1965) \cite{Shilnikov}, there are countably many saddle periodic orbits in a neighborhood of the homoclinic loop of the saddle-focus $N_1$. As $\nu_{N_1}$ is invariant under any choice of $\mu$, Shilnikov bifurcation does not appear. For $-\frac{3}{16}<\mu < 0$, the vector field near $N_1$ is saddle-like in character; likewise, $N_1$ is a saddle for $\mu>0$. So $\mu=0$ is a bifurcation value for the critical point $N_1$. Similarly, $\lambda=0$ is a bifurcation value for the critical points $N_2$ and $N_3$.
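The Shilnikov quantities quoted for $N_1$ can be checked directly: $\nu_{N_1}=0.5$ and $\sigma_{N_1}=0.75$ correspond to the rates $\rho_{N_1}=0.75$ and $\gamma_{N_1}=1.5$. A sketch (helper names are ours), assuming a 3D saddle-focus with eigenvalues $\gamma>0$ and $-\rho\pm i\omega$:

```python
def shilnikov_quantities(rho, gamma):
    """Saddle index, first and second saddle values for a saddle-focus
    with eigenvalues gamma > 0 and -rho +/- i*omega (rho, gamma > 0)."""
    nu = rho / gamma             # saddle index
    sigma = gamma - rho          # (first) saddle value
    sigma2 = gamma - 2 * rho     # second saddle value
    # Shilnikov's homoclinic-chaos condition: nu < 1 and sigma > 0
    chaotic = nu < 1 and sigma > 0
    return nu, sigma, sigma2, chaotic
```

With $\rho=0.75$, $\gamma=1.5$ this returns $\nu=0.5$, $\sigma=0.75$, $\sigma^{(2)}=0$ and confirms that the Shilnikov condition holds, as stated in the text.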
We observe scalar field dominated solutions near $N_2$ and $N_3$ which exist at the bifurcation value, i.e., for a sufficiently flat potential, and are attracted to the $\Lambda$CDM cosmological model. \\ \begin{figure}[htbp!] \begin{subfigure}{0.34\textwidth} \includegraphics[width=.9\linewidth]{R1.png} \caption{} \label{fig:R1} \end{subfigure}% \begin{subfigure}{0.34\textwidth} \includegraphics[width=.9\linewidth]{R2.png} \caption{} \label{fig:R2} \end{subfigure}% \begin{subfigure}{.34\textwidth} \includegraphics[width=.9\linewidth]{R3.png} \caption{} \label{fig:R3} \end{subfigure} \caption{Qualitative evolution of the physical variables $\omega_{total}$, $\omega_{\phi}$ and $q$ under perturbation of the parameters ($\lambda$ \& $\mu$) near the bifurcation values, for each of \textit{Model 3}, \textit{Model 4} and \textit{Model 5} and three sets of initial conditions. The initial positions in (a), (b) and (c) are near \underline{$R_1$, $R_2$ and $R_3$} (\underline{$M_1$, $M_2$ and $M_3$}/\underline{$N_1$, $N_2$ and $N_3$}) respectively. \label{fig:Model3} } \end{figure} \section{Brief discussion and concluding remarks \label{conclusion}} The present work deals with a detailed dynamical system analysis of an interacting DM and DE cosmological model in the background of FLRW geometry. The DE is chosen as a phantom scalar field with a self-interacting potential, while the varying-mass DM (the mass being a function of the scalar field) is chosen as dust.
The potential of the scalar field and the varying mass of DM are chosen in exponential or power-law form (or a product of them), and five possible combinations of them are studied.\bigbreak \textbf{Model 1: $V(\phi)=V_0\phi^{-\lambda}, M_{_{DM}}(\phi)=M_0\phi^{-\mu}$}\par For case (i), i.e., $\mu\neq 0, \lambda\neq 0$, there are three non-hyperbolic critical points $A_1$, $A_2$, $A_3$, of which $A_1$ corresponds to a DM dominated decelerating phase (dust era), while $A_2$ and $A_3$ are purely DE dominated and represent the $\Lambda$CDM model (i.e., de-Sitter phase) of the universe.\par For case (ii), i.e., $\mu\neq 0, \lambda=0$, there is one critical point and two spaces of critical points. The cosmological consequences of these critical points are similar to case (i).\par For case (iii), i.e., $\mu= 0, \lambda\neq 0$, there is one space of critical points and two distinct critical points. But as before, the cosmological analysis is identical to case (i).\par For the fourth case, i.e., $\mu=0, \lambda=0$, there are three spaces of critical points $(S_1,S_2,S_3)$ which are all non-hyperbolic in nature and are identical to the critical points in case (ii). Further, considering the vector fields in the $Z=constant$ plane, it is found that for the critical point $S_1$, every point on the $Z$-axis is a saddle node, while for the critical points $S_2$ and $S_3$ every point on the $Z$-axis is a stable star.\bigbreak \textbf{Model 2: $V(\phi)=V_0\phi^{-\lambda}, M_{_{DM}}(\phi)=M_1e^{-\kappa\mu\phi}$}\par The autonomous system for this model has five non-hyperbolic critical points $L_i$, $i=1,\ldots,5$. For $L_1$ and $L_2$, the cosmological model is completely DE dominated and describes cosmic evolution at the phantom barrier. The critical points $L_3$ and $L_4$ are DE dominated cosmological solutions ($\mu^2>3$) representing the $\Lambda$CDM model.
The critical point $L_5$ corresponds to a ghost (phantom) scalar field and describes the cosmic evolution in the phantom domain ($2\mu^2>3$).\bigbreak \textbf{Model 3: $V(\phi)=V_1e^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_0\phi^{-\mu}$}\par There are three non-hyperbolic critical points in this case. The first one (i.e., $R_1$) describes purely DM dominated cosmic evolution (the dust era), while the other two critical points (i.e., $R_2$, $R_3$) are fully dominated by DE and both describe cosmic evolution in the phantom era.\bigbreak \textbf{Model 4: $V(\phi)=V_1 e ^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_1e^{-\kappa\mu\phi}$}\par The autonomous system formed in this case has four critical points $M_i$, $i=1,\ldots,4$, which may be hyperbolic or non-hyperbolic depending on the parameters involved. The critical point $M_1$ represents DE as a ghost scalar field and describes cosmic evolution in the phantom domain. For the critical points $M_2$ and $M_3$, the cosmic evolution is fully DE dominated and also in the phantom era. The cosmic era corresponding to the critical point $M_4$ describes a scaling solution, where both DM and DE contribute to the cosmic evolution.\bigbreak \textbf{Model 5: $V(\phi)=V_2\phi^{-\lambda} e ^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_2\phi^{-\mu}e^{-\kappa\mu\phi}$}\par This model is very similar to either model $4$ or model $1$, depending on the choices of the dimensionless variables $x$ and $z$. For $z=0$, the model reduces to model $4$, while for $x=0$ the model is very similar to model $1$, and hence the cosmological analysis is very similar to that case.\par Finally, using the Poincar\'{e} index theorem, the Euler--Poincar\'{e} characteristic is determined for the bifurcation analysis of the above cases from the point of view of the cosmic evolution described by the equilibrium points. Lastly, the inflationary era of cosmic evolution is studied using bifurcation analysis.
\begin{acknowledgements} The author Soumya Chakraborty is grateful to CSIR, Govt. of India for awarding a Junior Research Fellowship (CSIR Award No: 09/096(1009)/2020-EMR-I) for the Ph.D. work. The author S. Mishra is grateful to CSIR, Govt. of India for awarding a Senior Research Fellowship (CSIR Award No: 09/096 (0890)/2017-EMR-I) for the Ph.D. work. The author Subenoy Chakraborty is thankful to the Science and Engineering Research Board (SERB) for awarding a MATRICS Research Grant (File No: MTR/2017/000407).\\ \end{acknowledgements} \bibliographystyle{unsrt}
\section{Introduction} In a cold dark matter (CDM) universe, dark matter haloes form hierarchically through accretion and merging (for a recent review, see \citet{Frenk2012}). Many rigorous and reliable predictions for the halo and subhalo mass functions in CDM are provided by numerical simulations \citep[e.g.][]{springel2009, Gao2011, Colin2000, Hellwing2015, Bose2016}. When small haloes merge into larger systems, they become subhaloes and suffer environmental effects such as tidal stripping, tidal heating and dynamical friction that tend to remove mass from them and even disrupt them \citep{Tormen1998, Taffoni2003, Diemand2007, Hayashi2003, gao2004, springel2009, xie2015}. At the same time, the satellite galaxies that reside in the subhaloes also experience environmental effects. Tidal stripping and ram pressure can remove the hot gas halo from satellite galaxies, which in turn cuts off their supply of cold gas and quenches star formation \citep{Balogh2000,Kawata2008,McCarthy2008,Wang2007,Guo2011,Wetzel2013}. Satellite galaxies in some cases also experience mass loss in the cold gas and stellar components during the interaction with their host haloes \citep{Gunn1972,Abadi1999,Chung2009, Mayer2001, Klimentowski2007, Kang2008, Chang2013}. Overall, subhaloes preferentially lose their dark mass rather than their luminous mass, because the mass distribution of satellite galaxies is much more concentrated than that of the dissipationless dark matter particles. Simulations predict that the mass loss of infalling subhaloes depends inversely on their halo-centric radius \citep[e.g.][]{Springel2001, DeLucia2004, gao2004, xie2015}. Thus, the halo mass to stellar mass ratio of satellite galaxies should increase as a function of halo-centric radius. Mapping the mass function of subhaloes from observations can provide important constraints on this galaxy evolution model.
In observations, dark matter distributions are best measured with gravitational lensing. For dark matter subhaloes, however, such observations are challenging due to their relatively low mass compared to that of the host dark matter halo. The presence of subhaloes can cause flux-ratio anomalies in multiply imaged lensing systems \citep{Mao1998,Metcalf2001,Mao2004,xu2009, Nierenberg2014}, perturb image locations and change image numbers \citep{kneib1996,Kneib2011}, and disturb the surface brightness of extended Einstein rings/arcs \citep{Koopmans2005,Vegetti2009a,Vegetti2009b,Vegetti2010,Vegetti2012}. However, due to the limited number of high quality images and the rareness of strong lensing systems, only a few subhaloes have been detected through strong lensing observations. Moreover, strong lensing effects can only probe the central regions of dark matter haloes \citep{Kneib2011}. Therefore, through strong lensing alone, it is difficult to draw a comprehensive picture of the co-evolution between subhaloes and galaxies. Subhaloes can also be detected in individual clusters through weak gravitational lensing, or weak lensing combined with strong lensing \citep[e.g.][]{Natarajan2007,Natarajan2009, Limousin2005,Limousin2007,Okabe2014}. In \citet{Natarajan2009}, Hubble Space Telescope images were used to investigate the subhalo masses of $L^*$ galaxies in the massive lensing cluster Cl0024+16 at z = 0.39, and to study the subhalo mass as a function of halo-centric radius. \citet{Okabe2014} investigated subhaloes in the very nearby Coma cluster with imaging from the Subaru telescope. The deep imaging and the large apparent size of the cluster allowed them to measure the masses of subhaloes selected by shear alone. They found 32 subhaloes in the Coma cluster and measured their mass function. However, this kind of study requires very high quality images of massive nearby clusters, making it hard to extend such studies to large numbers of clusters.
A promising alternative way to investigate the satellite-subhalo relation is through a stacking analysis of galaxy-galaxy lensing with large surveys. Different methods have been proposed in previous studies \citep[e.g.][]{Yang2006, Li2013,Pastor2011,Shirasaki2015}. Although the tangential shear generated by a single subhalo is small, by stacking thousands of satellite galaxies the statistical noise can be suppressed and the mean projected density profile around satellite galaxies can be measured. \citet{Li2014} selected satellite galaxies in the SDSS group catalog of \citet{Yang2007} and measured the weak lensing signal around these satellites with a lensing source catalog derived from the CFHT Stripe82 Survey \citep{Comparat2013}. This was the first galaxy-galaxy lensing measurement of subhalo masses in galaxy groups. However, the uncertainties of the measured subhalo masses were too large to investigate the satellite-subhalo relation as a function of halo-centric radius. In this paper, we apply the same method as \citet{Li2013,Li2014} to measure the galaxy-galaxy lensing signal for satellite galaxies in the SDSS redMaPPer cluster catalog \citep{Rykoff2014, Rozo2014}. Unlike the group catalog of \citet{Yang2007}, which is constructed using SDSS spectroscopic galaxies, the redMaPPer catalog relies on photometric cluster detections, allowing it to go to higher redshifts. As a result, there are more massive clusters in the redMaPPer cluster catalog. Therefore, we expect the signal-to-noise of the satellite galaxy lensing signals to be higher, enabling us to derive better constraints on subhalo properties. The paper is organized as follows. In section \ref{sec:data}, we describe the lens and source catalogs. In section \ref{sec:model}, we present our lens model. In section \ref{sec:res}, we show our observational results and our best fit lens model. The discussion and conclusions are presented in section \ref{sec:sum}.
Throughout the paper, we adopt a $\Lambda$CDM cosmology with parameters given by the WMAP 7-year data \citep{komatsu2010}: $\Omega_L=0.728$, $\Omega_{M}=0.272$ and $h \equiv H_0/(100 {\rm km s^{-1} Mpc^{-1}}) = 0.73$. In this paper, stellar mass is estimated assuming a \citet{Chabrier2003} IMF. \section{Observational Data} \label{sec:data} \subsection{Lens selection and stellar masses} \label{sec:lens_select} We use satellite galaxies in the redMaPPer clusters as lenses. The redMaPPer cluster catalog is extracted from photometric galaxy samples of the SDSS Data Release 8 \citep[DR8,][]{Aihara2011} using the red-sequence Matched-filter Probabilistic Percolation cluster finding algorithm \citep{Rykoff2014}. The redMaPPer algorithm uses the 5-band $(ugriz)$ magnitudes of galaxies with a magnitude cut $i<21.0$ over a total area of 10,000 deg$^2$ to photometrically detect galaxy clusters. redMaPPer uses a multi-color richness estimator $\lambda$, defined to be the sum of the membership probabilities over all galaxies. In this work, we use clusters with richness $\lambda>20$ and photometric redshift $z_{\rm cluster}<0.5$. In the overlapping region with the CFHT Stripe-82 Survey, we have a total of 634 clusters. For each redMaPPer cluster, member galaxies are identified according to their photometric redshift, color and cluster-centric distance. To reduce the contamination induced by fake member galaxies, we only use satellite galaxies with membership probability $P_{\rm mem}>0.8$. The redMaPPer cluster finder identifies 5 central galaxy candidates for each cluster, each with an estimate of the probability $P_{\rm cen}$ that the galaxy in question is the central galaxy of the cluster. We remove all central galaxy candidates from our lens sample. For more details about the redMaPPer cluster catalog, we refer the readers to \citet{Rykoff2014, Rozo2014}.
Stellar masses are estimated for member galaxies in the redMaPPer catalog using the Bayesian spectral energy distribution (SED) modeling code {\tt iSEDfit} \citep{Moustakas2013}. Briefly, {\tt iSEDfit} determines the posterior probability distribution of the stellar mass of each object by marginalizing over the star formation history, stellar metallicity, dust content, and other physical parameters which influence the observed optical/near-infrared SED. The input data for each galaxy include: the SDSS $ugriz$ {\tt model} fluxes scaled to the $r$-band {\tt cmodel} flux; the 3.4- and 4.6-$\micron$ ``forced'' WISE \citep{Wright2010} photometry from \citet{Lang2014}; and the spectroscopic or photometric redshift for each object inferred from redMaPPer. We adopt delayed, exponentially declining star formation histories with random bursts of star formation superposed, the Flexible Stellar Population Synthesis (FSPS) model predictions of \citet{Conroy2009, Conroy2010}, and the \citet{Chabrier2003} initial mass function (IMF) over 0.1--100 $M_{\odot}$. For reference, adopting the \citet{Salpeter1955} IMF would yield stellar masses which are on average 0.25 dex (a factor of 1.8) larger. We apply a stellar mass cut of $M_{\rm star}>10^{10} \rm {h^{-1}M_{\odot}}$ to our satellite galaxy sample. In Fig.\ref{fig:mz}, we show the $M_{\rm star}$--$z_l$ distribution of our lens sample, where $z_l$ is the photometric redshift of the satellite galaxy assigned by the redMaPPer algorithm. The low stellar mass satellite galaxies are incomplete at higher redshift, but this does not affect the conclusions of this paper. \begin{figure} \includegraphics[width=0.4\textwidth]{fig1.pdf} \caption{The $M_{\rm star}$--$z_l$ distribution of lens galaxies, where $z_l$ is the photometric redshift, and $M_{\rm star}$ is in units of $M_{\odot}$.
} \label{fig:mz} \end{figure} \subsection{The Source Catalog} The source catalog is measured from the Canada--France--Hawaii Telescope Stripe 82 Survey (CS82), an $i$-band imaging survey covering the SDSS Stripe82 region. With excellent seeing conditions --- FWHM between 0.4 and 0.8 arcsec --- the CS82 survey reaches a depth of $i_{\rm AB}\sim24.0$. The survey contains a total of 173 tiles, 165 of which are from CS82 observations and 8 from CFHT-LS Wide \citep{Erben2013}. The CS82 fields were observed in four dithered observations with 410s exposure each. The 5$\sigma$ limiting magnitude is $i_{\rm AB}\sim 24.0$ in a 2 arcsec diameter aperture. Each science image is supplemented by a mask, indicating regions within which accurate photometry/shape measurements of faint sources cannot be performed, e.g. due to extended haloes from bright stars. According to \citet{Erben2013}, most science analyses are safe with sources with MASK$\le1$. After applying all the necessary masks and removing overlapping regions, the effective survey area reduces from 173 deg$^2$ to 129.2 deg$^2$. We also require source galaxies to have FITCLASS $= 0$, where FITCLASS is the flag that describes the star/galaxy classification. Source galaxy shapes are measured with the lensfit method \citep{Miller2007, Miller2013}, closely following the procedure in \citet{Erben2009, Erben2013}. The shear calibration and systematics of the lensfit pipeline are described in detail in \citet[][]{Heymans2012}. The specific procedures applied to the CS82 imaging are described in Erben et al. (2015, in preparation). Since the CS82 survey only provides $i$-band images, the CS82 collaboration derived source photometric redshifts using the $ugriz$ multi-color data from the SDSS co-add \citep{Annis2014}, which reaches roughly 2 magnitudes deeper than the single epoch SDSS imaging.
The photometric redshifts (photo-$z$) of the background galaxies were estimated using a Bayesian photo-$z$ code \citep[BPZ,][]{Benitez2000, Bundy2015}. The effective weighted source galaxy number density is 4.5 per arcmin$^{2}$. Detailed systematic tests for this weak lensing catalog are described in Leauthaud et al. 2015 (in prep). \subsection{Lensing Signal Computation} In a galaxy-galaxy lensing analysis, the excess surface mass density, $\Delta\Sigma$, is inferred by measuring the tangential shear $\gamma_t(R)$: \begin{equation}\label{eq:ggl} \Delta\Sigma(R)=\gamma_t(R)\Sigma_{\rm crit}={\overline\Sigma}(<R)-\Sigma(R)\, , \end{equation} where ${\overline\Sigma}(<R)$ is the mean surface mass density within $R$, $\Sigma(R)$ is the average surface density at the projected radius $R$, and $\Sigma_{\rm crit}$ is the lensing critical surface density \begin{equation} \Sigma_{\rm crit}=\frac{c^2}{4\pi G}\frac{D_s}{D_l D_{ls}}\, , \end{equation} where $D_{ls}$ is the angular diameter distance between the lens and the source, and $D_l$ and $D_s$ are the angular diameter distances from the observer to the lens and to the source, respectively. We select satellite galaxies as lenses and stack lens-source pairs in physical radial distance $R$ bins from $0.04$ to $1.5 {\rm h^{-1}Mpc}$. To avoid contamination from foreground galaxies, we remove lens-source pairs with $z_{s}-z_{l} < 0.1$, where $z_l$ and $z_s$ are the lens and source redshifts, respectively. We have also tested the robustness of our results by varying the selection criteria for source galaxies. We find that selecting lens-source pairs with $z_{s}-z_{l} > 0.05$ or $z_{s}-z_{l} > 0.15$ changes the final lensing signal by less than 7\%, well below our final errors.
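As a concrete illustration, $\Sigma_{\rm crit}$ can be evaluated for the flat $\Lambda$CDM parameters adopted above. The sketch below (ours, not the pipeline's code) computes comoving distances by simple trapezoidal integration and uses the flat-space relation for $D_{ls}$; the constants are approximate:

```python
import numpy as np

C = 299792.458   # speed of light [km/s]
G = 4.301e-9     # Newton's constant [Mpc (km/s)^2 / M_sun]

def comoving_distance(z, om=0.272, h=0.73, n=2048):
    """Line-of-sight comoving distance [Mpc] in flat LCDM."""
    zz = np.linspace(0.0, z, n)
    inv_h = 1.0 / (100.0 * h * np.sqrt(om * (1 + zz) ** 3 + (1 - om)))
    # trapezoidal rule for integral of c/H(z) dz
    return C * float(np.sum((inv_h[1:] + inv_h[:-1]) * np.diff(zz)) / 2)

def sigma_crit(zl, zs):
    """Critical surface density [M_sun / Mpc^2] for a lens-source pair."""
    chi_l, chi_s = comoving_distance(zl), comoving_distance(zs)
    d_l, d_s = chi_l / (1 + zl), chi_s / (1 + zs)   # angular diameter distances
    d_ls = (chi_s - chi_l) / (1 + zs)               # valid in flat space
    return C ** 2 / (4 * np.pi * G) * d_s / (d_l * d_ls)
```

As expected, $\Sigma_{\rm crit}$ grows as the source redshift approaches the lens redshift, which is why nearby sources carry little lensing weight.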
For a given set of lenses, $\Delta\Sigma(R)$ is estimated using \begin{equation} \Delta\Sigma(R)=\frac{\sum_{l}\sum_{s}w_{ls}\gamma_t^{ls}\Sigma_{\rm crit}(z_l,z_s)}{\sum_{l}\sum_{s}w_{ls}}\, , \end{equation} where \begin{equation} w_{ls}=w_n\Sigma_{\rm crit}^{-2}(z_l,z_s)\, , \end{equation} and $w_n$ is a weight factor defined by Eq.\, (8) in \citet{Miller2013}, introduced to account for the intrinsic ellipticity distribution and shape measurement uncertainties. In the lensfit pipeline, a calibration factor for the multiplicative error $m$ is estimated for each galaxy based on its signal-to-noise ratio and size. Following \citet{Miller2013}, we account for these multiplicative errors in the stacked lensing signal through the correction factor \begin{equation} 1+K(R)=\frac{\sum_{l}\sum_{s} w_{ls}(1+m_s)}{\sum_{l}\sum_{s} w_{ls}}\, . \end{equation} The corrected lensing measurement is then \begin{equation} \Delta\Sigma(R)^{\rm corrected}=\frac{\Delta\Sigma(R)}{1+K(R)}\, . \end{equation} In this paper, we stack the tangential shear around satellite galaxies binned according to their projected halo-centric distance $r_p$, and fit the galaxy-galaxy lensing signal to obtain the subhalo mass of the satellite galaxies. We describe our theoretical lens model below. \section{The Lens Model} \label{sec:model} The surface density around a lens galaxy, $\Sigma(R)$, can be written as \begin{equation}\label{xi_gm} \Sigma(R)=\int_{0}^{\infty}\rho_{\rm g, m} \left(\sqrt{R^2+\chi^2} \right)\, {\rm d}\chi\, ; \end{equation} and the mean surface density within radius $R$ is \begin{equation}\label{xi_gm_in} \Sigma(< R)=\frac{2}{R^2} \int_0^{R} \Sigma(u) \, u \, {\rm d}u\, , \end{equation} where $\rho_{\rm g, m}$ is the density profile around the lens, and $\chi$ is the physical distance along the line of sight.
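The stacking estimator and its multiplicative-bias correction can be sketched in a few lines. This is a minimal array-based illustration (function and variable names are ours), operating on per-pair quantities:

```python
import numpy as np

def stacked_delta_sigma(gamma_t, sig_crit, w_n, m):
    """Weighted stack of Delta Sigma over lens-source pairs in one R bin,
    including the 1+K multiplicative-bias correction of Miller et al. (2013).

    gamma_t  : tangential shears of the pairs
    sig_crit : critical surface densities Sigma_crit(z_l, z_s)
    w_n      : lensfit shape weights
    m        : per-source multiplicative calibration factors
    """
    w = w_n / sig_crit ** 2                               # w_ls = w_n * Sigma_crit^-2
    raw = np.sum(w * gamma_t * sig_crit) / np.sum(w)      # uncorrected estimator
    one_plus_k = np.sum(w * (1 + m)) / np.sum(w)          # 1 + K(R)
    return raw / one_plus_k                               # corrected Delta Sigma
```

With all $m_s=0$ the correction factor is unity and the raw weighted estimator is recovered.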
The excess surface density around a satellite galaxy is composed of three components: \begin{equation} \Delta\Sigma(R)=\Delta\Sigma_{\rm sub}(R)+\Delta\Sigma_{\rm host}(R, r_p)+\Delta\Sigma_{\rm star}(R)\, , \end{equation} where $\Delta\Sigma_{\rm sub}(R)$ is the contribution of the subhalo in which the satellite galaxy resides, $\Delta\Sigma_{\rm host}(R, r_p)$ is the contribution from the host halo of the cluster/group, with $r_p$ the projected distance from the satellite galaxy to the center of the host halo, and $\Delta\Sigma_{\rm star}(R)$ is the contribution from the stellar component of the satellite galaxy. We neglect the two-halo term, the contribution from other haloes along the line of sight, because it is only important at $R>3 {\rm h^{-1}Mpc}$ for clusters \citep[see][]{Shan2015}. At small scales, where the subhalo term dominates, the contribution of the two-halo term is at least an order of magnitude smaller than the subhalo term \citep{Li2009}. \subsection{Host halo contribution} We assume that host haloes are centered on the central galaxies of clusters, with a density profile following the NFW \citep{NFW97} form: \begin{equation} \rho_{\rm host}(r)=\frac{\rho_{\rm 0,host}}{(1+r/r_{\rm s,host})^2(r/r_{\rm s,host})} \,, \end{equation} where $r_{\rm s,host}$ is the characteristic scale of the halo and $\rho_{\rm 0,host}$ is a normalization factor. Given the mass of a dark matter halo, its profile then only depends on the concentration parameter $c\equiv r_{\rm 200}/r_{\rm s,host}$, where $r_{\rm 200}$ is the radius within which the average density of the halo equals 200 times the critical density of the universe, $\rho_{\rm crit}$. The halo mass $M_{\rm 200}$ is defined as $M_{\rm 200}\equiv 4\pi/3 \, r_{\rm 200}^3(200 \rho_{\rm crit})$. Various fitting formulae for mass-concentration relations have been derived from N-body numerical simulations \citep[e.g.][]{Bullock2001,Zhao2003,Dolag2004,Maccio2007,Zhao2009, Duffy2008, Neto2007}.
These studies find that the concentration decreases with increasing halo mass. Weak lensing observations also recover this trend, but the measured amplitude of the mass-concentration relation is slightly smaller than that found in simulations \citep[e.g.][]{Mandelbaum2008, Miyatake2013, Shan2015}. Since there is almost no degeneracy between the subhalo mass and the concentration \citep{Li2013,George2012}, we expect that the exact choice of the mass-concentration relation should not have a large impact on our conclusions. Throughout this paper, we adopt the mass-concentration relation from \citet{Neto2007}: % \begin{equation} c=4.67(M_{\rm 200}/10^{14} \rm {h^{-1}M_{\odot}})^{-0.11}\, . \end{equation} % We stack satellite galaxies with different halo-centric distances $r_p$ and in host haloes of different masses. Thus the lensing contribution from the host halo is an average of $\Delta{\Sigma}_{\rm host}$ over $r_p$ and host halo mass $M_{\rm 200}$. The host halo profile is modeled as follows. For each cluster, we estimate its mass via the richness-mass relation of \citet{Rykoff2012}: \begin{equation} \label{eq:mass_rich} \ln{\left( \frac{M_{\rm 200}}{h_{\rm 70}^{-1}10^{14}M_{\odot}} \right)} = 1.48 + 1.06\ln(\lambda/60)\, . \end{equation} In the redMaPPer catalog, each cluster has five possible central galaxies, each with probability $P_{\rm cen}$. We assume that the average $\Delta\Sigma$ contribution from the host halo to a satellite can be written as: \begin{equation} \Delta\bar{\Sigma}_{\rm host}(R)=\frac{A_0}{N_{\rm sat}} \sum_{i}^{N_{\rm sat}}\sum_{j}^5\Delta\Sigma_{\rm host}(R|r_{\rm p, j}, M_{\rm 200}) P_{\rm cen, j}\, , \end{equation} where $r_{\rm p, j}$ is the projected distance between the satellite and the $j$th candidate center of the host cluster, $P_{\rm cen, j}$ is the probability that the $j$th candidate is the true center, and $N_{\rm sat}$ is the number of stacked satellite galaxies. $A_0$ is the only free parameter of the host halo contribution model.
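Both scaling relations used here are simple power laws, and can be sketched as follows. The sketch ignores, for simplicity, the distinction between the $h_{70}$ convention of the richness-mass relation and the $h$ convention of the concentration relation; chaining the two functions without that unit conversion is an illustrative shortcut:

```python
import numpy as np

def m200_from_richness(lam):
    """Mean M200, in units of 1e14 h70^-1 Msun, from the richness-mass
    relation ln(M200/1e14) = 1.48 + 1.06 ln(lambda/60) (Rykoff et al. 2012)."""
    return np.exp(1.48 + 1.06 * np.log(lam / 60.0))

def concentration(m200):
    """Mass-concentration relation c = 4.67 (M200 / 1e14 h^-1 Msun)^-0.11
    (Neto et al. 2007); m200 is in units of 1e14 h^-1 Msun."""
    return 4.67 * m200 ** -0.11
```

At the pivot values ($\lambda=60$, $M_{\rm 200}=10^{14}$), the relations return $e^{1.48}\approx 4.39$ and $c=4.67$ respectively, and the concentration decreases with mass as the text describes.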
It describes an overall adjustment of the lensing amplitude. If the richness-mass relation were perfect, the best-fit $A_0$ would be close to unity. Note that the subhalo mass determination is robust against variations of the normalization of the richness-mass relation: if we decrease the normalization in Eq.~\ref{eq:mass_rich} by 20\%, the best-fit subhalo mass changes only by 0.01 dex, which is at least 15 times smaller than the $1\sigma$ uncertainties of $M_{\rm sub}$ (see Table~\ref{tab:para_nfw}). \subsection{Satellite contribution} In numerical simulations, subhalo density profiles are found to be truncated in the outskirts \citep[e.g.][]{Hayashi2003,springel2009, gao2004, xie2015}. In this work, we use a truncated NFW profile \citep[][tNFW hereafter]{Baltz2009, Oguri2011} to describe the subhalo mass distribution: \begin{equation}\label{eq:rhosub} \rho_{\rm sub}(r)=\frac{\rho_{0}}{(r/r_s)(1+r/r_s)^2}\left(\frac{r_t^2}{r^2+r_t^2}\right)^2 \,, \end{equation} where $r_t$ is the truncation radius of the subhalo, $r_s$ is the characteristic radius of the tNFW profile, and $\rho_0$ is the normalization. The mass enclosed within radius $r=x r_s$, with $x\equiv r/r_s$, can be written as: \begin{eqnarray} M(<x)&=&4\pi\rho_0 r_s^3 \frac{\tau^2}{2(\tau^2+1)^3(1+x)(\tau^2+x^2)}\nonumber\\ &&\hspace*{-16mm}\times\Bigl[(\tau^2+1)x\left\{x(x+1)-\tau^2(x-1)(2+3x) -2\tau^4\right\}\nonumber\\ &&\hspace*{-16mm}+\tau(x+1)(\tau^2+x^2)\left\{2(3\tau^2-1) {\rm arctan}(x/\tau)\right.\nonumber\\ &&\hspace*{-16mm}\left.+\tau(\tau^2-3)\ln(\tau^2(1+x)^2/(\tau^2+x^2))\right\} \Bigr], \label{eq:mbmo_nodim} \end{eqnarray} where $\tau\equiv r_t/r_s$. We define the subhalo mass $M_{\rm sub}$ to be the total mass within $r_t$. Given $M_{\rm sub}$, $r_s$ and $r_t$, the tangential shear $\gamma_t$ can be derived analytically \citep[see the appendix of][]{Oguri2011}.
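A direct transcription of Eq.~(\ref{eq:mbmo_nodim}) is given below, in code units ($\rho_0=r_s=1$, an illustrative assumption). A useful sanity check on the expression is its $\tau\to\infty$ limit, which must reduce to the untruncated NFW enclosed mass $4\pi\rho_0 r_s^3\left[\ln(1+x)-x/(1+x)\right]$:

```python
import numpy as np

def tnfw_enclosed_mass(x, tau, rho0=1.0, rs=1.0):
    """Mass enclosed within r = x * rs for the truncated NFW profile,
    with tau = r_t / r_s (transcription of the analytic formula above)."""
    t2 = tau * tau
    pref = 4.0 * np.pi * rho0 * rs ** 3 * t2 / (
        2.0 * (t2 + 1.0) ** 3 * (1.0 + x) * (t2 + x * x))
    part1 = (t2 + 1.0) * x * (
        x * (x + 1.0) - t2 * (x - 1.0) * (2.0 + 3.0 * x) - 2.0 * t2 * t2)
    part2 = tau * (x + 1.0) * (t2 + x * x) * (
        2.0 * (3.0 * t2 - 1.0) * np.arctan(x / tau)
        + tau * (t2 - 3.0) * np.log(t2 * (1.0 + x) ** 2 / (t2 + x * x)))
    return pref * (part1 + part2)

def nfw_enclosed_mass(x, rho0=1.0, rs=1.0):
    """Untruncated NFW enclosed mass, the tau -> infinity limit."""
    return 4.0 * np.pi * rho0 * rs ** 3 * (np.log(1.0 + x) - x / (1.0 + x))
```

For finite $\tau$ the truncation factor suppresses the density everywhere, so the tNFW enclosed mass is strictly below the NFW one, which gives a second quick check.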
Previous studies have sometimes used instead the pseudo-isothermal elliptical mass distribution (PIEMD) model of \citet{Kassiola1993} for modeling the mass distribution around galaxies \citep[e.g.][]{Limousin2009,Natarajan2009,Kneib2011}. The surface density of the PIEMD model can be written as: \begin{equation}\label{eq:PIEMD} \Sigma(R)=\frac{\Sigma_0 R_0}{1-R_0/R_t}\left(\frac{1}{\sqrt{R_0^2+R^2}} - \frac{1}{\sqrt{R_t^2+R^2}} \right)\, , \end{equation} where $R_0$ is the core radius, $R_t$ is the truncation radius, and $\Sigma_0$ is a characteristic surface density. The projected mass enclosed within radius $R$ is \begin{equation} M(<R)=\frac{2\pi \Sigma_0 R_0 R_t}{R_t-R_0}\left[ \sqrt{R_0^2+R^2} - \sqrt{R_t^2+R^2}+(R_t-R_0)\right]\, , \end{equation} and the subhalo mass $M_{\rm sub}$ is defined as the enclosed mass within $R_t$. In this paper, we fit the data with both the tNFW and the PIEMD model. Finally, the lensing contribution from the stellar component is usually modeled as a point mass: \begin{equation} \Delta\Sigma_{\rm star}(R)=\frac{ M_{\rm star}}{\pi R^2}\,, \end{equation} where $M_{\rm star}$ is the total stellar mass of the galaxy. \section{Results} \label{sec:res} \subsection{Dependence on the projected halo-centric radius} To derive the subhalo parameters, we calculate the $\chi^2$ defined as: \begin{equation} \chi^{2}=\sum_{ij}\left[\Delta\Sigma(R_i)-\Delta\Sigma^{\rm obs}(R_i)\right]\widehat{C^{-1}_{ij}}\left[\Delta\Sigma(R_j)-\Delta\Sigma^{\rm obs}(R_j)\right]\,, \label{chi2} \end{equation} where $\Delta\Sigma(R_i)$ and $\Delta\Sigma^{\rm obs}(R_i)$ are the model and the observed lensing signals in the $i$th radial bin, and $C_{ij}$ is the covariance matrix of the measurement errors between different radial bins. Even if the ellipticities of different sources are independent, the off-diagonal terms of the covariance matrix can still be non-zero, because some source galaxies are used more than once \citep[e.g.][]{Han2015}.
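The $\chi^2$ of Eq.~(\ref{chi2}), the Gaussian likelihood built from it, and a covariance matrix estimated by bootstrap-resampling equal-area sub-regions of the survey can be sketched as follows; array shapes and names are illustrative assumptions, and the equal weighting of sub-regions is a simplification:

```python
import numpy as np

def chi2(model, obs, cov):
    """chi^2 = (model - obs)^T C^{-1} (model - obs) over the radial bins."""
    d = np.asarray(model, float) - np.asarray(obs, float)
    return float(d @ np.linalg.solve(np.asarray(cov, float), d))

def log_likelihood(model, obs, cov):
    """ln L = -chi^2 / 2, up to an additive constant."""
    return -0.5 * chi2(model, obs, cov)

def bootstrap_covariance(region_signals, n_boot=3000, seed=0):
    """Covariance of the stacked signal from bootstrap resampling of
    equal-area sub-regions; region_signals has shape (n_regions, n_bins)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(region_signals, float)
    n = x.shape[0]
    # Each bootstrap realization resamples the sub-regions with replacement.
    means = np.array([x[rng.integers(0, n, n)].mean(axis=0)
                      for _ in range(n_boot)])
    return np.cov(means, rowvar=False)
```

With the identity covariance, `chi2` reduces to the ordinary sum of squared residuals, which makes the helper easy to verify.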
The covariance matrix can be estimated with the bootstrap method using the survey data themselves \citep{Mandelbaum2005}. We divide the CS82 survey area into 45 equal-area sub-regions, generate 3000 bootstrap samples by resampling these sub-regions, and calculate the covariance matrix from the bootstrap samples. The likelihood function is then given by: \begin{equation} L\propto \exp\left(-\frac{1}{2}\chi^2\right). \label{likelihood} \end{equation} We select satellite galaxies as described in Section~\ref{sec:lens_select} and measure the stacked lensing signal around satellites in three $r_p$ bins: $0.1<r_p<0.3$, $0.3<r_p<0.6$, and $0.6<r_p<0.9$ (in units of ${\rm h^{-1}Mpc}$). For each bin, we fit the stacked lensing signal with a Markov Chain Monte Carlo (MCMC) method. For the tNFW subhalo case, we have four free parameters: $M_{\rm sub}$, the subhalo mass; $r_s$, the tNFW scale radius; $r_t$, the tNFW truncation radius; and $A_0$, the normalization factor of the host halo lensing contribution. We adopt flat priors with broad boundaries for these model parameters. We set the upper boundaries for $r_t$ and $r_s$ to the virial radius and the scale radius of an NFW halo of $10^{13}\rm {h^{-1}M_{\odot}}$, because the subhalo masses of satellite galaxies in clusters are expected to be much smaller than $10^{13}\rm {h^{-1}M_{\odot}}$ \citep{Gao2012}. For the PIEMD case, we also have four free parameters: $M_{\rm sub}$, $R_0$, $R_t$ and $A_0$. We set the upper boundary of $R_t$ to be the same as that of $r_t$ in the tNFW case, and the upper boundary of $R_0$ to 20 kpc/h, which is much higher than the typical size of $R_0$ in observations \citep{Limousin2005, Natarajan2009}. We believe our choice of priors is conservative. The detailed choices of priors are listed in Table~\ref{tab:prior}. \begin{table} \begin{center} \caption{Flat priors for model parameters.
$M_{\rm sub}$ is in units of $\rm {h^{-1}M_{\odot}}$ and distances are in units of ${\rm h^{-1}Mpc}$.} \begin{tabular}{|c|c|c|} \hline &&\\ &lower bound& upper bound \\ &&\\ \hline $A_{\rm 0}$ & 0.3& 2 \\ \hline $\log{M_{\rm sub}} $ & 9 & 13 \\ \hline $r_t$ (tNFW) & 0 & 0.35 \\ \hline $r_s$ (tNFW) & 0& 0.06 \\ \hline $R_0$ (PIEMD) & 0 & 0.02\\ \hline $R_t$ (PIEMD) & 0 & 0.35\\ \hline \end{tabular} \label{tab:prior} \end{center} \end{table} In Fig.~\ref{fig:gglensing}, we show the observed galaxy-galaxy lensing signal. Red dots with error bars represent the observational data. The vertical error bars show $1\sigma$ errors estimated by bootstrap resampling of the lens galaxies. Horizontal error bars show the range of each radial bin. The lensing signals clearly show the characteristic shape described in figure 2 of \citet{Li2013}. The lensing signal from the subhalo term dominates the central part. The contribution from the host halo, on the other hand, is nearly zero on small scales and decreases to negative values at intermediate scales, because $\Sigma_{\rm host}(R)$ becomes increasingly large compared to $\Sigma_{\rm host}(<R)$ at intermediate $R$. At radii where $R>r_p$, $\Delta\Sigma_{\rm host}(R)$ increases with $R$ again, and at large scales where $R\gg r_p$, $\Delta\Sigma(R,r_p)$ approaches $\Delta\Sigma_{\rm host}(R,0)$. The solid lines show the best-fit results with the tNFW model. Dashed lines of different colors show the contributions from the different components. The best-fit model parameters are listed in Table~\ref{tab:para_nfw}. Note that the value of the first point in the $[0.6, 0.9]$ $r_p$ bin is very low, which may be due to the relatively small number of source galaxies in the innermost radial bin. We exclude this point when deriving our best-fit model. For comparison, we also show the best-fit parameters including this point in Table~\ref{tab:para_nfw}.
Fig.~\ref{fig:2dcontour} shows an example of the MCMC posterior distribution of the tNFW model parameters for satellites in the $[0.3, 0.6]{\rm h^{-1}Mpc}$ $r_p$ bin. The constraints on the subhalo mass $\log{M_{\rm sub}}$ are tighter than those in \citet{Li2014} ($\sim\pm 0.7$). The amplitude $A_0$ is slightly smaller than unity, implying that the clusters may be less massive than predicted by the mass-richness relation. However, we are still unable to obtain significant constraints on the structural parameters of the subhalos. In principle, the density profile cut-off caused by tidal effects can be measured with the tangential shear. However, the galaxy-galaxy lensing measurement stacks many satellites, which smears the signal. With the data used here, the tidal radius, treated as a free parameter, is not constrained. Some galaxy-galaxy lensing investigations have introduced additional assumptions in order to estimate the tidal radius. For example, \citet{Gillis2013b} assumed that galaxies of the same stellar mass but in different environments have similar subhalo density profiles except for the cut-off radius. With this additional assumption, they obtained $r_{\rm tidal}/r_{200}=0.26\pm0.14$. During the review process of our paper, \citet{Sifon2015} posted a similar galaxy-galaxy lensing measurement of satellite galaxies using the KiDS survey, and they also found that their data cannot distinguish models with or without tidal truncation. In Fig.~\ref{fig:piemd}, we show the fitting results of the PIEMD model. The best-fit parameters are listed in Table~\ref{tab:para_piemd}. For reference, we also over-plot the theoretical prediction of the best-fit tNFW model with blue dashed lines. Both models provide a good description of the data. The best-fit $M_{\rm sub}$ and $A_0$ of the two models agree well with each other, demonstrating the robustness of our results.
In numerical simulations, subhalos that are close to host halo centers are subject to strong mass stripping \citep{Springel2001, DeLucia2004, gao2004, xie2015}. The mass loss fraction of subhalos increases from $\sim 30\%$ at $r_{\rm 200}$ to $\sim 90\%$ at 0.1$r_{\rm 200}$ \citep{gao2004,xie2015}. From the galaxy-galaxy lensing in this work, we also find that $M_{\rm sub}$ in the $[0.6, 0.9]$ $r_p$ bin is much larger than that in the $[0.1, 0.3]$ $r_p$ bin (by a factor of 18). In Fig.~\ref{fig:ML}, we plot the subhalo mass-to-stellar mass ratio for the three $r_p$ bins. The $M_{\rm sub}/M_{\rm star}$ ratio in the $[0.6, 0.9]{\rm h^{-1}Mpc}$ $r_p$ bin is about 12 times larger than that of the $[0.1, 0.3]{\rm h^{-1}Mpc}$ bin. If we include the first point in the $[0.6, 0.9]{\rm h^{-1}Mpc}$ bin, the $M_{\rm sub}/M_{\rm star}$ of the tNFW model decreases by 40\%. For reference, we over-plot the $M_{\rm sub}/M_{\rm star}$--$r_p$ relation predicted by the semi-analytical model of \citet{Guo2011}. We adopt the mock galaxy catalog generated with the \citet{Guo2011} model using the Millennium simulation \citep{Springel2006}, and select mock satellite galaxies with stellar masses $M_{\rm star}>10^{10}\rm {h^{-1}M_{\odot}}$ from clusters with $M_{\rm 200}>10^{14}\,\rm {h^{-1}M_{\odot}}$. The median $M_{\rm sub}/M_{\rm star}$--$r_p$ relation is shown in Fig.~\ref{fig:ML} with a black solid line. The green shaded region represents the parameter space in which 68\% of the mock satellites distribute. The semi-analytical model predicts an increasing $M_{\rm sub}/M_{\rm star}$ with $r_p$, but with a flatter slope than in our observations. Note that we have not attempted to reproduce our detailed observational procedure here, so source and cluster selection might potentially explain this discrepancy.
Particularly relevant here is the fact that our analysis relies on a probability cut $P_{\rm mem}>0.8$ for satellite galaxies, which implies that $\sim 10\%$ of our satellite galaxies may not be true members of the clusters, but galaxies along the line of sight. This contamination is difficult to eliminate completely in galaxy-galaxy lensing, because the uncertainties in the line-of-sight distances are usually larger than the sizes of the clusters themselves. In \citet{Li2013}, we used mock catalogs constructed from $N$-body simulations to investigate the impact of interlopers, and found that 10\% of the galaxies identified as satellites are interlopers, introducing a contamination of $15\%$ in the lensing signal. We expect a comparable level of bias here, as shown in Appendix A. It should be noted, however, that the average membership probability of our satellite sample does not change significantly with $r_p$, implying that the contamination by fake member galaxies is similar at different $r_p$. We therefore expect that the contamination by interlopers will not lead to any qualitative changes in the shapes of the density profiles. A detailed comparison between observations and simulations, taking into account the impact of interlopers, will be carried out in a forthcoming paper. \begin{figure} \includegraphics[width=0.4\textwidth]{fig2.pdf} \caption{Observed galaxy-galaxy lensing signal for satellite galaxies as a function of radius. The top, middle and bottom panels show results for satellites with $r_p$ in the range $[0.1, 0.3]{\rm h^{-1}Mpc}$, $[0.3, 0.6]{\rm h^{-1}Mpc}$ and $[0.6, 0.9]{\rm h^{-1}Mpc}$, respectively. Red points with error bars represent the observational data. The vertical error bars show the $1\sigma$ bootstrap error. The horizontal error bars show the bin size. Black solid lines show the best-fit tNFW model prediction.
Dashed lines of different colors show the contributions from the subhalo, the host halo, and the stellar mass, respectively.} \label{fig:gglensing} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{fig3.pdf} \caption{The contours show 68\% and 95\% confidence intervals for the tNFW model parameters, $M_{\rm sub}$, $r_s$, $r_t$ and $A_0$. Results are shown for satellites with $r_p=[0.3, 0.6]{\rm h^{-1}Mpc}$. The last panel of each row shows the 1D marginalized posterior distributions. Note that the plotting range for $r_s$ and $r_t$ coincides exactly with the limits of our prior, so we do not obtain strong constraints on these two parameters, except that high values are slightly disfavored for both. } \label{fig:2dcontour} \end{figure*} \begin{figure} \includegraphics[width=0.4\textwidth]{fig4.pdf} \caption{Similar to Fig.~\ref{fig:gglensing}, but solid lines represent the best-fit PIEMD model prediction and blue dashed lines are the predictions of the tNFW model.} \label{fig:piemd} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{fig5.pdf} \caption{The subhalo mass-to-stellar mass ratio for galaxies in different $r_p$ bins. Vertical error bars show the 68\% confidence interval. Empty circles show the best-fit $M_{\rm sub}/M_{\rm star}$ values for the $[0.6, 0.9]{\rm h^{-1}Mpc}$ $r_p$ bin including the innermost observational point (see the bottom panel in Fig.~\ref{fig:gglensing}). Horizontal error bars show the bin range of $r_p$. The green shaded region represents the parameter space in which 68\% of the semi-analytical satellite galaxies distribute. The semi-analytical model also predicts an increasing $M_{\rm sub}/M_{\rm star}$ with $r_p$, but with a flatter slope.} \label{fig:ML} \end{figure} \begin{table*} \begin{center} \caption{The best-fit values of the tNFW model parameters for the stacked satellite lensing signal in different $r_p$ bins.
$\log{M_{\rm sub}}$ and $A_{\rm 0}$ are the best-fit values for the subhalo mass and the normalization factor. We do not show the best-fit values of $r_s$ and $r_t$, as they are poorly constrained. $N_{\rm sat}$ is the number of satellite galaxies in each bin. $\langle \log M_{\rm star}\rangle$ is the average stellar mass of satellites. All errors indicate the 68\% confidence intervals. Masses are in units of $\rm {h^{-1}M_{\odot}}$. In our fiducial fitting process, we exclude the first point in the $[0.6,0.9]{\rm h^{-1}Mpc}$ $r_p$ bin as an outlier (see Fig.~\ref{fig:gglensing}). For comparison, the bottom row of the table shows the fitting results including this first point. } \begin{tabular}{l|c|c|c|c|c|c} \hline &&&&\\ $r_p$ range & $\langle \log(M_{\rm star}) \rangle$ & $\log{M_{\rm sub}} $ & $A_{\rm 0}$ & $M_{\rm sub}/ M_{\rm star}$ &$N_{\rm sat}$ & $\langle z_l \rangle$\\ &&&&\\ \hline & & && \\ $[0.1, 0.3]$ & 10.68 & $ 11.37 ^{+ 0.35 }_{- 0.35}$& $ 0.80^{+ 0.01}_{- 0.01}$ & $ 4.43^{+ 6.63}_{- 2.23}$ & 3963 &0.33 \\ & & && \\ \hline &&&&\\ $[0.3, 0.6]$ & 10.72 & $ 11.92 ^{+ 0.19 }_{- 0.18}$& $ 0.86^{+ 0.02}_{- 0.02}$ & $ 17.23^{+ 6.98}_{- 6.84} $ &2507 & 0.29\\ & & & &\\ \hline &&&&\\ $[0.6, 0.9]$ & 10.78 & $ 12.64 ^{+ 0.12 }_{- 0.11}$& $ 0.81^{+ 0.04}_{- 0.04}$ & $ 75.40^{+ 19.73}_{- 19.09} $&577 &0.24\\ & & & &\\ \hline &&&&\\ $[0.6, 0.9]^*$ & 10.78 & $ 12.49 ^{+ 0.13 }_{- 0.13}$& $ 0.81^{+ 0.04}_{- 0.04}$ & $ 54.64^{+ 15.58}_{- 15.80} $&577 &0.24\\ &&&&\\ \hline \end{tabular} \label{tab:para_nfw} \end{center} \end{table*} \begin{table} \begin{center} \caption{Best-fit parameters of the PIEMD model.
} \begin{tabular}{l|c|c|c|c} \hline &&&\\ $r_p$ range & $\langle \log(M_{\rm star}) \rangle$ & $\log{M_{\rm sub}} $ & $A_{\rm 0}$ & $M_{\rm sub} /M_{\rm star}$ \\ &&&\\ \hline & & & \\ $[0.1, 0.3]$ & 10.68 & $ 11.30 ^{+ 0.55 }_{- 0.57}$& $ 0.80^{+ 0.01}_{- 0.01}$ & $ 6.72^{+ 7.84}_{- 5.59} $\\ & & & \\ \hline &&&\\ $[0.3, 0.6]$ & 10.72 & $ 11.76 ^{+ 0.33 }_{- 0.32}$& $ 0.86^{+ 0.01}_{- 0.01}$ & $ 14.40^{+ 9.01}_{- 9.16} $\\ & & & \\ \hline &&&\\ $[0.6, 0.9]$ & 10.78 & $ 12.80 ^{+ 0.15 }_{- 0.15}$& $ 0.79^{+ 0.04}_{- 0.04}$ & $ 110.98^{+ 35.76}_{- 36.49} $\\ &&&\\ \hline &&&\\ $[0.6, 0.9]^*$ & 10.78 & $ 12.84 ^{+ 0.13 }_{- 0.13}$& $ 0.79^{+ 0.04}_{- 0.04}$ & $ 119.74^{+ 34.34}_{- 35.47} $\\ &&&\\ \hline \end{tabular} \label{tab:para_piemd} \end{center} \end{table} \subsection{Dependence on satellite stellar mass} In the CDM structure formation scenario, satellite galaxies with larger stellar mass tend to occupy larger haloes at infall time \citep[e.g.][]{Vale2004, Conroy2006,Yang_etal2012,Lu_etal2014}. In addition, massive haloes may retain more mass than lower mass ones at the same halo-centric radius \citep[e.g.][]{Conroy2006,Moster2010,Simha2012}. For these two reasons, we expect satellite galaxies with larger stellar mass to reside in more massive subhalos. To test this prediction, we select all galaxies with $r_p$ in the range $[0.3, 0.9]$ ${\rm h^{-1}Mpc}$ and split them into two subsamples: $10<\log(M_{\rm star}/\rm {h^{-1}M_{\odot}})<10.5$ and $11<\log(M_{\rm star}/\rm {h^{-1}M_{\odot}})<12$. The galaxy-galaxy lensing signals of the two satellite samples are shown in Fig.~\ref{fig:mstar}. It is clear that at small scales, where subhalos dominate, the lensing signals are larger around the more massive satellites.
The best-fit subhalo masses for the low-mass and high-mass satellites are $\log(M_{\rm sub}/\rm {h^{-1}M_{\odot}})=11.14 ^{+ 0.66 }_{- 0.73}$ ($M_{\rm sub}/M_{\rm star}=19.5^{+19.8}_{-17.9}$) and $\log(M_{\rm sub}/\rm {h^{-1}M_{\odot}})=12.38 ^{+ 0.16 }_{- 0.16}$ ($M_{\rm sub}/M_{\rm star}=21.1^{+7.4}_{-7.7}$), respectively. \begin{figure} \includegraphics[width=0.5\textwidth]{fig6.pdf} \caption{Observed galaxy-galaxy lensing signal for satellite galaxies in different stellar mass bins at fixed $r_p$. The legend shows the $\log(M_{\rm star}/\rm {h^{-1}M_{\odot}})$ range for the different data points. The solid lines are the best-fit tNFW models.} \label{fig:mstar} \end{figure} \section{Discussion and conclusion} \label{sec:sum} In this paper, we present measurements of the galaxy-galaxy lensing signal of satellite galaxies in redMaPPer clusters. We select satellite galaxies from massive clusters (richness $\lambda>20$) in the redMaPPer catalog, fit the galaxy-galaxy lensing signal around the satellites using a tNFW profile and a PIEMD profile, and obtain the subhalo masses. We bin satellite galaxies according to their projected halo-centric distance $r_p$ and find that the best-fit subhalo mass of satellite galaxies increases with $r_p$: the best-fit $M_{\rm sub}$ for satellites in the $r_p=[0.6, 0.9]{\rm h^{-1}Mpc}$ bin is larger than that in the $r_p=[0.1, 0.3]{\rm h^{-1}Mpc}$ bin by a factor of 18. The $M_{\rm sub}/M_{\rm star}$ ratio is also found to increase with $r_p$, by a factor of 12. We also find that satellites with more stellar mass tend to populate more massive subhalos. Our results provide evidence for tidal stripping of the subhalos of red-sequence satellite galaxies, as expected in the CDM hierarchical structure formation scenario. Many previous studies have tried to test this theoretical prediction using gravitational lensing.
Most of these previous studies focused on measuring subhalo masses in individual rich clusters. For example, \citet{Natarajan2009} reported the measurement of dark matter subhalos as a function of projected halo-centric radius for the cluster Cl0024+16. They found that the mass of dark matter subhalos hosting early-type $L^{*}$ galaxies increases by a factor of 6 from a halo-centric radius $r<0.6$ ${\rm h^{-1}Mpc}$ to $r\sim 4$ ${\rm h^{-1}Mpc}$. In a recent work, \citet{Okabe2014} presented the weak lensing measurement of 32 subhalos in the very nearby Coma cluster. They also found that the mass-to-light ratio of subhalos increases as a function of halo-centric radius: the $M/L_{\rm i'}$ of subhalos increases from 60 at 10$'$ to about 500 at 70$'$. Our work is complementary to these results: we measure the galaxy-galaxy lensing signal of subhalos in a statistical sample of rich clusters. Our results lead to similar conclusions and show evidence for the effects of tidal stripping. However, due to the statistical noise of the current survey, there are still large uncertainties in the measurement of $M_{\rm sub}/M_{\rm star}$. Theoretically, the galaxy-halo connection can be established by studying how galaxies of different properties reside in dark matter halos of different masses, through models such as halo occupation models \citep[e.g.][]{Jing_etal1998, PeacockSmith2000, BerlindWeinberg2002}, conditional luminosity function models \citep[e.g.][]{Yang_etal2003}, (sub)halo abundance matching \citep[e.g.][]{Vale2004, Conroy2006, Wang2006, Yang_etal2012, Chaves-Montero2015}, and empirical models of star formation and mass assembly of galaxies in dark matter halos \citep[e.g.][]{Lu_etal2014, Lu_etal2015}. In most of these models, the connection is made between galaxy luminosities (stellar masses) and halo masses before a galaxy becomes a satellite, i.e. before a galaxy and its halo have experienced halo-specific environmental effects.
The tidal stripping effects after a halo becomes a subhalo can be followed in dark matter simulations. This, together with the halo-galaxy connection established through the various models, can be used to predict the subhalo-galaxy relation at the present day. During the revision of this paper, \citet{Han2016} posted a theoretical work on subhalo spatial and mass distributions. In Sec. 6.4 of their paper, they showed how our lensing measurements can be derived theoretically from a subhalo abundance matching point of view. With future data, galaxy-galaxy lensing measurements of the subhalos associated with satellites are expected to provide important constraints on galaxy formation and evolution in dark matter halos. The results of this paper also demonstrate the promise of next-generation weak lensing surveys. In \citet{Li2013}, it is shown that the constraints on subhalo structures and $M_{\rm sub}/M_{\rm star}$ can be improved dramatically with next-generation galaxy surveys such as LSST, due to the increase in both sky area (17000 deg$^2$) and depth (40 gal/arcmin$^2$), which is crucial for constraining the co-evolution of satellite galaxies and subhalos. Space missions such as Euclid will survey a similar area of the sky (20000 deg$^2$) but with much better image resolution (FWHM $0.1''$ versus $0.7''$ for LSST), which is expected to provide even better measurements of galaxy-galaxy lensing. The method described here can readily be extended to these future surveys. \section*{Acknowledgements} Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. The Brazilian partnership on CFHT is managed by the Laborat\'orio Nacional de Astrof\'isica (LNA).
This work made use of the CHE cluster, managed and funded by ICRA/CBPF/MCTI, with financial support from FINEP and FAPERJ. We thank the support of the Laborat\'orio Interinstitucional de e-Astronomia (LIneA). We thank the CFHTLenS team for their pipeline development and verification, upon which much of this survey's pipeline was built. LR acknowledges the NSFC (grant Nos. 11303033 and 11133003) and the support from the Youth Innovation Promotion Association of CAS. HYS acknowledges the support from the Marie-Curie International Fellowship (FP7-PEOPLE-2012-IIF/327561), the Swiss National Science Foundation (SNSF), and NSFC of China under grant 11103011. HJM acknowledges support of NSF AST-0908334, NSF AST-1109354 and NSF AST-1517528. JPK acknowledges support from the ERC advanced grant LIDA and from CNRS. TE is supported by the Deutsche Forschungsgemeinschaft through the Transregional Collaborative Research Centre TR 33 -- The Dark Universe. AL is supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. BM acknowledges financial support from the CAPES Foundation grant 12174-13-0. MM is partially supported by CNPq (grant 486586/2013-8) and FAPERJ (grant E-26/110.516/2-2012).
\section{Introduction} \label{Intro} Over the last two decades, data representation has evolved significantly. Several data types, received from different sources such as machine log data\footnote{Machine log data is machine-generated data such as event logs, server data, and application logs.} and social media, need to be processed, analyzed, and stored. This phenomenon has increased data volumes (see Fig.~\ref{fig:GrowthData}), and machine learning (ML) is the most promising technology for big data analysis. However, traditional ML algorithms often become infeasible \cite{verbraeken2020survey}: some models, such as neural networks, are over-fitted and over-parameterized\footnote{Training a large and complex neural network can take a prolonged time and lead to over-learning (over-fitting): the learning system excels at fitting its training data but cannot generalize to new data. An over-parameterized model, in turn, contains more parameters than can be estimated from the data.} \cite{panchal2011determination}, existing problems are increasingly complex \cite{yang2018end, leopold2018identifying}, and centralized solutions suffer when data grow too large to store on a single machine \cite{raicu2006astroportal}. Furthermore, it is computationally challenging to train sophisticated models like neural networks on a single machine. Hence, a distributed system is necessary to distribute tasks across multiple machines and enhance the efficiency of optimization algorithms in terms of computing and communication costs, while maintaining the highest level of accuracy. \begin{figure}[H] \centering \includegraphics[width=\textwidth,keepaspectratio]{statistic_id871513_amount-of-information-globally-2010-2024} \caption{Growth of data from 2010 to 2024, obtained from \cite{ArneHolst}.
The star (*) in the chart reflects Statista's prediction of rising global data from 2021 to 2024, based on the figure reached in 2020 and a 5-year compound annual growth rate (CAGR). In comparison, the International Data Corporation (IDC) provides the prediction figures from 2018 to 2020.} \label{fig:GrowthData} \end{figure} \subsection{The domain of the survey} Many data instances, high-dimensional data, model and algorithm complexity, inference time constraints, prediction cascades\footnote{Speech recognition, object tracking, and machine translation are real-world problems that demand producing a series of interconnected predictions, a process known as prediction cascades. When considered as a single inference task, a cascade has a highly complicated joint output space, resulting in extremely high computing costs due to increasing computational complexity \cite{bekkerman2011scaling}.}, model selection, and parameter sweeps are all reasons for scaling up ML \cite{bekkerman2011scaling}. Towards the deployment\footnote{To get the best performance while implementing DML, it is necessary to consider high-performance computing (HPC)-style hardware such as NVLink (a high-speed GPU-to-GPU interconnect introduced by NVIDIA \cite{harris-2020}), InfiniBand networking (an HPC networking communications technology with extremely high performance and low latency \cite{InfiniBand-2019}), and GPUs with capabilities such as GPUDirect (a Magnum IO suite of technologies that improves data transportation and access for NVIDIA data center GPUs \cite{GPUDirect-2021}).} of DML, many considerations \cite{verbraeken2020survey} must be taken into account (see Fig.~\ref{fig:RoadMapDMLdeploymentCons}). An \textit{ML algorithm} is a construction that allows making predictions from data, and is characterized by three key features: type, goal, and method.
The principal types are supervised learning \cite{bekkerman2011scaling}, unsupervised learning \cite{bekkerman2011scaling}, semi-supervised learning \cite{chapelle2009semi}, and reinforcement learning \cite{ben2019demystifying}. The goal (purpose, objective) of machine learning is to find patterns in data, which are typically complicated, and then make predictions based on those patterns, in order to help solve several types of problems (anomaly detection, classification, clustering, dimensionality reduction, representation learning, and regression). The method is the nature of the model's evolution, which applies a variety of methodologies such as SGD. SGD, which goes back to the 1950s \cite{robbins1951stochastic}, is a stochastic approximation variant of Gradient Descent (GD) and one of the most successful stochastic optimization algorithms. \begin{figure} \centering \includegraphics[width=90mm,scale=0.9]{DML_deployment_consideration.pdf} \caption{Road map of DML deployment considerations. Five considerations are taken into account: (1) the set of ML algorithms, (2) the problem of the choice of hyperparameters in the optimization process, (3) ensemble learning, an essential concept in DML deployment that combines several models to improve predictive performance, (4) the network topology considered in DML deployment, namely centralized or decentralized, and (5) the type of communication technique, such as synchronous and asynchronous schemes.} \label{fig:RoadMapDMLdeploymentCons} \end{figure} The \textit{hyperparameter optimization} problem extends back to the 1990s \cite{kohavi1995automatic}. The choice of hyperparameters has a considerable influence on the performance of ML algorithms; the learning rate and momentum in SGD are examples \cite{maclaurin2015gradient}.
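To make the role of these two hyperparameters concrete, here is a minimal sketch of SGD with (heavy-ball) momentum applied to a toy quadratic objective; the function names and the toy problem are illustrative, not taken from the survey:

```python
import numpy as np

def sgd_momentum(grad, w0, lr=0.1, momentum=0.9, n_steps=300):
    """SGD with heavy-ball momentum: v <- mu*v - lr*grad(w); w <- w + v.
    `lr` (learning rate) and `momentum` are the hyperparameters
    discussed in the text; `grad` returns the (stochastic) gradient."""
    w = np.asarray(w0, dtype=float).copy()
    v = np.zeros_like(w)
    for _ in range(n_steps):
        v = momentum * v - lr * grad(w)
        w = w + v
    return w

# Toy objective f(w) = ||w - target||^2 / 2, whose gradient is w - target.
target = np.array([3.0, -2.0])
w_star = sgd_momentum(lambda w: w - target, w0=np.zeros(2))
```

On this quadratic, the iterates converge to `target` for this choice of learning rate and momentum, while a learning rate that is too large makes them diverge, illustrating why the combination of hyperparameters matters.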
However, the ideal combination of hyperparameters varies depending on the problem domain, ML model, and dataset \cite{verbraeken2020survey}. Several ways to solve this problem have been proposed, including random search of hyperparameters \cite{bergstra2012random} and Bayesian hyperparameter optimization strategies \cite{snoek2012practical}. ML optimization algorithms are divided into distinct families discussed in the literature \cite{beck2017first, bottou2018optimization, wright2015coordinate, brooks1998markov, ensor1997stochastic, bergstra2012random, shahriari2015taking}, each with its own set of methodologies. Our research study focuses on first-order algorithms, which go back to 1847 \cite{beck2017first}. First-order algorithms represent the set of ``methods that exploit information on values and gradients/subgradients\footnote{The subgradient method is an iterative method for tackling convex minimization problems, applied to non-differentiable objective functions. The observable differences from the standard gradient method are that the step lengths are fixed in advance and that the function value might grow: unlike the gradient approach, which is a descent method using exact or approximate line search, the subgradient method is not a descent method \cite{boyd2003subgradient}.} of the functions comprising the model under consideration'' \cite{beck2017first}. Emulating the human habit of combining different viewpoints to make an important decision is the origin of \textit{ensemble learning} \cite{sagi2018ensemble}, a necessary concept in DML deployment. A single model is often not accurate enough on its own; combining several models to enhance predictive performance is the idea of ensemble learning \cite{sagi2018ensemble, verbraeken2020survey}. Ensemble learning is realized by several methods, like AdaBoost \cite{freund1997decision}, which focuses on leveraging data misclassified by earlier models to train new models.
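As a toy illustration of the ensemble idea (not of AdaBoost's weighting scheme), the sketch below combines the outputs of several hypothetical base classifiers by simple majority voting; all model names and labels are illustrative:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the labels predicted by several base models into one
    ensemble prediction by taking the most common label."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical base models classify the same sample; the
# ensemble outvotes the single mistaken model.
model_outputs = ["cat", "dog", "cat"]
print(majority_vote(model_outputs))  # -> cat
```

Even this trivial combiner shows why an ensemble can outperform any single model: one wrong base prediction is absorbed by the majority.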
The \textit{network topology} is another important concept for the design of a DML deployment; the authors in \cite{verbraeken2020survey} presented four topologies: Centralized (Ensembling), Decentralized (Tree, Parameter Server), and Fully Distributed (Peer-to-Peer). The following section will discuss the segmentation of the topologies adopted in our survey. Although distributed systems benefit ML, since using many processors to train a single task helps minimize training time, \textit{communication} is crucial in DML. The communication cost between processors can affect the system's scalability \cite{shi2018performance}, requiring data communication optimization. In a distributed system, the identification of tasks that may be conducted in parallel, the task execution order, and ensuring a balanced load distribution among the available machines must all be taken into account \cite{xing2016strategies}. Techniques such as Bulk Synchronous Parallel \cite{xing2016strategies}, Stale Synchronous Parallel \cite{ho2013more}, Approximate Synchronous Parallel \cite{hsieh2017gaia}, Barrierless Asynchronous Parallel \cite{luo2020processing}, and Total Asynchronous Parallel \cite{hsieh2017gaia} may then improve communication efficiency. Our survey focuses on synchronous/asynchronous algorithms, the most prevalent and commonly utilized methods in DML. However, heterogeneity, openness\footnote{Openness is a feature that indicates the ability to extend and reimplement a computer system. Moreover, the ability to introduce new resource-sharing services and make them available for usage by a range of client programs determines the openness of distributed systems \cite{coulouris2005distributed}.}, security, scalability, failure handling, concurrency, transparency, and quality of service are the main challenges in distributed systems \cite{coulouris2005distributed}, and hence in distributed ML systems.
Handling failures is challenging, and many techniques are used to deal with various types of faults (hardware or software), including tolerating failures \cite{coulouris2005distributed}. Fault tolerance in distributed environments is defined as the ability of a system to continue performing its intended task in the presence of failures \cite{ahmed2013survey}. The types of possible failures \cite{jalote1994fault} in distributed systems are depicted in Fig. 3, and the Byzantine failure is the most general one \cite{lamport1982byzantine}. A Byzantine failure occurs when a component deviates unexpectedly from its intended behavior and exhibits Byzantine behavior \cite{mostefaoui2018randomized}. This undesirable behavior is also referred to as adversarial, compromised, malicious, or arbitrary failure. It can be malevolent or merely the consequence of a transient malfunction that changed the component's local state, causing it to behave in an unanticipated manner \cite{mostefaoui2018randomized}. This failure type was first described in the context of synchronous distributed systems \cite{lamport1982byzantine, pease1980reaching, raynal2010fault} before being examined in the asynchronous distributed systems setting \cite{attiya2004distributed, lynch1996distributed, raynal2012concurrent}.
\begin{figure}[H] \centering \begin{tikzpicture}[>=stealth',join=bevel,auto,on grid,decoration={markings, mark=at position .5 with \arrow{>}}] \coordinate (structuralNode) at (2:2cm); \coordinate (originNode) at (0:0cm); \draw[-, very thick,black] (structuralNode.south) -- (0.5,0) node[pos=0]{ } node[pos=0.1]{} node[pos=0.4]{} node[pos=0.7]{} node[pos=1]{}; \draw[fill,black] (barycentric cs:structuralNode=0.9,originNode=0.0) circle (1.5pt); \draw[fill,black] (barycentric cs:structuralNode=0.65,originNode=0.2) circle (1.5pt); \draw[fill,black] (barycentric cs:structuralNode=0.35,originNode=0.35) circle (1.5pt); \draw[fill,black] (barycentric cs:structuralNode=1.5,originNode=4.0) circle (1.5pt); \draw[black] (0,0) circle (2.3cm); \draw[black,postaction={decorate,decoration={raise=-2.5ex,text along path,text align=right,text={Byzantine}}}] (0,0) circle (1.8cm); \draw[black,postaction={decorate,decoration={raise=-2.5ex,text along path,text align=right,text={Timing}}}] (0,0) circle (1.3cm); \draw[black,postaction={decorate,decoration={raise=-2.5ex,text along path,text align=right,text={Omission}}}] (0,0) circle (0.75cm); \draw[white,postaction={decorate,decoration={raise=-2.5ex,text along path,text align=right,text={Crash}}}] (0,0) circle (0.25cm); \end{tikzpicture} \caption{Fault types in a distributed system. Crash: loss of the internal state of the component; Omission: the component does not respond to some inputs; Timing: the component reacts too early or too late; Byzantine: arbitrary behavior of the component.} \label{fig:FaultsTypes} \end{figure} \subsection{Problem formulation and motivation of the survey} This study aims to analyze the proposed BFT approaches for DML, considering synchronous and asynchronous training processes in centralized and decentralized settings.
We examine to what extent BFT approaches in DML meet the requirements (Byzantine resilience methods, tolerating Byzantine faults in the distributed SGD algorithm, preserving privacy in collaborative DL, etc.) and the obstacles they confront. Byzantine faults in distributed systems include software, hardware, and computation errors, as well as network problems such as propagating wrong data between nodes; these apply to distributed machine learning as well, thus negatively impacting the intended results. In more detail, a Byzantine fault in DML can occur during training (concerning adversarial gradients), as in \cite{blanchard2017machine}, or during inference (concerning adversarial examples\footnote{Adversarial examples are maliciously generated inputs \cite{szegedy2014intriguing} consisting of small changes, imperceptible to humans, to a regular test sample; many ML models, including deep neural networks, may misclassify them \cite{yuan2019adversarial, wang2016theoretical, NEURIPS2020_00e26af6}.}), as in \cite{wang2016theoretical}. The former is the center of our study: adversaries present during training can learn the information of other nodes and replace the data transmitted among them with arbitrary values, which may break the robustness of the distributed optimization methods used in DML. Several studies, including \cite{blanchard2017machine, weng2019deepchain, chen2018draco, el2018hidden, melis2019exploiting, hitaj2017deep}, have explained that an adversary in the training phase can inject erroneous or malicious data samples, such as false labels or inputs. For example, a Byzantine participant in the training process can select an arbitrary input, resulting in unwanted gradients; hence, when aggregating, we may get incorrect gradients as a possible consequence. Deducing sensitive and private information is another problem. In this context, adversaries might use an inference attack, for example, to extract sensitive model information.
Furthermore, they might fool an already trained model with contradicting inputs, change the model's parameters, or alter the gradient in the incorrect direction, leading it to deviate arbitrarily and shifting the average vector away from the desired direction. These scenarios can disrupt the training process by producing unwanted gradients, which significantly impact the parameter update stage and, therefore, negatively impact the expected model outcome. This study concentrates on Byzantine failure, which occurs when worker machines act arbitrarily, making data privacy more challenging to protect \cite{weng2019deepchain}. Higher accuracy, for example, is desired in several artificial neural network tasks, such as image recognition; this requires a huge amount of data to train deep learning models, resulting in high computational costs \cite{gupta2016model, chilimbi2014project}. Although distributed deep learning solves the latter issue, privacy remains a challenge throughout the training process, since critical training data can be inferred from intermediate gradients when data is partitioned and stored separately \cite{song2017machine, melis2019exploiting, orekondy2018gradient}. Escaping saddle points is a significantly more challenging task: as shown in \cite{yin2019defending}, the adversary may modify the landscape of the loss function around a saddle point. Another challenge is achieving optimal convergence, which Byzantine attackers may prevent. Byzantine attackers can send distorted data or affect the algorithms to force them to converge to a value of the attackers' choice \cite{cao2019distributed}. \textbf{Specifically, the studied problem at the core of our analysis is how to ensure the robustness of the SGD-based training algorithms in DML}.
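The vulnerability of naive gradient averaging described above can be sketched with a toy example: a single Byzantine worker that knows the honest workers' vectors can choose its own report so that the mean lands on any target it wants. All numbers below are illustrative assumptions:

```python
def average(gradients):
    """Naive coordinate-wise mean over the workers' gradient vectors."""
    n = len(gradients)
    return [sum(g[i] for g in gradients) / n for i in range(len(gradients[0]))]

# Two honest workers agree the gradient is roughly [1.0, 1.0].
honest = [[1.0, 1.0], [1.1, 0.9]]

# One Byzantine worker solves for the vector that drags the mean of all
# three reports onto an arbitrary target of its choosing.
target = [-10.0, -10.0]
byzantine = [3 * target[i] - honest[0][i] - honest[1][i] for i in range(2)]

# The mean lands (up to floating-point error) exactly on the attacker's target.
print(average(honest + [byzantine]))
```

This is why the parameter-update step, not just the data, must be protected: one corrupted report is enough to break plain averaging, no matter how many honest workers participate.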
In the literature, and to the best of our knowledge, we observe that despite extensive research in the field of Byzantine fault tolerance in distributed machine learning, only one overview has been published \cite{yang2020adversary}. Since there is no consensus on the usage of the words distributed and decentralized in the literature, Yang et al. \cite{yang2020adversary} chose to use them separately. They focused on the latest advances in Byzantine-resilient inference and learning. In this context, they provided the latest findings on the resilience of distributed/decentralized detection, estimation, and ML training algorithms in the presence of Byzantine attacks. The authors presented some approaches for Byzantine-resilient distributed stochastic gradient descent and for decentralized learning, with comparative numerical experiments. Similarities and dissimilarities between the distributed SGD approaches and the decentralized methods are provided in their work. However, this overview does not include any Byzantine fault tolerance approaches based on Blockchain or coding schemes in distributed machine learning, which would strengthen the discussion of proposed BFT solutions. Moreover, the authors did not classify the techniques used by the BFT approaches, and only a very restricted set of approaches was explained. We push the study further, making it more exhaustive and covering the full structure of BFT in DML, which helps readers grasp the whole picture of the field. \subsection{Scope of the survey} We define the scope of this survey as follows: \begin{itemize} \item Preliminaries of distributed machine learning \item Description of key techniques used in BFT for DML \item Description of fundamental approaches proposed to deal with the BFT issue in DML \end{itemize} A discussion and a comparative analysis of the studied approaches are also given in subsequent sections.
\subsection{Our contributions} The purpose of this paper is to survey the approaches proposed to tolerate Byzantine faults in the context of first-order optimization methods based on SGD. Although works on this subject have already been conducted, a complete study that includes the most current approaches, a classification of the techniques used by these approaches, and the open problems of BFT in DML is still lacking. This study seeks to give a full overview of BFT in DML, with preliminary definitions, achievements, and challenges, conducive to providing the reader with a basic understanding of our proposed work. The reader will therefore be aware of current research trends in this scope and will be able to identify the most demanding concerns. Our contributions to the recent literature in the field can be summarized as follows: \begin{itemize} \item Providing an overview with a classification of the techniques and approaches used to achieve BFT in DML. \item Identifying and examining solutions based on filtering schemes, coding schemes, and Blockchain technology to deal with BFT in ML, along with their software environments. \item Providing insights into existing BFT approaches in DML and future directions. \end{itemize} \subsection{Survey organization} The remainder of this paper is structured as follows: Section 2 presents the essential terminologies related to DML. Section 3 describes fault tolerance in DML frameworks and provides a detailed study of tolerant systems, focusing on Byzantine ones in SGD. Section 4 discusses and compares the set of research works presented in Section 3. Finally, Section 5 offers the conclusion of our survey.
\section{Towards distributed machine learning} \subsection{Machine learning and Deep learning} ML is one of the most successful Artificial Intelligence (AI) models, using algorithms capable of solving problems with progressively less human intervention. In other words, ML algorithms use data samples (training data) as inputs and produce predictions as output values without any explicit programming. Among the most successful subfields of ML is Deep Learning (DL). The latter is based on bio-inspired Artificial Neural Networks (ANNs); multilayer ANNs form a Deep Neural Network (DNN), which is illustrated in the DL architecture (see Fig. 4). \begin{figure} \centering \includegraphics[width=90mm,scale=0.8]{DNN_Architecture.png} \caption{Deep Neural Network architecture adapted from \cite{haris-iqbal-2018}. An example 8-layer DNN architecture (the number of layers may vary). Layers 1 through 5 comprise convolution blocks, Rectified Linear Units (ReLU), and pooling layers. Layer 6 represents the flattening layer. Layers 7 and 8 depict the dense layer and softmax layer, respectively.} \label{fig:DNN_Arch} \end{figure} DL methods are representation-based learning architectures, built as a cascade of linear and non-linear modules designed to construct different levels of abstraction in a hierarchical manner \cite{lecun2015deep}. In large-scale learning, the complexity of ML models is growing \cite{srivastava2015training}, which has driven learning schemes that require a large set of computational resources. Consequently, distributing the computations among several workers in machine learning becomes necessary. A distributed training architecture is illustrated in Fig. 5. \begin{figure}[H] \centering \includegraphics[width=90mm,scale=0.8]{distributed_training_architecture.pdf} \caption{General distributed training architecture. In DML, the training process is realized based on data-parallel or model-parallel techniques.
In the former, the model is replicated across n workers: each worker performs its computations and then shares the updates with the other workers. In the latter, the model is divided across n workers, with each worker holding a part of the model. Both techniques are based on synchronous or asynchronous updates; more details are provided in the following section.} \label{fig:General_DistTrainArch} \end{figure} \subsection{Distributed machine learning} Distributed systems offer the advantage of enabling networked computers to cooperate to tackle complicated tasks that require several computers, reducing the time needed to discover a solution and hence giving performance gains in machine learning. When it comes to distribution, two standard techniques are used to divide the problem among various computers (worker nodes) for efficient training: \textit{data-parallelism} and \textit{model-parallelism}. Both techniques are described in \cite{verbraeken2020survey} as follows. In \textit{data-parallelism}, the data is partitioned across various worker nodes, each holding a full model copy. All worker nodes then apply the same algorithm to distinct data sets, and the results from each are somehow merged into a single coherent output. In \textit{model-parallelism}, exact copies of the whole data set are processed by various worker nodes that are each in charge of different parts of a single model. As a result, the model is the aggregate of all model parts. The approaches mentioned above may also be used in conjunction \cite{xing2016strategies}. In the same context, several DML approaches have been developed, such as those based on the distributed SGD optimization algorithm \cite{agarwal2018cpsgd, basu2019qsparse, shi2019distributed}. Optimization methods, topologies, and communication are among the main features taken into account when deploying a DML system, which we present in the subsequent sections. \subsubsection{Optimization methods} Optimization has piqued the interest of researchers as a key component of machine learning.
With the exponential rise of data and model complexity, optimization approaches in machine learning confront increasing hurdles. The commonly used optimization techniques are addressed in the literature \cite{bottou2018optimization,sun2019survey}, while we concentrate on the popular optimization methods family represented by first-order optimization methods (see Fig. 6). The latter is the set of methods that use the information included in the functions that make up the model being studied. \begin{figure} \centering \begin{forest} for tree={ draw=black, rectangle, align=center, child anchor=north, parent anchor=south, drop shadow, l sep+=12.5pt, edge path={ \noexpand\path[color=black, rounded corners=5pt, >={Stealth[length=10pt]}, line width=1pt, ->, \forestoption{edge}] (!u.parent anchor) -- +(0,-8pt) -| (.child anchor)\forestoption{edge label}; }, where level={3}{tier=tier3}{}, where level={0}{l sep-=5pt}{}, where level={1}{ if n={1}{ edge path={ \noexpand\path[color=black, rounded corners=5pt, >={Stealth[length=10pt]}, line width=1pt, ->, \forestoption{edge}](!u.parent anchor) -- +(0,-5pt) -| (.child anchor)\forestoption{edge label}; } }{ edge path={ \noexpand\path[color=black, rounded corners=5pt, >={Stealth[length=10pt]}, line width=1pt, ->, \forestoption{edge}](!u.parent anchor) -- +(0,-5pt) -| (.child anchor)\forestoption{edge label}; }, } }{} } [First-order\\optimization methods [Basic gradient\\descent methods\\and their variants [Gradient descent\\SGD\\Mini-Batch gradient] [Gradient methods\\with\\momentum\\\cite{polyak1964some, qian1999momentum}] [Adaptive learning\\rate methods\\ \cite{duchi2011adaptive, zeiler2012adadelta, tieleman2012lecture, kingma2014adam, dozat2016incorporating}] [Variance\\reduction\\methods [Stochastic\\average gradient\cite{le2012stochastic}] [Stochastic variance\\reduction gradient\\\cite{johnson2013accelerating}] [SAGA\\algorithm\cite{defazio2014saga}] ] ] [Alternating\\direction methods\\of Multipliers\cite{gabay1976dual} ] 
[Frank-Wolfe\\Method\cite{frank1956algorithm}] ] \end{forest} \caption{Types of first-order optimization methods in ML} \label{fig:first_order_Opt_MLMeth} \end{figure} ML optimization is the process of fitting the hyperparameters (or estimating the weights) using an optimization technique. In the literature, gradient descent is the most widely used optimization method \cite{banavlikar2018crop}. Gradient Descent can be used to reduce the prediction error of an ML model. GD minimizes an objective function $J(\theta)$ as follows \cite{ruder2016overview}: \begin{algorithm} \caption{Gradient Descent} \label{alg-gd} \begin{algorithmic}[1] \REPEAT \STATE \( \theta := \theta - \alpha \cdot \nabla J(\theta)\) \UNTIL{convergence} \end{algorithmic} \end{algorithm} where: \begin{itemize} \item $\theta$: model parameters \item $\alpha$: the learning rate, which determines the step size taken to achieve a good local minimum at convergence \item $J$: objective function \item $\nabla J(\theta)$: first-order derivatives \end{itemize} Our interest is the SGD optimization method, which behaves according to the following algorithm and differs from GD, as shown in Fig. 7: \begin{algorithm}[H] \caption{Stochastic Gradient Descent} \label{alg-sgd} \begin{algorithmic}[1] \STATE Randomly shuffle training examples \STATE for \( i:=1, \ldots, m\{ \) \REPEAT \STATE \( \theta := \theta - \alpha \cdot \nabla J(\theta; x(i);y(i))\)\\ \STATE \( \} \) \UNTIL{convergence} \end{algorithmic} \end{algorithm} where: \begin{itemize} \item $m$: the number of training examples \item $x(i), y(i)$: the training examples
\end{itemize} \begin{figure}[H] \centering \begin{tabular}{cc} \begin{tikzpicture}[ dot/.style = {circle, fill, inner sep=1pt, node contents={}}, ell/.style = {ellipse, draw=black, rotate=-5, minimum width=2*#1, minimum height=#1,node contents={}}, every edge/.style = {draw, ->,black, thick}, every edge quotes/.style = {font=\scriptsize, inner sep=1pt, auto, sloped}, ] \coordinate (originNode) at (0:0cm); \foreach \i [count=\c from 1] in {1,6, 12, 18, 24, 30} \node (n\c) [ell=\i mm, line width=0.5pt]; \node [dot]; \draw (n6.north west) edge [] (n5.west) (n5.west) edge [] (n4.north west) (n4.north west) edge [] (n3.west) (n3.west) edge [] (n2.north west) (n2.north west) edge [] (n1.west) (n1.west) edge [] (originNode); \end{tikzpicture} \begin{tikzpicture}[ dot/.style = {circle, fill, inner sep=1pt,node contents={}}, ell/.style = {ellipse, draw=black, rotate=-5, minimum width=2*#1, minimum height=#1,node contents={}}, every edge/.style = {draw, ->,black, thick}, every edge quotes/.style = {font=\scriptsize, inner sep=1pt, auto, sloped} ] \coordinate (origin) at (0:0cm); \foreach \i [count=\c from 1] in {1,6, 12, 18, 24, 30} \node (n\c) [ell=\i mm, line width=0.5pt]; \node [dot]; \draw (n6.north west) edge [] (n5.north west) (n5.north west) edge [] (n4.north west) (n4.north west) edge [] (n3.north west) (n3.north west) edge [] (n2.north west) (n2.north west) edge [] (n1.north west) (n1.north west) edge [] (origin); \end{tikzpicture} \end{tabular} \caption{SGD (left) and GD (right) comparison adapted from \cite{shalev2014understanding}. In both GD and SGD, we iteratively update a set of parameters to minimize an error function. GD traverses the full dataset once before each update, while SGD selects just one data point randomly. SGD frequently converges considerably quicker compared to GD, although the error function is not as well reduced as in the case of GD.} \label{fig:SGDandGD} \end{figure} The comparison discussed in \cite{ruder2016overview} and shown in Fig. 
7 shows that GD optimization can be costly on bulky datasets: it calculates the gradient of the cost function based on the complete training set for every update. Therefore, GD can be very slow and intractable in practice, unlike SGD, which uses one randomly chosen data point per parameter update and is thus much faster. In addition, the convergence of GD is uncertain when there are several local minima (non-convex cases): when a global minimum is required, a local minimum may confine the training method to an inferior solution \cite{cetin1993global}. On the other hand, SGD avoids this problem by starting with a big step size and then reducing it after getting far enough away from the starting point \cite{kleinberg2018alternative}. Despite the advantages of SGD, there are some challenges (such as improving SGD robustness \cite{dean2012large}, saddle points, and selecting the learning rate \cite{ketkar2017stochastic}) that motivated other variants of SGD (Momentum \cite{qian1999momentum}, Nesterov accelerated gradient \cite{nesterov1983method}, Adagrad \cite{duchi2011adaptive}, Adadelta \cite{zeiler2012adadelta}, RMSprop \cite{tieleman2012lecture}, Adam and AdaMax \cite{kingma2014adam}, Nadam \cite{dozat2016incorporating}). Additionally, AdaScale SGD is another proposed variant that reliably adapts learning rates for large-batch training \cite{pmlr-v119-johnson20a}. Furthermore, in over-parameterized learning, MaSS \cite{liu2019accelerating} reached an accelerated convergence rate over SGD. Naturally, SGD is a sequential algorithm \cite{bottou1998online}. Given the crucial growth in training data size over the years (see Fig. 1), parallelized and distributed implementations become necessary for SGD and its variants.
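A minimal sketch of the two update rules from Algorithms 1 and 2, on a hypothetical one-parameter least-squares problem; the data, learning rate, and epoch count are illustrative assumptions, not taken from any cited work:

```python
import random

# Toy 1-D least-squares data: fit theta so that theta*x ~ y (true theta = 2).
data = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]

def gd(theta, alpha=0.05, epochs=100):
    """Gradient Descent: one update per pass over the full dataset."""
    for _ in range(epochs):
        grad = sum((theta * x - y) * x for x, y in data) / len(data)
        theta -= alpha * grad
    return theta

def sgd(theta, alpha=0.05, epochs=100):
    """Stochastic Gradient Descent: one update per (shuffled) example."""
    for _ in range(epochs):
        examples = data[:]
        random.shuffle(examples)
        for x, y in examples:
            theta -= alpha * (theta * x - y) * x
    return theta

print(gd(0.0), sgd(0.0))  # both approach the true parameter 2.0
```

The contrast matches the discussion above: `gd` touches every example before moving, while `sgd` moves after each sample, performing many cheap updates per pass.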
Distributed SGD implementations \cite{zhang2015deep} typically take the following form: \begin{itemize} \item The parameter server performs learning rounds over the training data \cite{li2013parameter}. \item It diffuses the parameter vector to the workers. \item Each worker calculates an estimate of the gradient. \item The parameter server aggregates the workers' calculation results. \item Finally, the parameter server updates the parameter vector. \end{itemize} We cover in the next subsection the most common communication schemes used with SGD. \subsubsection{Communication} Synchronous and asynchronous approaches are the most common communication methods used in the training process with distributed SGD. \begin{figure} \centering \includegraphics[width=100mm,scale=1.0]{DistributedSGD_Syn_ASyn.pdf} \caption{Asynchronous distributed SGD vs. synchronous distributed SGD. Three worker machines and a parameter server are used (the number of worker machines shown in the figure is an example; there may be more, as well as more parameter servers, depending on the topology used in the training process, as indicated in the next subsection). The computation happens across all workers and the parameter server, with workers submitting gradients to the parameter server, which delivers the updated model to the workers. The gradient transfer and model update occur synchronously in synchronous distributed SGD (right) and asynchronously in asynchronous distributed SGD (left).} \label{fig:SynchAsynchDistSGD} \end{figure} Fig. 8 shows synchronous distributed SGD vs. asynchronous distributed SGD. The distinction between these methods is that, in the synchronous training case, no model update occurs until all workers have successfully calculated and delivered their gradients. In contrast, in asynchronous training, no worker waits for a model update from another worker.
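The bulleted steps above can be sketched as one synchronous round of a toy parameter server; the model (a single scalar weight), the data shards, and the learning rate are illustrative assumptions, not any specific framework's API:

```python
def worker_gradient(params, batch):
    """Each worker estimates the gradient on its own mini-batch
    (toy least-squares model: params is a single weight theta)."""
    theta = params
    return sum((theta * x - y) * x for x, y in batch) / len(batch)

def parameter_server_round(params, batches, alpha=0.1):
    """One synchronous round: broadcast params, collect every worker's
    gradient, aggregate by averaging, then update the parameter."""
    grads = [worker_gradient(params, b) for b in batches]   # all workers report
    avg = sum(grads) / len(grads)                           # aggregation step
    return params - alpha * avg                             # parameter update

# Three hypothetical workers, each holding a data shard with true theta = 2.
shards = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
theta = 0.0
for _ in range(50):
    theta = parameter_server_round(theta, shards)
print(theta)  # approaches 2.0
```

Note how the round only completes after every entry of `grads` is in: that barrier is exactly the synchronous behavior contrasted with the asynchronous scheme in the next subsection.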
The SGD optimization method (see Algorithm 2) works by determining the best weight parameters ($\theta$) of the model by minimizing the objective function ($J$). Thus, at each iteration, we minimize the error with respect to the model's current parameters by following the gradient direction of the objective function on a single sample. When training deep models on large datasets, we need to distribute our computations using multiple machines. As mentioned previously, synchronous and asynchronous methods are the most popular communication approaches used in DML; we discuss them in the context of distributed SGD. Depending on the architecture, multiple workers process data in parallel, and each worker connects with the parameter server and processes a mini-batch of data dependently on the others in the synchronous approach and independently of the others in the asynchronous approach \cite{chen2016revisiting}. \paragraph{\bf{Synchronous distributed SGD}} Several works \cite{das2016distributed, zhang2018adaptive, zhao2019dynamic} used the synchronous method. The computation in the synchronous approach is deterministic and straightforward to implement. At every iteration, each worker calculates the model gradients in parallel using various mini-batches of data. Then, the parameter server waits for all workers to communicate their gradients before aggregating them and transmitting the updated parameters to all workers \cite{chen2016revisiting}. \paragraph{\bf{Asynchronous distributed SGD}} Among the research papers, the works \cite{lian2018asynchronous, reddi2015variance, zhang2013asynchronous} are based on the asynchronous method. The asynchronous communication steps are presented in \cite{chen2016revisiting}.
The worker obtains from the parameter server the most recent model parameters required to process the current mini-batch; it then computes the gradients of the loss with respect to these parameters; finally, these gradients are returned to the parameter server, which updates the model accordingly. In reality, when a worker retrieves the model's parameters while other workers are updating the parameter server, an overlap may occur, resulting in inconsistent results. Furthermore, model updates may have happened while a worker calculates its stochastic gradient; consequently, the resulting gradients are stale and often calculated with respect to out-of-date parameters \cite{chen2016revisiting}.\\ The two most common distributed machine learning methods (synchronous/asynchronous) are built within a (centralized/decentralized) setting that we develop in the subsequent section. \subsubsection{Topologies} We mentioned in the introduction four possible topologies discussed in \cite{verbraeken2020survey} based on the degree of distribution\footnote{The degree of distribution is the frequency distribution of a network system's node degrees over the whole communication network.}. Due to the lack of agreement on the definitions of distributed and decentralized \cite{yang2020adversary}, we centered our survey on the topologies most used in BFT for DML. We classified them into centralized and decentralized (multiple parameter servers, fully distributed) settings.
\begin{figure} \centering \includegraphics[width=\textwidth,keepaspectratio]{settings_diagram.pdf} \caption{(a) Centralized setting, which shows a central computing node (server) and worker nodes connecting with the server; (b) decentralized setting, case of multiple parameter servers, which shows several servers connecting with a decentralized set of workers; (c) decentralized setting, case of a fully distributed setting, which shows independent nodes that collaborate to produce the solution together.} \label{fig:settings} \end{figure} \begin{enumerate} \item \verb| Centralized setting|: the centralized setting is the classical one of the distributed machine learning paradigm, consisting of the parameter server model \cite{li2014scaling}, where there is a central computing node (server) and a set of data holders (worker nodes) that connect to the server (see Fig. 9.a). The worker nodes perform the gradient calculation locally and stream it to the server, which updates the model parameters. \item \verb| Decentralized setting|: to eliminate single points of failure in a centralized setting, there is a need for a decentralized setting that excludes the central entity (server) required in the centralized setting. The decentralized setting focuses on the cooperation among entities to produce the final result \cite{yang2020adversary}. \paragraph{\bf{Decentralized setting based on multiple parameter servers}} In this setting, instead of using one parameter server to manage parts of the model, multiple centralized parameter servers are used with a decentralized set of workers \cite{verbraeken2020survey} (see Fig. 9.b). \paragraph{\bf{Fully distributed setting}} There are no central nodes in a fully distributed setting, so there are no single points of failure. This setting can be represented by a directed or undirected graph, where the workers communicate with each other and transmit their parameter vectors to their neighboring nodes.
Once the parameter vectors have been received, each worker node calculates the gradient locally and broadcasts it to its neighbors. Finally, the worker nodes aggregate the training model parameters from each neighboring worker node and update the model \cite{guo2020towards, lian2017can}. A \textit{peer-to-peer} network is an example implementation of this setting (see Fig. 9.c); it is also the underlying topology of \textit{Blockchain}, the core technology used to create cryptocurrency \cite{nakamoto2008peer}. In other words, Blockchain is a peer-to-peer distributed ledger technology that enables secure transactions and a bond of trust among its participating users \cite{dave2019survey}. Some of the works covered in this survey rely on this technology in their proposed approaches, such as \cite{weng2019deepchain, chen2018machine, lugan2019secure, zhao2019mobile, rathore2019blockdeepnet}. \end{enumerate} \section{Byzantine fault tolerance in distributed machine learning} \subsection{Fault tolerance in distributed machine learning} A system is fault-tolerant if it can detect and eliminate fault-caused errors or crashes and automatically recover its execution. Following the growth of data and the necessity of efficient systems in many fields, including machine learning, fault tolerance is becoming increasingly crucial \cite{zhou2003evolving, leung2008fault, simon1995fault,arad1997fault}. Checkpoint/restart is the simplest form of fault tolerance in machine learning \cite{ben2019demystifying}; among the approaches that use this form, the SCAR (Self-Correcting Algorithm Recovery) framework was developed by Aurick et al. \cite{qiao2019fault}. The presented framework is based on the parameter server (PS) architecture \cite{li2013parameter, ho2013more,li2014communication}.
It minimizes the perturbation caused by failures, reducing it by 78\% to 95\% \cite{qiao2019fault} compared to training algorithms and models that achieve fault tolerance through traditional checkpointing. The number of proposed Byzantine fault-tolerant techniques has grown in tandem with the variety of software faults and malicious attacks. In this paper, we provide a comparative study of the works in the literature that aim to tolerate Byzantine faults in first-order optimization methods, especially in distributed SGD optimization methods. \subsection{Byzantine fault tolerance in first-order optimization methods} We classify the Byzantine fault tolerance approaches that deal with first-order optimization methods, precisely the SGD method, into two categories: synchronous and asynchronous training approaches. Based on these two categories shown in Fig. 8, we present a set of proposed algorithms, which are analyzed as follows: \subsubsection{Synchronous training approaches vs. asynchronous training approaches} Synchronous stochastic gradient descent is used when the workers' communication with the parameter server is coordinated: the algorithm must wait for the parameter server to aggregate all the gradients sent by the workers before updating the model and moving on to the next iteration. Asynchronous stochastic gradient descent, in contrast, is used when the workers communicate with the parameter server independently: the parameter server does not wait for the complete collection of worker gradients before updating the model and moving on to the next iteration.\\ We classify the techniques used in first-order optimization methods for synchronous/asynchronous Byzantine fault tolerance into three types, as shown in Fig. 10.
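The contrast between the two schemes can be sketched on a toy quadratic loss; the function names and the choice of objective are ours, for illustration only:

```python
# Toy contrast of synchronous vs. asynchronous SGD aggregation.
# Each "worker" holds a shifted quadratic f(w) = (w - shift)^2,
# so the global optimum is the mean of the shifts.

def grad(w, shift):
    # worker-local gradient of f(w) = (w - shift)^2
    return 2.0 * (w - shift)

def synchronous_sgd(shifts, w=0.0, lr=0.05, steps=200):
    # barrier: the server waits for ALL workers, averages, then updates once
    for _ in range(steps):
        gs = [grad(w, s) for s in shifts]   # all workers report
        w -= lr * sum(gs) / len(gs)         # single aggregated update
    return w

def asynchronous_sgd(shifts, w=0.0, lr=0.05, steps=200):
    # no barrier: the server applies each worker's gradient as it arrives
    for _ in range(steps):
        for s in shifts:                    # arrivals in some order
            w -= lr * grad(w, s)            # update immediately
    return w

shifts = [1.0, 2.0, 3.0]  # heterogeneous worker data, optimum at mean = 2.0
print(synchronous_sgd(shifts), asynchronous_sgd(shifts))
```

Both variants approach the optimum, but the asynchronous one oscillates around it because updates are applied per arrival rather than per aggregated round.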
\begin{figure} [H] \centering \includegraphics[width=100mm,scale=1.0]{BFT_IN_SGD.pdf} \caption{Brief description of the different types of techniques used in BFT in DML. Most BFT approaches in DML use the synchronous training method rather than the asynchronous one. Synchronous approaches use various techniques, while asynchronous approaches use the filtering scheme technique.} \label{fig:BFT_typesTechniques_descr} \end{figure} \begin{enumerate} \item \verb|Byzantine fault tolerance based on filtering schemes|: an aggregation rule such as the geometric median \cite{cohen2016geometric} is used to combine the input vectors \cite{el2020robust} in the learning process: in distributed training, before the model update phase, the collected gradients must be aggregated, typically by computing their average. Most BFT approaches in the literature are based on filtering schemes implemented as gradient filters (aggregation rules). In some approaches of this type, suspicious gradients are filtered out before averaging by existing robust aggregation rules \cite{alistarh2018byzantine, xie2018zeno}, such as those based on the median or mean \cite{tianxiang2019aggregation}. Other works instead replace the averaging aggregation operation with a robust aggregation rule \cite{blanchard2017byzantine, blanchard2017machine}. Yet other schemes are used in several ways: \cite{su2018securing} uses iterative filtering based on robust mean estimation \cite{steinhardt2018resilience}; norm filtering is used by \cite{gupta2019byzantine_b, gupta2019byzantine_a, li2019rsa}; and the General Update Function (GUF) is used with average aggregation by \cite{guo2020towards}. \item \verb|Byzantine fault tolerance based on coding schemes|: these schemes are also known as redundancy schemes in some works.
The two names come from the combination of redundant gradient calculations and an encoding scheme. The basic idea of the redundancy mechanism is that each node evaluates a redundant gradient instead of a single gradient. More precisely, each computation node has a redundancy rate, which represents the average number of gradients assigned to it. Thus, in the presence of Byzantine workers, the parameter server can always recover the sum of the correct encoded gradients in each iteration \cite{chen2018draco}. The coded gradients draw on coding theory, as shown in \cite{zaharia2008improving}. Coding-theoretic techniques have been used before, but not for Byzantine fault tolerance; instead, they have been used to speed up distributed machine learning systems \cite{lee2017speeding}, for gradient coding \cite{tandon2017gradient, raviv2018gradient}, encoded computation \cite{dutta2016short}, and data encoding \cite{karakus2017straggler}. \item \verb|Byzantine fault tolerance based on Blockchain|: Blockchain was presented as a decentralized system characterized by auditability, privacy, and persistence \cite{zheng2017overview}. When a faulty component of a computer system exhibits arbitrary behavior, we speak of Byzantine faults. This issue was first introduced by the Byzantine Generals Problem \cite{lamport1982byzantine}; a consensus method that tolerates this type of fault is known as a Byzantine Fault Tolerant (BFT) consensus protocol. ``Proof of Work'' is the principal consensus protocol of Bitcoin's public Blockchain for tolerating Byzantine faults \cite{nakamoto2008bitcoin}. Blockchain technology is used in machine learning systems to overcome Byzantine challenges and preserve security and privacy. \end{enumerate} \subsection{BFT approaches in first-order optimization methods} This section presents various BFT approaches proposed in the literature.
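As a minimal illustration of the filtering schemes discussed above, the following sketch implements a coordinate-wise median and a geometric median (via a few Weiszfeld iterations); it is a toy example in pure Python, not any paper's exact rule:

```python
# Two robust aggregation rules used by filtering schemes: the coordinate-wise
# median and the geometric median (approximated by Weiszfeld iterations).

def coordinate_wise_median(grads):
    d = len(grads[0])
    out = []
    for j in range(d):
        col = sorted(g[j] for g in grads)
        out.append(col[len(col) // 2])  # (upper) median of coordinate j
    return out

def geometric_median(grads, iters=50):
    # Weiszfeld iteration: re-weight points by inverse distance to the
    # current estimate, starting from the (non-robust) mean.
    y = [sum(g[j] for g in grads) / len(grads) for j in range(len(grads[0]))]
    for _ in range(iters):
        ws, num = [], [0.0] * len(y)
        for g in grads:
            dist = sum((a - b) ** 2 for a, b in zip(g, y)) ** 0.5
            w = 1.0 / max(dist, 1e-12)  # guard against zero distance
            ws.append(w)
            for j in range(len(y)):
                num[j] += w * g[j]
        y = [n / sum(ws) for n in num]
    return y

honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.2]]
byzantine = [[100.0, -100.0]]           # one arbitrarily corrupted gradient
agg = geometric_median(honest + byzantine)
med = coordinate_wise_median(honest + byzantine)
```

Both rules stay close to the honest cluster, whereas a plain average would be dragged far away by the single Byzantine vector.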
We introduce the principle of each approach and the BFT method used, with some advantages and weaknesses. Sections 3.3.1 and 3.3.2 discuss BFT approaches in synchronous training: Section 3.3.1 centers on the SGD method, while approaches based on other ML optimization methods are shown in Section 3.3.2. The BFT approaches in asynchronous training are presented in Section 3.3.3, while Section 3.3.4 introduces the BFT approaches in partially asynchronous training.\\ We use some abbreviations that are not standard. The authors are cited according to the proposed approach.\\ We summarize the notations frequently used in the next subsections as follows: \begin{itemize} \item $n$: The total number of worker machines. \item $f$: The number of Byzantine workers. \item $d$: The model dimension. \item $T$: The number of iterations of an iterative algorithm. \end{itemize} \subsubsection{Synchronous training Byzantine fault tolerance approaches based initially on the SGD} A brief description of each proposed approach is given in the following table, along with the type of technique it belongs to. \begin{table}[H] \caption{Categorization of synchronous training BFT approaches based on the SGD. Each BFT synchronous training approach is presented with its basic technique (filtering scheme, coding scheme, Blockchain) and a summary of its main idea.
\textbf{Zoom the PDF for better visualization}.} \label{tab:CategorizationSynBFT} \resizebox{0.99\columnwidth}{!}{ \begin{tabular}{p{2.5 cm} |p{2.5 cm}| p{14 cm}} Approach&Technique Type&Brief Description\\ \hline DeepChain&Blockchain&A robust Blockchain-based decentralized platform for privacy-preserving collaborative deep training\\ \hline Krum and Multi-Krum&Filtering scheme&The first provably Byzantine-resilient method for distributed SGD, which satisfies the aggregation rule resilience property in the presence of several Byzantine workers\\ \hline ByzantineSGD&Filtering scheme&A variant of SGD that is optimal in sample and time complexity, with optimal convergence rates for convex ML\\ \hline GBT-SGD&Filtering scheme&Presents the properties of dimensional Byzantine resilience and generalized Byzantine failures\\ \hline Trmean, Phocas&Filtering scheme&Two robust aggregation rules, along with proofs of Byzantine resilience properties\\ \hline DRACO&Coding scheme&Is based on algorithmic redundancy and incorporates coding theory concepts\\ \hline Bulyan&Filtering scheme&Decreases the adversarial leeway that leads to sub-optimal models\\ \hline Zeno&Filtering scheme&Uses a ranking-based preference mechanism when workers are suspected of being malicious\\ \hline LearningChain&Blockchain&A decentralized privacy-preserving and secure machine learning system based on Blockchain that addresses linear and non-linear learning models\\ \hline RSA&Filtering scheme&A collection of robust stochastic sub-gradient methods for distributed learning over heterogeneous datasets with an undetermined number of Byzantine workers\\ \hline SIGNSGD&Filtering scheme&Is based on the signs of the workers' gradient vectors and majority-vote aggregation for the overall update\\ \hline AGGREGATHOR&Filtering scheme&A rapid and lightweight framework that provides weak resilience through Multi-Krum and strong resilience via Bulyan\\ \hline GuanYu&Filtering scheme&The first method for dealing with both Byzantine parameter
servers and Byzantine workers\\ \hline Trimmed mean&Filtering scheme&An aggregation rule based on the mean that satisfies the dimensional Byzantine resilience condition\\ \hline FABA&Filtering scheme&A fast aggregation algorithm that eliminates outliers from uploaded gradients and obtains gradients close to the genuine ones\\ \hline BRIDGE&Filtering scheme&A decentralized learning approach; it can learn effective models while dealing with a specific number of Byzantine nodes\\ \hline LICM-SGD&Filtering scheme&A Lipschitz-inspired coordinate-wise median SGD method, which does not need to know the number of Byzantine workers\\ \hline Gradient-Filter CGC&Filtering scheme&A parallelized SGD method to deal with Byzantine faulty workers in synchronous settings, based on comparative gradient clipping\\ \hline LIUBEI&Filtering scheme&Filters out Byzantine servers without communicating with all servers, cutting down communication rounds\\ \hline DETOX&(Filtering, coding) schemes&Is based on algorithmic redundancy and a robust aggregation method to deal with BFT in DML\\ \hline RRR-BFT&Coding scheme&Deterministic and randomized schemes to tolerate Byzantine faults in the parallelized-SGD learning algorithm\\ \hline Stochastic-Sign SGD&Filtering scheme&Is based on stochastic signs; it improves privacy and accuracy via the dp-sign compressor and enhances learning performance using an error-feedback scheme\\ \end{tabular} \bigskip } \end{table} \begin{enumerate} \item \verb|Byzantine fault tolerance approaches based on filtering schemes|: \paragraph{\textbf{Krum and Multi-Krum}} Blanchard et al. \cite{blanchard2017byzantine, blanchard2017machine} present the problem of tolerating Byzantine failures in distributed SGD algorithms. The authors show that no aggregation rule based on a linear combination of the workers' updates can tolerate a single Byzantine failure \cite{blanchard2017byzantine}.
As the presence of a single attacker can prevent the SGD algorithm from converging, the authors propose Krum, a framework that defends against \textit{Gaussian} and \textit{Omniscient} Byzantine workers. This method for improving the tolerance of the SGD algorithm against adversarial workers is based on combining a convex function with a robust aggregation rule, a variant of the geometric median, under the assumption that $2f + 2 < n$. More precisely, they define the Krum aggregation rule as follows: for any $i\neq j$, $i\rightarrow j$ denotes that $V_j$ belongs to the $n - f - 2$ vectors closest to $V_i$. For each worker $i$, the authors define the score $s(i)= \sum_{i\rightarrow j} \| V_i - V_j \|^2$, where the sum runs over the $n-f-2$ vectors closest to $V_i$. Then, the authors define the Krum function $KR(V_1, \ldots, V_n)= V_{i_*}$, where, for all $i$: $s(i_*) \leq s(i)$. Krum satisfies the Byzantine resilience concept, which is founded on two conditions: \begin{enumerate} \item A lower bound on the scalar product of the vector $F$ (the output vector) and $g$ (the real gradient): the Euclidean distance between $F$ and $g$ produced by the aggregation rule must be minimal. \item The moments of the correct gradient estimator $G$ control the moments of $F$: this condition transfers to $F$ the control exercised by the bounds on the moments of $G$, which govern the effects of the discrete nature of the SGD dynamics \cite{bottou1998online}. \end{enumerate} In the experimental evaluation, the authors define Multi-Krum, a variant of Krum. Multi-Krum calculates the Krum score for each vector and then averages the several vectors selected over multiple rounds of Krum. The time complexity achieved by Krum is $O(n^2d)$.\\ \textit{Weaknesses:} Krum does not converge for non-convex functions, and the number of Byzantine workers must be less than $n/2$. \paragraph{\textbf{ByzantineSGD}} Alistarh et al.
\cite{alistarh2018byzantine} present the ByzantineSGD algorithm, which solves the distributed stochastic convex optimization problem. Despite the $\alpha$ malicious Byzantine workers, the authors aim to converge to an optimal solution of a convex objective with this optimal variant of SGD. The method aggregates the reports sent by the machines using the median of means, tries to identify the honest workers by comparing their shared gradients with the median, and then uses the correct gradient information for the parameter update. ByzantineSGD optimally achieves two fundamental criteria: sample complexity (achieving high accuracy with few data samples) and computational complexity (preserving runtime speedups by distributing computation). ByzantineSGD remains useful and converges as the dimension grows, so the authors prove the optimal robustness of the proposed algorithm.\\ \textit{Weaknesses:} ByzantineSGD does not apply to the decentralized model, and it is less practical because it needs an estimated upper bound on the variance of the stochastic gradients. \paragraph{\textbf{GBT-SGD}} Xie et al. \cite{xie2018generalized} proposed GBT-SGD (Generalized Byzantine-tolerant SGD) to tolerate Byzantine failures in distributed synchronous SGD; it comes as three median-based aggregation rules. Attacks can corrupt model training so that it converges slowly or converges to a poor solution. The proposed median-based aggregation rules (GeoMed: geometric median, MarMed: marginal median, and MeaMed: mean around the median) provably make synchronous SGD converge to good-quality solutions under the condition that $f < n/2$ in each dimension, a property named ``dimensional Byzantine resilience''.
The main advantages of this approach are that the aggregation rules converge despite Byzantine values and that the computational cost is low. According to the authors, \cite{xie2018generalized} is the first approach that presents the properties of dimensional Byzantine resilience (DBR) and generalized Byzantine failures (GBF) for synchronous SGD; it also applies in non-convex settings.\\ \textit{Weaknesses:} the GBT-SGD aggregation rules become ineffective if more than half of the workers are Byzantine. \paragraph{\textbf{Phocas, Trmean}} Xie et al. \cite{xie2018phocas} presented the dimensional Byzantine-resilient algorithms Phocas and Trmean, Byzantine-resilient aggregation rules based on a generalization of the classic Byzantine resilience property \cite{blanchard2017machine}. The first condition is that the number of correct workers is higher than the number of Byzantine workers \cite{xie2018generalized}. The authors define the trimmed-mean-based aggregation rule Trmean, which is then used to define another aggregation rule, Phocas. The proposed rules use fewer assumptions and guarantee convergence of synchronous SGD, which is proved for general smooth loss functions. According to the experimental results, Phocas and Trmean perform well and have linear time complexity.\\ \textit{Weaknesses:} Trmean performs worst under the omniscient attack, which slows down convergence but is difficult to mount in reality. \paragraph{\textbf{Bulyan}} El Mahdi et al. \cite{el2018hidden} treat the problem of poor convergence of SGD to an ineffectual model under poisoning attacks in high dimension $d\gg 1$ with non-convex loss functions, where the authors show the deficiency of existing approaches that were proven resilient in the presence of Byzantine workers \cite{chen2017distributed, blanchard2017machine}. The proposed solution is the Bulyan algorithm, a hybrid defense mechanism.
To pick the true gradients, it uses a geometric-median-based method, and to determine the update direction, it uses the coordinate-wise trimmed mean method. Bulyan reduces the leeway of an attacker (Byzantine workers) to an upper bound of $O(1/\sqrt{d})$ and converges to successful learning models comparable to a reasonable benchmark.\\ \textit{Weaknesses:} Bulyan does not address all attack scenarios. \paragraph{\textbf{Zeno}} Xie et al. \cite{xie2018zeno} generalized previous works \cite{xie2018generalized, xie2018phocas} to attacks such as evasion and exploratory attacks and proposed Zeno. This new robust aggregation rule identifies suspected Byzantine workers in distributed synchronous SGD using the weakest assumption (the existence of at least one correct worker).\\ \textit{Weaknesses:} Zeno requires the presence of a validation dataset on the parameter server, which is impractical in some circumstances. \paragraph{\textbf{RSA}} Li et al. \cite{li2019rsa} proposed RSA (Byzantine-Robust Stochastic Aggregation methods) to mitigate the negative effects of Byzantine attacks and make the learning task robust. RSA is a collection of resilient stochastic algorithms created to prevent Byzantine attacks on heterogeneous datasets. It uses model aggregation to obtain a consensus model. RSA makes no assumption that the data are independent and identically distributed (i.i.d.). First, using subgradient recursions, the authors propose $\ell_1$-norm RSA (Byzantine-robust stochastic aggregation), which is robust to arbitrary attacks from Byzantine workers thanks to the introduction of the $\ell_1$-norm regularized problem, which forces every worker's variable to stay close to the master's variable. The authors then generalize the $\ell_1$-norm regularized problem to the $\ell_p$-norm regularized problem. The latter helps alleviate the Byzantine workers' harmful influence.
As an advantage, the resulting SGD-based RSA methods converge to a near-optimal solution, with a learning error that depends on the number of Byzantine workers. Moreover, SGD and RSA have similar convergence rates in Byzantine-free settings, reaching an optimal solution at a sub-linear convergence rate.\\ \textit{Weaknesses:} the numerical tests show that the $\ell_\infty$-norm does not offer competitive performance. \paragraph{\textbf{SIGNSGD}} Bernstein et al. \cite{bernstein2018signsgd} explored the SIGNSGD algorithm to obtain robust learning: instead of transmitting their gradients, the workers send only the signs of their gradient vectors to the server, and the global update is decided by majority vote. The authors set four goals (fast algorithmic convergence, good generalization performance, communication efficiency, robustness to network faults), which SIGNSGD achieves as follows: \begin{itemize} \item The authors propose compressing all communication between the server and the workers to one bit to attain communication efficiency. \item Sign-based methods perform well, which means SIGNSGD can achieve fast algorithmic convergence. SIGNSGD provides a theoretical basis for fast algorithmic convergence in the mini-batch setting, and the authors show that its theoretical behavior changes from the high to the low signal-to-noise ratio regime. \item SIGNSGD relies on majority-vote theory to aggregate gradients and achieve robustness to network faults, which means that no individual worker has too much power. The assumed model, which considers robustness to Byzantine workers or to a worker that reverses its gradient estimate, is still not the most general; however, SIGNSGD can achieve Byzantine fault tolerance, while good generalization performance is immediate. \end{itemize} After the above theoretical guarantees, the authors provide empirical validation.
As advantages, the authors prove the convergence of SIGNSGD with large and mini-batches, a reduction of per-iteration communication compared to distributed SGD, and, unlike SGD, the robustness of majority voting even when a large fraction of the workers is Byzantine.\\ \textit{Weaknesses:} majority voting may require more optimization to prevent a single machine from becoming a communication bottleneck. Also, SIGNSGD still has a test-set gap, and it fails to converge when the computing units are heterogeneous. \paragraph{\textbf{AGGREGATHOR}} Damaskinos et al. \cite{damaskinos2019aggregathor} presented AGGREGATHOR to leverage more workers to speed up learning. AGGREGATHOR was built around TensorFlow\footnote{TensorFlow is one of the Deep Learning frameworks. These Deep Learning frameworks may be interfaces or libraries that help Data Scientists and ML Developers develop Deep Learning models. Other frameworks include PyTorch, Keras, MXNet, and more.} and aims to achieve two goals: \begin{itemize} \item To make the development and testing of robust distributed training on TensorFlow faster. \item To enable the deployment of Byzantine-resilient learning algorithms outside the academic environment. \end{itemize} Each worker may communicate with the server replicas and use the model sent by 2/3 of the replicas. Since the GAR and the model update on the server are deterministic, correct servers present identical models to the workers. We can cite the advantages of the proposed framework in the following points: \begin{itemize} \item In the case of a saturated network, AGGREGATHOR uses the communication protocol UDP instead of TCP, which provides a further speed-up and makes it a performance booster for TensorFlow, without losing accuracy.
\item Through MULTI-KRUM \cite{blanchard2017byzantine}, AGGREGATHOR ensures a first level of robustness (weak resilience), and through BULYAN \cite{el2018hidden}, it provides a second level of robustness (strong resilience). \item AGGREGATHOR has addressed a TensorFlow weakness to achieve Byzantine resilience. \item AGGREGATHOR can be used securely in the training process to distribute any ML model developed for TensorFlow.\\ \end{itemize} \textit{Weaknesses:} the overhead of AGGREGATHOR; furthermore, the interaction between the specifics of gradient descent and the state machine replication technique used by AGGREGATHOR is difficult to implement efficiently. \paragraph{\textbf{GuanYu}} El-Mhamdi et al. \cite{el2019sgd} presented GuanYu, theoretically the first algorithm that tolerates both Byzantine parameter servers and Byzantine workers. At each step of synchronous distributed SGD, to update the parameter vector, GuanYu uses several gradient aggregation rules (GARs) to aggregate all worker gradient estimates into one gradient. The authors use a contraction argument, taking advantage of the geometric properties of the median in high-dimensional spaces, to demonstrate the Byzantine resilience of the proposed algorithm. Within each non-Byzantine server, this argument aims to avoid any drift of the models (with probability 1). As advantages, GuanYu achieves optimal convergence and asynchrony, and the authors built GuanYu on TensorFlow and deployed it in a distributed configuration; compared to a vanilla TensorFlow deployment, they show that GuanYu can tolerate Byzantine behavior with reasonable overhead.\\ \textit{Weaknesses:} GuanYu's assumption of a limited difference between the models on correct parameter servers cannot be realized in some cases. \paragraph{\textbf{Trimmed mean}} TianXiang et al.
\cite{tianxiang2019aggregation} introduced the concept of Byzantine resilience and proposed a new aggregation rule, Trimmed mean, based on the mean aggregation rule. The proposed aggregation rule indicates the optimization direction; on the dataset side, it performs data pruning, eliminating the data with the most significant difference. Trimmed mean achieves several advantages, such as satisfying the Byzantine resilience conditions and a nearly linear time complexity $O(dn\log n)$. Comparing Trimmed mean with a set of aggregation rules, the authors prove its robustness: it can still converge to the optimal global solution.\\ \textit{Weaknesses:} Trimmed mean is applicable only in the convex setting. \paragraph{\textbf{FABA}} Xia et al. \cite{xia2019faba} proposed FABA, which helps the participating workers obtain correct gradients and avoid outliers in distributed training. FABA is a fast aggregation algorithm against Byzantine attacks, based on four essential inputs: \begin{itemize} \item The gradients computed by the set of workers. \item The weights. \item The learning rate. \item The assumed proportion of Byzantine workers. \end{itemize} Considering that $G_{mean}$ is the mean of all the gradients uploaded by the workers, the FABA algorithm executes as follows: \begin{enumerate} \item If the number of removed attack gradients $k$ is strictly less than the product of the total number of gradients uploaded by the workers and the assumed proportion of Byzantine workers, FABA continues its execution; otherwise, it goes to step $5$. \item Calculate $G_{mean}$ as $g_0$. \item For each remaining gradient, calculate its difference from $g_0$ and remove the gradient with the biggest difference. \item FABA adds $1$ to the number of removed attack gradients, $k = k + 1$, and returns to step $1$. \item Calculate $G_{mean}$ of the remaining gradients as the aggregation result.
\item Update the weights and return them to each worker. \end{enumerate} The proposed FABA algorithm has many advantages: it is easy to implement; in the presence of Byzantine workers, it converges quickly and can adaptively adjust its performance; and it achieves the same performance and accuracy as the non-Byzantine case, showing higher efficiency than previous algorithms. The authors also show that FABA's aggregated gradients are close to the true gradients calculated by honest workers, and they prove that the moments of the aggregated gradients are bounded by those of the true gradients.\\ \textit{Weaknesses:} among FABA's conditions is one ensuring that all gradients from honest workers cluster together and that their differences are small. This requirement is simple to satisfy when the dataset that each worker receives is uniformly selected and the batch size is not too small, which is not the case when workers have their own private datasets whose distributions are unknown. \paragraph{\textbf{BRIDGE}} Yang and Bajwa \cite{yang2019bridge} presented BRIDGE (Byzantine-resilient decentralized gradient descent), a new decentralized learning method that is a particular case of the distributed gradient descent (DGD) algorithm \cite{nedic2009distributed}. Compared to the DGD algorithm, the BRIDGE algorithm has a screening step before each update, making it Byzantine resilient. The screening method, called coordinate-wise trimmed mean, eliminates the $b$ largest and $b$ smallest values in each dimension, where $b$ is the bound on the number of Byzantine nodes that the algorithm can tolerate. The authors thus combine the dimension-wise trimmed mean with decentralized gradient descent to solve the problem of decentralized vector-valued learning under Byzantine settings.
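The coordinate-wise trimmed-mean screening step described above (also the core of Trmean-style rules) can be sketched in a few lines; the function name and toy data are ours:

```python
# Coordinate-wise trimmed mean: drop the b largest and b smallest values
# in each dimension, then average the remaining values.

def trimmed_mean(vectors, b):
    assert len(vectors) > 2 * b, "need more vectors than trimmed values"
    d = len(vectors[0])
    out = []
    for j in range(d):
        col = sorted(v[j] for v in vectors)
        kept = col[b:len(col) - b]   # screen out extremes in dimension j
        out.append(sum(kept) / len(kept))
    return out

# four honest values near 1.0 plus one Byzantine extreme
vals = [[1.0], [1.1], [0.9], [1.0], [1000.0]]
print(trimmed_mean(vals, b=1))
```

With $b=1$ the extreme value 1000.0 is screened out, so the result stays near the honest values, while a plain mean would be around 200.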
The authors show the efficiency of the proposed algorithm in dealing with decentralized vector-valued learning issues, provide theoretical guarantees for strongly convex problems, and show by numerical experiments the utility of BRIDGE on non-convex learning problems.\\ \textit{Weaknesses:} because of the distance-based design strategy utilized, BRIDGE is still susceptible to some Byzantine attacks. \paragraph{\textbf{LICM-SGD}} Yang et al. \cite{yang2019byzantine} proposed the Lipschitz-inspired coordinate-wise median (LICM-SGD) method to mitigate Byzantine attacks. The proposed LICM-SGD algorithm is inspired by the intuition that ``benign workers should generate stochastic gradients closely following the Lipschitz characteristics of the true gradients.'' During training, each honest worker computes a stochastic gradient on its mini-batch and sends the result to the PS; in the case of a Byzantine worker, the PS receives an arbitrary value. Once the PS has received all the stochastic gradients from the workers, it computes their coordinate-wise median. In the last step, before updating the parameter vector, LICM-SGD selects the true gradients based on the Lipschitz characteristics. LICM-SGD features several advantages: \begin{itemize} \item In the non-convex setting, LICM-SGD converges to the stationary region and can face up to half of the workers being Byzantine. \item LICM-SGD is interesting for practical implementations, as it needs no information about the number of attackers or the Lipschitz constant. \item LICM-SGD has optimal computational time complexity, matching standard SGD's time complexity under no attacks. \item Thanks to its low computational time complexity, LICM-SGD achieves a fast running time.\\ \end{itemize} \textit{Weaknesses:} the LICM-SGD method assumes i.i.d. data only.
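Several of the filters surveyed above (Krum, FABA, BRIDGE, LICM-SGD) score or screen gradients by their proximity to peers. As a concrete instance, the Krum score defined earlier in this section can be sketched as follows (a toy, pure-Python illustration, not the authors' implementation):

```python
# Krum: score each vector by its summed squared distance to its
# n - f - 2 nearest neighbors; select the vector with the lowest score.

def krum(vectors, f):
    n = len(vectors)
    k = n - f - 2                      # neighbors counted per score
    assert k >= 1, "requires 2f + 2 < n"

    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    scores = []
    for i, v in enumerate(vectors):
        dists = sorted(sqdist(v, w) for j, w in enumerate(vectors) if j != i)
        scores.append(sum(dists[:k]))  # closest n - f - 2 vectors to v
    return vectors[scores.index(min(scores))]

grads = [[1.0, 1.0], [1.2, 0.8], [0.8, 1.2], [1.1, 1.0], [50.0, -50.0]]
selected = krum(grads, f=1)  # returns a vector from the honest cluster
```

Because the Byzantine vector is far from everyone, its score is enormous and it is never selected, matching the lower-bound condition on distances in the Byzantine resilience definition.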
\paragraph{\textbf{Gradient Filter CGC}} Gupta and Vaidya \cite{gupta2019byzantine_b} proposed the gradient-filter CGC (Comparative Gradient Clipping) to let the master compute a solution to the linear regression problem while alleviating the harmful effects of Byzantine faulty workers, and to enhance the gradient aggregation phase of the parallelized SGD method. Under the assumption that $n > 2f$, CGC works as follows: in each iteration, the $f$ gradients with the largest $2$-norms are clipped so that their $2$-norm equals that of either the $(f +1)$-th largest gradient or, equivalently, the $(n - f)$-th smallest gradient, while the remaining gradients are left unchanged. The resulting gradients are averaged in the next update of the current estimate. As an advantage, the parallelized SGD obtains a good estimate of the regression parameter $w$ when the number of Byzantine faulty workers is less than half the number of participating workers; also, in the case of $n = f$, the guaranteed upper bound on the estimation error increases only linearly.\\ \textit{Weaknesses:} the synchronous master-workers system used by this approach still seems subject to a single point of failure. \paragraph{\textbf{LIUBEI}} El Mhamdi et al. \cite{mhamdi2019fast} presented LIUBEI, a Byzantine-resilient ML algorithm that, unlike standard non-Byzantine-resilient algorithms, does not trust any individual network component. The mechanism is based on gradient aggregation rules (GARs), network synchrony, and Lipschitz continuity of the loss function. It replicates the parameter server on multiple machines to tolerate Byzantine faults. The authors build LIUBEI using TensorFlow. LIUBEI introduces a new filtering mechanism, composed of two components (the Lipschitz filter and the models filter), used by workers to avoid pulling a suspect model from a parameter server.
The authors show that using the Lipschitz filter without the models filter, or vice versa, does not ensure Byzantine resilience. The Lipschitz filter bounds the growth of the models' gradients, while the models filter bounds the distance between the models held by correct servers in two successive iterations. This bounded distance is guaranteed by the proposed $scatter/gather$ protocol, which works iteratively in two main phases ($scatter$ and $gather$). In the scatter phase: \begin{enumerate} \item Each parameter server works on its local data. \item There is no communication among the parameter servers. \item In each iteration, each worker communicates with $f_{ps} + 1$ servers to pull the model ($f_{ps}$: the number of servers that can be Byzantine). \item This phase lasts for $T$ learning iterations. \end{enumerate} In the gather phase: \begin{enumerate} \item The parameter servers communicate with each other to aggregate their models. \item Each worker aggregates the models from all servers. \item This phase lasts for a single learning iteration. \end{enumerate} The authors prove several advantages of LIUBEI theoretically: Byzantine resilience on both the server and the worker sides, guaranteed convergence, an accuracy loss of around 5\%, and a convergence overhead of about 24\% compared to vanilla TensorFlow; the throughput gain is 70\% over a Byzantine-resilient ML algorithm that assumes network asynchrony.\\ \textit{Weaknesses:} LIUBEI still requires further study of how using the gather step more often improves convergence, and of the communication overhead of this step, which depends on the data and the model. \paragraph{\textbf{Stochastic-Sign SGD}} Jin et al.
\cite{jin2020stochastic} built on SIGNSGD \cite{bernstein2018signsgd} and used novel stochastic-sign-based gradient compressors to propose Stochastic-Sign SGD, together with an error-feedback variant, to improve federated learning performance. The authors first introduce two stochastic compressors, sto-sign and dp-sign. The sto-sign compressor extends SIGNSGD to Sto-SIGNSGD: workers apply a two-level stochastic quantization and transmit the signs of the quantized results instead of the signs of the raw gradients. To ensure robustness with 1-bit compressed communication between server and workers, Sto-SIGNSGD aggregates gradients by majority vote. The dp-sign compressor, a differentially private stochastic compressor, is then proposed and extended to DP-SIGNSGD to improve privacy and accuracy while preserving communication efficiency. Finally, the authors develop an Error-Feedback Stochastic-Sign SGD scheme: the majority-vote operation can introduce an error, which the server tracks and compensates for in the next communication round. With these ingredients, the proposed approach can handle heterogeneous data distributions; the authors prove convergence at the same rate as SIGNSGD in the case of a homogeneous data distribution, and theoretically guarantee the Byzantine resilience of their approaches.\\ \textit{Weaknesses:} The significant perturbation introduced by local DP reduces accuracy.
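The stochastic sign compression and the server-side majority vote described above can be sketched as follows (our own minimal illustration: the two-level quantization is simplified to a single bound $B$, which is an assumption, not the authors' exact scheme):

```python
import numpy as np

def sto_sign(grad, B, rng):
    """Stochastic sign compressor (sketch, assuming a single bound B):
    coordinate g_i in [-B, B] is sent as +1 with probability (B + g_i)/(2B)
    and as -1 otherwise, so E[output] = g_i / B (unbiased up to scaling)."""
    g = np.clip(np.asarray(grad, dtype=float), -B, B)
    prob_plus = (B + g) / (2.0 * B)
    return np.where(rng.random(g.shape) < prob_plus, 1.0, -1.0)

def majority_vote(worker_signs):
    """Server aggregation: element-wise majority vote over the 1-bit updates,
    so a minority of Byzantine workers cannot flip the aggregate sign."""
    return np.sign(np.sum(np.stack(worker_signs), axis=0))

rng = np.random.default_rng(0)
signs = [sto_sign(np.array([0.5, -0.5, 0.0]), B=1.0, rng=rng) for _ in range(101)]
vote = majority_vote(signs)
```

With many workers, the vote recovers the sign of each coordinate of the true gradient with high probability, while each worker transmits only one bit per coordinate.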
\begin{table}[H] \caption{Summary of some recent results concerning synchronous BFT in DML. ($\checkmark$): the analyzed approach takes this type of attack into consideration; ($\times$): it does not; ($-$): to the best of our knowledge, the information is not supplied in the original resource or in other studies. $n$: total number of worker machines. $f$: number of Byzantine workers. $d$: model dimension.} \label{tab:attackSum} \begin{minipage}{\columnwidth} \begin{center} \small \begin{tabularx}{\linewidth}{llllllll} \toprule Approach&\parbox[t]{2cm}{Time complexity}&\parbox[t]{2.5cm}{Condition on Byzantine workers number}&\parbox[t]{1cm}{Gaussian attack}&\parbox[t]{1.5cm}{Omniscient attack}&\parbox[t]{1cm}{Bit-flip attack}&\parbox[t]{1cm}{Gambler attack}&\parbox[t]{1cm}{Sign-flipping attack}\\ \midrule Krum, Multi-krum&$O(n^2 d)$&$n > 2f + 2$&$\checkmark$&$\checkmark$&$\times$&$\times$&$\times$\\ GBT-SGD(GeoMed)&$O(nd\log^3(1/\varepsilon))$&$n>2f$&$\checkmark$&$\checkmark$&$\checkmark$&$\checkmark$&$\times$\\ GBT-SGD(MarMed)&$O(dn\log n)$&$n>2f$&$\checkmark$&$\checkmark$&$\checkmark$&$\checkmark$&$\times$\\ GBT-SGD(MeaMed)&$O(dn)$&$n>2f$&$\checkmark$&$\checkmark$&$\checkmark$&$\checkmark$&$\times$\\ Phocas and Trmean&$O(dn)$&$n>2f$&$\checkmark$&$\checkmark$&$\checkmark$&$\checkmark$&$\times$\\ Zeno&$O(nd)$&unbounded number&$\times$&$\checkmark$&$\times$&$\times$&$\checkmark$\\ RSA&$-$&unbounded number&$\times$&$\times$&$\times$&$\times$&$\checkmark$\\ SIGNSGD&$-$&$n>2f$&$\checkmark$&$\times$&$\times$&$\times$&$\times$\\ Trimmed mean&$O(dn\log n)$&$n > 2f$&$\checkmark$&$\checkmark$&$\times$&$\times$&$\times$\\ FABA&$-$&$f\cdot n$, $f<0.5$&$\checkmark$&$\times$&$\times$&$\times$&$\times$\\ \bottomrule \end{tabularx} \end{center} \bigskip \end{minipage} \end{table} \item \verb|Byzantine fault tolerance approaches based on coding schemes|: \paragraph{\textbf{DRACO}} Chen et al.
\cite{chen2018draco} present the first work that deals with Byzantine workers by using a specific encoding scheme with redundant gradient computations. More precisely, DRACO addresses the problem of adversarial compute nodes using coding theory to achieve robust distributed training. In the proposed framework, the parameter server (PS) eliminates the effects of adversarial updates by exploiting the redundant gradients evaluated by each compute node: the PS first detects the arbitrarily malicious compute nodes, then recovers the correct gradient average from the updates shipped by the correct nodes, thereby removing the Byzantine values. DRACO applies to many training algorithms (the authors focus on mini-batch SGD) and is robust to adversarial compute nodes. While maintaining the correct update rule, DRACO achieves the theoretical lower bound on the redundancy needed to resist adversaries.\\ \textit{Weaknesses:} The main limits of this framework are that it defends against only a small number of Byzantine workers, and that performance is reduced by inexact model updates. \paragraph{\textbf{DETOX}} The authors in \cite{rajput2019detox} present DETOX, a Byzantine-resilient distributed training framework. It applies to the PS architecture and proceeds in two steps. First, it uses algorithmic redundancy to filter out the majority of the Byzantine gradients. Second, DETOX performs hierarchical robust aggregation: it partitions the filtered gradients into a few groups and aggregates each group, then applies any robust aggregator to the averaged gradients in order to minimize the residual effect of the original Byzantine gradients. The advantages of this framework are its scalability, flexibility, and nearly linear complexity; moreover, when combined with any previous robust aggregation rule, DETOX improves that rule's efficiency and robustness.
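The hierarchical aggregation step can be sketched as follows (a simplified illustration, not the authors' implementation: the redundancy-based majority filtering of the first step is omitted, and a coordinate-wise median stands in for the arbitrary robust aggregator):

```python
import numpy as np

def hierarchical_robust_aggregate(gradients, num_groups=3):
    """DETOX-style second step (sketch): partition the (already filtered)
    gradients into a few groups, average each group, then apply a robust
    aggregator -- here a coordinate-wise median -- to the group means."""
    groups = np.array_split(np.stack(gradients), num_groups)
    group_means = np.stack([g.mean(axis=0) for g in groups])
    return np.median(group_means, axis=0)

# A single surviving Byzantine gradient contaminates only one group mean,
# and the median across groups discards that group.
grads = [np.array([1.0])] * 8 + [np.array([100.0])]
agg = hierarchical_robust_aggregate(grads, num_groups=3)  # -> [1.0]
```

Averaging within groups keeps the aggregation cheap, while the robust step across the few group means contains any Byzantine residue that survived filtering.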
Also, compared to vanilla implementations of Byzantine-robust aggregation, DETOX increases accuracy by up to 40\% under heavy Byzantine attacks.\\ \textit{Weaknesses:} DETOX needs the PS, as data owner, to partition the data among the workers, which breaks data privacy. \paragraph{\textbf{RRR-BFT}} Gupta and Vaidya \cite{gupta2019randomized} proposed two schemes (deterministic and randomized) for guaranteeing exact fault-tolerance\footnote{The authors in \cite{gupta2019randomized} state that, despite the presence of Byzantine workers, a parallelized-SGD method has exact fault-tolerance if the master asymptotically converges to a minimum point $w$ of the average loss evaluated over the data points.} in the parallelized-SGD method. These schemes are based on the ``computation efficiency''\footnote{The computation efficiency of a coding scheme is the ratio of the number of gradients used for the parameter update to the number of gradients computed in total by the workers.} of a coding scheme, and the authors consider the case where $f (< n/2)$ of the workers are Byzantine faulty. Since Byzantine workers may send faulty data to the master, the proposed approach uses a reactive redundancy mechanism to identify and isolate such workers. In the deterministic scheme: \begin{enumerate} \item The master uses a fault-detection code at each iteration. \item Upon detecting a fault, the master uses the reactive redundancy mechanism to correct the fault and identify the Byzantine worker that caused it. \end{enumerate} In the randomized scheme: \begin{enumerate} \item To improve upon the deterministic scheme's computation efficiency, the master applies the fault-detection codes only in intermittent, randomly chosen iterations instead of in every iteration.
\end{enumerate} The main advantages of this work are exact fault tolerance and favorable computation efficiency: the master can optimize the trade-off between the expected computation efficiency and the convergence rate of the parallelized learning algorithm.\\ \textit{Weaknesses:} RRR-BFT is still susceptible to a single point of failure.\\ \item \verb|Byzantine fault tolerance approaches based on Blockchain|: \paragraph{\textbf{DeepChain}} Weng et al. \cite{weng2019deepchain} presented DeepChain, a Blockchain-based, decentralized mechanism for privacy-preserving deep learning training that provides auditability and fairness, and addresses the perturbation of the collaborative training process caused by malicious participants. DeepChain is an incentive mechanism that pushes participants to behave honestly in the training (collecting gradients and updating parameters) and to jointly share the obtained local gradients. The developed prototype demonstrates DeepChain's performance in terms of confidentiality, auditability, and fairness.\\ \textit{Weaknesses:} the security challenge would have to be redefined if DeepChain extended the potential value of its models to transfer learning. \paragraph{\textbf{LearningChain}} Chen et al. \cite{chen2018machine} discussed privacy and security problems in linear and non-linear learning models and proposed LearningChain, a decentralized, privacy-preserving, and secure machine learning framework based on Blockchain technology. It consists of three processes. The first is \textit{Blockchain Initialization}: the computing nodes and data holders establish connections, construct a fully decentralized peer-to-peer network, and reach a consensus on the initial learning model settings.
The second process is \textit{Local Gradient Computation}: based on the current global model and a common loss function, the data holders create pseudo-identities and compute their local gradients, which they then perturb using a differential privacy scheme. Finally, the computed local gradients are encapsulated with other related information and broadcast to all computing nodes. The third process is \textit{Global Gradient Aggregation}: computing nodes compete for permission to add a new block to the chain by solving a mathematical puzzle. As soon as a computing node wins the competition, it applies the Byzantine-tolerant aggregation scheme to the local gradients, computes the global gradient to update the model parameters $w$, and finally appends the newly created block, with its associated information, to the chain. By repeatedly performing local gradient computation and global gradient aggregation, the collaborating participants in the network train a global predictive model while preserving the privacy of their own data. In summary, the advantages of the proposed approach are as follows: the authors design a decentralized Stochastic Gradient Descent (SGD) algorithm to learn a general predictive model over the Blockchain; they develop a differential-privacy-based mechanism to protect each party's data privacy; and, to protect the system against possible Byzantine attacks, they devise an $l$-nearest aggregation algorithm. In the implementation phase, the authors demonstrate LearningChain's efficiency and effectiveness on Ethereum; its extension, LearningChainEx, applies to all types of Blockchain.\\ \textit{Weaknesses:} LearningChain uses proof-of-work consensus, which requires substantial energy for every transaction; its energy efficiency has not been verified.
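One possible reading of the $l$-nearest aggregation rule is sketched below (our own hedged reconstruction: we assume ``nearest'' means smallest Euclidean distance to a reference gradient such as the winning node's own local gradient; the original paper may define the rule differently):

```python
import numpy as np

def l_nearest_aggregate(reference, gradients, l):
    """Hedged sketch of an l-nearest rule: keep the l received gradients
    closest in Euclidean distance to a reference gradient, then average
    them, so distant (potentially Byzantine) gradients are discarded."""
    grads = np.stack(gradients)
    dists = np.linalg.norm(grads - reference, axis=1)
    kept = grads[np.argsort(dists)[:l]]
    return kept.mean(axis=0)

reference = np.array([1.0, 1.0])  # e.g., the aggregating node's own gradient
received = [np.array([1.1, 0.9]), np.array([0.9, 1.1]), np.array([50.0, -50.0])]
agg = l_nearest_aggregate(reference, received, l=2)  # outlier is dropped
```

The choice of $l$ trades off robustness (small $l$ excludes more outliers) against statistical efficiency (large $l$ averages over more honest gradients).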
\end{enumerate} \subsubsection{Synchronous training Byzantine fault tolerance approaches in other machine learning optimization methods} \begin{enumerate} \item \verb|BFT approaches based on filtering schemes|: \paragraph{\textbf{DSML-Adversarial}} Chen et al. \cite{chen2017distributed} considered decentralized systems and the problem of distributed statistical learning under Byzantine attacks. Their goal is a robust algorithm with which a system in dimension $d$ can learn the true underlying parameter. The authors propose a new variant of standard gradient descent based on the geometric-median aggregation rule. They address the challenges facing Federated Learning (security; small local datasets versus high model complexity; communication constraints) and propose a Byzantine gradient descent method that is executed as follows. The parameter server partitions the $n$ working machines into $k$ batches. At each iteration, to increase the resemblance of the Byzantine-free batches and stop the interruption by the Byzantine machines, it groups the received gradients into non-overlapping batches, computes the mean of the received local gradients within each batch, then calculates the geometric median of the $k$ batch means, and finally performs a gradient descent update with the aggregated gradient. Advantages: the developed method tolerates Byzantine failures, can learn a very complex model from a small amount of local data, and converges exponentially fast.\\ \textit{Weaknesses:} the training data are not equally distributed, mobile phones are irregularly available, and the algorithm does not adapt to the asynchronous setting. Also, Federated Learning gives users control over their data through the ability to store training data locally; however, some information must still be extracted from this data to train a high-quality model.
Therefore, the proposed algorithms must precisely characterize the minimum amount of confidentiality that has to be sacrificed in the Federated Learning paradigm. \paragraph{\textbf{ByzRDL}} Yin et al. \cite{yin2018byzantine} studied Byzantine-robust distributed statistical learning (BRDSL) algorithms with optimal statistical performance, working in the standard statistical setting of empirical risk minimization (ERM). To achieve \textit{statistical optimality} and \textit{communication efficiency} simultaneously, they propose three algorithms: two distributed gradient descent algorithms, one based on the coordinate-wise median and one based on the coordinate-wise trimmed mean, and a median-based robust algorithm that needs only one communication round. The authors prove statistical error rates for three types of population loss functions (strongly convex, non-strongly convex, and non-convex) and establish the algorithms' robustness against Byzantine failures. They verify the optimality of the error rates: under mild conditions, the median-based GD and the trimmed-mean-based GD achieve order-optimal rates for strongly convex losses, and the median-based one-round algorithm achieves the optimal rate for strongly convex quadratic losses, matching the optimal error rate achieved by the robust distributed gradient descent algorithms. ByzRDL converges as long as up to half of the workers are Byzantine.\\ \textit{Weaknesses:} when more than half of the workers are Byzantine, ByzRDL does not converge. \paragraph{\textbf{DGDAlgorithm}} Cao et al. \cite{cao2019distributed} proposed DGDAlgorithm (a robust Distributed Gradient Descent Algorithm) to handle arbitrary Byzantine attackers who falsify data. DGDAlgorithm computes a reference gradient and then tests the data sent by the workers.
In the first step, the algorithm asks the PS to randomly select a small clean dataset. At each iteration $t$, the PS computes a noisy version of the true gradient from a small sample of this dataset. Finally, the noisy gradient is used to filter out unusable information sent by arbitrarily behaving workers: the parameter server accepts a gradient received from a worker only if its difference from the noisy local gradient is within a threshold. Despite the existence of Byzantine workers, the authors prove that the proposed algorithm converges to the optimal value.\\ \textit{Weaknesses:} DGDAlgorithm remains inapplicable to non-convex cost functions and vulnerable to a single point of failure, since the gradients must be collected at the parameter server; moreover, the threshold requires manual tuning. \paragraph{\textbf{Securing-DML}} Su and Xu \cite{su2018securing} propose a secured variant of the classical gradient descent method to tolerate Byzantine workers in distributed/decentralized learning systems. The authors consider the full gradient descent method and a $d$-dimensional model. Each non-faulty worker computes the gradient over its entire local sample; the proposed method then uses the filtering approach of \cite{steinhardt2018resilience} for robust mean estimation to robustly aggregate the gradients reported by the workers in the distributed statistical learning setting. Because the iterations and the associated gradients are highly dependent, a set of Byzantine workers can behave arbitrarily; to manage this unspecified dependence, the authors establish concentration of the sample covariance matrix of the gradients uniformly over all possible model parameters. Under a sub-Gaussian gradients assumption, the authors of \cite{charikar2017learning} derived the same uniform concentration of the sample covariance matrix.
In the simplest example of linear regression, however, the gradients are sub-exponential rather than sub-Gaussian. Therefore, in \cite{su2018securing}, the authors develop a new concentration inequality for sample covariance matrices of sub-exponential distributions, which may be of independent interest. The proposed variant of full gradient descent can tolerate up to a constant fraction of Byzantine workers and, within $O(\log N)$ communication rounds ($N$: total number of data points distributed across the $n$ workers), converges to a statistical estimation error on the order of $O(\sqrt{f/N} + \sqrt{d/N})$.\\ \textit{Weaknesses:} Securing-DML remains inapplicable to non-convex cost functions. \paragraph{\textbf{ByzantinePGD}} Yin et al. \cite{yin2019defending} developed ByzantinePGD (Byzantine Perturbed Gradient Descent) to address the security problems in large-scale distributed learning that arise from the presence of Byzantine machines together with a non-convex loss function. ByzantinePGD builds on the PGD algorithm described in \cite{jin2017escape} and employs the following strategy against Byzantine workers: \begin{enumerate} \item It uses three robust aggregation rules (median \cite{kogler2016efficient}, trimmed mean, and iterative filtering \cite{diakonikolas2019robust, diakonikolas2017being, steinhardt2018resilience}): the authors present the GradAGG and ValueAGG subroutines, which robustly collect gradients and function values from the workers, and formalize their guaranteed accuracy using the terminology of inexact oracles. \item It escapes saddle points by random perturbation over multiple rounds.
\end{enumerate} With this method, the authors prove that ByzantinePGD converges to an approximate local minimizer and escapes saddle points even in the presence of Byzantine machines and a non-convex function.\\ \textit{Weaknesses:} ByzantinePGD is only near-optimal in the high-dimensional setting. \paragraph{\textbf{ByRDiE}} Yang and Bajwa \cite{yang2019byrdie} proposed the Byzantine-resilient distributed coordinate descent (ByRDiE) algorithm to deal with Byzantine failures in decentralized settings. The algorithmic aspects of ByRDiE leverage two separate lines of prior work: first, that in the presence of Byzantine failures an inexact solution is still possible for certain types of scalar-valued distributed optimization problems \cite{su2015fault, su2016fault}; second, that coordinate descent algorithms can break vector-valued optimization problems into a sequence of scalar-valued problems \cite{wright2015coordinate}. This combination constitutes the first main contribution of the work, and the theoretical analysis constitutes the second. In particular, ByRDiE uses coordinate descent to divide the empirical risk minimization (ERM) problem into $P$ one-dimensional sub-problems and then solves each scalar-valued sub-problem with the Byzantine-resilient approach presented in \cite{su2015fault}. The theoretical results guarantee ByRDiE's ability to minimize the statistical risk in a fully distributed (decentralized) environment, and the authors also establish its resilience to Byzantine network failures for distributed convex and non-convex learning tasks.\\ \textit{Weaknesses:} ByRDiE still needs stronger results on almost-sure convergence and the derivation of explicit convergence rates.
Moreover, ByRDiE remains vulnerable to various Byzantine attacks and requires additional theoretical study for constant step sizes and non-convex risk functions. \paragraph{\textbf{SVRG-AdversarialML}} Bulusu et al. \cite{bulusu2019convex} use the stochastic variance reduced gradient (SVRG) optimization method to treat the problem of Byzantine workers in distributed settings. In the convex case, the authors consider $n$ workers and the finite-sum problem and propose a new robust variant of SVRG such that, if the objective function $F$ is convex and an $f$-fraction of the workers is Byzantine, the algorithm: \begin{enumerate} \item finds an optimal point $x$ within $T= \tilde{O}(\frac{1}{\gamma}+\frac{1}{n\gamma^2}+\frac{f^2}{\gamma^2})$ iterations, where $\gamma$ denotes the discount factor; \item guarantees that $\mathbb{E}[F(x)- F(x^*)]$ satisfies the required bound. \end{enumerate} To achieve robustness, each worker averages its intermediate gradients over the iterations, and the median across all workers is used as the aggregation rule \cite{alistarh2018byzantine}. The authors prove convergence of the proposed SVRG variant in the convex case and show its robustness in adversarial settings (adversarial attacks and reduced variance).\\ \textit{Weaknesses:} the SVRG-AdversarialML algorithm remains inapplicable to the non-convex and strongly convex cases. \paragraph{\textbf{DRSL-Byzantine Mirror Descent}} Ding et al. \cite{ding2019distributed} developed a distributed robust learning algorithm based on mirror descent \cite{bubeck2015convex} to solve the distributed statistical learning problem while safeguarding against adversarial workers that share faulty information in a high-dimensional learning system.
Considering that $f$ out of the $n$ workers are Byzantine, DRSL-Byzantine Mirror Descent defends against them for $f \in [0, 1/2)$ using the median aggregation rule. The authors define two population-sum quantities, $B_T^i$ and $A_T^i$, and describe the robust variant of mirror descent in the Byzantine setting as follows: \begin{enumerate} \item For each honest worker $i$, by martingale concentration, $B_T^i$ concentrates at each iteration $t \in [T]$ around the population sum $B_T^\star$ of the gradients $\nabla_t$, with a deviation of at most $\sqrt{T}$. \item Worker machines that are too far from the mean are declared Byzantine. \item Some Byzantine machines can hide behind a small deviation in terms of $B_t^i$. \item Hence, the authors also identify such Byzantine machines through the martingale concentration of $A_T^i$: machines whose $A_T^i$ is too far from the population sum $A_T^\star$ are considered Byzantine. \item Finally, the identified Byzantine workers are removed and the remaining ones are placed in the set $\Omega_{t}$ (the estimated collection of honest machines at iteration $t$). \end{enumerate} Advantages: the proposed method is robust against Byzantine workers for $f \in [0, 1/2)$, and it achieves a statistical error bound of $O(1/\sqrt{nT}+ f/\sqrt{T})$ over $T$ iterations for convex and smooth objective functions. This result holds for a large class of normed spaces and matches the known statistical error for Byzantine stochastic gradient methods in the Euclidean-space setting.
The dimension dependence of the Byzantine mirror descent algorithm enters through bounds that scale with the dual norm of the gradient; in particular, for the probability simplex the bounds depend only logarithmically on the problem dimension $d$.\\ \textit{Weaknesses:} DRSL-Byzantine Mirror Descent is vulnerable to a single point of failure. \paragraph{\textbf{MOZI}} Guo et al. \cite{guo2020towards} proposed MOZI, a new Byzantine fault tolerance (BFT) algorithm for decentralized learning systems. Even with an arbitrary number of faulty nodes, MOZI allows an honest node to train a correct model under powerful Byzantine attacks, thanks to a uniform Byzantine-resilient aggregation rule that, in each training iteration, selects useful parameter updates and filters out harmful ones. MOZI detects Byzantine parameters through a combination of distance-based and performance-based strategies, which serve two goals: the distance-based strategy narrows the scope of candidate nodes to obtain better performance, and the performance-based strategy removes all faulty nodes to achieve robust Byzantine fault tolerance. To handle sophisticated Byzantine attacks that can manipulate the baseline value, each honest node uses its own parameters as the baseline rather than the mean or median of the received, possibly incorrect, parameters. More specifically, in the distance-based selection phase, each honest node selects a candidate pool of potentially honest nodes by comparing the Euclidean distance of each neighbor node's estimate to its own estimate, thereby improving the quality of the selected nodes.
Performance-based strategies in centralized systems rest on the unrealistic assumption that a validation dataset is available at the server; to avoid this, in the performance-based selection phase each honest node tests the chosen parameters using its own training samples. More specifically, each honest node reuses its training sample as validation data to test the performance of each estimate, and obtains the final update value by averaging the selected estimates whose loss values are smaller than that of its own estimate. In their theoretical analysis, the authors show the strong convergence of MOZI, and the experimental results demonstrate its ability, across various system configurations and training tasks, to counter both simple and sophisticated Byzantine attacks with low computational overhead.\\ \textit{Weaknesses:} MOZI is intended only for supervised learning on i.i.d. data. \paragraph{\textbf{Distributed Momentum}} El-Mhamdi et al. \cite{el2020distributed} proved the robustness benefits of using momentum on the workers' side (momentum is generally applied on the server side, and Byzantine-resilient aggregation rules are not linear) to guarantee ``quality gradients.'' First, momentum reduces the variance-to-norm ratio of the gradient estimates at the server and thereby strengthens the Byzantine-resilient aggregation rules; second, its robustness carries over to distributed SGD.
The authors also show that combining momentum (at the worker side) with standard defense mechanisms (KRUM \cite{blanchard2017machine}, MEDIAN \cite{xie2018generalized}, BULYAN \cite{el2018hidden}) can be enough to defend against, and ensure safety under, the two attacks presented in \cite{baruch2019little, xie2020fall}.\\ \textit{Weaknesses:} this work still needs an in-depth analysis of the interaction between the dynamics of the two optimization ingredients (stale gradients and momentum).\\ \item \verb|BFT approaches based on coding schemes|: \paragraph{\textbf{BRDO based Data Encoding}} Data et al. \cite{data2020data} studied distributed optimization in the master-worker architecture in the presence of Byzantine adversaries, focusing on two iterative algorithms: Proximal Gradient Descent (PGD), used in the data-parallel setting, and Coordinate Descent (CD), used in the model-parallel setting; gradient descent (GD) is a special case of both. The proposed algorithm is based on data encoding and decoding: \begin{enumerate} \item For data encoding, the algorithm applies sparse encoding matrices to the data used by the worker nodes. \item At the master node, an efficient decoding scheme using error correction over real numbers \cite{candes2005decoding} has been developed to process the workers' inputs. \end{enumerate} For gradient descent, the developed scheme computes the correct gradient; for coordinate descent, it facilitates the computation at the worker nodes. The proposed encoding scheme extends efficiently to the data streaming model and yields a Byzantine-resilient stochastic gradient descent (SGD). The proposed algorithm can tolerate an optimal number of corrupt worker nodes.
In addition, it assumes no statistical probability distribution on either the data or the Byzantine attack models. It can tolerate up to 1/3 of the worker nodes being corrupt, with only constant overhead in computational and communication complexity and in storage requirements compared to the distributed PGD/CD procedures that offer no security against adversaries.\\ \textit{Weaknesses:} this approach assumes i.i.d. data, so it falls short in federated learning, where encoding the data across the various nodes is impossible. \item \verb|BFT approaches based on Blockchain|: \paragraph{\textbf{TCLearn}} Lugan et al. \cite{lugan2019secure} proposed scalable security architectures, Trusted Coalitions for Distributed Learning (TCLearn). TCLearn uses adequate encryption and Blockchain mechanisms to preserve data privacy, ensure a reliable sequence of iterative learning, and share the learned model fairly among the coalition members. In the proposed approach: \begin{enumerate} \item The coalition members share a CNN model that is optimized in an iterative sequence. \item Each coalition member updates the shared model sequentially with new batches of local data. \item The shared model is validated through a process involving the coalition members and then stored on the Blockchain. \item The Blockchain provides an immutable ledger from which each step of the model's evolution can be retrieved. \item The authors propose a new consensus mechanism, Federated Byzantine Agreement (FBA). The Blockchain used in TCLearn relies on FBA to provide an iterative certification process for each stage of learning the shared model, control the quality of the updated CNN model, and prevent any inadequate training from adding corrupted increments to the model. The proposed FBA thus validates the concatenation of each new block to the chain and controls its integrity.
\end{enumerate} According to the common rules of the coalition, the authors developed three methods corresponding to three distinct levels of trust: \begin{itemize} \item TCLearn-A: a public learned model, in which each coalition member is responsible for ensuring the confidentiality of its own data. \item TCLearn-B: a private learned model, where the coalition members trust each other. \item TCLearn-C: prevents dishonest behavior among coalition members that do not trust each other. \end{itemize} As a main advantage, the encryption system and off-chain storage used by the TCLearn architectures ensure data privacy preservation and protection against degradation.\\ \textit{Weaknesses:} the proposed approach does not address the costs associated with privacy preservation. \paragraph{\textbf{SDPP based-Blockchain System}} Zhao et al. \cite{zhao2019mobile} presented a system combining mobile edge computing, Blockchain technology, and reputation-based crowdsourcing federated learning. The system uses federated learning (FL) to help manufacturers of IoT devices exploit customers' data to predict customers' needs and likely consumption behaviors by building a machine learning model. To deal with potential adversaries that could exploit the learned model during federated training, the authors use differential privacy to keep the customers' data confidential. Finally, they leverage Blockchain technology to enhance the security of model updates. The proposed system has many advantages, such as high computing power, confidentiality and auditability, and distributed storage. It preserves the privacy of customers' data and secures the model updates.
It provides incentives for customers to participate in the crowdsourcing tasks, which helps manufacturers of IoT smart devices improve their businesses by predicting customers' consumption behaviors.\\ \textit{Weaknesses:} this system still requires testing with real-world IoT device manufacturers. \paragraph{\textbf{BlockDeepNet}} Rathore et al. \cite{rathore2019blockdeepnet} presented BlockDeepNet, a decentralized big-data analysis approach that combines Blockchain technology and Deep Learning (DL) to achieve secure collaborative DL in IoT. In the big-data analysis process on modern IoT networks, attackers can maliciously exploit large quantities of data. The proposed BlockDeepNet system minimizes the possibility of data being manipulated as follows: \begin{itemize} \item At the device level, collaborative DL is executed to overcome privacy leakage and obtain enough data for DL. \item To guarantee the confidentiality and integrity of collaborative DL in IoT, the authors use Blockchain technology: local learning models computed at the device level are aggregated through Blockchain transactions at the edge-server level. \item To validate the effectiveness of the approach, the authors develop a prototype of BlockDeepNet in real-time scenarios. \end{itemize} The experimental evaluation demonstrates the robustness, security, and feasibility of big-data analysis in IoT. It suggests that BlockDeepNet mitigates current issues such as the single point of failure, privacy leakage of IoT devices, lack of valuable data for DL, and data poisoning attacks, and that Blockchain operation for DL in IoT achieves higher accuracy with acceptable latency and computational overhead.\\ \textit{Weaknesses:} BlockDeepNet still faces DL's requirement for high computing power. \end{enumerate} \subsubsection{Asynchronous training Byzantine fault tolerance approaches} \paragraph{\textbf{Kardam}} Damaskinos et al.
\cite{damaskinos2018asynchronous} introduced the Kardam SGD algorithm, which tolerates asynchronous Byzantine behavior in ML. The algorithm is based on two essential components: filtering and dampening. Resilience is ensured by the filtering component, which relies on scalar quantities and withstands up to $1/3$ Byzantine workers. This filter acts as a self-stabilizer, leveraging the Lipschitzness of the cost function to protect the SGD model from adversarial attacks. It uses a frequency filter to prevent Byzantine workers from blocking honest workers' updates to the training model, by limiting the number of gradients accepted sequentially from a single worker. The dampening component bounds the convergence rate through a generic gradient-weighting scheme that adjusts to stale information. Despite the presence of Byzantine behaviour, Kardam ensures almost-sure convergence. To the authors' best knowledge, Kardam is the first SGD algorithm that tolerates asynchronous distributed Byzantine behaviour.\\ \textit{Weaknesses:} Kardam imposes a strong restriction: it can only deal with fewer than 1/3 Byzantine workers, a weaker guarantee than the standard setting, where Byzantine workers can make up to half of the workers. In addition, the filtering component can suffer from the frequency filter, which is too harsh for asynchronous schemes. Finally, Kardam pays for its Byzantine resilience with a slowdown. \paragraph{\textbf{DBT-SGD in Era of Big Data}} Jin et al. \cite{jin2019distributed} proposed two asynchronous SGD algorithms that tolerate an arbitrary number of Byzantine workers. In this work, the authors remove the need for a shared parameter server. The honest collaborative workers are assumed to store the local model parameters, which serve as the ground truth.
With this ground truth, the workers can, at any time, fetch and filter the shared model parameters from their co-workers. The first algorithm addresses a known number of Byzantine workers, assuming an upper bound $p$ on their number. To filter out erroneous data and avoid Byzantine attacks, each worker accepts the average of the $n - p - 1$ model parameters closest to its own honest model parameter; a gradient-descent update based on this average is then performed. In the second algorithm, the number of Byzantine workers is unknown. First, based on an evaluation over its local training samples, each worker accepts the model parameters that lead to the lowest empirical risk. Then, to perform the gradient-descent update step, the worker averages over the accepted model parameters. As main advantages, the proposed algorithms work well against all types of Byzantine attacks examined and provably converge.\\ \textit{Weaknesses:} to decide whether to accept a piece of shared information, the proposed algorithms only take the current shared information into account and do not use past information. \paragraph{\textbf{Zeno++}} Xie et al. \cite{xie2020zeno++} proposed Zeno++, a new robust, fully asynchronous procedure based on Zeno, their Byzantine-tolerant synchronous SGD algorithm \cite{xie2018zeno}; the authors aim to tolerate Byzantine failures of anonymous workers (with a potentially unbounded number of Byzantine workers). The idea centers on the training process: Zeno++ estimates the descent of the loss that would result from applying a candidate gradient to the model's parameters, scores the received gradients accordingly, and accepts them based on the score. The authors also propose lazy updates for efficient computation.
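The scoring step can be sketched as follows; the quadratic toy loss and the parameter names (\verb|rho| for the gradient penalty, \verb|eps| for the acceptance slack) are our illustrative assumptions, not the paper's exact notation.

```python
import numpy as np

def zeno_score(g, w, loss, lr, rho):
    """Estimated descent of a (server-side) loss if gradient g were applied,
    minus a penalty on the gradient's squared norm."""
    return loss(w) - loss(w - lr * g) - rho * float(np.dot(g, g))

def accept(g, w, loss, lr, rho, eps):
    """Accept a candidate gradient when its score clears the threshold."""
    return zeno_score(g, w, loss, lr, rho) >= -lr * eps

# toy server-side validation loss (an assumption for this sketch)
loss = lambda w: 0.5 * float(np.dot(w, w))
w = np.array([2.0, 0.0])
honest = w.copy()      # the true gradient of the toy loss at w
byzantine = -w         # an ascent direction submitted by an attacker

ok_honest = accept(honest, w, loss, lr=0.1, rho=0.01, eps=0.0)
ok_byz = accept(byzantine, w, loss, lr=0.1, rho=0.01, eps=0.0)
```

The honest gradient decreases the toy loss and is accepted, while the adversarial ascent direction scores negatively and is rejected.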
The main advantage of this work is its convergence for non-convex problems, and the empirical results show that Zeno++ outperforms previous approaches.\\ \textit{Weaknesses:} Zeno++ does not apply in all contexts, such as federated learning. \paragraph{\textbf{BASGD}} Yang and Li \cite{yang2020basgd} proposed a new method for Byzantine Learning (BL), called Buffered Asynchronous Stochastic Gradient Descent (BASGD). In asynchronous BL, the server stores no training instances and receives one gradient at a time, so it is not easy to judge whether a received gradient can be trusted. BASGD addresses this problem with $B$ buffers ($0 < B \leq n$) introduced on the server and used in the gradient-aggregation and parameter-update phases. The BASGD method is based on two components: a \textit{buffer} component and an \textit{aggregation function}. In the buffer component, the training process of BASGD is the same as that of ASGD; only the update rule on the server is modified, as follows: \begin{enumerate} \item $B$ buffers ($0 < B \leq n$) are introduced on the server. \item The parameters are not updated immediately when a gradient $g$ from worker $s$ arrives at the server. \item Each gradient is temporarily stored in buffer $b$, where $b = s \bmod B$. \item A new SGD step is executed only when all buffers have been modified since the last SGD step. \item Between two iterations, each buffer $b$ may receive more than one gradient. \item Each buffer stores the average of the gradients it has received. \item An update rule applies to each buffer $b$. \item After the update step, all buffers are reset at the same time. \end{enumerate} In the aggregation-function component, during the parameter-update phase the server has access to the $B$ candidate gradients of the buffers, which can be aggregated into a reliable gradient by a chosen aggregation function.
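The buffer mechanism enumerated above can be sketched as follows; the class and parameter names are ours, and the coordinate-wise median merely stands in for the chosen aggregation function.

```python
import numpy as np

class BufferedServer:
    """Minimal sketch of BASGD's buffer component (names are ours)."""

    def __init__(self, dim, B):
        self.B = B
        self.sums = np.zeros((B, dim))
        self.counts = np.zeros(B, dtype=int)

    def receive(self, worker_id, grad):
        b = worker_id % self.B           # gradient of worker s goes to buffer s mod B
        self.sums[b] += grad
        self.counts[b] += 1

    def ready(self):
        # an SGD step runs only once every buffer was touched since the last step
        return np.all(self.counts > 0)

    def step_gradients(self):
        # each buffer contributes the average of the gradients it received;
        # the B candidates are combined by a chosen aggregation function
        # (coordinate-wise median here, as one robust choice)
        candidates = self.sums / self.counts[:, None]
        agg = np.median(candidates, axis=0)
        self.sums[:] = 0.0               # all buffers are reset together
        self.counts[:] = 0
        return agg

server = BufferedServer(dim=2, B=3)
gradients = {0: [1.0, 0.0], 1: [1.0, 0.0], 2: [1.0, 0.0],
             3: [1.0, 0.0], 4: [1.0, 0.0], 5: [9.0, 9.0]}  # worker 5 is faulty
for wid, g in gradients.items():
    server.receive(wid, np.array(g))
agg = server.step_gradients() if server.ready() else None
```

Even though the faulty gradient contaminates one buffer's average, the median over the $B$ candidates still returns the honest direction.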
Rather than taking the simple average of all candidate gradients, the chosen aggregation function must satisfy the authors' q-Byzantine Robust (q-BR) condition\footnote{For an aggregation function, the (q-BR) condition quantitatively captures its Byzantine resilience.}. As advantages, BASGD is more general than SBL methods and can preserve ABL's privacy, because the server stores no training instances, and it resists a range of attackers. It also has the same theoretical convergence rate as vanilla asynchronous SGD (ASGD). BASGD can significantly outperform vanilla ASGD and other ABL baselines when errors or attacks from workers exist.\\ \textit{Weaknesses:} the authors assume a bounded number of Byzantine workers to establish BASGD's convergence and resilience against attacks or errors. \paragraph{\textbf{ByzSGD}} El-Mhamdi et al. \cite{el2020genuinely} studied Byzantine-resilient distributed machine learning in a decentralized architecture. They present ByzSGD, which aims to tolerate Byzantine failures on both sides, servers (several replicas of the parameter server) and workers, in an asynchronous setting. To improve the algorithm's performance, the authors also take advantage of the synchronous setting to reduce the number of messages ByzSGD communicates. The proposed ByzSGD algorithm is based on three significant schemes: \begin{enumerate} \item Scatter/Gather scheme: this scheme limits the maximum drift among the models of correct servers; drift arises because the servers do not communicate during the scatter phase, while in the gather phase the honest servers communicate and jointly apply a Distributed Median-based Contraction (DMC) module. \item Distributed Median Contraction (DMC): in high-dimensional settings, DMC leverages the geometric properties of the median to bring the honest servers' parameters closer to each other and achieve learning convergence.
However, ByzSGD still requires many communication messages, which are reduced by using the synchronous setting. \item Minimum–Diameter Averaging (MDA): to tolerate Byzantine workers, ByzSGD uses MDA as a statistically robust gradient aggregation rule (GAR). \end{enumerate} As advantages: \begin{itemize} \item ByzSGD optimally resolves the problem of Byzantine parameter servers (1/3) and Byzantine workers (1/3) in the asynchronous framework. \item It guarantees Byzantine resilience without adding communication rounds, unlike vanilla non-Byzantine alternatives. \item It uses the synchronous setting to reduce the number of communicated messages. \end{itemize} \textit{Weaknesses:} ByzSGD does not yet apply to fully decentralized settings and non-i.i.d. data. \paragraph{\textbf{ColLearning Agreement}} El-Mhamdi et al. \cite{el2020collaborative} address Byzantine collaborative learning without trusting any node, considering genuine Byzantine resilience. This approach also assumes asynchronous and heterogeneous settings and considers the general case of non-i.i.d. data. It is based on fully decentralized settings, the more general non-convex optimization case, and standard stochastic gradient descent as the applied scheme. As the authors put it, collaborative learning is equivalent to a new form of agreement called averaging agreement: each node begins with an initial vector and seeks to agree approximately on a common vector, with the guarantee that this common vector stays within an averaging constant of the maximum distance between the initial vectors. The authors present three asynchronous algorithms for averaging agreement, as follows: \begin{enumerate} \item The first solution, Minimum–Diameter Averaging (MDA), requires $n \ge 6f + 1$ and achieves asymptotically the best possible averaging constant, based on the minimum-volume ellipsoid.
Its filtering scheme ensures that no Byzantine input among the selected vectors exchanged among workers can be arbitrarily bad. \item The second solution, Broadcast-based Averaging Agreement, achieves optimal Byzantine resilience ($n \ge 3f + 1$) based on reliable broadcast, despite its need for cryptographic signatures and many communication rounds. \item The third solution, Iterated Coordinate-wise Trimmed Mean (ICwTM), does not rely on signatures, unlike the second one. Among standard-form algorithms that do not use signatures, it is faster and reaches optimal Byzantine resilience ($n \ge 4f + 1$) based on the coordinate-wise trimmed mean. \end{enumerate} Although the proposed approach considers the asynchronous setting, the authors sometimes discuss the synchronous setting, where they would assume bounds on the communication delays between truthful workers and on their relative speeds. The Byzantine-resilient averaging-agreement algorithm achieves quasi-linear complexity.\\ \textit{Weaknesses:} in ColLearning Agreement, an asymptotically optimal averaging constant remains open; the work does not study the trade-off between the achievable averaging constant and the number of tolerable Byzantine nodes, and it applies no privacy-preservation techniques. \subsubsection{Partially asynchronous training Byzantine fault tolerance approaches} \paragraph{\textbf{Norm Filtering and Norm-cap Filtering}} To solve the Byzantine fault tolerance problem in distributed linear regression in a multi-agent system, Gupta and Vaidya \cite{gupta2019byzantine_a} propose algorithms based on deterministic gradient descent and generalize the problem to a class of distributed optimization problems. The authors consider a system composed of a server and several agents (workers), some of which may be faulty.
Each agent holds a fixed set of data points and responses. The server's goal is to identify the faulty Byzantine agents, and the authors offer two norm-based filtering techniques to this end (\textit{Norm Filtering} and \textit{Norm-cap Filtering}). In a deterministic way, the proposed techniques strengthen the original distributed gradient-descent algorithm for the regression problem when the number of faulty agents $f$ out of $n$ is below specified threshold values. The first algorithm is \textit{Gradient Descent with Norm Filtering}; it runs in two simple steps, where the server starts with an arbitrary estimate of the parameter and iteratively updates it. \textit{\textbf{Step 1}}: at the current parameter estimate, the server collects the gradients of all agents' costs and sorts them in ascending order of their $2$-norms (breaking ties arbitrarily). \textit{\textbf{Step 2}}: after filtering out the gradients with the $f$ largest $2$-norms, the server uses the sum (vector) of the remaining gradients as the update direction. Hence, the resulting filtering scheme is called norm filtering. The second algorithm is \textit{Gradient Descent with Norm-Cap Filtering}; it is similar to the first, differing in the second step: instead of eliminating the $f$ largest gradients, the server caps their norms at the norm of the $(f + 1)$-th largest reported gradient. Hence, the filtering scheme is referred to as norm-cap filtering. The proposed algorithms have several advantages: the system may be partially asynchronous, and no assumption is needed on the probability distribution of the data points.
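The two filtering rules can be sketched as follows, assuming the server has already collected one gradient per agent (a minimal sketch; the function names and toy gradients are ours).

```python
import numpy as np

def norm_filtering(grads, f):
    """Sort by 2-norm, discard the f largest, and sum the rest."""
    norms = np.linalg.norm(grads, axis=1)
    keep = np.argsort(norms)[: len(grads) - f]   # ascending; drops f largest
    return grads[keep].sum(axis=0)

def norm_cap_filtering(grads, f):
    """Scale the f largest gradients down to the (f+1)-th largest norm, then sum."""
    norms = np.linalg.norm(grads, axis=1)
    order = np.argsort(norms)[::-1]              # indices by descending norm
    cap = norms[order[f]]                        # the (f+1)-th largest 2-norm
    scaled = np.array(grads, dtype=float)
    for i in order[:f]:
        if norms[i] > 0:
            scaled[i] *= cap / norms[i]
    return scaled.sum(axis=0)

grads = np.array([[1.0, 0.0], [1.1, 0.0], [0.9, 0.0], [100.0, 0.0]])  # last is faulty
d1 = norm_filtering(grads, f=1)      # the faulty gradient is dropped
d2 = norm_cap_filtering(grads, f=1)  # the faulty gradient is shrunk to norm 1.1
```

Either way, the update direction stays close to the sum of the honest gradients instead of being dominated by the faulty one.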
Moreover, the computational overhead of handling Byzantine faulty agents is log-linear in the number of agents and linear in the dimension of the data points.\\ \textit{Weaknesses:} only the deterministic gradient-descent method is considered in the proposed approach. \section{Discussion} \label{Disc} This is the first survey of the literature on BFT in DML. The results of our analysis show that the proposed approaches to dealing with Byzantine faults in DML address several challenging problems, while some drawbacks remain. Analysis and evaluation draw on more than 40 methods, and Fig. 11 shows the prevalence of the synchronous (Synch) training process, at 84\%, over the asynchronous (Asynch) training process, at 14\%, and the partially asynchronous (PAsynch) training process, at 2\%\footnote{We analyzed 43 papers in total, divided into synchronous (36 papers), asynchronous (6 papers), and partially asynchronous (1 paper); applying the rule of three and rounding to the nearest unit yields the ratios (84\%, 14\%, and 2\%).}. Despite the idle time lost waiting for slower workers in synchronous training, several methods use this type of training to solve the problem of Byzantine workers in the distributed machine learning training process, under two settings: centralized and decentralized.
\begin{figure}[H] \centering \subfloat[][]{\resizebox{0.4\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, enlargelimits=0.14, ylabel={Number of analyzed approaches}, symbolic x coords={Synch,Asynch,PAsynch}, xtick=data, nodes near coords, nodes near coords align={vertical}, xtick pos=left, ytick pos=left ] \addplot coordinates {(Synch,36) (Asynch,6) (PAsynch,1)}; \end{axis} \end{tikzpicture} }} \subfloat[][]{\resizebox{0.5\textwidth}{!}{ \begin{tikzpicture}[nodes = {font=\sffamily}] \def\rot{0} \foreach \percent/\name in { 84/Synchronous, 14/Asynchronous, 2/Partially Asynchronous } { \ifx\percent\empty\else \global\advance\cyclecount by 1 \global\advance\ind by 1 \ifnum3<\cyclecount \global\cyclecount=0 \global\ind=0 \fi \pgfmathparse{{"teal!65","blue!40","teal!20","cyan!40"}[\the\ind]} \edef\color{\pgfmathresult} \draw[fill={\color},draw={\color}] (0,0) -- (\rot:3) arc (\rot:\rot+\percent*3.6:3) -- cycle; \node at (\rot+0.5*\percent*3.6:0.7*3) {\percent\,\%}; \node[pin=\rot+0.5*\percent*3.6:\name] at (\rot+0.5*\percent*3.6:3) {}; \pgfmathparse{\rot+\percent*3.6} \xdef\rot{\pgfmathresult} \fi }; \end{tikzpicture} }} \caption{(a) Applicability count; (b) applicability percentage of Byzantine fault tolerance training processes.} \label{percent_BFT_training_processes} \end{figure} Most centralized synchronous methods rely heavily on the assumption that the non-Byzantine workers form the majority of the group; under this assumption, such algorithms can eliminate the outliers from the candidates. The methods executed under centralized synchronous Byzantine fault tolerance remain relatively simple and easy to implement, but they still suffer from the fact that there is no guarantee that the number of Byzantine workers stays bounded during real-world attacks.
For example, when Byzantine workers outnumber half of the existing workers, the proposed filtering schemes, such as Krum and Multi-Krum \cite{blanchard2017byzantine,blanchard2017machine}, DSML-Adversarial \cite{chen2017distributed}, ByzRDL \cite{yin2018byzantine}, ByzantineSGD \cite{alistarh2018byzantine}, and GBT-SGD \cite{xie2018generalized}, become ineffective. We note, however, that the synchronous SGD method Zeno \cite{xie2018zeno} is designed for an unbounded number of Byzantine workers. Moreover, several of the analyzed filtering schemes \cite{blanchard2017byzantine, blanchard2017machine, yin2018byzantine, chen2017distributed, gupta2019byzantine_b, mhamdi2019fast} need redundant data points to achieve exact fault tolerance, which may increase data storage and processing requirements. A generalized Byzantine failure model without this constraint is realized for the first time in synchronous SGD by two filtering-scheme methods (GBT-SGD and Phocas \cite{xie2018phocas}). However, filtering schemes in synchronous training still suffer from other problems, such as the single point of failure, which shows clearly in DGDAlgorithm \cite{cao2019distributed}. DGDAlgorithm is a heuristic mechanism that computes a noisy version of the true gradient, assuming the parameter server holds a small part of the dataset locally. DGDAlgorithm still requires a parameter server to collect gradients, which can be vulnerable to a single point of failure. Furthermore, when comparing the existing filtering schemes, such as \cite{alistarh2018byzantine, blanchard2017byzantine, chen2017distributed}, the Gradient Filter CGC \cite{gupta2019byzantine_b} achieves comparable fault tolerance. The Gradient Filter CGC also makes no assumption on the probability distribution of the data points, unlike ByzRDL, and it achieves its fault tolerance without increasing the workers' computational workload.
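As an illustration of how such filtering schemes discard outliers, the selection rule at the core of Krum \cite{blanchard2017machine} can be sketched as follows (a minimal sketch; the toy gradients are ours).

```python
import numpy as np

def krum(grads, f):
    """Krum: score each gradient by the summed squared distances to its
    n - f - 2 nearest peers and return the gradient with the smallest score."""
    n = len(grads)
    # pairwise squared Euclidean distances between all reported gradients
    d2 = np.sum((grads[:, None, :] - grads[None, :, :]) ** 2, axis=-1)
    scores = [np.sort(np.delete(d2[i], i))[: n - f - 2].sum() for i in range(n)]
    return grads[int(np.argmin(scores))]

honest = [np.array([1.0, 1.0]) + 0.01 * k for k in range(4)]
byzantine = [np.array([50.0, -50.0])]
grads = np.stack(honest + byzantine)
chosen = krum(grads, f=1)   # an honest gradient is selected
```

Because the outlier is far from every honest gradient, its score is large and it is never selected; this is exactly why the rule breaks down once Byzantine gradients form a majority.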
Most existing filtering schemes for tolerating Byzantine faults use robust aggregation rules, such as the median and the geometric median, in place of (or before) simple averaging. In the real world, geometric-median-based methods have the best accuracy, whereas coordinate-wise-median-based methods are usually worse in practice; LICM-SGD \cite{yang2019byzantine} also falls into the coordinate-wise-median family. In addition, some works focus on detecting and removing adversaries, and others on detecting and rejecting outliers during gradient aggregation, based on the gradients passed from the workers to the server; by contrast, SIGNSGD \cite{bernstein2018signsgd} and Stochastic-Sign SGD \cite{jin2020stochastic} are based on the sign of the gradient vectors and the stochastic sign of the gradient vectors, respectively. These methods use majority voting as a natural way to protect against less harmful defects and ensure convergence towards critical points, without any guarantee on their quality. Moreover, these methods require the data stored at the workers to be i.i.d., which is impractical in the federated learning setting. RSA \cite{li2019rsa}, by contrast, does not rely on the i.i.d. assumption and uses model aggregation to find a consensus model, integrating a regularization term into the objective function; however, it needs strong convexity. RSA's model-aggregation assumption may incur high communication costs, in contrast to Stochastic-Sign SGD, which preserves communication efficiency, can deal with heterogeneous data distributions, and guarantees Byzantine resilience. Filtering Schemes (FSs) represent most of the existing methods, at 81\%, compared to Coding Schemes (CSs) at 9\%, as shown in Fig. 12.
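The two robust aggregation rules contrasted above can be sketched as follows; the Weiszfeld iteration is one standard way to approximate the geometric median, and the toy gradients are ours.

```python
import numpy as np

def coordinate_wise_median(grads):
    """Median taken independently in every coordinate."""
    return np.median(grads, axis=0)

def geometric_median(grads, iters=100, eps=1e-8):
    """Weiszfeld iteration: iteratively re-weight points by inverse distance."""
    z = grads.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(grads - z, axis=1)
        w = 1.0 / np.maximum(d, eps)        # guard against division by zero
        z_new = (w[:, None] * grads).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z

grads = np.array([[1.0, 0.0], [1.1, 0.1], [0.9, -0.1], [100.0, 100.0]])  # one outlier
gm = geometric_median(grads)
cwm = coordinate_wise_median(grads)
mean = grads.mean(axis=0)
```

Both robust rules stay near the honest cluster around $[1, 0]$, while the plain mean is dragged far away by the single outlier.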
\begin{figure}[h] \centering \subfloat[][]{\resizebox{0.4\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, enlargelimits=0.14, ylabel={Number of analyzed approaches}, symbolic x coords={FSs,CSs,Blockchain}, xtick=data, nodes near coords, nodes near coords align={vertical}, xtick pos=left, ytick pos=left ] \addplot coordinates {(FSs,35) (CSs,4) (Blockchain,5)}; \end{axis} \end{tikzpicture} }} \subfloat[][]{\resizebox{0.5\textwidth}{!}{ \begin{tikzpicture}[nodes = {font=\sffamily}] \def\rot{0} \foreach \percent/\name in { 81/Filtering Schemes, 12/Blockchain, 9/Coding Schemes } { \ifx\percent\empty\else \global\advance\cyclecount by 1 \global\advance\ind by 1 \ifnum3<\cyclecount \global\cyclecount=0 \global\ind=0 \fi \pgfmathparse{{"teal!65","blue!40","teal!20","cyan!40"}[\the\ind]} \edef\color{\pgfmathresult} \draw[fill={\color},draw={\color}] (0,0) -- (\rot:3) arc (\rot:\rot+\percent*3.6:3) -- cycle; \node at (\rot+0.5*\percent*3.6:0.7*3) {\percent\,\%}; \node[pin=\rot+0.5*\percent*3.6:\name] at (\rot+0.5*\percent*3.6:3) {}; \pgfmathparse{\rot+\percent*3.6} \xdef\rot{\pgfmathresult} \fi }; \end{tikzpicture} }} \caption{(a) Applicability count; (b) applicability percentage of Byzantine fault tolerance schemes.} \label{percent_BFT_schemes} \end{figure} DRACO \cite{chen2018draco} is a coding scheme that guarantees correct gradient recovery by increasing the workers' computational workload. Unlike Bulyan \cite{el2018hidden}, which requires heavy computational overhead on the PS side with $4f + 3$ workers, DRACO requires only $2f + 1$ workers, making it strongly Byzantine-resilient. However, DRACO's computation time may be several times larger than that of ordinary SGD because of the encoding and decoding time; the method proposed in AGGREGATHOR \cite{damaskinos2019aggregathor}, with its redundant calculations, can avoid this overhead.
In addition, the storage redundancy factor required by DRACO increases linearly with the number of corrupted worker nodes, unlike that of BRDO-based data encoding \cite{data2020data}, which is constant and results in low computation at the worker-node level. The authors of DETOX \cite{rajput2019detox} improve the resiliency and overhead guarantees by combining a deterministic coding scheme with a robust aggregation rule; they also show that DRACO's computational efficiency can be enhanced, at the expense of exact fault tolerance, by using gradient filters. The coding-scheme-based methods above (DRACO, BRDO-based data encoding, DETOX) are close to each other in that they use similar deterministic coding schemes. However, BRDO-based data encoding has a $2n/(n-2f)$ storage redundancy factor, in contrast to DRACO, whose $2f + 1$ storage redundancy factor increases linearly with the number of corrupt workers. As far as we know, RRR-BFT \cite{gupta2019randomized} is the first proposed method that tolerates Byzantine workers in the context of parallel learning by using the idea of reactive redundancy. In addition, RRR-BFT compares the computational efficiency of its deterministic coding scheme to DRACO and enhances it using a randomization technique. Furthermore, in the synchronous setting, the server can improve computational efficiency by combining a gradient-filter method with the randomized coding scheme of RRR-BFT or the deterministic coding scheme of DETOX, respectively. However, according to the authors, when the server employs the gradient filter, it cannot identify all of the existing Byzantine workers. Among coding schemes, most current methods are based on deterministic coding schemes, such as DRACO, DETOX, and BRDO data encoding, while RRR-BFT uses both types of coding schemes (deterministic and randomized). Most of the proposed centralized synchronous approaches are based on the SGD first-order optimization method.
However, other publications, such as Distributed Momentum \cite{el2020distributed}, DRSL-Byzantine Mirror Descent \cite{ding2019distributed}, and SVRG-AdversarialML \cite{bulusu2019convex}, explore Momentum, Mirror Descent, and SVRG, respectively. Momentum reduces gradient variance at the server and reinforces Byzantine-resilient aggregation rules. Mirror Descent is used to secure training data against faulty information sharing. In SVRG-AdversarialML, the authors benefit from the advantages of SVRG over SGD, such as variance reduction, and propose the first SVRG-based method to fight Byzantine adversaries in the distributed setting. The main drawback of the discussed synchronous centralized methods is that they preserve training convergence and correctness at the expense of useful information. These methods also require certain assumptions that are not easy to satisfy in the federated learning paradigm, such as the need to relocate data points in DRACO. Additionally, the i.i.d. assumption does not hold in federated learning, where the computing units are heterogeneous, and generalizing the existing algorithms to the non-i.i.d. setting is difficult. Furthermore, the presence of heterogeneous workers inevitably slows down convergence. Finally, synchronization with the offline workers of federated learning and edge computing cannot be achieved most of the time. On the other hand, decentralized synchronous methods are represented by multi-parameter-server methods (GuanYu \cite{el2019sgd}, DSML-Adversarial, and Securing-DML \cite{su2018securing}), serverless methods (ByRDiE \cite{yang2019byrdie}, BRIDGE \cite{yang2019bridge}, and MOZI \cite{guo2020towards}), and Blockchain-based methods (DeepChain \cite{weng2019deepchain}, LearningChain \cite{chen2018machine}, TCLearn \cite{lugan2019secure}, the SDPP based-Blockchain System \cite{zhao2019mobile}, and BlockDeepNet \cite{rathore2019blockdeepnet}).
For DSML-Adversarial, the authors assume in their article, to simplify the exposition, that there is only one parameter server, which is why we discussed DSML-Adversarial with the centralized synchronous methods above; their algorithm description and detailed analysis nonetheless show that the method applies to multiple parameter servers. GuanYu and Securing-DML assume the same decentralized setting as DSML-Adversarial with multiple parameter servers. In addition, the parameter-server architecture used in the centralized setting can be vulnerable to a single point of failure, as in DGDAlgorithm; eliminating the parameter server is therefore necessary to avoid this issue and obtain a fully decentralized setting. In ByRDiE and BRIDGE, the authors assume a fully decentralized setting: ByRDiE applies the trimmed-median rule to the coordinate-descent optimization algorithm, while BRIDGE applies it to the SGD algorithm. BRIDGE has a better communication cost than ByRDiE, yet both remain vulnerable to some Byzantine attacks, in contrast to MOZI, which deals with both simple and sophisticated Byzantine attacks with low computation overhead. Likewise, we surveyed several decentralized methods (DeepChain, LearningChain, TCLearn, the SDPP based-Blockchain System, BlockDeepNet) based on Blockchain technology. These methods mitigate challenges of previous works, such as poisoning attacks, single points of failure, data privacy, and heterogeneous workers as in federated learning. Most Blockchain-based methods achieve collaborative training and improve security, privacy, and auditability. From Fig. 12, we notice that 12\% of the methods use Blockchain technology, which shows that researchers still use it sparingly compared to filtering schemes.
In the synchronous setting, the authors can assume that the network is perfect and delivers messages within a bounded time, which keeps implementations relatively simple; hence most of the proposed approaches consider the synchronous setting (84\%), compared to 14\% for asynchronous approaches. Asynchronous settings are more practical and realistic: the network cannot be assumed to be perfect, and implementation is more complex. Asynchronous Byzantine learning is also more general than synchronous Byzantine learning, which motivates authors to redirect research to the asynchronous setting. Moreover, as presented above, even when the workers in a synchronous setting have good computing capacity, it is necessary to wait for straggling workers, which wastes computing resources. As a result, asynchronous Byzantine learning is more convenient than synchronous Byzantine learning, another reason authors choose to propose solutions to Byzantine learning problems in the asynchronous framework. Kardam \cite{damaskinos2018asynchronous} is the first method that tolerates Byzantine behavior in the asynchronous distributed SGD algorithm. Kardam filters out outliers using a filtering-scheme method different from those used in the synchronous approaches: its filtering scheme is based on a Lipschitz filter and a frequency filter. However, Kardam cannot resist malicious attacks because of its weak threat-model assumptions. The second method to deal with Byzantine attackers in asynchronous distributed SGD is proposed in DBT-SGD in Era of Big Data \cite{jin2019distributed}. The authors \cite{jin2019distributed} propose two different algorithms in a decentralized setting; they avoid the single point of failure caused by the parameter server and rely on collaborative training among the set of workers.
DBT-SGD in Era of Big Data is, to the best of our knowledge, the first paper to address Byzantine problems in the asynchronous decentralized setting; however, it can also be implemented in centralized settings, where it may be vulnerable to a single point of failure. Another work based on the asynchronous setting is Zeno++ \cite{xie2020zeno++}. The authors of Zeno++ assume a stronger threat model than Kardam's, and the method has several advantages over Kardam, which requires a majority of honest workers and bounded staleness of the workers, and must verify the identities of the workers whenever the server receives gradients. However, Zeno++ stores the training data on the server for scoring, which increases the risk of confidentiality leakage. BASGD \cite{yang2020basgd} uses two important components (a buffer and an aggregation function) and proves its ability to resist both errors and malicious attacks, in contrast to Kardam. Unlike Zeno++, it avoids storing data on the server and preserves privacy in distributed learning. On the other hand, ByzSGD \cite{el2020genuinely} and ColLearning Agreement \cite{el2020collaborative} are, together with DBT-SGD in Era of Big Data, the only approaches we know of in the decentralized asynchronous setting. ByzSGD considers the decentralized parameter-server setting and extends the efforts made by the centralized synchronous algorithms, tolerating Byzantine servers in addition to Byzantine workers, whereas ColLearning Agreement considers the fully decentralized setting, like ByRDiE, BRIDGE, and MOZI. However, these methods suppose that the data is i.i.d. and consider only convex optimization. In genuinely Byzantine environments, Byzantine workers can influence the correct models to drift away from each other; in comparison, MOZI assumes that the models of good workers do not drift from each other, which is impractical.
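The buffer-plus-aggregation design can be sketched as follows. This is our own toy illustration, not BASGD's exact algorithm: the `BufferedServer` class and the assignment of workers to buffers by id are our assumptions. Asynchronous gradients accumulate in $B$ buffers, and aggregation with a coordinate-wise median happens only once every buffer is non-empty.

```python
import numpy as np

class BufferedServer:
    """Toy sketch of buffered aggregation: each incoming asynchronous gradient
    is stored in one of B buffers (here chosen by worker id); once every buffer
    is non-empty, buffer means are combined with a coordinate-wise median."""

    def __init__(self, num_buffers):
        self.buffers = [[] for _ in range(num_buffers)]

    def receive(self, worker_id, gradient):
        g = np.asarray(gradient, dtype=float)
        self.buffers[worker_id % len(self.buffers)].append(g)
        if all(self.buffers):                      # every buffer has a gradient
            means = [np.mean(buf, axis=0) for buf in self.buffers]
            self.buffers = [[] for _ in self.buffers]
            return np.median(means, axis=0)        # robust aggregate
        return None                                # not ready: keep waiting

server = BufferedServer(num_buffers=3)
server.receive(0, [1.0])                           # buffered, returns None
server.receive(1, [2.0])                           # buffered, returns None
print(server.receive(2, [1000.0]))                 # median of buffer means
```

Because the median is taken over buffer means rather than raw updates, a single malicious gradient (here the $1000$) cannot dominate the aggregate, while the server never needs to hold any training data.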
Moreover, despite the two hypotheses of ByzSGD (a decentralized parameter-server configuration and genuinely Byzantine adversaries), together with the hypothesis that the workers' data are homogeneous (i.i.d.), this solution is not collaborative like ColLearning Agreement. To the best of our knowledge, the partially asynchronous Byzantine fault tolerance setting is used by only one work \cite{gupta2019byzantine_a}. It takes advantage of this setting in terms of robustness to bounded delays and achieves log-linear and linear computational overheads. The distribution of the Byzantine fault tolerance training processes and settings across the discussed methods is provided in Fig. 13. Although Byzantine fault tolerance methods are widely used, the continued advancement of distributed machine learning, particularly federated learning, and its combination with other techniques such as IoT or fog computing, increase the complexity of systems as well as the probability of failure, broadening the scope of research on Byzantine fault tolerance in distributed machine learning. Byzantine Fault Tolerance (BFT) is a field of study that attempts to preserve a system's continuity and dependability under arbitrary behaviour, which makes it extremely important: it keeps the system operating even if certain components fail. Many machine learning algorithms have an objective function that expresses their goal. Commonly, the learning algorithm minimizes the objective function to optimize the model, for example a low error when classifying customer satisfaction scores in e-commerce as positive or negative, or incoming mail messages as legitimate or spam. The training process terminates when a near-optimal/optimal solution is identified, or when the model has converged. Byzantine refers to the malicious nodes (workers/servers) and the gradients they send or update.
For example, Byzantine workers can lead the objective function to converge to ineffective models and preclude the distributed algorithm from converging to a satisfactory state, i.e. a state that achieves an accuracy in the presence of Byzantine workers equivalent to the one reached without any Byzantine workers. Several BFT solutions in DML achieve near-optimal/optimal or fast convergence, nearly linear time complexity, and the same performance and accuracy as in the non-Byzantine case. \begin{figure}[H] \centering \subfloat[][]{\resizebox{0.4\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ ybar, enlargelimits=0.14, legend style={at={(0.5,-0.18)}, anchor=north,legend columns=-1}, ylabel={Number of analyzed approaches}, symbolic x coords={Synch,Asynch,PAsynch}, xtick=data, nodes near coords, nodes near coords align={vertical}, xtick pos=left, ytick pos=left ] \addplot coordinates {(Synch,25) (Asynch,3) (PAsynch,1)}; \addplot coordinates {(Synch,11) (Asynch,3) (PAsynch,0)}; \legend{Centralized,Decentralized} \end{axis} \end{tikzpicture} }} \subfloat[][]{\resizebox{0.5\textwidth}{!}{ \begin{tikzpicture}[nodes = {font=\sffamily}] \def\rot{0} \foreach \percent/\name in { 58/Synch Centralized, 26/Synch Decentralized, 7/Asynch Centralized, 7/Asynch Decentralized, 2/PAsynch Centralized, } { \ifx\percent\empty\else \global\advance\cyclecount by 1 \global\advance\ind by 1 \ifnum3<\cyclecount \global\cyclecount=0 \global\ind=0 \fi \pgfmathparse{{"teal!65","blue!40","teal!20","cyan!40"}[\the\ind]} \edef\color{\pgfmathresult} \draw[fill={\color},draw={\color}] (0,0) -- (\rot:3) arc (\rot:\rot+\percent*3.6:3) -- cycle; \node at (\rot+0.5*\percent*3.6:0.7*3) {\percent\,\%}; \node[pin=\rot+0.5*\percent*3.6:\name] at (\rot+0.5*\percent*3.6:3) {}; \pgfmathparse{\rot+\percent*3.6} \xdef\rot{\pgfmathresult} \fi }; \end{tikzpicture} }} \caption{(a) Applicability count (b) Applicability percentage of Byzantine fault tolerance training processes and settings.}
\label{percent_BFT_training_process_and_settings} \end{figure} A summary of the analyzed methods under several metrics (type of scheme, type of optimization method, centralized and decentralized settings) is given in Table 4, Table 5, Table 6, and Table 7. We point out that ($ \bullet $) means applicable, (N.A) means not applicable, and ($-$) indicates that, to the best of our knowledge, the information is not supplied in the original resource or other studies. \begin{table} [H] \caption{Summarizing different Synchronous Byzantine fault tolerance approaches.} \label{tab:SynBFT} \begin{minipage}{\columnwidth} \begin{center} \begin{tabular}{p{4 cm} c c c c p{3 cm}} \toprule Approach&Filtering scheme&Coding scheme &Blockchain&Optimization method\\ \midrule DeepChain& N.A& N.A&$ \bullet $&SGD\\ Krum, Multi-Krum&$ \bullet $&N.A &N.A &SGD\\ DSML-Adversarial&$ \bullet $& N.A& N.A&GD\\ ByzRDL&$ \bullet $&N.A &N.A &GD\\ ByzantineSGD&$ \bullet $&N.A&N.A&SGD\\ GBT-SGD&$ \bullet $&N.A&N.A&SGD\\ DGDAlgorithm&$ \bullet $&N.A&N.A&GD\\ Securing-DML&$ \bullet $&N.A&N.A&GD\\ Tremean and Phocas&$ \bullet $&N.A&N.A&SGD\\ DRACO\footnote{applicable also to any first order method.}&N.A &$ \bullet $&N.A &Mini-batch SGD\\ Bulyan&$ \bullet $&N.A&N.A&SGD\\ ByzantinePGD\footnote{PGD:Perturbed Gradient Descent}&$ \bullet $&N.A&N.A&GD, PGD\\ Zeno&$ \bullet $&N.A&N.A&SGD\\ LearningChain&N.A&N.A&$ \bullet $&SGD\footnote{LearningChainEx applied to any gradient descent based machine learning systems.}\\ RSA & $ \bullet $ &N.A&N.A& SGD\\ signSGD & $ \bullet $ &N.A&N.A& SGD\\ BRDO based Data Encoding\footnote{PGD: Proximal Gradient Descent, CD: Coordinate Descent}&N.A&$ \bullet $&N.A&PGD and CD\\ AGGREGATHOR&$ \bullet $&N.A&N.A&SGD\\ GuanYu&$ \bullet $&N.A&N.A&SGD\\ Trimmed mean&$ \bullet $&N.A&N.A&SGD\\ TCLearn&N.A&N.A&$ \bullet $&GD\\ SDPP based-Blockchain System&N.A&N.A&$ \bullet $&$-$\\ ByRDiE&$ \bullet $&N.A&N.A&Coordinate Descent\\ BlockDeepNet&N.A&N.A&$ \bullet $&GD\\ FABA&$ \bullet
$&N.A&N.A&SGD\\ BRIDGE&$ \bullet $&N.A&N.A&SGD\\ LICM-SGD&$ \bullet $&N.A&N.A&SGD\\ Gradient-Filter CGC & $ \bullet $&N.A &N.A & SGD\\ SVRG-AdversarialML&$ \bullet $&N.A&N.A&SVRG\\ LIUBEI&$ \bullet $&N.A&N.A&SGD\\ DRSL-Byzantine Mirror Descent&$ \bullet $&N.A&N.A&Mirror Descent\\ DETOX&$ \bullet $&$ \bullet $&N.A&Mini-batch SGD\\ RRR-BFT &N.A&$ \bullet $&N.A & SGD\\ MOZI&$ \bullet $&N.A&N.A&$-$\\ Stochastic-Sign SGD &$ \bullet $& N.A& N.A& SGD\\ Distributed Momentum&$ \bullet $&N.A &N.A &Momentum\\ \bottomrule \end{tabular} \end{center} \bigskip \end{minipage} \end{table} \begin{table}[H] \caption{Summarizing different Asynchronous Byzantine fault tolerance approaches} \label{tab:ASynBFT} \begin{minipage}{\columnwidth} \begin{center} \begin{tabular}{p{4 cm} c p{3 cm} c p{3 cm} c p{2 cm}p{2 cm}} \toprule Approach&Filtering scheme&Coding scheme &Blockchain&Optimization method\\ \midrule Kardam&$ \bullet $&N.A&N.A&SGD\\ DBT-SGD in Era of Big Data&$ \bullet $&N.A&N.A&SGD\\ Zeno++&$ \bullet $&N.A&N.A&SGD\\ BASGD&$ \bullet $&N.A&N.A&SGD\\ ByzSGD&$ \bullet $&N.A&N.A&SGD\\ ColLearning Agreement &$ \bullet $&N.A&N.A&SGD\\ \bottomrule \end{tabular} \end{center} \bigskip \end{minipage} \end{table} \begin{table}[] \caption{Summarizing different Partially Asynchronous Byzantine fault tolerance approaches} \label{tab:PArtially_ASynch} \begin{minipage}{\columnwidth} \begin{center} \begin{tabular}{p{4 cm} c p{3 cm} c p{3 cm} c p{2 cm}p{2 cm}} \toprule Approach&Filtering scheme&Coding scheme &Blockchain&Optimization method\\ \midrule Norm Filtering and Norm-cap Filtering&$ \bullet $&N.A&N.A&GD\\ \bottomrule \end{tabular} \end{center} \bigskip \end{minipage} \end{table} \begin{table}[] \caption{Summarizing different Centralized/Decentralized Byzantine fault tolerance approaches} \label{tab:CenDec} \begin{minipage}{\columnwidth} \begin{center} \begin{tabular}{p{4 cm} p{5 cm} p{5 cm}} \toprule Training process&Centralized&Decentralized\\ \midrule Synchronous&Krum and Multi-Krum 
\cite{blanchard2017byzantine,blanchard2017machine}, ByzRDL\cite{yin2018byzantine}, ByzantineSGD \cite{alistarh2018byzantine}, GBT-SGD \cite{xie2018generalized}, DGDAlgorithm \cite{cao2019distributed}, Phocas \cite{xie2018phocas}, DRACO \cite{chen2018draco}, Bulyan \cite{el2018hidden}, ByzantinePGD \cite{yin2019defending}, Zeno \cite{xie2018zeno}, RSA \cite{li2019rsa}, SIGNSGD \cite{bernstein2018signsgd}, BRDO based Data Encoding \cite{data2020data}, AGGREGATHOR \cite{damaskinos2019aggregathor}, Trimmed mean \cite{tianxiang2019aggregation}, FABA \cite{xia2019faba}, LICM-SGD \cite{yang2019byzantine}, Gradient Filter CGC \cite{gupta2019byzantine_b}, SVRG-AdversarialML \cite{bulusu2019convex}, LIUBEI \cite{mhamdi2019fast}, DRSL-Byzantine Mirror Descent \cite{ding2019distributed}, DETOX \cite{rajput2019detox}, RRR-BFT \cite{gupta2019randomized}, Stochastic-Sign SGD \cite{jin2020stochastic}, Distributed Momentum \cite{el2020distributed}& DeepChain \cite{weng2019deepchain}, DSML-Adversarial\cite{chen2017distributed}, Securing-DML \cite{su2018securing}, GuanYu \cite{el2019sgd}, ByRDiE \cite{yang2019byrdie}, BRIDGE \cite{yang2019bridge}, MOZI \cite{guo2020towards}, LearningChain \cite{chen2018machine}, TCLearn \cite{lugan2019secure}, SDPP based-Blockchain System \cite{zhao2019mobile}, BlockDeepNet \cite{rathore2019blockdeepnet}\\ \\Asynchronous&Kardam \cite{damaskinos2018asynchronous}, Zeno++ \cite{xie2020zeno++}, BASGD \cite{yang2020basgd}&DBT-SGD in Era of Big Data \cite{jin2019distributed}, ByzSGD \cite{el2020genuinely}, ColLearning Agreement \cite{el2020collaborative} \\ \\Partially Asynchronous&Norm Filtering and Norm-cap Filtering\cite{gupta2019byzantine_a}&N.A\\ \bottomrule \end{tabular} \end{center} \bigskip \end{minipage} \end{table} Based on the discussion above, the following directions are recommended for future research: \begin{enumerate} \item The most used optimization methods for the BFT problem in DML belong to the first-order optimization
methods. Despite the results achieved, several issues remain open, such as high computational complexity and data privacy. Second-order methods are an interesting family of optimization methods that may give more effective outcomes than first-order methods. \item SGD is the most popular method used for the BFT problem in DML, but it presents challenges that have led to improved variants (Adagrad, Adadelta, RMSprop, etc.). This implies that the existing BFT approaches in DML based on SGD may be improved by using these variants instead of plain SGD. It would therefore be interesting to come up with new solutions based on these variants to create more robust, reliable, and secure DML. \item Most of the analyzed approaches are based on synchronous/asynchronous training processes in centralized/decentralized settings (with fewer approaches in the asynchronous case). Excessive asynchronism can hurt the convergence of some algorithms \cite{bertsekas1989parallel}, and synchronization is often impossible (as in federated learning). It would be interesting to enrich the results obtained by the analyzed approaches using other training processes, such as semi-synchronous or partially asynchronous ones; in other words, exploring other training processes can avoid some of the problems of the synchronous/asynchronous ones. \item The widely used Internet of Things means more devices, more data, and more requirements for storing, transmitting, and processing that data, as well as keeping it secure. Blockchain technology shows promising results for the BFT problem in DML in terms of data confidentiality and ensuring trusted collaborative model training (considering that Blockchain has been used sparingly). Guaranteeing the same or better results as the number of participants scales, and optimally reducing communication cost, are open issues with the advent of new paradigms (IoT, Big Data, Edge computing, etc.).
Thus, Blockchain is an interesting technology for exploring new solutions to open problems in current paradigms. \end{enumerate} The possibilities of the research area are summarized in Fig. 14. \begin{figure}[H] \centering \includegraphics[width=\textwidth,keepaspectratio]{TreeResearchDirection.pdf} \caption{Mind map of the BFT research area in DML. The mind map depicts the different optimization method families that can be used; existing techniques that may be developed further (filtering schemes, coding schemes, and Blockchain); training topologies (centralized and decentralized); training processes that can be combined in various ways; and existing problems of SGD, the most commonly used optimization method in DML, to propose more effective solutions.} \label{fig:MindMap_BFTinDML} \end{figure} \section{Conclusion} \label{Conclusion} The Byzantine fault tolerance of a distributed machine learning system is a characteristic that makes the system more reliable. Byzantine workers preventing the convergence of model training in distributed machine learning systems is a significant issue, and first-order optimization methods have been improved to resolve this type of problem. In this paper, we provided a general survey of recent Byzantine fault tolerance approaches, from which the following conclusions can be drawn: \begin{itemize} \item Works mainly address synchronous settings rather than asynchronous ones, and centralized settings rather than decentralized ones. \item The filtering scheme is used more frequently than the coding scheme and Blockchain technology in the studied works. \item Despite the smaller number of methods based on Blockchain technology, the latter achieves many important results (avoiding single points of failure, attaining collaborative learning and data privacy, and preventing poisoning attacks) compared to other schemes.
As a research direction, Blockchain can improve recent results and be applied under several settings in combination with different schemes. \item Because node failures or adversarial behaviour of specific nodes may make it difficult to develop robust distributed algorithms in DML, the topic of BFT in DML becomes increasingly important. To handle these sorts of problems, we argue that one must be able to deal mainly with node failures, heterogeneous nodes, malicious attackers, topology settings, and communication issues, but also with other challenges arising from DML progress such as federated learning. Byzantine fault tolerance in federated learning complements this survey and represents our future work. \end{itemize} This paper included a comparative study of the discussed approaches and showed possible future directions for research in the context of Byzantine fault tolerance in distributed machine learning systems. \printbibliography \end{document}
\section{Boundary ribbon operators with $\Xi(R,K)^\star$}\label{app:ribbon_ops} \begin{definition}\rm\label{def:Y_ribbon} Let $\xi$ be a ribbon, $r \in R$ and $k \in K$. Then $Y^{r \otimes \delta_k}_{\xi}$ acts on a direct triangle $\tau$ as \[\tikzfig{Y_action_direct},\] and on a dual triangle $\tau^*$ as \[\tikzfig{Y_action_dual}.\] Concatenation of ribbons is given by \[Y^{r \otimes \delta_k}_{\xi'\circ\xi} = Y^{(r \otimes \delta_k)_2}_{\xi'}\circ Y^{(r \otimes \delta_k)_1}_{\xi} = \sum_{x\in K} Y^{(x^{-1}\rightharpoonup r) \otimes \delta_{x^{-1}k}}_{\xi'}\circ Y^{r\otimes\delta_x}_{\xi},\] where we see the comultiplication $\Delta(r \otimes \delta_k)$ of $\Xi(R,K)^*$. Here, $\Xi(R,K)^*$ is a coquasi-Hopf algebra, and so has coassociative comultiplication (it is the multiplication which is only quasi-associative). Therefore, we can concatenate the triangles making up the ribbon in any order, and the concatenation above uniquely defines $Y^{r\otimes\delta_k}_{\xi}$ for any ribbon $\xi$. \end{definition} Let $s_0 = (v_0,p_0)$ and $s_1 = (v_1,p_1)$ be the sites at the start and end of a triangle. The direct triangle operators satisfy \[k'{\triangleright}_{v_0}\circ Y^{r\otimes \delta_k}_{\tau} =Y^{r\otimes \delta_{k'k}}_{\tau}\circ k'{\triangleright}_{v_0},\quad k'{\triangleright}_{v_1}\circ Y^{r\otimes\delta_k}_\tau = Y^{r\otimes\delta_{k'k^{-1}}}_\tau\circ k'{\triangleright}_{v_1}\] and \[[\delta_{r'}{\triangleright}_{s_i},Y^{r\otimes\delta_k}_{\tau}]= 0\] for $i\in \{0,1\}$. For the dual triangle operators, we have \[k'{\triangleright}_{v_i}\circ \sum_k Y^{r\otimes\delta_k}_{\tau^*} = Y^{(k'{\triangleright} r)\otimes\delta_k}_{\tau^*}\circ k'{\triangleright}_{v_i}\] again for $i\in \{0,1\}$. However, there do not appear to be similar commutation relations for the actions of $\mathbb{C}(R)$ on faces of dual triangle operators.
In addition, in the bulk, one can reconstruct the vertex and face actions using suitable ribbons \cite{Bom,CowMa} because of the duality between $\mathbb{C}(G)$ and $\mathbb{C} G$; this is not true in general for $\mathbb{C}(R)$ and $\mathbb{C} K$. \begin{example}\label{ex:Yrib}\rm Given the ribbon $\xi$ on the lattice below, we see that $Y^{r\otimes \delta_k}_{\xi}$ acts only along the ribbon and trivially elsewhere. We have \[\tikzfig{Y_action_ribbon}\] if $g^2,g^4,g^6(g^7)^{-1}\in K$, and $0$ otherwise, and \begin{align*} &y^1 = (rx^1)^{-1}\\ &y^2 = ((g^2)^{-1}rx^2)^{-1}\\ &y^3 = ((g^2g^4)^{-1}rx^3)^{-1}\\ &y^4 = ((g^2g^4g^6(g^7)^{-1})^{-1}rx^3)^{-1} \end{align*} One can check this using Definition~\ref{def:Y_ribbon}. \end{example} It is claimed in \cite{CCW} that these ribbon operators obey similar equivariance properties with the site actions of $\Xi(R,K)$ as the bulk ribbon operators, but we could not reproduce these properties. Precisely, we find that when such ribbons are `open' in the sense of \cite{Kit, Bom, CowMa} then an intermediate site $s_2$ on a ribbon $\xi$ between either endpoints $s_0,s_1$ does \textit{not} satisfy \[\Lambda_{\mathbb{C} K}{\triangleright}_{s_2}\circ Y^{r\otimes \delta_k}_{\xi} = Y^{r\otimes \delta_k}_{\xi}\circ \Lambda_{\mathbb{C} K}{\triangleright}_{s_2}.\] in general, nor the corresponding relation for $\Lambda_{\mathbb{C}(R)}{\triangleright}_{s_2}$. \section{Measurements and nonabelian lattice surgery}\label{app:measurements} In Section~\ref{sec:surgery}, we described nonabelian lattice surgery for a general underlying group algebra $\mathbb{C} G$, but for simplicity of exposition we assumed that the projectors $A(v)$ and $B(p)$ could be applied deterministically. In practice, we can only make a measurement, which will only sometimes yield the desired projectors. As the splits are easier, we discuss how to handle these first, beginning with the rough split. 
We demonstrate on the same example as previously: \[\tikzfig{rough_split_calc}\] \[\tikzfig{rough_split_calc2}\] where we have measured the edge to be deleted in the $\mathbb{C} G$ basis. The measurement outcome $n$ informs which corrections to make. The last arrow implies corrections made using ribbon operators. These corrections are all unitary, and if the measurement outcome is $e$ then no corrections are required at all. The generalisation to larger patches is straightforward, but requires keeping track of multiple different outcomes. Next, we discuss how to handle the smooth split. In this case, we measure the edges to be deleted in the Fourier basis, that is we measure the self-adjoint operator $\sum_{\pi} p_{\pi} P_{\pi}{\triangleright}$ at a particular edge, where \[P_{\pi} := P_{e,\pi} = {{\rm dim}(W_\pi)\over |G|}\sum_{g\in G} {\rm Tr}_\pi(g^{-1}) g\] from Section~\ref{sec:lattice} acts by the left regular representation. Thus, for a smooth split, we have the initial state $|e\>_L$: \[\tikzfig{smooth_split_calc1}\] \[\tikzfig{smooth_split_calc2}\] \[\tikzfig{smooth_split_calc3}\] and afterwards we still have coefficients from the irreps of $\mathbb{C} G$. In the case when $\pi = 1$, we are done. Otherwise, we have detected quasiparticles of type $(e,\pi)$ and $(e,\pi')$ at two vertices. In this case, we appeal to e.g. \cite{BKKK, Cirac}, which claim that one can modify these quasiparticles deterministically using ribbon operators and quantum circuitry. The procedure should be similar to initialising a fresh patch in the zero logical state, but we do not give any details ourselves. Then we have the desired result. For merges, we start with a smooth merge, as again all outcomes are in the group basis. 
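As a sanity check of the measurement described above, one can verify numerically that the elements $P_\pi$ are orthogonal central idempotents summing to the identity of $\mathbb{C} G$. The following sketch (our own illustration, for $G=S_3$ with its standard character table) does this in exact arithmetic:

```python
from fractions import Fraction
from itertools import permutations

def compose(p, q):                 # group product pq, as (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def sign(p):                       # parity of a permutation
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

G = list(permutations(range(3)))   # S3, elements as tuples of images
e = (0, 1, 2)

# (dimension, character) for the three irreps of S3; chi_std(g) = #fix(g) - 1.
irreps = {
    'triv': (1, lambda p: 1),
    'sgn':  (1, sign),
    'std':  (2, lambda p: sum(1 for i in range(3) if p[i] == i) - 1),
}

def proj(name):                    # P_pi = (dim W_pi / |G|) sum_g Tr_pi(g^{-1}) g
    dim, chi = irreps[name]
    return {g: Fraction(dim * chi(inverse(g)), len(G)) for g in G}

def mult(a, b):                    # product in the group algebra C[S3]
    out = {g: Fraction(0) for g in G}
    for g, ca in a.items():
        for h, cb in b.items():
            out[compose(g, h)] += ca * cb
    return out

P = {name: proj(name) for name in irreps}
zero = {g: Fraction(0) for g in G}
for a in irreps:
    for b in irreps:
        assert mult(P[a], P[b]) == (P[a] if a == b else zero)
assert {g: sum(P[n][g] for n in irreps) for g in G} == {g: Fraction(g == e) for g in G}
```

This confirms that measuring $\sum_\pi p_\pi P_\pi{\triangleright}$ decomposes the edge Hilbert space into orthogonal sectors labelled by the irreps, as used in the smooth split.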
Recall that after generating fresh copies of $\mathbb{C} G$ in the states $\sum_{m\in G} m$, we have \[\tikzfig{smooth_merge_project}\] we then measure at sites which include the top and bottom faces, giving: \[\tikzfig{smooth_merge_measure_1}\] for some conjugacy classes ${\hbox{{$\mathcal C$}}}, {\hbox{{$\mathcal C$}}}'$. There are no factors of $\pi$ as the edges around each vertex already satisfy $A(v)|\psi\> = |\psi\>$. When ${\hbox{{$\mathcal C$}}} = {\hbox{{$\mathcal C$}}}' = \{e\}$, we may proceed, but otherwise we require a way of deterministically eliminating the quasiparticles detected at the top and bottom faces. Appealing to e.g. \cite{BKKK, Cirac} as earlier, we assume that this may be done, but do not give details. Alternatively one could try to `switch reference frames' in the manner of Pauli frames with qubit codes \cite{HFDM}, and redefine the Hamiltonian. The former method gives \[\tikzfig{smooth_merge_measure_2}\] Lastly, we measure the inner face, yielding \[\tikzfig{smooth_merge_measure_3}\] so $|j\>_L\otimes |k\>_L \mapsto \sum_{s\in {\hbox{{$\mathcal C$}}}''} \delta_{js,k} |js\>_L$, which is a direct generalisation of the result for when $G = \mathbb{Z}_n$ in \cite{Cow2}, where now we sum over the conjugacy class ${\hbox{{$\mathcal C$}}}''$ which in the $\mathbb{Z}_n$ case are all singletons. The rough merge works similarly, where instead of having quasiparticles of type $({\hbox{{$\mathcal C$}}},1)$ appearing at faces, we have quasiparticles of type $(e,\pi)$ at vertices. \section{Introduction} The Kitaev model is defined for a finite group $G$ \cite{Kit} with quasiparticles given by representations of the quantum double $D(G)$, and their dynamics described by intertwiners. In quantum computing, the quasiparticles correspond to measurement outcomes at sites on a lattice, and their dynamics correspond to linear maps on the data, with the aim of performing fault-tolerant quantum computation. 
The lattice can be any ciliated ribbon graph embedded on a surface \cite{Meu}, although throughout we will assume a square lattice on the plane for convenience. The Kitaev model generalises to replace $G$ by a finite-dimensional semisimple Hopf algebra, as well as aspects that work for a general finite-dimensional Hopf algebra. We refer to \cite{CowMa} for details of the relevant algebraic aspects of this theory, which applies in the bulk of the Kitaev model. We now extend this work with a study of the algebraic structure that underlies an approach to the treatment of boundaries. The treatment of boundaries here originates in a more categorical point of view. In the original Kitaev model the relevant category that defines the `topological order' in condensed matter terms\cite{LK} is the category ${}_{D(G)}\mathcal{M}$ of $D(G)$-modules, which one can think of as an instance of the `dual' or `centre' $\hbox{{$\mathcal Z$}}({\hbox{{$\mathcal C$}}})$ construction\cite{Ma:rep}, where ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$ is the category of $G$-graded vector spaces. Levin-Wen `string-net' models \cite{LW} are a sort of generalisation of Kitaev models specified now by a unitary fusion category $\mathcal{C}$ with topological order $\hbox{{$\mathcal Z$}}(\mathcal{C})$, meaning that at every site on the lattice one has an object in $\hbox{{$\mathcal Z$}}(\mathcal{C})$, and now on a trivalent lattice. Computations correspond to morphisms in the same category. A so-called gapped boundary condition of a string-net model preserves a finite energy gap between the vacuum and the lowest excited state(s), which is independent of system size. Such boundary conditions are defined by module categories of the fusion category ${\hbox{{$\mathcal C$}}}$.
By definition, a (right) ${\hbox{{$\mathcal C$}}}$-module means\cite{Os,KK} a category ${\hbox{{$\mathcal V$}}}$ equipped with a bifunctor ${\hbox{{$\mathcal V$}}} \times {\hbox{{$\mathcal C$}}} \rightarrow {\hbox{{$\mathcal V$}}}$ obeying coherence equations which are a polarised version of the properties of $\mathop{{\otimes}}: {\hbox{{$\mathcal C$}}}\times{\hbox{{$\mathcal C$}}}\to {\hbox{{$\mathcal C$}}}$ (in the same way that a right module of an algebra obeys a polarised version of the axioms for the product). One can also see a string-net model as a discretised quantum field theory \cite{Kir2, Meu}, and indeed boundaries of a conformal field theory can also be similarly defined by module categories \cite{FS}. For our purposes, we care about \textit{indecomposable} module categories, that is module categories which are not equivalent to a direct sum of other module categories. Excitations on the boundary with condition $\mathcal{V}$ are then given by functors $F \in \mathrm{End}_{\hbox{{$\mathcal C$}}}(\mathcal{V})$ that commute with the ${\hbox{{$\mathcal C$}}}$ action\cite{KK}, beyond the vacuum state which is the identity functor $\mathrm{id}_{\mathcal{V}}$. More than just the boundary conditions above, we care about these excitations, and so $\mathrm{End}_{\hbox{{$\mathcal C$}}}(\mathcal{V})$ is the category of interest. The Kitaev model is not exactly a string-net model (the lattice in our case will not even be trivalent) but closely related. In particular, it can be shown that indecomposable module categories for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, the category of $G$-graded vector spaces, are\cite{Os2} classified by subgroups $K\subseteq G$ and cocycles $\alpha\in H^2(K,\mathbb{C}^\times)$. 
We will stick to the trivial $\alpha$ case here, and the upshot is that the boundary conditions in the regular Kitaev model should be given by ${\hbox{{$\mathcal V$}}}={}_K\hbox{{$\mathcal M$}}^G$ the $G$-graded $K$-modules where $x\in K$ itself has grade $|x|=x\in G$. Then the excitations are governed by objects of $\mathrm{End}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}}) \simeq {}_K\hbox{{$\mathcal M$}}_K^G$, the category of $G$-graded bimodules over $K$. This is necessarily equivalent, by Tannaka-Krein reconstruction\cite{Ma:tan} to the category of modules ${}_{\Xi(R,K)}\mathcal{M}$ of a certain quasi-Hopf algebra $\Xi(R,K)$. Here $R\subseteq G$ is a choice of transversal so that every element of $G$ factorises uniquely as $RK$, but the algebra of $\Xi(R,K)$ depends only on the choice of subgroup $K$ and not on the transversal $R$. This is the algebra which we use to define measurement protocols on the boundaries of the Kitaev model. One also has that $\hbox{{$\mathcal Z$}}({}_\Xi\hbox{{$\mathcal M$}})\simeq\hbox{{$\mathcal Z$}}(\hbox{{$\mathcal M$}}^G)\simeq{}_{D(G)}\hbox{{$\mathcal M$}}$ as braided monoidal categories. Categorical aspects will be deferred to Section~\ref{sec:cat_just}, our main focus prior to that being on a full understanding of the algebra $\Xi$, its properties and aspects of the physics. In fact, lattice boundaries of Kitaev models based on subgroups have been defined and characterised previously, see \cite{BSW, Bom}, with \cite{CCW} giving an overview for computational purposes, and we build on these works. We begin in Section~\ref{sec:bulk} with a recap of the algebras and actions involved in the bulk of the lattice model, then in Section~\ref{sec:gap} we accommodate the boundary conditions in a manner which works with features important for quantum computation, such as sites, quasiparticle projectors and ribbon operators. 
These sections mostly cover well-trodden ground, although we correct errors and clarify some algebraic subtleties which appear to have gone unnoticed in previous works. In particular, we obtain formulae for the decomposition of bulk irreducible representations of $D(G)$ into $\Xi$-representations which we believe to be new. Key to our results here is an observation that in fact $\Xi(R,K)\subseteq D(G)$ as algebras, which gives a much more direct route than previously to an adjunction between $\Xi(R,K)$-modules and $D(G)$-modules describing how excitations pass between the bulk and boundary. This is important for the physical picture\cite{CCW} and previously was attributed to an adjunction between ${}_{D(G)}\hbox{{$\mathcal M$}}$ and ${}_K\hbox{{$\mathcal M$}}_K^G$ in \cite{PS2}. In Section~\ref{sec:patches}, as an application of our explicit description of boundaries, we generalise the quantum computational model called \textit{lattice surgery} \cite{HFDM,Cow2} to the nonabelian group case. We find that for every finite group $G$ one can simulate the group algebra $\mathbb{C} G$ and its dual $\mathbb{C}(G)$ on a lattice patch with `rough' and `smooth' boundaries. This is an alternative model of fault-tolerant computation to the well-known method of braiding anyons or defects \cite{Kit,FMMC}, although we do not know whether there are choices of group such that lattice surgery is natively universal without state distillation. In Section~\ref{sec:quasi}, we look at $\Xi(R,K)$ as a quasi-Hopf algebra in somewhat more detail than we have found elsewhere. As well as the quasi-bialgebra structure, we provide and verify the antipode for any choice of transversal $R$ for which right-inversion is bijective. This case is in line with \cite{Nat}, but we will also consider antipodes more generally. 
We then show that an obvious $*$-algebra structure on $\Xi$ meets all the axioms of a strong $*$-quasi-Hopf algebra in the sense of \cite{BegMa:bar} coming out of the theory of bar categories. The key ingredient here is a somewhat nontrivial map that relates the complex conjugate $\Xi$-module of $V\mathop{{\otimes}} W$ to those of $W$ and $V$. We also give an extended series of examples, including one related to the octonions. Lastly, in Section~\ref{sec:cat_just}, we connect the algebraic notions up to the abstract description of boundary conditions via module categories and use this to obtain more results about $\Xi(R,K)$. We first calculate the relevant categorical equivalence ${}_K\hbox{{$\mathcal M$}}_K^G \simeq {}_{\Xi(R,K)}\mathcal{M}$ concretely, deriving the quasi-bialgebra structure of $\Xi(R,K)$ precisely such that this works. Since the left hand side is independent of $R$, we deduce by Tannaka-Krein arguments that changing $R$ changes $\Xi(R,K)$ by a Drinfeld cochain twist and we find this cochain as a main result of the section. This is important as Drinfeld twists do not change the category of modules up to equivalence, so such aspects of the physics do not depend on $R$. Twisting arguments then imply that we have an antipode more generally for any $R$. We also look at ${\hbox{{$\mathcal V$}}} = {}_K\hbox{{$\mathcal M$}}^G$ as a module category for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$. Section~\ref{sec:rem} provides some concluding remarks relating to generalisations of the boundaries to models based on other Hopf algebras \cite{BMCA}. \subsection*{Acknowledgements} The first author thanks Stefano Gogioso for useful discussions regarding nonabelian lattice surgery as a model for computation. Thanks also to Paddy Gray \& Kathryn Pennel for their hospitality while some of this paper was written and to Simon Harrison for the Wolfson Harrison UK Research Council Quantum Foundation Scholarship, which made this work possible.
The second author was on sabbatical at Cambridge Quantum Computing and we thank members of the team there. \section{Preliminaries: recap of the Kitaev model in the bulk}\label{sec:bulk} We begin with the model in the bulk. This is largely a recap of e.g. \cite{Kit, CowMa}. \subsection{Quantum double}\label{sec:double}Let $G$ be a finite group with identity $e$, then $\mathbb{C} G$ is the group Hopf algebra with basis $G$. Multiplication is extended linearly, and $\mathbb{C} G$ has comultiplication $\Delta h = h \otimes h$ and counit ${\epsilon} h = 1$ on basis elements $h\in G$. The antipode is given by $Sh = h^{-1}$. $\mathbb{C} G$ is a Hopf $*$-algebra with $h^* = h^{-1}$ extended antilinearly. Its dual Hopf algebra $\mathbb{C}(G)$ of functions on $G$ has basis of $\delta$-functions $\{\delta_g\}$ with $\Delta\delta_g=\sum_h \delta_h\mathop{{\otimes}}\delta_{h^{-1}g}$, ${\epsilon} \delta_g=\delta_{g,e}$ and $S\delta_g=\delta_{g^{-1}}$ for the Hopf algebra structure, and $\delta_g^* = \delta_{g}$ for all $g\in G$. The normalised integral elements \textit{in} $\mathbb{C} G$ and $\mathbb{C}(G)$ are \[ \Lambda_{\mathbb{C} G}={1\over |G|}\sum_{h\in G} h\in \mathbb{C} G,\quad \Lambda_{\mathbb{C}(G)}=\delta_e\in \mathbb{C}(G).\] The integrals \textit{on} $\mathbb{C} G$ and $\mathbb{C}(G)$ are \[ \int h = \delta_{h,e}, \quad \int \delta_g = 1\] normalised so that $\int 1 = 1$ for $\mathbb{C} G$ and $\int 1 = |G|$ for $\mathbb{C}(G)$. For the Drinfeld double we have $D(G)=\mathbb{C}(G){>\!\!\!\triangleleft} \mathbb{C} G$ as in \cite{Ma:book}, with $\mathbb{C} G$ and $\mathbb{C}(G)$ sub-Hopf algebras and the cross relations $ h\delta_g =\delta_{hgh^{-1}} h$ (a semidirect product). The Hopf algebra antipode is $S(\delta_gh)=\delta_{h^{-1}g^{-1}h} h^{-1}$, and over $\mathbb{C}$ we have a Hopf $*$-algebra with $(\delta_g h)^* = \delta_{h^{-1}gh} h^{-1}$.
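The cross relations and antipode above are easy to verify by machine for a small group. The following Python sketch (ours, not part of the formal development; we assume the convention that permutations compose right-to-left) encodes the basis $\delta_g\mathop{{\otimes}} h$ of $D(S_3)$ and checks that the resulting structure constants are associative and that $S$ is anti-multiplicative.

```python
from itertools import permutations, product

# S_3 as permutation tuples; mul(a, b) applies b then a (assumed convention)
G = [tuple(p) for p in permutations(range(3))]
E3 = (0, 1, 2)
def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):
    r = [0, 0, 0]
    for i, j in enumerate(a): r[j] = i
    return tuple(r)

D = list(product(G, G))                # basis delta_g (x) h of D(S_3)

def dmul(x, y):
    """(delta_g (x) h)(delta_f (x) k) = delta_{g,hfh^{-1}} delta_g (x) hk; None means 0."""
    (g, h), (f, k) = x, y
    return (g, mul(h, k)) if g == mul(mul(h, f), inv(h)) else None

def S(x):
    """Antipode S(delta_g (x) h) = delta_{h^{-1}g^{-1}h} (x) h^{-1}."""
    g, h = x
    return (mul(mul(inv(h), inv(g)), h), inv(h))

for x in D:                            # associativity on all basis triples
    for y in D:
        xy = dmul(x, y)
        for z in D:
            yz = dmul(y, z)
            assert (dmul(xy, z) if xy else None) == (dmul(x, yz) if yz else None)

for x in D:                            # the antipode is anti-multiplicative on products
    for y in D:
        xy = dmul(x, y)
        assert (S(xy) if xy else None) == dmul(S(y), S(x))
print("D(S_3): product associative, antipode anti-multiplicative")
```

Since products of basis elements are again (possibly zero) basis elements, these checks cover the whole algebra by linearity.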
There is also a quasitriangular structure which in subalgebra notation is \begin{equation}\label{RDG} \hbox{{$\mathcal R$}}=\sum_{h\in G} \delta_h\mathop{{\otimes}} h\in D(G) \otimes D(G).\end{equation} If we want to be totally explicit we can build $D(G)$ on either the vector space $\mathbb{C}(G)\mathop{{\otimes}} \mathbb{C} G$ or on the vector space $\mathbb{C} G\mathop{{\otimes}}\mathbb{C}(G)$. In fact the latter is more natural but we follow the conventions in \cite{Ma:book,CowMa} and use the former. Then one can say the above more explicitly as \[(\delta_g\mathop{{\otimes}} h)(\delta_f\mathop{{\otimes}} k)=\delta_g\delta_{hfh^{-1}}\mathop{{\otimes}} hk=\delta_{g,hfh^{-1}}\delta_g\mathop{{\otimes}} hk,\quad S(\delta_g\mathop{{\otimes}} h)=\delta_{h^{-1}g^{-1}h} \mathop{{\otimes}} h^{-1}\] etc. for the operations on the underlying vector space. As a semidirect product, irreducible representations of $D(G)$ are given by standard theory as labelled by pairs $({\hbox{{$\mathcal C$}}},\pi)$ consisting of an orbit under the action (i.e. by a conjugacy class ${\hbox{{$\mathcal C$}}}\subset G$ in this case) and an irrep $\pi$ of the isotropy subgroup, in our case \[ G^{c_0}=\{n\in G\ |\ nc_0 n^{-1}=c_0\}\] of a fixed element $c_0\in{\hbox{{$\mathcal C$}}}$, i.e. the centraliser $C_G(c_0)$. The choice of $c_0$ does not change the isotropy group up to isomorphism but does change how it sits inside $G$. We also fix data $q_c\in G$ for each $c\in {\hbox{{$\mathcal C$}}}$ such that $c=q_cc_0q_c^{-1}$ with $q_{c_0}=e$ and define from this a cocycle $\zeta_c(h)=q^{-1}_{hch^{-1}}hq_c$ as a map $\zeta: {\hbox{{$\mathcal C$}}}\times G\to G^{c_0}$. 
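The data $(q_c,\zeta)$ can likewise be machine-checked. The following sketch (our illustration; the representatives $q_c$ are chosen by first match rather than as in Example~\ref{exDS3}) verifies for $G=S_3$ that $\zeta_c(h)$ lands in $G^{c_0}$ and satisfies the cocycle identity $\zeta_c(hk)=\zeta_{kck^{-1}}(h)\zeta_c(k)$.

```python
from itertools import permutations

# S_3 as permutation tuples; mul(a, b) applies b then a (assumed convention)
G = [tuple(p) for p in permutations(range(3))]
E3 = (0, 1, 2)
def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):
    r = [0, 0, 0]
    for i, j in enumerate(a): r[j] = i
    return tuple(r)
def conj(g, c): return mul(mul(g, c), inv(g))

classes, seen = [], set()
for g in G:                            # conjugacy classes of S_3
    if g not in seen:
        C = sorted({conj(x, g) for x in G}); seen |= set(C); classes.append(C)

def check_class(C):
    """zeta_c(h) = q_{hch^{-1}}^{-1} h q_c lands in G^{c0} and is a cocycle."""
    c0 = C[0]                          # class representative; q_{c0} = e by first match
    q = {c: next(g for g in G if conj(g, c0) == c) for c in C}
    def zeta(c, h): return mul(mul(inv(q[conj(h, c)]), h), q[c])
    for c in C:
        for h in G:
            if conj(zeta(c, h), c0) != c0:     # must centralise c0
                return False
            for k in G:                        # zeta_c(hk) = zeta_{kck^{-1}}(h) zeta_c(k)
                if zeta(c, mul(h, k)) != mul(zeta(conj(k, c), h), zeta(c, k)):
                    return False
    return True

assert all(check_class(C) for C in classes)
print("zeta lands in the centraliser and obeys the cocycle identity for all classes")
```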
The associated irreducible representation is then \[ W_{{\hbox{{$\mathcal C$}}},\pi}=\mathbb{C} {\hbox{{$\mathcal C$}}}\mathop{{\otimes}} W_\pi,\quad \delta_g.(c\mathop{{\otimes}} w)=\delta_{g,c}c\mathop{{\otimes}} w,\quad h.(c\mathop{{\otimes}} w)=hch^{-1}\mathop{{\otimes}} \zeta_c(h).w \] for all $w\in W_\pi$, the carrier space of $\pi$. This constructs all irreps of $D(G)$ and, over $\mathbb{C}$, these are unitary in a Hopf $*$-algebra sense if $\pi$ is unitary. Moreover, $D(G)$ is semisimple and hence has a block decomposition $D(G){\cong}\oplus_{{\hbox{{$\mathcal C$}}},\pi} \mathrm{ End}(W_{{\hbox{{$\mathcal C$}}},\pi})$ given by a complete orthogonal set of self-adjoint central idempotents \begin{equation}\label{Dproj}P_{({\hbox{{$\mathcal C$}}},\pi)}={{\rm dim}(W_\pi)\over |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}}\sum_{n\in G^{c_0}}\mathrm{ Tr}_\pi(n^{-1})\delta_{c}\mathop{{\otimes}} q_c nq_c^{-1}.\end{equation} We refer to \cite{CowMa} for more details and proofs. Acting on a state, this will become a projection operator that determines if a quasiparticle of type ${\hbox{{$\mathcal C$}}},\pi$ is present. Chargeons are quasiparticles with ${\hbox{{$\mathcal C$}}}=\{e\}$ and $\pi$ an irrep of $G$, and fluxions are quasiparticles with ${\hbox{{$\mathcal C$}}}$ a conjugacy class and $\pi=1$, the trivial representation. \subsection{Bulk lattice model}\label{sec:lattice} Having established the prerequisite algebra, we move on to the lattice model itself. This first part is largely a recap of \cite{Kit, CowMa} and we use the notations of the latter. Let $\Sigma = \Sigma(V, E, P)$ be a square lattice viewed as a directed graph with its usual (cartesian) orientation, vertices $V$, directed edges $E$ and faces $P$. The Hilbert space $\hbox{{$\mathcal H$}}$ will be a tensor product of vector spaces with one copy of $\mathbb{C} G$ at each arrow in $E$. We have group elements for the basis of each copy. 
Next, to each adjacent pair consisting of a vertex $v$ and a face $p$ we associate a site $s = (v, p)$, or equivalently a line (the `cilium') from $p$ to $v$. We then define an action of $\mathbb{C} G$ and $\mathbb{C}(G)$ at each site by \[ \includegraphics[scale=0.7]{Gaction.pdf}\] Here $h\in \mathbb{C} G$, $a\in \mathbb{C}(G)$ and $g^1,\cdots,g^4$ denote independent elements of $G$ (not powers). Observe that the vertex action does not depend on the location of $p$ relative to its adjacent $v$, so the red dashed line has been omitted. \begin{lemma}\label{lemDGrep} \cite{Kit,CowMa} $h{\triangleright}$ and $a{\triangleright}$ for all $h\in G$ and $a\in \mathbb{C}(G)$ define a representation of $D(G)$ on $\hbox{{$\mathcal H$}}$ associated to each site $(v,p)$. \end{lemma} We next define \[ A(v):=\Lambda_{\mathbb{C} G}{\triangleright}={1\over |G|}\sum_{h\in G}h{\triangleright},\quad B(p):=\Lambda_{\mathbb{C}(G)}{\triangleright}=\delta_e{\triangleright}\] where $\delta_{e}(g^1g^2g^3g^4)=1$ iff $g^1g^2g^3g^4=e$, equivalently $(g^4)^{-1}=g^1g^2g^3$, equivalently $g^4g^1g^2g^3=e$. Hence $\delta_{e}(g^1g^2g^3g^4)=\delta_{e}(g^4g^1g^2g^3)$ is invariant under cyclic rotations, so $\Lambda_{\mathbb{C}(G)}{\triangleright}$ computed at site $(v,p)$ does not depend on the location of $v$ on the boundary of $p$. Moreover, \[ A(v)B(p)=|G|^{-1}\sum_h h\delta_e{\triangleright}=|G|^{-1}\sum_h \delta_{heh^{-1}}h{\triangleright}=|G|^{-1}\sum_h \delta_{e}h{\triangleright}=B(p)A(v)\] if $v$ is a vertex on the boundary of $p$ by Lemma~\ref{lemDGrep}, and more trivially if not. We also have the rest of \[ A(v)^2=A(v),\quad B(p)^2=B(p),\quad [A(v),A(v')]=[B(p),B(p')]=[A(v),B(p)]=0\] for all $v\ne v'$ and $p\ne p'$, as easily checked.
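These relations can also be checked directly on a minimal example. The sketch below is our toy model, not the paper's lattice conventions: we take a single square face with edges oriented head-to-tail around it and assume the usual convention that $h$ at a vertex left-multiplies outgoing edges and right-multiplies incoming edges by $h^{-1}$, while $B(p)$ projects onto trivial face holonomy. Under these assumptions it verifies $A(v)^2=A(v)$, $B(p)^2=B(p)$, $[A(v),A(v')]=0$ and $[A(v),B(p)]=0$ for $G=S_3$.

```python
from fractions import Fraction
from itertools import permutations, product

# S_3 as permutation tuples; mul(a, b) applies b then a (assumed convention)
G = [tuple(p) for p in permutations(range(3))]
E3 = (0, 1, 2)
def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):
    r = [0, 0, 0]
    for i, j in enumerate(a): r[j] = i
    return tuple(r)

# One square face, edges e1: v1->v2, e2: v2->v3, e3: v3->v4, e4: v4->v1.
# States are dicts {(g1,g2,g3,g4): coefficient}.
VERT = {1: (0, 3), 2: (1, 0), 3: (2, 1), 4: (3, 2)}   # v: (outgoing, incoming)

def vertex_act(h, v, state):
    """h at v: left-multiply the outgoing edge, right-multiply incoming by h^{-1}."""
    o, i = VERT[v]
    out = {}
    for gs, c in state.items():
        t = list(gs); t[o] = mul(h, t[o]); t[i] = mul(t[i], inv(h))
        k = tuple(t); out[k] = out.get(k, Fraction(0)) + c
    return out

def A(v, state):                       # A(v) = (1/|G|) sum_h h act at v
    out = {}
    for h in G:
        for k, c in vertex_act(h, v, state).items():
            out[k] = out.get(k, Fraction(0)) + c / len(G)
    return {k: c for k, c in out.items() if c}

def B(state):                          # B(p) = delta_e(face holonomy g1 g2 g3 g4)
    return {gs: c for gs, c in state.items()
            if mul(mul(gs[0], gs[1]), mul(gs[2], gs[3])) == E3}

for gs in product(G, repeat=4):        # check the relations on every basis state
    s = {gs: Fraction(1)}
    assert B(B(s)) == B(s)                          # B(p)^2 = B(p)
    for v in (1, 2, 3, 4):
        assert A(v, B(s)) == B(A(v, s))             # [A(v), B(p)] = 0
    assert A(1, A(1, s)) == A(1, s)                 # A(v)^2 = A(v)
    assert A(1, A(2, s)) == A(2, A(1, s))           # [A(v), A(v')] = 0
print("single-plaquette A(v), B(p) relations verified over S_3")
```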
We then define the Hamiltonian \[ H=\sum_v (1-A(v)) + \sum_p (1-B(p))\] and the space of vacuum states \[ \hbox{{$\mathcal H$}}_{\rm vac}=\{|\psi\>\in\hbox{{$\mathcal H$}}\ |\ A(v)|\psi\>=B(p)|\psi\>=|\psi\>,\quad \forall v,p\}.\] Quasiparticles in Kitaev models are labelled by representations of $D(G)$ occupying a given site $(v,p)$, which take the system out of the vacuum. Detection of a quasiparticle is via a {\em projective measurement} of the operator $\sum_{{\hbox{{$\mathcal C$}}}, \pi} p_{{\hbox{{$\mathcal C$}}},\pi} P_{\mathcal{C}, \pi}$ acting at each site on the lattice for distinct coefficients $p_{{\hbox{{$\mathcal C$}}},\pi} \in \mathbb{R}$. By definition, this is a process which yields the classical value $p_{{\hbox{{$\mathcal C$}}},\pi}$ with a probability given by the likelihood of the state prior to the measurement being in the subspace in the image of $P_{\mathcal{C},\pi}$, and in so doing performs the corresponding action of the projector $P_{\mathcal{C}, \pi}$ at the site. The projector $P_{e,1}$ corresponds to the vacuum quasiparticle. In computing terms, this system of measurements encodes a logical Hilbert subspace, which we will always take to be the vacuum space $\hbox{{$\mathcal H$}}_{\rm vac}$, within the larger physical Hilbert space given by the lattice; this subspace is dependent on the topology of the surface that the lattice is embedded in, but not the size of the lattice. For example, there is a convenient closed-form expression for the dimension of $\hbox{{$\mathcal H$}}_{\rm vac}$ when $\Sigma$ occupies a closed, orientable surface \cite{Cui}. Computation can then be performed on states in the logical subspace in a fault-tolerant manner, with unwanted excitations constituting detectable errors. In the interest of brevity, we forgo a detailed exposition of such measurements, ribbon operators and fault-tolerant quantum computation on the lattice. The interested reader can learn about these in e.g. \cite{Kit,Bom,CCW,CowMa}. 
We do give a brief recap of ribbon operators, although without much rigour, as these will be useful later. \begin{definition}\rm \label{def:ribbon} A ribbon $\xi$ is a strip of face width that connects two sites $s_0 = (v_0,p_0)$ and $s_1 = (v_1,p_1)$ on the lattice. A ribbon operator $F^{h,g}_\xi$ acts on the vector spaces associated to the edges along the path of the ribbon, as shown in Figure~\ref{figribbon}. We call this basis of ribbon operators labelled by $h$ and $g$ the \textit{group basis}. \end{definition} \begin{figure} \[ \includegraphics[scale=0.8]{Fig1.pdf}\] \caption{\label{figribbon} Example of a ribbon operator for a ribbon $\xi$ from $s_0=(v_0,p_0)$ to $s_1=(v_1,p_1)$.} \end{figure} \begin{lemma}\label{lem:concat} If $\xi'$ is a ribbon concatenated with $\xi$, then the associated ribbon operators in the group basis satisfy \[F_{\xi'\circ\xi}^{h,g}=\sum_{f\in G}F_{\xi'}^{f^{-1}hf,f^{-1}g}\circ F_\xi^{h,f}, \quad F^{h,g}_\xi \circ F^{h',g'}_\xi=\delta_{g,g'}F_\xi^{hh',g}.\] \end{lemma} The first identity shows the role of the comultiplication of $D(G)^*$, \[\Delta(h\delta_g) = \sum_{f\in G} h\delta_f\otimes f^{-1}hf\delta_{f^{-1}g}\] in subalgebra notation, while the second identity implies that \[(F_\xi^{h,g})^\dagger = F_\xi^{h^{-1},g}.\] \begin{lemma}\label{ribcom}\cite{Kit} Let $\xi$ be a ribbon with the orientation as shown in Figure~\ref{figribbon} between sites $s_0=(v_0,p_0)$ and $s_1=(v_1,p_1)$. Then \[ [F_\xi^{h,g},f{\triangleright}_v]=0,\quad [F_\xi^{h,g},\delta_e{\triangleright}_p]=0,\] for all $v \notin \{v_0, v_1\}$ and $p \notin \{p_0, p_1\}$.
\[ f{\triangleright}_{s_0}\circ F_\xi^{h,g}=F_\xi^{fhf^{-1},fg} \circ f{\triangleright}_{s_0},\quad \delta_f{\triangleright}_{s_0}\circ F_\xi^{h,g}=F_\xi^{h,g} \circ\delta_{h^{-1}f}{\triangleright}_{s_0},\] \[ f{\triangleright}_{s_1}\circ F_\xi^{h,g}=F_\xi^{h,gf^{-1}} \circ f{\triangleright}_{s_1},\quad \delta_f{\triangleright}_{s_1}\circ F_\xi^{h,g}=F_\xi^{h,g}\circ \delta_{fg^{-1}hg}{\triangleright}_{s_1}\] for all ribbons where $s_0,s_1$ are disjoint, i.e. when $s_0$ and $s_1$ share neither vertices nor faces. The subscript notation $f{\triangleright}_v$ means the local action of $f\in \mathbb{C} G$ at vertex $v$, and similarly $\delta_f{\triangleright}_s$ denotes the local action of $\delta_f\in\mathbb{C}(G)$ at a site $s$. \end{lemma} We call the above lemma the \textit{equivariance property} of ribbon operators. Such ribbon operators may be deformed according to a sort of discrete isotopy, so long as the endpoints remain the same. We formalised ribbon operators as left and right module maps in \cite{CowMa}, but skim over any further details here. The physical interpretation of ribbon operators is that they create, move and annihilate quasiparticles. \begin{lemma}\cite{Kit}\label{lem:ribs_only} Let $s_0$, $s_1$ be two sites on the lattice. The only operators in ${\rm End}(\hbox{{$\mathcal H$}})$ which change the states at these sites, and therefore create quasiparticles and change the distribution of measurement outcomes, but leave the state in vacuum elsewhere, are ribbon operators. \end{lemma} This lemma is somewhat hard to prove rigorously but a proof was sketched in \cite{CowMa}. Next, there is an alternate basis for these ribbon operators in which the physical interpretation becomes more obvious.
The \textit{quasiparticle basis} has elements \begin{equation}F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,v} = \sum_{n\in G^{c_0}} \pi(n^{-1})_{ji} F_\xi^{c, q_c n q_d^{-1}},\end{equation} where ${\hbox{{$\mathcal C$}}}$ is a conjugacy class, $\pi$ is an irrep of the associated isotropy subgroup $G^{c_0}$ and $u = (c,i)$, $v = (d,j)$ label basis elements of $W_{{\hbox{{$\mathcal C$}}},\pi}$ in which $c,d \in {\hbox{{$\mathcal C$}}}$ and $i,j$ label a basis of $W_\pi$. This amounts to a nonabelian Fourier transform of the space of ribbons (that is, the Peter-Weyl isomorphism of $D(G)$) and has inverse \begin{equation}F_\xi^{h,g} = \sum_{{\hbox{{$\mathcal C$}}},\pi\in \hat{G^{c_0}}}\sum_{c\in{\hbox{{$\mathcal C$}}}}\delta_{h,gcg^{-1}} {{\rm dim}(W_\pi)\over |G^{c_0}|}\sum_{i,j = 1}^{{\rm dim}(W_\pi)}\pi(q^{-1}_{gcg^{-1}}g q_c)_{ij}F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;a,b},\end{equation} where $a = (gcg^{-1},i)$ and $b=(c,j)$; the normalisation factor comes from the Schur orthogonality relations for the irreps of $G^{c_0}$. This reduces in the chargeon sector to the special cases \begin{equation}\label{chargeon_ribbons}F_\xi^{'e,\pi;i,j} = \sum_{n\in G}\pi(n^{-1})_{ji}F_\xi^{e,n}\end{equation} and \begin{equation}F_\xi^{e,g} = \sum_{\pi\in \hat{G}}{{\rm dim}(W_\pi)\over |G|}\sum_{i,j = 1}^{{\rm dim}(W_\pi)}\pi(g)_{ij}F_\xi^{'e,\pi;i,j}.\end{equation} Meanwhile, in the fluxion sector we have \begin{equation}\label{fluxion_ribbons}F_\xi^{'{\hbox{{$\mathcal C$}}},1;c,d}=\sum_{n\in G^{c_0}}F_\xi^{c,q_c nq_d^{-1}}\end{equation} but there is no inverse in the fluxion sector. This is because the chargeon sector corresponds to the irreps of $\mathbb{C} G$, itself a semisimple algebra; the fluxion sector has no such correspondence. If $G$ is Abelian then all the $\pi$ are 1-dimensional and we do not have to worry about the indices for the basis of $W_\pi$; this then looks like a more usual Fourier transform.
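The chargeon-sector transform is just the usual Fourier transform on $\mathbb{C} G$, and its invertibility rests on the Schur orthogonality relations; note that conventions for where the normalisation sits vary in the literature. The following sketch (ours) checks numerically for $G=S_3$, using the $2$-dimensional irrep quoted in Example~\ref{exDS3}, that $\sum_\pi {{\rm dim}(W_\pi)\over |G|}{\rm Tr}\big(\pi(g)\pi(n^{-1})\big)=\delta_{g,n}$.

```python
import math
from itertools import permutations

# S_3 as permutation tuples; mul(a, b) applies b then a (assumed convention)
G = [tuple(p) for p in permutations(range(3))]
E3 = (0, 1, 2)
def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):
    r = [0, 0, 0]
    for i, j in enumerate(a): r[j] = i
    return tuple(r)
def parity(p):
    return -1 if sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3)) % 2 else 1

def mm(A, B):                          # matrix product, any square size
    d = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d))
                 for i in range(d))

# 2-dim irrep from Example exDS3: pi(u) = sigma_3, pi(v) = (sqrt(3) sigma_1 - sigma_3)/2,
# extended multiplicatively over the group.
s3 = math.sqrt(3.0)
gens = {(1, 0, 2): ((1.0, 0.0), (0.0, -1.0)),
        (0, 2, 1): ((-0.5, s3 / 2), (s3 / 2, 0.5))}
rep2, todo = {E3: ((1.0, 0.0), (0.0, 1.0))}, [E3]
while todo:                            # generate pi(g) for all g by words in u, v
    g = todo.pop()
    for x, M in gens.items():
        gx = mul(x, g)
        if gx not in rep2:
            rep2[gx] = mm(M, rep2[g]); todo.append(gx)

irreps = [(1, {g: ((1.0,),) for g in G}),            # trivial
          (1, {g: ((float(parity(g)),),) for g in G}),  # sign
          (2, rep2)]                                  # 2-dimensional

def fourier_delta(g, n):
    """sum_pi (dim pi / |G|) Tr(pi(g) pi(n^{-1})); Peter-Weyl gives delta_{g,n}."""
    return sum(d * sum(mm(rep[g], rep[inv(n)])[i][i] for i in range(d)) / len(G)
               for d, rep in irreps)

assert all(abs(fourier_delta(g, n) - (1.0 if g == n else 0.0)) < 1e-9
           for g in G for n in G)
print("chargeon-sector Fourier inversion verified for S_3")
```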
\begin{lemma}\label{lem:quasi_basis} If $\xi'$ is a ribbon concatenated with $\xi$, then the associated ribbon operators in the quasiparticle basis satisfy \[ F_{\xi'\circ\xi}^{'{\hbox{{$\mathcal C$}}},\pi;u,v}=\sum_w F_{\xi'}^{'{\hbox{{$\mathcal C$}}},\pi;w,v}\circ F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,w}\] and are such that the nonabelian Fourier transform takes convolution to multiplication and vice versa, as it does in the abelian case. \end{lemma} In particular, we have the \textit{ribbon trace operators}, $W^{{\hbox{{$\mathcal C$}}},\pi}_\xi := \sum_u F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,u}$. Such ribbon trace operators create exactly quasiparticles of the type ${\hbox{{$\mathcal C$}}},\pi$ from the vacuum, meaning that \[P_{({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>}{\triangleleft}_{s_1}P_{({\hbox{{$\mathcal C$}}},\pi)}.\] We refer to \cite{CowMa} for more details and proofs of the above. \begin{example}\rm \label{exDS3} Our go-to example for our expositions will be $G=S_3$ generated by transpositions $u=(12), v=(23)$ with $w=(13)=uvu=vuv$. There are then 8 irreducible representations of $D(S_3)$ according to the choices ${\hbox{{$\mathcal C$}}}_0=\{e\}$, ${\hbox{{$\mathcal C$}}}_1=\{u,v,w\}$, ${\hbox{{$\mathcal C$}}}_2=\{uv,vu\}$ for which we pick representatives $c_0=e$, $q_e=e$, $c_1=u$, $q_u=e$, $q_v=w$, $q_w=v$ and $c_2=uv$ with $q_{uv}=e,q_{vu}=v$ (with the $c_i$ in the role of $c_0$ in the general theory). Here $G^{c_0}=S_3$ with 3 representations $\pi=$ trivial, sign and $W_2$ the 2-dimensional one given by (say) $\pi(u)=\sigma_3, \pi(v)=(\sqrt{3}\sigma_1-\sigma_3)/2$, $G^{c_1}=\{e,u\}=\mathbb{Z}_2$ with $\pi(u)=\pm1$ and $G^{c_2}=\{e,uv,vu\}=\mathbb{Z}_3$ with $\pi(uv)=1,\omega,\omega^2$ for $\omega=e^{2\pi\imath\over 3}$. 
See \cite{CowMa} for details and calculations of the associated projectors and some $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$ operators. \end{example} \section{Gapped Boundaries}\label{sec:gap} While $D(G)$ is the relevant algebra for the bulk of the model, our focus is on the boundaries. For these, we require a different class of algebras. \subsection{The boundary subalgebra $\Xi(R,K)$}\label{sec:xi} Let $K\subseteq G$ be a subgroup of a finite group $G$ and $G/K=\{gK\ |\ g\in G\}$ be the set of left cosets. It is not necessary in this section, but convenient, to fix a representative $r$ for each coset and let $R\subseteq G$ be the set of these, so there is a bijection between $R$ and $G/K$ whereby $r\leftrightarrow rK$. We assume that $e\in R$ and call such a subset (or section of the map $G\to G/K$) a {\em transversal}. Every element of $G$ factorises uniquely as $rx$ for $r\in R$ and $x\in K$, giving a coordinatisation of $G$ which we will use. Next, as we quotiented by $K$ from the right, we still have an action of $K$ from the left on $G/K$, which we denote ${\triangleright}$. By the above bijection, this equivalently means an action ${\triangleright}:K\times R\to R$ on $R$ which in terms of the factorisation is determined by $xry=(x{\triangleright} r)y'$, where we refactorise $xry$ in the form $RK$ for some $y'\in K$. There is much more information in this factorisation, as we will see in Section~\ref{sec:quasi}, but this action is all we need for now. Also note that we have chosen to work with left cosets so as to be consistent with the literature \cite{CCW,BSW}, but one could equally choose a right coset factorisation to build a class of algebras similar to those in \cite{KM2}. We consider the algebra $\mathbb{C}(G/K){>\!\!\!\triangleleft} \mathbb{C} K$ as the cross product by the above action. Using our coordinatisation, this becomes the following algebra.
\begin{definition}\label{defXi} $\Xi(R,K)=\mathbb{C}(R){>\!\!\!\triangleleft} \mathbb{C} K$ is generated by $\mathbb{C}(R)$ and $\mathbb{C} K$ with cross relations $x\delta_r=\delta_{x{\triangleright} r} x$. Over $\mathbb{C}$, this is a $*$-algebra with $(\delta_r x)^*=x^{-1}\delta_r=\delta_{x^{-1}{\triangleright} r}x^{-1}$. \end{definition} If we choose a different transversal $R$ then the algebra changes only by an isomorphism which maps the $\delta$-functions between the corresponding choices of representative. Of relevance to the applications, we also have: \begin{lemma} $\Xi(R,K)$ has the `integral element' \[\Lambda:=\Lambda_{\mathbb{C}(R)} \otimes \Lambda_{\mathbb{C} K} = \delta_e \frac{1}{|K|}\sum_{x\in K}x\] characterised by $\xi\Lambda={\epsilon}(\xi)\Lambda=\Lambda\xi$ for all $\xi\in \Xi$, and ${\epsilon}(\Lambda)=1$. \end{lemma} {\noindent {\bfseries Proof:}\quad } For $\xi=\delta_s y$, we check that \begin{align*} \xi\Lambda& = (\delta_s y)(\delta_e\frac{1}{|K|}\sum_{x\in K}x) = \delta_{s,y{\triangleright} e}\delta_s\frac{1}{|K|}\sum_{x\in K}yx= \delta_{s,e}\delta_e \frac{1}{|K|}\sum_{x\in K}x\\ &= {\epsilon}(\xi)\Lambda = \frac{1}{|K|}\sum_{x\in K}\delta_{e,x{\triangleright} s}\delta_e xy = \frac{1}{|K|}\sum_{x\in K}\delta_{s,e}\delta_e x = \Lambda\xi, \end{align*} where the second line computes $\Lambda\xi$ from the right, using that $x{\triangleright} s=e$ iff $s=e$ followed by the change of variables $x\mapsto xy^{-1}$. And clearly, ${\epsilon}(\Lambda) = \delta_{e,e} {|K|\over |K|} = 1$. \endproof As a cross product algebra, we can take the same approach as with $D(G)$ to the classification of its irreps: \begin{lemma} Irreps of $\Xi(R,K)$ are classified by pairs $(\hbox{{$\mathcal O$}},\rho)$ where $\hbox{{$\mathcal O$}}\subseteq R$ is an orbit under the action ${\triangleright}$ and $\rho$ is an irrep of the isotropy group $K^{r_0}:=\{x\in K\ |\ x{\triangleright} r_0=r_0\}$.
Here we fix a base point $r_0\in \hbox{{$\mathcal O$}}$ as well as $\kappa: \hbox{{$\mathcal O$}}\to K $ a choice of lift such that \[ \kappa_r{\triangleright} r_0 = r,\quad\forall r\in \hbox{{$\mathcal O$}},\quad \kappa_{r_0}=e.\] Then \[ V_{\hbox{{$\mathcal O$}},\rho}=\mathbb{C} \hbox{{$\mathcal O$}}\mathop{{\otimes}} V_\rho,\quad \delta_r(s\mathop{{\otimes}} v)=\delta_{r,s}s\mathop{{\otimes}} v,\quad x.(s\mathop{{\otimes}} v)=x{\triangleright} s\mathop{{\otimes}}\zeta_s(x).v,\quad \zeta_s(x)=\kappa^{-1}_{x{\triangleright} s}x\kappa_s\] for $v\in V_\rho$, the carrier space for $\rho$, and \[ \zeta: \hbox{{$\mathcal O$}}\times K\to K^{r_0},\quad \zeta_r(x)=\kappa_{x{\triangleright} r}^{-1}x\kappa_r.\] \end{lemma} {\noindent {\bfseries Proof:}\quad } One can check that $\zeta_r(x)$ lives in $K^{r_0}$, \[ \zeta_r(x){\triangleright} r_0=(\kappa_{x{\triangleright} r}^{-1}x\kappa_r){\triangleright} r_0=\kappa_{x{\triangleright} r}^{-1}{\triangleright}(x{\triangleright} r)=\kappa_{x{\triangleright} r}^{-1}{\triangleright}(\kappa_{x{\triangleright} r}{\triangleright} r_0)=r_0\] and the cocycle property \[ \zeta_r(xy)=\kappa^{-1}_{x{\triangleright} y{\triangleright} r}x \kappa_{y{\triangleright} r}\kappa^{-1}_{y{\triangleright} r}y \kappa_r=\zeta_{y{\triangleright} r}(x)\zeta_r(y),\] from which it is easy to see that $V_{\hbox{{$\mathcal O$}},\rho}$ is a representation, \[ x.(y.(s\mathop{{\otimes}} v))=x.(y{\triangleright} s\mathop{{\otimes}} \zeta_s(y). v)=x{\triangleright}(y{\triangleright} s)\mathop{{\otimes}}\zeta_{y{\triangleright} s}(x)\zeta_s(y).v=xy{\triangleright} s\mathop{{\otimes}}\zeta_s(xy).v=(xy).(s\mathop{{\otimes}} v),\] \[ x.(\delta_r.(s\mathop{{\otimes}} v))=\delta_{r,s}x{\triangleright} s\mathop{{\otimes}} \zeta_s(x). 
v= \delta_{x{\triangleright} r,x{\triangleright} s}x{\triangleright} s\mathop{{\otimes}}\zeta_s(x).v=\delta_{x{\triangleright} r}.(x.(s\mathop{{\otimes}} v)).\] One can show that $V_{\hbox{{$\mathcal O$}},\rho}$ are irreducible and do not depend up to isomorphism on the choice of $r_0$ or $\kappa_r$.\endproof In the $*$-algebra case as here, we obtain a unitary representation if $\rho$ is unitary. One can also show that all irreps can be obtained this way. In fact the algebra $\Xi(R,K)$ is semisimple and has a block associated to the $V_{\hbox{{$\mathcal O$}},\rho}$. \begin{lemma}\label{Xiproj} $\Xi(R,K)$ has a complete orthogonal set of central idempotents \[ P_{(\hbox{{$\mathcal O$}},\rho)}={\dim V_\rho\over |K^{r_0}|}\sum_{r\in\hbox{{$\mathcal O$}}}\sum_{n\in K^{r_0}} \mathrm{ Tr}_{\rho}(n^{-1})\delta_r\mathop{{\otimes}} \kappa_r n \kappa_r^{-1}.\] \end{lemma} {\noindent {\bfseries Proof:}\quad } The proofs are similar to those for $D(G)$ in \cite{CowMa}. That we have a projection is \begin{align*}P_{(\hbox{{$\mathcal O$}},\rho)}^2&={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,n\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(n^{-1})\sum_{r,s\in \hbox{{$\mathcal O$}}}(\delta_r\mathop{{\otimes}} \kappa_rm\kappa_r^{-1})(\delta_s\mathop{{\otimes}}\kappa_sn\kappa_s^{-1})\\ &={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,n\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(n^{-1})\sum_{r,s\in \hbox{{$\mathcal O$}}}\delta_r\delta_{r,s}\mathop{{\otimes}} \kappa_rm\kappa_r^{-1}\kappa_s n\kappa_s^{-1}\\ &={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,m'\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(m m'{}^{-1})\sum_{r\in \hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} \kappa_rm'\kappa_r^{-1}= P_{(\hbox{{$\mathcal O$}},\rho)} \end{align*} where we used $r=\kappa_r m\kappa_r^{-1}{\triangleright} s$ iff $s=\kappa_r m^{-1}\kappa_r^{-1}{\triangleright} r=\kappa_r m^{-1}{\triangleright} r_0=\kappa_r{\triangleright} r_0=r$.
We then changed $mn=m'$ as a new variable and used the orthogonality formula for characters on $K^{r_0}$. A similar computation shows that distinct projectors are mutually orthogonal. The sum of projectors is 1 since \begin{align*}\sum_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}=\sum_{\hbox{{$\mathcal O$}}, r\in \hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} \kappa_r\sum_{\rho\in \hat{K^{r_0}}} \left({\dim V_\rho\over |K^{r_0}|}\sum_{n\in K^{r_0}} \mathrm{ Tr}_{\rho}(n^{-1}) n\right) \kappa_r^{-1}=\sum_{\hbox{{$\mathcal O$}},r\in\hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} 1=1, \end{align*} where the bracketed expression is the projector $P_\rho$ for $\rho$ in the group algebra of $K^{r_0}$, and these sum to 1 by the Peter-Weyl decomposition of the latter. \endproof \begin{remark}\rm In the previous literature, the irreps have been described using double cosets and representatives thereof \cite{CCW}. In fact a double coset in ${}_KG_K$ is an orbit for the left action of $K$ on $G/K$ and hence has the form $\hbox{{$\mathcal O$}} K$ corresponding to an orbit $\hbox{{$\mathcal O$}}\subset R$ in our approach. We will say more about this later, in Proposition~\ref{prop:mon_equiv}. \end{remark} An important question for the physics is how representations in the bulk relate to those on the boundary. This is afforded by functors in the two directions. Here we give a much more direct approach to this issue as follows. \begin{proposition}\label{Xisub} There is an inclusion of algebras $i:\Xi(R,K)\hookrightarrow D(G)$ \[ i(x)=x,\quad i(\delta_r)=\sum_{x\in K} \delta_{rx}.\] The pull-back or restriction of a $D(G)$-module $W$ to a $\Xi$-module $i^*(W)$ is simply for $\xi\in \Xi$ to act by $i(\xi)$. Going the other way, the induction functor sends a $\Xi$-module $V$ to a $D(G)$-module $D(G)\mathop{{\otimes}}_\Xi V$, where $\xi\in \Xi$ right acts on $D(G)$ by right multiplication by $i(\xi)$. These two functors are adjoint.
\end{proposition} {\noindent {\bfseries Proof:}\quad } We just need to check that $i$ respects the relations of $\Xi$. Thus, \begin{align*} i(\delta_r)i(\delta_s)&=\sum_{x,y\in K}\delta_{rx}\delta_{sy}=\sum_{x\in K}\delta_{r,s}\delta_{rx}=i(\delta_r\delta_s), \\ i(x)i(\delta_r)&=\sum_{y\in K}x\delta_{ry}=\sum_{y\in K}\delta_{xryx^{-1}}x=\sum_{y\in K}\delta_{(x{\triangleright} r)x'yx^{-1}}x=\sum_{y'\in K}\delta_{(x{\triangleright} r)y'}x=i(\delta_{x{\triangleright} r} x),\end{align*} as required. For the first line, we used the unique factorisation $G=RK$ to break down the $\delta$-functions. For the second line, we use this in the form $xr=(x{\triangleright} r)x'$ for some $x'\in K$ and then changed variables from $y$ to $y'=x'yx^{-1}$. The rest follows as for any algebra inclusion. \endproof In fact, $\Xi$ is a quasi-bialgebra and, at least when $(\ )^R$ is bijective, a quasi-Hopf algebra, as we see in Section~\ref{sec:quasi}. In the latter case, it has a quantum double $D(\Xi)$ which contains $\Xi$ as a sub-quasi-Hopf algebra. Moreover, it can be shown that $D(\Xi)$ is a `Drinfeld cochain twist' of $D(G)$, which implies it has the same algebra as $D(G)$. We do not need details, but this is the abstract reason for the above inclusion. (An explicit proof of this twisting result in the usual Hopf algebra case with $R$ a group is in \cite{BGM}.) Meanwhile, the statement that the two functors in the proposition are adjoint is that \[ \hom_{D(G)}(D(G)\mathop{{\otimes}}_\Xi V,W)=\hom_\Xi(V, i^*(W))\] for all $\Xi$-modules $V$ and all $D(G)$-modules $W$. These functors do not take irreps to irreps and of particular interest are the multiplicities for the decompositions back into irreps, i.e. if $V_i, W_a$ are respective irreps and $D(G)\mathop{{\otimes}}_\Xi V_i=\oplus_{a} n^i{}_a W_a$ then \[ {\rm dim}(\hom_{D(G)}(D(G)\mathop{{\otimes}}_\Xi V_i,W_a))={\rm dim}(\hom_\Xi(V_i,i^*(W_a)))\] and hence $i^*(W_a)=\oplus_i n^i_a V_i$.
This explains one of the observations in \cite{CCW}. It remains to give a formula for these multiplicities, but here we were not able to reproduce the formula in \cite{CCW}. Our approach goes via a general lemma as follows. \begin{lemma}\label{lemfrobn} Let $i:A\hookrightarrow B$ be an inclusion of finite-dimensional semisimple algebras and $\int$ the unique symmetric special Frobenius linear form on $B$ such that $\int 1=1$. Let $V_i$ be an irrep of $A$ and $W_a$ an irrep of $B$. Then the multiplicity of $V_i$ in the pull-back $i^*(W_a)$ (which is the same as the multiplicity of $W_a$ in $B\mathop{{\otimes}}_A V_i$) is given by \[ n^i{}_a={\dim(B)\over\dim(V_i)\dim(W_a)}\int i(P_i)P_a,\] where $P_i\in A$ and $P_a\in B$ are the associated central idempotents. Moreover, $i(P_i)P_a =0$ if and only if $n^i_a = 0$. \end{lemma} {\noindent {\bfseries Proof:}\quad } Recall that a linear map $\int:B\to \mathbb{C}$ is Frobenius if the bilinear form $(b,c):=\int bc$ is nondegenerate, and is symmetric if this bilinear form is symmetric. Also, let $g=g^1\mathop{{\otimes}} g^2\in B\mathop{{\otimes}} B$ (in a notation with the sum of such terms understood) be the associated `metric' such that $(\int b g^1 )g^2=b=g^1\int g^2b$ for all $b$ (it is the inverse matrix in a basis of the algebra). We say that the Frobenius form is special if the algebra product $\cdot$ obeys $\cdot(g)=1$. It is well-known that there is a unique symmetric special Frobenius form up to scale, given by the trace in the left regular representation; see \cite{MaRie:spi} for a recent study. In our case, over $\mathbb{C}$, we also know that a finite-dimensional semisimple algebra $B$ is a direct sum of matrix algebras ${\rm End}(W_a)$ associated to the irreps $W_a$ of $B$.
Then \begin{align*} \int i(P_i)P_a&={1\over\dim(B)}\sum_{\alpha,\beta}\<f^\alpha\mathop{{\otimes}} e_\beta,i(P_i)P_a (e_\alpha\mathop{{\otimes}} f^\beta)\>\\ &={1\over\dim(B)}\sum_{\alpha}\dim(W_a)\<f^\alpha, i(P_i)e_\alpha\>={\dim(W_a)\dim(V_i)\over\dim(B)} n^i{}_a, \end{align*} where $\{e_\alpha\}$ is a basis of $W_a$ and $\{f^\beta\}$ is a dual basis, and $P_a$ acts as the identity on $\mathrm{ End}(W_a)$ and zero on the other blocks. We then used that if $i^*(W_a)=\oplus_i {n^i{}_a}V_i$ as $A$-modules, then $i(P_i)$ just picks out the $V_i$ components where $P_i$ acts as the identity. For the last part, the forward direction is immediate given the first part of the lemma. For the other direction, suppose $n^i_a = 0$ so that $i^*(W_a)=\oplus_j n^j_aV_j$ with $j\ne i$ running over the other irreps of $A$. Now, we can view $P_{a}\in W_{a}\mathop{{\otimes}} W_{a}^*$ (as the identity element) and left multiplication by $i(P_i)$ is the same as $P_i$ acting on $P_{a}$ viewed as an element of $i^*(W_{a})\mathop{{\otimes}} W_{a}^*$, which is therefore zero.\endproof We apply Lemma~\ref{lemfrobn} in our case of $A=\Xi$ and $B=D(G)$, where \[ \dim(V_i)=|\hbox{{$\mathcal O$}}|\dim(V_\rho), \quad \dim(W_a)=|{\hbox{{$\mathcal C$}}}|\dim(W_\pi)\] with $i=(\hbox{{$\mathcal O$}},\rho)$ as described above and $a=({\hbox{{$\mathcal C$}}},\pi)$ as described in Section~\ref{sec:bulk}.
\begin{proposition}\label{nformula} For the inclusion $i:\Xi\hookrightarrow D(G)$ in Proposition~\ref{Xisub}, the multiplicities for restriction and induction as above are given by \[ n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}= {|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|} \sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop r^{-1}c\in K}} |K^{r,c}|\sum_{\tau\in \hat{K^{r,c}} } n_{\tau,\tilde\rho|_{K^{r,c}}} n_{\tau, \tilde\pi|_{K^{r,c}}},\quad K^{r,c}=K^r\cap G^c,\] where $\tilde \pi(m)=\pi(q_c^{-1}mq_c)$ and $\tilde\rho(m)=\rho(\kappa_r^{-1}m\kappa_r)$ are the corresponding representations of $G^c$ and $K^r$ respectively, decomposing as $K^{r,c}$-representations as \[ \tilde\rho|_{K^{r,c}}{\cong}\oplus_\tau n_{\tau,\tilde\rho|_{K^{r,c}}}\tau,\quad \tilde\pi|_{K^{r,c}}{\cong}\oplus_\tau n_{\tau,\tilde\pi|_{K^{r,c}}}\tau.\] \end{proposition} {\noindent {\bfseries Proof:}\quad } We include the projector from Lemma~\ref{Xiproj} as \[ i(P_{(\hbox{{$\mathcal O$}},\rho)})={{\rm dim}(V_\rho)\over |K^{r_0}|}\sum_{r\in \hbox{{$\mathcal O$}}, x\in K}\sum_{m\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\delta_{rx}\mathop{{\otimes}} \kappa_r m\kappa_r^{-1}\] and multiply this by $P_{({\hbox{{$\mathcal C$}}},\pi)}$ from (\ref{Dproj}). In the latter, we write $c=sy$ for the factorisation of $c$. Then when we multiply these out, for $(\delta_{rx}\mathop{{\otimes}} \kappa_r m \kappa_r^{-1})(\delta_{c}\mathop{{\otimes}} q_c n q_c^{-1})$ we will need $\kappa_r m\kappa_r^{-1}{\triangleright} s=r$ or equivalently $s=\kappa_r m^{-1}\kappa_r^{-1}{\triangleright} r=r$ so we are actually summing not over $c$ but over $y\in K$ such that $ry\in {\hbox{{$\mathcal C$}}}$. Also then $x$ is uniquely determined in terms of $y$.
Hence \[ i(P_{(\hbox{{$\mathcal O$}},\rho)})P_{({\hbox{{$\mathcal C$}}},\pi)}={{\rm dim}(V_\rho){\rm dim}(W_\pi)\over |K^{r_0}| |G^{c_0}|}\sum_{m\in K^{r_0}, n\in G^{c_0}}\sum_{r\in \hbox{{$\mathcal O$}}, y\in K | ry\in{\hbox{{$\mathcal C$}}}} \mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\pi(n^{-1}) \delta_{rx}\mathop{{\otimes}} \kappa_r m\kappa_r^{-1} q_c nq_c^{-1}.\] Now we apply the integral of $D(G)$, $\int\delta_g\mathop{{\otimes}} h=\delta_{h,e}$ which requires \[ n=q_c^{-1}\kappa_r m^{-1}\kappa_r^{-1}q_c\] and $x=y$ for $n\in G^{c_0}$ given that $c=ry$. We refer to this condition on $y$ as $(\star)$. Remembering that $\int$ is normalised so that $\int 1=|G|$, we have from the lemma \begin{align*}n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}&={|G|\over\dim(V_i)\dim(W_a)}\int i(P_{(\hbox{{$\mathcal O$}},\rho)})P_{({\hbox{{$\mathcal C$}}},\pi)}\\ &={|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|}\sum_{m\in K^{r_0}}\sum_{{r\in \hbox{{$\mathcal O$}}, y\in K\atop (\star), ry\in{\hbox{{$\mathcal C$}}}}} \mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\pi(q_{ry}^{-1}\kappa_r m\kappa_r^{-1}q_{ry}) \\ &={|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|}\sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop r^{-1}c\in K}}\sum_{m'\in K^r\cap G^c} \mathrm{ Tr}_\rho(\kappa_r^{-1}m'{}^{-1}\kappa_r)\mathrm{ Tr}_\pi(q_{c}^{-1} m' q_{c}), \end{align*} where we compute in $G$ and view $(\star)$ as the change of variables $m':=\kappa_r m \kappa_r^{-1}\in G^c$. We then use the group orthogonality formula \[ \sum_{m\in K^{r,c}}\mathrm{ Tr}_{\tau}(m^{-1})\mathrm{ Tr}_{\tau'}(m)=\delta_{\tau,\tau'}|K^{r,c}| \] for any irreps $\tau,\tau'$ of the group \[ K^{r,c}:=K^r\cap G^c=\{x\in K\ |\ x{\triangleright} r=r,\quad x c x^{-1}=c\}\] to obtain the formula stated. \endproof This simplifies in four (overlapping) special cases as follows.
\noindent{(i) $V_i$ trivial: } \[ n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G|\over |{\hbox{{$\mathcal C$}}}||K||G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap K}\sum_{m\in K\cap G^c}\mathrm{ Tr}_\pi(q_c^{-1}mq_c)={|G| \over |{\hbox{{$\mathcal C$}}}| |K||G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap K} |K^c| n_{1,\tilde\pi}\] as $\rho=1$ implies $\tilde\rho=1$ and forces $\tau=1$. Here $K^c$ is the centraliser of $c\in K$. If $n_{1,\tilde\pi}$ is independent of the choice of $c$ then we can simplify this further as \[ n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G| |({\hbox{{$\mathcal C$}}}\cap K)/K|\over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|} n_{1,\pi|_{K^{c_0}}}\] using the orbit-counting lemma, where $K$ acts on ${\hbox{{$\mathcal C$}}}\cap K$ by conjugation. \noindent{(ii) $W_a$ trivial:} \[ n^{(\hbox{{$\mathcal O$}},\rho)}_{(\{e\},1)}= {|G|\over |\hbox{{$\mathcal O$}}||K^{r_0}||G|}\sum_{r\in \hbox{{$\mathcal O$}}\cap K}\sum_{m\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})=\begin{cases} 1 & {\rm if\ }\hbox{{$\mathcal O$}}, \rho\ {\rm trivial}\\ 0 & {\rm else}\end{cases} \] as $\hbox{{$\mathcal O$}}\cap K=\{e\}$ if $\hbox{{$\mathcal O$}}=\{e\}$ (but is otherwise empty) and in this case only $r=e$ contributes. This is consistent with the fact that if $W_a$ is the trivial representation of $D(G)$ then its pull back is also trivial and hence contains only the trivial representation of $\Xi$. \noindent{(iii) Fluxion sector:} \[ n^{(\hbox{{$\mathcal O$}},1)}_{({\hbox{{$\mathcal C$}}},1)}= {|G|\over |\hbox{{$\mathcal O$}}||{\hbox{{$\mathcal C$}}}||K^{r_0}| |G^{c_0}|} \sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop r^{-1}c\in K}} |K^r\cap G^c|.\] \noindent{(iv) Chargeon sector: } \[ n^{(\{e\},\rho)}_{(\{e\},\pi)}= n_{\rho, \pi|_{K}},\] where $\rho,\pi$ are arbitrary irreps of $K,G$ respectively and only $r=c=e$ are allowed so $K^{r,c}=K$, and then only $\tau=\rho$ contributes. 
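Case (iv) can be checked directly by character theory. The following Python sketch (our own illustration, not part of the model; the class labels are an assumed encoding) computes $n_{\rho,\pi|_K}$ for $G=S_3$ and $K=\{e,u\}\cong\mathbb{Z}_2$, $u=(12)$, by the standard inner product of characters.

```python
from fractions import Fraction

# Character values of the three irreps of S_3 on conjugacy-class
# representatives, and of the two irreps of K = {e, u} with u = (12).
chi_S3 = {
    "trivial": {"e": 1, "transposition": 1, "3-cycle": 1},
    "sign":    {"e": 1, "transposition": -1, "3-cycle": 1},
    "W_2":     {"e": 2, "transposition": 0, "3-cycle": -1},
}
chi_K = {+1: {"e": 1, "u": 1}, -1: {"e": 1, "u": -1}}
K_class = {"e": "e", "u": "transposition"}  # S_3-class of each element of K

def chargeon_multiplicity(rho, pi):
    # n_{rho, pi|_K} = (1/|K|) sum_{k in K} chi_rho(k) chi_pi(k)
    # (all characters here are real, so no conjugation is needed)
    return Fraction(sum(chi_K[rho][k] * chi_S3[pi][K_class[k]]
                        for k in ("e", "u")), 2)
```

The six values reproduce the chargeon-sector (upper-left) block of the table worked out in Example~\ref{exS3n}.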
\begin{example}\label{exS3n}\rm (i) We take $G=S_3$, $K=\{e,u\}=\mathbb{Z}_2$, where $u=(12)$. Here $G/K$ consists of \[ G/K=\{\{e, u\}, \{w, uv\}, \{v, vu\}\}\] and our standard choice of $R$ will be $R=\{e,uv, vu\}$, where we take one from each coset (but any other transversal will have the same irreps and their decompositions). This leads to 3 irreps of $\Xi(R,K)$ as follows. In $R$, we have two orbits $\hbox{{$\mathcal O$}}_0=\{e\}$, $\hbox{{$\mathcal O$}}_1=\{uv,vu\}$ and we choose representatives $r_0=e,\kappa_e=e$, $r_1=uv, \kappa_{uv}=e, \kappa_{vu}=u$ since $u{\triangleright} (uv)=vu$ for the two cases (here $r_1$ was denoted $r_0$ in the general theory and is the choice for $\hbox{{$\mathcal O$}}_1$). We also have $u{\triangleright}(vu)=uv$. Note that it happens that these orbits are also conjugacy classes but this is an accident of $S_3$ and not true for $S_4$. We have $K^{r_0}=K=\mathbb{Z}_2$ with representations $\rho(u)=\pm1$ and $K^{r_1}=\{e\}$ with only the trivial representation. (ii) For $D(S_3)$, we have the 8 irreps in Example~\ref{exDS3} and hence there is a $3\times 8$ table of the $\{n^i{}_a\}$. We can easily compute some of the special cases from the above. For example, the trivial $\pi$ restricted to $K$ is $\rho=1$, the sign representation restricted to $K$ is the $\rho=-1$ representation, the $W_2$ restricted to $K$ is $1\oplus -1$, which gives the upper left $2\times 3$ submatrix for the chargeon sector. Another 6 entries (four new ones) are given from the fluxion formula. We also have ${\hbox{{$\mathcal C$}}}_2\cap K=\emptyset$ so that the latter part of the first two rows is zero by our first special case formula. For ${\hbox{{$\mathcal C$}}}_1,\pm1$ in the first row, we have ${\hbox{{$\mathcal C$}}}_1\cap K=\{u\}$ with trivial action of $K$, so just one orbit. This gives us a nontrivial result in the $+1$ case and 0 in the $-1$ case. 
The story for ${\hbox{{$\mathcal C$}}}_1,\pm1$ in the second row follows the same derivation, but needs $\tau=-1$ and hence $\pi=-1$ for the nonzero case. In the third row with ${\hbox{{$\mathcal C$}}}_2,\pi$, we have $K^{r}=\{e\}$ so $K^{r,c}=\{e\}$ and we only have $\tau=1=\rho$ as well as $\tilde\pi=1$ independently of $\pi$ as this is 1-dimensional. So both $n$ factors in the formula in Proposition~\ref{nformula} are 1. In the sum over $r,c$, we need $c=r$ so we sum over 2 possibilities, giving a nontrivial result as shown. For ${\hbox{{$\mathcal C$}}}_1,\pi$, the first part goes the same way and we similarly have $c$ determined from $r$, so again two contributions in the sum, giving the answer shown independently of $\pi$. Finally, for ${\hbox{{$\mathcal C$}}}_0,\pi$ we have $r\in\{uv,vu\}$ and $c=e$, and can never meet the condition $r^{-1}c\in K$. So these entries are all $0$. Thus, Proposition~\ref{nformula} in this example tells us: \[\begin{array}{c|c|c|c|c|c|c|c|c} n^i{}_a & {\hbox{{$\mathcal C$}}}_0,1 & {\hbox{{$\mathcal C$}}}_0,{\rm sign} & {\hbox{{$\mathcal C$}}}_0,W_2 & {\hbox{{$\mathcal C$}}}_1, 1& {\hbox{{$\mathcal C$}}}_1,-1 & {\hbox{{$\mathcal C$}}}_2,1& {\hbox{{$\mathcal C$}}}_2,\omega & {\hbox{{$\mathcal C$}}}_2,\omega^2\\ \hline\ \hbox{{$\mathcal O$}}_0,1&1 & 0 & 1 &1 & 0& 0 &0 &0 \\ \hline \hbox{{$\mathcal O$}}_0,-1&0 & 1&1& 0& 1&0 &0 & 0\\ \hline \hbox{{$\mathcal O$}}_1,1&0 &0&0 & 1& 1 &1 &1 & 1 \end{array}\] One can check for consistency that for each $W_a$, $\dim(W_a)$ is the sum of the dimensions of the $V_i$ that it contains, which determines one row from the other two. \end{example} \subsection{Boundary lattice model}\label{sec:boundary_lat} Consider a vertex on the lattice $\Sigma$.
Fixing a subgroup $K \subseteq G$, we define an action of $\mathbb{C} K$ on $\hbox{{$\mathcal H$}}$ by \begin{equation}\label{actXi0}\tikzfig{CaK_vertex_action}\end{equation} One can see that this is an action as it is a tensor product of representations on each edge, or simply because it is the restriction to $K$ of the vertex action of $G$ in the bulk. Next, we define the action of $\mathbb{C} (R)$ at a face relative to a cilium, \begin{equation}\label{actXi}\tikzfig{CGK_face_action}\end{equation} with a clockwise rotation. That this is indeed an action is also easy to check explicitly, recalling that, for any $r, r'\in R$, we have $rK = r'K$ if $r= r'$ and $rK \cap r'K = \emptyset$ otherwise. These actions define a representation of $\Xi(R,K)$, which is just the bulk $D(G)$ action restricted to $\Xi(R,K)\subseteq D(G)$ by the inclusion in Proposition~\ref{Xisub}. This says that $x\in K$ acts as in $G$ and $\mathbb{C}(R)$ acts on faces by the $\mathbb{C}(G)$ action after sending $\delta_r \mapsto \sum_{a\in rK}\delta_a$. To connect the above representation to the topic at hand, we now define what we mean by a boundary. \subsubsection{Smooth boundaries} Consider the lattice in the half-plane for simplicity, \[\tikzfig{smooth_halfplane}\] where each solid black arrow still carries a copy of $\mathbb{C} G$ and ellipses indicate the lattice extending infinitely. The boundary runs along the left hand side and we refer to the rest of the lattice as the `bulk'. The grey dashed edges and vertices are there to indicate empty space and the lattice borders the edge with faces; we will call this case a `smooth' boundary. There is a site $s_0$ indicated at the boundary. There is an action of $\mathbb{C} K$ at the boundary vertex associated to $s_0$, identical to the action of $\mathbb{C} K$ defined above but with the left edge undefined. Similarly, there is an action of $\mathbb{C}(R)$ at the face associated to $s_0$.
However, this is more complicated, as the face has three edges undefined and the action must be defined slightly differently from that in the bulk: \[\tikzfig{smooth_face_action}\] \[\tikzfig{smooth_face_actionB}\] where the action is given a superscript ${\triangleright}^b$ to differentiate it from the actions in the bulk. In the first case, we follow the same clockwise rotation rule but skip over the undefined values on the grey edges, but for the second case we go round anticlockwise. The resulting rule thus depends on whether the cilium is associated to the top or the bottom of the edge. It is easy to check that this defines a representation of $\Xi(R,K)$ on $\hbox{{$\mathcal H$}}$ associated to each smooth boundary site, such as $s_0$, and that the actions of $\mathbb{C}(R)$ have been chosen such that this holds. A similar principle holds for ${\triangleright}^b$ in other orientations of the boundary. The integral actions at a boundary vertex $v$ and at a face $s_0=(v,p)$ of a smooth boundary are then \[ A^b_1(v):=\Lambda_{\mathbb{C} K}{\triangleright}^b_v = {1\over |K|}\sum_k k{\triangleright}^b_v,\quad B^b_1(p):=\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{p} = \delta_e{\triangleright}^b_{p},\] where the superscript $b$ and subscript $1$ label that these are at a smooth boundary. We have the convenient property that \[\tikzfig{smooth_face_integral}\] so both the vertex and face integral actions at a smooth boundary each depend only on the vertex and face respectively, not the precise cilium, similar to the integral actions in the bulk. \begin{remark}\rm There is similarly an action of $\mathbb{C}(G) {>\!\!\!\triangleleft} \mathbb{C} K \subseteq D(G)$ on $\hbox{{$\mathcal H$}}$ at each site in the next layer into the bulk, where the site has the vertex at the boundary but an internal face. We mention this for completeness, and because using this fact it is easy to show that \[A_1^b(v)B(p) = B(p)A_1^b(v),\] where $B(p)$ is the usual integral action in the bulk.
\end{remark} \begin{remark}\rm In \cite{BSW} it is claimed that one can similarly introduce actions at smooth boundaries defined not only by $R$ and $K$ but also a 2-cocycle $\alpha$. This makes some sense categorically, as the module categories of $\hbox{{$\mathcal M$}}^G$ may also include such a 2-cocycle, which enters by way of a \textit{twisted} group algebra $\mathbb{C}_\alpha K$ \cite{Os2}. However, in Figure 6 of \cite{BSW} one can see that when the cocycle $\alpha$ is introduced all edges on the boundary are assumed to be copies of $\mathbb{C} K$, rather than $\mathbb{C} G$. On closer inspection, it is evident that this means that the action on faces of $\delta_e\in\mathbb{C}(R)$ will always yield 1, and the action of any other basis element of $\mathbb{C}(R)$ will yield 0. Similarly, the action on vertices is defined to still be an action of $\mathbb{C} K$, not $\mathbb{C}_\alpha K$. Thus, the excitations on this boundary are restricted to only the representations of $\mathbb{C} K$, without either $\mathbb{C}(R)$ or $\alpha$ appearing, which appears to defeat the purpose of the definition. It is not obvious to us that a cocycle can be included along these lines in a consistent manner. \end{remark} In quantum computational terms, in addition to the measurements in the bulk we now measure the operator $\sum_{\hbox{{$\mathcal O$}},\rho}p_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b$ for distinct coefficients $p_{\hbox{{$\mathcal O$}},\rho} \in \mathbb{R}$ at all sites along the boundary. \subsubsection{Rough boundaries} We now consider the half-plane lattice with a different kind of boundary, \[\tikzfig{rough_halfplane}\] This time, there is an action of $\mathbb{C} K$ at the exterior vertex and an action of $\mathbb{C}(R)$ at the face at the boundary with an edge undefined. 
Again, the former is just the usual action of $\mathbb{C} K$ with three edges undefined, but the action of $\mathbb{C}(R)$ requires more care and is defined as \[\tikzfig{rough_face_action}\] \[\tikzfig{rough_face_actionB}\] \[\tikzfig{rough_face_actionC}\] \[\tikzfig{rough_face_actionD}\] All but the second action are just clockwise rotations as in the bulk, but with the greyed-out edge missing from the $\delta$-function. The second action goes counterclockwise in order to have an associated representation of $\Xi(R,K)$ at the bottom left. We have similar actions for other orientations of the lattice. \begin{remark}\rm Although one can check that one has a representation of $\Xi(R,K)$ at each site using these actions and the action of $\mathbb{C} K$ defined before, this requires $g_1$ and $g_2$ on opposite sides of the $\delta$-function, and $g_1$ and $g_3$ on opposite sides, respectively for the last two actions. This means that there is no way to get $\delta_e{\triangleright}^b$ to always be invariant under choice of site in the face. Indeed, we have not been able to reproduce the implicit claim in \cite{CCW} that $\delta_e{\triangleright}^b$ at a rough boundary can be defined in a way that depends only on the face. \end{remark} The integral actions at a boundary vertex $v$ and at a site $s_0=(v,p)$ of a rough boundary are then \[ A_2^b(v):=\Lambda_{\mathbb{C} K}{\triangleright}^b_v = {1\over |K|}\sum_k k{\triangleright}^b_v,\quad B_2^b(v,p):=\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{s_0} = \delta_e{\triangleright}_{s_0}^b \] where the superscript $b$ and subscript $2$ label that these are at a rough boundary. In computational terms, we measure the operator $\sum_{\hbox{{$\mathcal O$}},\rho}p_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b$ at each site along the boundary, as with smooth boundaries. 
Unlike the smooth boundary case, there is no action of, say, $\mathbb{C} (R){>\!\!\!\triangleleft} \mathbb{C} G$ at each site in the next layer into the bulk, with a boundary face but interior vertex. In particular, we do not have $B_2^b(v,p)A(v) = A(v)B_2^b(v,p)$ in general, but we can still consistently define a Hamiltonian. When the action at $v$ is restricted to $\mathbb{C} K$ we recover an action of $\Xi(R,K)$ again. As with the bulk, the Hamiltonian incorporating the boundaries uses the actions of the integrals. We can accommodate both rough and smooth boundaries into the Hamiltonian. Let $V,P$ be the set of vertices and faces in the bulk, $S_1$ the set of all sites $(v,p)$ at smooth boundaries, and $S_2$ the same for rough boundaries. Then \begin{align*}H&=\sum_{v_i\in V} (1-A(v_i)) + \sum_{p_i\in P} (1-B(p_i)) \\ &\quad + \sum_{s_{b_1} \in S_1} \left((1 - A_1^b(s_{b_1})) + (1 - B_1^b(s_{b_1}))\right) + \sum_{s_{b_2} \in S_2} \left((1 - A_2^b(s_{b_2})) + (1 - B_2^b(s_{b_2}))\right).\end{align*} We can pick out two vacuum states immediately: \begin{equation}\label{eq:vac1}|{\rm vac}_1\> := \prod_{v_i,s_{b_1},s_{b_2}}A(v_i)A_1^b(s_{b_1})A_2^b(s_{b_2})\bigotimes_E e\end{equation} and \begin{equation}\label{eq:vac2}|{\rm vac}_2\> := \prod_{p_i,s_{b_1},s_{b_2}}B(p_i)B_1^b(s_{b_1})B_2^b(s_{b_2})\bigotimes_E \sum_{g \in G} g\end{equation} where the tensor product runs over all edges in the lattice. \begin{remark}\rm There is no need for two different boundaries to correspond to the same subgroup $K$, and the Hamiltonian can be defined accordingly. This principle is necessary when performing quantum computation by braiding `defects', i.e. finite holes in the lattice, on the toric code \cite{FMMC}, and also for the lattice surgery in Section~\ref{sec:patches}. We do not write out this Hamiltonian in all its generality here, but its form is obvious. \end{remark} \subsection{Quasiparticle condensation} Quasiparticles on the boundary correspond to irreps of $\Xi(R,K)$.
It is immediate from Section~\ref{sec:xi} that when $\hbox{{$\mathcal O$}} = \{e\}$, we must have $r_0 = e, K^{r_0} = K$. We may choose the trivial representation of $K$ and then we have $P_{e,1} = \Lambda_{\mathbb{C}(R)} \otimes \Lambda_{\mathbb{C} K}$. We say that this particular measurement outcome corresponds to the absence of nontrivial quasiparticles, as the states yielding this outcome are precisely the locally vacuum states with respect to the Hamiltonian. This set of quasiparticles on the boundary will not in general be the same as quasiparticles defined in the bulk, as ${}_{\Xi(R,K)}\mathcal{M} \not\simeq {}_{D(G)}\mathcal{M}$ for all nontrivial $G$. Quasiparticles in the bulk can be created from a vacuum and moved using ribbon operators \cite{Kit}, where the ribbon operators are seen as left and right module maps $D(G)^* \rightarrow \mathrm{End}(\hbox{{$\mathcal H$}})$, see \cite{CowMa}. Following \cite{CCW}, we could similarly define a different set of ribbon operators for the boundary excitations, which use $\Xi(R,K)^*$ rather than $D(G)^*$. However, these have limited utility. For completeness we cover them in Appendix~\ref{app:ribbon_ops}. Instead, for our purposes we will keep using the normal ribbon operators. Such normal ribbon operators can extend to boundaries, still using Definition~\ref{def:ribbon}, so long as none of the edges involved in the definition are greyed-out. When a ribbon operator ends at a boundary site $s$, we are not concerned with equivariance with respect to the actions of $\mathbb{C}(G)$ and $\mathbb{C} G$ at $s$, as in Lemma~\ref{ribcom}. Instead we should calculate equivariance with respect to the actions of $\mathbb{C}(R)$ and $\mathbb{C} K$. We will study the matter in more depth in Section~\ref{sec:quasi}, but note that if $s,t\in R$ then unique factorisation means that $st=(s\cdot t)\tau(s,t)$ for unique elements $s\cdot t\in R$ and $\tau(s,t)\in K$. 
Similarly, if $y\in K$ and $r\in R$ then unique factorisation $yr=(y{\triangleright} r)(y{\triangleleft} r)$ defines $y{\triangleleft} r$ to be studied later. \begin{lemma}\label{boundary_ribcom} Let $\xi$ be an open ribbon from $s_0$ to $s_1$, where $s_0$ is located at a smooth boundary, for example: \[\tikzfig{smooth_halfplane_ribbon_short}\] and where $\xi$ begins at the specified orientation in the example, leading from $s_0$ into the bulk, rather than running along the boundary. Then \[x{\triangleright}^b_{s_0}\circ F_\xi^{h,g}=F_\xi^{xhx^{-1},xg} \circ x{\triangleright}^b_{s_0};\quad \delta_r{\triangleright}^b_{s_0}\circ F_\xi^{h,g}=F_\xi^{h,g} \circ\delta_{s\cdot(y{\triangleright} r)}{\triangleright}^b_{s_0}\] $\forall x\in K, r\in R, h,g\in G$, and where $sy$ is the unique factorisation of $h^{-1}$. \end{lemma} {\noindent {\bfseries Proof:}\quad } The first is just the vertex action of $\mathbb{C} G$ restricted to $\mathbb{C} K$, with an edge greyed-out which does not influence the result. For the second, expand $\delta_r{\triangleright}^b_{s_0}$ and verify explicitly: \[\tikzfig{smooth_halfplane_ribbon_shortA1}\] \[\tikzfig{smooth_halfplane_ribbon_shortA2}\] where we see $(s\cdot(y{\triangleright} r))K = s(y{\triangleright} r)\tau(s,y{\triangleright} r)^{-1}K = s(y{\triangleright} r)K = s(y{\triangleright} r)(y{\triangleleft} r)K = syrK = h^{-1}rK$. We check the other site as well: \[\tikzfig{smooth_halfplane_ribbon_shortB1}\] \[\tikzfig{smooth_halfplane_ribbon_shortB2}\] \endproof \begin{remark}\rm One might be surprised that the equivariance property holds for the latter case when $s_0$ is attached to the vertex at the bottom of the face, as in this case $\delta_r{\triangleright}^b_{s_0}$ confers a $\delta$-function in the counterclockwise direction, different from the bulk. This is because the well-known equivariance properties in the bulk \cite{Kit} are not wholly correct, depending on orientation, as pointed out in \cite[Section~3.3]{YCC}. 
We accommodated this by specifying an orientation in Lemma~\ref{ribcom}. \end{remark} \begin{remark}\rm\label{rem:rough_ribbon} We have a similar situation for a rough boundary, albeit we found only one orientation for which the same equivariance property holds, which is: \[\tikzfig{rough_halfplane_ribbon}\] In the reverse orientation, where the ribbon crosses downwards instead, equivariance is similar but with the introduction of an antipode. For other orientations we do not find an equivariance property at all. \end{remark} As with the bulk, we can define an excitation space using a ribbon between the two endpoints $s_0$, $s_1$, although more care must be taken in the definition. \begin{lemma}\label{Ts0s1} Let ${|{\rm vac}\>}$ be a vacuum state on a half-plane $\Sigma$, where there is one smooth boundary beyond which there are no more edges. Let $\xi$ be a ribbon between two endpoints $s_0, s_1$ where $s_0 = \{v_0,p_0\}$ is on the boundary and $s_1 = \{v_1,p_1\}$ is in the bulk, such that $\xi$ interacts with the boundary only once, when crossing from $s_0$ into the bulk; it cannot cross back and forth multiple times. Let $|\psi^{h,g}\>:=F_\xi^{h,g}{|{\rm vac}\>}$, and $\hbox{{$\mathcal T$}}_{\xi}(s_0,s_1)$ be the space with basis $|\psi^{h,g}\>$. (1) $|\psi^{h,g}\>$ is independent of the choice of ribbon through the bulk between fixed sites $s_0, s_1$, so long as the ribbon still only interacts with the boundary at the chosen location.
(2) $\hbox{{$\mathcal T$}}_\xi(s_0,s_1)\subset\hbox{{$\mathcal H$}}$ inherits actions at disjoint sites $s_0, s_1$, \[ x{\triangleright}^b_{s_0}|\psi^{h,g}\>=|\psi^{ xhx^{-1},xg}\>,\quad \delta_r{\triangleright}^b_{s_0}|\psi^{h,g}\>=\delta_{rK,hK}|\psi^{h,g}\>\] \[ f{\triangleright}_{s_1}|\psi^{h,g}\>=|\psi^{h,gf^{-1}}\>,\quad \delta_f{\triangleright}_{s_1}|\psi^{h,g}\>=\delta_{f,g^{-1}h^{-1}g}|\psi^{h,g}\>\] where we use the isomorphism $|\psi^{h,g}\>\mapsto \delta_hg$ to see the action at $s_0$ as a representation of $\Xi(R,K)$ on $D(G)$. In particular it is the restriction of the left regular representation of $D(G)$ to $\Xi(R,K)$, with inclusion map $i$ from Proposition~\ref{Xisub}. The action at $s_1$ is the right regular representation of $D(G)$, as in the bulk. \end{lemma} {\noindent {\bfseries Proof:}\quad } (1) is the same as the proof in \cite[Prop.3.10]{CowMa}, with the exception that if the ribbon $\xi'$ crosses the boundary multiple times it will incur an additional energy penalty from the Hamiltonian for each crossing, and thus $\hbox{{$\mathcal T$}}_{\xi'}(s_0,s_1) \neq \hbox{{$\mathcal T$}}_{\xi}(s_0,s_1)$ in general. (2) This follows by the commutation rules in Lemma~\ref{boundary_ribcom} and Lemma~\ref{ribcom} respectively, using \[x{\triangleright}^b_{s_0}{|{\rm vac}\>} = \delta_e{\triangleright}^b_{s_0}{|{\rm vac}\>} = {|{\rm vac}\>}; \quad f{\triangleright}_{s_1}{|{\rm vac}\>} = \delta_e{\triangleright}_{s_1}{|{\rm vac}\>} = {|{\rm vac}\>}\] $\forall x\in K, f \in G$. For the hardest case we have \begin{align*}\delta_r{\triangleright}^b_{s_0}F_\xi^{h,g}{|{\rm vac}\>} &= F_\xi^{h,g} \circ\delta_{s\cdot(y{\triangleright} r)}{\triangleright}^b_{s_0}{|{\rm vac}\>}\\ &= F_\xi^{h,g}\delta_{s\cdot(y{\triangleright} r)K,K}{|{\rm vac}\>}\\ &= F_\xi^{h,g}\delta_{rK,hK}{|{\rm vac}\>}.
\end{align*} For the restriction of the action at $s_0$ to $\Xi(R,K)$, we have that \[\delta_r\cdot\delta_hg = \delta_{rK,hK}\delta_hg = \sum_{a\in rK}\delta_{a,h}\delta_hg=i(\delta_r)\delta_hg.\] and $x\cdot \delta_hg = x\delta_hg = i(x)\delta_hg$. \endproof In the bulk, the excitation space $\hbox{{$\mathcal L$}}(s_0,s_1)$ is totally independent of the ribbon $\xi$ \cite{Kit,CowMa}, but we do not know of a similar property for $\hbox{{$\mathcal T$}}_\xi(s_0,s_1)$ when interacting with the boundary without the restrictions stated. We explained in Section~\ref{sec:xi} how representations of $D(G)$ at sites in the bulk relate to those of $\Xi(R,K)$ in the boundary by functors in both directions. Physically, if we apply ribbon trace operators, that is operators of the form $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$, to the vacuum, then in the bulk we create exactly a quasiparticle of type $({\hbox{{$\mathcal C$}}},\pi)$ and $({\hbox{{$\mathcal C$}}}^*,\pi^*)$ at either end. Now let us include a boundary. \begin{definition}Given an irrep of $D(G)$ provided by $({\hbox{{$\mathcal C$}}},\pi)$, we define the {\em boundary projection} $P_{i^*({\hbox{{$\mathcal C$}}},\pi)}\in \Xi(R,K)$ by \[ P_{i^*({\hbox{{$\mathcal C$}}},\pi)}=\sum_{(\hbox{{$\mathcal O$}},\rho)\ |\ n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}\ne 0} P_{(\hbox{{$\mathcal O$}},\rho)}\] i.e. we sum over the projectors of all the types of irreps of $\Xi(R,K)$ contained in the restriction of the given $D(G)$ irrep. \end{definition} It is clear that $P_{i^*({\hbox{{$\mathcal C$}}},\pi)}$ is a projection as a sum of orthogonal projections. 
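To make the boundary projection concrete, here is a small Python sketch (our own illustration; the dictionary encoding of the table is hypothetical) that stores the multiplicity table of Example~\ref{exS3n} for $G=S_3$, $K=\mathbb{Z}_2$, reads off which $\Xi(R,K)$-irreps are summed over in $P_{i^*({\hbox{{$\mathcal C$}}},\pi)}$, and checks the dimension consistency $\dim(W_a)=\sum_i n^i{}_a\dim(V_i)$ noted in that example.

```python
# Multiplicity table n^i_a for G = S_3, K = Z_2 (Example exS3n).
# Xi-irreps i label rows; D(S_3)-irreps a label columns.
xi_irreps = [("O0", "+1"), ("O0", "-1"), ("O1", "1")]
dim_V = {("O0", "+1"): 1, ("O0", "-1"): 1, ("O1", "1"): 2}

n = {  # n[a] = {i: n^i_a}
    ("C0", "1"):    {("O0", "+1"): 1, ("O0", "-1"): 0, ("O1", "1"): 0},
    ("C0", "sign"): {("O0", "+1"): 0, ("O0", "-1"): 1, ("O1", "1"): 0},
    ("C0", "W2"):   {("O0", "+1"): 1, ("O0", "-1"): 1, ("O1", "1"): 0},
    ("C1", "+1"):   {("O0", "+1"): 1, ("O0", "-1"): 0, ("O1", "1"): 1},
    ("C1", "-1"):   {("O0", "+1"): 0, ("O0", "-1"): 1, ("O1", "1"): 1},
    ("C2", "1"):    {("O0", "+1"): 0, ("O0", "-1"): 0, ("O1", "1"): 1},
    ("C2", "w"):    {("O0", "+1"): 0, ("O0", "-1"): 0, ("O1", "1"): 1},
    ("C2", "w2"):   {("O0", "+1"): 0, ("O0", "-1"): 0, ("O1", "1"): 1},
}
dim_W = {("C0", "1"): 1, ("C0", "sign"): 1, ("C0", "W2"): 2,
         ("C1", "+1"): 3, ("C1", "-1"): 3,
         ("C2", "1"): 2, ("C2", "w"): 2, ("C2", "w2"): 2}

def boundary_support(a):
    """Xi-irreps appearing in P_{i^*(a)}: those with n^i_a != 0."""
    return [i for i in xi_irreps if n[a][i] != 0]

def dims_consistent():
    """dim(W_a) = sum_i n^i_a dim(V_i) for every D(G)-irrep a."""
    return all(dim_W[a] == sum(n[a][i] * dim_V[i] for i in xi_irreps)
               for a in n)
```

For instance, `boundary_support(("C1", "-1"))` returns the two $\Xi$-irreps whose projectors make up $P_{i^*({\hbox{{$\mathcal C$}}}_1,-1)}$.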
\begin{proposition}\label{prop:boundary_traces} Let $\xi$ be an open ribbon extending from an external site $s_0$ on a smooth boundary with associated algebra $\Xi(R,K)$ to a site $s_1$ in the bulk, for example: \[\tikzfig{smooth_halfplane_ribbon}\] Then \[P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = 0\quad {\rm iff} \quad n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)} = 0.\] In addition, we have \[P_{i^*({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}^b_{s_0} W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} {\triangleleft}_{s_1} P_{({\hbox{{$\mathcal C$}}},\pi)},\] where we see the left action at $s_1$ of $P_{({\hbox{{$\mathcal C$}}}^*,\pi^*)}$ as a right action using the antipode. \end{proposition} {\noindent {\bfseries Proof:}\quad } Under the isomorphism in Lemma~\ref{Ts0s1} we have that $W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} \mapsto P_{({\hbox{{$\mathcal C$}}},\pi)} \in D(G)$. For the first part we therefore have \[P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} \mapsto i(P_{(\hbox{{$\mathcal O$}},\rho)}) P_{({\hbox{{$\mathcal C$}}},\pi)}\] so the result follows from the last part of Lemma~\ref{lemfrobn}. Since the sum of projectors over the irreps of $\Xi$ is 1, this then implies the second part: \[W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = \sum_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = P_{i^*({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>}.\] The action at $s_1$ is the same as for bulk ribbon operators. 
\endproof The physical interpretation is that application of a ribbon trace operator $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$ to a vacuum state creates a quasiparticle at $s_0$ of all the types contained in $i^*({\hbox{{$\mathcal C$}}},\pi)$, while still creating one of type $({\hbox{{$\mathcal C$}}}^*,\pi^*)$ at $s_1$; this is called the \textit{condensation} of $({{\hbox{{$\mathcal C$}}},\pi})$ at the boundary. While we used a smooth boundary in this example, the proposition applies equally to rough boundaries with the specified orientation in Remark~\ref{rem:rough_ribbon} by similar arguments. \begin{example}\rm In the bulk, we take the $D(S_3)$ model. Then by Example~\ref{exDS3}, we have exactly 8 irreps in the bulk. At the boundary, we take $K=\{e,u\} = \mathbb{Z}_2$ with $R = \{e,uv,vu\}$. As per the table in Example~\ref{exS3n} and Proposition~\ref{prop:boundary_traces} above, we then have for example that \[(P_{\hbox{{$\mathcal O$}}_0,-1}+P_{\hbox{{$\mathcal O$}}_1,1}){\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} {\triangleleft}_{s_1}P_{{\hbox{{$\mathcal C$}}}_1,-1}.\] We can see this explicitly. Recall that \[\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{s_0}{|{\rm vac}\>} = \Lambda_{\mathbb{C} K}{\triangleright}^b_{s_0}{|{\rm vac}\>} = {|{\rm vac}\>}.\] All other vertex and face actions give 0 by orthogonality. Then, \[P_{\hbox{{$\mathcal O$}}_0,-1} = {1\over 2}\delta_e \mathop{{\otimes}} (e-u); \quad P_{\hbox{{$\mathcal O$}}_1, 1} = (\delta_{uv} + \delta_{vu})\mathop{{\otimes}} e\] and \[W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1} = \sum_{c\in \{u,v,w\}}F_\xi^{c,e}-F_\xi^{c,c}\] by Lemmas~\ref{Xiproj} and \ref{lem:quasi_basis} respectively. For convenience, we break the calculation up into two parts, one for each projector. Throughout, we will make use of Lemma~\ref{boundary_ribcom} to commute projectors through ribbon operators. 
First, we have that \begin{align*} &P_{\hbox{{$\mathcal O$}}_0,-1}{\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = {1\over 2}(\delta_e \mathop{{\otimes}} (e - u)){\triangleright}^b_{s_0} \sum_{c\in \{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c}){|{\rm vac}\>}\\ &= {1\over 2}\delta_e{\triangleright}^b_{s_0}[\sum_{c\in\{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c})-(F_\xi^{u,u}-F_\xi^{e,u}+F_\xi^{v,u}-F_\xi^{v,uv}+F_\xi^{w,u}-F_\xi^{w,vu})]{|{\rm vac}\>}\\ &= {1\over 2}[(F_\xi^{u,e}-F_\xi^{u,u})\delta_e{\triangleright}^b_{s_0}+(F_\xi^{v,e}-F_\xi^{v,v})\delta_{vu}{\triangleright}^b_{s_0}+(F_\xi^{w,e}-F_\xi^{w,w})\delta_{uv}{\triangleright}^b_{s_0}\\ &+ (F^{u,e}_\xi-F^{u,u}_\xi)\delta_e{\triangleright}^b_{s_0} + (F^{v,uv}_\xi-F^{v,u}_\xi)\delta_{vu}{\triangleright}^b_{s_0} + (F^{w,vu}_\xi-F^{w,u}_\xi)\delta_{uv}{\triangleright}^b_{s_0}]{|{\rm vac}\>}\\ &= (F_\xi^{u,e}-F_\xi^{u,u}){|{\rm vac}\>} \end{align*} where we used the fact that $u = eu, v=vuu, w=uvu$ to factorise these elements in terms of $R,K$. Second, \begin{align*} P_{\hbox{{$\mathcal O$}}_1,1}{\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} &= ((\delta_{uv} + \delta_{vu})\mathop{{\otimes}} e){\triangleright}^b_{s_0}\sum_{c\in \{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c}){|{\rm vac}\>}\\ &= (F_\xi^{v,e}-F_\xi^{v,v}+F_\xi^{w,e}-F_\xi^{w,w})(\delta_e\mathop{{\otimes}} e){\triangleright}^b_{s_0}{|{\rm vac}\>}\\ &= (F_\xi^{v,e}-F_\xi^{v,v}+F_\xi^{w,e}-F_\xi^{w,w}){|{\rm vac}\>}. \end{align*} The result follows immediately. All other boundary projections of $D(S_3)$ ribbon trace operators can be worked out in a similar way. \end{example} \begin{remark}\rm Proposition~\ref{prop:boundary_traces} does not tell us exactly how \textit{all} ribbon operators in the quasiparticle basis are detected at the boundary, only the ribbon trace operators. We do not know a similar general formula for all ribbon operators. 
\end{remark} Now, consider a lattice in the plane with two boundaries, namely to the left and right, \[\tikzfig{smooth_twobounds}\] Recall that a lattice on an infinite plane admits a single ground state ${|{\rm vac}\>}$ as explained in \cite{CowMa}. However, in the present case, we may also be able to use ribbon operators in the quasiparticle basis extending from one boundary, at $s_0$ say, to the other, at $s_1$ say, such that no quasiparticles are detected at either end. These ribbon operators do not form a closed, contractible loop, as all undetectable ones do in the bulk; the corresponding states $|\psi\>$ are ground states and the vacuum has increased degeneracy. We can similarly induce additional degeneracy of excited states. This justifies the term \textit{gapped boundaries}, as the boundaries give rise to additional states with energies that are `gapped'; that is, they have a finite energy difference $\Delta$ (which may be zero) independently of the width of the lattice. \section{Patches}\label{sec:patches} For any nontrivial group $G$, there are always at least two distinct choices of boundary conditions, namely with $K=\{e\}$ and $K=G$ respectively. In these cases, we necessarily have $R=G$ and $R=\{e\}$ respectively. Considering $K=\{e\}$ on a smooth boundary, we can calculate that $A^b_1(v) = \mathrm{id}$ and $B^b_1(s)g = \delta_{e,g} g$, for $g$ an element corresponding to the single edge associated with the boundary site $s$. This means that after performing the measurements at a boundary, these edges are totally constrained and not part of the large entangled state incorporating the rest of the lattice, and hence do not contribute to the model whatsoever. If we remove these edges then we are left with a rough boundary, in which all edges participate, and therefore we may consider the $K=\{e\}$ case to imply a rough boundary.
A similar argument applies for $K=G$ when considered on a rough boundary, which has $A^b_2(v)g = A(v)g = {1\over |G|}\sum_k kg = {1\over |G|}\sum_k k$ for an edge with state $g$ and $B^b_2(s) = \mathrm{id}$. $K=G$ therefore naturally corresponds instead to a smooth boundary, as otherwise the outer edges are totally constrained by the projectors. From now on, we will accordingly use smooth to refer always to the $K=G$ condition, and rough for $K=\{e\}$. These boundary conditions are convenient because the condensation of bulk excitations to the vacuum at a boundary can be partially worked out in the group basis. For $K=\{e\}$, it is easy to see that the ribbon operators which are undetected at the boundary (and therefore leave the system in a vacuum state) are exactly those of the form $F_\xi^{e,g}$, for all $g\in G$, as any nontrivial $h$ in $F_\xi^{h,g}$ will be detected by the boundary face projectors. This can also be worked out representation-theoretically using Proposition~\ref{nformula}. \begin{lemma}\label{lem:rough_functor} Let $K=\{e\}$. Then the multiplicity of an irrep $({\hbox{{$\mathcal C$}}},\pi)$ of $D(G)$ with respect to the trivial representation of $\Xi(G,\{e\})$ is \[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)} = \delta_{{\hbox{{$\mathcal C$}}},\{e\}}{\rm dim}(W_\pi)\] \end{lemma} {\noindent {\bfseries Proof:}\quad } Applying Proposition~\ref{nformula} in the case where $V_i$ is trivial, we start with \[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G| \over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap \{e\}} |\{e\}^c| n_{1,\tilde\pi}\] where ${\hbox{{$\mathcal C$}}}\cap \{e\} = \{e\}$ iff ${\hbox{{$\mathcal C$}}}=\{e\}$, or otherwise $\emptyset$. Also, $\tilde\pi = \oplus_{{\rm dim}(W_\pi)} (\{e\},1)$, and if ${\hbox{{$\mathcal C$}}} = \{e\}$ then $|G^{c_0}| = |G|$. \endproof The factor of ${\rm dim}(W_\pi)$ in the r.h.s. implies that there are no other terms in the decomposition of $i^*(\{e\},\pi)$.
In physical terms, this means that the trace ribbon operators $W^{e,\pi}_\xi$ are the only undetectable trace ribbon operators, and any ribbon operators which do not lie in the block associated to $(e,\pi)$ are detectable. In fact, in this case we have a further property which is that all ribbon operators in the chargeon sector are undetectable, as by equation~(\ref{chargeon_ribbons}) chargeon sector ribbon operators are Fourier isomorphic to those of the form $F_\xi^{e,g}$ in the group basis. In the more general case of a rough boundary for an arbitrary choice of $\Xi(R,K)$, the orientation of the ribbon is important for using the representation-theoretic argument. When $K=\{e\}$, for $F^{e,g}_\xi$ one can check that regardless of orientation the rough boundary version of Proposition~\ref{Ts0s1} applies. The $K=G$ case is slightly more complicated: \begin{lemma}\label{lem:smooth_functor} Let $K=G$. Then the multiplicity of an irrep $({\hbox{{$\mathcal C$}}},\pi)$ of $D(G)$ with respect to the trivial representation of $\Xi(\{e\},G)$ is \[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)} = \delta_{\pi,1}\] \end{lemma} {\noindent {\bfseries Proof:}\quad } We start with \[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={1 \over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}} |G^c| n_{1,\tilde\pi}.\] Now, $K^{r,c} = G^c$ and so $\tilde\pi = \pi$, giving $n_{1,\tilde\pi} = \delta_{1,\pi}$. Then $\sum_{c\in{\hbox{{$\mathcal C$}}}}|G^c| = |{\hbox{{$\mathcal C$}}}||G^{c_0}|$. \endproof This means that the only undetectable ribbon operators between smooth boundaries are those in the fluxion sector, i.e. those with associated irrep $({\hbox{{$\mathcal C$}}}, 1)$. However, there is no factor of $|{\hbox{{$\mathcal C$}}}|$ on the r.h.s. and so the decomposition of $i^*({\hbox{{$\mathcal C$}}},1)$ will generally have additional terms other than just $(\{e\},1)$ in ${}_{\Xi(\{e\},G)}\hbox{{$\mathcal M$}}$.
As a consequence, a fluxion trace ribbon operator $W^{{\hbox{{$\mathcal C$}}},1}_\zeta$ between smooth boundaries is undetectable iff its associated conjugacy class is a singleton, say ${\hbox{{$\mathcal C$}}}= \{c_0\}$, and thus $c_0 \in Z(G)$, the centre of $G$. \begin{definition}\rm A \textit{patch} is a finite rectangular lattice segment with two opposite smooth sides, each equipped with boundary conditions $K=G$, and two opposite rough sides, each equipped with boundary conditions $K=\{e\}$, for example: \[\tikzfig{patch}\] \end{definition} One can alternatively define patches with additional sides, such as in \cite{Lit1}, or with other boundary conditions which depend on another subgroup $K$ and transversal $R$, but we find this definition convenient. Note that our definition does not put conditions on the size of the lattice; the above diagram is just a conveniently small and yet nontrivial example. We would like to characterise the vacuum space $\hbox{{$\mathcal H$}}_{\rm vac}$ of the patch. To do this, let us begin with $|{\rm vac}_1\>$ from equation~(\ref{eq:vac1}), and denote $|e\>_L := |{\rm vac}_1\>$. This is the \textit{logical zero state} of the patch. We will use this as a reference state to calculate other states in $\hbox{{$\mathcal H$}}_{\rm vac}$. Now, for any other state $|\psi\>$ in $\hbox{{$\mathcal H$}}_{\rm vac}$, there must exist some linear map $D \in {\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$ such that $D|e\>_L = |\psi\>$, and thus if we can characterise the algebra of linear maps ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$, we automatically characterise $\hbox{{$\mathcal H$}}_{\rm vac}$. To help with this, we have the following useful property: \begin{lemma}\label{lem:rib_move} Let $F_\xi^{e,g}$ be a ribbon operator for some $g\in G$, with $\xi$ extending from the top rough boundary to the bottom rough boundary.
Then the endpoints of $\xi$ may be moved along the rough boundaries with $K=\{e\}$ boundary conditions while leaving the action invariant on any vacuum state. \end{lemma} {\noindent {\bfseries Proof:}\quad } We explain this on an example patch with initial state $|\psi\> \in \hbox{{$\mathcal H$}}_{\rm vac}$ and a ribbon $\xi$, \[\tikzfig{bigger_patch}\] \[\tikzfig{bigger_patch2}\] using the fact that $a = cb$ and $m = lk$ by the definition of $\hbox{{$\mathcal H$}}_{\rm vac}$ for the second equality. Thus, we see that the ribbon through the bulk may be deformed as usual. As the only new component of the proof concerned the endpoints, we see that this property holds regardless of the size of the patch. \endproof One can calculate in particular that $F_\xi^{e,g}|e\>_L = \delta_{e,g}|e\>_L$, which we will prove more generally later. The undetectable ribbon operators between the smooth boundaries are of the form \[W^{{\hbox{{$\mathcal C$}}},1}_\zeta = \sum_{n\in G} F_\zeta^{c_0,n}\] when ${\hbox{{$\mathcal C$}}} = \{c_0\}$ by Lemma~\ref{lem:smooth_functor}, hence $G^{c_0} = G$. Technically, this lemma only tells us the ribbon trace operators which are undetectable, but in the present case none of the individual component operators are undetectable, only the trace operators. There are thus exactly $|Z(G)|$ orthogonal undetectable ribbon operators between smooth boundaries. These do not play an important role, but we describe them to characterise the operator algebra on $\hbox{{$\mathcal H$}}_{\rm vac}$. They obey a similar rule as Lemma~\ref{lem:rib_move}, which one can check in the same way. In addition to the ribbon operators between sides, we also have undetectable ribbon operators between corners on the lattice.
These corners connect smooth and rough boundaries, and thus careful application of specific ribbon operators can avoid detection from either face or vertex measurements, \[\tikzfig{corner_ribbons}\] where one can check that these do indeed leave the system in a vacuum using familiar arguments about $B(p)$ and $A(v)$. We could equally define such operators extending from either left corner to either right corner, and they obey the discrete isotopy laws as in the bulk. If we apply $F_\xi^{h,g}$ for any $g\neq e$ then we have $F_\xi^{h,g}|\psi\> =0$ for any $|\psi\>\in \hbox{{$\mathcal H$}}_{\rm vac}$, and so these are the only ribbon operators of this form. \begin{remark}\rm Corners of boundaries are algebraically interesting themselves, and can be used for quantum computation, but for brevity we skim over them. See e.g. \cite{Bom2,Brown} for details. \end{remark} These corner to corner, left to right and top to bottom ribbon operators span ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$, the linear maps which leave the system in vacuum. Due to Lemma~\ref{lem:ribs_only}, all other linear maps must decompose into ribbon operators, and these are the only ribbon operators in ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$ up to linear combination. As a consequence, we have well-defined patch states $|h\>_L := \sum_gF^{h,g}_\xi|e\>_L$ for each $h\in G$, where $\xi$ is any ribbon extending from the bottom left corner to the bottom right corner. Now, working explicitly on the small patch below, we have \[\tikzfig{wee_patch}\] to start with, then: \[\tikzfig{wee_patch2}\] It is easy to see that we may always write $|h\>_L$ in this manner, for an arbitrary size of patch. Now, ribbon operators which are undetectable when $\xi$ extends from bottom to top are those of the form $F_\xi^{e,g}$, for example \[\tikzfig{wee_patch3}\] and so $F_\xi^{e,g}|h\>_L = \delta_{g,h}|h\>_L$, where again if we take a larger patch all additional terms will clearly cancel.
Lastly, undetectable ribbon operators for a ribbon $\zeta$ extending from left to right are exactly those of the form $\sum_{n\in G} F_\zeta^{c_0,n}$ for any $c_0 \in Z(G)$. One can check that $|c_0 h\>_L = \sum_{n\in G} F_\zeta^{c_0,n} |h\>_L$, thus these give us no new states in $\hbox{{$\mathcal H$}}_{\rm vac}$. \begin{lemma}\label{lem:patch_dimension} For a patch with the $D(G)$ model in the bulk, ${\rm dim}(\hbox{{$\mathcal H$}}_{\rm vac}) = |G|$. \end{lemma} {\noindent {\bfseries Proof:}\quad } By the above characterisation of undetectable ribbon operators, the states $\{|h\>_L\}_{h\in G}$ span $\hbox{{$\mathcal H$}}_{\rm vac}$. The result then follows from the adjointness of ribbon operators, which means that the states $\{|h\>_L\}_{h\in G}$ are orthogonal. \endproof We can also work out that for $|{\rm vac}_2\>$ from equation~(\ref{eq:vac2}), we have $|{\rm vac}_2\> = \sum_h |h\>_L$. More generally: \begin{corollary}\label{cor:matrix_basis} $\hbox{{$\mathcal H$}}_{\rm vac}$ has an alternative basis with states $|\pi;i,j\>_L$, where $\pi$ is an irreducible representation of $G$ and $i,j$ are indices such that $0\leq i,j<{\rm dim}(V_\pi)$. We call this the quasiparticle basis of the patch. \end{corollary} {\noindent {\bfseries Proof:}\quad } First, use the nonabelian Fourier transform on the ribbon operators $F_\xi^{e,g}$, so we have $F_\xi^{'e,\pi;i,j} = \sum_{n\in G}\pi(n^{-1})_{ji}F_\xi^{e,n}$. If we start from the reference state $|1;0,0\>_L := \sum_h |h\>_L = |{\rm vac}_2\>$ and apply these operators with $\xi$ from bottom to top of the patch then we get \[|\pi;i,j\>_L = F_\xi^{'e,\pi;i,j}|1;0,0\>_L = \sum_{n\in G}\pi(n^{-1})_{ji} |n\>_L\] which are orthogonal. Now, as $\sum_{\pi\in \hat{G}}{\rm dim}(V_\pi)^2 = |G|$ and we know ${\rm dim}(\hbox{{$\mathcal H$}}_{\rm vac}) = |G|$ by the previous Lemma~\ref{lem:patch_dimension}, $\{|\pi;i,j\>_L\}_{\pi,i,j}$ forms a basis of $\hbox{{$\mathcal H$}}_{\rm vac}$.
\endproof \begin{remark}\rm Kitaev models are designed in general to be fault tolerant. The minimum number of component Hilbert spaces, that is copies of $\mathbb{C} G$ on edges, for which simultaneous errors will undetectably change the logical state and cause errors in the computation is called the `code distance' $d$ in the language of quantum codes. For the standard method of computation using nonabelian anyons \cite{Kit}, data is encoded using excited states, which are states with nontrivial quasiparticles at certain sites. The code distance can then be extremely small, and constant in the size of the lattice, as the smallest errors need only take the form of ribbon operators winding round a single quasiparticle at a site. This is no longer the case when encoding data in vacuum states on patches, as the only logical operators are specific ribbon operators extending from top to bottom, left to right or corner to corner. The code distance, and hence error resilience, of any vacuum state of the patch therefore increases linearly with the width of the patch as it is scaled, and so as the square root of the number $n$ of component Hilbert spaces in the patch; that is, $n\sim d^2$. \end{remark} \subsection{Nonabelian lattice surgery}\label{sec:surgery} Lattice surgery was invented as a method of fault-tolerant computation with the qubit, i.e. $\mathbb{C}\mathbb{Z}_2$, surface code \cite{HFDM}. The first author generalised it to qudit models using $\mathbb{C}\mathbb{Z}_d$ in \cite{Cow2}, and gave a fresh perspective on lattice surgery as `simulating' the Hopf algebras $\mathbb{C}\mathbb{Z}_d$ and $\mathbb{C}(\mathbb{Z}_d)$ on the logical space $\hbox{{$\mathcal H$}}_{\rm vac}$ of a patch. In this section, we prove that lattice surgery generalises to arbitrary finite group models, and `simulates' $\mathbb{C} G$ and $\mathbb{C}(G)$ in a similar way. Throughout, we assume that the projectors $A(v)$ and $B(p)$ may be performed deterministically for simplicity.
In Appendix~\ref{app:measurements} we discuss the added complication that in practice we may only perform measurements which yield projections nondeterministically. \begin{remark}\rm When deriving the linear maps that nonabelian lattice surgeries yield, we will use specific examples, but the arguments clearly hold generally. For convenience, we will also tend to omit normalising scalar factors, which do not impact the calculations as the maps are $\mathbb{C}$-linear. \end{remark} Let us begin with a large rectangular patch. We now remove a line of edges from left to right by projecting each one onto $e$: \[\tikzfig{split2}\] We call this a \textit{rough split}, as we create two new rough boundaries. We no longer apply $A(v)$ to the vertices which have had attached edges removed. If we start with a small patch in the state $|l\>_L$ for some $l\in G$ then we can explicitly calculate the linear map. \[\tikzfig{rough_split_project}\] where we have separated the two patches afterwards for clarity, showing that they have two separate vacuum spaces. We then have that the last expression is \[\tikzfig{rough_split_project2}\] Observe the factors of $g$ in particular. The state is therefore now $\sum_g |g^{-1}\>_L\otimes |gl\>_L$, where the l.h.s. of the tensor product is the Hilbert space corresponding to the top patch, and the r.h.s. to the bottom. A change of variables gives $\sum_g |g\>_L\otimes |g^{-1}l\>_L$, the outcome of comultiplication of $\mathbb{C}(G)$ on the logical state $|l\>_L$ of the original patch. Similarly, we can measure out a line of edges from bottom to top, for example \[\tikzfig{split1}\] We call this a \textit{smooth split}, as we create two new smooth boundaries. Each deleted edge is projected into the state ${1\over|G|}\sum_g g$. We also cease measurement of the faces which have had edges removed, and so we end up with two adjacent but disjoint patches.
Working on a small example, we start with $|e\>_L$: \[\tikzfig{smooth_split_project}\] where in the last step we have taken $b\mapsto jc$, $g\mapsto kh$ from the $\delta$-functions and then a change of variables $j\mapsto jc^{-1}$, $k\mapsto kh^{-1}$ in the summation. Thus, we have ended with two disjoint patches, each in state $|e\>_L$. One can see that this works for any $|h\>_L$ in exactly the same way, and so the smooth split linear map is $|h\>_L \mapsto |h\>_L\otimes|h\>_L$, the comultiplication of $\mathbb{C} G$. The opposite of splits are merges, whereby we take two disjoint patches and introduce edges to bring them together to a single patch. For the rough merge below, say we start with the basis states $|k\>_L$ and $|j\>_L$ on the bottom and top. First, we introduce an additional joining edge in the state $e$. \[\tikzfig{merge1}\] This state $|\psi\>$ automatically satisfies $B(p)|\psi\> = |\psi\>$ everywhere. But it does not satisfy the conditions on vertices, so we apply $A(v)$ to the two vertices adjacent to the newest edge. Then we have the last expression \[\tikzfig{rough_merge_project}\] which by performing repeated changes of variables yields \[\tikzfig{rough_merge_project2}\] Thus the rough merge yields the map $|j\>_L\otimes|k\>_L\mapsto|jk\>_L$, the multiplication of $\mathbb{C} G$, where again the tensor factors are in order from top to bottom. Similarly, we perform a smooth merge with the states $|j\>_L, |k\>_L$ as \[\tikzfig{merg2}\] We introduce a pair of edges connecting the two patches, each in the state $\sum_m m$. 
\[\tikzfig{smooth_merge_project}\] The resultant patch automatically satisfies the conditions relating to $A(v)$, but we must apply $B(p)$ to the freshly created faces to acquire a state in $\hbox{{$\mathcal H$}}_{\rm vac}$, giving \[\tikzfig{smooth_merge_project2}\] where the $B(p)$ applications introduced the $\delta$-functions \[\delta_{e}(bf^{-1}m^{-1}),\quad \delta_{e}(dh^{-1}n^{-1}),\quad \delta_e(dj^{-1}b^{-1}bf^{-1}fkh^{-1}hd^{-1}) = \delta_e(j^{-1}k).\] In summary, the linear map on logical states is evidently $|j\>_L\otimes |k\>_L \mapsto \delta_{j,k}|j\>_L$, the multiplication of $\mathbb{C}(G)$. The units of $\mathbb{C} G$ and $\mathbb{C}(G)$ are given by the states $|e\>_L$ and $|1;0,0\>_L$ respectively. The counits are given by the maps $|g\>_L \mapsto 1$ and $|g\>_L\mapsto \delta_{g,e}$ respectively. The logical antipode $S_L$ is given by applying the antipode to each edge individually, i.e. inverting all group elements. For example: \[\tikzfig{antipode_1A}\] This state is now no longer in the original $\hbox{{$\mathcal H$}}_{\rm vac}$, so to compensate we must modify the lattice. We flip all arrows in the lattice -- this is only a conceptual flip, and does not require any physical modification: \[\tikzfig{antipode_1B}\] This amounts to exchanging left and right regular representations, and redefining the Hamiltonian accordingly. In the resultant new vacuum space, the state is now $|g^{-1}\>_L = F_\xi^{e,g^{-1}}|e\>_L$, with $\xi$ running from the bottom left corner to bottom right as previously. \begin{remark}\rm This trick of redefining the vacuum space is employed in \cite{HFDM} to perform logical Hadamards, although in their case the lattice is rotated by $\pi/2$, and the edges are directionless as the model is restricted to $\mathbb{C}\mathbb{Z}_2$. \end{remark} Thus, we have all the ingredients of the Hopf algebras $\mathbb{C} G$ and $\mathbb{C}(G)$ on the same vector space $\hbox{{$\mathcal H$}}_{\rm vac}$. 
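Since the splits and merges act on the logical space alone, the Hopf-algebra structure they simulate can be checked directly with a short computation. The following is a minimal Python sketch (our own illustration, not the lattice model itself: the lattice is abstracted away and the function names are ours) realising the logical space as the span of $\{|h\>_L\}_{h\in G}$ for $G=S_3$, with the four surgery maps as the stated (co)products and the antipode inverting group elements:

```python
import itertools

# S3 as permutation tuples (images of 1,2,3); juxtaposition = composition
def mul(a, b): return tuple(a[b[i] - 1] for i in range(3))
def inv(a): return tuple(sorted(range(1, 4), key=lambda i: a[i - 1]))

G = [tuple(p) for p in itertools.permutations((1, 2, 3))]
e = (1, 2, 3)

# logical states: a vector in the span of {|h>_L} as a dict {h: coeff};
# a state of two disjoint patches is a dict keyed by pairs (top, bottom)
def rough_split(psi):    # |l>_L -> sum_g |g>_L (x) |g^{-1}l>_L : coproduct of C(G)
    return {(g, mul(inv(g), l)): c for l, c in psi.items() for g in G}

def smooth_split(psi):   # |h>_L -> |h>_L (x) |h>_L : coproduct of CG
    return {(h, h): c for h, c in psi.items()}

def rough_merge(psi2):   # |j>_L (x) |k>_L -> |jk>_L : product of CG
    out = {}
    for (j, k), c in psi2.items():
        out[mul(j, k)] = out.get(mul(j, k), 0) + c
    return out

def smooth_merge(psi2):  # |j>_L (x) |k>_L -> delta_{j,k}|j>_L : product of C(G)
    out = {}
    for (j, k), c in psi2.items():
        if j == k:
            out[j] = out.get(j, 0) + c
    return out

def antipode_leg1(psi2):  # apply S: |g> -> |g^{-1}> to the first tensor factor
    return {(inv(j), k): c for (j, k), c in psi2.items()}

l = (3, 2, 1)  # the transposition w, say

# antipode axiom for CG: m(S (x) id)Delta|l> = eps(l)|e>, with eps(l) = 1
assert rough_merge(antipode_leg1(smooth_split({l: 1}))) == {e: 1}

# antipode axiom for C(G): the pointwise product after S on one leg gives
# delta_{l,e} times the unit sum_h |h>_L of C(G)
assert smooth_merge(antipode_leg1(rough_split({l: 1}))) == {}
assert smooth_merge(antipode_leg1(rough_split({e: 1}))) == {h: 1 for h in G}

# counit of C(G) (evaluation at e) on the first leg undoes the rough split
assert {k: c for (g, k), c in rough_split({l: 1}).items() if g == e} == {l: 1}
```

The same dictionaries work for any finite group once `mul`, `inv` and `G` are replaced; nothing here depends on $S_3$.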
For applications, one should like to know which quantum computations can be performed using these algebras (ignoring the subtlety with nondeterministic projectors). Recall that a quantum computer is called approximately universal if for any target unitary $U$ and desired accuracy ${\epsilon}\in\mathbb{R}$, the computer can perform a unitary $V$ such that $||V-U||\leq{\epsilon}$, i.e. the operator norm error of $V$ from $U$ is no greater than ${\epsilon}$. We believe that when the computer is equipped with just the states $\{|h\>_L\}_{h\in G}$ and the maps from lattice surgery above, then one cannot achieve approximately universal computation \cite{Stef}, but leave the proof to a further paper. If we also have access to all matrix algebra states $|\pi;i,j\>_L$ as defined in Corollary~\ref{cor:matrix_basis}, we do not know whether the model of computation is then universal for some choice of $G$, and we do not know whether these states can be prepared efficiently. In fact, how these states are defined depends on a choice of basis for each irrep, so whether it is universal may depend not only on the choice of $G$ but also on the choice of basis. This is an interesting question for future work. \section{Quasi-Hopf algebra structure of $\Xi(R,K)$}\label{sec:quasi} We now return to our boundary algebra $\Xi$. It is known that $\Xi$ has a great deal more structure, which we give more explicitly in this section than we have seen elsewhere. This structure generalises a well-known bicrossproduct Hopf algebra construction for when a finite group $G$ factorises as $G=RK$ into two subgroups $R,K$.
Then each acts on the set of the other to form a {\em matched pair of actions} ${\triangleright},{\triangleleft}$ and we use ${\triangleright}$ to make a cross product algebra $\mathbb{C} K{\triangleright\!\!\!<} \mathbb{C}(R)$ (which has the same form as our algebra $\Xi$ except that we have chosen to flip the tensor factors) and ${\triangleleft}$ to make a cross product coalgebra $\mathbb{C} K{>\!\!\blacktriangleleft} \mathbb{C}(R)$. These fit together to form a bicrossproduct Hopf algebra $\mathbb{C} K{\triangleright\!\blacktriangleleft} \mathbb{C}(R)$. This construction has been used in the Lie group version to construct quantum Poincar\'e groups for quantum spacetimes\cite{Ma:book}. The more general case, where we are just given a subgroup $K\subseteq G$ and a choice of transversal $R$ containing the group identity $e$, was considered in \cite{Be}. As we noted, we still have unique factorisation $G=RK$ but in general $R$ need not be a group. We can still follow the same steps. First of all, unique factorisation entails that $R\cap K=\{e\}$.
It also implies maps \[{\triangleright} : K\times R \rightarrow R,\quad {\triangleleft}: K\times R\rightarrow K,\quad \cdot : R\times R \rightarrow R,\quad \tau: R \times R \rightarrow K\] defined by \[xr = (x{\triangleright} r)(x{\triangleleft} r),\quad rs = (r\cdot s) \tau(r,s)\] for all $x\in K$ and $r,s\in R$, but this time these inherit the properties \begin{align} (xy) {\triangleright} r &= x {\triangleright} (y {\triangleright} r), \quad e {\triangleright} r = r,\nonumber\\ \label{lax} x {\triangleright} (r\cdot s)&=(x {\triangleright} r)\cdot((x{\triangleleft} r){\triangleright} s),\quad x {\triangleright} e = e,\end{align} \begin{align} (x{\triangleleft} r){\triangleleft} s &= \tau\left(x{\triangleright} r, (x{\triangleleft} r){\triangleright} s\right)^{-1}\left(x{\triangleleft} (r\cdot s)\right)\tau(r,s),\quad x {\triangleleft} e = x,\nonumber\\ \label{rax} (xy) {\triangleleft} r &= (x{\triangleleft} (y{\triangleright} r))(y{\triangleleft} r),\quad e{\triangleleft} r = e,\end{align} \begin{align} \tau(r, s\cdot t)\tau(s,t)& = \tau\left(r\cdot s,\tau(r,s){\triangleright} t\right)(\tau(r,s){\triangleleft} t),\quad \tau(e,r) = \tau(r,e) = e,\nonumber\\ \label{tax} r\cdot(s\cdot t) &= (r\cdot s)\cdot(\tau(r,s){\triangleright} t),\quad r\cdot e=e\cdot r=r\end{align} for all $x,y\in K$ and $r,s,t\in R$. We see from (\ref{lax}) that ${\triangleright}$ is indeed an action (we have been using it in preceding sections) but ${\triangleleft}$ in (\ref{rax}) is an action only up to $\tau$ (termed in \cite{KM2} a `quasiaction'). Both ${\triangleright},{\triangleleft}$ `act' almost by automorphisms but with a back-reaction by the other just as for a matched pair of groups. Meanwhile, we see from (\ref{tax}) that $\cdot$ is associative only up to $\tau$ and $\tau$ itself obeys a kind of cocycle condition.
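All of these data can be extracted mechanically from the unique factorisation $G=RK$. As a concrete sanity check (our own illustration; we realise $S_3$ by permutations with $u=(12)$, $v=(23)$, $w=(13)$, so that $w=uvu$ as in our conventions), the following Python sketch computes ${\triangleright},{\triangleleft},\cdot,\tau$ for $K=\{e,u\}$ and the transversal $R=\{e,w,v\}$, and verifies (\ref{lax}) and (\ref{tax}) exhaustively:

```python
# S3 as permutation tuples (images of 1,2,3); juxtaposition = composition.
# We take u=(12), v=(23), w=(13), so that w=uvu.
def mul(a, b): return tuple(a[b[i] - 1] for i in range(3))

e, u, v, w = (1, 2, 3), (2, 1, 3), (1, 3, 2), (3, 2, 1)
K = [e, u]
R = [e, w, v]  # a transversal which is not a subgroup

def fact(g):   # unique factorisation g = rk with r in R, k in K
    [(r, k)] = [(r, k) for r in R for k in K if mul(r, k) == g]
    return r, k

def act(x, r):  return fact(mul(x, r))[0]   # x |> r
def back(x, r): return fact(mul(x, r))[1]   # x <| r
def dot(r, s):  return fact(mul(r, s))[0]   # r . s
def tau(r, s):  return fact(mul(r, s))[1]   # tau(r, s)

# the values for this transversal: tau(v,w)=tau(w,v)=u, u|>v=w, u|>w=v,
# and <| trivial
assert tau(v, w) == u and tau(w, v) == u
assert act(u, v) == w and act(u, w) == v
assert all(back(x, r) == x for x in K for r in R)

# (lax): |> acts on the dot product up to the back-reaction <|
for x in K:
    for r in R:
        for s in R:
            assert act(x, dot(r, s)) == dot(act(x, r), act(back(x, r), s))

# (tax): the cocycle identity for tau and quasi-associativity of the dot
for r in R:
    for s in R:
        for t in R:
            assert mul(tau(r, dot(s, t)), tau(s, t)) == \
                mul(tau(dot(r, s), act(tau(r, s), t)), back(tau(r, s), t))
            assert dot(r, dot(s, t)) == dot(dot(r, s), act(tau(r, s), t))
```

The identities hold automatically here because they encode associativity of $G$ together with uniqueness of the factorisation; the point of the sketch is only to make the definitions of ${\triangleright},{\triangleleft},\cdot,\tau$ concrete.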
Clearly, $R$ is a subgroup via $\cdot$ if and only if $\tau(r,s)=e$ for all $r,s$, and in this case we already see that $\Xi(R,K)$ is a bicrossproduct Hopf algebra, with the only difference being that we prefer to build it on the flipped tensor factors. More generally, \cite{Be} showed that there is still a natural monoidal category associated to this data but with nontrivial associators. This corresponds by Tannaka-Krein reconstruction to $\Xi$ as a quasi-bialgebra, which in some cases is a quasi-Hopf algebra\cite{Nat}. Here we will give these latter structures explicitly and in maximum generality compared to the literature (but still needing a restriction on $R$ for the antipode to be in a regular form). We will also show that the obvious $*$-algebra structure makes a $*$-quasi-Hopf algebra in an appropriate sense under restrictions on $R$. These aspects are new, but more importantly, we give direct proofs at an algebraic level rather than categorical arguments, which we believe are essential for detailed calculations. Related works on similar algebras and coset decompositions include \cite{PS,KM1} in addition to \cite{Be,Nat,KM2}. \begin{lemma}\cite{Be,Nat,KM2} $(R,\cdot)$ has the same unique identity $e$ as $G$ and has the left division property, i.e. for all $t, s\in R$, there is a unique solution $r\in R$ to the equation $s\cdot r = t$ (one writes $r = s\backslash t$). In particular, we let $r^R$ denote the unique solution to $r\cdot r^R=e$, which we call a right inverse.\end{lemma} This means that $(R,\cdot)$ is a left loop (a left quasigroup with identity). The multiplication table for $(R,\cdot)$ contains each element of $R$ exactly once in each row; this is the left division property. In particular, there is one instance of $e$ in each row. One can recover $G$ knowing $(R,\cdot)$, $K$ and the data ${\triangleright},{\triangleleft},\tau$\cite[Prop.3.4]{KM2}. Note that a parallel property of left inverse $(\ )^L$ need not be present.
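For a non-subgroup transversal, the left division property and the failure of right division can be read off from the multiplication table of $(R,\cdot)$. A small self-contained check (our own illustration, with the same $S_3$ conventions, $K=\{e,u\}$ and $R=\{e,w,v\}$):

```python
# S3 as permutation tuples; same conventions: u=(12), v=(23), w=(13)
def mul(a, b): return tuple(a[b[i] - 1] for i in range(3))

e, u, v, w = (1, 2, 3), (2, 1, 3), (1, 3, 2), (3, 2, 1)
K, R = [e, u], [e, w, v]

def dot(r, s):  # R-component of the unique factorisation rs = (r.s)tau(r,s)
    [out] = [p for p in R for k in K if mul(p, k) == mul(r, s)]
    return out

# left division: each row of the multiplication table of (R,.) contains
# each element of R exactly once, so s.r = t has a unique solution r
for s in R:
    assert sorted(dot(s, r) for r in R) == sorted(R)

# right inverses r^R exist (the unique solution of r.r^R = e); for this
# transversal ( )^R also happens to be a bijection
rR = {r: next(p for p in R if dot(r, p) == e) for r in R}
assert sorted(rR.values()) == sorted(R)

# but right division fails: ? . w = w has two solutions, e and v
assert dot(e, w) == w and dot(v, w) == w
```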
\begin{definition} We say that $R$ is {\em regular} if $(\ )^R$ is bijective. \end{definition} $R$ is regular iff it has both left and right inverses, and this is iff it satisfies $RK=KR$ by\cite[Prop.~3.5]{KM2}. If there is also right division then we have a loop (a quasigroup with identity) and under further conditions\cite[Prop.~3.6]{KM2} we have $r^L=r^R$ and a 2-sided inverse property quasigroup. The case of regular $R$ is studied in \cite{Nat} but this excludes some interesting choices of $R$ and we do not always assume it. Throughout, we will specify when $R$ is required to be regular for results to hold. Finally, if $R$ obeys a further condition $x{\triangleright}(s\cdot t)=(x{\triangleright} s){\triangleright} t$ in \cite{KM2} then $\Xi$ is a Hopf quasigroup in the sense introduced in \cite{KM1}. This is even more restrictive but will apply to our octonions-related example. Here we just give the choices for our go-to cases for $S_3$. \begin{example}\label{exS3R}\rm $G=S_3$ with $K=\{e,u\}$ has four choices of transversal $R$ meeting our requirement that $e\in R$. Namely \begin{enumerate} \item $R=\{e,uv,vu\}$ (our standard choice) {\em is a subgroup} $R=\mathbb{Z}_3$, so it is associative and there is 2-sided division and a 2-sided inverse. We also have $u{\triangleright}(uv)=vu, u{\triangleright} (vu)=uv$ but ${\triangleleft},\tau$ trivial. \item $R=\{e,w,v\}$ which is {\em not a subgroup} and indeed $\tau(v,w)=\tau(w,v)=u$ (and all others are necessarily $e$). There is an action $u{\triangleright} v=w, u{\triangleright} w=v$ but ${\triangleleft}$ is still trivial. For example, \begin{align*} vw&=wu \Rightarrow v\cdot w=w,\ \tau(v,w)=u;\quad wv=vu \Rightarrow w\cdot v=v,\ \tau(w,v)=u\\ uv&=wu \Rightarrow u{\triangleright} v=w,\ u{\triangleleft} v=u;\quad uw=vu \Rightarrow u{\triangleright} w=v,\ u{\triangleleft} w=u.
\end{align*} This has left division/right inverses as it must but {\em not right division} as $e\cdot w=v\cdot w=w$ and $e\cdot v=w\cdot v=v$. We also have $v\cdot v=w\cdot w=e$ and $(\ )^R$ is bijective so this {\em is regular}. \item $R=\{e,uv, v\}$ which is {\em not a subgroup} and $\tau,{\triangleright},{\triangleleft}$ are all nontrivial with \begin{align*} \tau(uv,uv)&=\tau(v,uv)=\tau(uv,v)=u,\quad \tau(v,v)=e,\\ v\cdot v&=e,\quad v\cdot uv=uv,\quad uv\cdot v=e,\quad uv\cdot uv=v,\\ u{\triangleright} v&=uv,\quad u{\triangleright} (uv)=v,\quad u{\triangleleft} v=e,\quad u{\triangleleft} uv=e\end{align*} and all other cases determined from the properties of $e$. Here $v^R=v$ and $(uv)^R=v$ so this is {\em not regular}. \item $R=\{e,w,vu\}$ which is analogous to the preceding case, so {\em not a subgroup}, $\tau,{\triangleright},{\triangleleft}$ all nontrivial and {\em not regular}. \end{enumerate} \end{example} We will also need the following useful lemma in some of our proofs. \begin{lemma}\label{leminv}\cite{KM2} For any transversal $R$ with $e\in R$, we have \begin{enumerate} \item $(x{\triangleleft} r)^{-1}=x^{-1}{\triangleleft}(x{\triangleright} r)$. \item $(x{\triangleright} r)^R=(x{\triangleleft} r){\triangleright} r^R$. \item $\tau(r,r^R)^{-1}{\triangleleft} r=\tau(r^R,r^{RR})^{-1}$. \item $\tau(r,r^R){\triangleright} r^{RR}=r$, equivalently $\tau(r,r^R)^{-1}{\triangleright} r=r^{RR}$. \end{enumerate} for all $x\in K, r\in R$. \end{lemma} {\noindent {\bfseries Proof:}\quad } The first two items are elementary from the matched pair axioms. For (1), we use $e=(x^{-1}x){\triangleleft} r=(x^{-1}{\triangleleft}(x{\triangleright} r))(x{\triangleleft} r)$ and for (2) $e=x{\triangleright}(r\cdot r^R)=(x{\triangleright} r)\cdot((x{\triangleleft} r){\triangleright} r^R)$. The other two items are a left-right reversal of \cite[Lem.~3.2]{KM2} but given here for completeness.
For (3), \begin{align*} e&=(\tau(r,r^R)\tau(r,r^R)^{-1}){\triangleleft} r=(\tau(r,r^R){\triangleleft} (\tau(r,r^R)^{-1}{\triangleright} r))(\tau(r,r^R)^{-1}{\triangleleft} r)\\ &=(\tau(r,r^R){\triangleleft} r^{RR})(\tau(r,r^R)^{-1}{\triangleleft} r)\end{align*} using (4) in the second equality, which we combine with \[ \tau(r^R,r^{RR})=\tau(r, r^R\cdot r^{RR})\tau(r^R,r^{RR})=\tau(r\cdot r^R, \tau(r,r^R){\triangleright} r^{RR})(\tau(r,r^R){\triangleleft} r^{RR})=\tau(r,r^R){\triangleleft} r^{RR}\] by the cocycle property. For (4), $\tau(r,r^R){\triangleright} r^{RR}=(r\cdot r^R)\cdot(\tau(r,r^R){\triangleright} r^{RR})=r\cdot (r^R\cdot r^{RR})=r$ by one of the matched pair conditions. \endproof Using this lemma, it is not hard to prove, cf.~\cite[Prop.~3.3]{KM2}, that \begin{equation}\label{leftdiv}s\backslash t=s^R\cdot(\tau(s,s^R)^{-1}{\triangleright} t);\quad s\cdot(s\backslash t)=s\backslash(s\cdot t)=t,\end{equation} which can also be useful in calculations. \subsection{$\Xi(R,K)$ as a quasi-bialgebra} We recall that a quasi-bialgebra is a unital algebra $H$, a coproduct $\Delta:H\to H\mathop{{\otimes}} H$ which is an algebra map but is no longer required to be coassociative, and ${\epsilon}:H\to \mathbb{C}$ a counit for $\Delta$ in the usual sense $(\mathrm{id}\mathop{{\otimes}}{\epsilon})\Delta=({\epsilon}\mathop{{\otimes}}\mathrm{id})\Delta=\mathrm{id}$.
Instead, we have a weaker form of coassociativity\cite{Dri,Ma:book} \[ (\mathrm{id}\mathop{{\otimes}}\Delta)\Delta=\phi((\Delta\mathop{{\otimes}}\mathrm{id})\Delta(\ ))\phi^{-1}\] for an invertible element $\phi\in H^{\mathop{{\otimes}} 3}$ obeying the 3-cocycle identity \[ (1\mathop{{\otimes}}\phi)((\mathrm{id}\mathop{{\otimes}}\Delta\mathop{{\otimes}}\mathrm{id})\phi)(\phi\mathop{{\otimes}} 1)=((\mathrm{id}\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\Delta)\phi)(\Delta\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\mathrm{id})\phi,\quad (\mathrm{id}\mathop{{\otimes}}{\epsilon}\mathop{{\otimes}}\mathrm{id})\phi=1\mathop{{\otimes}} 1\] (it follows that ${\epsilon}$ in the other positions also gives $1\mathop{{\otimes}} 1$). In our case, we already know that $\Xi(R,K)$ is a unital algebra. \begin{lemma}\label{Xibialg} $\Xi(R,K)$ is a quasi-bialgebra with \[ \Delta x=\sum_{s\in R}x\delta_s \mathop{{\otimes}} x{\triangleleft} s, \quad \Delta \delta_r = \sum_{s,t\in R} \delta_{s\cdot t,r}\delta_{s}\otimes \delta_{t},\quad {\epsilon} x=1,\quad {\epsilon} \delta_r=\delta_{r,e}\] for all $x\in K, r\in R$, and \[ \phi=\sum_{r,s\in R} \delta_r \otimes \delta_s \otimes \tau(r,s)^{-1},\quad \phi^{-1} = \sum_{r,s\in R} \delta_r\otimes \delta_s \otimes \tau(r,s).\] \end{lemma} {\noindent {\bfseries Proof:}\quad } This follows by reconstruction arguments, but it is useful to check directly, \begin{align*} (\Delta x)(\Delta y)&=\sum_{s,r}(x\delta_s\mathop{{\otimes}} x{\triangleleft} s)(y\delta_r\mathop{{\otimes}} y{\triangleleft} r)=\sum_{s,r}x\delta_sy\delta_r\mathop{{\otimes}} ( x{\triangleleft} s)( y{\triangleleft} r)\\ &=\sum_{r,s}xy\delta_{y^{-1}{\triangleright} s}\delta_r\mathop{{\otimes}} (x{\triangleleft} s)(y{\triangleleft} r)=\sum_r xy \delta_r\mathop{{\otimes}} (x{\triangleleft}(y{\triangleright} r))(y{\triangleleft} r)=\Delta(xy) \end{align*} as $s=y{\triangleright} r$ and using the formula for $(xy){\triangleleft} r$ at the end.
Also, \begin{align*} \Delta(\delta_{x{\triangleright} s}x)&=(\Delta\delta_{x{\triangleright} s})(\Delta x)=\sum_{r, p.t=x{\triangleright} s}\delta _p x\delta_r\mathop{{\otimes}} \delta_t x{\triangleleft} r\\ &=\sum_{r, p.t=x{\triangleright} s}x\delta_{x^{-1}{\triangleright} p}\delta_r\mathop{{\otimes}} x{\triangleleft} r\delta_{(x{\triangleleft} r)^{-1}{\triangleright} t}=\sum_{(x{\triangleright} r).t=x{\triangleright} s}x \delta_r\mathop{{\otimes}} x{\triangleleft} r\delta_{(x{\triangleleft} r)^{-1}{\triangleright} t}\\ &=\sum_{(x{\triangleright} r).((x{\triangleleft} r){\triangleright} t')=x{\triangleright} s}x \delta_r\mathop{{\otimes}} x{\triangleleft} r\delta_{t'}=\sum_{r\cdot t'=s}x\delta_r\mathop{{\otimes}} (x{\triangleleft} r)\delta_{t'}=(\Delta x)(\Delta \delta _s)=\Delta(x\delta_s) \end{align*} using the formula for $x{\triangleright}(r\cdot t')$. This says that the coproducts stated are compatible with the algebra cross relations. Similarly, one can check that \begin{align*} (\sum_{p,r}\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}} &\tau(p,r))((\mathrm{id}\mathop{{\otimes}}\Delta )\Delta x)=\sum_{p,r,s,t}(\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}} \tau(p,r))(x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}} (x{\triangleleft} s){\triangleleft} t)\\ &=\sum_{p,r,s,t}\delta_px\delta_s\mathop{{\otimes}}\delta_r(x{\triangleleft} s)\delta_t\mathop{{\otimes}} \tau(p,r)((x{\triangleleft} s){\triangleleft} t)\\ &=\sum_{s,t}x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}\tau(x{\triangleright} s,(x{\triangleleft} s){\triangleright} t)((x{\triangleleft} s){\triangleleft} t)\\ &=\sum_{s,t}x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}(x{\triangleleft}(s.t))\tau(s,t)\\ &=\sum_{p,r,s,t}(x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}(x{\triangleleft}(s.t)))(\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}}\tau(p,r))\\ &=(
(\Delta\mathop{{\otimes}}\mathrm{id})\Delta x ) (\sum_{p,r}\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}}\tau(p,r)) \end{align*} as $p=x{\triangleright} s$ and $r=(x{\triangleleft} s){\triangleright} t$ and using the formula for $(x{\triangleleft} s){\triangleleft} t$. For the remaining cocycle relations, we have \begin{align*} (\mathrm{id}\mathop{{\otimes}}{\epsilon}\mathop{{\otimes}}\mathrm{id})\phi = \sum_{r,s}\delta_{s,e}\delta_r\mathop{{\otimes}}\tau(r,s)^{-1} = \sum_r\delta_r\mathop{{\otimes}} 1 = 1\mathop{{\otimes}} 1 \end{align*} and \[ (1\mathop{{\otimes}}\phi)((\mathrm{id}\mathop{{\otimes}}\Delta\mathop{{\otimes}}\mathrm{id})\phi)(\phi\mathop{{\otimes}} 1)=\sum_{r,s,t}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}} \delta_t\tau(r,s)^{-1}\mathop{{\otimes}}\tau(s,t)^{-1}\tau(r,s\cdot t)^{-1}\] after multiplying out $\delta$-functions and renaming variables. Using the value of $\Delta\tau(r,s)^{-1}$ and similarly multiplying out, we obtain on the other side \begin{align*} ((\mathrm{id}\mathop{{\otimes}}&\mathrm{id}\mathop{{\otimes}}\Delta)\phi)((\Delta\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\mathrm{id})\phi)=\sum_{r,s,t}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\tau(r,s)^{-1}\delta_t\mathop{{\otimes}}(\tau(r,s)^{-1}{\triangleleft} t)\tau(r\cdot s,t)^{-1}\\ &=\sum_{r,s,t'}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\delta_{t'}\tau(r,s)^{-1}\mathop{{\otimes}}(\tau(r,s)^{-1}{\triangleleft} (\tau(r,s){\triangleright} t'))\tau(r\cdot s,\tau(r,s){\triangleright} t')^{-1}\\ &=\sum_{r,s,t'}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\delta_{t'}\tau(r,s)^{-1}\mathop{{\otimes}}(\tau(r,s){\triangleleft} t')^{-1}\tau(r\cdot s,\tau(r,s){\triangleright} t')^{-1}, \end{align*} where we change summation to $t'=\tau(r,s){\triangleright} t$ then use Lemma~\ref{leminv}. Renaming $t'$ to $t$, the two sides are equal in view of the cocycle identity for $\tau$. Thus, we have a quasi-bialgebra with $\phi$ as stated.
\endproof We can also write the coproduct (and the other structures) more explicitly. \begin{remark}\rm (1) If we want to write the coproduct on $\Xi$ explicitly as a vector space, the above becomes \[ \Delta(\delta_r\mathop{{\otimes}} x)=\sum_{s\cdot t=r}\delta_s\mathop{{\otimes}} x\mathop{{\otimes}}\delta_t\mathop{{\otimes}} (x^{-1}{\triangleleft} s)^{-1},\quad {\epsilon}(\delta_r\mathop{{\otimes}} x)=\delta_{r,e}\] which is ugly due to our decision to build it on $\mathbb{C}(R)\mathop{{\otimes}}\mathbb{C} K$. (2) If we built it in the other order then we could have $\Xi=\mathbb{C} K{\triangleright\!\!\!<} \mathbb{C}(R)$ as an algebra, where we have a right action \[ (f{\triangleright} x)(r)= f(x{\triangleright} r);\quad \delta_r{\triangleright} x=\delta_{x^{-1}{\triangleright} r}\] on $f\in \mathbb{C}(R)$. Now make a right handed cross product \[ (x\mathop{{\otimes}} \delta_r)(y\mathop{{\otimes}} \delta_s)= xy\mathop{{\otimes}} (\delta_r{\triangleright} y)\delta_s=xy\mathop{{\otimes}}\delta_s\delta_{r,y{\triangleright} s}\] which has cross relations $\delta_r y=y\delta_{y^{-1}{\triangleright} r}$. These are the same relations as before. So this is the same algebra, just with the preferred basis $\{x\delta_r\}$ rather than the other way around. This time, we have \[ \Delta (x\mathop{{\otimes}}\delta_r)=\sum_{s\cdot t=r} x\mathop{{\otimes}}\delta_s\mathop{{\otimes}} x{\triangleleft} s\mathop{{\otimes}}\delta_t.\] We do not do this in order to be compatible with the most common form of $D(G)$ as $\mathbb{C}(G){>\!\!\!\triangleleft} \mathbb{C} G$ as in \cite{CowMa}.
\end{remark} \subsection{$\Xi(R,K)$ as a quasi-Hopf algebra} A quasi-bialgebra is a quasi-Hopf algebra if there are elements $\alpha,\beta\in H$ and an antialgebra map $S:H\to H$ such that\cite{Dri,Ma:book} \[(S \xi_1)\alpha\xi_2={\epsilon}(\xi)\alpha,\quad \xi_1\beta S\xi_2={\epsilon}(\xi)\beta,\quad \phi^1\beta(S\phi^2)\alpha\phi^3=1,\quad (S\phi^{-1})\alpha\phi^{-2}\beta S\phi^{-3}=1\] where $\Delta\xi=\xi_1\mathop{{\otimes}}\xi_2$, $\phi=\phi^1\mathop{{\otimes}}\phi^2\mathop{{\otimes}}\phi^3$ with inverse $\phi^{-1}\mathop{{\otimes}}\phi^{-2}\mathop{{\otimes}}\phi^{-3}$ is a compact notation (sums of such terms to be understood). It is usual to assume $S$ is bijective but we do not require this. The $\alpha,\beta, S$ are not unique and can be changed to $S'=U(S\ ) U^{-1}, \alpha'=U\alpha, \beta'=\beta U^{-1}$ for any invertible $U$. In particular, if $\alpha$ is invertible then we can transform to a standard form replacing it by $1$. For the purposes of this paper, we therefore call the case of $\alpha$ invertible a (left) {\em regular antipode}. 
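The matched-pair identities used repeatedly in the proofs are easy to machine-check in small cases. The following minimal sketch (our own encoding, not part of the formal development) takes $G=S_3$ acting on $\{0,1,2\}$, $K=\{e,u\}$ with $u$ a fixed transposition, and $R$ the transversal consisting of the identity and the other two transpositions, with the conventions $xr=(x{\triangleright} r)(x{\triangleleft} r)$ and $rs=(r\cdot s)\tau(r,s)$ coming from the unique factorisation $G=RK$:

```python
# Minimal matched-pair check for S_2 inside S_3 (transversal of transpositions).
def compose(f, g):                   # permutation composition: f after g
    return tuple(f[g[i]] for i in range(3))

e, u = (0, 1, 2), (1, 0, 2)          # K = {e, u}, u a transposition
v, w = (0, 2, 1), (2, 1, 0)          # the remaining two transpositions
K, R = [e, u], [e, v, w]             # R is a transversal of G/K

def factor(g):                       # unique factorisation g = r x, r in R, x in K
    return next((r, x) for r in R for x in K if compose(r, x) == g)

def act(x, r):  return factor(compose(x, r))    # returns (x |> r, x <| r)
def prod(r, s): return factor(compose(r, s))    # returns (r . s, tau(r, s))

# nontrivial cocycle values, and r^R = r for this transversal
assert prod(v, w) == (w, u) and prod(w, v) == (v, u)
assert all(prod(r, r)[0] == e for r in R)

# (R, .) is not associative, so tau is genuinely needed
assert prod(prod(v, w)[0], v)[0] != prod(v, prod(w, v)[0])[0]

# quasi-associativity on R: r.(s.t) = (r.s).(tau(r,s) |> t)
for r in R:
    for s in R:
        for t in R:
            rs, tau_rs = prod(r, s)
            assert prod(r, prod(s, t)[0])[0] == prod(rs, act(tau_rs, t)[0])[0]

for x in K:
    for s in R:
        for t in R:
            xs, x_s = act(x, s)      # x |> s and x <| s
            st, tau_st = prod(s, t)  # s.t and tau(s, t)
            # x |> (s.t) = (x |> s).((x <| s) |> t)
            assert act(x, st)[0] == prod(xs, act(x_s, t)[0])[0]
            # cocycle: tau(x|>s,(x<|s)|>t)((x<|s)<|t) = (x <| (s.t)) tau(s,t)
            lhs = compose(prod(xs, act(x_s, t)[0])[1], act(x_s, t)[1])
            assert lhs == compose(act(x, st)[1], tau_st)
```

Here $(\ )^R$ is the identity map and the only nontrivial values of $\tau$ are $\tau(v,w)=\tau(w,v)=u$, matching the second-transversal example considered later.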
\begin{proposition}\label{standardS} If $(\ )^R$ is bijective, $\Xi(R,K)$ is a quasi-Hopf algebra with regular antipode \[ S(\delta_r\mathop{{\otimes}} x)=\delta_{(x^{-1}{\triangleright} r)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} r,\quad \alpha=\sum_{r\in R}\delta_r\mathop{{\otimes}} 1,\quad \beta=\sum_r\delta_r\mathop{{\otimes}} \tau(r,r^R).\] Equivalently in subalgebra terms, \[ S\delta_r=\delta_{r^R},\quad Sx=\sum_{s\in R}(x^{-1}{\triangleleft} s)\delta_{s^R} ,\quad \alpha=1,\quad \beta=\sum_{r\in R}\delta_r\tau(r,r^R).\] \end{proposition} {\noindent {\bfseries Proof:}\quad } For the axioms involving $\phi$, we have \begin{align*}\phi^1\beta&(S \phi^2)\alpha\phi^3=\sum_{s,t,r}(\delta_s\mathop{{\otimes}} 1)(\delta_r\mathop{{\otimes}} \tau(r,r^R))(\delta_{t^R}\mathop{{\otimes}}\tau(s,t)^{-1})\\ &=\sum_{s,t}(\delta_s\mathop{{\otimes}}\tau(s,s^R))(\delta_{t^R}\mathop{{\otimes}} \tau(s,t)^{-1})=\sum_{s,t}\delta_s\delta_{s,\tau(s,s^R){\triangleright} t^R}\mathop{{\otimes}}\tau(s,s^R)\tau(s,t)^{-1}\\ &=\sum_{s^R.t^R=e}\delta_s\mathop{{\otimes}} \tau(s,s^R)\tau(s,t)^{-1}=1, \end{align*} where we used $s.(s^R.t^R)=(s.s^R).\tau(s,s^R){\triangleright} t^R$. So $s=\tau(s,s^R){\triangleright} t^R$ holds iff $s^R.t^R=e$ by left cancellation. In the sum, we can take $t=s^R$, which contributes $\delta_s\mathop{{\otimes}} e$ since $s^R.(s^R)^R=e$; there is a unique element $t^R$ which does this, and hence a unique $t$ provided $(\ )^R$ is injective and hence a bijection. \begin{align*} S(\phi^{-1})\alpha&\phi^{-2}\beta S(\phi^{-3}) = \sum_{s,t,u,v}(\delta_{s^R}\otimes 1)(\delta_t\otimes 1)(\delta_u\otimes\tau(u,u^R))(\delta_{(\tau(s,t)^{-1}{\triangleright} v)^R}\otimes (\tau(s,t)^{-1}{\triangleleft} v))\\ &= \sum_{s,v}(\delta_{s^R}\otimes\tau(s^R,s^R{}^R))(\delta_{(\tau(s,s^R)^{-1}{\triangleright} v)^R}\otimes \tau(s,s^R)^{-1}{\triangleleft} v).
\end{align*} Upon multiplication, we will have a $\delta$-function dictating that \[s^R = \tau(s^R,s^R{}^R){\triangleright} (\tau(s,s^R)^{-1}{\triangleright} v)^R,\] so we can use the fact that \begin{align*}s\cdot s^R = e &= s\cdot(\tau(s^R,s^R{}^R){\triangleright} (\tau(s,s^R)^{-1}{\triangleright} v)^R)\\ &= s\cdot(s^R\cdot(s^R{}^R\cdot (\tau(s,s^R)^{-1}{\triangleright} v)^R))\\ &= \tau(s,s^R){\triangleright} (s^R{}^R\cdot(\tau(s,s^R)^{-1}{\triangleright} v)^R), \end{align*} where we use similar identities to before. Therefore $s^R{}^R\cdot (\tau(s,s^R)^{-1}{\triangleright} v)^R = e$, so $(\tau(s,s^R)^{-1}{\triangleright} v)^R = s^R{}^R{}^R$. When $(\ )^R$ is injective, this gives us $v = \tau(s,s^R){\triangleright} s^R{}^R$. Returning to our original calculation we have that our previous expression is \begin{align*} \cdots &= \sum_s \delta_{s^R}\otimes \tau(s^R,s^R{}^R)(\tau(s,s^R)^{-1}{\triangleleft} (\tau(s,s^R){\triangleright} s^R{}^R))\\ &= \sum_s \delta_{s^R}\otimes \tau(s^R,s^R{}^R)(\tau(s,s^R){\triangleleft} s^R{}^R)^{-1} = \sum_s \delta_{s^R}\otimes 1 = 1. \end{align*} We now prove the antipode axiom involving $\alpha$, \begin{align*} (S(\delta_s \otimes& x)_1)(\delta_s \otimes x)_2 = \sum_{r\cdot t = s}(\delta_{(x^{-1}{\triangleright} r)^R}\otimes (x^{-1}{\triangleleft} r))(\delta_t\otimes (x^{-1}{\triangleleft} r)^{-1})\\ &= \sum_{r\cdot t = s}\delta_{(x^{-1}{\triangleright} r)^R, (x^{-1}{\triangleleft} r){\triangleright} t}\delta_{(x^{-1}{\triangleright} r)^R}\otimes 1 = \delta_{e,s}\sum_r \delta_{(x^{-1}{\triangleright} r)^R}\otimes 1 = {\epsilon}(\delta_s\otimes x)1. \end{align*} The condition from the $\delta$-functions is \[ (x^{-1}{\triangleright} r)^R=(x^{-1}{\triangleleft} r){\triangleright} t\] which by uniqueness of right inverses holds iff \[ e=(x^{-1}{\triangleright} r)\cdot (x^{-1}{\triangleleft} r){\triangleright} t=x^{-1}{\triangleright}(r\cdot t)\] which holds iff $r.t=e$, so $t=r^R$.
As we also need $r.t=s$, this becomes $\delta_{s,e}$ as required. We now prove the axiom involving $\beta$, starting with \begin{align*}(\delta_s\otimes& x)_1 \beta S((\delta_s\otimes x)_2) = \sum_{r\cdot t=s, p}(\delta_r\mathop{{\otimes}} x)(\delta_p\mathop{{\otimes}}\tau(p,p^R))S(\delta_t\mathop{{\otimes}} (x^{-1}{\triangleleft} r)^{-1})\\ &=\sum_{r\cdot t=s, p}(\delta_r\delta_{r,x{\triangleright} p}\mathop{{\otimes}} x\tau(p,p^R))(\delta_{((x^{-1}{\triangleleft} r){\triangleright} t)^R}\mathop{{\otimes}} (x^{-1}{\triangleleft} r){\triangleleft} t)\\ &=\sum_{r\cdot t=s}(\delta_r\mathop{{\otimes}} x\tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R))(\delta_{((x^{-1}{\triangleleft} r){\triangleright} t)^R}\mathop{{\otimes}} (x^{-1}{\triangleleft} r){\triangleleft} t). \end{align*} When we multiply this out, we will need from the product of $\delta$-functions that \[ \tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)^{-1}{\triangleright} (x^{-1}{\triangleright} r)=((x^{-1}{\triangleleft} r){\triangleright} t)^R,\] but note that $\tau(q,q{}^R)^{-1}{\triangleright} q=q^R{}^R$ from Lemma~\ref{leminv}. So the condition from the $\delta$-functions is \[ (x^{-1}{\triangleright} r)^R{}^R=((x^{-1}{\triangleleft} r){\triangleright} t)^R,\] so \[ (x^{-1}{\triangleright} r)^R=(x^{-1}{\triangleleft} r){\triangleright} t\] when $(\ )^R$ is injective. By uniqueness of right inverses, this holds iff \[ e=(x^{-1}{\triangleright} r)\cdot ((x^{-1}{\triangleleft} r){\triangleright} t)=x^{-1}{\triangleright}(r\cdot t),\] where the last equality is from the matched pair conditions. This holds iff $r\cdot t=e$, that is, $t=r^R$. This also means in the sum that we need $s=e$. 
Hence, when we multiply out our expression so far, we have \[\cdots=\delta_{s,e}\sum_r\delta_r\mathop{{\otimes}} x\tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)(x^{-1}{\triangleleft} r){\triangleleft} r^R=\delta_{s,e}\sum_r\delta_r\mathop{{\otimes}}\tau(r,r^R)=\delta_{s,e}\beta,\] as required, where we used \[ x\tau( x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)(x^{-1}{\triangleleft} r){\triangleleft} r^R=\tau(r,r^R)\] by the matched pair conditions. The subalgebra form of $Sx$ is the same using the commutation relations and Lemma~\ref{leminv} to reorder. It remains to check that \begin{align*}S(\delta_s&\mathop{{\otimes}} y)S(\delta_r\mathop{{\otimes}} x)=(\delta_{(y^{-1}{\triangleright} s)^R}\mathop{{\otimes}} y^{-1}{\triangleleft} s)(\delta_{(x^{-1}{\triangleright} r)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} r)\\ &=\delta_{r,x{\triangleright} s}\delta_{(y^{-1}{\triangleright} s)^R}\mathop{{\otimes}} (y^{-1}{\triangleleft} s)(x^{-1}{\triangleleft} r)=\delta_{r,x{\triangleright} s}\delta_{(y^{-1}x^{-1}{\triangleright} r)^R}\mathop{{\otimes}}( y^{-1}{\triangleleft}(x^{-1}{\triangleright} r))(x^{-1}{\triangleleft} r)\\ &=S(\delta_r\delta_{r,x{\triangleright} s}\mathop{{\otimes}} xy)=S((\delta_r\mathop{{\otimes}} x)(\delta_s\mathop{{\otimes}} y)), \end{align*} where the product of $\delta$-functions requires $(y^{-1}{\triangleright} s)^R=( y^{-1}{\triangleleft} s){\triangleright} (x^{-1}{\triangleright} r)^R$, which is equivalent to $s^R=(x^{-1}{\triangleright} r)^R$ using Lemma~\ref{leminv}. This imposes $\delta_{r,x{\triangleright} s}$. We then replace $s=x^{-1}{\triangleright} r$ and recognise the answer using the matched pair identities. \endproof \subsection{$\Xi(R,K)$ as a $*$-quasi-Hopf algebra} The notion of a $*$-quasi-Hopf algebra $H$ is not part of Drinfeld's theory, but a natural approach is to require further structure making the monoidal category of modules a bar category in the sense of \cite{BegMa:bar}.
If $H$ is at least a quasi-bialgebra, the additional structure we need, fixing a typo in \cite[Def.~3.16]{BegMa:bar}, is the first three of: \begin{enumerate}\item An antilinear algebra map $\theta:H\to H$. \item An invertible element $\gamma\in H$ such that $\theta(\gamma)=\gamma$ and $\theta^2=\gamma(\ )\gamma^{-1}$. \item An invertible element $\hbox{{$\mathcal G$}}\in H\mathop{{\otimes}} H$ such that \begin{equation}\label{*GDelta}\Delta\theta =\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op}(\ ))\hbox{{$\mathcal G$}},\quad ({\epsilon}\mathop{{\otimes}}\mathrm{id})(\hbox{{$\mathcal G$}})=(\mathrm{id}\mathop{{\otimes}}{\epsilon})(\hbox{{$\mathcal G$}})=1,\end{equation} \begin{equation}\label{*Gphi} (\theta\mathop{{\otimes}}\theta\mathop{{\otimes}}\theta)(\phi_{321})(1\mathop{{\otimes}}\hbox{{$\mathcal G$}})((\mathrm{id}\mathop{{\otimes}}\Delta)\hbox{{$\mathcal G$}})\phi=(\hbox{{$\mathcal G$}}\mathop{{\otimes}} 1)((\Delta\mathop{{\otimes}}\mathrm{id})\hbox{{$\mathcal G$}}).\end{equation} \item We say the $*$-quasi-bialgebra is strong if \begin{equation}\label{*Gstrong} (\gamma\mathop{{\otimes}}\gamma)\Delta\gamma^{-1}=((\theta\mathop{{\otimes}}\theta)(\hbox{{$\mathcal G$}}_{21}))\hbox{{$\mathcal G$}}.\end{equation} \end{enumerate} Note that if we have a quasi-Hopf algebra then $S$ is antimultiplicative, so an antimultiplicative antilinear map $*$ yields an antilinear algebra map $\theta=* S$. However, $S$ is not unique and it appears that specifying $\theta$ directly is more canonical. \begin{lemma} Let $(\ )^R$ be bijective.
Then $\Xi$ has an antilinear algebra automorphism $\theta$ such that \[ \theta(x)=\sum_s x{\triangleleft} s\, \delta_{s^R},\quad \theta(\delta_s)=\delta_{s^R},\] \[\theta^2=\gamma(\ )\gamma^{-1};\quad \gamma=\sum_s\tau(s,s^R)^{-1}\delta_s,\quad\theta(\gamma)=\gamma.\] \end{lemma} {\noindent {\bfseries Proof:}\quad } We compute, \[ \theta(\delta_s\delta_t)=\delta_{s,t}\delta_{s^R}=\delta_{s^R,t^R}\delta_{s^R}=\theta(\delta_s)\theta(\delta_t)\] \[\theta(x)\theta(y)=\sum_{s,t}x{\triangleleft} s\delta_{s^R} y{\triangleleft} t\delta_{t^R}=\sum_{t}(x{\triangleleft} (y{\triangleright} t)) y{\triangleleft} t\delta_{t^R}=\sum_t (xy){\triangleleft} t\delta_{t^R}=\theta(xy)\] where imagining commuting $\delta_{t^R}$ to the left fixes $s^R=(y{\triangleleft} t){\triangleright} t^R=(y{\triangleright} t)^R$, i.e.\ $s=y{\triangleright} t$, to obtain the 2nd equality. We also have \[ \theta(x\delta_s)=\sum_tx{\triangleleft} t\delta_{t^R}\delta_{s^R}=x{\triangleleft} s\delta_{s^R}=\delta_{(x{\triangleleft} s){\triangleright} s^R}x{\triangleleft} s=\delta_{(x{\triangleright} s)^R}x{\triangleleft} s\] \[ \theta(\delta_{x{\triangleright} s}x)=\sum_t\delta_{(x{\triangleright} s)^R}x{\triangleleft} t\delta_{t^R}=\sum_t\delta_{(x{\triangleright} s)^R}\delta_{(x{\triangleleft} t){\triangleright} t^R}x{\triangleleft} t=\sum_t\delta_{(x{\triangleright} s)^R}\delta_{(x{\triangleright} t)^R} x{\triangleleft} t,\] which is the same since it needs $t=s$. Next \[ \gamma^{-1}=\sum_s \tau(s,s^R)\delta_{s^{RR}}=\sum_s \delta_s \tau(s,s^R),\] where we recall from previous calculations that $\tau(s,s^R){\triangleright} s^{RR}=s$.
Then \begin{align*}\theta^2(x)&=\sum_s\theta(x{\triangleleft} s\delta_{s^R})=\sum_{s,t}(x{\triangleleft} s){\triangleleft} t\,\delta_{t^R}\delta_{s^{RR}}=\sum_s (x{\triangleleft} s){\triangleleft} s^R\,\delta_{s^{RR}}\\ &=\sum_s \tau(x{\triangleright} s,(x{\triangleright} s)^R)^{-1}x\tau(s,s^R)\delta_{s^{RR}}=\sum_{s,t}\tau(t,t^R)^{-1}\delta_{t} x\tau(s,s^R)\delta_{s^{RR}}\\ &=\sum_{s,t}\delta_{t^{RR}}\tau(t,t^R)^{-1}x\tau(s,s^R)\delta_{s^{RR}}=\gamma x\gamma^{-1}\end{align*} where, for the equality introducing the sum over $t$, if we were to commute $\delta_{s^{RR}}$ to the left, this would fix $t=x\tau(s,s^R){\triangleright} s^{RR}=x{\triangleright} s$. We then use $\tau(t,t^R)^{-1}{\triangleright} t=t^{RR}$ and recognise the answer. We also check that \begin{align*}\gamma\delta_s\gamma^{-1}&= \tau(s,s^R)^{-1}\delta_s\tau(s,s^R)=\delta_{s^{RR}}=\theta^2(\delta_s),\\ \theta(\gamma) &= \sum_{s,t}\tau(s,s^R)^{-1}{\triangleleft} t\delta_{t^R}\delta_{s^R}=\sum_s\tau(s,s^R)^{-1}{\triangleleft} s\delta_{s^R}=\sum_s\tau(s^R,s^{RR})^{-1}\delta_{s^R}=\gamma\end{align*} using Lemma~\ref{leminv}. \endproof Next, we find $\hbox{{$\mathcal G$}}$ obeying the conditions above. \begin{lemma} If $(\ )^R$ is bijective then equation (\ref{*GDelta}) holds with \[ \hbox{{$\mathcal G$}}=\sum_{s,t} \delta_{t^R}\tau(s,t)^{-1}\mathop{{\otimes}} \delta_{s^R}\tau(t,t^R) (\tau(s,t){\triangleleft} t^R)^{-1}, \] \[\hbox{{$\mathcal G$}}^{-1}=\sum_{s,t} \tau(s,t)\delta_{t^R}\mathop{{\otimes}} (\tau(s,t){\triangleleft} t^R)\tau(t,t^R)^{-1} \delta_{s^R}.\] \end{lemma} {\noindent {\bfseries Proof:}\quad } The proof that $\hbox{{$\mathcal G$}},\hbox{{$\mathcal G$}}^{-1}$ are indeed inverse is straightforward on matching the $\delta$-functions to fix the summation variables in $\hbox{{$\mathcal G$}}^{-1}$ in terms of $\hbox{{$\mathcal G$}}$.
This then comes down to proving that the map $(s,t)\to (p,q):=(\tau(s,t){\triangleright} t^R, \tau'(s,t){\triangleright} s^R)$ is injective. Indeed, the map $(p,q)\mapsto (p,p\cdot q)$ is injective by left division, so it's enough to prove that \[ (s,t)\mapsto (p,p\cdot q)=(\tau(s,t){\triangleright} t^R, \tau(s,t){\triangleright}(t^R\cdot\tau(t,t^R)^{-1}{\triangleright} s^R))=((s\cdot t)\backslash s,(s\cdot t)^R)\] is injective. We used $(s\cdot t)\cdot \tau(s,t){\triangleright} t^R=s\cdot(t\cdot t^R)=s$ by quasi-associativity to recognise $p$, recognised $t^R\cdot\tau(t,t^R)^{-1}{\triangleright} s^R=t\backslash s^R$ from (\ref{leftdiv}) and then \[ (s\cdot t)\cdot \tau(s,t){\triangleright} (t\backslash s^R)=s\cdot(t\cdot(t\backslash s^R))=s\cdot s^R=e\] to recognise $p\cdot q$. That the desired map is injective is then immediate by $(\ )^R$ injective and elementary properties of division. We use similar methods in the other proofs. Thus, writing \[ \tau'(s,t):=(\tau(s,t){\triangleleft} t^R)\tau(t,t^R)^{-1}=\tau(s\cdot t, \tau(s,t){\triangleright} t^R)^{-1}\] for brevity, we have \begin{align*}\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op} \delta_r)&=\hbox{{$\mathcal G$}}^{-1}\sum_{p\cdot q=r}(\delta_{q^R}\mathop{{\otimes}}\delta_{p^R})=\sum_{s\cdot t=r}\tau(s,t)\delta_{t^R}\mathop{{\otimes}}\tau'(s,t)\delta_{s^R},\\ (\Delta\theta(\delta_r))\hbox{{$\mathcal G$}}^{-1}&=\sum_{p\cdot q=r^R}(\delta_p\mathop{{\otimes}}\delta_q)\hbox{{$\mathcal G$}}^{-1}=\sum_{p\cdot q=r^R} \tau(s,t)\delta_{t^R}\mathop{{\otimes}}\tau'(s,t)\delta_{s^R}, \end{align*} where in the second line, commuting the $\delta_{t^R}$ and $\delta_{s^R}$ to the left sets $p=\tau(s,t){\triangleright} t^R$, $q=\tau'(s,t){\triangleright} s^R$ as studied above. Hence $p\cdot q=r^R$ in the sum is the same as $s\cdot t=r$, so the two sides are equal and we have proven (\ref{*GDelta}) on $\delta_r$. 
Similarly, \begin{align*}\hbox{{$\mathcal G$}}^{-1}&(\theta\mathop{{\otimes}}\theta)(\Delta^{op} x)\\ &=\sum_{p,q,s,t} \left(\tau(p,q)\delta_{q^R}\mathop{{\otimes}} (\tau(p,q){\triangleleft} q^R)\tau(q,q^R)^{-1} \delta_{p^R} \right)\left((x{\triangleleft} s){\triangleleft} t\, \delta_{t^R}\mathop{{\otimes}}\delta_{(x{\triangleright} s)^R}x{\triangleleft} s\right)\\ &=\sum_{s,t}(x{\triangleleft} s\cdot t)\tau(s,t)\delta_{t^R}\mathop{{\otimes}} \tau(x{\triangleright}(s\cdot t),(x{\triangleleft} s\cdot t)\tau(s,t){\triangleright} t^R)^{-1}(x{\triangleleft} s)\delta_{s^R} \end{align*} where we first note that for the $\delta$-functions to connect, we need \[ p=x{\triangleright} s,\quad ((x{\triangleleft} s){\triangleleft} t){\triangleright} t^R=q^R,\] which is equivalent to $q=(x{\triangleleft} s){\triangleright} t$ since $e=(x{\triangleleft} s){\triangleright} (t\cdot t^R)=((x{\triangleleft} s){\triangleright} t)\cdot(( (x{\triangleleft} s){\triangleleft} t){\triangleright} t^R)$. In this case \[ \tau(p,q)((x{\triangleleft} s){\triangleleft} t)=\tau(x{\triangleright} s, (x{\triangleleft} s){\triangleright} t)((x{\triangleleft} s){\triangleleft} t)=(x{\triangleleft} s\cdot t)\tau(s,t)\] by the cocycle axiom. Similarly, $(x{\triangleleft} s)^{-1}{\triangleright}(x {\triangleright} s)^R=s^R$ by Lemma~\ref{leminv} gives us $\delta_{s^R}$. For its coefficient, note that $p\cdot q=(x{\triangleright} s)\cdot((x{\triangleleft} s){\triangleright} t)=x{\triangleright}(s\cdot t)$ so that, using the other form of $\tau'(p.q)$, we obtain \[ \tau(p\cdot q,\tau(p,q){\triangleright} q^R)^{-1}(x{\triangleleft} s)=\tau(x{\triangleright}(s\cdot t),\tau(p,q)((x{\triangleleft} s){\triangleleft} t){\triangleright} t^R)^{-1}(x{\triangleleft} s) \] and we use our previous calculation to put this in terms of $s,t$. 
On the other side, we have \begin{align*} (\Delta\theta(x))&\hbox{{$\mathcal G$}}^{-1}= \sum_t\Delta(x{\triangleleft} t\, \delta_{t^R} )\hbox{{$\mathcal G$}}^{-1}\\ &=\sum_{p,q,s\cdot r=t^R}x{\triangleleft} t\, \delta_s\tau(p,q)\delta_{q^R}\mathop{{\otimes}} (x{\triangleleft} t){\triangleleft} s\, \delta_r \tau(p\cdot q,\tau(p,q){\triangleright} q^R)^{-1}\delta_{p^R}\\ &=\sum_{p,q}x{\triangleleft}(p\cdot q)\, \tau(p,q)\delta_{q^R}\mathop{{\otimes}} (x{\triangleleft} p\cdot q){\triangleleft} s\, \tau(p\cdot q,s)^{-1}\delta_{p^R}, \end{align*} where, for the $\delta$-functions to connect, we need \[ s=\tau(p,q){\triangleright} q^R,\quad r=\tau'(p,q){\triangleright} p^R.\] The map $(p,q)\mapsto (s,r)$ has the same structure as the one we studied above but applied now to $p,q$ in place of $s,t$. It follows that $s\cdot r=(p\cdot q)^R$ and hence this being equal to $t^R$ is equivalent to $p\cdot q=t$. Taking this for the value of $t$, we obtain the second expression for $(\Delta\theta(x))\hbox{{$\mathcal G$}}^{-1}$. We now use the identity for $(x{\triangleleft} p\cdot q){\triangleleft} s $ and $(p\cdot q)\cdot \tau(p,q){\triangleright} q^R=p\cdot(q\cdot q^R)=p$ to obtain the same as we obtained for $\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op} x)$ on $x$, upon renaming $s,t$ there to $p,q$. The proofs of (\ref{*Gphi}), (\ref{*Gstrong}) are similarly quite involved, but omitted given that it is known that the category of modules is a strong bar category. \endproof \begin{corollary}\label{corstar} For $(\ )^R$ bijective and the standard antipode in Proposition~\ref{standardS}, we have a $*$-quasi-Hopf algebra with $\theta=* S$, where $x^*=x^{-1},\delta_s^*=\delta_s$ is the standard $*$-algebra structure on $\Xi$ as a cross product and $\gamma,\hbox{{$\mathcal G$}}$ are as above.
\end{corollary}{\noindent {\bfseries Proof:}\quad } We check that \[*Sx=*(\sum_s \delta_{(x^{-1}{\triangleright} s)^R}x^{-1}{\triangleleft} s)=\sum_s (x^{-1}{\triangleleft} s)^{-1}\delta_{(x^{-1}{\triangleright} s)^R}=\sum_{s'}x{\triangleleft} s'\delta_{s'{}^R}=\theta(x),\] where $s'=x^{-1}{\triangleright} s$ and we used Lemma~\ref{leminv}. \endproof The key property of any quasi-bialgebra is that its category of modules is monoidal with associator $\phi_{V,W,U}: (V\mathop{{\otimes}} W)\mathop{{\otimes}} U\to V\mathop{{\otimes}} (W\mathop{{\otimes}} U)$ given by the action of $\phi$. In the $*$-quasi case, this becomes a bar category as follows\cite{BegMa:bar}. First, there is a functor ${\rm bar}$ from the category to itself which sends a module $V$ to a `conjugate', $\bar V$. In our case, this has the same set and abelian group structure as $V$ but $\lambda.\bar v=\overline{\bar\lambda v}$ for all $\lambda\in \mathbb{C}$, i.e. a conjugate action of the field, where we write $v\in V$ as $\bar v$ when viewed in $\bar V$. Similarly, \[ \xi.\bar v=\overline{\theta(\xi).v}\] for all $\xi\in \Xi(R,K)$. On morphisms $\psi:V\to W$, we define $\bar\psi:\bar V\to \bar W$ by $\bar \psi(\bar v)=\overline{\psi(v)}$. 
Next, there is a natural isomorphism $\Upsilon: {\rm bar}\circ\mathop{{\otimes}} \Rightarrow \mathop{{\otimes}}^{op}\circ({\rm bar}\times{\rm bar})$, given in our case for all modules $V,W$ by \[ \Upsilon_{V,W}:\overline{V\mathop{{\otimes}} W}{\cong} \bar W\mathop{{\otimes}} \bar V, \quad \Upsilon_{V,W}(\overline{v\mathop{{\otimes}} w})=\overline{ \hbox{{$\mathcal G$}}^2.w}\mathop{{\otimes}}\overline{\hbox{{$\mathcal G$}}^1.v}\] and making a hexagon identity with the associator, namely \[ (\mathrm{id}\mathop{{\otimes}}\Upsilon_{V,W})\circ\Upsilon_{V\mathop{{\otimes}} W, U}=\phi_{\bar U,\bar V,\bar W}\circ(\Upsilon_{W,U}\mathop{{\otimes}}\mathrm{id})\circ\Upsilon_{V,W\mathop{{\otimes}} U}\circ \overline{\phi_{V,W,U}}.\] We also have a natural isomorphism ${\rm bb}:\mathrm{id}\Rightarrow {\rm bar}\circ{\rm bar}$, given in our case for all modules $V$ by \[ {\rm bb}_V:V\to \overline{\overline V},\quad {\rm bb}_V(v)=\overline{\overline{\gamma.v}}\] and obeying $\overline{{\rm bb}_V}={\rm bb}_{\bar V}$. In our case, we have a strong bar category, which means also \[ \Upsilon_{\bar W,\bar V}\circ\overline{\Upsilon_{V,W}}\circ {\rm bb}_{V\mathop{{\otimes}} W}={\rm bb}_V\mathop{{\otimes}}{\rm bb}_W.\] Finally, a bar category has some conditions on the unit object $\underline 1$, which in our case is the trivial representation, with these conditions automatic. That $G=RK$ leads to a strong bar category is in \cite[Prop.~3.21]{BegMa:bar} but without the underlying $*$-quasi-Hopf algebra structure as found above. \begin{example}\label{exS3quasi} \rm {\sl (i) $\Xi(R,K)$ for $S_2\subset S_3$ with its standard transversal.} As an algebra, this is generated by $\mathbb{Z}_2$, which means by an element $u$ with $u^2=e$, and by $\delta_{0},\delta_{1},\delta_{2}$ for the $\delta$-functions at the points of $R=\{e,uv,vu\}$.
The relations are that the $\delta_i$ are orthogonal and add to $1$, together with the cross relations \[ \delta_0u=u\delta_0,\quad \delta_1u=u\delta_2,\quad \delta_2u=u\delta_1.\] The dot product is the additive group $\mathbb{Z}_3$, i.e. addition mod 3. The coproducts etc.\ are \[ \Delta \delta_i=\sum_{j+k=i}\delta_j\mathop{{\otimes}}\delta_k,\quad \Delta u=u\mathop{{\otimes}} u,\quad \phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1\] with addition mod 3. The cocycle and right action are trivial and the dot product is that of $\mathbb{Z}_3$ as a subgroup generated by $uv$. This gives an ordinary cross product Hopf algebra $\Xi=\mathbb{C}(\mathbb{Z}_3){>\!\!\!\triangleleft}\mathbb{C} \mathbb{Z}_2$. Here $S\delta_i=\delta_{-i}$ and $S u=u$. For the $*$-structure, the cocycle is trivial so $\gamma=1$ and $\hbox{{$\mathcal G$}}=1\mathop{{\otimes}} 1$ and we have an ordinary Hopf $*$-algebra. {\sl (ii) $\Xi(R,K)$ for $S_2\subset S_3$ with its second transversal.} For this $R$, the dot product is specified by $e$ the identity and $v\cdot w=w$, $w\cdot v=v$. The algebra has relations \[ \delta_e u=u\delta_e,\quad \delta_v u=u\delta_w,\quad \delta_w u=u\delta_v\] and the quasi-Hopf algebra coproducts etc.
are \[ \Delta \delta_e=\delta_e\mathop{{\otimes}} \delta_e+\delta_v\mathop{{\otimes}}\delta_v+\delta_w\mathop{{\otimes}}\delta_w,\quad \Delta \delta_v=\delta_e\mathop{{\otimes}} \delta_v+\delta_v\mathop{{\otimes}}\delta_e+\delta_w\mathop{{\otimes}}\delta_v,\] \[ \Delta \delta_w=\delta_e\mathop{{\otimes}} \delta_w+\delta_w\mathop{{\otimes}}\delta_e+\delta_v\mathop{{\otimes}}\delta_w,\quad \Delta u=u\mathop{{\otimes}} u,\] \[ \phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1+(\delta_v \mathop{{\otimes}}\delta_w+\delta_w\mathop{{\otimes}}\delta_v )\mathop{{\otimes}} (u-1)=\phi^{-1}.\] The antipode is \[ S\delta_s=\delta_{s^R}=\delta_s,\quad S u=\sum_{s}\delta_{(u{\triangleright} s)^R}u=u,\quad \alpha=1,\quad \beta=\sum_s \delta_s\mathop{{\otimes}}\tau(s,s)=1\] from the antipode lemma, since the map $(\ )^R$ happens to be injective and indeed acts as the identity. In this case, we see that $\Xi(R,K)$ is nontrivially a quasi-Hopf algebra. Only $\tau(v,w)=\tau(w,v)=u$ are nontrivial, hence for the $*$-quasi-Hopf algebra structure, we have \[ \gamma=1,\quad \hbox{{$\mathcal G$}}=1\mathop{{\otimes}} 1+(\delta_v\mathop{{\otimes}}\delta_w+\delta_w\mathop{{\otimes}}\delta_v)(u\mathop{{\otimes}} u-1\mathop{{\otimes}} 1)\] with $\theta=*S$ acting as the identity on our basis, $\theta(x)=x$ and $\theta(\delta_s)=\delta_s$. \end{example} We also note that the algebras $\Xi(R,K)$ here are manifestly isomorphic for the two $R$, but the coproducts are different, so the tensor products of representations are different, although they turn out to be isomorphic. The set of irreps does not change either, but how we construct them can look different. We will see in the next section that this is part of a monoidal equivalence of categories. \begin{example}\rm $S_2\subset S_3$ with its 2nd transversal.
Here $R$ has two orbits: (a) ${\hbox{{$\mathcal C$}}}=\{e\}$ with $r_0=e, K^{r_0}=K$ with two 1-dimensional irreps $V_\rho$ given by $\rho$ trivial and $\rho={\rm sign}$, and hence two irreps of $\Xi(R,K)$; (b) ${\hbox{{$\mathcal C$}}}=\{w,v\}$ with $r_0=v$ or $r_0=w$, both with $K^{r_0}=\{e\}$ and hence only $\rho$ trivial, leading to one 2-dimensional irrep of $\Xi(R,K)$. So, altogether, there are again three irreps of $\Xi(R,K)$: \begin{align*} V_{(\{e\},\rho)}:& \quad \delta_r.1 =\delta_{r,e},\quad u.1 =\pm 1,\\ V_{(\{w,v\},1)}:& \quad \delta_r. v=\delta_{r,v}v,\quad \delta_r. w=\delta_{r,w}w,\quad u.v= w,\quad u.w=v \end{align*} acting on $\mathbb{C}$ and on the span of $v,w$ respectively. These irreps are equivalent to what we had in Example~\ref{exS3n} when computing irreps from the standard $R$. \end{example} \section{Categorical justification and twisting theorem}\label{sec:cat_just} We have shown that the boundaries can be defined using the action of the algebra $\Xi(R,K)$ and that one can perform novel methods of fault-tolerant quantum computation using these boundaries. The full story, however, involves the quasi-Hopf algebra structure verified in the preceding section, and now we would like to connect back to the category theory behind this. \subsection{$G$-graded $K$-bimodules.} We start by proving the equivalence ${}_{\Xi(R,K)}\hbox{{$\mathcal M$}} \simeq {}_K\hbox{{$\mathcal M$}}_K^G$ explicitly and use it to derive the coproduct studied in Section~\ref{sec:quasi}. Although this equivalence is known\cite{PS}, we believe this to be a new and more direct derivation.
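As a quick numerical aside, the 2-dimensional irrep listed in the example above can be realised by explicit matrices. The following is only an illustrative sketch (NumPy, with our own matrix encoding and basis ordered $(v,w)$), checking that the stated actions respect the algebra relations of $\Xi(R,K)$:

```python
import numpy as np

# Basis (v, w) of the 2-dimensional irrep for the second transversal.
delta = {
    'e': np.zeros((2, 2)),        # delta_e kills both basis vectors
    'v': np.diag([1.0, 0.0]),     # delta_v projects onto v
    'w': np.diag([0.0, 1.0]),     # delta_w projects onto w
}
U = np.array([[0.0, 1.0],         # u swaps v and w
              [1.0, 0.0]])

# delta_r act as orthogonal projectors summing to the identity on this module
assert np.allclose(sum(delta.values()), np.eye(2))
for r in delta:
    for s in delta:
        expect = delta[r] if r == s else np.zeros((2, 2))
        assert np.allclose(delta[r] @ delta[s], expect)

# u^2 = e and the cross relations delta_e u = u delta_e,
# delta_v u = u delta_w, delta_w u = u delta_v hold as operators
assert np.allclose(U @ U, np.eye(2))
assert np.allclose(delta['e'] @ U, U @ delta['e'])
assert np.allclose(delta['v'] @ U, U @ delta['w'])
assert np.allclose(delta['w'] @ U, U @ delta['v'])
```

The two 1-dimensional irreps are the scalar cases $\delta_r\mapsto\delta_{r,e}$, $u\mapsto\pm1$, for which the same relations hold trivially.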
\begin{lemma} If $V_\rho$ is a $K^{r_0}$-module and $V_{\hbox{{$\mathcal O$}},\rho}$ the associated $\Xi(R,K)$ irrep, then \[ \tilde V_{\hbox{{$\mathcal O$}},\rho}= V_{\hbox{{$\mathcal O$}},\rho}\mathop{{\otimes}} \mathbb{C} K,\quad x.(r\mathop{{\otimes}} v\mathop{{\otimes}} z).y=x{\triangleright} r\mathop{{\otimes}}\zeta_r(x).v\mathop{{\otimes}} (x{\triangleleft} r)zy,\quad |r\mathop{{\otimes}} v\mathop{{\otimes}} z|=rz\] is a $G$-graded $K$-bimodule. Here $r\in \hbox{{$\mathcal O$}}$ and $v\in V_\rho$ in the construction of $V_{\hbox{{$\mathcal O$}},\rho}$. \end{lemma} {\noindent {\bfseries Proof:}\quad } That this is a $G$-graded right $K$-module commuting with the left action of $K$ is trivial. That the left action works and is $G$-graded is \begin{align*}x.(y.(r\mathop{{\otimes}} v\mathop{{\otimes}} z))&=x.(y{\triangleright} r\mathop{{\otimes}} \zeta_r(y).v\mathop{{\otimes}} (y{\triangleleft} r)z)= xy{\triangleright} r\mathop{{\otimes}} \zeta_r(xy).v\mathop{{\otimes}} (x{\triangleleft}(y{\triangleright} r))(y{\triangleleft} r)z\\ &=xy{\triangleright} r\mathop{{\otimes}} \zeta_r(xy).v\mathop{{\otimes}} ((xy){\triangleleft} r)z\end{align*} and \[ |x.(r\mathop{{\otimes}} v\mathop{{\otimes}} z).y|=(x{\triangleright} r) (x{\triangleleft} r)zy= xrzy=x|r\mathop{{\otimes}} v \mathop{{\otimes}} z|y.\] \endproof \begin{remark}\rm Recall that we can also think more abstractly of $\Xi=\mathbb{C}(G/K){>\!\!\!\triangleleft} \mathbb{C} K$ rather than using a transversal. In these terms, a representation of $\Xi(R,K)$ as an $R$-graded $K$-module $V$ such that $|x.v|=x{\triangleright} |v|$ now becomes a $G/K$-graded $K$-module such that $|x.v|=x|v|$, where $|v|\in G/K$ and we multiply from the left by $x\in K$. Moreover, the role of an orbit $\hbox{{$\mathcal O$}}$ above is played by a double coset $T=\hbox{{$\mathcal O$}} K\in {}_KG_K$.
In these terms, the role of the isotropy group $K^{r_0}$ is played by \[ K^{r_T}:=K\cap r_T K r_T^{-1}, \] where $r_T$ is any representative of the same double coset. One can take $r_T=r_0$ but we can also choose it more freely. Then an irrep is given by a double coset $T$ and an irreducible representation $\rho_T$ of $K^{r_T}$. If we denote by $V_{\rho_T}$ the carrier space for this then the associated irrep of $\mathbb{C}(G/K){>\!\!\!\triangleleft}\mathbb{C} K$ is $V_{T,\rho_T}=\mathbb{C} K\mathop{{\otimes}}_{K^{r_T}}V_{\rho_T}$ which is manifestly a $K$-module and we give it the $G/K$-grading by $|x\mathop{{\otimes}}_{K^{r_T}} v|=xK$. The construction in the last lemma is then equivalent to \[ \tilde V_{T,\rho_T}=\mathbb{C} K\mathop{{\otimes}}_{K^{r_T}} V_{\rho_T}\mathop{{\otimes}}\mathbb{C} K,\quad |x\mathop{{\otimes}}_{K^{r_T}} v\mathop{{\otimes}} z|=xz\] as manifestly a $G$-graded $K$-bimodule. This is an equivalent point of view, but we prefer our more explicit one based on $R$, hence details are omitted. \end{remark} Also note that the category ${}_K\hbox{{$\mathcal M$}}_K^G$ of $G$-graded $K$-bimodules has an obvious monoidal structure inherited from that of $K$-bimodules, where the tensor product is over $\mathbb{C} K$. Here $|w\mathop{{\otimes}}_{\mathbb{C} K} w'|=|w||w'|$ in $G$ is well-defined and $x.(w\mathop{{\otimes}}_{\mathbb{C} K}w').y=x.w\mathop{{\otimes}}_{\mathbb{C} K} w'.y$ has degree $x|w||w'|y=x|w\mathop{{\otimes}}_{\mathbb{C} K}w'|y$ as required. \begin{proposition} \label{prop:mon_equiv} We let $R$ be a transversal and $W=V\mathop{{\otimes}} \mathbb{C} K$ made into a $G$-graded $K$-bimodule by \[ x.(v\mathop{{\otimes}} z).y=x.v\mathop{{\otimes}} (x{\triangleleft}|v|)zy, \quad |v\mathop{{\otimes}} z|= |v|z\in G,\] where now we view $|v|\in R$ as the chosen representative of $|v|\in G/K$.
This gives a functor $F:{}_\Xi\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ which is a monoidal equivalence for a suitable quasibialgebra structure on $\Xi(R,K)$. The latter depends on $R$ since $F$ depends on $R$. \end{proposition} {\noindent {\bfseries Proof:}\quad } We define $F(V)$ as stated, which is clearly a right module that commutes with the left action, and the latter is a module structure as \[ x.(y.(v\mathop{{\otimes}} z))=x.(y.v\mathop{{\otimes}} (y{\triangleleft} |v|)z)=xy.v\mathop{{\otimes}} (x{\triangleleft} (y{\triangleright} |v|))(y{\triangleleft} |v|)z=(xy).(v\mathop{{\otimes}} z)\] using the matched pair axiom for $(xy){\triangleleft} |v|$. We also check that $|x.(v\mathop{{\otimes}} z).y|=|x.v|zy=(x{\triangleright} |v|)(x{\triangleleft} |v|)zy=x|v|zy=x|v\mathop{{\otimes}} z|y$. Hence, we have a $G$-graded $K$-bimodule. Conversely, if $W$ is a $G$-graded $K$-bimodule, we let \[ V=\{w\in W\ |\ |w|\in R\},\quad x.v=xv(x{\triangleleft} |v|)^{-1},\quad \delta_r.v=\delta_{r,|v|}v,\] where $v$ on the right is viewed in $W$ and we use the $K$-bimodule structure. This is arranged so that $x.v$ on the left lives in $V$. Indeed, $|x.v|=x|v|(x{\triangleleft} |v|)^{-1}=x{\triangleright} |v|$ and $x.(y.v)=xyv(y{\triangleleft} |v|)^{-1}(x{\triangleleft}(y{\triangleright} |v|))^{-1}=xyv((xy){\triangleleft} |v|)^{-1}$ by the matched pair condition, as required for a representation of $\Xi(R,K)$. One can check that this is inverse to the other direction. Thus, given $W=\oplus_{rx\in G}W_{rx}=\oplus_{x\in K} W_{Rx}$, where we let $W_{Rx}=\oplus_{r\in R}W_{rx}$, the right action by $x\in K$ gives an isomorphism $W_{Rx}{\cong} V\mathop{{\otimes}} x$ as vector spaces and hence recovers $W=V\mathop{{\otimes}}\mathbb{C} K$. 
This clearly has the correct right $K$-action and from the left $x.(v\mathop{{\otimes}} z)=xv(x{\triangleleft}|v|)^{-1}\mathop{{\otimes}} (x{\triangleleft}|v|)z$, which under the identification maps to $xv(x{\triangleleft}|v|)^{-1} (x{\triangleleft}|v|)z=xvz\in W$ as required given that $v\mathop{{\otimes}} z$ maps to $vz$ in $W$. Now, if $V,V'$ are $\Xi(R,K)$ modules then as vector spaces, \[ F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')=(V\mathop{{\otimes}} \mathbb{C} K)\mathop{{\otimes}}_{\mathbb{C} K} (V'\mathop{{\otimes}} \mathbb{C} K)=V\mathop{{\otimes}} V'\mathop{{\otimes}} \mathbb{C} K{\buildrel f_{V,V'}\over{\cong}}F(V\mathop{{\otimes}} V')\] by the obvious identifications except that in the last step we allow ourselves the possibility of a nontrivial isomorphism as vector spaces. For the actions on the two sides, \[ x.(v\mathop{{\otimes}} v'\mathop{{\otimes}} z).y=x.(v\mathop{{\otimes}} v')\mathop{{\otimes}} (x{\triangleleft} |v\mathop{{\otimes}} v'|)zy= x.v\mathop{{\otimes}} (x{\triangleleft} |v|).v'\mathop{{\otimes}} ((x{\triangleleft}|v|){\triangleleft}|v'|)zy,\] where on the right, we have $x.(v\mathop{{\otimes}} 1)=x.v \mathop{{\otimes}} x{\triangleleft}|v|$ and then take $x{\triangleleft}|v|$ via the $\mathop{{\otimes}}_{\mathbb{C} K}$ to act on $v'\mathop{{\otimes}} z$ as per our identification. Comparing the $x$ action on the $V\mathop{{\otimes}} V'$ factor, we need \[\Delta x=\sum_{r\in R}x\delta_r\mathop{{\otimes}} x{\triangleleft} r= \sum_{r\in R}\delta_{x{\triangleright} r}\mathop{{\otimes}} x \mathop{{\otimes}} 1\mathop{{\otimes}} x{\triangleleft} r\] as a modified coproduct without requiring a nontrivial $f_{V,V'}$ for this to work. The first expression is viewed in $\Xi(R,K)^{\mathop{{\otimes}} 2}$ and the second is on the underlying vector space. 
Likewise, looking at the grading of $F(V\mathop{{\otimes}} V')$ and comparing with the grading of $F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')$, we need to define $|v\mathop{{\otimes}} v'|=|v|\cdot|v'|\in R$ and use $|v|\cdot|v'|\tau(|v|,|v'|)=|v||v'|$ to match the degree on the left hand side. This amounts to the coproduct of $\delta_r$ in $\Xi(R,K)$, \[ \Delta\delta_r=\sum_{s\cdot t=r}\delta_s\mathop{{\otimes}}\delta_t=\sum_{s\cdot t=r} \delta_s\mathop{{\otimes}} 1\mathop{{\otimes}} \delta_t \mathop{{\otimes}} 1\] {\em and} a further isomorphism \[ f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)= v\mathop{{\otimes}} v'\mathop{{\otimes}}\tau(|v|,|v'|)z\] on the underlying vector space. After applying this, the degree of this element is $|v\mathop{{\otimes}} v'|\tau(|v|,|v'|)z=|v||v'|z=|v\mathop{{\otimes}} 1||v'\mathop{{\otimes}} z|$, which is the degree on the original $F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')$ side. Now we show that $f_{V,V'}$ respects associators on each side of $F$. 
Taking the associator on the $\Xi(R,K)$-module side as \[ \phi_{V,V',V''}:(V\mathop{{\otimes}} V')\mathop{{\otimes}} V''\to V\mathop{{\otimes}}(V'\mathop{{\otimes}} V''),\quad \phi_{V,V',V''}((v\mathop{{\otimes}} v')\mathop{{\otimes}} v'')=\phi^1.v\mathop{{\otimes}} (\phi^2.v'\mathop{{\otimes}}\phi^3.v'')\] and $\phi$ trivial on the $G$-graded $K$-bimodule side, for $F$ to be monoidal with the stated $f_{V,V'}$ etc., we need \begin{align*} F(\phi_{V,V',V''})&f_{V\mathop{{\otimes}} V',V''}f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} z)\\ &=F(\phi_{V,V',V''})f_{V\mathop{{\otimes}} V',V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}} (\tau(|v|,|v'|){\triangleleft}|v''|)z)\\ &=F(\phi_{V,V',V''})(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}}\tau(|v|\cdot|v'|,\tau(|v|,|v'|){\triangleright} |v''|)(\tau(|v|,|v'|){\triangleleft}|v''|)z)\\ &=F(\phi_{V,V',V''})(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}} \tau(|v|,|v'|\cdot|v''|)\tau(|v'|,|v''|)z),\\ f_{V,V'\mathop{{\otimes}} V''}&f_{V',V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} z)=f_{V,V'\mathop{{\otimes}} V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} \tau(|v'|,|v''|)z) \\ &=v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}}\tau(|v|,|v'\mathop{{\otimes}} v''|)\tau(|v'|,|v''|)z =v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}}\tau(|v|,|v'|\cdot|v''|)\tau(|v'|,|v''|)z,\end{align*} where for the first equality we moved $\tau(|v|,|v'|)$ in the output of $f_{V,V'}$ via $\mathop{{\otimes}}_{\mathbb{C} K}$ to act on the $v''$. We used the cocycle property of $\tau$ for the 3rd equality.
Comparing results, we need \[ \phi_{V,V',V''}((v\mathop{{\otimes}} v')\mathop{{\otimes}} v'')=v\mathop{{\otimes}}( v'\mathop{{\otimes}} \tau(|v|,|v'|)^{-1}.v''),\quad \phi=\sum_{s,t\in R}(\delta_s\mathop{{\otimes}} 1)\mathop{{\otimes}}(\delta_t\mathop{{\otimes}} 1)\mathop{{\otimes}} (1\mathop{{\otimes}} \tau(s,t)^{-1}).\] Note that we can write \[ f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)=(\sum_{s,t\in R}(\delta_s\mathop{{\otimes}} 1)\mathop{{\otimes}}(\delta_t\mathop{{\otimes}} 1)\mathop{{\otimes}} \tau(s,t)).(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)\] but we are not saying that $\phi$ is a coboundary since this is not given by the action of an element of $\Xi(R,K)^{\mathop{{\otimes}} 2}$. \endproof This derives the quasibialgebra structure on $\Xi(R,K)$ used in Section~\ref{sec:quasi} but now so as to obtain an equivalence of categories. \subsection{Drinfeld twists induced by change of transversal} We recall that if $H$ is a quasi-Hopf algebra and $\chi\in H\mathop{{\otimes}} H$ is a {\em cochain} in the sense that it is invertible with $(\mathrm{id}\mathop{{\otimes}} {\epsilon})\chi=({\epsilon}\mathop{{\otimes}}\mathrm{id})\chi=1$, then its {\em Drinfeld twist} $\bar H$ is another quasi-Hopf algebra \[ \bar\Delta=\chi^{-1}\Delta(\ )\chi,\quad \bar\phi=\chi_{23}^{-1}((\mathrm{id}\mathop{{\otimes}}\Delta)\chi^{-1})\phi ((\Delta\mathop{{\otimes}}\mathrm{id})\chi)\chi_{12},\quad \bar{\epsilon}={\epsilon}\] \[ \bar S=S,\quad\bar\alpha=(S\chi^1)\alpha\chi^2,\quad \bar\beta=(\chi^{-1})^1\beta S(\chi^{-1})^2\] where $\chi=\chi^1\mathop{{\otimes}}\chi^2$ with a sum of such terms understood and we use the same notation for $\chi^{-1}$, see \cite[Thm.~2.4.2]{Ma:book} but note that our $\chi$ is denoted $F^{-1}$ there.
In categorical terms, this twist corresponds to a monoidal equivalence $G:{}_{H}\hbox{{$\mathcal M$}}\to {}_{H^\chi}\hbox{{$\mathcal M$}}$ which is the identity on objects and morphisms but has a nontrivial natural transformation \[ g_{V,V'}:G(V)\bar\mathop{{\otimes}} G(V'){\cong} G(V\mathop{{\otimes}} V'),\quad g_{V,V'}(v\mathop{{\otimes}} v')= \chi^1.v\mathop{{\otimes}}\chi^2.v'.\] The next theorem follows by the above reconstruction arguments, but here we check it directly. The logic is that for different $R,\bar R$ the categories of modules are both monoidally equivalent to ${}_K\hbox{{$\mathcal M$}}_K^G$, and hence monoidally equivalent to each other, but not in a manner that is compatible with the forgetful functor to Vect. Hence these should be related by a cochain twist. \begin{theorem}\label{thmtwist} Let $R,\bar R$ be two transversals with $\bar r\in\bar R$ representing the same coset as $r\in R$. Then $\Xi(\bar R,K)$ is a cochain twist of $\Xi(R,K)$ at least as quasi-bialgebras (and as quasi-Hopf algebras if one of them is). The Drinfeld cochain is $\chi=\sum_{r\in R}(\delta_r\mathop{{\otimes}} 1)\mathop{{\otimes}} (1\mathop{{\otimes}} r^{-1}\bar r)$. \end{theorem} {\noindent {\bfseries Proof:}\quad } Let $R,\bar R$ be two transversals. Then for each $r\in R$, the coset $rK$ has a unique representative $\bar r\in\bar R$. Hence $\bar r= r c_r$ for some function $c:R\to K$ determined by the two transversals as $c_r=r^{-1}\bar r$ in $G$. One can show that the cocycle matched pair data are related by \[ x\bar{\triangleright} \bar r=(x{\triangleright} r)c_{x{\triangleright} r},\quad x\bar{\triangleleft} \bar r= c_{x{\triangleright} r}^{-1}(x{\triangleleft} r)c_r\] among other identities.
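These relations are straightforward to machine-check in the smallest case $S_2\subset S_3$. The following sketch (our own encoding: permutations of $\{0,1,2\}$ as tuples, $u=(12)$, $v=(13)$, with the first transversal $R=\{e,uv,vu\}$ and the second $\bar R=\{e,w,v\}$) verifies both displayed identities and the values of $c_r$:

```python
from itertools import product

# Permutations of {0,1,2} as tuples: p[i] is the image of i.
def mul(p, q):                      # composition (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    out = [0, 0, 0]
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

e, u, v = (0, 1, 2), (1, 0, 2), (2, 1, 0)   # u generates K = S_2; v is our choice
w = mul(mul(u, v), u)                        # the third transposition
K = [e, u]
R    = [e, mul(u, v), mul(v, u)]             # first transversal (a Z_3 subgroup)
Rbar = [e, w, v]                             # second transversal, same cosets

def factor(g, T):
    # unique factorisation g = r k with r in the transversal T and k in K
    for r in T:
        k = mul(inv(r), g)
        if k in K:
            return r, k

c = {r: mul(inv(r), factor(r, Rbar)[0]) for r in R}   # c_r = r^{-1} rbar

for x, r in product(K, R):
    rbar = mul(r, c[r])                      # the Rbar representative of rK
    tr, tl = factor(mul(x, r), R)            # x |> r and x <| r for R
    trb, tlb = factor(mul(x, rbar), Rbar)    # the barred actions for Rbar
    assert trb == mul(tr, c[tr])             # x |>' rbar = (x |> r) c_{x |> r}
    assert tlb == mul(mul(inv(c[tr]), tl), c[r])  # x <|' rbar = c^{-1}(x <| r)c_r
```

The same factorisation routine works for any inclusion $K\subseteq G$ presented by permutations, which is how the later $S_{n-1}\subset S_n$ examples can also be checked.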
On using \begin{align*} \bar s\bar t=sc_s tc_t=s (c_s{\triangleright} t)(c_s{\triangleleft} t)c_t&= (s\cdot c_s{\triangleright} t)\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t\\&=\overline{ s\cdot (c_s{\triangleright} t)}c_{s\cdot c_s{\triangleright} t}^{-1}\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t\end{align*} and factorising using $\bar R$, we see that \begin{equation}\label{taucond} \bar s\, \bar\cdot\, \bar t= \overline{s\cdot c_s{\triangleright} t},\quad\bar\tau(\bar s,\bar t)=c_{s\cdot c_s{\triangleright} t}^{-1}\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t.\end{equation} We will construct a monoidal functor $G:{}_{\Xi(R,K)}\hbox{{$\mathcal M$}}\to {}_{\Xi(\bar R,K)}\hbox{{$\mathcal M$}}$ with $g_{V,V'}(v\mathop{{\otimes}} v')= \chi^1.v\mathop{{\otimes}}\chi^2.v'$ for a suitable $\chi\in \Xi(R,K)^{\mathop{{\otimes}} 2}$. First, let $F:{}_{\Xi(R,K)}\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ be the monoidal functor above with natural isomorphism $f_{V,V'}$ and $\bar F:{}_{\Xi(\bar R,K)}\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ the parallel for $\Xi(\bar R,K)$ with isomorphism $\bar f_{V,V'}$. Then \[ C:F\to \bar F\circ G,\quad C_V:F(V)=V\mathop{{\otimes}}\mathbb{C} K\to V\mathop{{\otimes}} \mathbb{C} K=\bar FG(V),\quad C_V(v\mathop{{\otimes}} z)=v\mathop{{\otimes}} c_{|v|}^{-1}z\] is a natural isomorphism. 
To check this, denote the $\bar R$-grading by $||\ ||$; then the $G$-grading and $K$-bimodule structure obey \begin{align*} |C_V(v\mathop{{\otimes}} z)|&= |v\mathop{{\otimes}} c_{|v|}^{-1}z|= ||v||c_{|v|}^{-1}z=|v|z=|v\mathop{{\otimes}} z|,\\ x.C_V(v\mathop{{\otimes}} z).y&=x.(v\mathop{{\otimes}} c_{|v|}^{-1}z).y=x.v\mathop{{\otimes}} (x\bar{\triangleleft} ||v||)c_{|v|}^{-1}zy=x.v \mathop{{\otimes}} c_{x{\triangleright} |v|}^{-1} (x{\triangleleft} |v|)zy\\ &= C_V(x.(v\mathop{{\otimes}} z).y).\end{align*} We want these two functors not only to be naturally isomorphic, but for the isomorphism to respect their monoidal structures. Here $\bar F\circ G$ has the natural isomorphism \[ \bar f^g_{V,V'}= \bar F(g_{V,V'})\circ \bar f_{G(V),G(V')}\] by which it is a monoidal functor. The condition for a natural isomorphism $C$ between monoidal functors to be monoidal is that $C$ behaves in the obvious way on tensor product objects via the natural isomorphisms associated to each monoidal functor. In our case, this means \[ \bar f^g_{V,V'}\circ (C_{V}\mathop{{\otimes}} C_{V'}) = C_{V\mathop{{\otimes}} V'}\circ f_{V,V'}: F(V)\mathop{{\otimes}} F(V')\to \bar F G(V\mathop{{\otimes}} V').\] Putting in the specific form of these maps, the right hand side is \[C_{V\mathop{{\otimes}} V'}\circ f_{V,V'}(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K v'\mathop{{\otimes}} z)=C_{V\mathop{{\otimes}} V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|)z)=v\mathop{{\otimes}} v'\mathop{{\otimes}} c^{-1}_{|v\mathop{{\otimes}} v'|}\tau(|v|,|v'|)z,\] while the left hand side is \begin{align*}\bar f^g_{V,V'}\circ (C_{V}\mathop{{\otimes}} C_{V'})&(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K v'\mathop{{\otimes}} z)=\bar f^g_{V,V'}(v\mathop{{\otimes}} c^{-1}_{|v|}\mathop{{\otimes}}_K v'\mathop{{\otimes}} c^{-1}_{|v'|}z)\\ &=\bar f^g_{V,V'}(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K c^{-1}_{|v|}.v'\mathop{{\otimes}} (c^{-1}_{|v|}\bar{\triangleleft} ||v'||)c^{-1}_{|v'|}z)\\ &=\bar
F(g_{V,V'})(v\mathop{{\otimes}} c^{-1}_{|v|}.v'\mathop{{\otimes}} \bar\tau(||v||,||c^{-1}_{|v|}.v'||)(c^{-1}_{|v|}\bar{\triangleleft}||v'||)c^{-1}_{|v'|}z)\\ &=\bar F(g_{V,V'})(v\mathop{{\otimes}} c^{-1}_{|v|}.v'\mathop{{\otimes}} c^{-1}_{|v\mathop{{\otimes}} v'|}\tau(|v|,|v'|)z), \end{align*} using the second of (\ref{taucond}) and $|v\mathop{{\otimes}} v'|=|v|\cdot|v'|$. We also used $\bar f^g_{V,V'}=\bar F(g_{V,V'})\bar f_{G(V),G(V')}:\bar FG(V)\mathop{{\otimes}} \bar FG(V')\to \bar FG(V\mathop{{\otimes}} V')$. Comparing, we need $\bar F(g_{V,V'})$ to be the action of the element \[ \chi=\sum_{r\in R} \delta_r\mathop{{\otimes}} c_r\in \Xi(R,K)^{\mathop{{\otimes}} 2}.\] It follows from the arguments, but one can also check directly, that $\phi$ indeed twists as stated to $\bar\phi$ when these are given by Lemma~\ref{Xibialg}, again using (\ref{taucond}). \endproof The twist of a quasi-Hopf algebra is again a quasi-Hopf algebra. Hence, we have: \begin{corollary}\label{twistant} If $R$ has $(\ )^R$ bijective giving a quasi-Hopf algebra with regular antipode $S,\alpha=1,\beta$ as in Proposition~\ref{standardS} and $\bar R$ is another transversal then $\Xi(\bar R,K)$ in the twisting form of Theorem~\ref{thmtwist} has an antipode \[ \bar S=S,\quad \bar \alpha=\sum_r \delta_{r^R} c_r ,\quad \bar \beta =\sum_r \delta_r \tau(r,r^R)(c_r^{-1}{\triangleleft} r^R)^{-1} . \] This is a regular antipode if $(\ )^R$ for $\bar R$ is also bijective (i.e. $\bar\alpha$ is then invertible and can be transformed back to standard form to make it 1).\end{corollary} {\noindent {\bfseries Proof:}\quad } We work with the initial quasi-Hopf algebra $\Xi(R,K)$ and ${\triangleright},{\triangleleft},\tau$ refer to this but note that $\Xi(\bar R,K)$ is the same algebra when $\delta_r$ is identified with the corresponding $\delta_{\bar r}$.
Then \begin{align*}\bar \alpha&=(S\chi^{1})\chi^{2}=\sum_r (S\delta_r) c_r=\sum_r\delta_{r^R}c_r\end{align*} using the formula for $S\delta_r=\delta_{r^R}$ in Proposition~\ref{standardS}. Similarly, $\chi^{-1}=\sum_r \delta_r\mathop{{\otimes}} c_r^{-1}$ and we use $S,\beta$ from the above lemma, where \[ S (1\mathop{{\otimes}} x)= \sum_s \delta_{(x^{-1}{\triangleright} s)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} s=\sum_t\delta_{t^R}\mathop{{\otimes}} x^{-1}{\triangleleft}(x{\triangleright} t)=\sum_t\delta_{t^R}\mathop{{\otimes}} (x{\triangleleft} t)^{-1}.\] Then \begin{align*} \bar \beta &=(\chi^{-1})^1\beta S(\chi^{-1})^2=\sum_{r,s,t}\delta_r\delta_s\tau(s,s^R)\delta_{t^R}(c_r^{-1}{\triangleleft} t)^{-1}\\ &=\sum_{r,t} \delta_r\tau(r,r^R)\delta_{t^R}(c_r^{-1}{\triangleleft} t)^{-1}=\sum_{r,t}\delta_r\delta_{\tau(r,r^R){\triangleright} t^R}\tau(r,r^R) (c_r^{-1}{\triangleleft} t)^{-1}.\end{align*} Commuting the $\delta$-functions to the left requires $r=\tau(r,r^R){\triangleright} t^R$ or $r^{RR}=\tau(r,r^R)^{-1}{\triangleright} r= t^R$ so $t=r^R$ under our assumptions, giving the answer stated. If $(\ )^R$ is bijective then $\bar\alpha^{-1}=\sum_r c_r^{-1}\delta_{r^R}=\sum_r \delta_{c_r^{-1}{\triangleright} r^R}c_r^{-1}$ provides the left inverse. On the other side, we need $c_r^{-1}{\triangleright} r^R= c_s^{-1}{\triangleright} s^R$ iff $r=s$. This is true if $(\ )^{R}$ for $\bar R$ is also bijective. That is because, if we write $(\ )^{\bar R}$ for the right inverse with respect to $\bar R$, one can show by comparing the factorisations that \[ \bar s^{\bar R}=\overline{c_s^{-1}{\triangleright} s^R},\quad \overline{s^R}=c_s\bar{\triangleright} \bar s^{\bar R}\] and we use the first of these. \endproof \begin{example}\rm With reference to the list of transversals for $S_2\subset S_3$, we have four quasi-Hopf algebras of which two were already computed in Example~\ref{exS3quasi}.
{\sl (i) 2nd transversal as twist of the first.} Here $\bar\Xi$ is generated by $\mathbb{Z}_2$ as $u$ again and $\delta_{\bar r}$ with $\bar R=\{e,w,v\}$. We have the same cosets represented by these with $\bar e=e$, $\overline{uv}=w$ and $\overline{vu}=v$, which means $c_e=e, c_{vu}=u, c_{uv}=u$. To compare the algebras in the two cases, we identify $\delta_0=\delta_e,\delta_1=\delta_w, \delta_2=\delta_v$ as delta-functions on $G/K$ (rather than on $G$) in order to identify the algebras of $\bar\Xi$ and $\Xi$. The cochain from Theorem~\ref{thmtwist} is \[ \chi=\delta_e\mathop{{\otimes}} e+(\delta_{vu}+\delta_{uv})\mathop{{\otimes}} u=\delta_0\mathop{{\otimes}} 1+ (\delta_1+\delta_2)\mathop{{\otimes}} u=\delta_0\mathop{{\otimes}} 1+ (1-\delta_0)\mathop{{\otimes}} u \] as an element of $\Xi\mathop{{\otimes}}\Xi$. One can check that this conjugates the two coproducts as claimed. We also have \[ \chi^2=1\mathop{{\otimes}} 1,\quad ({\epsilon}\mathop{{\otimes}}\mathrm{id})\chi=(\mathrm{id}\mathop{{\otimes}}{\epsilon})\chi=1.\] We spot check (\ref{taucond}), for example $v\bar\cdot w=\overline{vu}\, \bar\cdot\, \overline{uv}=\overline{uv}=\overline{vuvu}=\overline{vu( u{\triangleright} (uv))}$, as it had to be. We should therefore find that \[((\Delta\mathop{{\otimes}}\mathrm{id})\chi)\chi_{12}=((\mathrm{id}\mathop{{\otimes}}\Delta)\chi)\chi_{23}\bar\phi. \] We have checked directly that this indeed holds. Next, the antipode of the first transversal should twist to \[ \bar S=S,\quad \bar\alpha=\delta_e c_e+\delta_{uv}c_{vu}+\delta_{vu}c_{uv}=\delta_e(e-u)+u=\delta_e c_e+\delta_{vu}c_{vu}+\delta_{uv}c_{uv}=\bar\beta\] by Corollary~\ref{twistant} for twisting the antipode. Here, $U=\bar\alpha^{-1}=\bar\beta = U^{-1}$ and $\bar S'=U(S\ )U^{-1}$ with $\bar\alpha'=\bar\beta'=1$ should also be an antipode. 
We can check this: \[U u = (\delta_0(e-u)+u)u = \delta_0(u-e)+e = u(\delta_{u^{-1}{\triangleright} 0}(e-u)+u) = u U\] so $\bar S' u = UuU^{-1} = u$, and \[\bar S' \delta_1 = U(S\delta_1)U= U\delta_2 U = (\delta_0(e-u)+u)\delta_2(\delta_0(e-u)+u) = \delta_1.\] \bigskip {\sl (ii) 3rd transversal as a twist of the first.} A mixed-up choice is $\bar R=\{e,uv,v\}$ which is not a subgroup so $\tau$ is nontrivial. One has \[ \tau(uv,uv)=\tau(v,uv)=\tau(uv,v)=u,\quad \tau(v,v)=e,\quad v.v=e,\quad v.uv=uv,\quad uv.v=e,\quad uv.uv=v,\] \[ u{\triangleright} v=uv,\quad u{\triangleright} (uv)=v,\quad u{\triangleleft} v=e,\quad u{\triangleleft} uv=e\] and all other cases implied by the properties of $e$. Here $v^R=v$ and $(uv)^R=v$. These are with respect to $\bar R$, but note that twisting calculations will take place with respect to $R$. Writing $\delta_0=\delta_e,\delta_1=\delta_{uv},\delta_2=\delta_v$ we have the same algebra as before (as we must) and now the coproduct etc., \[ \bar\Delta u=u\mathop{{\otimes}} 1+\delta_0u\mathop{{\otimes}} (u-1),\quad \bar\Delta\delta_0=\delta_0\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_2+\delta_1\mathop{{\otimes}}\delta_2 \] \[ \bar\Delta\delta_1=\delta_0\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_1,\quad \bar\Delta\delta_2=\delta_0\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_1,\] \[ \bar\phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1+ (\delta_1\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_1)\mathop{{\otimes}}(u-1)=\bar\phi^{-1}\] for the quasibialgebra. We used the $\tau,{\triangleright},{\triangleleft},\cdot$ for $\bar R$ for these direct calculations.
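The cocycle data just listed can be recomputed mechanically from the factorisation $G=\bar RK$; the sketch below (the same tuple encoding of $S_3$ as before, a convention of ours with $u=(12)$, $v=(13)$) confirms the products, $\tau$ values and actions quoted above:

```python
# Compute the quasigroup product, cocycle tau and actions from a transversal
# and check the data quoted for Rbar = {e, uv, v} in S_3 (our conventions).
def mul(p, q):                          # composition (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    out = [0, 0, 0]
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

e, u, v = (0, 1, 2), (1, 0, 2), (2, 1, 0)
uv = mul(u, v)
K = [e, u]
Rbar = [e, uv, v]                       # the 3rd transversal; not a subgroup

def factor(g):
    # unique factorisation g = r k with r in Rbar and k in K
    for r in Rbar:
        k = mul(inv(r), g)
        if k in K:
            return r, k

def dot(r, s):  return factor(mul(r, s))[0]     # the product r . s
def tau(r, s):  return factor(mul(r, s))[1]     # the cocycle tau(r, s)
def act(x, r):  return factor(mul(x, r))        # (x |> r, x <| r)

assert dot(v, v) == e and tau(v, v) == e
assert dot(v, uv) == uv and dot(uv, v) == e and dot(uv, uv) == v
assert tau(uv, uv) == tau(v, uv) == tau(uv, v) == u
assert act(u, v) == (uv, e) and act(u, uv) == (v, e)
```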
Now we consider twisting with \[ c_0=e,\quad c_1=(uv)^{-1}uv=e,\quad c_2=(vu)^{-1}v=u,\quad \chi=1\mathop{{\otimes}} 1+ \delta_2\mathop{{\otimes}} (u-1)=\chi^{-1}\] and check twisting the coproducts \[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(u\mathop{{\otimes}} u)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=u\mathop{{\otimes}} 1+\delta_0u\mathop{{\otimes}}(u-1)=\bar\Delta u, \] \[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_1)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_0,\] \[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_2)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_1,\] \[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_1)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_2.\] One can also check that (\ref{taucond}) holds, e.g. for the first half, \[ \bar 2=\bar 1\bar\cdot\bar 1=\overline{1+c_1{\triangleright} 1}=\overline{1+1},\quad \bar 0=\bar 1\bar\cdot\bar 2=\overline{1+c_1{\triangleright} 2}=\overline{1+2},\] \[ \bar 1=\bar2\bar\cdot\bar 1=\overline{2+c_2{\triangleright} 1}=\overline{2+2},\quad \bar 0=\bar2\bar\cdot\bar 2=\overline{2+c_2{\triangleright} 2}=\overline{2+1}\] as it must. Now we apply the twisting of antipodes in Corollary~\ref{twistant}, remembering to do calculations now with $R$ where $\tau,{\triangleleft}$ are trivial, to get \[ \bar S=S,\quad \bar\alpha=\delta_0+\delta_1c_2+\delta_2c_1=1+\delta_1(u-1),\quad \bar\beta=\delta_0+\delta_2c_2+\delta_1c_1=1+\delta_2(u-1),\] which obey $\bar\alpha^2=\bar\alpha$ and $\bar\beta^2=\bar\beta$ and are therefore not (left or right) invertible.
Hence, we cannot set either equal to 1 by $U$ and there is an antipode, but it is not regular. One can check the antipode indeed works: \begin{align*}(Su)\alpha+ (Su) (S\delta_0)\alpha(u-1)&=u(1+\delta_1(u-1))+\delta_0 u(1+\delta_1(u-1))(u-1)\\ &=u+\delta_2(1-u)+\delta_0(1-u)=u+(1-\delta_1)(1-u)=\alpha\\ u\beta+\delta_0u\beta S(u-1)&=u(1+\delta_2(u-1))+\delta_0 u(1+\delta_2(u-1))(u-1)\\ &=u+\delta_1(1-u)+\delta_0(1-u)=u+(1-\delta_2)(1-u)=\beta \end{align*} \begin{align*} (S\delta_0)\alpha\delta_0&+(S\delta_2)\alpha\delta_2+(S\delta_1)\alpha\delta_2=\delta_0(1+\delta_1(u-1))\delta_0+(1-\delta_0)(1+\delta_1(u-1))\delta_2\\ &=\delta_0+(1-\delta_0)\delta_2+\delta_1(\delta_1 u-\delta_2)=\delta_0+\delta_2+\delta_1u=\alpha\\ \delta_0\beta S\delta_0&+\delta_2\beta S\delta_2+\delta_1\beta S\delta_2=\delta_0(1+\delta_2(u-1))\delta_0+(1-\delta_0)(1+\delta_2(u-1))\delta_1\\ &=\delta_0+(1-\delta_0)\delta_1+(1-\delta_0)\delta_2(u-1)\delta_1=\delta_0+\delta_1+\delta_2(\delta_2u-\delta_1)=\beta \end{align*} and more simply on $\delta_1,\delta_2$. The fourth transversal has a similar pattern to the 3rd, so we do not list its coproduct etc. explicitly. \end{example} In general, there will be many different choices of transversal. For $S_{n-1}\subset S_n$, the first two transversals for $S_2\subset S_3$ generalise as follows, giving a Hopf algebra and a strictly quasi-Hopf algebra respectively. \begin{example}\rm {\sl (i) First transversal.} Here $R=\mathbb{Z}_n$ is a subgroup with $i=0,1,\cdots,n-1$ mod $n$ corresponding to the elements $(12\cdots n)^i$. Neither subgroup is normal for $n\ge 4$, so both actions are nontrivial but $\tau$ is trivial. This expresses $S_n$ as a double cross product $\mathbb{Z}_n{\bowtie} S_{n-1}$ (with trivial $\tau$) and the matched pair of actions \[ \sigma{\triangleright} i=\sigma(i),\quad (\sigma{\triangleleft} i)(j)=\sigma(i+j)-\sigma(i)\] for $i,j=1,\cdots,n-1$, where we add and subtract mod $n$ but view the results in the range $1,\cdots, n$. 
This was actually found by twisting from the 2nd transversal below, but we can check it directly as follows. First, \[\sigma (12\cdots n)^i= (\sigma{\triangleright} i)(\sigma{\triangleleft} i)=(12\cdots n)^{\sigma(i)}\left((12\cdots n)^{-\sigma(i)}\sigma(12\cdots n)^i\right)\] and we check that the second factor sends $n\to i\to \sigma(i) \to n$, hence lies in $S_{n-1}$. It follows by the known fact of unique factorisation into these subgroups that this factor is $\sigma{\triangleleft} i$. Its action on $j=1,\cdots, n-1$ is \[ (\sigma{\triangleleft} i)(j)=(12\cdots n)^{-\sigma(i)}\sigma(12\cdots n)^i(j)=\begin{cases} n-\sigma(i) & i+j=n\\ \sigma(i+j)-\sigma(i) & i+j\ne n\end{cases}=\sigma(i+j)-\sigma(i),\] where $\sigma(i+j)\ne \sigma(i)$ as $i+j\ne i$ and $\sigma(n)=n$ as $\sigma\in S_{n-1}$. It also follows since the two factors are subgroups that these are indeed a matched pair of actions. We can also check the matched pair axioms directly. Clearly, ${\triangleright}$ is an action and \[ \sigma(i)+ (\sigma{\triangleleft} i)(j)=\sigma(i)+\sigma(i+j)-\sigma(i)=\sigma{\triangleright}(i+j)\] for $i,j\in\mathbb{Z}_n$. On the other side, \begin{align*}( (\sigma{\triangleleft} i){\triangleleft} j)(k)&=(\sigma{\triangleleft} i)(j+k)-(\sigma{\triangleleft} i)(j)=\sigma(i+(j+k))-\sigma(i)-\sigma(i+j)+\sigma(i)\\ &=\sigma((i+j)+k)-\sigma(i+j)=(\sigma{\triangleleft}(i+j))(k),\\ ((\sigma{\triangleleft}(\tau{\triangleright} i))(\tau{\triangleleft} i))(j)&=(\sigma{\triangleleft}\tau(i))(\tau(i+j)-\tau(i))=\sigma(\tau(i)+\tau(i+j)-\tau(i)) -\sigma(\tau(i))\\ &= \sigma(\tau(i+j))-\sigma(\tau(i))=((\sigma\tau){\triangleleft} i)(j)\end{align*} for $i,j\in \mathbb{Z}_n$ and $k\in 1,\cdots,n-1$. This gives $ \mathbb{C} S_{n-1}{\triangleright\!\blacktriangleleft}\mathbb{C}(\mathbb{Z}_n)$ as a natural bicrossproduct Hopf algebra which we identify with $\Xi$ (which we prefer to build on the other tensor product order).
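The factorisation and the formula for $\sigma{\triangleleft} i$ are easy to machine-check; the following sketch (our own 1-indexed tuple encoding, run here for the illustrative value $n=5$) verifies $\sigma(12\cdots n)^i=(12\cdots n)^{\sigma(i)}(\sigma{\triangleleft} i)$ and that $\sigma{\triangleleft} i$ fixes $n$ for all $\sigma\in S_{n-1}$:

```python
from itertools import permutations

n = 5                                     # illustrative size; any n >= 3 works

# permutations of {1,...,n} as tuples: p[x-1] is the image of x
def mul(p, q):                            # composition (p*q)(x) = p(q(x))
    return tuple(p[q[x - 1] - 1] for x in range(1, n + 1))

cyc = tuple(range(2, n + 1)) + (1,)       # the n-cycle (1 2 ... n)

def cpow(i):                              # (1 2 ... n)^i
    p = tuple(range(1, n + 1))
    for _ in range(i % n):
        p = mul(cyc, p)
    return p

def modn(x):                              # representative of x mod n in 1..n
    return (x - 1) % n + 1

def tri_r(sigma, i):                      # sigma <| i from the displayed formula
    return tuple(modn(sigma[modn(i + j) - 1] - sigma[i - 1])
                 for j in range(1, n + 1))

for rest in permutations(range(1, n)):    # sigma in S_{n-1}, fixing n
    sigma = rest + (n,)
    for i in range(1, n):
        assert tri_r(sigma, i)[n - 1] == n     # sigma <| i lies in S_{n-1}
        assert mul(sigma, cpow(i)) == mul(cpow(sigma[i - 1]), tri_r(sigma, i))
```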
From Lemma~\ref{Xibialg} and Proposition~\ref{standardS}, this is spanned by products of $\delta_i$ for $i=0,\cdots,n-1$ as our labelling of $R=\mathbb{Z}_n$ and $\sigma\in S_{n-1}=K$, with cross relations $\sigma\delta_i=\delta_{\sigma(i)}\sigma$, $\sigma\delta_0=\delta_0\sigma$, and coproduct etc., \[ \Delta \delta_i=\sum_{j\in \mathbb{Z}_n}\delta_j\mathop{{\otimes}}\delta_{i-j},\quad \Delta\sigma=\sigma\delta_0\mathop{{\otimes}}\sigma+\sum_{i=1}^{n-1}\sigma\delta_i\mathop{{\otimes}}(\sigma{\triangleleft} i),\quad {\epsilon}\delta_i=\delta_{i,0},\quad{\epsilon}\sigma=1,\] \[ S\delta_i=\delta_{-i},\quad S\sigma=\sigma^{-1}\delta_0+\sum_{i=1}^{n-1}(\sigma^{-1}{\triangleleft} i)\delta_{-i},\] where $\sigma{\triangleleft} i$ is as above for $i=1,\cdots,n-1$. This is an ordinary Hopf $*$-algebra with $\delta_i^*=\delta_i$ and $\sigma^*=\sigma^{-1}$ according to Corollary~\ref{corstar}. \medskip {\sl (ii) 2nd transversal.} Here $R=\{e, (1\, n),(2\, n),\cdots,(n-1\, n)\}$, which has nontrivial ${\triangleright}$ in which $S_{n-1}$ permutes the 2-cycles according to the $i$ label, but again trivial ${\triangleleft}$ since \[ \sigma(i\, n)=(\sigma(i)\, n)\sigma,\quad \sigma{\triangleright} (i\ n)=(\sigma(i)\, n)\] for all $i=1,\cdots,n-1$ and $\sigma\in S_{n-1}$. It has nontrivial $\tau$ as \[ (i\, n )(j\, n)=(j\, n)(i\, j)\Rightarrow (i\, n)\cdot (j\, n)=(j\, n),\quad \tau((i\, n),(j\, n))=(ij)\] for $i\ne j$ and we see that $\cdot$ has right but not left division, i.e., left but not right cancellation. We also have $(in)\cdot(in)=e$ and $\tau((in),(in))=e$ so that $(\ )^R$ is the identity map, hence $R$ is regular. This transversal gives a cross-product quasi-Hopf algebra $\Xi=\mathbb{C} S_{n-1}{\triangleright\!\!\!<}_\tau \mathbb{C}(R)$ where $R$ is a left quasigroup (i.e. unital and with left cancellation) except that we prefer to write it with the tensor factors in the other order.
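The identities underlying this transversal, $\sigma(i\,n)=(\sigma(i)\,n)\sigma$ and $(i\,n)(j\,n)=(j\,n)(i\,j)$, can likewise be machine-checked (a sketch in the same 1-indexed encoding as before, for the illustrative value $n=5$):

```python
from itertools import permutations

n = 5                                    # illustrative size; our own encoding

def mul(p, q):                           # (p*q)(x) = p(q(x)), tuples on 1..n
    return tuple(p[q[x - 1] - 1] for x in range(1, n + 1))

def transp(a, b):                        # the transposition (a b) in S_n
    p = list(range(1, n + 1))
    p[a - 1], p[b - 1] = b, a
    return tuple(p)

# sigma (i n) = (sigma(i) n) sigma: |> permutes the 2-cycles, <| is trivial
for rest in permutations(range(1, n)):
    sigma = rest + (n,)
    for i in range(1, n):
        assert mul(sigma, transp(i, n)) == mul(transp(sigma[i - 1], n), sigma)

# (i n)(j n) = (j n)(i j): the R-part is (j n) and tau((i n),(j n)) = (i j)
for i in range(1, n):
    for j in range(1, n):
        if i != j:
            assert mul(transp(i, n), transp(j, n)) == mul(transp(j, n), transp(i, j))
```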
From Lemma~\ref{Xibialg} and Proposition~\ref{standardS}, this is spanned by products of $\delta_i$ and $\sigma\in S_{n-1}$, where $\delta_0$ is the delta function at $e\in R$ and $\delta_i$ at $(i,n)$ for $i=1,\cdots,n-1$. The algebra has the same cross relations $\sigma\delta_i=\delta_{\sigma(i)}\sigma$ for $i=1,\cdots,n-1$ as before, but now the tensor coproduct etc., and nontrivial associator \[\Delta\delta_0=\sum_{i=0}^{n-1}\delta_i\mathop{{\otimes}}\delta_i,\quad \Delta\delta_i=(1-\delta_i)\mathop{{\otimes}}\delta_i+\delta_i\mathop{{\otimes}}\delta_0,\quad \Delta \sigma=\sigma\mathop{{\otimes}}\sigma,\quad {\epsilon}\delta_i=\delta_{i,0},\quad{\epsilon}\sigma=1,\] \[ S\delta_i=\delta_{i},\quad S\sigma=\sigma^{-1},\quad \alpha=\beta=1,\] \[\phi=(1\mathop{{\otimes}}\delta_0+\delta_0\mathop{{\otimes}}(1-\delta_0)+\sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}}\delta_i)\mathop{{\otimes}} 1+ \sum_{i,j=1\atop i\ne j}^{n-1}\delta_i\mathop{{\otimes}}\delta_j\mathop{{\otimes}} (ij).\] This is a $*$-quasi Hopf algebra with the same $*$ as before but now nontrivial \[ \gamma=1,\quad \hbox{{$\mathcal G$}}=1\mathop{{\otimes}}\delta_0+\delta_0\mathop{{\otimes}}(1-\delta_0)+\sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}}\delta_i+ \sum_{i,j=1\atop i\ne j}^{n-1}\delta_i(ij)\mathop{{\otimes}}\delta_j(ij)\] from Corollary~\ref{corstar}. \medskip{\sl (iii) Twisting between the above two transversals.} We denote the first transversal $R=\mathbb{Z}_n$, where $i$ is identified with $(12\cdots n)^i$, and we denote the 2nd transversal by $\bar R$ with corresponding elements $\bar i=(i\ n)$. Then \[ c_i=(12\cdots n)^{-i}(i\ n)\in S_{n-1},\quad c_i(j)=\begin{cases} n-i & j=i\\ j-i & else \end{cases}\] for $i,j=1,\cdots,n-1$.
If we use the stated ${\triangleright}$ for the first transversal then one can check that the first half of (\ref{taucond}) holds, \[ \overline{i+c_i{\triangleright} i}=\overline{i+n-i}=e=\bar i\bar\cdot \bar i,\quad \overline{i+c_i{\triangleright} j}=\overline{i+j-i}=\bar j=\bar i\bar\cdot \bar j\] as it must. We can also check that the actions are indeed related by twisting. Thus, \[ \sigma{\triangleleft}\bar i=c_{\sigma{\triangleright} i}^{-1}(\sigma{\triangleleft} i)c_i=(\sigma(i),n)(12\cdots n)^{\sigma(i)}(\sigma{\triangleleft} i)(12\cdots n)^{-i}(i,n)=(\sigma(i),n)\sigma(i,n)=\sigma,\] \[ \sigma\bar{\triangleright} \bar i=(\sigma{\triangleright} i)c_{\sigma{\triangleright} i}=(12\cdots n)^{\sigma(i)}(12\cdots n)^{-\sigma(i)}(\sigma(i),n)=(\sigma(i),n),\] where we did the computation with $\mathbb{Z}_n$ viewed in $S_n$. It follows that the Hopf algebra from case (i) cochain twists to a simpler quasi-Hopf algebra in case (ii). The required cochain from Theorem~\ref{thmtwist} is \[ \chi=\delta_0\mathop{{\otimes}} 1+ \sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}} (12\cdots n)^{-i}(in).\] \end{example} The above example is somewhat analogous to the way that Drinfeld's $U_q(g)$, as Hopf algebras, are cochain twists of $U(g)$ viewed as a quasi-Hopf algebra. We conclude with the promised example related to the octonions. This is a version of \cite[Example~4.6]{KM2}, but with left and right swapped and some cleaned-up conventions. \begin{example}\rm We let $G=Cl_3{>\!\!\!\triangleleft} \mathbb{Z}_2^3$, where $Cl_3$ is generated by $1,-1$ and $e_{i}$, $i=1,2,3$, with relations \[ (-1)^2=1,\quad (-1)e_i=e_i(-1),\quad e_i^2=-1,\quad e_i e_j=-e_j e_i \] for $i\ne j$ and the usual combination rules for the product of signs.
Its elements can be enumerated as $\pm e_{\vec a}$ where $\vec{a}\in \mathbb{Z}_2^3$ is viewed in the additive group of 3-vectors with entries in the field $\mathbb{F}_2=\{0,1\}$ of order 2 and \[ e_{\vec a}=e_1^{a_1}e_2^{a_2}e_3^{a_3},\quad e_{\vec a} e_{\vec b}=e_{\vec a+\vec b}(-1)^{\sum_{i\ge j}a_ib_j}. \] This is the twisted group ring description of the 3-dimensional Clifford algebra over $\mathbb{R}$ in \cite{AlbMa}, but now restricted to coefficients $0,\pm1$ to give a group of order 16. For example, \[ e_{011}e_{101}=e_2e_3 e_1e_3=e_1e_2e_3^2=-e_1e_2=-e_{110}=-e_{011+101}\] with the sign given by the formula. We similarly write the elements of $K=\mathbb{Z}_2^3$ multiplicatively as $g^{\vec a}=g_1^{a_1}g_2^{a_2}g_3^{a_3}$ labelled by 3-vectors with values in $\mathbb{F}_2$. The generators $g_i$ commute and obey $g_i^2=e$. The general group product becomes the vector addition, and the cross relations are \[ (-1)g_i=g_i(-1),\quad e_i g_i= -g_i e_i,\quad e_i g_j=g_j e_i\] for $i\ne j$. This implies that $G$ has order 128. (i) If we take $R=Cl_3$ itself then this will be a subgroup and we will have for $\Xi(R,K)$ an ordinary Hopf $*$-algebra as a semidirect product $\mathbb{C} \mathbb{Z}_2^3{\triangleright\!\!\!<} \mathbb{C}(Cl_3)$ except that we build it on the opposite tensor product. (ii) Instead, we take as representatives the eight elements again labelled by 3-vectors over $\mathbb{F}_2$, \[ r_{000}=1,\quad r_{001}=e_3,\quad r_{010}=e_2,\quad r_{011}=e_2e_3g_1\] \[ r_{100}=e_1,\quad r_{101}=e_1e_3 g_2,\quad r_{110}=e_1e_2g_3,\quad r_{111}=e_1e_2e_3 g_1g_2g_3 \] and their negations, as a version of \cite[Example~4.6]{KM2}.
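The sign rule can be confirmed numerically. The following is our own sketch (not from \cite{AlbMa}), checking the defining relations, a sample product, and the 2-cocycle property that makes the twisted group ring associative:

```python
from itertools import product

def B(a, b):  # the exponent sum_{i >= j} a_i b_j
    return sum(a[i] * b[j] for i in range(3) for j in range(3) if i >= j)

def add(a, b):  # addition in Z_2^3
    return tuple((x + y) % 2 for x, y in zip(a, b))

vecs = list(product((0, 1), repeat=3))
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

# e_i^2 = -1 and e_i e_j = -e_j e_i for i != j
for i, a in enumerate(basis):
    assert (-1) ** B(a, a) == -1
    for j, b in enumerate(basis):
        if i != j:
            assert (-1) ** B(a, b) == -((-1) ** B(b, a))

# a sample product: the sign of e_{011} e_{101} is -1, with vector part 110
assert add((0, 1, 1), (1, 0, 1)) == (1, 1, 0)
assert (-1) ** B((0, 1, 1), (1, 0, 1)) == -1

# the sign is a group 2-cocycle on Z_2^3, so the twisted group ring is associative
for a, b, c in product(vecs, repeat=3):
    assert (-1) ** (B(a, b) + B(add(a, b), c)) == (-1) ** (B(b, c) + B(a, add(b, c)))
```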
This can be written compactly as \[ r_{\vec a}=e_{\vec a}g_1^{a_2 a_3}g_2^{a_1a_3}g_3^{a_1a_2}.\] \begin{proposition}\cite{KM2} This choice of transversal makes $(R,\cdot)$ the octonion two-sided inverse property quasigroup $G_{\O}$ in the Albuquerque-Majid description of the octonions\cite{AlbMa}, \[ r_{\vec a}\cdot r_{\vec b}=(-1)^{f(\vec a,\vec b)} r_{\vec a+\vec b},\quad f(\vec a,\vec b)=\sum_{i\ge j}a_ib_j+ a_1a_2b_3+ a_1b_2a_3+b_1a_2a_3 \] with the product on signed elements behaving as if bilinear. The action ${\triangleleft}$ is trivial, and the left action and cocycle $\tau$ are \[ g^{\vec a}{\triangleright} r_{\vec b}=(-1)^{\vec a\cdot \vec b}r_{\vec b},\quad \tau(r_{\vec a},r_{\vec b})=g^{\vec a\times\vec b}=g_1^{a_2 b_3+a_3 b_2}g_2^{a_3 b_1+a_1b_3} g_3^{a_1b_2+a_2b_1}\] with the action extended with signs as if linearly and $\tau$ independent of signs in either argument. \end{proposition} {\noindent {\bfseries Proof:}\quad } We check in the group \begin{align*} r_{\vec a}r_{\vec b}&=e_{\vec a}g_1^{a_2 a_3}g_2^{a_1a_3}g_3^{a_1a_2}e_{\vec b}g_1^{b_2 b_3}g_2^{b_1b_3}g_3^{b_1b_2}\\ &=e_{\vec a}e_{\vec b}(-1)^{b_1a_2a_3+b_2a_1a_3+b_3a_1a_2} g_1^{a_2a_3+b_2b_3}g_2^{a_1a_3+b_1b_3}g_3^{a_1a_2+b_1b_2}\\ &=(-1)^{f(\vec a,\vec b)}r_{\vec a+\vec b}g_1^{a_2a_3+b_2b_3-(a_2+b_2)(a_3+b_3)}g_2^{a_1a_3+b_1b_3-(a_1+b_1)(a_3+b_3)}g_3^{a_1a_2+b_1b_2-(a_1+b_1)(a_2+b_2)}\\ &=(-1)^{f(\vec a,\vec b)}r_{\vec a+\vec b}g_1^{a_2b_3+b_2a_3} g_2^{a_1b_3+b_1a_3}g_3^{a_1b_2+b_1a_2}, \end{align*} from which we read off $\cdot$ and $\tau$. For the second equality, we moved the $g_i$ to the right using the commutation rules in $G$. For the third equality we used the product in $Cl_3$ in our description above and then converted $e_{\vec a+\vec b}$ to $r_{\vec a+\vec b}$. \endproof The product of the quasigroup $G_\O$ here is the same as the octonion product as an algebra over $\mathbb{R}$ in the description of \cite{AlbMa}, restricted to elements of the form $\pm r_{\vec a}$.
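The proposition can also be confirmed by brute force over all 64 pairs of 3-vectors. The sketch below (our own check, not part of the proof) encodes elements of $G$ as triples $(s,\vec a,\vec u)$ standing for $s\,e_{\vec a}g^{\vec u}$:

```python
from itertools import product

vecs = list(product((0, 1), repeat=3))

def add(a, b): return tuple((x + y) % 2 for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b)) % 2
def B(a, b):   return sum(a[i] * b[j] for i in range(3) for j in range(3) if i >= j)

def mul(x, y):
    # (s1, a1, u1)(s2, a2, u2): move g^{u1} past e_{a2}, then multiply in Cl_3
    s1, a1, u1 = x
    s2, a2, u2 = y
    sign = s1 * s2 * (-1) ** (dot(u1, a2) + B(a1, a2))
    return (sign, add(a1, a2), add(u1, u2))

def r(a):  # r_a = e_a g_1^{a2 a3} g_2^{a1 a3} g_3^{a1 a2}
    return (1, a, (a[1] * a[2], a[0] * a[2], a[0] * a[1]))

def f(a, b):
    return (B(a, b) + a[0]*a[1]*b[2] + a[0]*b[1]*a[2] + b[0]*a[1]*a[2]) % 2

def cross(a, b):
    return ((a[1]*b[2] + a[2]*b[1]) % 2,
            (a[2]*b[0] + a[0]*b[2]) % 2,
            (a[0]*b[1] + a[1]*b[0]) % 2)

# r_a r_b = (-1)^{f(a,b)} r_{a+b} g^{a x b}, as in the proposition
for a in vecs:
    for b in vecs:
        lhs = mul(r(a), r(b))
        rhs = ((-1) ** f(a, b), add(a, b), add(r(add(a, b))[2], cross(a, b)))
        assert lhs == rhs

# g^a r_b = (-1)^{a.b} r_b g^a: the stated action ⊳, with trivial ⊲
for a in vecs:
    for b in vecs:
        assert mul((1, (0, 0, 0), a), r(b)) == ((-1) ** dot(a, b), b, add(r(b)[2], a))
```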
The cocycle-associativity property of $(R,\cdot)$ says \[ r_{\vec a}\cdot(r_{\vec b}\cdot r_{\vec c})=(r_{\vec a}\cdot r_{\vec b})\cdot\tau(\vec a,\vec b){\triangleright} r_{\vec c}=(r_{\vec a}\cdot r_{\vec b})\cdot r_{\vec c} (-1)^{(\vec a\times\vec b)\cdot\vec c},\] giving $-1$ exactly when the three vectors are linearly independent as 3-vectors over $\mathbb{F}_2$. One also has $r_{\vec a}\cdot r_{\vec b}=\pm r_{\vec b}\cdot r_{\vec a}$ with $-1$ exactly when the two vectors are linearly independent, which means both nonzero and not equal, and $r_{\vec a} \cdot r_{\vec a}=\pm1 $ with $-1$ exactly when the one vector is linearly independent, i.e. not zero. (These are exactly the quasiassociativity, quasicommutativity and norm properties of the octonion algebra in the description of \cite{AlbMa}.) The two-sided inverse is \[ r_{\vec a}^{-1}=(-1)^{n(\vec a)}r_{\vec a},\quad n(\vec 0)=0,\quad n(\vec a)=1,\quad \forall \vec a\ne \vec 0\] with the inversion operation extended as usual with respect to signs.
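These sign statements reduce to linear algebra over $\mathbb{F}_2$ and can be brute-forced; the following is our own illustrative check:

```python
from itertools import product

vecs = list(product((0, 1), repeat=3))
zero = (0, 0, 0)

def dot(a, b): return sum(x * y for x, y in zip(a, b)) % 2
def B(a, b):   return sum(a[i] * b[j] for i in range(3) for j in range(3) if i >= j)
def f(a, b):   return (B(a, b) + a[0]*a[1]*b[2] + a[0]*b[1]*a[2] + b[0]*a[1]*a[2]) % 2
def cross(a, b):
    return ((a[1]*b[2] + a[2]*b[1]) % 2,
            (a[2]*b[0] + a[0]*b[2]) % 2,
            (a[0]*b[1] + a[1]*b[0]) % 2)

def independent(*vs):
    # linear independence over F_2: the span has full size 2^len(vs)
    span = {tuple(sum(c * v[i] for c, v in zip(cs, vs)) % 2 for i in range(3))
            for cs in product((0, 1), repeat=len(vs))}
    return len(span) == 2 ** len(vs)

for a, b, c in product(vecs, repeat=3):
    # associator sign (-1)^{(a x b).c} is -1 iff a, b, c are independent
    assert (dot(cross(a, b), c) == 1) == independent(a, b, c)

for a, b in product(vecs, repeat=2):
    # r_a, r_b anticommute iff a, b independent (both nonzero and distinct)
    assert ((f(a, b) + f(b, a)) % 2 == 1) == independent(a, b)

for a in vecs:
    # r_a . r_a = -1 iff a is nonzero
    assert (f(a, a) == 1) == (a != zero)
```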
The quasi-Hopf algebra $\Xi(R,K)$ is spanned by $\delta_{(\pm,\vec a)}$ labelled by the points of $R$ and products of the $g_i$ with the relations $g^{\vec a}\delta_{(\pm, \vec b)}=\delta_{(\pm (-1)^{\vec a\cdot\vec b},\vec b)} g^{\vec a}$ and tensor coproduct etc., \[ \Delta \delta_{(\pm, \vec a)}=\sum_{(\pm', \vec b)}\delta_{(\pm' ,\vec b)}\mathop{{\otimes}}\delta_{(\pm\pm'(-1)^{n(\vec b)},\vec a+\vec b)},\quad \Delta g^{\vec a}=g^{\vec a}\mathop{{\otimes}} g^{\vec a},\quad {\epsilon}\delta_{(\pm,\vec a)}=\delta_{\vec a,0}\delta_{\pm,+},\quad {\epsilon} g^{\vec a}=1,\] \[S\delta_{(\pm,\vec a)}=\delta_{(\pm(-1)^{n(\vec a)},\vec a)},\quad S g^{\vec a}=g^{\vec a},\quad\alpha=\beta=1,\quad \phi=\sum_{(\pm, \vec a),(\pm',\vec{b})} \delta_{(\pm,\vec a)}\mathop{{\otimes}}\delta_{(\pm',\vec{b})}\mathop{{\otimes}} g^{\vec a\times\vec b}\] and from Corollary~\ref{corstar} is a $*$-quasi-Hopf algebra with $*$ the identity on $\delta_{(\pm,\vec a)},g^{\vec a}$ and \[ \gamma=1,\quad \hbox{{$\mathcal G$}}=\sum_{(\pm, \vec a),(\pm',\vec{b})} \delta_{(\pm,\vec a)}g^{\vec a\times\vec b} \mathop{{\otimes}}\delta_{(\pm',\vec{b})}g^{\vec a\times\vec b}.\] The general form here is not unlike our $S_n$ example. \end{example} \subsection{Module categories context} This section does not contain anything new beyond \cite{Os2,EGNO}, but completes the categorical picture that connects our algebra $\Xi(R,K)$ to the more general context of module categories, adapted to our notations.
Our first observation is that if $\mathop{{\otimes}}: {\hbox{{$\mathcal C$}}}\times {\hbox{{$\mathcal V$}}}\to {\hbox{{$\mathcal V$}}}$ is a left action of a monoidal category ${\hbox{{$\mathcal C$}}}$ on a category ${\hbox{{$\mathcal V$}}}$ (one says that ${\hbox{{$\mathcal V$}}}$ is a left ${\hbox{{$\mathcal C$}}}$-module) then one can check that this is the same thing as a monoidal functor $F:{\hbox{{$\mathcal C$}}}\to \mathrm{ End}({\hbox{{$\mathcal V$}}})$ where the set ${\rm End}({\hbox{{$\mathcal V$}}})$ of endofunctors can be viewed as a strict monoidal category with monoidal product the endofunctor composition $\circ$. Here ${\rm End}({\hbox{{$\mathcal V$}}})$ has monoidal unit $\mathrm{id}_{\hbox{{$\mathcal V$}}}$ and its morphisms are natural transformations between endofunctors. $F$ just sends an object $X\in {\hbox{{$\mathcal C$}}}$ to $X\mathop{{\otimes}}(\ )$ as an endofunctor of ${\hbox{{$\mathcal V$}}}$. A monoidal functor comes with natural isomorphisms $\{f_{X,Y}\}$ and these are given tautologically by \[ f_{X,Y}(V): F(X)\circ F(Y)(V)=X\mathop{{\otimes}} (Y\mathop{{\otimes}} V)\cong (X\mathop{{\otimes}} Y)\mathop{{\otimes}} V= F(X\mathop{{\otimes}} Y)(V)\] as part of the monoidal action. Conversely, if given a functor $F$, we define $X\mathop{{\otimes}} V=F(X)V$ and extend the monoidal associativity of ${\hbox{{$\mathcal C$}}}$ to mixed objects using $f_{X,Y}$ to define $X\mathop{{\otimes}} (Y\mathop{{\otimes}} V)= F(X)\circ F(Y)V{\cong} F(X\mathop{{\otimes}} Y)V= (X\mathop{{\otimes}} Y)\mathop{{\otimes}} V$. The notion of a left module category is a categorification of the bijection between an algebra action $\cdot: A \mathop{{\otimes}} V\rightarrow V$ and a representation as an algebra map $A \rightarrow {\rm End}(V)$.
There is an equally good notion of a right ${\hbox{{$\mathcal C$}}}$-module category extending $\mathop{{\otimes}}$ to ${\hbox{{$\mathcal V$}}}\times{\hbox{{$\mathcal C$}}}\to {\hbox{{$\mathcal V$}}}$. In the same way as one uses $\cdot$ for both the algebra product and the module action, it is convenient to use $\mathop{{\otimes}}$ for both in the categorified version. Similarly for the right module version. Another general observation is that if ${\hbox{{$\mathcal V$}}}$ is a ${\hbox{{$\mathcal C$}}}$-module category for a monoidal category ${\hbox{{$\mathcal C$}}}$ then ${\rm Fun}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})$, the (left exact) functors from ${\hbox{{$\mathcal V$}}}$ to itself that are compatible with the action of ${\hbox{{$\mathcal C$}}}$, is another monoidal category. This is denoted ${\hbox{{$\mathcal C$}}}^*_{{\hbox{{$\mathcal V$}}}}$ in \cite{EGNO}, but should not be confused with the dual of a monoidal functor which was one of the origins\cite{Ma:rep} of the centre $\hbox{{$\mathcal Z$}}({\hbox{{$\mathcal C$}}})$ construction as a special case. Also note that if $A\in {\hbox{{$\mathcal C$}}}$ is an algebra in the category then ${\hbox{{$\mathcal V$}}}={}_A{\hbox{{$\mathcal C$}}}$, the left modules of $A$ in the category, is a {\em right} ${\hbox{{$\mathcal C$}}}$-module category. If $V$ is an $A$-module then we define $V\mathop{{\otimes}} X$ as the tensor product in ${\hbox{{$\mathcal C$}}}$ equipped with an $A$-action from the left on the first factor. Moreover, for certain `nice' right module categories ${\hbox{{$\mathcal V$}}}$, there exists a suitable algebra $A\in {\hbox{{$\mathcal C$}}}$ such that ${\hbox{{$\mathcal V$}}}\simeq {}_A{\hbox{{$\mathcal C$}}}$, see \cite{Os2}\cite[Thm~7.10.1]{EGNO} in other conventions. 
For such module categories, ${\rm Fun}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})\simeq {}_A{\hbox{{$\mathcal C$}}}_A$, the category of $A$-$A$-bimodules in ${\hbox{{$\mathcal C$}}}$. Here, if given an $A$-$A$-bimodule $E$ in ${\hbox{{$\mathcal C$}}}$, the corresponding endofunctor is given by $E\mathop{{\otimes}}_A(\ )$, where we require ${\hbox{{$\mathcal C$}}}$ to be Abelian so that we can define $\mathop{{\otimes}}_A$. This turns $V\in {}_A{\hbox{{$\mathcal C$}}}$ into another $A$-module in ${\hbox{{$\mathcal C$}}}$ and $E\mathop{{\otimes}}_A(V\mathop{{\otimes}} X){\cong} (E\mathop{{\otimes}}_A V)\mathop{{\otimes}} X$, so the construction commutes with the right ${\hbox{{$\mathcal C$}}}$-action. Before we explain how these abstract ideas lead to ${}_K\hbox{{$\mathcal M$}}^G_K$, a more `obvious' case is the study of left module categories for ${\hbox{{$\mathcal C$}}} = {}_G\hbox{{$\mathcal M$}}$. If $K\subseteq G$ is a subgroup, we set ${\hbox{{$\mathcal V$}}} = {}_K\hbox{{$\mathcal M$}}$ for $i: K\subseteq G$. The functor ${\hbox{{$\mathcal C$}}}\to \mathrm{ End}({\hbox{{$\mathcal V$}}})$ just sends $X\in {\hbox{{$\mathcal C$}}}$ to $i^*(X)\mathop{{\otimes}}(\ )$ as a functor on ${\hbox{{$\mathcal V$}}}$, or more simply ${\hbox{{$\mathcal V$}}}$ is a left ${\hbox{{$\mathcal C$}}}$-module by $X\mathop{{\otimes}} V=i^*(X)\mathop{{\otimes}} V$. More generally\cite{Os2}\cite[Example~7.4.9]{EGNO}, one can include a cocycle $\alpha\in H^2(K,\mathbb{C}^\times)$ since we are only interested in monoidal equivalence, and this data $(K,\alpha)$ parametrises all indecomposable left ${}_G\hbox{{$\mathcal M$}}$-module categories. Moreover, here $\mathrm{ End}({\hbox{{$\mathcal V$}}})\simeq {}_K\hbox{{$\mathcal M$}}_K$, the category of $K$-bimodules, where a bimodule $E$ acts by $E\mathop{{\otimes}}_{\mathbb{C} K}(\ )$.
So the data we need for a ${}_G\hbox{{$\mathcal M$}}$-module category is a monoidal functor ${}_G\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K$. This is of potential interest but is not the construction we were looking for. Rather, we are interested in right module categories of ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, the category of $G$-graded vector spaces. It turns out that these are classified by the exact same data $(K,\alpha)$ (this is related to the fact that $\hbox{{$\mathcal M$}}^G$ and ${}_G\hbox{{$\mathcal M$}}$ have the same centre) but the construction is different. Thus, if $K\subseteq G$ is a subgroup, we consider $A=\mathbb{C} K$ regarded as an algebra in ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$ by $|x|=x$ viewed in $G$. One can also twist this by a cocycle $\alpha$, but here we stick to the trivial case. Then ${\hbox{{$\mathcal V$}}}={}_A{\hbox{{$\mathcal C$}}}={}_K\hbox{{$\mathcal M$}}^G$, the category of $G$-graded left $K$-modules, is a right ${\hbox{{$\mathcal C$}}}$-module category. Explicitly, if $X\in {\hbox{{$\mathcal C$}}}$ is a $G$-graded vector space and $V\in{\hbox{{$\mathcal V$}}}$ a $G$-graded left $K$-module then \[ V\mathop{{\otimes}} X,\quad x.(v\mathop{{\otimes}} w)=x.v\mathop{{\otimes}} w,\quad |v\mathop{{\otimes}} w|=|v||w|,\quad \forall\ v\in V,\ w\in X\] is another $G$-graded left $K$-module. Finally, by the general theory, there is an associated monoidal category \[ {\hbox{{$\mathcal C$}}}^*_{\hbox{{$\mathcal V$}}}:={\rm Fun}_{{\hbox{{$\mathcal C$}}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})\simeq {}_K\hbox{{$\mathcal M$}}^G_K\simeq {}_{\Xi(R,K)}\hbox{{$\mathcal M$}},\] which is the desired category to describe quasiparticles on boundaries in \cite{KK}.
Conversely, if ${\hbox{{$\mathcal V$}}}$ is an indecomposable right ${\hbox{{$\mathcal C$}}}$-module category for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, it is explained in \cite{Os2}\cite[Example~7.4.10]{EGNO} (in other conventions) that the set of indecomposable objects has a transitive action of $G$ and hence can be identified with $G/K$ for some subgroup $K\subseteq G$. This can be used to put the module category up to equivalence in the above form (with some cocycle $\alpha$). \section{Concluding remarks}\label{sec:rem} We have given a detailed account of the algebra behind the treatment of boundaries in the Kitaev model based on subgroups $K$ of a finite group $G$. New results include the quasi-bialgebra $\Xi(R,K)$ in full generality, a more direct derivation from the category ${}_K\hbox{{$\mathcal M$}}^G_K$ that connects to the module category point of view, a theorem that $\Xi(R,K)$ changes by a Drinfeld twist as $R$ changes, and a $*$-quasi-Hopf algebra structure that ensures nice properties for the category of representations (these form a strong bar category). On the computer science side, we edged towards how one might use these ideas in quantum computations and detect quasiparticles across ribbons where one end is on a boundary. We also gave new decomposition formulae relating representations of $D(G)$ in the bulk to those of $\Xi(R,K)$ on the boundary. Both the algebraic and the computer science aspects can be taken much further. The case treated here of trivial cocycle $\alpha$ is already complicated enough, but the ideas do extend to include nontrivial cocycles and should similarly be worked out. Whereas most of the abstract literature on such matters is at the conceptual level where we work up to categorical equivalence, we set out to give constructions more explicitly, which we believe is essential for concrete calculations and should also be relevant to the physics.
For example, much of the literature on anyons is devoted to so-called $F$-moves which express the associativity isomorphisms even though, by Mac Lane's theorem, monoidal categories are equivalent to strict ones. On the physics side, the covariance properties of ribbon operators also involve the coproduct and hence how they are realised depends on the choice of $R$. The same applies to how $*$ interacts with tensor products, which would be relevant to the unitarity properties of composite systems. Of interest, for example, should be the case of a lattice divided into two parts $A,B$ with a boundary between them and how the entropy of states in the total space relates to that in a subsystem. This is an idea of considerable interest in quantum gravity, which has certain parallels with quantum computing and could be explored concretely using the results of the paper. We also would like to expand further the concrete use of patches and lattice surgery, as we considered only the cases of boundaries with $K=\{e\}$ and $K=G$, and only a square geometry. Additionally, it would be useful to know under what conditions the model gives universal quantum computation. While there are broadly similar such ideas in the physics literature, e.g., \cite{CCW}, we believe our fully explicit treatment will help to take these forward. Further on the algebra side, the Kitaev model generalises easily to replace $G$ by a finite-dimensional semisimple Hopf algebra, with some aspects also in the non-semisimple case\cite{CowMa}. The same applies easily enough to at least a quasi-bialgebra associated to an inclusion $L\subseteq H$ of finite-dimensional Hopf algebras\cite{PS3} and to the corresponding module category picture. Ultimately here, it is the nonsemisimple case that is of interest as such Hopf algebras (e.g. of the form of reduced quantum groups $u_q(g)$) generate the categories where anyons as well as TQFT topological invariants live.
It is also known that by promoting the finite group input of the Kitaev model to a more general weak Hopf algebra, one can obtain any unitary fusion category in the role of ${\hbox{{$\mathcal C$}}}$\cite{Chang}. There remains a lot of work, therefore, to properly connect these theories to computer science and in particular to established methods for quantum circuits. A step here could be braided ZX-calculus\cite{Ma:fro}, although precisely how remains to be developed. These are some directions for further work. \section*{Data availability statement} Data sharing is not applicable to this article as no new data were created or analysed in this study. \input{appendix} \end{document} \section{Boundary ribbon operators with $\Xi(R,K)^\star$}\label{app:ribbon_ops} \begin{definition}\rm\label{def:Y_ribbon} Let $\xi$ be a ribbon, $r \in R$ and $k \in K$. Then $Y^{r \otimes \delta_k}_{\xi}$ acts on a direct triangle $\tau$ as \[\tikzfig{Y_action_direct},\] and on a dual triangle $\tau^*$ as \[\tikzfig{Y_action_dual}.\] Concatenation of ribbons is given by \[Y^{r \otimes \delta_k}_{\xi'\circ\xi} = Y^{(r \otimes \delta_k)_2}_{\xi'}\circ Y^{(r \otimes \delta_k)_1}_{\xi} = \sum_{x\in K} Y^{(x^{-1}\rightharpoonup r) \otimes \delta_{x^{-1}k}}_{\xi'}\circ Y^{r\otimes\delta_x}_{\xi},\] where we see the comultiplication $\Delta(r \otimes \delta_k)$ of $\Xi(R,K)^*$. Here, $\Xi(R,K)^*$ is a coquasi-Hopf algebra, and so has coassociative comultiplication (it is the multiplication which is only quasi-associative). Therefore, we can concatenate the triangles making up the ribbon in any order, and the concatenation above uniquely defines $Y^{r\otimes\delta_k}_{\xi}$ for any ribbon $\xi$. \end{definition} Let $s_0 = (v_0,p_0)$ and $s_1 = (v_1,p_1)$ be the sites at the start and end of a triangle. 
The direct triangle operators satisfy \[k'{\triangleright}_{v_0}\circ Y^{r\otimes \delta_k}_{\tau} =Y^{r\otimes \delta_{k'k}}_{\tau}\circ k'{\triangleright}_{v_0},\quad k'{\triangleright}_{v_1}\circ Y^{r\otimes\delta_k}_\tau = Y^{r\otimes\delta_{k'k^{-1}}}_\tau\circ k'{\triangleright}_{v_1}\] and \[[\delta_{r'}{\triangleright}_{s_i},Y^{r\otimes\delta_k}_{\tau}]= 0\] for $i\in \{0,1\}$. For the dual triangle operators, we have \[k'{\triangleright}_{v_i}\circ \sum_k Y^{r\otimes\delta_k}_{\tau^*} = \sum_k Y^{(k'{\triangleright} r)\otimes\delta_k}_{\tau^*}\circ k'{\triangleright}_{v_i},\] again for $i\in \{0,1\}$. However, there do not appear to be similar commutation relations between the actions of $\mathbb{C}(R)$ at faces and the dual triangle operators. In addition, in the bulk, one can reconstruct the vertex and face actions using suitable ribbons \cite{Bom,CowMa} because of the duality between $\mathbb{C}(G)$ and $\mathbb{C} G$; this is not true in general for $\mathbb{C}(R)$ and $\mathbb{C} K$. \begin{example}\label{ex:Yrib}\rm Given the ribbon $\xi$ on the lattice below, we see that $Y^{r\otimes \delta_k}_{\xi}$ acts only along the ribbon and trivially elsewhere. We have \[\tikzfig{Y_action_ribbon}\] if $g^2,g^4,g^6(g^7)^{-1}\in K$, and $0$ otherwise, and \begin{align*} &y^1 = (rx^1)^{-1}\\ &y^2 = ((g^2)^{-1}rx^2)^{-1}\\ &y^3 = ((g^2g^4)^{-1}rx^3)^{-1}\\ &y^4 = ((g^2g^4g^6(g^7)^{-1})^{-1}rx^3)^{-1}. \end{align*} One can check this using Definition~\ref{def:Y_ribbon}. \end{example} It is claimed in \cite{CCW} that these ribbon operators obey similar equivariance properties with the site actions of $\Xi(R,K)$ as the bulk ribbon operators, but we could not reproduce these properties.
Precisely, we find that when such ribbons are `open' in the sense of \cite{Kit, Bom, CowMa} then an intermediate site $s_2$ on a ribbon $\xi$ between the endpoints $s_0,s_1$ does \textit{not} satisfy \[\Lambda_{\mathbb{C} K}{\triangleright}_{s_2}\circ Y^{r\otimes \delta_k}_{\xi} = Y^{r\otimes \delta_k}_{\xi}\circ \Lambda_{\mathbb{C} K}{\triangleright}_{s_2}\] in general, nor the corresponding relation for $\Lambda_{\mathbb{C}(R)}{\triangleright}_{s_2}$. \section{Measurements and nonabelian lattice surgery}\label{app:measurements} In Section~\ref{sec:surgery}, we described nonabelian lattice surgery for a general underlying group algebra $\mathbb{C} G$, but for simplicity of exposition we assumed that the projectors $A(v)$ and $B(p)$ could be applied deterministically. In practice, we can only make a measurement, which will only sometimes yield the desired projectors. As the splits are easier, we discuss how to handle these first, beginning with the rough split. We demonstrate on the same example as previously: \[\tikzfig{rough_split_calc}\] \[\tikzfig{rough_split_calc2}\] where we have measured the edge to be deleted in the $\mathbb{C} G$ basis. The measurement outcome $n$ informs which corrections to make. The last arrow indicates corrections made using ribbon operators. These corrections are all unitary, and if the measurement outcome is $e$ then no corrections are required at all. The generalisation to larger patches is straightforward, but requires keeping track of multiple different outcomes. Next, we discuss how to handle the smooth split. In this case, we measure the edges to be deleted in the Fourier basis, that is, we measure the self-adjoint operator $\sum_{\pi} p_{\pi} P_{\pi}{\triangleright}$ at a particular edge, where \[P_{\pi} := P_{e,\pi} = {{\rm dim}(W_\pi)\over |G|}\sum_{g\in G} {\rm Tr}_\pi(g^{-1}) g\] from Section~\ref{sec:lattice} acts by the left regular representation.
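As a concrete sanity check (our own illustration, not in the text), one can verify for $G=S_3$ that the $P_\pi$ are orthogonal idempotents in $\mathbb{C} G$ summing to the identity $\delta_e$ of the group algebra:

```python
from itertools import permutations

G = list(permutations(range(3)))                   # the symmetric group S_3
e = (0, 1, 2)
def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p):    return tuple(sorted(range(3), key=lambda i: p[i]))

def chi(irrep, g):
    # character table of S_3, with conjugacy class read off from fixed points
    fixed = sum(g[i] == i for i in range(3))
    if irrep == 'triv': return 1
    if irrep == 'sign': return -1 if fixed == 1 else 1   # transpositions are odd
    return {3: 2, 1: 0, 0: -1}[fixed]                    # 2-dim standard irrep

dims = {'triv': 1, 'sign': 1, 'std': 2}
# P_pi = (dim W_pi / |G|) sum_g Tr_pi(g^{-1}) g, stored as coefficient dicts
P = {p: {g: dims[p] * chi(p, inv(g)) / len(G) for g in G} for p in dims}

def conv(x, y):  # the product of CG on coefficient functions
    out = {g: 0.0 for g in G}
    for g in G:
        for h in G:
            out[mul(g, h)] += x[g] * y[h]
    return out

close = lambda x, y: all(abs(x[g] - y[g]) < 1e-9 for g in G)
# sum_pi P_pi = identity of CG, and P_pi P_pi' = delta_{pi,pi'} P_pi
assert close({g: sum(P[p][g] for p in P) for g in G}, {g: float(g == e) for g in G})
for p in P:
    for q in P:
        assert close(conv(P[p], P[q]), P[p] if p == q else {g: 0.0 for g in G})
```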
Thus, for a smooth split, we have the initial state $|e\>_L$: \[\tikzfig{smooth_split_calc1}\] \[\tikzfig{smooth_split_calc2}\] \[\tikzfig{smooth_split_calc3}\] and afterwards we still have coefficients from the irreps of $\mathbb{C} G$. In the case when $\pi = 1$, we are done. Otherwise, we have detected quasiparticles of type $(e,\pi)$ and $(e,\pi')$ at two vertices. In this case, we appeal to e.g. \cite{BKKK, Cirac}, which claim that one can modify these quasiparticles deterministically using ribbon operators and quantum circuitry. The procedure should be similar to initialising a fresh patch in the zero logical state, but we do not give any details ourselves. Then we have the desired result. For merges, we start with a smooth merge, as again all outcomes are in the group basis. Recall that after generating fresh copies of $\mathbb{C} G$ in the states $\sum_{m\in G} m$, we have \[\tikzfig{smooth_merge_project}\] We then measure at sites which include the top and bottom faces, giving: \[\tikzfig{smooth_merge_measure_1}\] for some conjugacy classes ${\hbox{{$\mathcal C$}}}, {\hbox{{$\mathcal C$}}}'$. There are no factors of $\pi$ as the edges around each vertex already satisfy $A(v)|\psi\> = |\psi\>$. When ${\hbox{{$\mathcal C$}}} = {\hbox{{$\mathcal C$}}}' = \{e\}$, we may proceed, but otherwise we require a way of deterministically eliminating the quasiparticles detected at the top and bottom faces. Appealing to e.g. \cite{BKKK, Cirac} as earlier, we assume that this may be done, but do not give details. Alternatively, one could try to `switch reference frames' in the manner of Pauli frames with qubit codes \cite{HFDM}, and redefine the Hamiltonian.
The former method gives \[\tikzfig{smooth_merge_measure_2}\] Lastly, we measure the inner face, yielding \[\tikzfig{smooth_merge_measure_3}\] so $|j\>_L\otimes |k\>_L \mapsto \sum_{s\in {\hbox{{$\mathcal C$}}}''} \delta_{js,k} |js\>_L$, which is a direct generalisation of the result for when $G = \mathbb{Z}_n$ in \cite{Cow2}; we now sum over the conjugacy class ${\hbox{{$\mathcal C$}}}''$, whereas in the $\mathbb{Z}_n$ case all conjugacy classes are singletons. The rough merge works similarly, where instead of having quasiparticles of type $({\hbox{{$\mathcal C$}}},1)$ appearing at faces, we have quasiparticles of type $(e,\pi)$ at vertices. \section{Introduction} The Kitaev model is defined for a finite group $G$ \cite{Kit} with quasiparticles given by representations of the quantum double $D(G)$, and their dynamics described by intertwiners. In quantum computing, the quasiparticles correspond to measurement outcomes at sites on a lattice, and their dynamics correspond to linear maps on the data, with the aim of performing fault-tolerant quantum computation. The lattice can be any ciliated ribbon graph embedded on a surface \cite{Meu}, although throughout we will assume a square lattice on the plane for convenience. The Kitaev model generalises to replace $G$ by a finite-dimensional semisimple Hopf algebra, with some aspects working for a general finite-dimensional Hopf algebra. We refer to \cite{CowMa} for details of the relevant algebraic aspects of this theory, which applies in the bulk of the Kitaev model. We now extend this work with a study of the algebraic structure that underlies an approach to the treatment of boundaries. The treatment of boundaries here originates in a more categorical point of view.
In the original Kitaev model the relevant category that defines the `topological order' in condensed matter terms\cite{LK} is the category ${}_{D(G)}\mathcal{M}$ of $D(G)$-modules, which one can think of as an instance of the `dual' or `centre' $\hbox{{$\mathcal Z$}}({\hbox{{$\mathcal C$}}})$ construction\cite{Ma:rep}, where ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$ is the category of $G$-graded vector spaces. Levin-Wen `string-net' models \cite{LW} are a sort of generalisation of Kitaev models specified now by a unitary fusion category $\mathcal{C}$ with topological order $\hbox{{$\mathcal Z$}}(\mathcal{C})$, meaning that at every site on the lattice one has an object in $\hbox{{$\mathcal Z$}}(\mathcal{C})$, and now on a trivalent lattice. Computations correspond to morphisms in the same category. A so-called gapped boundary condition of a string-net model preserves a finite energy gap between the vacuum and the lowest excited state(s), which is independent of system size. Such boundary conditions are defined by module categories of the fusion category ${\hbox{{$\mathcal C$}}}$. By definition, a (right) ${\hbox{{$\mathcal C$}}}$-module means\cite{Os,KK} a category ${\hbox{{$\mathcal V$}}}$ equipped with a bifunctor ${\hbox{{$\mathcal V$}}} \times {\hbox{{$\mathcal C$}}} \rightarrow {\hbox{{$\mathcal V$}}}$ obeying coherence equations which are a polarised version of the properties of $\mathop{{\otimes}}: {\hbox{{$\mathcal C$}}}\times{\hbox{{$\mathcal C$}}}\to {\hbox{{$\mathcal C$}}}$ (in the same way that a right module of an algebra obeys a polarised version of the axioms for the product). One can also see a string-net model as a discretised quantum field theory \cite{Kir2, Meu}, and indeed boundaries of a conformal field theory can also be similarly defined by module categories \cite{FS}. 
For our purposes, we care about \textit{indecomposable} module categories, that is, module categories which are not equivalent to a direct sum of other module categories. Excitations on the boundary with condition $\mathcal{V}$ are then given by functors $F \in \mathrm{End}_{\hbox{{$\mathcal C$}}}(\mathcal{V})$ that commute with the ${\hbox{{$\mathcal C$}}}$ action\cite{KK}, beyond the vacuum state which is the identity functor $\mathrm{id}_{\mathcal{V}}$. More than just the boundary conditions above, we care about these excitations, and so $\mathrm{End}_{\hbox{{$\mathcal C$}}}(\mathcal{V})$ is the category of interest. The Kitaev model is not exactly a string-net model (the lattice in our case will not even be trivalent) but closely related. In particular, it can be shown that indecomposable module categories for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, the category of $G$-graded vector spaces, are\cite{Os2} classified by subgroups $K\subseteq G$ and cocycles $\alpha\in H^2(K,\mathbb{C}^\times)$. We will stick to the trivial $\alpha$ case here, and the upshot is that the boundary conditions in the regular Kitaev model should be given by ${\hbox{{$\mathcal V$}}}={}_K\hbox{{$\mathcal M$}}^G$, the category of $G$-graded $K$-modules where $x\in K$ itself has grade $|x|=x\in G$. Then the excitations are governed by objects of $\mathrm{End}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}}) \simeq {}_K\hbox{{$\mathcal M$}}_K^G$, the category of $G$-graded bimodules over $K$. This is necessarily equivalent, by Tannaka-Krein reconstruction\cite{Ma:tan}, to the category of modules ${}_{\Xi(R,K)}\mathcal{M}$ of a certain quasi-Hopf algebra $\Xi(R,K)$. Here $R\subseteq G$ is a choice of transversal so that every element of $G$ factorises uniquely as $RK$, but the algebra of $\Xi(R,K)$ depends only on the choice of subgroup $K$ and not on the transversal $R$. This is the algebra which we use to define measurement protocols on the boundaries of the Kitaev model.
One also has that $\hbox{{$\mathcal Z$}}({}_\Xi\hbox{{$\mathcal M$}})\simeq\hbox{{$\mathcal Z$}}(\hbox{{$\mathcal M$}}^G)\simeq{}_{D(G)}\hbox{{$\mathcal M$}}$ as braided monoidal categories. Categorical aspects will be deferred to Section~\ref{sec:cat_just}, our main focus prior to that being on a full understanding of the algebra $\Xi$, its properties and aspects of the physics. In fact, lattice boundaries of Kitaev models based on subgroups have been defined and characterised previously, see \cite{BSW, Bom}, with \cite{CCW} giving an overview for computational purposes, and we build on these works. We begin in Section~\ref{sec:bulk} with a recap of the algebras and actions involved in the bulk of the lattice model, then in Section~\ref{sec:gap} we accommodate the boundary conditions in a manner which works with features important for quantum computation, such as sites, quasiparticle projectors and ribbon operators. These sections mostly cover well-trodden ground, although we correct errors and clarify some algebraic subtleties which appear to have gone unnoticed in previous works. In particular, we obtain formulae for the decomposition of bulk irreducible representations of $D(G)$ into $\Xi$-representations which we believe to be new. Key to our results here is an observation that in fact $\Xi(R,K)\subseteq D(G)$ as algebras, which gives a much more direct route than previously to an adjunction between $\Xi(R,K)$-modules and $D(G)$-modules describing how excitations pass between the bulk and boundary. This is important for the physical picture\cite{CCW} and previously was attributed to an adjunction between ${}_{D(G)}\hbox{{$\mathcal M$}}$ and ${}_K\hbox{{$\mathcal M$}}_K^G$ in \cite{PS2}. In Section~\ref{sec:patches}, as an application of our explicit description of boundaries, we generalise the quantum computational model called \textit{lattice surgery} \cite{HFDM,Cow2} to the nonabelian group case. 
We find that for every finite group $G$ one can simulate the group algebra $\mathbb{C} G$ and its dual $\mathbb{C}(G)$ on a lattice patch with `rough' and `smooth' boundaries. This is an alternative model of fault-tolerant computation to the well-known method of braiding anyons or defects \cite{Kit,FMMC}, although we do not know whether there are choices of group such that lattice surgery is natively universal without state distillation. In Section~\ref{sec:quasi}, we look at $\Xi(R,K)$ as a quasi-Hopf algebra in somewhat more detail than we have found elsewhere. As well as the quasi-bialgebra structure, we provide and verify the antipode for any choice of transversal $R$ for which right-inversion is bijective. This case is in line with \cite{Nat}, but we will also consider antipodes more generally. We then show that an obvious $*$-algebra structure on $\Xi$ meets all the axioms of a strong $*$-quasi-Hopf algebra in the sense of \cite{BegMa:bar} coming out of the theory of bar categories. The key ingredient here is a somewhat nontrivial map that relates the complex conjugate of the $\Xi$-module $V\mathop{{\otimes}} W$ to those of $W$ and $V$. We also give an extended series of examples, including one related to the octonions. Lastly, in Section~\ref{sec:cat_just}, we connect the algebraic notions up to the abstract description of boundary conditions via module categories and use this to obtain more results about $\Xi(R,K)$. We first calculate the relevant categorical equivalence ${}_K\hbox{{$\mathcal M$}}_K^G \simeq {}_{\Xi(R,K)}\mathcal{M}$ concretely, deriving the quasi-bialgebra structure of $\Xi(R,K)$ precisely such that this works. Since the left hand side is independent of $R$, we deduce by Tannaka-Krein arguments that changing $R$ changes $\Xi(R,K)$ by a Drinfeld cochain twist and we find this cochain as a main result of the section.
This is important as Drinfeld twists do not change the category of modules up to equivalence, so such aspects of the physics do not depend on $R$. Twisting arguments then imply that we have an antipode more generally for any $R$. We also look at ${\hbox{{$\mathcal V$}}} = {}_K\hbox{{$\mathcal M$}}^G$ as a module category for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$. Section~\ref{sec:rem} provides some concluding remarks relating to generalisations of the boundaries to models based on other Hopf algebras \cite{BMCA}. \subsection*{Acknowledgements} The first author thanks Stefano Gogioso for useful discussions regarding nonabelian lattice surgery as a model for computation. Thanks also to Paddy Gray \& Kathryn Pennel for their hospitality while some of this paper was written and to Simon Harrison for the Wolfson Harrison UK Research Council Quantum Foundation Scholarship, which made this work possible. The second author was on sabbatical at Cambridge Quantum Computing and we thank members of the team there. \section{Preliminaries: recap of the Kitaev model in the bulk}\label{sec:bulk} We begin with the model in the bulk. This is largely a recap of e.g. \cite{Kit, CowMa}. \subsection{Quantum double}\label{sec:double}Let $G$ be a finite group with identity $e$, then $\mathbb{C} G$ is the group Hopf algebra with basis $G$. Multiplication is extended linearly, and $\mathbb{C} G$ has comultiplication $\Delta h = h \otimes h$ and counit ${\epsilon} h = 1$ on basis elements $h\in G$. The antipode is given by $Sh = h^{-1}$. $\mathbb{C} G$ is a Hopf $*$-algebra with $h^* = h^{-1}$ extended antilinearly. Its dual Hopf algebra $\mathbb{C}(G)$ of functions on $G$ has basis of $\delta$-functions $\{\delta_g\}$ with $\Delta\delta_g=\sum_h \delta_h\mathop{{\otimes}}\delta_{h^{-1}g}$, ${\epsilon} \delta_g=\delta_{g,e}$ and $S\delta_g=\delta_{g^{-1}}$ for the Hopf algebra structure, and $\delta_g^* = \delta_{g}$ for all $g\in G$.
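As a quick sanity check of these formulas, the following short Python sketch (our own illustration, not part of the formal development) encodes $S_3$ as permutation tuples and verifies that $\Delta\delta_g$ is coassociative and that $S\delta_g=\delta_{g^{-1}}$ satisfies the antipode axiom $\sum_h(S\delta_h)\delta_{h^{-1}g}={\epsilon}(\delta_g)1$:

```python
from itertools import permutations

def mul(a, b):                      # composition of permutations: (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    r = [0, 0, 0]
    for i, j in enumerate(a):
        r[j] = i
    return tuple(r)

G = list(permutations(range(3)))    # the six elements of S3
e = (0, 1, 2)

def Delta(g):                       # coproduct of delta_g, as the set of pairs (h, h^{-1}g)
    return {(h, mul(inv(h), g)) for h in G}

for g in G:
    # coassociativity: (Delta x id)Delta(delta_g) = (id x Delta)Delta(delta_g)
    lhs = {(k, x, y) for (h, y) in Delta(g) for (k, x) in Delta(h)}
    rhs = {(h, m, y) for (h, x) in Delta(g) for (m, y) in Delta(x)}
    assert lhs == rhs
    # antipode axiom: sum_h (S delta_h) delta_{h^{-1}g} = eps(delta_g) 1
    acc = {}
    for h in G:
        k = mul(inv(h), g)
        if inv(h) == k:             # delta_{h^{-1}} delta_k = 0 unless h^{-1} = k
            acc[inv(h)] = acc.get(inv(h), 0) + 1
    assert acc == ({f: 1 for f in G} if g == e else {})

print("coassociativity and antipode axiom hold for C(S3)")
```

All coefficients here are 0 or 1, so sets of basis labels suffice to represent elements and tensors.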
The normalised integral elements \textit{in} $\mathbb{C} G$ and $\mathbb{C}(G)$ are \[ \Lambda_{\mathbb{C} G}={1\over |G|}\sum_{h\in G} h\in \mathbb{C} G,\quad \Lambda_{\mathbb{C}(G)}=\delta_e\in \mathbb{C}(G).\] The integrals \textit{on} $\mathbb{C} G$ and $\mathbb{C}(G)$ are \[ \int h = \delta_{h,e}, \quad \int \delta_g = 1\] normalised so that $\int 1 = 1$ for $\mathbb{C} G$ and $\int 1 = |G|$ for $\mathbb{C}(G)$. For the Drinfeld double we have $D(G)=\mathbb{C}(G){>\!\!\!\triangleleft} \mathbb{C} G$ as in \cite{Ma:book}, with $\mathbb{C} G$ and $\mathbb{C}(G)$ sub-Hopf algebras and the cross relations $ h\delta_g =\delta_{hgh^{-1}} h$ (a semidirect product). The Hopf algebra antipode is $S(\delta_gh)=\delta_{h^{-1}g^{-1}h} h^{-1}$, and over $\mathbb{C}$ we have a Hopf $*$-algebra with $(\delta_g h)^* = \delta_{h^{-1}gh} h^{-1}$. There is also a quasitriangular structure which in subalgebra notation is \begin{equation}\label{RDG} \hbox{{$\mathcal R$}}=\sum_{h\in G} \delta_h\mathop{{\otimes}} h\in D(G) \otimes D(G).\end{equation} If we want to be totally explicit we can build $D(G)$ on either the vector space $\mathbb{C}(G)\mathop{{\otimes}} \mathbb{C} G$ or on the vector space $\mathbb{C} G\mathop{{\otimes}}\mathbb{C}(G)$. In fact the latter is more natural but we follow the conventions in \cite{Ma:book,CowMa} and use the former. Then one can say the above more explicitly as \[(\delta_g\mathop{{\otimes}} h)(\delta_f\mathop{{\otimes}} k)=\delta_g\delta_{hfh^{-1}}\mathop{{\otimes}} hk=\delta_{g,hfh^{-1}}\delta_g\mathop{{\otimes}} hk,\quad S(\delta_g\mathop{{\otimes}} h)=\delta_{h^{-1}g^{-1}h} \mathop{{\otimes}} h^{-1}\] etc. for the operations on the underlying vector space. As a semidirect product, irreducible representations of $D(G)$ are given by standard theory as labelled by pairs $({\hbox{{$\mathcal C$}}},\pi)$ consisting of an orbit under the action (i.e. 
by a conjugacy class ${\hbox{{$\mathcal C$}}}\subset G$ in this case) and an irrep $\pi$ of the isotropy subgroup, in our case \[ G^{c_0}=\{n\in G\ |\ nc_0 n^{-1}=c_0\}\] of a fixed element $c_0\in{\hbox{{$\mathcal C$}}}$, i.e. the centraliser $C_G(c_0)$. The choice of $c_0$ does not change the isotropy group up to isomorphism but does change how it sits inside $G$. We also fix data $q_c\in G$ for each $c\in {\hbox{{$\mathcal C$}}}$ such that $c=q_cc_0q_c^{-1}$ with $q_{c_0}=e$ and define from this a cocycle $\zeta_c(h)=q^{-1}_{hch^{-1}}hq_c$ as a map $\zeta: {\hbox{{$\mathcal C$}}}\times G\to G^{c_0}$. The associated irreducible representation is then \[ W_{{\hbox{{$\mathcal C$}}},\pi}=\mathbb{C} {\hbox{{$\mathcal C$}}}\mathop{{\otimes}} W_\pi,\quad \delta_g.(c\mathop{{\otimes}} w)=\delta_{g,c}c\mathop{{\otimes}} w,\quad h.(c\mathop{{\otimes}} w)=hch^{-1}\mathop{{\otimes}} \zeta_c(h).w \] for all $w\in W_\pi$, the carrier space of $\pi$. This constructs all irreps of $D(G)$ and, over $\mathbb{C}$, these are unitary in a Hopf $*$-algebra sense if $\pi$ is unitary. Moreover, $D(G)$ is semisimple and hence has a block decomposition $D(G){\cong}\oplus_{{\hbox{{$\mathcal C$}}},\pi} \mathrm{ End}(W_{{\hbox{{$\mathcal C$}}},\pi})$ given by a complete orthogonal set of self-adjoint central idempotents \begin{equation}\label{Dproj}P_{({\hbox{{$\mathcal C$}}},\pi)}={{\rm dim}(W_\pi)\over |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}}\sum_{n\in G^{c_0}}\mathrm{ Tr}_\pi(n^{-1})\delta_{c}\mathop{{\otimes}} q_c nq_c^{-1}.\end{equation} We refer to \cite{CowMa} for more details and proofs. Acting on a state, this will become a projection operator that determines if a quasiparticle of type ${\hbox{{$\mathcal C$}}},\pi$ is present. Chargeons are quasiparticles with ${\hbox{{$\mathcal C$}}}=\{e\}$ and $\pi$ an irrep of $G$, and fluxions are quasiparticles with ${\hbox{{$\mathcal C$}}}$ a conjugacy class and $\pi=1$, the trivial representation. 
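The explicit product formula can likewise be machine-checked. The following Python sketch (ours, again encoding $S_3$ as permutation tuples) verifies associativity of the product on all basis triples and the cross relations $h\delta_g=\delta_{hgh^{-1}}h$ in subalgebra notation:

```python
from itertools import permutations, product

def mul(a, b):                      # composition of permutations
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    r = [0, 0, 0]
    for i, j in enumerate(a):
        r[j] = i
    return tuple(r)

G = list(permutations(range(3)))
e = (0, 1, 2)

def basis_mul(g, h, f, k):
    # (delta_g x h)(delta_f x k) = delta_{g,hfh^{-1}} delta_g x hk, or zero (None)
    return (g, mul(h, k)) if g == mul(h, mul(f, inv(h))) else None

def amul(x, y):                     # multiply possibly-zero basis elements
    return None if x is None or y is None else basis_mul(*x, *y)

# associativity on all 36^3 basis triples of D(S3)
for a, b, c in product(product(G, G), repeat=3):
    assert amul(amul(a, b), c) == amul(a, amul(b, c))

def elem_mul(A, B):                 # product of general elements {(g,h): coeff}
    C = {}
    for (g, h), ca in A.items():
        for (f, k), cb in B.items():
            r = basis_mul(g, h, f, k)
            if r is not None:
                C[r] = C.get(r, 0) + ca * cb
    return {key: v for key, v in C.items() if v}

# cross relation h delta_g = delta_{hgh^{-1}} h, with h = sum_f delta_f x h
for h, g in product(G, G):
    lhs = elem_mul({(f, h): 1 for f in G}, {(g, e): 1})
    conj = mul(h, mul(g, inv(h)))
    rhs = elem_mul({(conj, e): 1}, {(f, h): 1 for f in G})
    assert lhs == rhs == {(conj, h): 1}

print("D(S3): associativity and cross relations verified")
```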
\subsection{Bulk lattice model}\label{sec:lattice} Having established the prerequisite algebra, we move on to the lattice model itself. This first part is largely a recap of \cite{Kit, CowMa} and we use the notations of the latter. Let $\Sigma = \Sigma(V, E, P)$ be a square lattice viewed as a directed graph with its usual (cartesian) orientation, vertices $V$, directed edges $E$ and faces $P$. The Hilbert space $\hbox{{$\mathcal H$}}$ will be a tensor product of vector spaces with one copy of $\mathbb{C} G$ at each arrow in $E$. We take the group elements as a basis of each copy. Next, to each adjacent pair consisting of a vertex $v$ and a face $p$ we associate a site $s = (v, p)$, or equivalently a line (the `cilium') from $p$ to $v$. We then define an action of $\mathbb{C} G$ and $\mathbb{C}(G)$ at each site by \[ \includegraphics[scale=0.7]{Gaction.pdf}\] Here $h\in \mathbb{C} G$, $a\in \mathbb{C}(G)$ and $g^1,\cdots,g^4$ denote independent elements of $G$ (not powers). Observe that the vertex action does not depend on the location of $p$ relative to its adjacent $v$, so the red dashed line has been omitted. \begin{lemma}\label{lemDGrep} \cite{Kit,CowMa} $h{\triangleright}$ and $a{\triangleright}$ for all $h\in G$ and $a\in \mathbb{C}(G)$ define a representation of $D(G)$ on $\hbox{{$\mathcal H$}}$ associated to each site $(v,p)$. \end{lemma} We next define \[ A(v):=\Lambda_{\mathbb{C} G}{\triangleright}={1\over |G|}\sum_{h\in G}h{\triangleright},\quad B(p):=\Lambda_{\mathbb{C}(G)}{\triangleright}=\delta_e{\triangleright}\] where $\delta_{e}(g^1g^2g^3g^4)=1$ iff $g^1g^2g^3g^4=e$, which is iff $(g^4)^{-1}=g^1g^2g^3$, which is iff $g^4g^1g^2g^3=e$. Hence $\delta_{e}(g^1g^2g^3g^4)=\delta_{e}(g^4g^1g^2g^3)$ is invariant under cyclic rotations, hence $\Lambda_{\mathbb{C}(G)}{\triangleright}$ computed at site $(v,p)$ does not depend on the location of $v$ on the boundary of $p$.
Moreover, \[ A(v)B(p)=|G|^{-1}\sum_h h\delta_e{\triangleright}=|G|^{-1}\sum_h \delta_{heh^{-1}}h{\triangleright}=|G|^{-1}\sum_h \delta_{e}h{\triangleright}=B(p)A(v)\] if $v$ is a vertex on the boundary of $p$ by Lemma~\ref{lemDGrep}, and more trivially if not. We also have the remaining relations \[ A(v)^2=A(v),\quad B(p)^2=B(p),\quad [A(v),A(v')]=[B(p),B(p')]=[A(v),B(p)]=0\] for all $v\ne v'$ and $p\ne p'$, as is easily checked. We then define the Hamiltonian \[ H=\sum_v (1-A(v)) + \sum_p (1-B(p))\] and the space of vacuum states \[ \hbox{{$\mathcal H$}}_{\rm vac}=\{|\psi\>\in\hbox{{$\mathcal H$}}\ |\ A(v)|\psi\>=B(p)|\psi\>=|\psi\>,\quad \forall v,p\}.\] Quasiparticles in Kitaev models are labelled by representations of $D(G)$ occupying a given site $(v,p)$, which take the system out of the vacuum. Detection of a quasiparticle is via a {\em projective measurement} of the operator $\sum_{{\hbox{{$\mathcal C$}}}, \pi} p_{{\hbox{{$\mathcal C$}}},\pi} P_{\mathcal{C}, \pi}$ acting at each site on the lattice for distinct coefficients $p_{{\hbox{{$\mathcal C$}}},\pi} \in \mathbb{R}$. By definition, this is a process which yields the classical value $p_{{\hbox{{$\mathcal C$}}},\pi}$ with a probability given by the likelihood of the state prior to the measurement being in the subspace in the image of $P_{\mathcal{C},\pi}$, and in so doing performs the corresponding action of the projector $P_{\mathcal{C}, \pi}$ at the site. The projector $P_{e,1}$ corresponds to the vacuum quasiparticle. In computing terms, this system of measurements encodes a logical Hilbert subspace, which we will always take to be the vacuum space $\hbox{{$\mathcal H$}}_{\rm vac}$, within the larger physical Hilbert space given by the lattice; this subspace is dependent on the topology of the surface that the lattice is embedded in, but not the size of the lattice.
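The displayed computation for $A(v)B(p)=B(p)A(v)$ at a common site reduces to a statement about the integral elements inside $D(G)$ itself, which can be checked exactly. A Python sketch (our illustration, for $G=S_3$ with exact rational coefficients) verifying that $\Lambda_{\mathbb{C} G}$ and $\Lambda_{\mathbb{C}(G)}$ give commuting idempotents:

```python
from fractions import Fraction
from itertools import permutations

def mul(a, b):                      # composition of permutations
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    r = [0, 0, 0]
    for i, j in enumerate(a):
        r[j] = i
    return tuple(r)

G = list(permutations(range(3)))
e = (0, 1, 2)

def basis_mul(g, h, f, k):          # D(G) product on basis elements, or None
    return (g, mul(h, k)) if g == mul(h, mul(f, inv(h))) else None

def elem_mul(A, B):                 # product of elements {(g,h): coeff}
    C = {}
    for (g, h), ca in A.items():
        for (f, k), cb in B.items():
            r = basis_mul(g, h, f, k)
            if r is not None:
                C[r] = C.get(r, 0) + ca * cb
    return {key: v for key, v in C.items() if v}

# integral elements inside D(S3), in subalgebra notation:
Lam_CG = {(f, h): Fraction(1, 6) for f in G for h in G}   # (1/|G|) sum_h h
Lam_CGdual = {(e, e): Fraction(1)}                        # delta_e

assert elem_mul(Lam_CG, Lam_CG) == Lam_CG                 # mirrors A(v)^2 = A(v)
assert elem_mul(Lam_CGdual, Lam_CGdual) == Lam_CGdual     # mirrors B(p)^2 = B(p)
assert elem_mul(Lam_CG, Lam_CGdual) == elem_mul(Lam_CGdual, Lam_CG)   # [A, B] = 0
print("integral elements are commuting idempotents in D(S3)")
```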
For example, there is a convenient closed-form expression for the dimension of $\hbox{{$\mathcal H$}}_{\rm vac}$ when $\Sigma$ occupies a closed, orientable surface \cite{Cui}. Computation can then be performed on states in the logical subspace in a fault-tolerant manner, with unwanted excitations constituting detectable errors. In the interest of brevity, we forgo a detailed exposition of such measurements, ribbon operators and fault-tolerant quantum computation on the lattice. The interested reader can learn about these in e.g. \cite{Kit,Bom,CCW,CowMa}. We do give a brief recap of ribbon operators, although without much rigour, as these will be useful later. \begin{definition}\rm \label{def:ribbon} A ribbon $\xi$ is a strip of face width that connects two sites $s_0 = (v_0,p_0)$ and $s_1 = (v_1,p_1)$ on the lattice. A ribbon operator $F^{h,g}_\xi$ acts on the vector spaces associated to the edges along the path of the ribbon, as shown in Figure~\ref{figribbon}. We call this basis of ribbon operators labelled by $h$ and $g$ the \textit{group basis}.
\end{definition} \begin{figure} \[ \includegraphics[scale=0.8]{Fig1.pdf}\] \caption{\label{figribbon} Example of a ribbon operator for a ribbon $\xi$ from $s_0=(v_0,p_0)$ to $s_1=(v_1,p_1)$.} \end{figure} \begin{lemma}\label{lem:concat} If $\xi'$ is a ribbon concatenated with $\xi$, then the associated ribbon operators in the group basis satisfy \[F_{\xi'\circ\xi}^{h,g}=\sum_{f\in G}F_{\xi'}^{f^{-1}hf,f^{-1}g}\circ F_\xi^{h,f}, \quad F^{h,g}_\xi \circ F^{h',g'}_\xi=\delta_{g,g'}F_\xi^{hh',g}.\] \end{lemma} The first identity shows the role of the comultiplication of $D(G)^*$, \[\Delta(h\delta_g) = \sum_{f\in G} h\delta_f\otimes f^{-1}hf\delta_{f^{-1}g}\] in subalgebra notation, while the second identity implies that \[(F_\xi^{h,g})^\dagger = F_\xi^{h^{-1},g}.\] \begin{lemma}\label{ribcom}\cite{Kit} Let $\xi$ be a ribbon with the orientation as shown in Figure~\ref{figribbon} between sites $s_0=(v_0,p_0)$ and $s_1=(v_1,p_1)$. Then \[ [F_\xi^{h,g},f{\triangleright}_v]=0,\quad [F_\xi^{h,g},\delta_e{\triangleright}_p]=0,\] for all $v \notin \{v_0, v_1\}$ and $p \notin \{p_0, p_1\}$. Moreover, \[ f{\triangleright}_{s_0}\circ F_\xi^{h,g}=F_\xi^{fhf^{-1},fg} \circ f{\triangleright}_{s_0},\quad \delta_f{\triangleright}_{s_0}\circ F_\xi^{h,g}=F_\xi^{h,g} \circ\delta_{h^{-1}f}{\triangleright}_{s_0},\] \[ f{\triangleright}_{s_1}\circ F_\xi^{h,g}=F_\xi^{h,gf^{-1}} \circ f{\triangleright}_{s_1},\quad \delta_f{\triangleright}_{s_1}\circ F_\xi^{h,g}=F_\xi^{h,g}\circ \delta_{fg^{-1}hg}{\triangleright}_{s_1}\] for all ribbons where $s_0,s_1$ are disjoint, i.e. when $s_0$ and $s_1$ share neither vertices nor faces. The subscript notation $f{\triangleright}_v$ means the local action of $f\in \mathbb{C} G$ at vertex $v$, and the dual for $\delta_f{\triangleright}_s$ at a site $s$. \end{lemma} We call the above lemma the \textit{equivariance property} of ribbon operators. Such ribbon operators may be deformed according to a sort of discrete isotopy, so long as the endpoints remain the same.
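The comultiplication read off from the concatenation identity should itself be coassociative, consistent with concatenation of ribbons being associative. A small Python sketch (our own check, for $G=S_3$ as permutation tuples, working only with basis labels) confirming this:

```python
from itertools import permutations, product

def mul(a, b):                      # composition of permutations
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    r = [0, 0, 0]
    for i, j in enumerate(a):
        r[j] = i
    return tuple(r)

G = list(permutations(range(3)))

def Delta(h, g):
    # Delta(h delta_g) = sum_f h delta_f x f^{-1}hf delta_{f^{-1}g}, as label pairs
    return {((h, f), (mul(inv(f), mul(h, f)), mul(inv(f), g))) for f in G}

for h, g in product(G, G):
    lhs = {(a, b, c) for (x, c) in Delta(h, g) for (a, b) in Delta(*x)}
    rhs = {(a, b, c) for (a, x) in Delta(h, g) for (b, c) in Delta(*x)}
    assert lhs == rhs               # coassociativity of the coproduct of D(S3)^*

print("ribbon concatenation coproduct is coassociative")
```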
We formalised ribbon operators as left and right module maps in \cite{CowMa}, but skim over any further details here. The physical interpretation of ribbon operators is that they create, move and annihilate quasiparticles. \begin{lemma}\cite{Kit}\label{lem:ribs_only} Let $s_0$, $s_1$ be two sites on the lattice. The only operators in ${\rm End}(\hbox{{$\mathcal H$}})$ which change the states at these sites, and therefore create quasiparticles and change the distribution of measurement outcomes, but leave the state in vacuum elsewhere, are ribbon operators. \end{lemma} This lemma is somewhat hard to prove rigorously but a proof was sketched in \cite{CowMa}. Next, there is an alternate basis for these ribbon operators in which the physical interpretation becomes more obvious. The \textit{quasiparticle basis} has elements \begin{equation}F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,v} = \sum_{n\in G^{c_0}} \pi(n^{-1})_{ji} F_\xi^{c, q_c n q_d^{-1}},\end{equation} where ${\hbox{{$\mathcal C$}}}$ is a conjugacy class, $\pi$ is an irrep of the associated isotropy subgroup $G^{c_0}$ and $u = (c,i)$, $v = (d,j)$ label basis elements of $W_{{\hbox{{$\mathcal C$}}},\pi}$ in which $c,d \in {\hbox{{$\mathcal C$}}}$ and $i,j$ label a basis of $W_\pi$. This amounts to a nonabelian Fourier transform of the space of ribbons (that is, the Peter-Weyl isomorphism of $D(G)$) and has inverse \begin{equation}F_\xi^{h,g} = \sum_{{\hbox{{$\mathcal C$}}},\pi\in \hat{G^{c_0}}}\sum_{c\in{\hbox{{$\mathcal C$}}}}\delta_{h,gcg^{-1}} \sum_{i,j = 1}^{{\rm dim}(W_\pi)}\pi(q^{-1}_{gcg^{-1}}g q_c)_{ij}F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;a,b},\end{equation} where $a = (gcg^{-1},i)$ and $b=(c,j)$.
This reduces in the chargeon sector to the special cases \begin{equation}\label{chargeon_ribbons}F_\xi^{'e,\pi;i,j} = \sum_{n\in G}\pi(n^{-1})_{ji}F_\xi^{e,n}\end{equation} and \begin{equation}F_\xi^{e,g} = \sum_{\pi\in \hat{G}}\sum_{i,j = 1}^{{\rm dim}(W_\pi)}\pi(g)_{ij}F_\xi^{'e,\pi;i,j}.\end{equation} Meanwhile, in the fluxion sector we have \begin{equation}\label{fluxion_ribbons}F_\xi^{'{\hbox{{$\mathcal C$}}},1;c,d}=\sum_{n\in G^{c_0}}F_\xi^{c,q_c nq_d^{-1}}\end{equation} but there is no inverse in the fluxion sector. This is because the chargeon sector corresponds to the irreps of $\mathbb{C} G$, itself a semisimple algebra; the fluxion sector has no such correspondence. If $G$ is abelian then all $\pi$ are 1-dimensional and we do not have to worry about the indices for the basis of $W_\pi$; this then looks like a more usual Fourier transform. \begin{lemma}\label{lem:quasi_basis} If $\xi'$ is a ribbon concatenated with $\xi$, then the associated ribbon operators in the quasiparticle basis satisfy \[ F_{\xi'\circ\xi}^{'{\hbox{{$\mathcal C$}}},\pi;u,v}=\sum_w F_{\xi'}^{'{\hbox{{$\mathcal C$}}},\pi;w,v}\circ F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,w}\] and are such that the nonabelian Fourier transform takes convolution to multiplication and vice versa, as it does in the abelian case. \end{lemma} In particular, we have the \textit{ribbon trace operators}, $W^{{\hbox{{$\mathcal C$}}},\pi}_\xi := \sum_u F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,u}$. Such ribbon trace operators create exactly the quasiparticles of type ${\hbox{{$\mathcal C$}}},\pi$ from the vacuum, meaning that \[P_{({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>}{\triangleleft}_{s_1}P_{({\hbox{{$\mathcal C$}}},\pi)}.\] We refer to \cite{CowMa} for more details and proofs of the above.
\begin{example}\rm \label{exDS3} Our go-to example for our expositions will be $G=S_3$ generated by transpositions $u=(12), v=(23)$ with $w=(13)=uvu=vuv$. There are then 8 irreducible representations of $D(S_3)$ according to the choices ${\hbox{{$\mathcal C$}}}_0=\{e\}$, ${\hbox{{$\mathcal C$}}}_1=\{u,v,w\}$, ${\hbox{{$\mathcal C$}}}_2=\{uv,vu\}$ for which we pick representatives $c_0=e$, $q_e=e$, $c_1=u$, $q_u=e$, $q_v=w$, $q_w=v$ and $c_2=uv$ with $q_{uv}=e,q_{vu}=v$ (with the $c_i$ in the role of $c_0$ in the general theory). Here $G^{c_0}=S_3$ with 3 representations $\pi=$ trivial, sign and $W_2$ the 2-dimensional one given by (say) $\pi(u)=\sigma_3, \pi(v)=(\sqrt{3}\sigma_1-\sigma_3)/2$, $G^{c_1}=\{e,u\}=\mathbb{Z}_2$ with $\pi(u)=\pm1$ and $G^{c_2}=\{e,uv,vu\}=\mathbb{Z}_3$ with $\pi(uv)=1,\omega,\omega^2$ for $\omega=e^{2\pi\imath\over 3}$. See \cite{CowMa} for details and calculations of the associated projectors and some $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$ operators. \end{example} \section{Gapped Boundaries}\label{sec:gap} While $D(G)$ is the relevant algebra for the bulk of the model, our focus is on the boundaries. For these, we require a different class of algebras. \subsection{The boundary subalgebra $\Xi(R,K)$}\label{sec:xi} Let $K\subseteq G$ be a subgroup of a finite group $G$ and $G/K=\{gK\ |\ g\in G\}$ be the set of left cosets. It is not necessary in this section, but convenient, to fix a representative $r$ for each coset and let $R\subseteq G$ be the set of these, so there is a bijection between $R$ and $G/K$ whereby $r\leftrightarrow rK$. We assume that $e\in R$ and call such a subset (or section of the map $G\to G/K$) a {\em transversal}. Every element of $G$ factorises uniquely as $rx$ for $r\in R$ and $x\in K$, giving a coordinatisation of $G$ which we will use. Next, as we quotiented by $K$ from the right, we still have an action of $K$ from the left on $G/K$, which we denote ${\triangleright}$. 
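For concreteness, the following Python sketch (our illustration, with $G=S_3$ as permutation tuples and $K=\{e,u\}\cong\mathbb{Z}_2$) constructs such a transversal, confirms the unique factorisation $g=rx$, and checks that the induced map on representatives, computed by refactorising $xr$, is indeed a left action:

```python
from itertools import permutations, product

def mul(a, b):                   # composition of permutations
    return tuple(a[b[i]] for i in range(3))

G = list(permutations(range(3)))
e = (0, 1, 2)
u = (1, 0, 2)                    # the transposition u = (12)
K = [e, u]                       # subgroup K = Z2

# choose a transversal R: one representative per left coset gK, with e in R
R, seen = [], set()
for g in G:
    coset = frozenset(mul(g, x) for x in K)
    if coset not in seen:
        seen.add(coset)
        R.append(g)
assert R[0] == e

def factorise(g):                # unique factorisation g = r x, r in R, x in K
    matches = [(r, x) for r in R for x in K if mul(r, x) == g]
    assert len(matches) == 1     # uniqueness of the factorisation G = RK
    return matches[0]

def act(x, r):                   # x |> r, read off from x r = (x |> r) x'
    return factorise(mul(x, r))[0]

# check that |> is a left group action of K on R
for x, y, r in product(K, K, R):
    assert act(mul(x, y), r) == act(x, act(y, r))
assert all(act(e, r) == r for r in R)
print("transversal:", R)
```

Here the transversal and the orbits of ${\triangleright}$ depend on the choices made; the names `factorise` and `act` are our own shorthand for the factorisation and action just described.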
By the above bijection, this equivalently means an action ${\triangleright}:K\times R\to R$ on $R$ which in terms of the factorisation is determined by $xry=(x{\triangleright} r)y'$, where we refactorise $xry$ in the form $RK$ for some $y'\in R$. There is much more information in this factorisation, as we will see in Section~\ref{sec:quasi}, but this action is all we need for now. Also note that we have chosen to work with left cosets so as to be consistent with the literature \cite{CCW,BSW}, but one could equally choose a right coset factorisation to build a class of algebras similar to those in \cite{KM2}. We consider the algebra $\mathbb{C}(G/K){>\!\!\!\triangleleft} \mathbb{C} K$ as the cross product by the above action. Using our coordinatisation, this becomes the following algebra. \begin{definition}\label{defXi} $\Xi(R,K)=\mathbb{C}(R){>\!\!\!\triangleleft} \mathbb{C} K$ is generated by $\mathbb{C}(R)$ and $\mathbb{C} K$ with cross relations $x\delta_r=\delta_{x{\triangleright} r} x$. Over $\mathbb{C}$, this is a $*$-algebra with $(\delta_r x)^*=x^{-1}\delta_r=\delta_{x^{-1}{\triangleright} r}x^{-1}$. \end{definition} If we choose a different transversal $R$ then the algebra does not change up to an isomorphism which maps the $\delta$-functions between the corresponding choices of representative. Of relevance to the applications, we also have: \begin{lemma} $\Xi(R,K)$ has the `integral element' \[\Lambda:=\Lambda_{\mathbb{C}(R)} \otimes \Lambda_{\mathbb{C} K} = \delta_e \frac{1}{|K|}\sum_{x\in K}x\] characterised by $\xi\Lambda={\epsilon}(\xi)\Lambda=\Lambda\xi$ for all $\xi\in \Xi$, and ${\epsilon}(\Lambda)=1$.
\end{lemma} {\noindent {\bfseries Proof:}\quad } We check that \begin{align*} \xi\Lambda& = (\delta_s y)(\delta_e\frac{1}{|K|}\sum_{x\in K}x) = \delta_{s,y{\triangleright} e}\delta_s\frac{1}{|K|}\sum_{x\in K}yx= \delta_{s,e}\delta_e \frac{1}{|K|}\sum_{x\in K}x\\ &= {\epsilon}(\xi)\Lambda = \frac{1}{|K|}\sum_{x\in K}\delta_{s,e}\delta_e xy = \frac{1}{|K|}\sum_{x\in K}\delta_{e,x{\triangleright} s}\delta_e xy = \Lambda\xi. \end{align*} And clearly, ${\epsilon}(\Lambda) = \delta_{e,e} {|K|\over |K|} = 1$. \endproof As a cross product algebra, we can take the same approach as with $D(G)$ to the classification of its irreps: \begin{lemma} Irreps of $\Xi(R,K)$ are classified by pairs $(\hbox{{$\mathcal O$}},\rho)$ where $\hbox{{$\mathcal O$}}\subseteq R$ is an orbit under the action ${\triangleright}$ and $\rho$ is an irrep of the isotropy group $K^{r_0}:=\{x\in K\ |\ x{\triangleright} r_0=r_0\}$. Here we fix a base point $r_0\in \hbox{{$\mathcal O$}}$ as well as $\kappa: \hbox{{$\mathcal O$}}\to K $ a choice of lift such that \[ \kappa_r{\triangleright} r_0 = r,\quad\forall r\in \hbox{{$\mathcal O$}},\quad \kappa_{r_0}=e.\] Then \[ V_{\hbox{{$\mathcal O$}},\rho}=\mathbb{C} \hbox{{$\mathcal O$}}\mathop{{\otimes}} V_\rho,\quad \delta_r(s\mathop{{\otimes}} v)=\delta_{r,s}s\mathop{{\otimes}} v,\quad x.(s\mathop{{\otimes}} v)=x{\triangleright} s\mathop{{\otimes}}\zeta_s(x).v,\quad \zeta_s(x)=\kappa^{-1}_{x{\triangleright} s}x\kappa_s\] for $v\in V_\rho$, the carrier space for $\rho$, and \[ \zeta: \hbox{{$\mathcal O$}}\times K\to K^{r_0},\quad \zeta_r(x)=\kappa_{x{\triangleright} r}^{-1}x\kappa_r.\] \end{lemma} {\noindent {\bfseries Proof:}\quad } One can check that $\zeta_r(x)$ lives in $K^{r_0}$, \[ \zeta_r(x){\triangleright} r_0=(\kappa_{x{\triangleright} r}^{-1}x\kappa_r){\triangleright} r_0=\kappa_{x{\triangleright} r}^{-1}{\triangleright}(x{\triangleright} r)=\kappa_{x{\triangleright} r}^{-1}{\triangleright}(\kappa_{x{\triangleright} r}{\triangleright} r_0)=r_0\] and
the cocycle property \[ \zeta_r(xy)=\kappa^{-1}_{x{\triangleright} y{\triangleright} r}x \kappa_{y{\triangleright} r}\kappa^{-1}_{y{\triangleright} r}y \kappa_r=\zeta_{y{\triangleright} r}(x)\zeta_r(y),\] from which it is easy to see that $V_{\hbox{{$\mathcal O$}},\rho}$ is a representation, \[ x.(y.(s\mathop{{\otimes}} v))=x.(y{\triangleright} s\mathop{{\otimes}} \zeta_s(y). v)=x{\triangleright}(y{\triangleright} s)\mathop{{\otimes}}\zeta_{y{\triangleright} s}(x)\zeta_s(y).v=xy{\triangleright} s\mathop{{\otimes}}\zeta_s(xy).v=(xy).(s\mathop{{\otimes}} v),\] \[ x.(\delta_r.(s\mathop{{\otimes}} v))=\delta_{r,s}x{\triangleright} s\mathop{{\otimes}} \zeta_s(x). v= \delta_{x{\triangleright} r,x{\triangleright} s}x{\triangleright} s\mathop{{\otimes}}\zeta_s(x).v=\delta_{x{\triangleright} r}.(x.(s\mathop{{\otimes}} v)).\] One can show that $V_{\hbox{{$\mathcal O$}},\rho}$ are irreducible and do not depend up to isomorphism on the choice of $r_0$ or $\kappa_r$.\endproof In the $*$-algebra case as here, we obtain a unitary representation if $\rho$ is unitary. One can also show that all irreps can be obtained this way. In fact the algebra $\Xi(R,K)$ is semisimple and has a block associated to the $V_{\hbox{{$\mathcal O$}},\rho}$. \begin{lemma}\label{Xiproj} $\Xi(R,K)$ has a complete orthogonal set of central idempotents \[ P_{(\hbox{{$\mathcal O$}},\rho)}={\dim V_\rho\over |K^{r_0}|}\sum_{r\in\hbox{{$\mathcal O$}}}\sum_{n\in K^{r_0}} \mathrm{ Tr}_{\rho}(n^{-1})\delta_r\mathop{{\otimes}} \kappa_r n \kappa_r^{-1}.\] \end{lemma} {\noindent {\bfseries Proof:}\quad } The proofs are similar to those for $D(G)$ in \cite{CowMa}.
That we have a projection is \begin{align*}P_{(\hbox{{$\mathcal O$}},\rho)}^2&={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,n\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(n^{-1})\sum_{r,s\in \hbox{{$\mathcal O$}}}(\delta_r\mathop{{\otimes}} \kappa_rm\kappa_r^{-1})(\delta_s\mathop{{\otimes}}\kappa_sn\kappa_s^{-1})\\ &={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,n\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(n^{-1})\sum_{r,s\in \hbox{{$\mathcal O$}}}\delta_r\delta_{r,s}\mathop{{\otimes}} \kappa_rm\kappa_r^{-1}\kappa_s n\kappa_s^{-1}\\ &={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,m'\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(m m'{}^{-1})\sum_{r\in \hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} \kappa_rm'\kappa_r^{-1}= P_{(\hbox{{$\mathcal O$}},\rho)} \end{align*} where we used $r=\kappa_r m\kappa_r^{-1}{\triangleright} s$ iff $s=\kappa_r m^{-1}\kappa_r^{-1}{\triangleright} r=\kappa_r m^{-1}{\triangleright} r_0=\kappa_r{\triangleright} r_0=r$. We then changed $mn=m'$ as a new variable and used the orthogonality formula for characters on $K^{r_0}$. The orthogonality of distinct projectors is checked similarly. The sum of projectors is 1 since \begin{align*}\sum_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}=\sum_{\hbox{{$\mathcal O$}}, r\in \hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} \kappa_r\sum_{\rho\in \hat{K^{r_0}}} \left({\dim V_\rho\over |K^{r_0}|}\sum_{n\in K^{r_0}} \mathrm{ Tr}_{\rho}(n^{-1}) n\right) \kappa_r^{-1}=\sum_{\hbox{{$\mathcal O$}},r\in\hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} 1=1, \end{align*} where the bracketed expression is the projector $P_\rho$ for $\rho$ in the group algebra of $K^{r_0}$, and these sum to 1 by the Peter-Weyl decomposition of the latter. \endproof \begin{remark}\rm In the previous literature, the irreps have been described using double cosets and representatives thereof \cite{CCW}.
In fact a double coset in ${}_KG_K$ is an orbit for the left action of $K$ on $G/K$ and hence has the form $\hbox{{$\mathcal O$}} K$ corresponding to an orbit $\hbox{{$\mathcal O$}}\subset R$ in our approach. We will say more about this later, in Proposition~\ref{prop:mon_equiv}. \end{remark} An important question for the physics is how representations on the bulk relate to those on the boundary. This is afforded by functors in the two directions. Here we give a much more direct approach to this issue as follows. \begin{proposition}\label{Xisub} There is an inclusion of algebras $i:\Xi(R,K)\hookrightarrow D(G)$ \[ i(x)=x,\quad i(\delta_r)=\sum_{x\in K} \delta_{rx}.\] The pull-back or restriction of a $D(G)$-module $W$ to a $\Xi$-module $i^*(W)$ is simply for $\xi\in \Xi$ to act by $i(\xi)$. Going the other way, the induction functor sends a $\Xi$-module $V$ to a $D(G)$-module $D(G)\mathop{{\otimes}}_\Xi V$, where $\xi\in \Xi$ right acts on $D(G)$ by right multiplication by $i(\xi)$. These two functors are adjoint. \end{proposition} {\noindent {\bfseries Proof:}\quad } We just need to check that $i$ respects the relations of $\Xi$. Thus, \begin{align*} i(\delta_r)i(\delta_s)&=\sum_{x,y\in K}\delta_{rx}\delta_{sy}=\sum_{x\in K}\delta_{r,s}\delta_{rx}=i(\delta_r\delta_s), \\ i(x)i(\delta_r)&=\sum_{y\in K}x\delta_{ry}=\sum_{y\in K}\delta_{xryx^{-1}}x=\sum_{y\in K}\delta_{(x{\triangleright} r)x'yx^{-1}}x=\sum_{y'\in K}\delta_{(x{\triangleright} r)y'}x=i(\delta_{x{\triangleright} r} x),\end{align*} as required. For the first line, we used the unique factorisation $G=RK$ to break down the $\delta$-functions. For the second line, we use this in the form $xr=(x{\triangleright} r)x'$ for some $x'\in K$ and then changed variables from $y$ to $y'=x'yx^{-1}$. The rest follows as for any algebra inclusion. \endproof In fact, $\Xi$ is a quasi-bialgebra and at least when $(\ )^R$ is bijective a quasi-Hopf algebra, as we see in Section~\ref{sec:quasi}. 
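Returning to the inclusion $i:\Xi(R,K)\hookrightarrow D(G)$, it can be verified concretely. A Python sketch (ours, continuing the $G=S_3$, $K=\{e,u\}$ running example as permutation tuples) checking that $i$ respects both sets of relations of $\Xi$ inside $D(G)$:

```python
from itertools import permutations, product

def mul(a, b):                           # composition of permutations
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    r = [0, 0, 0]
    for i, j in enumerate(a):
        r[j] = i
    return tuple(r)

G = list(permutations(range(3)))
e = (0, 1, 2)
K = [e, (1, 0, 2)]                       # K = Z2 generated by u = (12)

R, seen = [], set()                      # transversal of left cosets, with e in R
for g in G:
    coset = frozenset(mul(g, x) for x in K)
    if coset not in seen:
        seen.add(coset)
        R.append(g)

def act(x, r):                           # x |> r from the factorisation xr = (x |> r)x'
    xr = mul(x, r)
    [(rr, _)] = [(s, y) for s in R for y in K if mul(s, y) == xr]
    return rr

def basis_mul(g, h, f, k):               # D(G) product on basis elements, or None
    return (g, mul(h, k)) if g == mul(h, mul(f, inv(h))) else None

def elem_mul(A, B):
    C = {}
    for (g, h), ca in A.items():
        for (f, k), cb in B.items():
            r = basis_mul(g, h, f, k)
            if r is not None:
                C[r] = C.get(r, 0) + ca * cb
    return {key: v for key, v in C.items() if v}

def i_delta(r):                          # i(delta_r) = sum_{x in K} delta_{rx}
    return {(mul(r, x), e): 1 for x in K}

def i_grp(y):                            # i(y) = y = sum_g delta_g x y
    return {(g, y): 1 for g in G}

for r, s in product(R, R):               # i(delta_r)i(delta_s) = delta_{r,s} i(delta_r)
    assert elem_mul(i_delta(r), i_delta(s)) == (i_delta(r) if r == s else {})

for y, r in product(K, R):               # i(y)i(delta_r) = i(delta_{y|>r})i(y)
    assert elem_mul(i_grp(y), i_delta(r)) == elem_mul(i_delta(act(y, r)), i_grp(y))

print("i : Xi(R,K) -> D(S3) respects the relations")
```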
In the latter case, it has a quantum double $D(\Xi)$ which contains $\Xi$ as a sub-quasi-Hopf algebra. Moreover, it can be shown that $D(\Xi)$ is a `Drinfeld cochain twist' of $D(G)$, which implies it has the same algebra as $D(G)$. We do not need details, but this is the abstract reason for the above inclusion. (An explicit proof of this twisting result in the usual Hopf algebra case with $R$ a group is in \cite{BGM}.) Meanwhile, the statement that the two functors in Proposition~\ref{Xisub} are adjoint is that \[ \hom_{D(G)}(D(G)\mathop{{\otimes}}_\Xi V,W)=\hom_\Xi(V, i^*(W))\] for all $\Xi$-modules $V$ and all $D(G)$-modules $W$. These functors do not take irreps to irreps and of particular interest are the multiplicities for the decompositions back into irreps, i.e. if $V_i, W_a$ are respective irreps and $D(G)\mathop{{\otimes}}_\Xi V_i=\oplus_{a} n^i{}_a W_a$ then \[ {\rm dim}(\hom_{D(G)}(D(G)\mathop{{\otimes}}_\Xi V_i,W_a))={\rm dim}(\hom_\Xi(V_i,i^*(W_a)))\] and hence $i^*(W_a)=\oplus_i n^i_a V_i$. This explains one of the observations in \cite{CCW}. It remains to give a formula for these multiplicities, but here we were not able to reproduce the formula in \cite{CCW}. Our approach goes via a general lemma as follows. \begin{lemma}\label{lemfrobn} Let $i:A\hookrightarrow B$ be an inclusion of finite-dimensional semisimple algebras and $\int$ the unique symmetric special Frobenius linear form on $B$ such that $\int 1=1$. Let $V_i$ be an irrep of $A$ and $W_a$ an irrep of $B$. Then the multiplicity of $V_i$ in the pull-back $i^*(W_a)$ (which is the same as the multiplicity of $W_a$ in $B\mathop{{\otimes}}_A V_i$) is given by \[ n^i{}_a={\dim(B)\over\dim(V_i)\dim(W_a)}\int i(P_i)P_a,\] where $P_i\in A$ and $P_a\in B$ are the associated central idempotents. Moreover, $i(P_i)P_a =0$ if and only if $n^i_a = 0$.
\end{lemma} {\noindent {\bfseries Proof:}\quad } Recall that a linear map $\int:B\to \mathbb{C}$ is Frobenius if the bilinear form $(b,c):=\int bc$ is nondegenerate, and is symmetric if this bilinear form is symmetric. Also, let $g=g^1\mathop{{\otimes}} g^2\in B\mathop{{\otimes}} B$ (in a notation with the sum of such terms understood) be the associated `metric' such that $(\int b g^1 )g^2=b=g^1\int g^2b$ for all $b$ (it is the inverse matrix in a basis of the algebra). We say that the Frobenius form is special if the algebra product $\cdot$ obeys $\cdot(g)=1$. It is well-known that there is a unique symmetric special Frobenius form up to scale, given by the trace in the left regular representation, see \cite{MaRie:spi} for a recent study. In our case, over $\mathbb{C}$, we also know that a finite-dimensional semisimple algebra $B$ is a direct sum of matrix algebras ${\rm End}(W_a)$ associated to the irreps $W_a$ of $B$. Then \begin{align*} \int i(P_i)P_a&={1\over\dim(B)}\sum_{\alpha,\beta}\<f^\alpha\mathop{{\otimes}} e_\beta,i(P_i)P_a (e_\alpha\mathop{{\otimes}} f^\beta)\>\\ &={1\over\dim(B)}\sum_{\alpha}\dim(W_a)\<f^\alpha, i(P_i)e_\alpha\>={\dim(W_a)\dim(V_i)\over\dim(B)} n^i{}_a. \end{align*} where $\{e_\alpha\}$ is a basis of $W_a$ and $\{f^\beta\}$ is a dual basis, and $P_a$ acts as the identity on $\mathrm{ End}(W_a)$ and zero on the other blocks. We then used that if $i^*(W_a)=\oplus_i {n^i{}_a}V_i$ as $A$-modules, then $i(P_i)$ just picks out the $V_i$ components where $P_i$ acts as the identity. For the last part, the forward direction is immediate given the first part of the lemma. For the other direction, suppose $n^i_a = 0$ so that $i^*(W_a)=\oplus_j n^j_aV_j$ with $j\ne i$ running over the other irreps of $A$.
Now, we can view $P_{a}\in W_{a}\mathop{{\otimes}} W_{a}^*$ (as the identity element) and left multiplication by $i(P_i)$ is the same as $P_i$ acting on $P_{a}$ viewed as an element of $i^*(W_{a})\mathop{{\otimes}} W_{a}^*$, which is therefore zero.\endproof We apply Lemma~\ref{lemfrobn} in our case of $A=\Xi$ and $B=D(G)$, where \[ \dim(V_i)=|\hbox{{$\mathcal O$}}|\dim(V_\rho), \quad \dim(W_a)=|{\hbox{{$\mathcal C$}}}|\dim(W_\pi)\] with $i=(\hbox{{$\mathcal O$}},\rho)$ as described above and $a=({\hbox{{$\mathcal C$}}},\pi)$ as described in Section~\ref{sec:bulk}. \begin{proposition}\label{nformula} For the inclusion $i:\Xi\hookrightarrow D(G)$ in Proposition~\ref{Xisub}, the multiplicities for restriction and induction as above are given by \[ n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}= {|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|} \sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop r^{-1}c\in K}} |K^{r,c}|\sum_{\tau\in \hat{K^{r,c}} } n_{\tau,\tilde\rho|_{K^{r,c}}} n_{\tau, \tilde\pi|_{K^{r,c}}},\quad K^{r,c}=K^r\cap G^c,\] where $\tilde \pi(m)=\pi(q_c^{-1}mq_c)$ and $\tilde\rho(m)=\rho(\kappa_r^{-1}m\kappa_r)$ are the corresponding representations of $G^c$ and $K^r$ respectively, decomposing as $K^{r,c}$ representations as \[ \tilde\rho|_{K^{r,c}}{\cong}\oplus_\tau n_{\tau,\tilde\rho|_{K^{r,c}}}\tau,\quad \tilde\pi|_{K^{r,c}}{\cong}\oplus_\tau n_{\tau,\tilde\pi|_{K^{r,c}}}\tau.\] \end{proposition} {\noindent {\bfseries Proof:}\quad } We include the projector from Lemma~\ref{Xiproj} as \[ i(P_{(\hbox{{$\mathcal O$}},\rho)})={{\rm dim}(V_\rho)\over |K^{r_0}|}\sum_{r\in \hbox{{$\mathcal O$}}, x\in K}\sum_{m\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\delta_{rx}\mathop{{\otimes}} \kappa_r m\kappa_r^{-1}\] and multiply this by $P_{({\hbox{{$\mathcal C$}}},\pi)}$ from (\ref{Dproj}). In the latter, we write $c=sy$ for the factorisation of $c$. 
Then when we multiply these out, for $(\delta_{rx}\mathop{{\otimes}} \kappa_r m \kappa_r^{-1})(\delta_{c}\mathop{{\otimes}} q_c n q_c^{-1})$ we will need $\kappa_r m\kappa_r^{-1}{\triangleright} s=r$ or equivalently $s=\kappa_r m^{-1}\kappa_r^{-1}{\triangleright} r=r$, so we are actually summing not over $c$ but over $y\in K$ such that $ry\in {\hbox{{$\mathcal C$}}}$. Also then $x$ is uniquely determined in terms of $y$. Hence \[ i(P_{(\hbox{{$\mathcal O$}},\rho)})P_{({\hbox{{$\mathcal C$}}},\pi)}={{\rm dim}(V_\rho){\rm dim}(W_\pi)\over |K^{r_0}| |G^{c_0}|}\sum_{m\in K^{r_0}, n\in G^{c_0}}\sum_{r\in \hbox{{$\mathcal O$}}, y\in K | ry\in{\hbox{{$\mathcal C$}}}} \mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\pi(n^{-1}) \delta_{rx}\mathop{{\otimes}} \kappa_r m\kappa_r^{-1} q_c nq_c^{-1}.\] Now we apply the integral of $D(G)$, $\int\delta_g\mathop{{\otimes}} h=\delta_{h,e}$, which requires \[ n=q_c^{-1}\kappa_r m^{-1}\kappa_r^{-1}q_c\] and $x=y$, with $n\in G^{c_0}$ given that $c=ry$. We refer to this condition on $y$ as $(\star)$. Remembering that $\int$ is normalised so that $\int 1=|G|$, we have from the lemma \begin{align*}n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}&={|G|\over\dim(V_i)\dim(W_a)}\int i(P_{(\hbox{{$\mathcal O$}},\rho)})P_{({\hbox{{$\mathcal C$}}},\pi)}\\ &={|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|}\sum_{m\in K^{r_0}}\sum_{{r\in \hbox{{$\mathcal O$}}, y\in K\atop (\star), ry\in{\hbox{{$\mathcal C$}}}}} \mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\pi(q_{ry}^{-1}\kappa_r m\kappa_r^{-1}q_{ry}) \\ &={|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|}\sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop r^{-1}c\in K}}\sum_{m'\in K^r\cap G^c} \mathrm{ Tr}_\rho(\kappa_r^{-1}m'{}^{-1}\kappa_r)\mathrm{ Tr}_\pi(q_{c}^{-1} m' q_{c}), \end{align*} where we compute in $G$ and changed variables to $m':=\kappa_r m \kappa_r^{-1}$, with $(\star)$ equivalent to $m'\in K^r\cap G^c$. 
We then use the group orthogonality formula \[ \sum_{m\in K^{r,c}}\mathrm{ Tr}_{\tau}(m^{-1})\mathrm{ Tr}_{\tau'}(m)=\delta_{\tau,\tau'}|K^{r,c}| \] for any irreps $\tau,\tau'$ of the group \[ K^{r,c}:=K^r\cap G^c=\{x\in K\ |\ x{\triangleright} r=r,\quad x c x^{-1}=c\}\] to obtain the formula stated. \endproof This simplifies in four (overlapping) special cases as follows. \noindent{(i) $V_i$ trivial: } \[ n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G|\over |{\hbox{{$\mathcal C$}}}||K||G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap K}\sum_{m\in K\cap G^c}\mathrm{ Tr}_\pi(q_c^{-1}mq_c)={|G| \over |{\hbox{{$\mathcal C$}}}| |K||G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap K} |K^c| n_{1,\tilde\pi}\] as $\rho=1$ implies $\tilde\rho=1$ and forces $\tau=1$. Here $K^c$ is the centraliser of $c\in K$. If $n_{1,\tilde\pi}$ is independent of the choice of $c$ then we can simplify this further as \[ n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G| |({\hbox{{$\mathcal C$}}}\cap K)/K|\over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|} n_{1,\pi|_{K^{c_0}}}\] using the orbit-counting lemma, where $K$ acts on ${\hbox{{$\mathcal C$}}}\cap K$ by conjugation. \noindent{(ii) $W_a$ trivial:} \[ n^{(\hbox{{$\mathcal O$}},\rho)}_{(\{e\},1)}= {|G|\over |\hbox{{$\mathcal O$}}||K^{r_0}||G|}\sum_{r\in \hbox{{$\mathcal O$}}\cap K}\sum_{m\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})=\begin{cases} 1 & {\rm if\ }\hbox{{$\mathcal O$}}, \rho\ {\rm trivial}\\ 0 & {\rm else}\end{cases} \] as $\hbox{{$\mathcal O$}}\cap K=\{e\}$ if $\hbox{{$\mathcal O$}}=\{e\}$ (but is otherwise empty) and in this case only $r=e$ contributes. This is consistent with the fact that if $W_a$ is the trivial representation of $D(G)$ then its pull back is also trivial and hence contains only the trivial representation of $\Xi$. 
\noindent{(iii) Fluxion sector:} \[ n^{(\hbox{{$\mathcal O$}},1)}_{({\hbox{{$\mathcal C$}}},1)}= {|G|\over |\hbox{{$\mathcal O$}}||{\hbox{{$\mathcal C$}}}||K^{r_0}| |G^{c_0}|} \sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop r^{-1}c\in K}} |K^r\cap G^c|.\] \noindent{(iv) Chargeon sector: } \[ n^{(\{e\},\rho)}_{(\{e\},\pi)}= n_{\rho, \pi|_{K}},\] where $\rho,\pi$ are arbitrary irreps of $K,G$ respectively and only $r=c=e$ are allowed so $K^{r,c}=K$, and then only $\tau=\rho$ contributes. \begin{example}\label{exS3n}\rm (i) We take $G=S_3$, $K=\{e,u\}=\mathbb{Z}_2$, where $u=(12)$. Here $G/K$ consists of \[ G/K=\{\{e, u\}, \{w, uv\}, \{v, vu\}\}\] and our standard choice of $R$ will be $R=\{e,uv, vu\}$, where we take one from each coset (but any other transversal will have the same irreps and their decompositions). This leads to 3 irreps of $\Xi(R,K)$ as follows. In $R$, we have two orbits $\hbox{{$\mathcal O$}}_0=\{e\}$, $\hbox{{$\mathcal O$}}_1=\{uv,vu\}$ and we choose representatives $r_0=e,\kappa_e=e$, $r_1=uv, \kappa_{uv}=e, \kappa_{vu}=u$ since $u{\triangleright} (uv)=vu$ for the two cases (here $r_1$ was denoted $r_0$ in the general theory and is the choice for $\hbox{{$\mathcal O$}}_1$). We also have $u{\triangleright}(vu)=uv$. Note that it happens that these orbits are also conjugacy classes but this is an accident of $S_3$ and not true for $S_4$. We have $K^{r_0}=K=\mathbb{Z}_2$ with representations $\rho(u)=\pm1$ and $K^{r_1}=\{e\}$ with only the trivial representation. (ii) For $D(S_3)$, we have the 8 irreps in Example~\ref{exDS3} and hence there is a $3\times 8$ table of the $\{n^i{}_a\}$. We can easily compute some of the special cases from the above. For example, the trivial $\pi$ restricted to $K$ is $\rho=1$, the sign representation restricted to $K$ is the $\rho=-1$ representation, the $W_2$ restricted to $K$ is $1\oplus -1$, which gives the upper left $2\times 3$ submatrix for the chargeon sector. 
Another 6 entries (four new ones) are given from the fluxion formula. We also have ${\hbox{{$\mathcal C$}}}_2\cap K=\emptyset$ so that the latter part of the first two rows is zero by our first special case formula. For ${\hbox{{$\mathcal C$}}}_1,\pm1$ in the first row, we have ${\hbox{{$\mathcal C$}}}_1\cap K=\{u\}$ with trivial action of $K$, so just one orbit. This gives us a nontrivial result in the $+1$ case and 0 in the $-1$ case. The story for ${\hbox{{$\mathcal C$}}}_1,\pm1$ in the second row follows the same derivation, but needs $\tau=-1$ and hence $\pi=-1$ for the nonzero case. In the third row with ${\hbox{{$\mathcal C$}}}_2,\pi$, we have $K^{r}=\{e\}$ so $K^{r,c}=\{e\}$ and we only have $\tau=1=\rho$ as well as $\tilde\pi=1$ independently of $\pi$ as this is 1-dimensional. So both $n$ factors in the formula in Proposition~\ref{nformula} are 1. In the sum over $r,c$, we need $c=r$ so we sum over 2 possibilities, giving a nontrivial result as shown. For ${\hbox{{$\mathcal C$}}}_1,\pi$ in the third row, the first part goes the same way and we similarly have $c$ determined from $r$, so again two contributions in the sum, giving the answer shown independently of $\pi$. Finally, for ${\hbox{{$\mathcal C$}}}_0,\pi$ we have $r\in\{uv,vu\}$ and $c=e$, and can never meet the condition $r^{-1}c\in K$. So these all have $0$. 
Thus, Proposition~\ref{nformula} in this example tells us: \[\begin{array}{c|c|c|c|c|c|c|c|c} n^i{}_a & {\hbox{{$\mathcal C$}}}_0,1 & {\hbox{{$\mathcal C$}}}_0,{\rm sign} & {\hbox{{$\mathcal C$}}}_0,W_2 & {\hbox{{$\mathcal C$}}}_1, 1& {\hbox{{$\mathcal C$}}}_1,-1 & {\hbox{{$\mathcal C$}}}_2,1& {\hbox{{$\mathcal C$}}}_2,\omega & {\hbox{{$\mathcal C$}}}_2,\omega^2\\ \hline\ \hbox{{$\mathcal O$}}_0,1&1 & 0 & 1 &1 & 0& 0 &0 &0 \\ \hline \hbox{{$\mathcal O$}}_0,-1&0 & 1&1& 0& 1&0 &0 & 0\\ \hline \hbox{{$\mathcal O$}}_1,1&0 &0&0 & 1& 1 &1 &1 & 1 \end{array}\] One can check for consistency that for each $W_a$, $\dim(W_a)$ is the sum of the dimensions of the $V_i$ that it contains, which determines one row from the other two. \end{example} \subsection{Boundary lattice model}\label{sec:boundary_lat} Consider a vertex on the lattice $\Sigma$. Fixing a subgroup $K \subseteq G$, we define an action of $\mathbb{C} K$ on $\hbox{{$\mathcal H$}}$ by \begin{equation}\label{actXi0}\tikzfig{CaK_vertex_action}\end{equation} One can see that this is an action as it is a tensor product of representations on each edge, or simply because it is the restriction to $K$ of the vertex action of $G$ in the bulk. Next, we define the action of $\mathbb{C} (R)$ at a face relative to a cilium, \begin{equation}\label{actXi}\tikzfig{CGK_face_action}\end{equation} with a clockwise rotation. That this is indeed an action is also easy to check explicitly, recalling that, for any $r, r'\in R$, we have $rK = r'K$ if $r= r'$ and $rK \cap r'K = \emptyset$ otherwise. These actions define a representation of $\Xi(R,K)$, which is just the bulk $D(G)$ action restricted to $\Xi(R,K)\subseteq D(G)$ by the inclusion in Proposition~\ref{Xisub}. This says that $x\in K$ acts as in $G$ and $\mathbb{C}(R)$ acts on faces by the $\mathbb{C}(G)$ action after sending $\delta_r \mapsto \sum_{a\in rK}\delta_a$. To connect the above representation to the topic at hand, we now define what we mean by a boundary. 
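Before moving on, the inclusion $\Xi(R,K)\subseteq D(G)$ used here can be checked numerically in the running example $G=S_3$, $K=\{e,u\}$, $R=\{e,uv,vu\}$. The following sketch is our own illustration (not code from the references); it assumes the standard $D(G)$ product $(\delta_g\mathop{{\otimes}} h)(\delta_f\mathop{{\otimes}} k)=\delta_{g,hfh^{-1}}\delta_g\mathop{{\otimes}} hk$ and verifies that $\delta_r\mathop{{\otimes}} x\mapsto \sum_{a\in rK}\delta_a\mathop{{\otimes}} x$ respects the cross product rule $(\delta_r\mathop{{\otimes}} x)(\delta_s\mathop{{\otimes}} y)=\delta_{r,x{\triangleright} s}\,\delta_r\mathop{{\otimes}} xy$ of $\Xi(R,K)$:

```python
from itertools import permutations

def mul(p, q):
    # composition of permutations written as tuples
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0] * 3
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)   # u, v transpositions generating S3
G = [tuple(p) for p in permutations(range(3))]
K = [e, u]
uv, vu = mul(u, v), mul(v, u)
R = [e, uv, vu]                             # transversal of G/K used in the text

def factorise(g):
    # unique g = r k with r in R, k in K
    [(r, k)] = [(r, k) for r in R for k in K if mul(r, k) == g]
    return r, k

def act(x, s):
    # x |> s : the R-part of xs, i.e. the K-action on R
    return factorise(mul(x, s))[0]

def dmul(a, b):
    # product in D(G): (d_g @ h)(d_f @ k) = delta_{g,hfh^{-1}} d_g @ hk
    out = {}
    for (g, h), ca in a.items():
        for (f, k), cb in b.items():
            if g == mul(mul(h, f), inv(h)):
                key = (g, mul(h, k))
                out[key] = out.get(key, 0) + ca * cb
    return {key: c for key, c in out.items() if c}

def i(r, x):
    # inclusion d_r @ x |-> sum_{a in rK} d_a @ x
    return {(mul(r, k), x): 1 for k in K}

# i is an algebra map for the Xi(R,K) product (d_r @ x)(d_s @ y) = delta_{r,x|>s} d_r @ xy
for r in R:
    for s in R:
        for x in K:
            for y in K:
                lhs = dmul(i(r, x), i(s, y))
                rhs = i(r, mul(x, y)) if r == act(x, s) else {}
                assert lhs == rhs
print("inclusion Xi(R,K) -> D(G) respects the product")
```

Here $x{\triangleright} s$ is computed by factorising $xs$ in $G=RK$, so the check also exercises the unique factorisation.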
\subsubsection{Smooth boundaries} Consider the lattice in the half-plane for simplicity, \[\tikzfig{smooth_halfplane}\] where each solid black arrow still carries a copy of $\mathbb{C} G$ and ellipses indicate the lattice extending infinitely. The boundary runs along the left hand side and we refer to the rest of the lattice as the `bulk'. The grey dashed edges and vertices indicate empty space, and the lattice meets the boundary with faces; we will call this case a `smooth' boundary. There is a site $s_0$ indicated at the boundary. There is an action of $\mathbb{C} K$ at the boundary vertex associated to $s_0$, identical to the action of $\mathbb{C} K$ defined above but with the left edge undefined. Similarly, there is an action of $\mathbb{C}(R)$ at the face associated to $s_0$. However, this is more complicated, as the face has three edges undefined and the action must be defined slightly differently from in the bulk: \[\tikzfig{smooth_face_action}\] \[\tikzfig{smooth_face_actionB}\] where the action is given a superscript ${\triangleright}^b$ to differentiate it from the actions in the bulk. In the first case, we follow the same clockwise rotation rule but skip over the undefined values on the grey edges, while in the second case we go round anticlockwise. Which of the two rules applies is determined by whether the cilium is attached to the top or the bottom of the edge. It is easy to check that this defines a representation of $\Xi(R,K)$ on $\hbox{{$\mathcal H$}}$ associated to each smooth boundary site, such as $s_0$, and that the actions of $\mathbb{C}(R)$ have been chosen such that this holds. A similar principle holds for ${\triangleright}^b$ in other orientations of the boundary. 
The integral actions at a boundary vertex $v$ and at a face $s_0=(v,p)$ of a smooth boundary are then \[ A^b_1(v):=\Lambda_{\mathbb{C} K}{\triangleright}^b_v = {1\over |K|}\sum_k k{\triangleright}^b_v,\quad B^b_1(p):=\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{p} = \delta_e{\triangleright}^b_{p},\] where the superscript $b$ and subscript $1$ label that these are at a smooth boundary. We have the convenient property that \[\tikzfig{smooth_face_integral}\] so the vertex and face integral actions at a smooth boundary each depend only on the vertex and face respectively, not on the precise cilium, just as for the integral actions in the bulk. \begin{remark}\rm There is similarly an action of $\mathbb{C}(G) {>\!\!\!\triangleleft} \mathbb{C} K \subseteq D(G)$ on $\hbox{{$\mathcal H$}}$ at each site in the next layer into the bulk, where the site has the vertex at the boundary but an internal face. We mention this for completeness, and because using this fact it is easy to show that \[A_1^b(v)B(p) = B(p)A_1^b(v),\] where $B(p)$ is the usual integral action in the bulk. \end{remark} \begin{remark}\rm In \cite{BSW} it is claimed that one can similarly introduce actions at smooth boundaries defined not only by $R$ and $K$ but also by a 2-cocycle $\alpha$. This makes some sense categorically, as the module categories of $\hbox{{$\mathcal M$}}^G$ may also include such a 2-cocycle, which enters by way of a \textit{twisted} group algebra $\mathbb{C}_\alpha K$ \cite{Os2}. However, in Figure 6 of \cite{BSW} one can see that when the cocycle $\alpha$ is introduced all edges on the boundary are assumed to be copies of $\mathbb{C} K$, rather than $\mathbb{C} G$. On closer inspection, this means that the action on faces of $\delta_e\in\mathbb{C}(R)$ will always yield 1, and the action of any other basis element of $\mathbb{C}(R)$ will yield 0. Similarly, the action on vertices is defined to still be an action of $\mathbb{C} K$, not $\mathbb{C}_\alpha K$. 
Thus, the excitations on this boundary are restricted to only the representations of $\mathbb{C} K$, without either $\mathbb{C}(R)$ or $\alpha$ appearing, which appears to defeat the purpose of the definition. It is not obvious to us that a cocycle can be included along these lines in a consistent manner. \end{remark} In quantum computational terms, in addition to the measurements in the bulk we now measure the operator $\sum_{\hbox{{$\mathcal O$}},\rho}p_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b$ for distinct coefficients $p_{\hbox{{$\mathcal O$}},\rho} \in \mathbb{R}$ at all sites along the boundary. \subsubsection{Rough boundaries} We now consider the half-plane lattice with a different kind of boundary, \[\tikzfig{rough_halfplane}\] This time, there is an action of $\mathbb{C} K$ at the exterior vertex and an action of $\mathbb{C}(R)$ at the face at the boundary with an edge undefined. Again, the former is just the usual action of $\mathbb{C} K$ with three edges undefined, but the action of $\mathbb{C}(R)$ requires more care and is defined as \[\tikzfig{rough_face_action}\] \[\tikzfig{rough_face_actionB}\] \[\tikzfig{rough_face_actionC}\] \[\tikzfig{rough_face_actionD}\] All but the second action are just clockwise rotations as in the bulk, but with the greyed-out edge missing from the $\delta$-function. The second action goes counterclockwise in order to have an associated representation of $\Xi(R,K)$ at the bottom left. We have similar actions for other orientations of the lattice. \begin{remark}\rm Although one can check that one has a representation of $\Xi(R,K)$ at each site using these actions and the action of $\mathbb{C} K$ defined before, this requires $g_1$ and $g_2$ on opposite sides of the $\delta$-function, and $g_1$ and $g_3$ on opposite sides, respectively for the last two actions. This means that there is no way to get $\delta_e{\triangleright}^b$ to always be invariant under choice of site in the face. 
Indeed, we have not been able to reproduce the implicit claim in \cite{CCW} that $\delta_e{\triangleright}^b$ at a rough boundary can be defined in a way that depends only on the face. \end{remark} The integral actions at a boundary vertex $v$ and at a site $s_0=(v,p)$ of a rough boundary are then \[ A_2^b(v):=\Lambda_{\mathbb{C} K}{\triangleright}^b_v = {1\over |K|}\sum_k k{\triangleright}^b_v,\quad B_2^b(v,p):=\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{s_0} = \delta_e{\triangleright}_{s_0}^b \] where the superscript $b$ and subscript $2$ label that these are at a rough boundary. In computational terms, we measure the operator $\sum_{\hbox{{$\mathcal O$}},\rho}p_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b$ at each site along the boundary, as with smooth boundaries. Unlike the smooth boundary case, there is not an action of, say, $\mathbb{C} (R){>\!\!\!\triangleleft} \mathbb{C} G$ at each site in the next layer into the bulk, with a boundary face but interior vertex. In particular, we do not have $B_2^b(v,p)A(v) = A(v)B_2^b(v,p)$ in general, but we can still consistently define a Hamiltonian. When the action at $v$ is restricted to $\mathbb{C} K$ we recover an action of $\Xi(R,K)$ again. As with the bulk, the Hamiltonian incorporating the boundaries uses the actions of the integrals. We can accommodate both rough and smooth boundaries into the Hamiltonian. Let $V,P$ be the set of vertices and faces in the bulk, $S_1$ the set of all sites $(v,p)$ at smooth boundaries, and $S_2$ the same for rough boundaries. 
Then \begin{align*}H&=\sum_{v_i\in V} (1-A(v_i)) + \sum_{p_i\in P} (1-B(p_i)) \\ &\quad + \sum_{s_{b_1} \in S_1} \left((1 - A_1^b(s_{b_1})) + (1 - B_1^b(s_{b_1}))\right) + \sum_{s_{b_2} \in S_2} \left((1 - A_2^b(s_{b_2})) + (1 - B_2^b(s_{b_2}))\right).\end{align*} We can pick out two vacuum states immediately: \begin{equation}\label{eq:vac1}|{\rm vac}_1\> := \prod_{v_i,s_{b_1},s_{b_2}}A(v_i)A_1^b(s_{b_1})A_2^b(s_{b_2})\bigotimes_E e\end{equation} and \begin{equation}\label{eq:vac2}|{\rm vac}_2\> := \prod_{p_i,s_{b_1},s_{b_2}}B(p_i)B_1^b(s_{b_1})B_2^b(s_{b_2})\bigotimes_E \sum_{g \in G} g\end{equation} where the tensor product runs over all edges in the lattice. \begin{remark}\rm There is no need for two different boundaries to correspond to the same subgroup $K$, and the Hamiltonian can be defined accordingly. This principle is necessary when performing quantum computation by braiding `defects', i.e. finite holes in the lattice, on the toric code \cite{FMMC}, and also for the lattice surgery in Section~\ref{sec:patches}. We do not write out this Hamiltonian in all its generality here, but its form is obvious. \end{remark} \subsection{Quasiparticle condensation} Quasiparticles on the boundary correspond to irreps of $\Xi(R,K)$. It is immediate from Section~\ref{sec:xi} that when $\hbox{{$\mathcal O$}} = \{e\}$, we must have $r_0 = e, K^{r_0} = K$. We may choose the trivial representation of $K$ and then we have $P_{e,1} = \Lambda_{\mathbb{C}(R)} \otimes \Lambda_{\mathbb{C} K}$. We say that this particular measurement outcome corresponds to the absence of nontrivial quasiparticles, as the states yielding this outcome are precisely the locally vacuum states with respect to the Hamiltonian. This set of quasiparticles on the boundary will not in general be the same as quasiparticles defined in the bulk, as ${}_{\Xi(R,K)}\mathcal{M} \not\simeq {}_{D(G)}\mathcal{M}$ for all nontrivial $G$. 
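As a toy illustration of how the Frobenius form in Lemma~\ref{lemfrobn} computes such multiplicities, one can run the same formula for the plain inclusion of group algebras $\mathbb{C} K\subseteq \mathbb{C} S_3$ (the analogue of the chargeon sector, where $n^{(\{e\},\rho)}_{(\{e\},\pi)}=n_{\rho,\pi|_K}$), rather than the full $\Xi\subseteq D(G)$ case. This is our own sketch; for a group algebra the normalised symmetric special Frobenius form is just the coefficient of the group identity:

```python
from itertools import permutations
from fractions import Fraction

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0] * 3
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

e, u = (0, 1, 2), (1, 0, 2)
G = [tuple(p) for p in permutations(range(3))]
K = [e, u]                       # K = Z_2

def parity(p):
    # sign character, via the number of inversions
    return 1 if sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3)) % 2 == 0 else -1

def chi2(g):
    # character of the 2-dimensional irrep of S3, from the number of fixed points
    return {3: 2, 1: 0, 0: -1}[sum(g[i] == i for i in range(3))]

irreps_G = [(1, lambda g: 1), (1, parity), (2, chi2)]
irreps_K = [(1, lambda k: 1), (1, parity)]   # parity restricts to the sign rep of K

def idem(group, dim, chi):
    # central idempotent (dim/|group|) sum_g chi(g^{-1}) g
    return {g: Fraction(dim * chi(inv(g)), len(group)) for g in group}

def convolve(a, b):
    # product in the group algebra CG
    out = {}
    for g, ca in a.items():
        for h, cb in b.items():
            gh = mul(g, h)
            out[gh] = out.get(gh, 0) + ca * cb
    return out

def integral(a):
    # normalised symmetric special Frobenius form on CG: coefficient of the identity
    return a.get(e, Fraction(0))

table = []
for dV, chiV in irreps_K:
    row = []
    for dW, chiW in irreps_G:
        n = Fraction(len(G), dV * dW) * integral(convolve(idem(K, dV, chiV), idem(G, dW, chiW)))
        row.append(int(n))
    table.append(row)
print(table)   # [[1, 0, 1], [0, 1, 1]]
```

The output reproduces the chargeon block $\{n_{\rho,\pi|_K}\}$ of the table in Example~\ref{exS3n}.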
Quasiparticles in the bulk can be created from a vacuum and moved using ribbon operators \cite{Kit}, where the ribbon operators are seen as left and right module maps $D(G)^* \rightarrow \mathrm{End}(\hbox{{$\mathcal H$}})$, see \cite{CowMa}. Following \cite{CCW}, we could similarly define a different set of ribbon operators for the boundary excitations, which use $\Xi(R,K)^*$ rather than $D(G)^*$. However, these have limited utility; for completeness we cover them in Appendix~\ref{app:ribbon_ops}. Instead, for our purposes we will keep using the normal ribbon operators. Such normal ribbon operators extend to boundaries, still using Definition~\ref{def:ribbon}, so long as none of the edges involved in the definition are greyed-out. When a ribbon operator ends at a boundary site $s$, we are not concerned with equivariance with respect to the actions of $\mathbb{C}(G)$ and $\mathbb{C} G$ at $s$, as in Lemma~\ref{ribcom}. Instead we should calculate equivariance with respect to the actions of $\mathbb{C}(R)$ and $\mathbb{C} K$. We will study the matter in more depth in Section~\ref{sec:quasi}, but note that if $s,t\in R$ then unique factorisation means that $st=(s\cdot t)\tau(s,t)$ for unique elements $s\cdot t\in R$ and $\tau(s,t)\in K$. Similarly, if $y\in K$ and $r\in R$ then unique factorisation $yr=(y{\triangleright} r)(y{\triangleleft} r)$ defines the elements $y{\triangleright} r\in R$ and $y{\triangleleft} r\in K$, to be studied later. \begin{lemma}\label{boundary_ribcom} Let $\xi$ be an open ribbon from $s_0$ to $s_1$, where $s_0$ is located at a smooth boundary, for example: \[\tikzfig{smooth_halfplane_ribbon_short}\] and where $\xi$ begins at the specified orientation in the example, leading from $s_0$ into the bulk, rather than running along the boundary. 
Then \[x{\triangleright}^b_{s_0}\circ F_\xi^{h,g}=F_\xi^{xhx^{-1},xg} \circ x{\triangleright}^b_{s_0};\quad \delta_r{\triangleright}^b_{s_0}\circ F_\xi^{h,g}=F_\xi^{h,g} \circ\delta_{s\cdot(y{\triangleright} r)}{\triangleright}^b_{s_0}\] $\forall x\in K, r\in R, h,g\in G$, and where $sy$ is the unique factorisation of $h^{-1}$. \end{lemma} {\noindent {\bfseries Proof:}\quad } The first is just the vertex action of $\mathbb{C} G$ restricted to $\mathbb{C} K$, with an edge greyed-out which does not influence the result. For the second, expand $\delta_r{\triangleright}^b_{s_0}$ and verify explicitly: \[\tikzfig{smooth_halfplane_ribbon_shortA1}\] \[\tikzfig{smooth_halfplane_ribbon_shortA2}\] where we see $(s\cdot(y{\triangleright} r))K = s(y{\triangleright} r)\tau(s,y{\triangleright} r)^{-1}K = s(y{\triangleright} r)K = s(y{\triangleright} r)(y{\triangleleft} r)K = syrK = h^{-1}rK$. We check the other site as well: \[\tikzfig{smooth_halfplane_ribbon_shortB1}\] \[\tikzfig{smooth_halfplane_ribbon_shortB2}\] \endproof \begin{remark}\rm One might be surprised that the equivariance property holds for the latter case when $s_0$ is attached to the vertex at the bottom of the face, as in this case $\delta_r{\triangleright}^b_{s_0}$ confers a $\delta$-function in the counterclockwise direction, different from the bulk. This is because the well-known equivariance properties in the bulk \cite{Kit} are not wholly correct, depending on orientation, as pointed out in \cite[Section~3.3]{YCC}. We accounted for this by specifying an orientation in Lemma~\ref{ribcom}. \end{remark} \begin{remark}\rm\label{rem:rough_ribbon} We have a similar situation for a rough boundary, albeit we found only one orientation for which the same equivariance property holds, which is: \[\tikzfig{rough_halfplane_ribbon}\] In the reverse orientation, where the ribbon crosses downwards instead, equivariance is similar but with the introduction of an antipode. 
For other orientations we do not find an equivariance property at all. \end{remark} As with the bulk, we can define an excitation space using a ribbon between the two endpoints $s_0$, $s_1$, although more care must be taken in the definition. \begin{lemma}\label{Ts0s1} Let ${|{\rm vac}\>}$ be a vacuum state on a half-plane $\Sigma$, where there is one smooth boundary beyond which there are no more edges. Let $\xi$ be a ribbon between two endpoints $s_0, s_1$ where $s_0 = \{v_0,p_0\}$ is on the boundary and $s_1 = \{v_1,p_1\}$ is in the bulk, such that $\xi$ interacts with the boundary only once, when crossing from $s_0$ into the bulk; it cannot cross back and forth multiple times. Let $|\psi^{h,g}\>:=F_\xi^{h,g}{|{\rm vac}\>}$, and $\hbox{{$\mathcal T$}}_{\xi}(s_0,s_1)$ be the space with basis $|\psi^{h,g}\>$. (1) $|\psi^{h,g}\>$ is independent of the choice of ribbon through the bulk between fixed sites $s_0, s_1$, so long as the ribbon still only interacts with the boundary at the chosen location. (2) $\hbox{{$\mathcal T$}}_\xi(s_0,s_1)\subset\hbox{{$\mathcal H$}}$ inherits actions at disjoint sites $s_0, s_1$, \[ x{\triangleright}^b_{s_0}|\psi^{h,g}\>=|\psi^{ xhx^{-1},xg}\>,\quad \delta_r{\triangleright}^b_{s_0}|\psi^{h,g}\>=\delta_{rK,hK}|\psi^{h,g}\>\] \[ f{\triangleright}_{s_1}|\psi^{h,g}\>=|\psi^{h,gf^{-1}}\>,\quad \delta_f{\triangleright}_{s_1}|\psi^{h,g}\>=\delta_{f,g^{-1}h^{-1}g}|\psi^{h,g}\>\] where we use the isomorphism $|\psi^{h,g}\>\mapsto \delta_hg$ to see the action at $s_0$ as a representation of $\Xi(R,K)$ on $D(G)$. In particular it is the restriction of the left regular representation of $D(G)$ to $\Xi(R,K)$, with inclusion map $i$ from Proposition~\ref{Xisub}. The action at $s_1$ is the right regular representation of $D(G)$, as in the bulk. 
\end{lemma} {\noindent {\bfseries Proof:}\quad } (1) is the same as the proof in \cite[Prop.3.10]{CowMa}, with the exception that if the ribbon $\xi'$ crosses the boundary multiple times it will incur an additional energy penalty from the Hamiltonian for each crossing, and thus $\hbox{{$\mathcal T$}}_{\xi'}(s_0,s_1) \neq \hbox{{$\mathcal T$}}_{\xi}(s_0,s_1)$ in general. (2) This follows by the commutation rules in Lemma~\ref{boundary_ribcom} and Lemma~\ref{ribcom} respectively, using \[x{\triangleright}^b_{s_0}{|{\rm vac}\>} = \delta_e{\triangleright}^b_{s_0}{|{\rm vac}\>} = {|{\rm vac}\>}; \quad f{\triangleright}_{s_1}{|{\rm vac}\>} = \delta_e{\triangleright}_{s_1}{|{\rm vac}\>} = {|{\rm vac}\>}\] $\forall x\in K, f \in G$. For the hardest case we have \begin{align*}\delta_r{\triangleright}^b_{s_0}F^{h,g}{|{\rm vac}\>} &= F_\xi^{h,g} \circ\delta_{s\cdot(y{\triangleright} r)}{\triangleright}^b_{s_0}{|{\rm vac}\>}\\ &= F_\xi^{h,g}\delta_{s\cdot(y{\triangleright} r)K,K}{|{\rm vac}\>}\\ &= F_\xi^{h,g}\delta_{rK,hK}{|{\rm vac}\>}. \end{align*} For the restriction of the action at $s_0$ to $\Xi(R,K)$, we have that \[\delta_r\cdot\delta_hg = \delta_{rK,hK}\delta_hg = \sum_{a\in rK}\delta_{a,h}\delta_hg=i(\delta_r)\delta_hg.\] and $x\cdot \delta_hg = x\delta_hg = i(x)\delta_hg$. \endproof In the bulk, the excitation space $\hbox{{$\mathcal L$}}(s_0,s_1)$ is totally independent of the ribbon $\xi$ \cite{Kit,CowMa}, but we do not know of a similar property for $\hbox{{$\mathcal T$}}_\xi(s_0,s_1)$ when interacting with the boundary without the restrictions stated. We explained in Section~\ref{sec:xi} how representations of $D(G)$ at sites in the bulk relate to those of $\Xi(R,K)$ in the boundary by functors in both directions. 
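The interpretation of the boundary site action as the restricted left regular representation can be verified concretely for $D(S_3)$. The sketch below is our own; it assumes the product convention $(\delta_g\mathop{{\otimes}} h)(\delta_f\mathop{{\otimes}} k)=\delta_{g,hfh^{-1}}\delta_g\mathop{{\otimes}} hk$ and checks the two formulas for the action at $s_0$ in Lemma~\ref{Ts0s1} under the identification $|\psi^{h,g}\>\mapsto\delta_h g$:

```python
from itertools import permutations

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0] * 3
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
G = [tuple(p) for p in permutations(range(3))]
K = [e, u]
R = [e, mul(u, v), mul(v, u)]

def factorise(g):
    # unique g = r k with r in R, k in K; raises if the factorisation fails
    [(r, k)] = [(r, k) for r in R for k in K if mul(r, k) == g]
    return r, k

# e.g. u |> (uv) = vu, as noted in the text
assert factorise(mul(u, mul(u, v)))[0] == mul(v, u)

def dmul(a, b):
    # product in D(G): (d_g @ h)(d_f @ k) = delta_{g,hfh^{-1}} d_g @ hk
    out = {}
    for (g, h), ca in a.items():
        for (f, k), cb in b.items():
            if g == mul(mul(h, f), inv(h)):
                key = (g, mul(h, k))
                out[key] = out.get(key, 0) + ca * cb
    return {key: c for key, c in out.items() if c}

def same_coset(a, b):
    return mul(inv(a), b) in K   # aK = bK

# delta_r acts through the inclusion as delta_{rK,hK} on delta_h @ g ...
for r in R:
    for h in G:
        for g in G:
            lhs = dmul({(mul(r, k), e): 1 for k in K}, {(h, g): 1})
            assert lhs == ({(h, g): 1} if same_coset(r, h) else {})

# ... and x in K sends delta_h @ g to delta_{xhx^{-1}} @ xg
for x in K:
    for h in G:
        for g in G:
            lhs = dmul({(a, x): 1 for a in G}, {(h, g): 1})
            assert lhs == {(mul(mul(x, h), inv(x)), mul(x, g)): 1}
print("boundary action at s0 = restricted left regular representation")
```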
Physically, if we apply ribbon trace operators, that is operators of the form $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$, to the vacuum, then in the bulk we create exactly a quasiparticle of type $({\hbox{{$\mathcal C$}}},\pi)$ and $({\hbox{{$\mathcal C$}}}^*,\pi^*)$ at either end. Now let us include a boundary. \begin{definition}Given an irrep of $D(G)$ provided by $({\hbox{{$\mathcal C$}}},\pi)$, we define the {\em boundary projection} $P_{i^*({\hbox{{$\mathcal C$}}},\pi)}\in \Xi(R,K)$ by \[ P_{i^*({\hbox{{$\mathcal C$}}},\pi)}=\sum_{(\hbox{{$\mathcal O$}},\rho)\ |\ n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}\ne 0} P_{(\hbox{{$\mathcal O$}},\rho)}\] i.e. we sum over the projectors of all the types of irreps of $\Xi(R,K)$ contained in the restriction of the given $D(G)$ irrep. \end{definition} It is clear that $P_{i^*({\hbox{{$\mathcal C$}}},\pi)}$ is a projection as a sum of orthogonal projections. \begin{proposition}\label{prop:boundary_traces} Let $\xi$ be an open ribbon extending from an external site $s_0$ on a smooth boundary with associated algebra $\Xi(R,K)$ to a site $s_1$ in the bulk, for example: \[\tikzfig{smooth_halfplane_ribbon}\] Then \[P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = 0\quad {\rm iff} \quad n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)} = 0.\] In addition, we have \[P_{i^*({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}^b_{s_0} W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} {\triangleleft}_{s_1} P_{({\hbox{{$\mathcal C$}}},\pi)},\] where we see the left action at $s_1$ of $P_{({\hbox{{$\mathcal C$}}}^*,\pi^*)}$ as a right action using the antipode. 
\end{proposition} {\noindent {\bfseries Proof:}\quad } Under the isomorphism in Lemma~\ref{Ts0s1} we have that $W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} \mapsto P_{({\hbox{{$\mathcal C$}}},\pi)} \in D(G)$. For the first part we therefore have \[P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} \mapsto i(P_{(\hbox{{$\mathcal O$}},\rho)}) P_{({\hbox{{$\mathcal C$}}},\pi)}\] so the result follows from the last part of Lemma~\ref{lemfrobn}. Since the sum of projectors over the irreps of $\Xi$ is 1, this then implies the second part: \[W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = \sum_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = P_{i^*({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>}.\] The action at $s_1$ is the same as for bulk ribbon operators. \endproof The physical interpretation is that application of a ribbon trace operator $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$ to a vacuum state creates a quasiparticle at $s_0$ of all the types contained in $i^*({\hbox{{$\mathcal C$}}},\pi)$, while still creating one of type $({\hbox{{$\mathcal C$}}}^*,\pi^*)$ at $s_1$; this is called the \textit{condensation} of $({{\hbox{{$\mathcal C$}}},\pi})$ at the boundary. While we used a smooth boundary in this example, the proposition applies equally to rough boundaries with the specified orientation in Remark~\ref{rem:rough_ribbon} by similar arguments. \begin{example}\rm In the bulk, we take the $D(S_3)$ model. Then by Example~\ref{exDS3}, we have exactly 8 irreps in the bulk. At the boundary, we take $K=\{e,u\} = \mathbb{Z}_2$ with $R = \{e,uv,vu\}$. 
As per the table in Example~\ref{exS3n} and Proposition~\ref{prop:boundary_traces} above, we then have for example that \[(P_{\hbox{{$\mathcal O$}}_0,-1}+P_{\hbox{{$\mathcal O$}}_1,1}){\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} {\triangleleft}_{s_1}P_{{\hbox{{$\mathcal C$}}}_1,-1}.\] We can see this explicitly. Recall that \[\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{s_0}{|{\rm vac}\>} = \Lambda_{\mathbb{C} K}{\triangleright}^b_{s_0}{|{\rm vac}\>} = {|{\rm vac}\>}.\] All other vertex and face actions give 0 by orthogonality. Then, \[P_{\hbox{{$\mathcal O$}}_0,-1} = {1\over 2}\delta_e \mathop{{\otimes}} (e-u); \quad P_{\hbox{{$\mathcal O$}}_1, 1} = (\delta_{uv} + \delta_{vu})\mathop{{\otimes}} e\] and \[W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1} = \sum_{c\in \{u,v,w\}}F_\xi^{c,e}-F_\xi^{c,c}\] by Lemmas~\ref{Xiproj} and \ref{lem:quasi_basis} respectively. For convenience, we break the calculation up into two parts, one for each projector. Throughout, we will make use of Lemma~\ref{boundary_ribcom} to commute projectors through ribbon operators. 
First, we have that \begin{align*} &P_{\hbox{{$\mathcal O$}}_0,-1}{\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = {1\over 2}(\delta_e \mathop{{\otimes}} (e - u)){\triangleright}^b_{s_0} \sum_{c\in \{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c}){|{\rm vac}\>}\\ &= {1\over 2}\delta_e{\triangleright}^b_{s_0}[\sum_{c\in\{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c})-(F_\xi^{u,u}-F_\xi^{e,u}+F_\xi^{v,u}-F_\xi^{v,uv}+F_\xi^{w,u}-F_\xi^{w,vu})]{|{\rm vac}\>}\\ &= {1\over 2}[(F_\xi^{u,e}-F_\xi^{u,u})\delta_e{\triangleright}^b_{s_0}+(F_\xi^{v,e}-F_\xi^{v,v})\delta_{vu}{\triangleright}^b_{s_0}+(F_\xi^{w,e}-F_\xi^{w,w})\delta_{uv}{\triangleright}^b_{s_0}\\ &+ (F^{u,e}_\xi-F^{u,u}_\xi)\delta_e{\triangleright}^b_{s_0} + (F^{v,uv}_\xi-F^{v,u}_\xi)\delta_{vu}{\triangleright}^b_{s_0} + (F^{w,vu}_\xi-F^{w,u}_\xi)\delta_{uv}{\triangleright}^b_{s_0}]{|{\rm vac}\>}\\ &= (F_\xi^{u,e}-F_\xi^{u,u}){|{\rm vac}\>} \end{align*} where we used the fact that $u = eu, v=vuu, w=uvu$ to factorise these elements in terms of $R,K$. Second, \begin{align*} P_{\hbox{{$\mathcal O$}}_1,1}{\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} &= ((\delta_{uv} + \delta_{vu})\mathop{{\otimes}} e){\triangleright}^b_{s_0}\sum_{c\in \{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c}){|{\rm vac}\>}\\ &= (F_\xi^{v,e}-F_\xi^{v,v}+F_\xi^{w,e}-F_\xi^{w,w})(\delta_e\mathop{{\otimes}} e){\triangleright}^b_{s_0}{|{\rm vac}\>}\\ &= (F_\xi^{v,e}-F_\xi^{v,v}+F_\xi^{w,e}-F_\xi^{w,w}){|{\rm vac}\>}. \end{align*} The result follows immediately. All other boundary projections of $D(S_3)$ ribbon trace operators can be worked out in a similar way. \end{example} \begin{remark}\rm Proposition~\ref{prop:boundary_traces} does not tell us exactly how \textit{all} ribbon operators in the quasiparticle basis are detected at the boundary, only the ribbon trace operators. We do not know a similar general formula for all ribbon operators. 
\end{remark} Now, consider a lattice in the plane with two boundaries, to the left and right, \[\tikzfig{smooth_twobounds}\] Recall that a lattice on an infinite plane admits a single ground state ${|{\rm vac}\>}$, as explained in \cite{CowMa}. However, in the present case, we may also be able to use ribbon operators in the quasiparticle basis extending from one boundary, at $s_0$ say, to the other, at $s_1$ say, such that no quasiparticles are detected at either end. These ribbon operators do not form a closed, contractible loop, as all undetectable ones do in the bulk; the corresponding states $|\psi\>$ are ground states and the vacuum has increased degeneracy. We can similarly induce additional degeneracy of excited states. This justifies the term \textit{gapped boundaries}, as the boundaries give rise to additional states with energies that are `gapped'; that is, they have a finite energy difference $\Delta$ (which may be zero) from the ground state, independently of the width of the lattice. \section{Patches}\label{sec:patches} For any nontrivial group $G$, there are always at least two distinct choices of boundary conditions, namely $K=\{e\}$ and $K=G$, for which we necessarily have $R=G$ and $R=\{e\}$ respectively. Considering $K=\{e\}$ on a smooth boundary, we can calculate that $A^b_1(v) = \mathrm{id}$ and $B^b_1(s)g = \delta_{e,g} g$, for $g$ the element on the single edge associated with the boundary site $s$. This means that after performing the measurements at a boundary, these edges are totally constrained and no longer part of the large entangled state incorporating the rest of the lattice, and hence do not contribute to the model whatsoever. If we remove these edges then we are left with a rough boundary, in which all edges participate, and therefore we may consider the $K=\{e\}$ case to imply a rough boundary.
A similar argument applies for $K=G$ when considered on a rough boundary, which has $A^b_2(v)g = A(v)g = {1\over |G|}\sum_k kg = {1\over |G|}\sum_k k$ for an edge with state $g$ and $B^b_2(s) = \mathrm{id}$. $K=G$ therefore naturally corresponds instead to a smooth boundary, as otherwise the outer edges are totally constrained by the projectors. From now on, we will accordingly use smooth to refer always to the $K=G$ condition, and rough for $K=\{e\}$. These boundary conditions are convenient because the condensation of bulk excitations to the vacuum at a boundary can be partially worked out in the group basis. For $K=\{e\}$, it is easy to see that the ribbon operators which are undetected at the boundary (and therefore leave the system in a vacuum state) are exactly those of the form $F_\xi^{e,g}$, for all $g\in G$, as any nontrivial $h$ in $F_\xi^{h,g}$ will be detected by the boundary face projectors. This can also be worked out representation-theoretically using Proposition~\ref{nformula}. \begin{lemma}\label{lem:rough_functor} Let $K=\{e\}$. Then the multiplicity of an irrep $({\hbox{{$\mathcal C$}}},\pi)$ of $D(G)$ with respect to the trivial representation of $\Xi(G,\{e\})$ is \[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)} = \delta_{{\hbox{{$\mathcal C$}}},\{e\}}{\rm dim}(W_\pi).\] \end{lemma} {\noindent {\bfseries Proof:}\quad } Applying Proposition~\ref{nformula} in the case where $V_i$ is trivial, we start with \[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G| \over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap \{e\}} |\{e\}^c| n_{1,\tilde\pi}\] where ${\hbox{{$\mathcal C$}}}\cap \{e\} = \{e\}$ iff ${\hbox{{$\mathcal C$}}}=\{e\}$, and is otherwise $\emptyset$. Also, $\tilde\pi = \oplus_{{\rm dim}(W_\pi)} (\{e\},1)$, and if ${\hbox{{$\mathcal C$}}} = \{e\}$ then $|G^{c_0}| = |G|$. \endproof The factor of ${\rm dim}(W_\pi)$ on the r.h.s. implies that there are no other terms in the decomposition of $i^*(\{e\},\pi)$.
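As a small numerical sanity check, for $G=S_3$ one can tabulate the bulk irreps and confirm that the multiplicity formula is dimensionally consistent. The following pure-Python sketch uses only the standard class/centralizer data for $S_3$; the final count, that the multiplicities weight the irrep dimensions to $|G|=|G|^2/{\rm dim}\,\Xi(G,\{e\})$, is our own bookkeeping rather than a claim from the text:

```python
# Bulk irreps of D(S_3): labelled by (conjugacy class C, irrep pi of
# the centralizer of a chosen c0 in C), with dim = |C| * dim(pi).
s3_data = [
    (1, [1, 1, 2]),  # C = {e}:        centralizer S_3, irrep dims 1, 1, 2
    (2, [1, 1, 1]),  # C = {uv, vu}:   centralizer Z_3
    (3, [1, 1]),     # C = {u, v, w}:  centralizer Z_2
]
dims = [size * d for size, irrep_dims in s3_data for d in irrep_dims]
assert len(dims) == 8                  # 8 bulk irreps, as in Example exDS3
assert sum(d * d for d in dims) == 36  # = |G|^2 = dim D(S_3)

# Lemma: n = delta_{C,{e}} dim(W_pi) against the trivial Xi(G,{e})-irrep.
# Weighting each irrep dimension by its multiplicity n therefore gives
# sum over pi of dim(pi)^2 = |G| = 6, the expected dimension
# |G|^2 / dim Xi(G,{e}) of the induced representation.
weighted = sum(d * d for d in s3_data[0][1])  # only C = {e} contributes
assert weighted == 6
```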
In physical terms, this means that the trace ribbon operators $W^{e,\pi}_\xi$ are the only undetectable trace ribbon operators, and any ribbon operators which do not lie in the block associated to $(e,\pi)$ are detectable. In fact, in this case we have a further property: all ribbon operators in the chargeon sector are undetectable, as by equation~(\ref{chargeon_ribbons}) chargeon sector ribbon operators are Fourier isomorphic to those of the form $F_\xi^{e,g}$ in the group basis. In the more general case of a rough boundary for an arbitrary choice of $\Xi(R,K)$, the orientation of the ribbon is important for using the representation-theoretic argument. When $K=\{e\}$, for $F^{e,g}_\xi$ one can check that regardless of orientation the rough boundary version of Proposition~\ref{Ts0s1} applies. The $K=G$ case is slightly more complicated: \begin{lemma}\label{lem:smooth_functor} Let $K=G$. Then the multiplicity of an irrep $({\hbox{{$\mathcal C$}}},\pi)$ of $D(G)$ with respect to the trivial representation of $\Xi(\{e\},G)$ is \[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)} = \delta_{\pi,1}.\] \end{lemma} {\noindent {\bfseries Proof:}\quad } We start with \[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={1 \over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}} |G^c| n_{1,\tilde\pi}.\] Now, $K^{r,c} = G^c$ and so $\tilde\pi = \pi$, giving $n_{1,\tilde\pi} = \delta_{1,\pi}$. Then $\sum_{c\in{\hbox{{$\mathcal C$}}}}|G^c| = |{\hbox{{$\mathcal C$}}}||G^{c_0}|$. \endproof This means that the only undetectable ribbon operators between smooth boundaries are those in the fluxion sector, i.e. those with associated irrep $({\hbox{{$\mathcal C$}}}, 1)$. However, there is no factor of $|{\hbox{{$\mathcal C$}}}|$ on the r.h.s. and so the decomposition of $i^*({\hbox{{$\mathcal C$}}},1)$ will generally have additional terms other than just $(\{e\},1)$ in ${}_{\Xi(\{e\},G)}\hbox{{$\mathcal M$}}$.
As a consequence, a fluxion trace ribbon operator $W^{{\hbox{{$\mathcal C$}}},1}_\zeta$ between smooth boundaries is undetectable iff its associated conjugacy class is a singleton, say ${\hbox{{$\mathcal C$}}}= \{c_0\}$, and thus $c_0 \in Z(G)$, the centre of $G$. \begin{definition}\rm A \textit{patch} is a finite rectangular lattice segment with two opposite smooth sides, each equipped with boundary conditions $K=G$, and two opposite rough sides, each equipped with boundary conditions $K=\{e\}$, for example: \[\tikzfig{patch}\] \end{definition} One can alternatively define patches with additional sides, such as in \cite{Lit1}, or with other boundary conditions which depend on another subgroup $K$ and transversal $R$, but we find this definition convenient. Note that our definition does not put conditions on the size of the lattice; the above diagram is just a conveniently small and yet nontrivial example. We would like to characterise the vacuum space $\hbox{{$\mathcal H$}}_{\rm vac}$ of the patch. To do this, let us begin with $|{\rm vac}_1\>$ from equation~(\ref{eq:vac1}), and denote $|e\>_L := |{\rm vac}_1\>$. This is the \textit{logical zero state} of the patch. We will use this as a reference state to calculate other states in $\hbox{{$\mathcal H$}}_{\rm vac}$. Now, for any other state $|\psi\>$ in $\hbox{{$\mathcal H$}}_{\rm vac}$, there must exist some linear map $D \in {\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$ such that $D|e\>_L = |\psi\>$, and thus if we can characterise the algebra of linear maps ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$, we automatically characterise $\hbox{{$\mathcal H$}}_{\rm vac}$. To help with this, we have the following useful property: \begin{lemma}\label{lem:rib_move} Let $F_\xi^{e,g}$ be a ribbon operator for some $g\in G$, with $\xi$ extending from the top rough boundary to the bottom rough boundary.
Then the endpoints of $\xi$ may be moved along the rough boundaries with $K=\{e\}$ boundary conditions while leaving the action invariant on any vacuum state. \end{lemma} {\noindent {\bfseries Proof:}\quad } We explain this on an example patch with initial state $|\psi\> \in \hbox{{$\mathcal H$}}_{\rm vac}$ and a ribbon $\xi$, \[\tikzfig{bigger_patch}\] \[\tikzfig{bigger_patch2}\] using the fact that $a = cb$ and $m = lk$ by the definition of $\hbox{{$\mathcal H$}}_{\rm vac}$ for the second equality. Thus, we see that the ribbon through the bulk may be deformed as usual. As the only new component of the proof concerned the endpoints, we see that this property holds regardless of the size of the patch. \endproof One can calculate in particular that $F_\xi^{e,g}|e\>_L = \delta_{e,g}|e\>_L$, which we will prove more generally later. The undetectable ribbon operators between the smooth boundaries are of the form \[W^{{\hbox{{$\mathcal C$}}},1}_\zeta = \sum_{n\in G} F_\zeta^{c_0,n}\] when ${\hbox{{$\mathcal C$}}} = \{c_0\}$ by Lemma~\ref{lem:smooth_functor}, hence $G^{c_0} = G$. Technically, this lemma only tells us the ribbon trace operators which are undetectable, but in the present case none of the individual component operators are undetectable, only the trace operators. There are thus exactly $|Z(G)|$ orthogonal undetectable ribbon operators between smooth boundaries. These do not play an important role, but we describe them to characterise the operator algebra on $\hbox{{$\mathcal H$}}_{\rm vac}$. They obey a rule similar to Lemma~\ref{lem:rib_move}, which one can check in the same way. In addition to the ribbon operators between sides, we also have undetectable ribbon operators between corners on the lattice.
These corners connect smooth and rough boundaries, and thus careful application of specific ribbon operators can avoid detection from either face or vertex measurements, \[\tikzfig{corner_ribbons}\] where one can check that these do indeed leave the system in a vacuum using familiar arguments about $B(p)$ and $A(v)$. We could equally define such operators extending from either left corner to either right corner, and they obey the discrete isotopy laws as in the bulk. If we apply $F_\xi^{h,g}$ for any $g\neq e$ then we have $F_\xi^{h,g}|\psi\> =0$ for any $|\psi\>\in \hbox{{$\mathcal H$}}_{\rm vac}$, and so these are the only ribbon operators of this form. \begin{remark}\rm Corners of boundaries are algebraically interesting themselves, and can be used for quantum computation, but for brevity we skim over them. See e.g. \cite{Bom2,Brown} for details. \end{remark} These corner-to-corner, left-to-right and top-to-bottom ribbon operators span ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$, the linear maps which leave the system in vacuum. Due to Lemma~\ref{lem:ribs_only}, all other linear maps must decompose into ribbon operators, and these are the only ribbon operators in ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$ up to linear combinations. As a consequence, we have well-defined patch states $|h\>_L := \sum_gF^{h,g}_\xi|e\>_L$ for each $h\in G$, where $\xi$ is any ribbon extending from the bottom left corner to the bottom right corner. Now, working explicitly on the small patch below, we have \[\tikzfig{wee_patch}\] to start with, then: \[\tikzfig{wee_patch2}\] It is easy to see that we may always write $|h\>_L$ in this manner, for an arbitrary size of patch. Now, the ribbon operators which are undetectable when $\xi$ extends from bottom to top are those of the form $F_\xi^{e,g}$, for example \[\tikzfig{wee_patch3}\] and so $F_\xi^{e,g}|h\>_L = \delta_{g,h}|h\>_L$, where again if we take a larger patch all additional terms will clearly cancel.
Lastly, undetectable ribbon operators for a ribbon $\zeta$ extending from left to right are exactly those of the form $\sum_{n\in G} F_\zeta^{c_0,n}$ for any $c_0 \in Z(G)$. One can check that $|c_0 h\>_L = \sum_{n\in G} F_\zeta^{c_0,n} |h\>_L$, and thus these give us no new states in $\hbox{{$\mathcal H$}}_{\rm vac}$. \begin{lemma}\label{lem:patch_dimension} For a patch with the $D(G)$ model in the bulk, ${\rm dim}(\hbox{{$\mathcal H$}}_{\rm vac}) = |G|$. \end{lemma} {\noindent {\bfseries Proof:}\quad } By the above characterisation of undetectable ribbon operators, the states $\{|h\>_L\}_{h\in G}$ span $\hbox{{$\mathcal H$}}_{\rm vac}$. The result then follows from the adjointness of ribbon operators, which means that the states $\{|h\>_L\}_{h\in G}$ are orthogonal. \endproof We can also work out that for $|{\rm vac}_2\>$ from equation~(\ref{eq:vac2}), we have $|{\rm vac}_2\> = \sum_h |h\>_L$. More generally: \begin{corollary}\label{cor:matrix_basis} $\hbox{{$\mathcal H$}}_{\rm vac}$ has an alternative basis with states $|\pi;i,j\>_L$, where $\pi$ is an irreducible representation of $G$ and $i,j$ are indices such that $0\leq i,j<{\rm dim}(V_\pi)$. We call this the quasiparticle basis of the patch. \end{corollary} {\noindent {\bfseries Proof:}\quad } First, use the nonabelian Fourier transform on the ribbon operators $F_\xi^{e,g}$, so we have $F_\xi^{'e,\pi;i,j} = \sum_{n\in G}\pi(n^{-1})_{ji}F_\xi^{e,n}$. If we start from the reference state $|1;0,0\>_L := \sum_h |h\>_L = |{\rm vac}_2\>$ and apply these operators with $\xi$ from bottom to top of the patch then we get \[|\pi;i,j\>_L = F_\xi^{'e,\pi;i,j}|1;0,0\>_L = \sum_{n\in G}\pi(n^{-1})_{ji} |n\>_L,\] which are orthogonal. Now, as $\sum_{\pi\in \hat{G}}{\rm dim}(V_\pi)^2 = |G|$ and we know ${\rm dim}(\hbox{{$\mathcal H$}}_{\rm vac}) = |G|$ by Lemma~\ref{lem:patch_dimension}, $\{|\pi;i,j\>_L\}_{\pi,i,j}$ forms a basis of $\hbox{{$\mathcal H$}}_{\rm vac}$.
\endproof \begin{remark}\rm Kitaev models are designed in general to be fault tolerant. The minimum number of component Hilbert spaces, that is, copies of $\mathbb{C} G$ on edges, for which simultaneous errors will undetectably change the logical state and cause errors in the computation is called the `code distance' $d$ in the language of quantum codes. For the standard method of computation using nonabelian anyons \cite{Kit}, data is encoded using excited states, which are states with nontrivial quasiparticles at certain sites. The code distance can then be extremely small, and constant in the size of the lattice, as the smallest errors need only take the form of ribbon operators winding round a single quasiparticle at a site. This is no longer the case when encoding data in vacuum states on patches, as the only logical operators are specific ribbon operators extending from top to bottom, left to right or corner to corner. The code distance, and hence error resilience, of any vacuum state of the patch therefore increases linearly with the width of the patch as it is scaled, and so with the square root of the number $n$ of component Hilbert spaces in the patch; that is, $n\sim d^2$. \end{remark} \subsection{Nonabelian lattice surgery}\label{sec:surgery} Lattice surgery was invented as a method of fault-tolerant computation with the qubit, i.e. $\mathbb{C}\mathbb{Z}_2$, surface code \cite{HFDM}. The first author generalised it to qudit models using $\mathbb{C}\mathbb{Z}_d$ in \cite{Cow2}, and gave a fresh perspective on lattice surgery as `simulating' the Hopf algebras $\mathbb{C}\mathbb{Z}_d$ and $\mathbb{C}(\mathbb{Z}_d)$ on the logical space $\hbox{{$\mathcal H$}}_{\rm vac}$ of a patch. In this section, we prove that lattice surgery generalises to arbitrary finite group models, and `simulates' $\mathbb{C} G$ and $\mathbb{C}(G)$ in a similar way. Throughout, we assume that the projectors $A(v)$ and $B(p)$ may be performed deterministically for simplicity.
In Appendix~\ref{app:measurements} we discuss the added complication that in practice we may only perform measurements which yield projections nondeterministically. \begin{remark}\rm When proving the linear maps that nonabelian lattice surgeries yield, we will use specific examples, but the arguments clearly hold generally. For convenience, we will also tend to omit normalising scalar factors, which do not impact the calculations as the maps are $\mathbb{C}$-linear. \end{remark} Let us begin with a large rectangular patch. We now remove a line of edges from left to right by projecting each one onto $e$: \[\tikzfig{split2}\] We call this a \textit{rough split}, as we create two new rough boundaries. We no longer apply $A(v)$ to the vertices which have had attached edges removed. If we start with a small patch in the state $|l\>_L$ for some $l\in G$ then we can explicitly calculate the linear map. \[\tikzfig{rough_split_project}\] where we have separated the two patches afterwards for clarity, showing that they have two separate vacuum spaces. We then have that the last expression is \[\tikzfig{rough_split_project2}\] Observe the factors of $g$ in particular. The state is therefore now $\sum_g |g^{-1}\>_L\otimes |gl\>_L$, where the l.h.s. of the tensor product is the Hilbert space corresponding to the top patch, and the r.h.s. to the bottom. A change of variables gives $\sum_g |g\>_L\otimes |g^{-1}l\>_L$, the outcome of comultiplication of $\mathbb{C}(G)$ on the logical state $|l\>_L$ of the original patch. Similarly, we can measure out a line of edges from bottom to top, for example \[\tikzfig{split1}\] We call this a \textit{smooth split}, as we create two new smooth boundaries. Each deleted edge is projected into the state ${1\over|G|}\sum_g g$. We also cease measurement of the faces which have had edges removed, and so we end up with two adjacent but disjoint patches. 
Working on a small example, we start with $|e\>_L$: \[\tikzfig{smooth_split_project}\] where in the last step we have taken $b\mapsto jc$, $g\mapsto kh$ from the $\delta$-functions and then a change of variables $j\mapsto jc^{-1}$, $k\mapsto kh^{-1}$ in the summation. Thus, we have ended with two disjoint patches, each in state $|e\>_L$. One can see that this works for any $|h\>_L$ in exactly the same way, and so the smooth split linear map is $|h\>_L \mapsto |h\>_L\otimes|h\>_L$, the comultiplication of $\mathbb{C} G$. The opposite of a split is a merge, whereby we take two disjoint patches and introduce edges to bring them together into a single patch. For the rough merge below, say we start with the basis states $|k\>_L$ and $|j\>_L$ on the bottom and top. First, we introduce an additional joining edge in the state $e$. \[\tikzfig{merge1}\] This state $|\psi\>$ automatically satisfies $B(p)|\psi\> = |\psi\>$ everywhere. But it does not satisfy the conditions on vertices, so we apply $A(v)$ to the two vertices adjacent to the newest edge. Then we have the last expression \[\tikzfig{rough_merge_project}\] which by performing repeated changes of variables yields \[\tikzfig{rough_merge_project2}\] Thus the rough merge yields the map $|j\>_L\otimes|k\>_L\mapsto|jk\>_L$, the multiplication of $\mathbb{C} G$, where again the tensor factors are in order from top to bottom. Similarly, we perform a smooth merge with the states $|j\>_L, |k\>_L$ as \[\tikzfig{merg2}\] We introduce a pair of edges connecting the two patches, each in the state $\sum_m m$.
\[\tikzfig{smooth_merge_project}\] The resultant patch automatically satisfies the conditions relating to $A(v)$, but we must apply $B(p)$ to the freshly created faces to acquire a state in $\hbox{{$\mathcal H$}}_{\rm vac}$, giving \[\tikzfig{smooth_merge_project2}\] where the $B(p)$ applications introduced the $\delta$-functions \[\delta_{e}(bf^{-1}m^{-1}),\quad \delta_{e}(dh^{-1}n^{-1}),\quad \delta_e(dj^{-1}b^{-1}bf^{-1}fkh^{-1}hd^{-1}) = \delta_e(j^{-1}k).\] In summary, the linear map on logical states is evidently $|j\>_L\otimes |k\>_L \mapsto \delta_{j,k}|j\>_L$, the multiplication of $\mathbb{C}(G)$. The units of $\mathbb{C} G$ and $\mathbb{C}(G)$ are given by the states $|e\>_L$ and $|1;0,0\>_L$ respectively. The counits are given by the maps $|g\>_L \mapsto 1$ and $|g\>_L\mapsto \delta_{g,e}$ respectively. The logical antipode $S_L$ is given by applying the antipode to each edge individually, i.e. inverting all group elements. For example: \[\tikzfig{antipode_1A}\] This state is now no longer in the original $\hbox{{$\mathcal H$}}_{\rm vac}$, so to compensate we must modify the lattice. We flip all arrows in the lattice -- this is only a conceptual flip, and does not require any physical modification: \[\tikzfig{antipode_1B}\] This amounts to exchanging left and right regular representations, and redefining the Hamiltonian accordingly. In the resultant new vacuum space, the state is now $|g^{-1}\>_L = F_\xi^{e,g^{-1}}|e\>_L$, with $\xi$ running from the bottom left corner to bottom right as previously. \begin{remark}\rm This trick of redefining the vacuum space is employed in \cite{HFDM} to perform logical Hadamards, although in their case the lattice is rotated by $\pi/2$, and the edges are directionless as the model is restricted to $\mathbb{C}\mathbb{Z}_2$. \end{remark} Thus, we have all the ingredients of the Hopf algebras $\mathbb{C} G$ and $\mathbb{C}(G)$ on the same vector space $\hbox{{$\mathcal H$}}_{\rm vac}$. 
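The logical effect of the four surgeries can be summarised in a small simulation. The following Python sketch is our own illustration at the logical level, not the lattice-level calculation: $G=S_3$ is realised as permutations, logical states as coefficient dictionaries, and normalisations are omitted as above.

```python
from itertools import permutations

# The four surgery maps on C[G] for G = S_3. States are dictionaries
# {group element: coefficient}; tensor-product states use pairs as keys.
G = list(permutations(range(3)))
def mul(g, h):                      # composition (g h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))
def inv(g):
    return tuple(sorted(range(3), key=lambda i: g[i]))

def rough_split(state):             # |l>  ->  sum_g |g> (x) |g^{-1} l>
    return {(g, mul(inv(g), l)): c for l, c in state.items() for g in G}

def smooth_split(state):            # |h>  ->  |h> (x) |h>
    return {(h, h): c for h, c in state.items()}

def rough_merge(state):             # |j> (x) |k>  ->  |j k>
    out = {}
    for (j, k), c in state.items():
        out[mul(j, k)] = out.get(mul(j, k), 0) + c
    return out

def smooth_merge(state):            # |j> (x) |k>  ->  delta_{j,k} |j>
    out = {}
    for (j, k), c in state.items():
        if j == k:
            out[j] = out.get(j, 0) + c
    return out

l = (1, 2, 0)                       # a 3-cycle in S_3
assert smooth_merge(smooth_split({l: 1})) == {l: 1}
assert rough_merge(rough_split({l: 1})) == {l: 6}
```

The final assertions reflect that a smooth merge exactly undoes a smooth split, while a rough merge after a rough split returns $|G|$ copies of the input, consistent with the unnormalised multiplications and comultiplications above.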
For applications, one would like to know which quantum computations can be performed using these algebras (ignoring the subtlety with nondeterministic projectors). Recall that a quantum computer is called approximately universal if for any target unitary $U$ and desired accuracy ${\epsilon}\in\mathbb{R}$, the computer can perform a unitary $V$ such that $||V-U||\leq{\epsilon}$, i.e. the operator norm error of $V$ from $U$ is no greater than ${\epsilon}$. We believe that when the computer is equipped with just the states $\{|h\>_L\}_{h\in G}$ and the maps from lattice surgery above then one cannot achieve approximately universal computation \cite{Stef}, but leave the proof to a further paper. If we also have access to all matrix algebra states $|\pi;i,j\>_L$ as defined in Corollary~\ref{cor:matrix_basis}, we do not know whether the model of computation is then universal for some choice of $G$, and we do not know whether these states can be prepared efficiently. In fact, how these states are defined depends on a choice of basis for each irrep, so whether the model is universal may depend not only on the choice of $G$ but also on the choices of basis. This is an interesting question for future work. \section{Quasi-Hopf algebra structure of $\Xi(R,K)$}\label{sec:quasi} We now return to our boundary algebra $\Xi$. It is known that $\Xi$ has a great deal more structure, which we give more explicitly in this section than we have seen elsewhere. This structure generalises a well-known bicrossproduct Hopf algebra construction for the case when a finite group $G$ factorises as $G=RK$ into two subgroups $R,K$.
Then each acts on the set of the other to form a {\em matched pair of actions} ${\triangleright},{\triangleleft}$ and we use ${\triangleright}$ to make a cross product algebra $\mathbb{C} K{\triangleright\!\!\!<} \mathbb{C}(R)$ (which has the same form as our algebra $\Xi$ except that we have chosen to flip the tensor factors) and ${\triangleleft}$ to make a cross product coalgebra $\mathbb{C} K{>\!\!\blacktriangleleft} \mathbb{C}(R)$. These fit together to form a bicrossproduct Hopf algebra $\mathbb{C} K{\triangleright\!\blacktriangleleft} \mathbb{C}(R)$. This construction has been used in the Lie group version to construct quantum Poincar\'e groups for quantum spacetimes \cite{Ma:book}. The more general case, where we are just given a subgroup $K\subseteq G$ and a choice of transversal $R$ with the group identity $e\in R$, was considered in \cite{Be}. As we noted, we still have unique factorisation $G=RK$ but in general $R$ need not be a group. We can still follow the same steps. First of all, unique factorisation entails that $R\cap K=\{e\}$.
It also implies maps \[{\triangleright} : K\times R \rightarrow R,\quad {\triangleleft}: K\times R\rightarrow K,\quad \cdot : R\times R \rightarrow R,\quad \tau: R \times R \rightarrow K\] defined by \[xr = (x{\triangleright} r)(x{\triangleleft} r),\quad rs = r\cdot s \tau(r,s)\] for all $x\in K$ and $r,s\in R$, but this time these inherit the properties \begin{align} (xy) {\triangleright} r &= x {\triangleright} (y {\triangleright} r), \quad e {\triangleright} r = r,\nonumber\\ \label{lax} x {\triangleright} (r\cdot s)&=(x {\triangleright} r)\cdot((x{\triangleleft} r){\triangleright} s),\quad x {\triangleright} e = e,\end{align} \begin{align} (x{\triangleleft} r){\triangleleft} s &= \tau(x{\triangleright} r, (x{\triangleleft} r){\triangleright} s)^{-1}(x{\triangleleft} (r\cdot s))\tau(r,s),\quad x {\triangleleft} e = x,\nonumber\\ \label{rax} (xy) {\triangleleft} r &= (x{\triangleleft} (y{\triangleright} r))(y{\triangleleft} r),\quad e{\triangleleft} r = e,\end{align} \begin{align} \tau(r, s\cdot t)\tau(s,t)& = \tau\left(r\cdot s,\tau(r,s){\triangleright} t\right)(\tau(r,s){\triangleleft} t),\quad \tau(e,r) = \tau(r,e) = e,\nonumber\\ \label{tax} r\cdot(s\cdot t) &= (r\cdot s)\cdot(\tau(r,s){\triangleright} t),\quad r\cdot e=e\cdot r=r\end{align} for all $x,y\in K$ and $r,s,t\in R$. We see from (\ref{lax}) that ${\triangleright}$ is indeed an action (we have been using it in preceding sections) but ${\triangleleft}$ in (\ref{rax}) is an action only up to $\tau$ (termed in \cite{KM2} a `quasiaction'). Both ${\triangleright},{\triangleleft}$ `act' almost by automorphisms but with a back-reaction by the other just as for a matched pair of groups. Meanwhile, we see from (\ref{tax}) that $\cdot$ is associative only up to $\tau$, and $\tau$ itself obeys a kind of cocycle condition.
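For a concrete feel for this data, ${\triangleright}$, ${\triangleleft}$ and $\tau$ can be computed mechanically from the unique factorisation $G=RK$. The following Python sketch does this for $G=S_3$, $K=\{e,u\}$ and the transversal $R=\{e,w,v\}$ (the permutation realisation $u=(0\,1)$, $v=(1\,2)$ with composition $gh=g\circ h$ is our own convention), and checks (\ref{lax}) and (\ref{tax}) by brute force:

```python
# Matched-pair data from the unique factorisation G = RK, for G = S_3,
# K = {e, u}, and the non-subgroup transversal R = {e, w, v}.
e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
def mul(a, b): return tuple(a[b[i]] for i in range(3))  # g h = g o h
w = mul(mul(u, v), u)                  # w = uvu, the third reflection
K, R = [e, u], [e, w, v]

def factorise(g):                      # unique g = r x with r in R, x in K
    [(r, x)] = [(r, x) for r in R for x in K if mul(r, x) == g]
    return r, x

def lact(x, r): return factorise(mul(x, r))[0]   # x |> r
def ract(x, r): return factorise(mul(x, r))[1]   # x <| r
def dot(r, s):  return factorise(mul(r, s))[0]   # r . s
def tau(r, s):  return factorise(mul(r, s))[1]   # tau(r, s)

# the values quoted for this transversal in Example exS3R:
assert tau(v, w) == u and tau(w, v) == u
assert lact(u, v) == w and lact(u, w) == v
assert all(ract(x, r) == x for x in K for r in R)  # <| is trivial here

# brute-force check of the identities (lax) and (tax):
for x in K:
    for r in R:
        for s in R:
            assert lact(x, dot(r, s)) == dot(lact(x, r), lact(ract(x, r), s))
for r in R:
    for s in R:
        for t in R:
            assert dot(r, dot(s, t)) == dot(dot(r, s), lact(tau(r, s), t))
```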
Clearly, $R$ is a subgroup via $\cdot$ if and only if $\tau(r,s)=e$ for all $r,s$, and in this case we already see that $\Xi(R,K)$ is a bicrossproduct Hopf algebra, with the only difference being that we prefer to build it on the flipped tensor factors. More generally, \cite{Be} showed that there is still a natural monoidal category associated to this data but with nontrivial associators. By Tannaka-Krein reconstruction, this corresponds to $\Xi$ being a quasi-bialgebra, which in some cases is a quasi-Hopf algebra \cite{Nat}. Here we will give these latter structures explicitly and in maximum generality compared to the literature (but still needing a restriction on $R$ for the antipode to be in a regular form). We will also show that the obvious $*$-algebra structure makes $\Xi$ a $*$-quasi-Hopf algebra in an appropriate sense under restrictions on $R$. These aspects are new, but more importantly, we give direct proofs at an algebraic level rather than categorical arguments, which we believe are essential for detailed calculations. Related works on similar algebras and coset decompositions include \cite{PS,KM1} in addition to \cite{Be,Nat,KM2}. \begin{lemma}\cite{Be,Nat,KM2} $(R,\cdot)$ has the same unique identity $e$ as $G$ and has the left division property, i.e. for all $t, s\in R$, there is a unique solution $r\in R$ to the equation $s\cdot r = t$ (one writes $r = s\backslash t$). In particular, we let $r^R$ denote the unique solution to $r\cdot r^R=e$, which we call a right inverse.\end{lemma} This means that $(R,\cdot)$ is a left loop (a left quasigroup with identity). The multiplication table for $(R,\cdot)$ has one of each element of $R$ in each row, which is the left division property. In particular, there is one instance of $e$ in each row. One can recover $G$ knowing $(R,\cdot)$, $K$ and the data ${\triangleright},{\triangleleft},\tau$ \cite[Prop.~3.4]{KM2}. Note that a parallel property of left inverse $(\ )^L$ need not be present.
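The left division property and right inverses are likewise easy to check by machine. A Python sketch for the transversal $R=\{e,uv,v\}$ of $G=S_3$, $K=\{e,u\}$ (again realising $u=(0\,1)$, $v=(1\,2)$ as permutations, our own convention); it also illustrates that the right-inverse map $(\ )^R$ need not be injective and that a left inverse can indeed fail to exist:

```python
# Left division and right inverses for the transversal R = {e, uv, v}:
# G = S_3 as permutations of {0,1,2}, K = {e, u}.
e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
def mul(a, b): return tuple(a[b[i]] for i in range(3))  # g h = g o h
uv = mul(u, v)
K, R = [e, u], [e, uv, v]

def factorise(g):                  # unique g = r x with r in R, x in K
    [(r, x)] = [(r, x) for r in R for x in K if mul(r, x) == g]
    return r, x

def dot(r, s): return factorise(mul(r, s))[0]   # r . s

# (R, .) is a left loop: each row of its multiplication table is a
# permutation of R (left division), with two-sided identity e.
for s in R:
    assert sorted(dot(s, r) for r in R) == sorted(R)
    assert dot(e, s) == s and dot(s, e) == s

# Right inverses exist but ( )^R is not injective: v^R = (uv)^R = v ...
rinv = {r: next(t for t in R if dot(r, t) == e) for r in R}
assert rinv[v] == v and rinv[uv] == v
# ... and uv has no left inverse at all: no s in R solves s . uv = e.
assert all(dot(s, uv) != e for s in R)
```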
\begin{definition} We say that $R$ is {\em regular} if $(\ )^R$ is bijective. \end{definition} $R$ is regular iff it has both left and right inverses, and this is iff it satisfies $RK=KR$ by \cite[Prop.~3.5]{KM2}. If there is also right division then we have a loop (a quasigroup with identity) and under further conditions \cite[Prop.~3.6]{KM2} we have $r^L=r^R$ and a 2-sided inverse property quasigroup. The case of regular $R$ is studied in \cite{Nat} but this excludes some interesting choices of $R$ and we do not always assume it. Throughout, we will specify when $R$ is required to be regular for results to hold. Finally, if $R$ obeys a further condition $x{\triangleright}(s\cdot t)=(x{\triangleright} s){\triangleright} t$ in \cite{KM2} then $\Xi$ is a Hopf quasigroup in the sense introduced in \cite{KM1}. This is even more restrictive but will apply to our octonions-related example. Here we just give the choices for our go-to cases for $S_3$. \begin{example}\label{exS3R}\rm $G=S_3$ with $K=\{e,u\}$ has four choices of transversal $R$ meeting our requirement that $e\in R$. Namely \begin{enumerate} \item $R=\{e,uv,vu\}$ (our standard choice) {\em is a subgroup} $R=\mathbb{Z}_3$, so it is associative and there is 2-sided division and a 2-sided inverse. We also have $u{\triangleright}(uv)=vu, u{\triangleright} (vu)=uv$ but ${\triangleleft},\tau$ trivial. \item $R=\{e,w,v\}$ which is {\em not a subgroup} and indeed $\tau(v,w)=\tau(w,v)=u$ (and all others are necessarily $e$). There is an action $u{\triangleright} v=w, u{\triangleright} w=v$ but ${\triangleleft}$ is still trivial. For example, \begin{align*} vw&=wu \Rightarrow v\cdot w=w,\ \tau(v,w)=u;\quad wv=vu \Rightarrow w\cdot v=v,\ \tau(w,v)=u\\ uv&=wu \Rightarrow u{\triangleright} v=w,\ u{\triangleleft} v=u;\quad uw=vu \Rightarrow u{\triangleright} w=v,\ u{\triangleleft} w=u.
\end{align*} This has left division/right inverses as it must but {\em not right division}, as $e\cdot w=v\cdot w=w$ and $e\cdot v=w\cdot v=v$. We also have $v\cdot v=w\cdot w=e$ and $(\ )^R$ is bijective, so this {\em is regular}. \item $R=\{e,uv, v\}$ which is {\em not a subgroup} and $\tau,{\triangleright},{\triangleleft}$ are all nontrivial with \begin{align*} \tau(uv,uv)&=\tau(v,uv)=\tau(uv,v)=u,\quad \tau(v,v)=e,\\ v\cdot v&=e,\quad v\cdot uv=uv,\quad uv\cdot v=e,\quad uv\cdot uv=v,\\ u{\triangleright} v&=uv,\quad u{\triangleright} (uv)=v,\quad u{\triangleleft} v=e,\quad u{\triangleleft} uv=e\end{align*} and all other cases determined from the properties of $e$. Here $v^R=v$ and $(uv)^R=v$ so this is {\em not regular}. \item $R=\{e,w,vu\}$ which is analogous to the preceding case, so {\em not a subgroup}, $\tau,{\triangleright},{\triangleleft}$ all nontrivial and {\em not regular}. \end{enumerate} \end{example} We will also need the following useful lemma in some of our proofs. \begin{lemma}\label{leminv}\cite{KM2} For any transversal $R$ with $e\in R$, we have \begin{enumerate} \item $(x{\triangleleft} r)^{-1}=x^{-1}{\triangleleft}(x{\triangleright} r)$. \item $(x{\triangleright} r)^R=(x{\triangleleft} r){\triangleright} r^R$. \item $\tau(r,r^R)^{-1}{\triangleleft} r=\tau(r^R,r^{RR})^{-1}$. \item $\tau(r,r^R)^{-1}{\triangleright} r=r^{RR}$. \end{enumerate} for all $x\in K, r\in R$. \end{lemma} {\noindent {\bfseries Proof:}\quad } The first two items are elementary from the matched pair axioms. For (1), we use $e=(x^{-1}x){\triangleleft} r=(x^{-1}{\triangleleft}(x{\triangleright} r))(x{\triangleleft} r)$ and for (2), $e=x{\triangleright}(r\cdot r^R)=(x{\triangleright} r)\cdot((x{\triangleleft} r){\triangleright} r^R)$. The other two items are a left-right reversal of \cite[Lem.~3.2]{KM2} but given here for completeness.
For (3), \begin{align*} e&=(\tau(r,r^R)\tau(r,r^R)^{-1}){\triangleleft} r=(\tau(r,r^R){\triangleleft} (\tau(r,r^R)^{-1}{\triangleright} r))(\tau(r,r^R)^{-1}{\triangleleft} r)\\ &=(\tau(r,r^R){\triangleleft} r^{RR})(\tau(r,r^R)^{-1}{\triangleleft} r)\end{align*} which we combine with \[ \tau(r^R,r^{RR})=\tau(r\cdot r^R,r^{RR})\tau(r^R,r^{RR})=\tau(r\cdot r^R, \tau(r,r^R){\triangleright} r^{RR})(\tau(r,r^R){\triangleleft} r^{RR})=\tau(r,r^R){\triangleleft} r^{RR}\] by the cocycle property. For (4), $\tau(r,r^R){\triangleright} r^{RR}=(r\cdot r^R)\cdot(\tau(r,r^R){\triangleright} r^{RR})=r\cdot (r^R\cdot r^{RR})=r$ by one of the matched pair conditions, which is the stated identity. \endproof Using this lemma, it is not hard to prove, cf.~\cite[Prop.3.3]{KM2}, that \begin{equation}\label{leftdiv}s\backslash t=s^R\cdot(\tau(s,s^R)^{-1}{\triangleright} t);\quad s\cdot(s\backslash t)=s\backslash(s\cdot t)=t,\end{equation} which can also be useful in calculations. \subsection{$\Xi(R,K)$ as a quasi-bialgebra} We recall that a quasi-bialgebra is a unital algebra $H$, a coproduct $\Delta:H\to H\mathop{{\otimes}} H$ which is an algebra map but is no longer required to be coassociative, and ${\epsilon}:H\to \mathbb{C}$ a counit for $\Delta$ in the usual sense $(\mathrm{id}\mathop{{\otimes}}{\epsilon})\Delta=({\epsilon}\mathop{{\otimes}}\mathrm{id})\Delta=\mathrm{id}$.
Instead, we have a weaker form of coassociativity \cite{Dri,Ma:book} \[ (\mathrm{id}\mathop{{\otimes}}\Delta)\Delta=\phi((\Delta\mathop{{\otimes}}\mathrm{id})\Delta(\ ))\phi^{-1}\] for an invertible element $\phi\in H^{\mathop{{\otimes}} 3}$ obeying the 3-cocycle identity \[ (1\mathop{{\otimes}}\phi)((\mathrm{id}\mathop{{\otimes}}\Delta\mathop{{\otimes}}\mathrm{id})\phi)(\phi\mathop{{\otimes}} 1)=((\mathrm{id}\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\Delta)\phi)((\Delta\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\mathrm{id})\phi),\quad (\mathrm{id}\mathop{{\otimes}}{\epsilon}\mathop{{\otimes}}\mathrm{id})\phi=1\mathop{{\otimes}} 1\] (it follows that ${\epsilon}$ in the other positions also gives $1\mathop{{\otimes}} 1$). In our case, we already know that $\Xi(R,K)$ is a unital algebra. \begin{lemma}\label{Xibialg} $\Xi(R,K)$ is a quasi-bialgebra with \[ \Delta x=\sum_{s\in R}x\delta_s \mathop{{\otimes}} x{\triangleleft} s, \quad \Delta \delta_r = \sum_{s,t\in R} \delta_{s\cdot t,r}\delta_{s}\otimes \delta_{t},\quad {\epsilon} x=1,\quad {\epsilon} \delta_r=\delta_{r,e}\] for all $x\in K, r\in R$, and \[ \phi=\sum_{r,s\in R} \delta_r \otimes \delta_s \otimes \tau(r,s)^{-1},\quad \phi^{-1} = \sum_{r,s\in R} \delta_r\otimes \delta_s \otimes \tau(r,s).\] \end{lemma} {\noindent {\bfseries Proof:}\quad } This follows by reconstruction arguments, but it is useful to check directly, \begin{align*} (\Delta x)(\Delta y)&=\sum_{s,r}(x\delta_s\mathop{{\otimes}} x{\triangleleft} s)(y\delta_r\mathop{{\otimes}} y{\triangleleft} r)=\sum_{s,r}x\delta_sy\delta_r\mathop{{\otimes}} ( x{\triangleleft} s)( y{\triangleleft} r)\\ &=\sum_{r,s}xy\delta_{y^{-1}{\triangleright} s}\delta_r\mathop{{\otimes}} (x{\triangleleft} s)(y{\triangleleft} r)=\sum_r xy \delta_r\mathop{{\otimes}} (x{\triangleleft}(y{\triangleright} r))(y{\triangleleft} r)=\Delta(xy) \end{align*} as $s=y{\triangleright} r$ and using the formula for $(xy){\triangleleft} r$ at the end.
Also, \begin{align*} \Delta(\delta_{x{\triangleright} s}x)&=(\Delta\delta_{x{\triangleright} s})(\Delta x)=\sum_{r, p.t=x{\triangleright} s}\delta _p x\delta_r\mathop{{\otimes}} \delta_t x{\triangleleft} r\\ &=\sum_{r, p.t=x{\triangleright} s}x\delta_{x^{-1}{\triangleright} p}\delta_r\mathop{{\otimes}} x{\triangleleft} r\delta_{(x{\triangleleft} r)^{-1}{\triangleright} t}=\sum_{(x{\triangleright} r).t=x{\triangleright} s}x \delta_r\mathop{{\otimes}} x{\triangleleft} r\delta_{(x{\triangleleft} r)^{-1}{\triangleright} t}\\ &=\sum_{(x{\triangleright} r).((x{\triangleleft} r){\triangleright} t')=x{\triangleright} s}x \delta_r\mathop{{\otimes}} x{\triangleleft} r\delta_{t'}=\sum_{r\cdot t'=s}x\delta_r\mathop{{\otimes}} (x{\triangleleft} r)\delta_{t'}=(\Delta x)(\Delta \delta _s)=\Delta(x\delta_s) \end{align*} using the formula for $x{\triangleright}(r\cdot t')$. This says that the coproducts stated are compatible with the algebra cross relations. Similarly, one can check that \begin{align*} (\sum_{p,r}\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}} &\tau(p,r))((\mathrm{id}\mathop{{\otimes}}\Delta )\Delta x)=\sum_{p,r,s,t}(\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}} \tau(p,r))(x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}} (x{\triangleleft} s){\triangleleft} t)\\ &=\sum_{p,r,s,t}\delta_px\delta_s\mathop{{\otimes}}\delta_r(x{\triangleleft} s)\delta_t\mathop{{\otimes}} \tau(p,r)((x{\triangleleft} s){\triangleleft} t)\\ &=\sum_{s,t}x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}\tau(x{\triangleright} s,(x{\triangleleft} s){\triangleright} t)((x{\triangleleft} s){\triangleleft} t)\\ &=\sum_{s,t}x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}(x{\triangleleft}(s.t))\tau(s,t)\\ &=\sum_{p,r,s,t}(x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}(x{\triangleleft}(s.t)))(\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}}\tau(p,r))\\ &=(
(\Delta\mathop{{\otimes}}\mathrm{id})\Delta x ) (\sum_{p,r}\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}}\tau(p,r)) \end{align*} as $p=x{\triangleright} s$ and $r=(x{\triangleleft} s){\triangleright} t$ and using the formula for $(x{\triangleleft} s){\triangleleft} t$. For the remaining cocycle relations, we have \begin{align*} (\mathrm{id}\mathop{{\otimes}}{\epsilon}\mathop{{\otimes}}\mathrm{id})\phi = \sum_{r,s}\delta_{s,e}\delta_r\mathop{{\otimes}}\tau(r,s)^{-1} = \sum_r\delta_r\mathop{{\otimes}} 1 = 1\mathop{{\otimes}} 1 \end{align*} and \[ (1\mathop{{\otimes}}\phi)((\mathrm{id}\mathop{{\otimes}}\Delta\mathop{{\otimes}}\mathrm{id})\phi)(\phi\mathop{{\otimes}} 1)=\sum_{r,s,t}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}} \delta_t\tau(r,s)^{-1}\mathop{{\otimes}}\tau(s,t)^{-1}\tau(r,s\cdot t)^{-1}\] after multiplying out $\delta$-functions and renaming variables. Using the value of $\Delta\tau(r,s)^{-1}$ and similarly multiplying out, we obtain on the other side \begin{align*} ((\mathrm{id}\mathop{{\otimes}}&\mathrm{id}\mathop{{\otimes}}\Delta)\phi)(\Delta\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\mathrm{id})\phi=\sum_{r,s,t}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\tau(r,s)^{-1}\delta_t\mathop{{\otimes}}(\tau(r,s)^{-1}{\triangleleft} t)\tau(r\cdot s,t)^{-1}\\ &=\sum_{r,s,t'}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\delta_{t'}\tau(r,s)^{-1}\mathop{{\otimes}}(\tau(r,s)^{-1}{\triangleleft} (\tau(r,s){\triangleright} t'))\tau(r\cdot s,\tau(r,s){\triangleright} t')^{-1}\\ &=\sum_{r,s,t'}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\delta_{t'}\tau(r,s)^{-1}\mathop{{\otimes}}(\tau(r,s){\triangleleft} t')^{-1}\tau(r\cdot s,\tau(r,s){\triangleright} t')^{-1}, \end{align*} where we change summation to $t'=\tau(r,s){\triangleright} t$ then use Lemma~\ref{leminv}. Renaming $t'$ to $t$, the two sides are equal in view of the cocycle identity for $\tau$. Thus, we have a quasi-bialgebra with $\phi$ as stated.
\endproof We can also write the coproduct (and the other structures) more explicitly. \begin{remark}\rm (1) If we want to write the coproduct on $\Xi$ explicitly as a vector space, the above becomes \[ \Delta(\delta_r\mathop{{\otimes}} x)=\sum_{s\cdot t=r}\delta_s\mathop{{\otimes}} x\mathop{{\otimes}}\delta_t\mathop{{\otimes}} (x^{-1}{\triangleleft} s)^{-1},\quad {\epsilon}(\delta_r\mathop{{\otimes}} x)=\delta_{r,e}\] which is ugly due to our decision to build it on $\mathbb{C}(R)\mathop{{\otimes}}\mathbb{C} K$. (2) If we built it on the other order then we could have $\Xi=\mathbb{C} K{\triangleright\!\!\!<} \mathbb{C}(R)$ as an algebra, where we have a right action \[ (f{\triangleright} x)(r)= f(x{\triangleright} r);\quad \delta_r{\triangleright} x=\delta_{x^{-1}{\triangleright} r}\] on $f\in \mathbb{C}(R)$. Now make a right handed cross product \[ (x\mathop{{\otimes}} \delta_r)(y\mathop{{\otimes}} \delta_s)= xy\mathop{{\otimes}} (\delta_r{\triangleright} y)\delta_s=xy\mathop{{\otimes}}\delta_s\delta_{r,y{\triangleright} s}\] which has cross relations $\delta_r y=y\delta_{y^{-1}{\triangleright} r}$. These are the same relations as before. So this is the same algebra; we just prioritise the basis $\{x\delta_r\}$ instead of the other way around. This time, we have \[ \Delta (x\mathop{{\otimes}}\delta_r)=\sum_{s\cdot t=r} x\mathop{{\otimes}}\delta_s\mathop{{\otimes}} x{\triangleleft} s\mathop{{\otimes}}\delta_t.\] We do not do this in order to be compatible with the most common form of $D(G)$ as $\mathbb{C}(G){>\!\!\!\triangleleft} \mathbb{C} G$ as in \cite{CowMa}.
\end{remark} \subsection{$\Xi(R,K)$ as a quasi-Hopf algebra} A quasi-bialgebra is a quasi-Hopf algebra if there are elements $\alpha,\beta\in H$ and an antialgebra map $S:H\to H$ such that \cite{Dri,Ma:book} \[(S \xi_1)\alpha\xi_2={\epsilon}(\xi)\alpha,\quad \xi_1\beta S\xi_2={\epsilon}(\xi)\beta,\quad \phi^1\beta(S\phi^2)\alpha\phi^3=1,\quad (S\phi^{-1})\alpha\phi^{-2}\beta S\phi^{-3}=1\] where $\Delta\xi=\xi_1\mathop{{\otimes}}\xi_2$, $\phi=\phi^1\mathop{{\otimes}}\phi^2\mathop{{\otimes}}\phi^3$ with inverse $\phi^{-1}\mathop{{\otimes}}\phi^{-2}\mathop{{\otimes}}\phi^{-3}$ is a compact notation (sums of such terms to be understood). It is usual to assume $S$ is bijective but we do not require this. The $\alpha,\beta, S$ are not unique and can be changed to $S'=U(S\ ) U^{-1}, \alpha'=U\alpha, \beta'=\beta U^{-1}$ for any invertible $U$. In particular, if $\alpha$ is invertible then we can transform to a standard form replacing it by $1$. For the purposes of this paper, we therefore call the case of $\alpha$ invertible a (left) {\em regular antipode}.
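Since the antipode constructed below requires $(\ )^R$ to be bijective, and since its verification repeatedly invokes Lemma~\ref{leminv}, both can be machine-checked for the four transversals of Example~\ref{exS3R}. The following is a minimal Python sketch under our own encoding (permutations of $\{0,1,2\}$ as tuples; \texttt{trl} and \texttt{trr} stand for ${\triangleright}$ and ${\triangleleft}$; all names are illustrative and not from \cite{KM2}):

```python
# Sketch (our encoding): permutations of {0,1,2} as tuples, compose(f,g) = f o g.
def compose(f, g):
    return tuple(f[g[i]] for i in range(3))

def inv(f):
    out = [0, 0, 0]
    for i, fi in enumerate(f):
        out[fi] = i
    return tuple(out)

e, u, v, w = (0, 1, 2), (1, 0, 2), (0, 2, 1), (2, 1, 0)
uv, vu = compose(u, v), compose(v, u)
K = [e, u]

def make_ops(R):
    def factor(g):  # unique (r, x) in R x K with rx = g, since G = RK
        [(r, x)] = [(r, x) for r in R for x in K if compose(r, x) == g]
        return r, x
    dot = lambda s, t: factor(compose(s, t))[0]            # s . t
    tau = lambda s, t: factor(compose(s, t))[1]            # tau(s, t)
    trl = lambda x, r: factor(compose(x, r))[0]            # x |> r
    trr = lambda x, r: factor(compose(x, r))[1]            # x <| r
    rinv = lambda r: next(s for s in R if dot(r, s) == e)  # r^R
    return dot, tau, trl, trr, rinv

regular = []
for R in ([e, uv, vu], [e, w, v], [e, uv, v], [e, w, vu]):
    dot, tau, trl, trr, rinv = make_ops(R)
    regular.append(len({rinv(r) for r in R}) == len(R))    # (.)^R bijective?
    for x in K:
        for r in R:
            rR = rinv(r)
            assert inv(trr(x, r)) == trr(inv(x), trl(x, r))           # item (1)
            assert rinv(trl(x, r)) == trl(trr(x, r), rR)              # item (2)
            assert trr(inv(tau(r, rR)), r) == inv(tau(rR, rinv(rR)))  # item (3)
            assert trl(inv(tau(r, rR)), r) == rinv(rR)                # item (4)

print(regular)  # only the first two transversals are regular
```

The printed flags confirm that only the first two transversals are regular; the last assertion checks item (4) in the ${\triangleright}$ form $\tau(r,r^R)^{-1}{\triangleright} r=r^{RR}$, as used in the later proofs.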
\begin{proposition}\label{standardS} If $(\ )^R$ is bijective, $\Xi(R,K)$ is a quasi-Hopf algebra with regular antipode \[ S(\delta_r\mathop{{\otimes}} x)=\delta_{(x^{-1}{\triangleright} r)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} r,\quad \alpha=\sum_{r\in R}\delta_r\mathop{{\otimes}} 1,\quad \beta=\sum_r\delta_r\mathop{{\otimes}} \tau(r,r^R).\] Equivalently in subalgebra terms, \[ S\delta_r=\delta_{r^R},\quad Sx=\sum_{s\in R}(x^{-1}{\triangleleft} s)\delta_{s^R} ,\quad \alpha=1,\quad \beta=\sum_{r\in R}\delta_r\tau(r,r^R).\] \end{proposition} {\noindent {\bfseries Proof:}\quad } For the axioms involving $\phi$, we have \begin{align*}\phi^1\beta&(S \phi^2)\alpha\phi^3=\sum_{s,t,r}(\delta_s\mathop{{\otimes}} 1)(\delta_r\mathop{{\otimes}} \tau(r,r^R))(\delta_{t^R}\mathop{{\otimes}}\tau(s,t)^{-1})\\ &=\sum_{s,t}(\delta_s\mathop{{\otimes}}\tau(s,s^R))(\delta_{t^R}\mathop{{\otimes}} \tau(s,t)^{-1})=\sum_{s,t}\delta_s\delta_{s,\tau(s,s^R){\triangleright} t^R}\mathop{{\otimes}}\tau(s,s^R)\tau(s,t)^{-1}\\ &=\sum_{s^R.t^R=e}\delta_s\mathop{{\otimes}} \tau(s,s^R)\tau(s,t)^{-1}=1, \end{align*} where we used $s.(s^R.t^R)=(s.s^R).\tau(s,s^R){\triangleright} t^R=\tau(s,s^R){\triangleright} t^R$. So $s=\tau(s,s^R){\triangleright} t^R$ holds iff $s^R.t^R=e$ by left cancellation. In the sum, we can take $t=s^R$ which contributes $\delta_s\mathop{{\otimes}} e$. Here $s^R.t^R=s^R.(s^R)^R=e$; there is a unique element $t^R$ which does this and hence a unique $t$ provided $(\ )^R$ is injective, and hence a bijection. \begin{align*} S(\phi^{-1})\alpha&\phi^{-2}\beta S(\phi^{-3}) = \sum_{s,t,u,v}(\delta_{s^R}\otimes 1)(\delta_t\otimes 1)(\delta_u\otimes\tau(u,u^R))(\delta_{(\tau(s,t)^{-1}{\triangleright} v)^R}\otimes (\tau(s,t)^{-1}{\triangleleft} v))\\ &= \sum_{s,v}(\delta_{s^R}\otimes\tau(s^R,s^R{}^R))(\delta_{(\tau(s,s^R)^{-1}{\triangleright} v)^R}\otimes \tau(s,s^R)^{-1}{\triangleleft} v).
\end{align*} Upon multiplication, we will have a $\delta$-function dictating that \[s^R = \tau(s^R,s^R{}^R){\triangleright} (\tau(s,s^R)^{-1}{\triangleright} v)^R,\] so we can use the fact that \begin{align*}s\cdot s^R = e &= s\cdot(\tau(s^R,s^R{}^R){\triangleright} (\tau(s,s^R)^{-1}{\triangleright} v)^R)\\ &= s\cdot(s^R\cdot(s^R{}^R\cdot (\tau(s,s^R)^{-1}{\triangleright} v)^R))\\ &= \tau(s,s^R){\triangleright} (s^R{}^R\cdot(\tau(s,s^R)^{-1}{\triangleright} v)^R), \end{align*} where we use similar identities to before. Therefore $s^R{}^R\cdot (\tau(s,s^R)^{-1}{\triangleright} v)^R = e$, so $(\tau(s,s^R)^{-1}{\triangleright} v)^R = s^R{}^R{}^R$. When $(\ )^R$ is injective, this gives us $v = \tau(s,s^R){\triangleright} s^R{}^R$. Returning to our original calculation we have that our previous expression is \begin{align*} \cdots &= \sum_s \delta_{s^R}\otimes \tau(s^R,s^R{}^R)(\tau(s,s^R)^{-1}{\triangleleft} (\tau(s,s^R){\triangleright} s^R{}^R))\\ &= \sum_s \delta_{s^R}\otimes \tau(s^R,s^R{}^R)(\tau(s,s^R){\triangleleft} s^R{}^R)^{-1} = \sum_s \delta_{s^R}\otimes 1 = 1. \end{align*} We now prove the antipode axiom involving $\alpha$, \begin{align*} (S(\delta_s \otimes& x)_1)(\delta_s \otimes x)_2 = \sum_{r\cdot t = s}(\delta_{(x^{-1}{\triangleright} r)^R}\otimes (x^{-1}{\triangleleft} r))(\delta_t\otimes (x^{-1}{\triangleleft} r)^{-1})\\ &= \sum_{r\cdot t = s}\delta_{(x^{-1}{\triangleright} r)^R, (x^{-1}{\triangleleft} r){\triangleright} t}\delta_{(x^{-1}{\triangleright} r)^R}\otimes 1 = \delta_{e,s}\sum_r \delta_{(x^{-1}{\triangleright} r)^R}\otimes 1 = {\epsilon}(\delta_s\otimes x)1. \end{align*} The condition from the $\delta$-functions is \[ (x^{-1}{\triangleright} r)^R=(x^{-1}{\triangleleft} r){\triangleright} t\] which by uniqueness of right inverses holds iff \[ e=(x^{-1}{\triangleright} r)\cdot ((x^{-1}{\triangleleft} r){\triangleright} t)=x^{-1}{\triangleright}(r\cdot t)\] which is iff $r.t=e$, so $t=r^R$.
As we also need $r.t=s$, this becomes $\delta_{s,e}$ as required. We now prove the axiom involving $\beta$, starting with \begin{align*}(\delta_s\otimes& x)_1 \beta S((\delta_s\otimes x)_2) = \sum_{r\cdot t=s, p}(\delta_r\mathop{{\otimes}} x)(\delta_p\mathop{{\otimes}}\tau(p,p^R))S(\delta_t\mathop{{\otimes}} (x^{-1}{\triangleleft} r)^{-1})\\ &=\sum_{r\cdot t=s, p}(\delta_r\delta_{r,x{\triangleright} p}\mathop{{\otimes}} x\tau(p,p^R))(\delta_{((x^{-1}{\triangleleft} r){\triangleright} t)^R}\mathop{{\otimes}} (x^{-1}{\triangleleft} r){\triangleleft} t)\\ &=\sum_{r\cdot t=s}(\delta_r\mathop{{\otimes}} x\tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R))(\delta_{((x^{-1}{\triangleleft} r){\triangleright} t)^R}\mathop{{\otimes}} (x^{-1}{\triangleleft} r){\triangleleft} t). \end{align*} When we multiply this out, we will need from the product of $\delta$-functions that \[ \tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)^{-1}{\triangleright} (x^{-1}{\triangleright} r)=((x^{-1}{\triangleleft} r){\triangleright} t)^R,\] but note that $\tau(q,q{}^R)^{-1}{\triangleright} q=q^R{}^R$ from Lemma~\ref{leminv}. So the condition from the $\delta$-functions is \[ (x^{-1}{\triangleright} r)^R{}^R=((x^{-1}{\triangleleft} r){\triangleright} t)^R,\] so \[ (x^{-1}{\triangleright} r)^R=(x^{-1}{\triangleleft} r){\triangleright} t\] when $(\ )^R$ is injective. By uniqueness of right inverses, this holds iff \[ e=(x^{-1}{\triangleright} r)\cdot ((x^{-1}{\triangleleft} r){\triangleright} t)=x^{-1}{\triangleright}(r\cdot t),\] where the last equality is from the matched pair conditions. This holds iff $r\cdot t=e$, that is, $t=r^R$. This also means in the sum that we need $s=e$. 
Hence, when we multiply out our expression so far, we have \[\cdots=\delta_{s,e}\sum_r\delta_r\mathop{{\otimes}} x\tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)(x^{-1}{\triangleleft} r){\triangleleft} r^R=\delta_{s,e}\sum_r\delta_r\mathop{{\otimes}}\tau(r,r^R)=\delta_{s,e}\beta,\] as required, where we used \[ x\tau( x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)(x^{-1}{\triangleleft} r){\triangleleft} r^R=\tau(r,r^R)\] by the matched pair conditions. The subalgebra form of $Sx$ is the same using the commutation relations and Lemma~\ref{leminv} to reorder. It remains to check that \begin{align*}S(\delta_s&\mathop{{\otimes}} y)S(\delta_r\mathop{{\otimes}} x)=(\delta_{(y^{-1}{\triangleright} s)^R}\mathop{{\otimes}} y^{-1}{\triangleleft} s)(\delta_{(x^{-1}{\triangleright} r)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} r)\\ &=\delta_{r,x{\triangleright} s}\delta_{(y^{-1}{\triangleright} s)^R}\mathop{{\otimes}} (y^{-1}{\triangleleft} s)(x^{-1}{\triangleleft} r)=\delta_{r,x{\triangleright} s}\delta_{(y^{-1}x^{-1}{\triangleright} r)^R}\mathop{{\otimes}}( y^{-1}{\triangleleft}(x^{-1}{\triangleright} r))(x^{-1}{\triangleleft} r)\\ &=S(\delta_r\delta_{r,x{\triangleright} s}\mathop{{\otimes}} xy)=S((\delta_r\mathop{{\otimes}} x)(\delta_s\mathop{{\otimes}} y)), \end{align*} where the product of $\delta$-functions requires $(y^{-1}{\triangleright} s)^R=( y^{-1}{\triangleleft} s){\triangleright} (x^{-1}{\triangleright} r)^R$, which is equivalent to $s^R=(x^{-1}{\triangleright} r)^R$ using Lemma~\ref{leminv}. This imposes $\delta_{r,x{\triangleright} s}$. We then replace $s=x^{-1}{\triangleright} r$ and recognise the answer using the matched pair identities. \endproof \subsection{$\Xi(R,K)$ as a $*$-quasi-Hopf algebra} The correct notion of a $*$-quasi-Hopf algebra $H$ is not part of Drinfeld's theory but a natural notion is to have further structure so as to make the monoidal category of modules a bar category in the sense of \cite{BegMa:bar}.
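Before adding this bar-category structure, the quasi-Hopf data established so far can be spot-checked by machine in our go-to example. The following self-contained Python sketch (our own encoding: permutations as tuples; all names illustrative) verifies $\phi\phi^{-1}=1$, quasi-coassociativity with respect to $\phi$, and both antipode axioms for $K=S_2\subset S_3$ with the second transversal $R=\{e,w,v\}$, where $\alpha=\beta=1$:

```python
# Sketch (our encoding): machine-check quasi-coassociativity w.r.t. phi and the
# antipode axioms for Xi(R,K), with K = S_2 inside S_3 and R = {e, w, v}.
def compose(f, g):
    return tuple(f[g[i]] for i in range(3))

def inv(f):
    out = [0, 0, 0]
    for i, fi in enumerate(f):
        out[fi] = i
    return tuple(out)

e, u, v, w = (0, 1, 2), (1, 0, 2), (0, 2, 1), (2, 1, 0)
K, R = [e, u], [e, w, v]

def factor(g):  # unique (r, x) in R x K with rx = g
    [(r, x)] = [(r, x) for r in R for x in K if compose(r, x) == g]
    return r, x

dot = lambda s, t: factor(compose(s, t))[0]            # s . t
tau = lambda s, t: factor(compose(s, t))[1]            # tau(s, t)
trl = lambda x, r: factor(compose(x, r))[0]            # x |> r
trr = lambda x, r: factor(compose(x, r))[1]            # x <| r
rinv = lambda r: next(s for s in R if dot(r, s) == e)  # r^R

basis = [(r, x) for r in R for x in K]  # (r, x) stands for delta_r x

def bmult(k1, k2):  # delta_r x . delta_s y = [r = x|>s] delta_r xy, or None
    (r, x), (s, y) = k1, k2
    return (r, compose(x, y)) if r == trl(x, s) else None

def m(a, b):  # product of Xi elements written as {basis key: coeff}
    out = {}
    for k1, c1 in a.items():
        for k2, c2 in b.items():
            k = bmult(k1, k2)
            if k is not None:
                out[k] = out.get(k, 0) + c1 * c2
    return {k: c for k, c in out.items() if c}

def tmult(a, b):  # componentwise product on tensor powers of Xi
    out = {}
    for ks1, c1 in a.items():
        for ks2, c2 in b.items():
            ks = tuple(bmult(p, q) for p, q in zip(ks1, ks2))
            if None not in ks:
                out[ks] = out.get(ks, 0) + c1 * c2
    return {k: c for k, c in out.items() if c}

def Delta(k):  # coproduct of a basis element, as a 2-tensor
    r, x = k
    return {((s, x), (t, trr(x, trl(inv(x), s)))): 1
            for s in R for t in R if dot(s, t) == r}

def S(k):  # antipode on a basis element
    r, x = k
    return (rinv(trl(inv(x), r)), trr(inv(x), r))

phi = {((a, e), (b, e), (c, inv(tau(a, b)))): 1 for a in R for b in R for c in R}
phi_inv = {((a, e), (b, e), (c, tau(a, b))): 1 for a in R for b in R for c in R}
assert tmult(phi, phi_inv) == {((a, e), (b, e), (c, e)): 1
                               for a in R for b in R for c in R}  # phi phi^-1 = 1

eps = lambda k: 1 if k[0] == e else 0
one = {(r, e): 1 for r in R}                 # unit; also alpha = 1 here
beta = {(r, tau(r, rinv(r))): 1 for r in R}  # tau(r, r^R) = e here, so beta = 1

checked = 0
for k in basis:
    dk = Delta(k)
    lhs, mid, ax1, ax2 = {}, {}, {}, {}
    for (k1, k2), c in dk.items():
        for (a, b), c2 in Delta(k2).items():   # (id x Delta) Delta
            lhs[(k1, a, b)] = lhs.get((k1, a, b), 0) + c * c2
        for (a, b), c2 in Delta(k1).items():   # (Delta x id) Delta
            mid[(a, b, k2)] = mid.get((a, b, k2), 0) + c * c2
        for kk, cc in m({S(k1): 1}, {k2: 1}).items():           # S(k1) alpha k2
            ax1[kk] = ax1.get(kk, 0) + c * cc
        for kk, cc in m(m({k1: 1}, beta), {S(k2): 1}).items():  # k1 beta S(k2)
            ax2[kk] = ax2.get(kk, 0) + c * cc
    assert lhs == tmult(tmult(phi, mid), phi_inv)  # quasi-coassociativity
    assert {kk: c for kk, c in ax1.items() if c} == (one if eps(k) else {})
    assert {kk: c for kk, c in ax2.items() if c} == (beta if eps(k) else {})
    checked += 1
print(checked, "basis elements checked")
```

Since $\tau(r,r^R)=e$ for every $r$ in this transversal, $\beta$ computes to $1$ here, in agreement with the example below.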
If $H$ is at least a quasi-bialgebra, the additional structure we need, fixing a typo in \cite[Def.~3.16]{BegMa:bar}, is the first three of: \begin{enumerate}\item An antilinear algebra map $\theta:H\to H$. \item An invertible element $\gamma\in H$ such that $\theta(\gamma)=\gamma$ and $\theta^2=\gamma(\ )\gamma^{-1}$. \item An invertible element $\hbox{{$\mathcal G$}}\in H\mathop{{\otimes}} H$ such that \begin{equation}\label{*GDelta}\Delta\theta =\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op}(\ ))\hbox{{$\mathcal G$}},\quad ({\epsilon}\mathop{{\otimes}}\mathrm{id})(\hbox{{$\mathcal G$}})=(\mathrm{id}\mathop{{\otimes}}{\epsilon})(\hbox{{$\mathcal G$}})=1,\end{equation} \begin{equation}\label{*Gphi} (\theta\mathop{{\otimes}}\theta\mathop{{\otimes}}\theta)(\phi_{321})(1\mathop{{\otimes}}\hbox{{$\mathcal G$}})((\mathrm{id}\mathop{{\otimes}}\Delta)\hbox{{$\mathcal G$}})\phi=(\hbox{{$\mathcal G$}}\mathop{{\otimes}} 1)((\Delta\mathop{{\otimes}}\mathrm{id})\hbox{{$\mathcal G$}}).\end{equation} \item We say the $*$-quasi bialgebra is strong if \begin{equation}\label{*Gstrong} (\gamma\mathop{{\otimes}}\gamma)\Delta\gamma^{-1}=((\theta\mathop{{\otimes}}\theta)(\hbox{{$\mathcal G$}}_{21}))\hbox{{$\mathcal G$}}.\end{equation} \end{enumerate} Note that if we have a quasi-Hopf algebra then $S$ is antimultiplicative and $\theta=* S$ defines an antimultiplicative antilinear map $*$. However, $S$ is not unique and it appears that specifying $\theta$ directly is more canonical. \begin{lemma} Let $(\ )^R$ be bijective. 
Then $\Xi$ has an antilinear algebra automorphism $\theta$ such that \[ \theta(x)=\sum_s x{\triangleleft} s\, \delta_{s^R},\quad \theta(\delta_s)=\delta_{s^R},\] \[\theta^2=\gamma(\ )\gamma^{-1};\quad \gamma=\sum_s\tau(s,s^R)^{-1}\delta_s,\quad\theta(\gamma)=\gamma.\] \end{lemma} {\noindent {\bfseries Proof:}\quad } We compute, \[ \theta(\delta_s\delta_t)=\delta_{s,t}\delta_{s^R}=\delta_{s^R,t^R}\delta_{s^R}=\theta(\delta_s)\theta(\delta_t)\] \[\theta(x)\theta(y)=\sum_{s,t}x{\triangleleft} s\delta_{s^R} y{\triangleleft} t\delta_{t^R}=\sum_{t}(x{\triangleleft} (y{\triangleright} t))\, y{\triangleleft} t\,\delta_{t^R}=\sum_t (xy){\triangleleft} t\,\delta_{t^R}=\theta(xy)\] where commuting $\delta_{t^R}$ to the left fixes $s^R=(y{\triangleleft} t){\triangleright} t^R=(y{\triangleright} t)^R$, that is $s=y{\triangleright} t$, to obtain the 2nd equality. We also have \[ \theta(x\delta_s)=\sum_tx{\triangleleft} t\delta_{t^R}\delta_{s^R}=x{\triangleleft} s\delta_{s^R}=\delta_{(x{\triangleleft} s){\triangleright} s^R}x{\triangleleft} s=\delta_{(x{\triangleright} s)^R}x{\triangleleft} s\] \[ \theta(\delta_{x{\triangleright} s}x)=\sum_t\delta_{(x{\triangleright} s)^R}x{\triangleleft} t\delta_{t^R}=\sum_t\delta_{(x{\triangleright} s)^R}\delta_{(x{\triangleleft} t){\triangleright} t^R}x{\triangleleft} t=\sum_t\delta_{(x{\triangleright} s)^R}\delta_{(x{\triangleright} t)^R} x{\triangleleft} t,\] which is the same since it forces $t=s$. Next \[ \gamma^{-1}=\sum_s \tau(s,s^R)\delta_{s^{RR}}=\sum_s \delta_s \tau(s,s^R),\] where we recall from previous calculations that $\tau(s,s^R){\triangleright} s^{RR}=s$.
Then \begin{align*}\theta^2(x)&=\sum_s\theta(x{\triangleleft} s\delta_{s^R})=\sum_{s,t}(x{\triangleleft} s){\triangleleft} t\,\delta_{t^R}\delta_{s^{RR}}=\sum_s (x{\triangleleft} s){\triangleleft} s^R\,\delta_{s^{RR}}\\ &=\sum_s \tau(x{\triangleright} s,(x{\triangleright} s)^R)^{-1}x\tau(s,s^R)\delta_{s^{RR}}=\sum_{s,t}\tau(t,t^R)^{-1}\delta_{t} x\tau(s,s^R)\delta_{s^{RR}}\\ &=\sum_{s,t}\delta_{t^{RR}}\tau(t,t^R)^{-1}x\tau(s,s^R)\delta_{s^{RR}}=\gamma x\gamma^{-1}\end{align*} where for the 5th equality if we were to commute $\delta_{s^{RR}}$ to the left, this would fix $t=x\tau(s,s^R){\triangleright} s^{RR}=x{\triangleright} s$. We then use $\tau(t,t^R)^{-1}{\triangleright} t=t^{RR}$ and recognise the answer. We also check that \begin{align*}\gamma\delta_s\gamma^{-1}&= \tau(s,s^R)^{-1}\delta_s\tau(s,s^R)=\delta_{s^{RR}}=\theta^2(\delta_s),\\ \theta(\gamma) &= \sum_{s,t}\tau(s,s^R)^{-1}{\triangleleft} t\delta_{t^R}\delta_{s^R}=\sum_s\tau(s,s^R)^{-1}{\triangleleft} s\delta_{s^R}=\sum_s\tau(s^R,s^{RR})^{-1}\delta_{s^R}=\gamma\end{align*} using Lemma~\ref{leminv}. \endproof Next, we find $\hbox{{$\mathcal G$}}$ obeying the conditions above. \begin{lemma} If $(\ )^R$ is bijective then equation (\ref{*GDelta}) holds with \[ \hbox{{$\mathcal G$}}=\sum_{s,t} \delta_{t^R}\tau(s,t)^{-1}\mathop{{\otimes}} \delta_{s^R}\tau(t,t^R) (\tau(s,t){\triangleleft} t^R)^{-1}, \] \[\hbox{{$\mathcal G$}}^{-1}=\sum_{s,t} \tau(s,t)\delta_{t^R}\mathop{{\otimes}} (\tau(s,t){\triangleleft} t^R)\tau(t,t^R)^{-1} \delta_{s^R}.\] \end{lemma} {\noindent {\bfseries Proof:}\quad } The proof that $\hbox{{$\mathcal G$}},\hbox{{$\mathcal G$}}^{-1}$ are indeed inverse is straightforward on matching the $\delta$-functions to fix the summation variables in $\hbox{{$\mathcal G$}}^{-1}$ in terms of $\hbox{{$\mathcal G$}}$.
This then comes down to proving that the map $(s,t)\to (p,q):=(\tau(s,t){\triangleright} t^R, \tau'(s,t){\triangleright} s^R)$ is injective. Indeed, the map $(p,q)\mapsto (p,p\cdot q)$ is injective by left division, so it's enough to prove that \[ (s,t)\mapsto (p,p\cdot q)=(\tau(s,t){\triangleright} t^R, \tau(s,t){\triangleright}(t^R\cdot\tau(t,t^R)^{-1}{\triangleright} s^R))=((s\cdot t)\backslash s,(s\cdot t)^R)\] is injective. We used $(s\cdot t)\cdot \tau(s,t){\triangleright} t^R=s\cdot(t\cdot t^R)=s$ by quasi-associativity to recognise $p$, recognised $t^R\cdot\tau(t,t^R)^{-1}{\triangleright} s^R=t\backslash s^R$ from (\ref{leftdiv}) and then \[ (s\cdot t)\cdot \tau(s,t){\triangleright} (t\backslash s^R)=s\cdot(t\cdot(t\backslash s^R))=s\cdot s^R=e\] to recognise $p\cdot q$. That the desired map is injective is then immediate by $(\ )^R$ injective and elementary properties of division. We use similar methods in the other proofs. Thus, writing \[ \tau'(s,t):=(\tau(s,t){\triangleleft} t^R)\tau(t,t^R)^{-1}=\tau(s\cdot t, \tau(s,t){\triangleright} t^R)^{-1}\] for brevity, we have \begin{align*}\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op} \delta_r)&=\hbox{{$\mathcal G$}}^{-1}\sum_{p\cdot q=r}(\delta_{q^R}\mathop{{\otimes}}\delta_{p^R})=\sum_{s\cdot t=r}\tau(s,t)\delta_{t^R}\mathop{{\otimes}}\tau'(s,t)\delta_{s^R},\\ (\Delta\theta(\delta_r))\hbox{{$\mathcal G$}}^{-1}&=\sum_{p\cdot q=r^R}(\delta_p\mathop{{\otimes}}\delta_q)\hbox{{$\mathcal G$}}^{-1}=\sum_{p\cdot q=r^R} \tau(s,t)\delta_{t^R}\mathop{{\otimes}}\tau'(s,t)\delta_{s^R}, \end{align*} where in the second line, commuting the $\delta_{t^R}$ and $\delta_{s^R}$ to the left sets $p=\tau(s,t){\triangleright} t^R$, $q=\tau'(s,t){\triangleright} s^R$ as studied above. Hence $p\cdot q=r^R$ in the sum is the same as $s\cdot t=r$, so the two sides are equal and we have proven (\ref{*GDelta}) on $\delta_r$. 
Similarly, \begin{align*}\hbox{{$\mathcal G$}}^{-1}&(\theta\mathop{{\otimes}}\theta)(\Delta^{op} x)\\ &=\sum_{p,q,s,t} \left(\tau(p,q)\delta_{q^R}\mathop{{\otimes}} (\tau(p,q){\triangleleft} q^R)\tau(q,q^R)^{-1} \delta_{p^R} \right)\left((x{\triangleleft} s){\triangleleft} t\, \delta_{t^R}\mathop{{\otimes}}\delta_{(x{\triangleright} s)^R}x{\triangleleft} s\right)\\ &=\sum_{s,t}(x{\triangleleft} s\cdot t)\tau(s,t)\delta_{t^R}\mathop{{\otimes}} \tau(x{\triangleright}(s\cdot t),(x{\triangleleft} s\cdot t)\tau(s,t){\triangleright} t^R)^{-1}(x{\triangleleft} s)\delta_{s^R} \end{align*} where we first note that for the $\delta$-functions to connect, we need \[ p=x{\triangleright} s,\quad ((x{\triangleleft} s){\triangleleft} t){\triangleright} t^R=q^R,\] which is equivalent to $q=(x{\triangleleft} s){\triangleright} t$ since $e=(x{\triangleleft} s){\triangleright} (t\cdot t^R)=((x{\triangleleft} s){\triangleright} t)\cdot(( (x{\triangleleft} s){\triangleleft} t){\triangleright} t^R)$. In this case \[ \tau(p,q)((x{\triangleleft} s){\triangleleft} t)=\tau(x{\triangleright} s, (x{\triangleleft} s){\triangleright} t)((x{\triangleleft} s){\triangleleft} t)=(x{\triangleleft} s\cdot t)\tau(s,t)\] by the cocycle axiom. Similarly, $(x{\triangleleft} s)^{-1}{\triangleright}(x {\triangleright} s)^R=s^R$ by Lemma~\ref{leminv} gives us $\delta_{s^R}$. For its coefficient, note that $p\cdot q=(x{\triangleright} s)\cdot((x{\triangleleft} s){\triangleright} t)=x{\triangleright}(s\cdot t)$ so that, using the other form of $\tau'(p.q)$, we obtain \[ \tau(p\cdot q,\tau(p,q){\triangleright} q^R)^{-1}(x{\triangleleft} s)=\tau(x{\triangleright}(s\cdot t),\tau(p,q)((x{\triangleleft} s){\triangleleft} t){\triangleright} t^R)^{-1}(x{\triangleleft} s) \] and we use our previous calculation to put this in terms of $s,t$. 
On the other side, we have \begin{align*} (\Delta\theta(x))&\hbox{{$\mathcal G$}}^{-1}= \sum_t\Delta(x{\triangleleft} t\, \delta_{t^R} )\hbox{{$\mathcal G$}}^{-1}\\ &=\sum_{p,q,s\cdot r=t^R}x{\triangleleft} t\, \delta_s\tau(p,q)\delta_{q^R}\mathop{{\otimes}} (x{\triangleleft} t){\triangleleft} r\, \delta_r \tau(p\cdot q,\tau(p,q){\triangleright} q^R)^{-1}\delta_{p^R}\\ &=\sum_{p,q}x{\triangleleft}(p\cdot q)\, \tau(p,q)\delta_{q^R}\mathop{{\otimes}} (x{\triangleleft} p\cdot q){\triangleleft} s\, \tau(p\cdot q,s)^{-1}\delta_{p^R}, \end{align*} where, for the $\delta$-functions to connect, we need \[ s=\tau(p,q){\triangleright} q^R,\quad r=\tau'(p,q){\triangleright} p^R.\] The map $(p,q)\mapsto (s,r)$ has the same structure as the one we studied above but applied now to $p,q$ in place of $s,t$. It follows that $s\cdot r=(p\cdot q)^R$ and hence this being equal $t^R$ is equivalent to $p\cdot q=t$. Taking this for the value of $t$, we obtain the second expression for $(\Delta\theta(x))\hbox{{$\mathcal G$}}^{-1}$. We now use the identity for $(x{\triangleleft} p\cdot q){\triangleleft} s $ and $(p\cdot q)\cdot \tau(p,q){\triangleright} q^R=p\cdot(q\cdot q^R)=p$ to obtain the same as we obtained for $\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op} x)$ on $x$, upon renaming $s,t$ there to $p,q$. The proofs of (\ref{*Gphi}), (\ref{*Gstrong}) are similarly quite involved, but omitted given that it is known that the category of modules is a strong bar category. \endproof \begin{corollary}\label{corstar} For $(\ )^R$ bijective and the standard antipode in Proposition~\ref{standardS}, we have a $*$-quasi Hopf algebra with $\theta=* S$, where $x^*=x^{-1},\delta_s^*=\delta_s$ is the standard $*$-algebra structure on $\Xi$ as a cross product and $\gamma,\hbox{{$\mathcal G$}}$ are as above. 
\end{corollary}{\noindent {\bfseries Proof:}\quad } We check that \[*Sx=*(\sum_s \delta_{(x^{-1}{\triangleright} s)^R}x^{-1}{\triangleleft} s)=\sum_s (x^{-1}{\triangleleft} s)^{-1}\delta_{(x^{-1}{\triangleright} s)^R}=\sum_{s'}x{\triangleleft} s'\delta_{s'{}^R}=\theta(x),\] where $s'=x^{-1}{\triangleright} s$ and we used Lemma~\ref{leminv}. \endproof The key property of any quasi-bialgebra is that its category of modules is monoidal with associator $\phi_{V,W,U}: (V\mathop{{\otimes}} W)\mathop{{\otimes}} U\to V\mathop{{\otimes}} (W\mathop{{\otimes}} U)$ given by the action of $\phi$. In the $*$-quasi case, this becomes a bar category as follows\cite{BegMa:bar}. First, there is a functor ${\rm bar}$ from the category to itself which sends a module $V$ to a `conjugate', $\bar V$. In our case, this has the same set and abelian group structure as $V$ but $\lambda.\bar v=\overline{\bar\lambda v}$ for all $\lambda\in \mathbb{C}$, i.e. a conjugate action of the field, where we write $v\in V$ as $\bar v$ when viewed in $\bar V$. Similarly, \[ \xi.\bar v=\overline{\theta(\xi).v}\] for all $\xi\in \Xi(R,K)$. On morphisms $\psi:V\to W$, we define $\bar\psi:\bar V\to \bar W$ by $\bar \psi(\bar v)=\overline{\psi(v)}$. 
Next, there is a natural isomorphism $\Upsilon: {\rm bar}\circ\mathop{{\otimes}} \Rightarrow \mathop{{\otimes}}^{op}\circ({\rm bar}\times{\rm bar})$, given in our case for all modules $V,W$ by \[ \Upsilon_{V,W}:\overline{V\mathop{{\otimes}} W}{\cong} \bar W\mathop{{\otimes}} \bar V, \quad \Upsilon_{V,W}(\overline{v\mathop{{\otimes}} w})=\overline{ \hbox{{$\mathcal G$}}^2.w}\mathop{{\otimes}}\overline{\hbox{{$\mathcal G$}}^1.v}\] and making a hexagon identity with the associator, namely \[ (\mathrm{id}\mathop{{\otimes}}\Upsilon_{V,W})\circ\Upsilon_{V\mathop{{\otimes}} W, U}=\phi_{\bar U,\bar V,\bar W}\circ(\Upsilon_{W,U}\mathop{{\otimes}}\mathrm{id})\circ\Upsilon_{V,W\mathop{{\otimes}} U}\circ \overline{\phi_{V,W,U}}.\] We also have a natural isomorphism ${\rm bb}:\mathrm{id}\Rightarrow {\rm bar}\circ{\rm bar}$, given in our case for all modules $V$ by \[ {\rm bb}_V:V\to \overline{\overline V},\quad {\rm bb}_V(v)=\overline{\overline{\gamma.v}}\] and obeying $\overline{{\rm bb}_V}={\rm bb}_{\bar V}$. In our case, we have a strong bar category, which means also \[ \Upsilon_{\bar W,\bar V}\circ\overline{\Upsilon_{V,W}}\circ {\rm bb}_{V\mathop{{\otimes}} W}={\rm bb}_V\mathop{{\otimes}}{\rm bb}_W.\] Finally, a bar category has some conditions on the unit object $\underline 1$, which in our case is the trivial representation with these automatic. That $G=RK$ leads to a strong bar category is in \cite[Prop.~3.21]{BegMa:bar} but without the underlying $*$-quasi-Hopf algebra structure as found above. \begin{example}\label{exS3quasi} \rm {\sl (i) $\Xi(R,K)$ for $S_2\subset S_3$ with its standard transversal.} As an algebra, this is generated by $\mathbb{Z}_2$, which means by an element $u$ with $u^2=e$, and by $\delta_{0},\delta_{1},\delta_{2}$ for $\delta$-functions as the points of $R=\{e,uv,vu\}$. 
The relations are $\delta_i$ orthogonal and add to $1$, and cross relations \[ \delta_0u=u\delta_0,\quad \delta_1u=u\delta_2,\quad \delta_2u=u\delta_1.\] The dot product is the additive group $\mathbb{Z}_3$, i.e. addition mod 3. The coproducts etc are \[ \Delta \delta_i=\sum_{j+k=i}\delta_j\mathop{{\otimes}}\delta_k,\quad \Delta u=u\mathop{{\otimes}} u,\quad \phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1\] with addition mod 3. The cocycle and right action are trivial and the dot product is that of $\mathbb{Z}_3$ as a subgroup generated by $uv$. This gives an ordinary cross product Hopf algebra $\Xi=\mathbb{C}(\mathbb{Z}_3){>\!\!\!\triangleleft}\mathbb{C} \mathbb{Z}_2$. Here $S\delta_i=\delta_{-i}$ and $S u=u$. For the $*$-structure, the cocycle is trivial so $\gamma=1$ and $\hbox{{$\mathcal G$}}=1\mathop{{\otimes}} 1$ and we have an ordinary Hopf $*$-algebra. {\sl (ii) $\Xi(R,K)$ for $S_2\subset S_3$ with its second transversal.} For this $R$, the dot product is specified by $e$ the identity and $v\cdot w=w$, $w\cdot v=v$. The algebra has relations \[ \delta_e u=u\delta_e,\quad \delta_v u=u\delta_w,\quad \delta_w u=u\delta_v\] and the quasi-Hopf algebra coproducts etc. 
are \[ \Delta \delta_e=\delta_e\mathop{{\otimes}} \delta_e+\delta_v\mathop{{\otimes}}\delta_v+\delta_w\mathop{{\otimes}}\delta_w,\quad \Delta \delta_v=\delta_e\mathop{{\otimes}} \delta_v+\delta_v\mathop{{\otimes}}\delta_e+\delta_w\mathop{{\otimes}}\delta_v,\] \[ \Delta \delta_w=\delta_e\mathop{{\otimes}} \delta_w+\delta_w\mathop{{\otimes}}\delta_e+\delta_v\mathop{{\otimes}}\delta_w,\quad \Delta u=u\mathop{{\otimes}} u,\] \[ \phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1+(\delta_v \mathop{{\otimes}}\delta_w+\delta_w\mathop{{\otimes}}\delta_v )\mathop{{\otimes}} (u-1)=\phi^{-1}.\] The antipode is \[ S\delta_s=\delta_{s^R}=\delta_s,\quad S u=\sum_{s}\delta_{(u{\triangleright} s)^R}u=u,\quad \alpha=1,\quad \beta=\sum_s \delta_s\mathop{{\otimes}}\tau(s,s)=1\] from the antipode lemma, since the map $(\ )^R$ happens to be injective and indeed acts as the identity. In this case, we see that $\Xi(R,K)$ is nontrivially a quasi-Hopf algebra. Only the values $\tau(v,w)=\tau(w,v)=u$ are nontrivial, hence for the $*$-quasi-Hopf algebra structure, we have \[ \gamma=1,\quad \hbox{{$\mathcal G$}}=1\mathop{{\otimes}} 1+(\delta_v\mathop{{\otimes}}\delta_w+\delta_w\mathop{{\otimes}}\delta_v)(u\mathop{{\otimes}} u-1\mathop{{\otimes}} 1)\] with $\theta=*S$ acting as the identity on our basis, $\theta(x)=x$ and $\theta(\delta_s)=\delta_s$. \end{example} We also note that the algebras $\Xi(R,K)$ here are manifestly isomorphic for the two $R$, but the coproducts are different, so the tensor product of representations is different, although the results turn out to be isomorphic. The set of irreps does not change either, but how we construct them can look different. We will see in the next section that this is part of a monoidal equivalence of categories. \begin{example}\rm $S_2\subset S_3$ with its 2nd transversal.
Here $R$ has two orbits: (a) ${\hbox{{$\mathcal C$}}}=\{e\}$ with $r_0=e, K^{r_0}=K$ with two 1-dimensional irreps $V_\rho$ as $\rho$=trivial and $\rho={\rm sign}$, and hence two irreps of $\Xi(R,K)$; (b) ${\hbox{{$\mathcal C$}}}=\{w,v\}$ with $r_0=v$ or $r_0=w$, both with $K^{r_0}=\{e\}$ and hence only $\rho$ trivial, leading to one 2-dimensional irrep of $\Xi(R,K)$. So, altogether, there are again three irreps of $\Xi(R,K)$: \begin{align*} V_{(\{e\},\rho)}:& \quad \delta_r.1 =\delta_{r,e},\quad u.1 =\pm 1,\\ V_{(\{w,v\},1)}:& \quad \delta_r. v=\delta_{r,v}v,\quad \delta_r. w=\delta_{r,w}w,\quad u.v= w,\quad u.w=v \end{align*} acting on $\mathbb{C}$ and on the span of $v,w$ respectively. These irreps are equivalent to what we had in Example~\ref{exS3n} when computing irreps from the standard $R$. \end{example} \section{Categorical justification and twisting theorem}\label{sec:cat_just} We have shown that the boundaries can be defined using the action of the algebra $\Xi(R,K)$ and that one can use these boundaries to perform novel methods of fault-tolerant quantum computation. The full story, however, involves the quasi-Hopf algebra structure verified in the preceding section, and now we would like to connect back up to the category theory behind this. \subsection{$G$-graded $K$-bimodules.} We start by proving the equivalence ${}_{\Xi(R,K)}\hbox{{$\mathcal M$}} \simeq {}_K\hbox{{$\mathcal M$}}_K^G$ explicitly and use it to derive the coproduct studied in Section~\ref{sec:quasi}. Although this equivalence is known \cite{PS}, we believe ours to be a new and more direct derivation.
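The 2-dimensional irrep in the example above can be machine-checked against the algebra relations of $\Xi(R,K)$ from part (ii) of Example~\ref{exS3quasi}. A minimal sketch in Python (our own illustration, not part of the paper), with $\delta_e,\delta_v,\delta_w,u$ realised as $2\times 2$ matrices on the ordered basis $\{v,w\}$:

```python
# Minimal sketch (our own illustration): delta_e, delta_v, delta_w, u as 2x2
# matrices acting on the ordered basis {v, w} of the 2-dimensional irrep.

def mat_mul(a, b):
    """2x2 integer matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2  = [[1, 0], [0, 1]]
d_e = [[0, 0], [0, 0]]   # delta_e kills this orbit
d_v = [[1, 0], [0, 0]]   # delta_v projects onto v
d_w = [[0, 0], [0, 1]]   # delta_w projects onto w
U   = [[0, 1], [1, 0]]   # u swaps v and w

def check_relations():
    # the delta's are orthogonal idempotents summing to 1 on this module
    assert mat_mul(d_v, d_w) == [[0, 0], [0, 0]]
    assert mat_mul(d_v, d_v) == d_v and mat_mul(d_w, d_w) == d_w
    assert [[d_e[i][j] + d_v[i][j] + d_w[i][j] for j in range(2)]
            for i in range(2)] == I2
    # u^2 = e and the cross relations of Xi(R,K)
    assert mat_mul(U, U) == I2
    assert mat_mul(d_v, U) == mat_mul(U, d_w)
    assert mat_mul(d_w, U) == mat_mul(U, d_v)
    assert mat_mul(d_e, U) == mat_mul(U, d_e)
    return True
```

The checks mirror the relations $\delta_v u=u\delta_w$ etc.\ stated for the second transversal.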
\begin{lemma} If $V_\rho$ is a $K^{r_0}$-module and $V_{\hbox{{$\mathcal O$}},\rho}$ the associated $\Xi(R,K)$ irrep, then \[ \tilde V_{\hbox{{$\mathcal O$}},\rho}= V_{\hbox{{$\mathcal O$}},\rho}\mathop{{\otimes}} \mathbb{C} K,\quad x.(r\mathop{{\otimes}} v\mathop{{\otimes}} z).y=x{\triangleright} r\mathop{{\otimes}}\zeta_r(x).v\mathop{{\otimes}} (x{\triangleleft} r)zy,\quad |r\mathop{{\otimes}} v\mathop{{\otimes}} z|=rz\] is a $G$-graded $K$-bimodule. Here $r\in \hbox{{$\mathcal O$}}$ and $v\in V_\rho$ in the construction of $V_{\hbox{{$\mathcal O$}},\rho}$. \end{lemma} {\noindent {\bfseries Proof:}\quad } That this is a $G$-graded right $K$-module commuting with the left action of $K$ is trivial. That the left action works and is $G$-graded is \begin{align*}x.(y.(r\mathop{{\otimes}} v\mathop{{\otimes}} z))&=x.(y{\triangleright} r\mathop{{\otimes}} \zeta_r(y).v\mathop{{\otimes}} (y{\triangleleft} r)z)= xy{\triangleright} r\mathop{{\otimes}} \zeta_r(xy).v\mathop{{\otimes}} (x{\triangleleft}(y{\triangleright} r))(y{\triangleleft} r)z\\ &=xy{\triangleright} r\mathop{{\otimes}} \zeta_r(xy).v\mathop{{\otimes}} ((xy){\triangleleft} r)z\end{align*} and \[ |x.(r\mathop{{\otimes}} v\mathop{{\otimes}} z).y|=(x{\triangleright} r) (x{\triangleleft} r)zy= xrzy=x|r\mathop{{\otimes}} v \mathop{{\otimes}} z|y.\] \endproof \begin{remark}\rm Recall that we can also think more abstractly of $\Xi=\mathbb{C}(G/K){>\!\!\!\triangleleft} \mathbb{C} K$ rather than using a transversal. In these terms, a representation of $\Xi(R,K)$ as an $R$-graded $K$-module $V$ such that $|x.v|=x{\triangleleft} |v|$ now becomes a $G/K$-graded $K$-module such that $|x.v|=x|v|$, where $|v|\in G/K$ and we multiply from the left by $x\in K$. Moreover, the role of an orbit $\hbox{{$\mathcal O$}}$ above is played by a double coset $T=\hbox{{$\mathcal O$}} K\in {}_KG_K$. 
In these terms, the role of the isotropy group $K^{r_0}$ is played by \[ K^{r_T}:=K\cap r_T K r_T^{-1}, \] where $r_T$ is any representative of the same double coset. One can take $r_T=r_0$ but we can also choose it more freely. Then an irrep is given by a double coset $T$ and an irreducible representation $\rho_T$ of $K^{r_T}$. If we denote by $V_{\rho_T}$ the carrier space for this, then the associated irrep of $\mathbb{C}(G/K){>\!\!\!\triangleleft}\mathbb{C} K$ is $V_{T,\rho_T}=\mathbb{C} K\mathop{{\otimes}}_{K^{r_T}}V_{\rho_T}$ which is manifestly a $K$-module and we give it the $G/K$-grading by $|x\mathop{{\otimes}}_{K^{r_T}} v|=xK$. The construction in the last lemma is then equivalent to \[ \tilde V_{T,\rho_T}=\mathbb{C} K\mathop{{\otimes}}_{K^{r_T}} V_{\rho_T}\mathop{{\otimes}}\mathbb{C} K,\quad |x\mathop{{\otimes}}_{K^{r_T}} v\mathop{{\otimes}} z|=xz\] as manifestly a $G$-graded $K$-bimodule. This is an equivalent point of view, but we prefer our more explicit one based on $R$, hence details are omitted. \end{remark} Also note that the category ${}_K\hbox{{$\mathcal M$}}_K^G$ of $G$-graded $K$-bimodules has an obvious monoidal structure inherited from that of $K$-bimodules, where we tensor product over $\mathbb{C} K$. Here $|w\mathop{{\otimes}}_{\mathbb{C} K} w'|=|w||w'|$ in $G$ is well-defined and $x.(w\mathop{{\otimes}}_{\mathbb{C} K}w').y=x.w\mathop{{\otimes}}_{\mathbb{C} K} w'.y$ has degree $x|w||w'|y=x|w\mathop{{\otimes}}_{\mathbb{C} K}w'|y$ as required. \begin{proposition} \label{prop:mon_equiv} We let $R$ be a transversal and $W=V\mathop{{\otimes}} \mathbb{C} K$ made into a $G$-graded $K$-bimodule by \[ x.(v\mathop{{\otimes}} z).y=x.v\mathop{{\otimes}} (x{\triangleleft}|v|)zy, \quad |v\mathop{{\otimes}} z|= |v|z\in G,\] where now we view $|v|\in R$ as the chosen representative of $|v|\in G/K$.
This gives a functor $F:{}_\Xi\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ which is a monoidal equivalence for a suitable quasibialgebra structure on $\Xi(R,K)$. The latter depends on $R$ since $F$ depends on $R$. \end{proposition} {\noindent {\bfseries Proof:}\quad } We define $F(V)$ as stated, which is clearly a right module that commutes with the left action, and the latter is a module structure as \[ x.(y.(v\mathop{{\otimes}} z))=x.(y.v\mathop{{\otimes}} (y{\triangleleft} |v|)z)=xy.v\mathop{{\otimes}} (x{\triangleleft} (y{\triangleright} |v|))(y{\triangleleft} |v|)z=(xy).(v\mathop{{\otimes}} z)\] using the matched pair axiom for $(xy){\triangleleft} |v|$. We also check that $|x.(v\mathop{{\otimes}} z).y|=|x.v|zy=(x{\triangleright} |v|)(x{\triangleleft} |v|)zy=x|v|zy=x|v\mathop{{\otimes}} z|y$. Hence, we have a $G$-graded $K$-bimodule. Conversely, if $W$ is a $G$-graded $K$-bimodule, we let \[ V=\{w\in W\ |\ |w|\in R\},\quad x.v=xv(x{\triangleleft} |v|)^{-1},\quad \delta_r.v=\delta_{r,|v|}v,\] where $v$ on the right is viewed in $W$ and we use the $K$-bimodule structure. This is arranged so that $x.v$ on the left lives in $V$. Indeed, $|x.v|=x|v|(x{\triangleleft} |v|)^{-1}=x{\triangleright} |v|$ and $x.(y.v)=xyv(y{\triangleleft} |v|)^{-1}(x{\triangleleft}(y{\triangleright} |v|))^{-1}=xyv((xy){\triangleleft} |v|)^{-1}$ by the matched pair condition, as required for a representation of $\Xi(R,K)$. One can check that this is inverse to the other direction. Thus, given $W=\oplus_{rx\in G}W_{rx}=\oplus_{x\in K} W_{Rx}$, where we let $W_{Rx}=\oplus_{r\in R}W_{rx}$, the right action by $x\in K$ gives an isomorphism $W_{Rx}{\cong} V\mathop{{\otimes}} x$ as vector spaces and hence recovers $W=V\mathop{{\otimes}}\mathbb{C} K$. 
This clearly has the correct right $K$-action and from the left $x.(v\mathop{{\otimes}} z)=xv(x{\triangleleft}|v|)^{-1}\mathop{{\otimes}} (x{\triangleleft}|v|)z$, which under the identification maps to $xv(x{\triangleleft}|v|)^{-1} (x{\triangleleft}|v|)z=xvz\in W$ as required given that $v\mathop{{\otimes}} z$ maps to $vz$ in $W$. Now, if $V,V'$ are $\Xi(R,K)$ modules then as vector spaces, \[ F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')=(V\mathop{{\otimes}} \mathbb{C} K)\mathop{{\otimes}}_{\mathbb{C} K} (V'\mathop{{\otimes}} \mathbb{C} K)=V\mathop{{\otimes}} V'\mathop{{\otimes}} \mathbb{C} K{\buildrel f_{V,V'}\over{\cong}}F(V\mathop{{\otimes}} V')\] by the obvious identifications except that in the last step we allow ourselves the possibility of a nontrivial isomorphism as vector spaces. For the actions on the two sides, \[ x.(v\mathop{{\otimes}} v'\mathop{{\otimes}} z).y=x.(v\mathop{{\otimes}} v')\mathop{{\otimes}} (x{\triangleleft} |v\mathop{{\otimes}} v'|)zy= x.v\mathop{{\otimes}} (x{\triangleleft} |v|).v'\mathop{{\otimes}} ((x{\triangleleft}|v|){\triangleleft}|v'|)zy,\] where on the right, we have $x.(v\mathop{{\otimes}} 1)=x.v \mathop{{\otimes}} x{\triangleleft}|v|$ and then take $x{\triangleleft}|v|$ via the $\mathop{{\otimes}}_{\mathbb{C} K}$ to act on $v'\mathop{{\otimes}} z$ as per our identification. Comparing the $x$ action on the $V\mathop{{\otimes}} V'$ factor, we need \[\Delta x=\sum_{r\in R}x\delta_r\mathop{{\otimes}} x{\triangleleft} r= \sum_{r\in R}\delta_{x{\triangleright} r}\mathop{{\otimes}} x \mathop{{\otimes}} 1\mathop{{\otimes}} x{\triangleleft} r\] as a modified coproduct without requiring a nontrivial $f_{V,V'}$ for this to work. The first expression is viewed in $\Xi(R,K)^{\mathop{{\otimes}} 2}$ and the second is on the underlying vector space. 
Likewise, looking at the grading of $F(V\mathop{{\otimes}} V')$ and comparing with the grading of $F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')$, we need to define $|v\mathop{{\otimes}} v'|=|v|\cdot|v'|\in R$ and use $|v|\cdot|v'|\tau(|v|,|v'|)=|v||v'|$ to match the degree on the left hand side. This amounts to the coproduct of $\delta_r$ in $\Xi(R,K)$, \[ \Delta\delta_r=\sum_{s\cdot t=r}\delta_s\mathop{{\otimes}}\delta_t=\sum_{s\cdot t=r} \delta_s\mathop{{\otimes}} 1\mathop{{\otimes}} \delta_t \mathop{{\otimes}} 1\] {\em and} a further isomorphism \[ f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)= v\mathop{{\otimes}} v'\mathop{{\otimes}}\tau(|v|,|v'|)z\] on the underlying vector space. After applying this, the degree of this element is $|v\mathop{{\otimes}} v'|\tau(|v|,|v'|)z=|v||v'|z=|v\mathop{{\otimes}} 1||v'\mathop{{\otimes}} z|$, which is the degree on the original $F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')$ side. Now we show that $f_{V,V'}$ respects associators on each side of $F$. 
Taking the associator on the $\Xi(R,K)$-module side as \[ \phi_{V,V',V''}:(V\mathop{{\otimes}} V')\mathop{{\otimes}} V''\to V\mathop{{\otimes}}(V'\mathop{{\otimes}} V''),\quad \phi_{V,V',V''}((v\mathop{{\otimes}} v')\mathop{{\otimes}} v'')=\phi^1.v\mathop{{\otimes}} (\phi^2.v'\mathop{{\otimes}}\phi^3.v'')\] and $\phi$ trivial on the $G$-graded $K$-bimodule side, for $F$ to be monoidal with the stated $f_{V,V'}$ etc, we need \begin{align*} F(\phi_{V,V',V''})&f_{V\mathop{{\otimes}} V',V''}f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} z)\\ &=F(\phi_{V,V',V''})f_{V\mathop{{\otimes}} V',V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}} (\tau(|v|,|v'|){\triangleleft}|v''|)z)\\ &=F(\phi_{V,V',V''})(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}}\tau(|v|\cdot|v'|,\tau(|v|,|v'|){\triangleright} |v''|)(\tau(|v|,|v'|){\triangleleft}|v''|)z)\\ &=F(\phi_{V,V',V''})(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}} \tau(|v|,|v'|\cdot|v''|)\tau(|v'|,|v''|)z),\\ f_{V,V'\mathop{{\otimes}} V''}&f_{V',V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} z)=f_{V,V'\mathop{{\otimes}} V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} \tau(|v'|,|v''|)z) \\ &=v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}}\tau(|v|,|v'\mathop{{\otimes}} v''|)\tau(|v'|,|v''|)z =v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}}\tau(|v|,|v'|\cdot|v''|)\tau(|v'|,|v''|)z,\end{align*} where for the first equality we moved $\tau(|v|,|v'|)$ in the output of $f_{V,V'}$ via $\mathop{{\otimes}}_{\mathbb{C} K}$ to act on the $v''$. We used the cocycle property of $\tau$ for the 3rd equality.
Comparing results, we need \[ \phi_{V,V',V''}((v\mathop{{\otimes}} v')\mathop{{\otimes}} v'')=v\mathop{{\otimes}}( v'\mathop{{\otimes}} \tau(|v|,|v'|)^{-1}.v''),\quad \phi=\sum_{s,t\in R}(\delta_s\mathop{{\otimes}} 1)\mathop{{\otimes}}(\delta_t\mathop{{\otimes}} 1)\mathop{{\otimes}} (1\mathop{{\otimes}} \tau(s,t)^{-1}).\] Note that we can write \[ f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)=(\sum_{s,t\in R}(\delta_s\mathop{{\otimes}} 1)\mathop{{\otimes}}(\delta_t\mathop{{\otimes}} 1)\mathop{{\otimes}} \tau(s,t)).(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)\] but we are not saying that $\phi$ is a coboundary since this is not given by the action of an element of $\Xi(R,K)^{\mathop{{\otimes}} 2}$. \endproof This derives the quasibialgebra structure on $\Xi(R,K)$ used in Section~\ref{sec:quasi} but now so as to obtain an equivalence of categories. \subsection{Drinfeld twists induced by change of transversal} We recall that if $H$ is a quasi-Hopf algebra and $\chi\in H\mathop{{\otimes}} H$ is a {\em cochain} in the sense of invertible and $(\mathrm{id}\mathop{{\otimes}} {\epsilon})\chi=({\epsilon}\mathop{{\otimes}}\mathrm{id})\chi=1$, then its {\em Drinfeld twist} $\bar H$ is another quasi-Hopf algebra \[ \bar\Delta=\chi^{-1}\Delta(\ )\chi,\quad \bar\phi=\chi_{23}^{-1}((\mathrm{id}\mathop{{\otimes}}\Delta)\chi^{-1})\phi ((\Delta\mathop{{\otimes}}\mathrm{id})\chi)\chi_{12},\quad \bar{\epsilon}={\epsilon}\] \[ \bar S=S,\quad\bar\alpha=(S\chi^1)\alpha\chi^2,\quad \bar\beta=(\chi^{-1})^1\beta S(\chi^{-1})^2\] where $\chi=\chi^1\mathop{{\otimes}}\chi^2$ with a sum of such terms understood and we use the same notation for $\chi^{-1}$, see \cite[Thm.~2.4.2]{Ma:book} but note that our $\chi$ is denoted $F^{-1}$ there.
In categorical terms, this twist corresponds to a monoidal equivalence $G:{}_{H}\hbox{{$\mathcal M$}}\to {}_{H^\chi}\hbox{{$\mathcal M$}}$ which is the identity on objects and morphisms but has a nontrivial natural transformation \[ g_{V,V'}:G(V)\bar\mathop{{\otimes}} G(V'){\cong} G(V\mathop{{\otimes}} V'),\quad g_{V,V'}(v\mathop{{\otimes}} v')= \chi^1.v\mathop{{\otimes}}\chi^2.v'.\] The next theorem follows by the above reconstruction arguments, but here we check it directly. The logic is that for different $R,\bar R$ the categories of modules are both monoidally equivalent to ${}_K\hbox{{$\mathcal M$}}_K^G$, and hence monoidally equivalent to each other, but not in a manner that is compatible with the forgetful functor to Vect. Hence these should be related by a cochain twist. \begin{theorem}\label{thmtwist} Let $R,\bar R$ be two transversals with $\bar r\in\bar R$ representing the same coset as $r\in R$. Then $\Xi(\bar R,K)$ is a cochain twist of $\Xi(R,K)$ at least as quasi-bialgebras (and as quasi-Hopf algebras if one of them is). The Drinfeld cochain is $\chi=\sum_{r\in R}(\delta_r\mathop{{\otimes}} 1)\mathop{{\otimes}} (1\mathop{{\otimes}} r^{-1}\bar r)$. \end{theorem} {\noindent {\bfseries Proof:}\quad } Let $R,\bar R$ be two transversals. Then for each $r\in R$, the class $rK$ has a unique representative $\bar r$ with $\bar r\in \bar R$. Hence $\bar r= r c_r$ for some function $c:R\to K$ determined by the two transversals as $c_r=r^{-1}\bar r$ in $G$. One can show that the cocycle matched pairs are related by \[ x\bar{\triangleright} \bar r=(x{\triangleright} r)c_{x{\triangleright} r},\quad x\bar{\triangleleft} \bar r= c_{x{\triangleright} r}^{-1}(x{\triangleleft} r)c_r\] among other identities.
On using \begin{align*} \bar s\bar t=sc_s tc_t=s (c_s{\triangleright} t)(c_s{\triangleleft} t)c_t&= (s\cdot c_s{\triangleright} t)\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t\\&=\overline{ s\cdot (c_s{\triangleright} t)}c_{s\cdot c_s{\triangleright} t}^{-1}\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t\end{align*} and factorising using $\bar R$, we see that \begin{equation}\label{taucond} \bar s\, \bar\cdot\, \bar t= \overline{s\cdot c_s{\triangleright} t},\quad\bar\tau(\bar s,\bar t)=c_{s\cdot c_s{\triangleright} t}^{-1}\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t.\end{equation} We will construct a monoidal functor $G:{}_{\Xi(R,K)}\hbox{{$\mathcal M$}}\to {}_{\Xi(\bar R,K)}\hbox{{$\mathcal M$}}$ with $g_{V,V'}(v\mathop{{\otimes}} v')= \chi^1.v\mathop{{\otimes}}\chi^2.v'$ for a suitable $\chi\in \Xi(R,K)^{\mathop{{\otimes}} 2}$. First, let $F:{}_{\Xi(R,K)}\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ be the monoidal functor above with natural isomorphism $f_{V,V'}$ and $\bar F:{}_{\Xi(\bar R,K)}\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ the parallel for $\Xi(\bar R,K)$ with isomorphism $\bar f_{V,V'}$. Then \[ C:F\to \bar F\circ G,\quad C_V:F(V)=V\mathop{{\otimes}}\mathbb{C} K\to V\mathop{{\otimes}} \mathbb{C} K=\bar FG(V),\quad C_V(v\mathop{{\otimes}} z)=v\mathop{{\otimes}} c_{|v|}^{-1}z\] is a natural isomorphism. 
To check this, we denote the $\bar R$-grading by $||\ ||$; then on the right, the $G$-grading and $K$-bimodule structure obey \begin{align*} |C_V(v\mathop{{\otimes}} z)|&= |v\mathop{{\otimes}} c_{|v|}^{-1}z|= ||v||c_{|v|}^{-1}z=|v|z=|v\mathop{{\otimes}} z|,\\ x.C_V(v\mathop{{\otimes}} z).y&=x.(v\mathop{{\otimes}} c_{|v|}^{-1}z).y=x.v\mathop{{\otimes}} (x\bar{\triangleleft} ||v||)c_{|v|}^{-1}zy=x.v \mathop{{\otimes}} c_{x{\triangleright} |v|}^{-1} (x{\triangleleft} |v|)zy\\ &= C_V(x.(v\mathop{{\otimes}} z).y).\end{align*} We want these two functors to not only be naturally isomorphic but for this to respect that they are both monoidal functors. Here $\bar F\circ G$ has the natural isomorphism \[ \bar f^g_{V,V'}= \bar F(g_{V,V'})\circ \bar f_{G(V),G(V')}\] by which it is a monoidal functor. The condition for a natural isomorphism $C$ between monoidal functors to be monoidal is that $C$ behaves in the obvious way on tensor product objects via the natural isomorphisms associated to each monoidal functor. In our case, this means \[ \bar f^g_{V,V'}\circ (C_{V}\mathop{{\otimes}} C_{V'}) = C_{V\mathop{{\otimes}} V'}\circ f_{V,V'}: F(V)\mathop{{\otimes}} F(V')\to \bar F G(V\mathop{{\otimes}} V').\] Putting in the specific form of these maps, the right hand side is \[C_{V\mathop{{\otimes}} V'}\circ f_{V,V'}(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K v'\mathop{{\otimes}} z)=C_{V\mathop{{\otimes}} V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|)z)=v\mathop{{\otimes}} v'\mathop{{\otimes}} c^{-1}_{|v\mathop{{\otimes}} v'|}\tau(|v|,|v'|)z,\] while the left hand side is \begin{align*}\bar f^g_{V,V'}\circ (C_{V}\mathop{{\otimes}} C_{V'})&(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K v'\mathop{{\otimes}} z)=\bar f^g_{V,V'}(v\mathop{{\otimes}} c^{-1}_{|v|}\mathop{{\otimes}}_K v'\mathop{{\otimes}} c^{-1}_{|v'|}z)\\ &=\bar f^g_{V,V'}(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K c^{-1}_{|v|}.v'\mathop{{\otimes}} (c^{-1}_{|v|}\bar{\triangleright} ||v'||)c^{-1}_{|v'|}z)\\ &=\bar
F(g_{V,V'})(v\mathop{{\otimes}} c^{-1}_{|v|}.v'\mathop{{\otimes}} \bar\tau(||v||,||c^{-1}_{|v|}.v'||)(c^{-1}_{|v|}\bar{\triangleright}||v'||)c^{-1}_{|v'|}z)\\ &=\bar F(g_{V,V'})(v\mathop{{\otimes}} c^{-1}_{|v|}.v'\mathop{{\otimes}} c^{-1}_{|v\mathop{{\otimes}} v'|}\tau(|v|,|v'|)z), \end{align*} using the second of (\ref{taucond}) and $|v\mathop{{\otimes}} v'|=|v|\cdot|v'|$. We also used $\bar f^g_{V,V'}=\bar F(g_{V,V'})\bar f_{G(V),G(V')}:\bar FG(V)\mathop{{\otimes}} \bar FG(V')\to \bar FG(V\mathop{{\otimes}} V')$. Comparing, we need $\bar F(g_{V,V'})$ to be the action of the element \[ \chi=\sum_{r\in R} \delta_r\mathop{{\otimes}} c_r\in \Xi(R,K)^{\mathop{{\otimes}} 2}.\] It follows from the arguments, but one can also check directly, that $\phi$ indeed twists as stated to $\bar\phi$ when these are given by Lemma~\ref{Xibialg}, again using (\ref{taucond}). \endproof The twisting of a quasi-Hopf algebra is again one. Hence, we have: \begin{corollary}\label{twistant} If $R$ has $(\ )^R$ bijective giving a quasi-Hopf algebra with regular antipode $S,\alpha=1,\beta$ as in Proposition~\ref{standardS} and $\bar R$ is another transversal then $\Xi(\bar R,K)$ in the twisting form of Theorem~\ref{thmtwist} has an antipode \[ \bar S=S,\quad \bar \alpha=\sum_r \delta_{r^R} c_r ,\quad \bar \beta =\sum_r \delta_r \tau(r,r^R)(c_r^{-1}{\triangleleft} r^R)^{-1} . \] This is a regular antipode if $(\ )^R$ for $\bar R$ is also bijective (i.e. $\bar\alpha$ is then invertible and can be transformed back to standard form to make it 1).\end{corollary} {\noindent {\bfseries Proof:}\quad } We work with the initial quasi-Hopf algebra $\Xi(R,K)$ and ${\triangleright},{\triangleleft},\tau$ refer to this but note that $\Xi(\bar R,K)$ is the same algebra when $\delta_r$ is identified with the corresponding $\delta_{\bar r}$.
Then \begin{align*}\bar \alpha&=(S\chi^{1})\chi^{2}=\sum_r (S\delta_r) c_r=\sum_r\delta_{r^R}c_r\end{align*} using the formula for $S\delta_r=\delta_{r^R}$ in Proposition~\ref{standardS}. Similarly, $\chi^{-1}=\sum_r \delta_r\mathop{{\otimes}} c_r^{-1}$ and we use $S,\beta$ from the above lemma, where \[ S (1\mathop{{\otimes}} x)= \sum_s \delta_{(x^{-1}{\triangleright} s)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} s=\sum_t\delta_{t^R}\mathop{{\otimes}} x^{-1}{\triangleleft}(x{\triangleright} t)=\sum_t\delta_{t^R}\mathop{{\otimes}} (x{\triangleleft} t)^{-1}.\] Then \begin{align*} \bar \beta &=(\chi^{-1})^1\beta S(\chi^{-1})^2=\sum_{r,s,t}\delta_r\delta_s\tau(s,s^R)\delta_{t^R}(c_r^{-1}{\triangleleft} t)^{-1}\\ &=\sum_{r,t} \delta_r\tau(r,r^R)\delta_{t^R}(c_r^{-1}{\triangleleft} t)^{-1}=\sum_{r,t}\delta_r\delta_{\tau(r,r^R){\triangleright} t^R}\tau(r,r^R) (c_r^{-1}{\triangleleft} t)^{-1}.\end{align*} Commuting the $\delta$-functions to the left requires $r=\tau(r,r^R){\triangleright} t^R$ or $r^{RR}=\tau(r,r^R)^{-1}{\triangleright} r= t^R$ so $t=r^R$ under our assumptions, giving the answer stated. If $(\ )^R$ is bijective then $\bar\alpha^{-1}=\sum_r c_r^{-1}\delta_{r^R}=\sum_r \delta_{c_r^{-1}{\triangleright} r^R}c_r^{-1}$ provides the left inverse. On the other side, we need $c_r^{-1}{\triangleright} r^R= c_s^{-1}{\triangleright} s^R$ iff $r=s$. This is true if $(\ )^{R}$ for $\bar R$ is also bijective. That is because, if we write $(\ )^{\bar R}$ for the right inverse with respect to $\bar R$, one can show by comparing the factorisations that \[ \bar s^{\bar R}=\overline{c_s^{-1}{\triangleright} s^R},\quad \overline{s^R}=c_s\bar{\triangleright} \bar s^{\bar R}\] and we use the first of these. \endproof \begin{example}\rm With reference to the list of transversals for $S_2\subset S_3$, we have four quasi-Hopf algebras of which two were already computed in Example~\ref{exS3quasi}.
{\sl (i) 2nd transversal as twist of the first.} Here $\bar\Xi$ is generated by $\mathbb{Z}_2$ as $u$ again and $\delta_{\bar r}$ with $\bar R=\{e,w,v\}$. We have the same cosets represented by these with $\bar e=e$, $\overline{uv}=w$ and $\overline{vu}=v$, which means $c_e=e, c_{vu}=u, c_{uv}=u$. To compare the algebras in the two cases, we identify $\delta_0=\delta_e,\delta_1=\delta_w, \delta_2=\delta_v$ as delta-functions on $G/K$ (rather than on $G$) in order to identify the algebras of $\bar\Xi$ and $\Xi$. The cochain from Theorem~\ref{thmtwist} is \[ \chi=\delta_e\mathop{{\otimes}} e+(\delta_{vu}+\delta_{uv})\mathop{{\otimes}} u=\delta_0\mathop{{\otimes}} 1+ (\delta_1+\delta_2)\mathop{{\otimes}} u=\delta_0\mathop{{\otimes}} 1+ (1-\delta_0)\mathop{{\otimes}} u \] as an element of $\Xi\mathop{{\otimes}}\Xi$. One can check that this conjugates the two coproducts as claimed. We also have \[ \chi^2=1\mathop{{\otimes}} 1,\quad ({\epsilon}\mathop{{\otimes}}\mathrm{id})\chi=(\mathrm{id}\mathop{{\otimes}}{\epsilon})\chi=1.\] We spot check (\ref{taucond}), for example $v\bar\cdot w=\overline{vu}\, \bar\cdot\, \overline{uv}=\overline{uv}=\overline{vuvu}=\overline{vu( u{\triangleright} (uv))}$, as it had to be. We should therefore find that \[((\Delta\mathop{{\otimes}}\mathrm{id})\chi)\chi_{12}=((\mathrm{id}\mathop{{\otimes}}\Delta)\chi)\chi_{23}\bar\phi. \] We have checked directly that this indeed holds. Next, the antipode of the first transversal should twist to \[ \bar S=S,\quad \bar\alpha=\delta_e c_e+\delta_{uv}c_{vu}+\delta_{vu}c_{uv}=\delta_e(e-u)+u=\delta_e c_e+\delta_{vu}c_{vu}+\delta_{uv}c_{uv}=\bar\beta\] by Corollary~\ref{twistant} for twisting the antipode. Here, $U=\bar\alpha^{-1}=\bar\beta = U^{-1}$ and $\bar S'=U(S\ )U^{-1}$ with $\bar\alpha'=\bar\beta'=1$ should also be an antipode. 
We can check this: \[U u = (\delta_0(e-u)+u)u = \delta_0(u-e)+e = u(\delta_{u^{-1}{\triangleright} 0}(e-u)+u) = u U\] so $\bar S' u = UuU^{-1} = u$, and \[\bar S' \delta_1 = U(S\delta_1)U= U\delta_2 U = (\delta_0(e-u)+u)\delta_2(\delta_0(e-u)+u) = \delta_1.\] \bigskip {\sl (ii) 3rd transversal as a twist of the first.} A mixed up choice is $\bar R=\{e,uv,v\}$ which is not a subgroup so $\tau$ is nontrivial. One has \[ \tau(uv,uv)=\tau(v,uv)=\tau(uv,v)=u,\quad \tau(v,v)=e,\quad v.v=e,\quad v.uv=uv,\quad uv.v=e,\quad uv.uv=v,\] \[ u{\triangleright} v=uv,\quad u{\triangleright} (uv)=v,\quad u{\triangleleft} v=e,\quad u{\triangleleft} uv=e\] and all other cases implied from the properties of $e$. Here $v^R=v$ and $(uv)^R=v$. These are with respect to $\bar R$, but note that twisting calculations will take place with respect to $R$. Writing $\delta_0=\delta_e,\delta_1=\delta_{uv},\delta_2=\delta_v$ we have the same algebra as before (as we had to) and now the coproduct etc., \[ \bar\Delta u=u\mathop{{\otimes}} 1+\delta_0u\mathop{{\otimes}} (u-1),\quad \bar\Delta\delta_0=\delta_0\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_2+\delta_1\mathop{{\otimes}}\delta_2 \] \[ \bar\Delta\delta_1=\delta_0\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_1,\quad \bar\Delta\delta_2=\delta_0\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_1,\] \[ \bar\phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1+ (\delta_1\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_1)\mathop{{\otimes}}(u-1)=\bar\phi^{-1}\] for the quasibialgebra. We used the $\tau,{\triangleright},{\triangleleft},\cdot$ for $\bar R$ for these direct calculations.
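The dot product, cocycle and action tables just listed can be verified mechanically by factorising products of permutations through $\bar R=\{e,uv,v\}$. A minimal sketch in Python (our own illustration; we realise $u=(12)$, $v=(23)$ as permutations of $\{0,1,2\}$):

```python
# Minimal sketch (our own illustration): u = (12), v = (23) as permutations of
# {0,1,2}; factorise any g in S_3 as g = r k with r in Rbar = {e, uv, v}, k in K.

def comp(p, q):
    """(p q)(x) = p(q(x)): apply q first, then p."""
    return tuple(p[q[x]] for x in range(3))

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv = comp(u, v)
K = [e, u]
Rbar = {"e": e, "uv": uv, "v": v}

def factor(g):
    """Unique factorisation g = r k through the transversal Rbar."""
    for name, r in Rbar.items():
        for k in K:
            if comp(r, k) == g:
                return name, k
    raise ValueError(g)

def dot(s, t):
    """Quasigroup product: the Rbar-part of the group product st."""
    return factor(comp(Rbar[s], Rbar[t]))[0]

def tau(s, t):
    """Cocycle: the K-part of the group product st."""
    return factor(comp(Rbar[s], Rbar[t]))[1]
```

Factorising $ur$ for $r\in\bar R$ in the same way recovers the stated actions $u{\triangleright} v=uv$, $u{\triangleright}(uv)=v$ with trivial $u{\triangleleft}$.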
Now we consider twisting with \[ c_0=e,\quad c_1=(uv)^{-1}uv=1,\quad c_2=v^{-1}vu=u,\quad \chi=1\mathop{{\otimes}} 1+ \delta_2\mathop{{\otimes}} (u-1)=\chi^{-1}\] and check twisting the coproducts \[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(u\mathop{{\otimes}} u)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=u\mathop{{\otimes}} 1+\delta_0u\mathop{{\otimes}}(u-1)=\bar\Delta u, \] \[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_1)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_0,\] \[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_2)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_1,\] \[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_1)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_2.\] One can also check that (\ref{taucond}) holds, e.g. for the first half, \[ \bar 2=\bar 1\bar\cdot\bar 1=\overline{1+c_1{\triangleright} 1}=\overline{1+1},\quad \bar 0=\bar 1\bar\cdot\bar 2=\overline{1+c_1{\triangleright} 2}=\overline{1+2},\] \[ \bar 1=\bar2\bar\cdot\bar 1=\overline{2+c_2{\triangleright} 1}=\overline{2+2},\quad \bar 0=\bar2\bar\cdot\bar 2=\overline{2+c_2{\triangleright} 2}=\overline{2+1}\] as it must. Now we apply the twisting of antipodes in Corollary~\ref{twistant}, remembering to do calculations now with $R$ where $\tau,{\triangleleft}$ are trivial, to get \[ \bar S=S,\quad \bar\alpha=\delta_0+\delta_1c_2+\delta_2c_1=1+\delta_1(u-1),\quad \bar\beta=\delta_0+\delta_2c_2+\delta_1c_1=1+\delta_2(u-1),\] which obey $\bar\alpha^2=\bar\alpha$ and $\bar\beta^2=\bar\beta$ and are therefore not (left or right) invertible.
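The coproduct twisting checks displayed above can also be automated. A minimal sketch in Python (our own illustration): we realise $\Xi$ on the basis $\delta_i u^a$ with product $\delta_i u^a\,\delta_j u^b=\delta_{i,u^a{\triangleright} j}\,\delta_i u^{a+b}$ and conjugate by $\chi=1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}}(u-1)$:

```python
# Minimal sketch (our own illustration): Xi = C(Z_3) >< C Z_2 on the basis
# e[i,a] = delta_i u^a, with u acting on the labels of R = Z_3 by swapping 1,2.

FLIP = {0: 0, 1: 2, 2: 1}  # u |> j on labels

def mul_basis(x, y):
    """Product of basis elements delta_i u^a and delta_j u^b (or None if zero)."""
    (i, a), (j, b) = x, y
    jj = FLIP[j] if a else j
    return (i, (a + b) % 2) if i == jj else None

def add(*terms):
    out = {}
    for t in terms:
        for k, c in t.items():
            out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

def scal(c, X):
    return {k: c * w for k, w in X.items()}

def tens(X, Y):
    return {(kx, ky): cx * cy for kx, cx in X.items() for ky, cy in Y.items()}

def tmul(X, Y):
    """Product in Xi (x) Xi of dicts keyed by pairs of basis labels."""
    out = {}
    for (x1, x2), cx in X.items():
        for (y1, y2), cy in Y.items():
            l, r = mul_basis(x1, y1), mul_basis(x2, y2)
            if l is not None and r is not None:
                out[(l, r)] = out.get((l, r), 0) + cx * cy
    return {k: c for k, c in out.items() if c}

one = {(i, 0): 1 for i in range(3)}    # 1 = sum_i delta_i
u_el = {(i, 1): 1 for i in range(3)}   # u
delta = lambda i: {(i, 0): 1}

# chi = 1 (x) 1 + delta_2 (x) (u - 1), equal to its own inverse
chi = add(tens(one, one), tens(delta(2), add(u_el, scal(-1, one))))

# twist Delta(u) = u (x) u of the first (Hopf) transversal
bar_Du = tmul(tmul(chi, tens(u_el, u_el)), chi)
# expected: u (x) 1 + delta_0 u (x) (u - 1)
expected = add(tens(u_el, one), tens({(0, 1): 1}, add(u_el, scal(-1, one))))
```

Here `bar_Du` agrees with `expected`, reproducing $\bar\Delta u$ for the third transversal, and the same machinery handles the $\bar\Delta\delta_i$.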
Hence, we cannot set either equal to 1 by $U$ and there is an antipode, but it is not regular. One can check the antipode indeed works: \begin{align*}(Su)\bar\alpha+ (Su) (S\delta_0)\bar\alpha(u-1)&=u(1+\delta_1(u-1))+\delta_0 u(1+\delta_1(u-1))(u-1)\\ &=u+\delta_2(1-u)+\delta_0(1-u)=u+(1-\delta_1)(1-u)=\bar\alpha\\ u\bar\beta+\delta_0u\bar\beta S(u-1)&=u(1+\delta_2(u-1))+\delta_0 u(1+\delta_2(u-1))(u-1)\\ &=u+\delta_1(1-u)+\delta_0(1-u)=u+(1-\delta_2)(1-u)=\bar\beta \end{align*} \begin{align*} (S\delta_0)\bar\alpha\delta_0&+(S\delta_2)\bar\alpha\delta_2+(S\delta_1)\bar\alpha\delta_2=\delta_0(1+\delta_1(u-1))\delta_0+(1-\delta_0)(1+\delta_1(u-1))\delta_2\\ &=\delta_0+(1-\delta_0)\delta_2+\delta_1(\delta_1 u-\delta_2)=\delta_0+\delta_2+\delta_1u=\bar\alpha\\ \delta_0\bar\beta S\delta_0&+\delta_2\bar\beta S\delta_2+\delta_1\bar\beta S\delta_2=\delta_0(1+\delta_2(u-1))\delta_0+(1-\delta_0)(1+\delta_2(u-1))\delta_1\\ &=\delta_0+(1-\delta_0)\delta_1+(1-\delta_0)\delta_2(u-1)\delta_1=\delta_0+\delta_1+\delta_2(\delta_2u-\delta_1)=\bar\beta \end{align*} and more simply on $\delta_1,\delta_2$. The fourth transversal has a similar pattern to the 3rd, so we do not list its coproduct etc. explicitly. \end{example} In general, there will be many different choices of transversal. For $S_{n-1}\subset S_n$, the first two transversals for $S_2\subset S_3$ generalise as follows, giving a Hopf algebra and a strictly quasi-Hopf algebra respectively. \begin{example}\rm {\sl (i) First transversal.} Here $R=\mathbb{Z}_n$ is a subgroup with $i=0,1,\cdots,n-1$ mod $n$ corresponding to the elements $(12\cdots n)^i$. Neither subgroup is normal for $n\ge 4$, so both actions are nontrivial but $\tau$ is trivial. This expresses $S_n$ as a double cross product $\mathbb{Z}_n{\bowtie} S_{n-1}$ (with trivial $\tau$) and the matched pair of actions \[ \sigma{\triangleright} i=\sigma(i),\quad (\sigma{\triangleleft} i)(j)=\sigma(i+j)-\sigma(i)\] for $i,j=1,\cdots,n-1$, where we add and subtract mod $n$ but view the results in the range $1,\cdots, n$.
This was actually found by twisting from the 2nd transversal below, but we can check it directly as follows. First, \[\sigma (12\cdots n)^i= (\sigma{\triangleright} i)(\sigma{\triangleleft} i)=(12\cdots n)^{\sigma(i)}\left((12\cdots n)^{-\sigma(i)}\sigma(12\cdots n)^i\right)\] and we check that the second factor sends $n\to i\to \sigma(i) \to n$, hence lies in $S_{n-1}$. It follows by the known fact of unique factorisation into these subgroups that this factor is $\sigma{\triangleleft} i$. Its action on $j=1,\cdots, n-1$ is \[ (\sigma{\triangleleft} i)(j)=(12\cdots n)^{-\sigma(i)}\sigma(12\cdots n)^i(j)=\begin{cases} n-\sigma(i) & i+j=n\\ \sigma(i+j)-\sigma(i) & i+j\ne n\end{cases}=\sigma(i+j)-\sigma(i),\] where $\sigma(i+j)\ne \sigma(i)$ as $i+j\ne i$ and $\sigma(n)=n$ as $\sigma\in S_{n-1}$. It also follows since the two factors are subgroups that these are indeed a matched pair of actions. We can also check the matched pair axioms directly. Clearly, ${\triangleright}$ is an action and \[ \sigma(i)+ (\sigma{\triangleleft} i)(j)=\sigma(i)+\sigma(i+j)-\sigma(i)=\sigma{\triangleright}(i+j)\] for $i,j\in\mathbb{Z}_n$. On the other side, \begin{align*}( (\sigma{\triangleleft} i){\triangleleft} j)(k)&=(\sigma{\triangleleft} i)(j+k)-(\sigma{\triangleleft} i)(j)=\sigma(i+(j+k))-\sigma(i)-\sigma(i+j)+\sigma(i)\\ &=\sigma((i+j)+k)-\sigma(i+j)=(\sigma{\triangleleft}(i+j))(k),\\ ((\sigma{\triangleleft}(\tau{\triangleright} i))(\tau{\triangleleft} i))(j)&=(\sigma{\triangleleft}\tau(i))(\tau(i+j)-\tau(i))=\sigma(\tau(i)+\tau(i+j)-\tau(i)) -\sigma(\tau(i))\\ &= \sigma(\tau(i+j))-\sigma(\tau(i))=((\sigma\tau){\triangleleft} i)(j)\end{align*} for $i,j\in \mathbb{Z}_n$ and $k\in 1,\cdots,n-1$. This gives $ \mathbb{C} S_{n-1}{\triangleright\!\blacktriangleleft}\mathbb{C}(\mathbb{Z}_n)$ as a natural bicrossproduct Hopf algebra which we identify with $\Xi$ (which we prefer to build on the other tensor product order).
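The factorisation and the formula for $\sigma{\triangleleft} i$ can be checked exhaustively for small $n$. A minimal sketch in Python (our own illustration, $0$-indexed so that the fixed point $n$ of $\sigma\in S_{n-1}$ becomes $0$), here for $n=4$:

```python
from itertools import permutations

# Minimal sketch (our own illustration): check sigma c^i = c^{sigma(i)} (sigma <| i)
# exhaustively for n = 4, with c the n-cycle and everything 0-indexed so that
# the point n fixed by S_{n-1} becomes 0.

n = 4

def comp(p, q):
    """(p q)(x) = p(q(x)): apply q first, then p."""
    return tuple(p[q[x]] for x in range(n))

def cyc_pow(i):
    """(12...n)^i, sending j to j + i mod n."""
    return tuple((j + i) % n for j in range(n))

K = [p for p in permutations(range(n)) if p[0] == 0]  # copy of S_{n-1}

def tri(s, i):
    """sigma |> i = sigma(i)."""
    return s[i]

def tld(s, i):
    """(sigma <| i)(j) = sigma(i + j) - sigma(i) mod n."""
    return tuple((s[(i + j) % n] - s[i]) % n for j in range(n))

def check_matched_pair():
    for s in K:
        for i in range(n):
            assert tld(s, i) in K  # sigma <| i lands back in S_{n-1}
            assert comp(s, cyc_pow(i)) == comp(cyc_pow(tri(s, i)), tld(s, i))
    return True
```

The same loop with larger `n` checks the general case at the cost of $(n-1)!\,n$ comparisons.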
From Lemma~\ref{Xibialg} and Proposition~\ref{standardS}, this is spanned by products of $\delta_i$ for $i=0,\cdots,n-1$ as our labelling of $R=\mathbb{Z}_n$ and $\sigma\in S_{n-1}=K$, with cross relations $\sigma\delta_i=\delta_{\sigma(i)}\sigma$, $\sigma\delta_0=\delta_0\sigma$, and coproduct etc., \[ \Delta \delta_i=\sum_{j\in \mathbb{Z}_n}\delta_j\mathop{{\otimes}}\delta_{i-j},\quad \Delta\sigma=\sigma\delta_0\mathop{{\otimes}}\sigma+\sum_{i=1}^{n-1}\sigma\delta_i\mathop{{\otimes}}(\sigma{\triangleleft} i),\quad {\epsilon}\delta_i=\delta_{i,0},\quad{\epsilon}\sigma=1,\] \[ S\delta_i=\delta_{-i},\quad S\sigma=\sigma^{-1}\delta_0+\sum_{i=1}^{n-1}(\sigma^{-1}{\triangleleft} i)\delta_{-i},\] where $\sigma{\triangleleft} i$ is as above for $i=1,\cdots,n-1$. This is a usual Hopf $*$-algebra with $\delta_i^*=\delta_i$ and $\sigma^*=\sigma^{-1}$ according to Corollary~\ref{corstar}. \medskip {\sl (ii) 2nd transversal.} Here $R=\{e, (1\, n),(2\, n),\cdots,(n-1\, n)\}$, which has nontrivial ${\triangleright}$ in which $S_{n-1}$ permutes the 2-cycles according to the $i$ label, but again trivial ${\triangleleft}$, since \[ \sigma(i\, n)=(\sigma(i)\, n)\sigma,\quad \sigma{\triangleright} (i\ n)=(\sigma(i)\, n)\] for all $i=1,\cdots,n-1$ and $\sigma\in S_{n-1}$. It has nontrivial $\tau$, as \[ (i\, n )(j\, n)=(j\, n)(i\, j)\Rightarrow (i\, n)\cdot (j\, n)=(j\, n),\quad \tau((i\, n),(j\, n))=(i\, j)\] for $i\ne j$, and we see that $\cdot$ has left but not right division, i.e.\ left but not right cancellation. We also have $(i\,n)\cdot(i\,n)=e$ and $\tau((i\,n),(i\,n))=e$, so that $(\ )^R$ is the identity map, hence $R$ is regular. This transversal gives a cross-product quasi-Hopf algebra $\Xi=\mathbb{C} S_{n-1}{\triangleright\!\!\!<}_\tau \mathbb{C}(R)$, where $R$ is a left quasigroup (i.e.\ unital and with left cancellation), except that we prefer to write it with the tensor factors in the other order.
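The relation $(i\,n)(j\,n)=(j\,n)(i\,j)$ underlying $\cdot$ and $\tau$, and the regularity $(i\,n)\cdot(i\,n)=e$, are elementary to confirm numerically. The following sketch (our choice $n=5$; composition is right-to-left, as in the factorisation check above) is illustrative only:

```python
# Verify (i n)(j n) = (j n)(i j) in S_n, i.e. that the product of two
# representatives factorises with R-part (j n) and cocycle tau = (i j) in S_{n-1}.
n = 5  # small illustrative case (our choice); points 1,...,n

def compose(f, g):                   # f ∘ g: apply g first, then f
    return {x: f[g[x]] for x in f}

def transposition(a, b):
    d = {x: x for x in range(1, n + 1)}
    d[a], d[b] = b, a
    return d

identity = {x: x for x in range(1, n + 1)}

for i in range(1, n):
    # regularity: (i n)(i n) = e, so ( )^R is the identity map
    assert compose(transposition(i, n), transposition(i, n)) == identity
    for j in range(1, n):
        if i == j:
            continue
        lhs = compose(transposition(i, n), transposition(j, n))
        rhs = compose(transposition(j, n), transposition(i, j))
        assert lhs == rhs
        assert transposition(i, j)[n] == n   # the cocycle value lies in S_{n-1}
print("tau((i n),(j n)) = (i j) verified for n =", n)
```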
From Lemma~\ref{Xibialg} and Proposition~\ref{standardS}, this is spanned by products of $\delta_i$ and $\sigma\in S_{n-1}$, where $\delta_0$ is the delta function at $e\in R$ and $\delta_i$ at $(i\ n)$ for $i=1,\cdots,n-1$. The algebra has the same cross relations $\sigma\delta_i=\delta_{\sigma(i)}\sigma$ for $i=1,\cdots,n-1$ as before, but now the tensor coproduct etc.\ and a nontrivial associator \[\Delta\delta_0=\sum_{i=0}^{n-1}\delta_i\mathop{{\otimes}}\delta_i,\quad \Delta\delta_i=1\mathop{{\otimes}}\delta_i+\delta_i\mathop{{\otimes}}\delta_0,\quad \Delta \sigma=\sigma\mathop{{\otimes}}\sigma,\quad {\epsilon}\delta_i=\delta_{i,0},\quad{\epsilon}\sigma=1,\] \[ S\delta_i=\delta_{i},\quad S\sigma=\sigma^{-1},\quad \alpha=\beta=1,\] \[\phi=(1\mathop{{\otimes}}\delta_0+\delta_0\mathop{{\otimes}}(1-\delta_0)+\sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}}\delta_i)\mathop{{\otimes}} 1+ \sum_{i,j=1\atop i\ne j}^{n-1}\delta_i\mathop{{\otimes}}\delta_j\mathop{{\otimes}} (ij).\] This is a $*$-quasi-Hopf algebra with the same $*$ as before but now nontrivial \[ \gamma=1,\quad \hbox{{$\mathcal G$}}=1\mathop{{\otimes}}\delta_0+\delta_0\mathop{{\otimes}}(1-\delta_0)+\sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}}\delta_i+ \sum_{i,j=1\atop i\ne j}^{n-1}\delta_i(ij)\mathop{{\otimes}}\delta_j(ij)\] from Corollary~\ref{corstar}. \medskip{\sl (iii) Twisting between the above two transversals.} We denote the first transversal $R=\mathbb{Z}_n$, where $i$ is identified with $(12\cdots n)^i$, and we denote the 2nd transversal by $\bar R$ with corresponding elements $\bar i=(i\ n)$. Then \[ c_i=(12\cdots n)^{-i}(i\ n)\in S_{n-1},\quad c_i(j)=\begin{cases} n-i & j=i\\ j-i & else \end{cases}\] for $i,j=1,\cdots,n-1$.
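The formula for $c_i$ can likewise be checked mechanically; this fragment (our choice $n=6$, with the same right-to-left composition convention) is illustrative only:

```python
# Verify c_i = (12...n)^{-i} (i n) lies in S_{n-1} and that
# c_i(j) = n - i if j = i, and j - i mod n otherwise.
n = 6  # small illustrative case (our choice); points 1,...,n

def compose(f, g):                 # f ∘ g: apply g first, then f
    return {x: f[g[x]] for x in f}

def cyc_pow(i):                    # (12...n)^i : x -> x + i, reduced into 1,...,n
    return {x: (x + i - 1) % n + 1 for x in range(1, n + 1)}

def transposition(a, b):
    d = {x: x for x in range(1, n + 1)}
    d[a], d[b] = b, a
    return d

for i in range(1, n):
    c = compose(cyc_pow(-i), transposition(i, n))   # c_i, applying (i n) first
    assert c[n] == n                                # c_i fixes n: c_i in S_{n-1}
    for j in range(1, n):
        expected = n - i if j == i else (j - i - 1) % n + 1
        assert c[j] == expected
print("c_i(j) formula verified for n =", n)
```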
If we use the stated ${\triangleright}$ for the first transversal then one can check that the first half of (\ref{taucond}) holds, \[ \overline{i+c_i{\triangleright} i}=\overline{i+n-i}=e=\bar i\bar\cdot \bar i,\quad \overline{i+c_i{\triangleright} j}=\overline{i+j-i}=\bar j=\bar i\bar\cdot \bar j\] as it must. We can also check that the actions are indeed related by twisting. Thus, \[ \sigma{\triangleleft}\bar i=c_{\sigma{\triangleright} i}^{-1}(\sigma{\triangleleft} i)c_i=(\sigma(i),n)(12\cdots n)^{\sigma(i)}(\sigma{\triangleleft} i)(12\cdots n)^{-i}(i,n)=(\sigma(i),n)\sigma(i,n)=\sigma\] \[ \sigma\bar{\triangleright} \bar i=(\sigma{\triangleright} i)c_{\sigma{\triangleright} i}=(12\cdots n)^{\sigma(i)}(12\cdots n)^{-\sigma(i)}(\sigma(i),n)=(\sigma(i),n),\] where we did the computation with $\mathbb{Z}_n$ viewed in $S_n$. It follows that the Hopf algebra from case (i) cochain twists to a simpler quasi-Hopf algebra in case (ii). The required cochain from Theorem~\ref{thmtwist} is \[ \chi=\delta_0\mathop{{\otimes}} 1+ \sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}} (12\cdots n)^{-i}(i\,n).\] \end{example} The above example is a little similar to the way that Drinfeld's $U_q(g)$, as a Hopf algebra, is a cochain twist of $U(g)$ viewed as a quasi-Hopf algebra. We conclude with the promised example related to the octonions. This is a version of \cite[Example~4.6]{KM2}, but with left and right swapped and some cleaned-up conventions. \begin{example}\rm We let $G=Cl_3{>\!\!\!\triangleleft} \mathbb{Z}_2^3$, where $Cl_3$ is generated by $1,-1$ and $e_{i}$, $i=1,2,3$, with relations \[ (-1)^2=1,\quad (-1)e_i=e_i(-1),\quad e_i^2=-1,\quad e_i e_j=-e_j e_i \] for $i\ne j$ and the usual combination rules for the product of signs.
Its elements can be enumerated as $\pm e_{\vec a}$ where $\vec{a}\in \mathbb{Z}_2^3$ is viewed in the additive group of 3-vectors with entries in the field $\mathbb{F}_2=\{0,1\}$ of order 2 and \[ e_{\vec a}=e_1^{a_1}e_2^{a_2}e_3^{a_3},\quad e_{\vec a} e_{\vec b}=e_{\vec a+\vec b}(-1)^{\sum_{i\ge j}a_ib_j}. \] This is the twisted group ring description of the 3-dimensional Clifford algebra over $\mathbb{R}$ in \cite{AlbMa}, but now restricted to coefficients $0,\pm1$ to give a group of order 16. For example, \[ e_{011}e_{101}=e_2e_3 e_1e_3=e_1e_2e_3^2=-e_1e_2=-e_{110}=-e_{011+101}\] with the sign given by the formula. We similarly write the elements of $K=\mathbb{Z}_2^3$ multiplicatively as $g^{\vec a}=g_1^{a_1}g_2^{a_2}g_3^{a_3}$ labelled by 3-vectors with values in $\mathbb{F}_2$. The generators $g_i$ commute and obey $g_i^2=e$. The general group product becomes vector addition, and the cross relations are \[ (-1)g_i=g_i(-1),\quad e_i g_i= -g_i e_i,\quad e_i g_j=g_j e_i\] for $i\ne j$. This implies that $G$ has order 128. (i) If we take $R=Cl_3$ itself then this will be a subgroup and we will have for $\Xi(R,K)$ an ordinary Hopf $*$-algebra as a semidirect product $\mathbb{C} \mathbb{Z}_2^3{\triangleright\!\!\!<} \mathbb{C}(Cl_3)$, except that we build it on the opposite tensor product. (ii) Instead, we take as representatives the eight elements again labelled by 3-vectors over $\mathbb{F}_2$, \[ r_{000}=1,\quad r_{001}=e_3,\quad r_{010}=e_2,\quad r_{011}=e_2e_3g_1\] \[ r_{100}=e_1,\quad r_{101}=e_1e_3 g_2,\quad r_{110}=e_1e_2g_3,\quad r_{111}=e_1e_2e_3 g_1g_2g_3 \] and their negations, as a version of \cite[Example~4.6]{KM2}.
This can be written compactly as \[ r_{\vec a}=e_{\vec a}g_1^{a_2 a_3}g_2^{a_1a_3}g_3^{a_1a_2}.\] \begin{proposition}\cite{KM2} This choice of transversal makes $(R,\cdot)$ the octonion two-sided inverse property quasigroup $G_{\O}$ in the Albuquerque--Majid description of the octonions\cite{AlbMa}, \[ r_{\vec a}\cdot r_{\vec b}=(-1)^{f(\vec a,\vec b)} r_{\vec a+\vec b},\quad f(\vec a,\vec b)=\sum_{i\ge j}a_ib_j+ a_1a_2b_3+ a_1b_2a_3+b_1a_2a_3 \] with the product on signed elements behaving as if bilinear. The action ${\triangleleft}$ is trivial, and the left action and cocycle $\tau$ are \[ g^{\vec a}{\triangleright} r_{\vec b}=(-1)^{\vec a\cdot \vec b}r_{\vec b},\quad \tau(r_{\vec a},r_{\vec b})=g^{\vec a\times\vec b}=g_1^{a_2 b_3+a_3 b_2}g_2^{a_3 b_1+a_1b_3} g_3^{a_1b_2+a_2b_1}\] with the action extended with signs as if linearly and $\tau$ independent of signs in either argument. \end{proposition} {\noindent {\bfseries Proof:}\quad } We check in the group \begin{align*} r_{\vec a}r_{\vec b}&=e_{\vec a}g_1^{a_2 a_3}g_2^{a_1a_3}g_3^{a_1a_2}e_{\vec b}g_1^{b_2 b_3}g_2^{b_1b_3}g_3^{b_1b_2}\\ &=e_{\vec a}e_{\vec b}(-1)^{b_1a_2a_3+b_2a_1a_3+b_3a_1a_2} g_1^{a_2a_3+b_2b_3}g_2^{a_1a_3+b_1b_3}g_3^{a_1a_2+b_1b_2}\\ &=(-1)^{f(\vec a,\vec b)}r_{\vec a+\vec b}g_1^{a_2a_3+b_2b_3-(a_2+b_2)(a_3+b_3)}g_2^{a_1a_3+b_1b_3-(a_1+b_1)(a_3+b_3)}g_3^{a_1a_2+b_1b_2-(a_1+b_1)(a_2+b_2)}\\ &=(-1)^{f(\vec a,\vec b)}r_{\vec a+\vec b}g_1^{a_2b_3+b_2a_3} g_2^{a_1b_3+b_1a_3}g_3^{a_1b_2+b_1a_2}, \end{align*} from which we read off $\cdot$ and $\tau$. For the second equality, we moved the $g_i$ to the right using the commutation rules in $G$. For the third equality we used the product in $Cl_3$ in our description above and then converted $e_{\vec a+\vec b}$ to $r_{\vec a+\vec b}$. \endproof The product of the quasigroup $G_\O$ here is the same as the octonion product as an algebra over $\mathbb{R}$ in the description of \cite{AlbMa}, restricted to elements of the form $\pm r_{\vec a}$.
The cocycle-associativity property of $(R,\cdot)$ says \[ r_{\vec a}\cdot(r_{\vec b}\cdot r_{\vec c})=(r_{\vec a}\cdot r_{\vec b})\cdot\tau(\vec a,\vec b){\triangleright} r_{\vec c}=(r_{\vec a}\cdot r_{\vec b})\cdot r_{\vec c} (-1)^{(\vec a\times\vec b)\cdot\vec c}\] giving $-1$ exactly when the three vectors are linearly independent as 3-vectors over $\mathbb{F}_2$. One also has $r_{\vec a}\cdot r_{\vec b}=\pm r_{\vec b}\cdot r_{\vec a}$ with $-1$ exactly when the two vectors are linearly independent, which means both nonzero and not equal, and $r_{\vec a} \cdot r_{\vec a}=\pm1 $ with $-1$ exactly when the one vector is linearly independent, i.e.\ not zero. (These are exactly the quasiassociativity, quasicommutativity and norm properties of the octonions algebra in the description of \cite{AlbMa}.) The two-sided inverse is \[ r_{\vec a}^{-1}=(-1)^{n(\vec a)}r_{\vec a},\quad n(0)=0,\quad n(\vec a)=1,\quad \forall \vec a\ne 0\] with the inversion operation extended as usual with respect to signs.
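The three sign properties just listed can be confirmed by exhausting all vectors over $\mathbb{F}_2$. The helper names in the following fragment are ours; the exponent $f$, the cross product appearing in $\tau$, and the claims being asserted are those of the proposition and the remarks above:

```python
from itertools import product

vecs = list(product((0, 1), repeat=3))

def f(a, b):                       # sign exponent from the proposition, mod 2
    s = sum(a[i] * b[j] for i in range(3) for j in range(3) if i >= j)
    s += a[0]*a[1]*b[2] + a[0]*b[1]*a[2] + b[0]*a[1]*a[2]
    return s % 2

def add(a, b):                     # addition in F_2^3
    return tuple((x + y) % 2 for x, y in zip(a, b))

def cross(a, b):                   # vector cross product over F_2, as in tau
    return ((a[1]*b[2] + a[2]*b[1]) % 2,
            (a[2]*b[0] + a[0]*b[2]) % 2,
            (a[0]*b[1] + a[1]*b[0]) % 2)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % 2

def indep(*vs):                    # independence over F_2: all subset sums distinct
    sums = set()
    for coeffs in product((0, 1), repeat=len(vs)):
        v = (0, 0, 0)
        for c, w in zip(coeffs, vs):
            if c:
                v = add(v, w)
        sums.add(v)
    return len(sums) == 2 ** len(vs)

for a, b, c in product(vecs, repeat=3):
    # quasiassociativity: r_a(r_b r_c) = (r_a r_b) r_c (-1)^{(a x b).c}
    assert (f(b, c) + f(a, add(b, c))) % 2 == (f(a, b) + f(add(a, b), c) + dot(cross(a, b), c)) % 2
    assert dot(cross(a, b), c) == (1 if indep(a, b, c) else 0)
for a, b in product(vecs, repeat=2):
    assert (f(a, b) + f(b, a)) % 2 == (1 if indep(a, b) else 0)   # quasicommutativity
for a in vecs:
    assert f(a, a) == (0 if a == (0, 0, 0) else 1)                 # r_a . r_a = ±1
print("octonion sign identities verified")
```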
The quasi-Hopf algebra $\Xi(R,K)$ is spanned by $\delta_{(\pm,\vec a)}$ labelled by the points of $R$ and products of the $g_i$, with the relations $g^{\vec a}\delta_{(\pm, \vec b)}=\delta_{(\pm (-1)^{\vec a\cdot\vec b},\vec b)} g^{\vec a}$ and tensor coproduct etc., \[ \Delta \delta_{(\pm, \vec a)}=\sum_{(\pm', \vec b)}\delta_{(\pm' ,\vec b)}\mathop{{\otimes}}\delta_{(\pm\pm'(-1)^{n(\vec b)},\vec a+\vec b)},\quad \Delta g^{\vec a}=g^{\vec a}\mathop{{\otimes}} g^{\vec a},\quad {\epsilon}\delta_{(\pm,\vec a)}=\delta_{\vec a,0}\delta_{\pm,+},\quad {\epsilon} g^{\vec a}=1,\] \[S\delta_{(\pm,\vec a)}=\delta_{(\pm(-1)^{n(\vec a)},\vec a)},\quad S g^{\vec a}=g^{\vec a},\quad\alpha=\beta=1,\quad \phi=\sum_{(\pm, \vec a),(\pm',\vec{b})} \delta_{(\pm,\vec a)}\mathop{{\otimes}}\delta_{(\pm',\vec{b})}\mathop{{\otimes}} g^{\vec a\times\vec b}\] and from Corollary~\ref{corstar} is a $*$-quasi-Hopf algebra with $*$ the identity on $\delta_{(\pm,\vec a)},g^{\vec a}$ and \[ \gamma=1,\quad \hbox{{$\mathcal G$}}=\sum_{(\pm, \vec a),(\pm',\vec{b})} \delta_{(\pm,\vec a)}g^{\vec a\times\vec b} \mathop{{\otimes}}\delta_{(\pm',\vec{b})}g^{\vec a\times\vec b}.\] The general form here is not unlike our $S_n$ example. \end{example} \subsection{Module categories context} This section does not contain anything new beyond \cite{Os2,EGNO}, but completes the categorical picture that connects our algebra $\Xi(R,K)$ to the more general context of module categories, adapted to our notations.
Our first observation is that if $\mathop{{\otimes}}: {\hbox{{$\mathcal C$}}}\times {\hbox{{$\mathcal V$}}}\to {\hbox{{$\mathcal V$}}}$ is a left action of a monoidal category ${\hbox{{$\mathcal C$}}}$ on a category ${\hbox{{$\mathcal V$}}}$ (one says that ${\hbox{{$\mathcal V$}}}$ is a left ${\hbox{{$\mathcal C$}}}$-module) then one can check that this is the same thing as a monoidal functor $F:{\hbox{{$\mathcal C$}}}\to \mathrm{ End}({\hbox{{$\mathcal V$}}})$, where the set ${\rm End}({\hbox{{$\mathcal V$}}})$ of endofunctors can be viewed as a strict monoidal category with monoidal product the endofunctor composition $\circ$. Here ${\rm End}({\hbox{{$\mathcal V$}}})$ has monoidal unit $\mathrm{id}_{\hbox{{$\mathcal V$}}}$ and its morphisms are natural transformations between endofunctors. $F$ just sends an object $X\in {\hbox{{$\mathcal C$}}}$ to the endofunctor $X\mathop{{\otimes}}(\ )$ of ${\hbox{{$\mathcal V$}}}$. A monoidal functor comes with natural isomorphisms $\{f_{X,Y}\}$ and these are given tautologically by \[ f_{X,Y}(V): F(X)\circ F(Y)(V)=X\mathop{{\otimes}} (Y\mathop{{\otimes}} V)\cong (X\mathop{{\otimes}} Y)\mathop{{\otimes}} V= F(X\mathop{{\otimes}} Y)(V)\] as part of the monoidal action. Conversely, if given a functor $F$, we define $X\mathop{{\otimes}} V=F(X)V$ and extend the monoidal associativity of ${\hbox{{$\mathcal C$}}}$ to mixed objects using $f_{X,Y}$ to define $X\mathop{{\otimes}} (Y\mathop{{\otimes}} V)= F(X)\circ F(Y)V{\cong} F(X\mathop{{\otimes}} Y)V= (X\mathop{{\otimes}} Y)\mathop{{\otimes}} V$. The notion of a left module category is a categorification of the bijection between an algebra action $\cdot: A \mathop{{\otimes}} V\rightarrow V$ and a representation as an algebra map $A \rightarrow {\rm End}(V)$.
There is an equally good notion of a right ${\hbox{{$\mathcal C$}}}$-module category extending $\mathop{{\otimes}}$ to ${\hbox{{$\mathcal V$}}}\times{\hbox{{$\mathcal C$}}}\to {\hbox{{$\mathcal V$}}}$. In the same way as one uses $\cdot$ for both the algebra product and the module action, it is convenient to use $\mathop{{\otimes}}$ for both in the categorified version. Similarly for the right module version. Another general observation is that if ${\hbox{{$\mathcal V$}}}$ is a ${\hbox{{$\mathcal C$}}}$-module category for a monoidal category ${\hbox{{$\mathcal C$}}}$ then ${\rm Fun}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})$, the (left exact) functors from ${\hbox{{$\mathcal V$}}}$ to itself that are compatible with the action of ${\hbox{{$\mathcal C$}}}$, is another monoidal category. This is denoted ${\hbox{{$\mathcal C$}}}^*_{{\hbox{{$\mathcal V$}}}}$ in \cite{EGNO}, but should not be confused with the dual of a monoidal functor which was one of the origins\cite{Ma:rep} of the centre $\hbox{{$\mathcal Z$}}({\hbox{{$\mathcal C$}}})$ construction as a special case. Also note that if $A\in {\hbox{{$\mathcal C$}}}$ is an algebra in the category then ${\hbox{{$\mathcal V$}}}={}_A{\hbox{{$\mathcal C$}}}$, the left modules of $A$ in the category, is a {\em right} ${\hbox{{$\mathcal C$}}}$-module category. If $V$ is an $A$-module then we define $V\mathop{{\otimes}} X$ as the tensor product in ${\hbox{{$\mathcal C$}}}$ equipped with an $A$-action from the left on the first factor. Moreover, for certain `nice' right module categories ${\hbox{{$\mathcal V$}}}$, there exists a suitable algebra $A\in {\hbox{{$\mathcal C$}}}$ such that ${\hbox{{$\mathcal V$}}}\simeq {}_A{\hbox{{$\mathcal C$}}}$, see \cite{Os2}\cite[Thm~7.10.1]{EGNO} in other conventions. 
For such module categories, ${\rm Fun}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})\simeq {}_A{\hbox{{$\mathcal C$}}}_A$, the category of $A$-$A$-bimodules in ${\hbox{{$\mathcal C$}}}$. Here, if given an $A$-$A$-bimodule $E$ in ${\hbox{{$\mathcal C$}}}$, the corresponding endofunctor is given by $E\mathop{{\otimes}}_A(\ )$, where we require ${\hbox{{$\mathcal C$}}}$ to be Abelian so that we can define $\mathop{{\otimes}}_A$. This turns $V\in {}_A{\hbox{{$\mathcal C$}}}$ into another $A$-module in ${\hbox{{$\mathcal C$}}}$ and $E\mathop{{\otimes}}_A(V\mathop{{\otimes}} X){\cong} (E\mathop{{\otimes}}_A V)\mathop{{\otimes}} X$, so the construction commutes with the right ${\hbox{{$\mathcal C$}}}$-action. Before we explain how these abstract ideas lead to ${}_K\hbox{{$\mathcal M$}}^G_K$, a more `obvious' case is the study of left module categories for ${\hbox{{$\mathcal C$}}} = {}_G\hbox{{$\mathcal M$}}$. If $K\subseteq G$ is a subgroup, we set ${\hbox{{$\mathcal V$}}} = {}_K\hbox{{$\mathcal M$}}$ for $i: K\subseteq G$. The functor ${\hbox{{$\mathcal C$}}}\to \mathrm{ End}({\hbox{{$\mathcal V$}}})$ just sends $X\in {\hbox{{$\mathcal C$}}}$ to $i^*(X)\mathop{{\otimes}}(\ )$ as a functor on ${\hbox{{$\mathcal V$}}}$, or more simply ${\hbox{{$\mathcal V$}}}$ is a left ${\hbox{{$\mathcal C$}}}$-module by $X\mathop{{\otimes}} V=i^*(X)\mathop{{\otimes}} V$. More generally\cite{Os2}\cite[Example~7.4.9]{EGNO}, one can include a cocycle $\alpha\in H^2(K,\mathbb{C}^\times)$ since we are only interested in monoidal equivalence, and this data $(K,\alpha)$ parametrises all indecomposable left ${}_G\hbox{{$\mathcal M$}}$-module categories. Moreover, here $\mathrm{ End}({\hbox{{$\mathcal V$}}})\simeq {}_K\hbox{{$\mathcal M$}}_K$, the category of $K$-bimodules, where a bimodule $E$ acts by $E\mathop{{\otimes}}_{\mathbb{C} K}(\ )$.
So the data we need for a ${}_G\hbox{{$\mathcal M$}}$-module category is a monoidal functor ${}_G\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K$. This is of potential interest but is not the construction we were looking for. Rather, we are interested in right module categories of ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, the category of $G$-graded vector spaces. It turns out that these are classified by the exact same data $(K,\alpha)$ (this is related to the fact that $\hbox{{$\mathcal M$}}^G,{}_G\hbox{{$\mathcal M$}}$ have the same centre) but the construction is different. Thus, if $K\subseteq G$ is a subgroup, we consider $A=\mathbb{C} K$ regarded as an algebra in ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$ by $|x|=x$ viewed in $G$. One can also twist this by a cocycle $\alpha$, but here we stick to the trivial case. Then ${\hbox{{$\mathcal V$}}}={}_A{\hbox{{$\mathcal C$}}}={}_K\hbox{{$\mathcal M$}}^G$, the category of $G$-graded left $K$-modules, is a right ${\hbox{{$\mathcal C$}}}$-module category. Explicitly, if $X\in {\hbox{{$\mathcal C$}}}$ is a $G$-graded vector space and $V\in{\hbox{{$\mathcal V$}}}$ a $G$-graded left $K$-module then \[ V\mathop{{\otimes}} X,\quad x.(v\mathop{{\otimes}} w)=x.v\mathop{{\otimes}} w,\quad |v\mathop{{\otimes}} w|=|v||w|,\quad \forall\ v\in V,\ w\in X\] is another $G$-graded left $K$-module. Finally, by the general theory, there is an associated monoidal category \[ {\hbox{{$\mathcal C$}}}^*_{\hbox{{$\mathcal V$}}}:={\rm Fun}_{{\hbox{{$\mathcal C$}}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})\simeq {}_K\hbox{{$\mathcal M$}}^G_K\simeq {}_{\Xi(R,K)}\hbox{{$\mathcal M$}},\] which is the desired category to describe quasiparticles on boundaries in \cite{KK}.
Conversely, if ${\hbox{{$\mathcal V$}}}$ is an indecomposable right ${\hbox{{$\mathcal C$}}}$-module category for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, it is explained in \cite{Os2}\cite[Example~7.4.10]{EGNO} (in other conventions) that the set of indecomposable objects has a transitive action of $G$ and hence can be identified with $G/K$ for some subgroup $K\subseteq G$. This can be used to put the module category up to equivalence in the above form (with some cocycle $\alpha$). \section{Concluding remarks}\label{sec:rem} We have given a detailed account of the algebra behind the treatment of boundaries in the Kitaev model based on subgroups $K$ of a finite group $G$. New results include the quasi-bialgebra $\Xi(R,K)$ in full generality, a more direct derivation from the category ${}_K\hbox{{$\mathcal M$}}^G_K$ that connects to the module category point of view, a theorem that $\Xi(R,K)$ changes by a Drinfeld twist as $R$ changes, and a $*$-quasi-Hopf algebra structure that ensures nice properties for the category of representations (these form a strong bar category). On the computer science side, we edged towards how one might use these ideas in quantum computations and detect quasiparticles across ribbons where one end is on a boundary. We also gave new decomposition formulae relating representations of $D(G)$ in the bulk to those of $\Xi(R,K)$ on the boundary. Both the algebraic and the computer science aspects can be taken much further. The case treated here of trivial cocycle $\alpha$ is already complicated enough, but the ideas do extend to include nontrivial cocycles and should similarly be worked out. Whereas most of the abstract literature on such matters is at the conceptual level, where one works up to categorical equivalence, we set out to give constructions more explicitly, which we believe is essential for concrete calculations and should also be relevant to the physics.
For example, much of the literature on anyons is devoted to so-called $F$-moves which express the associativity isomorphisms even though, by Mac Lane's theorem, monoidal categories are equivalent to strict ones. On the physics side, the covariance properties of ribbon operators also involve the coproduct and hence how they are realised depends on the choice of $R$. The same applies to how $*$ interacts with tensor products, which would be relevant to the unitarity properties of composite systems. Of interest, for example, should be the case of a lattice divided into two parts $A,B$ with a boundary between them and how the entropy of states in the total space relates to that in the subsystems. This is an idea of considerable interest in quantum gravity, which has certain parallels with quantum computing, and it could be explored concretely using the results of the paper. We also would like to expand further the concrete use of patches and lattice surgery, as we considered only the cases of boundaries with $K=\{e\}$ and $K=G$, and only a square geometry. Additionally, it would be useful to know under what conditions the model gives universal quantum computation. While there are broadly similar such ideas in the physics literature, e.g., \cite{CCW}, we believe our fully explicit treatment will help to take these forward. Further on the algebra side, the Kitaev model generalises easily to replace $G$ by a finite-dimensional semi-simple Hopf algebra, with some aspects also in the non-semisimple case\cite{CowMa}. The same applies easily enough to at least a quasi-bialgebra associated to an inclusion $L\subseteq H$ of finite-dimensional Hopf algebras\cite{PS3} and to the corresponding module category picture. Ultimately here, it is the nonsemisimple case that is of interest, as such Hopf algebras (e.g.\ of the form of reduced quantum groups $u_q(g)$) generate the categories where anyons as well as TQFT topological invariants live.
It is also known that by promoting the finite group input of the Kitaev model to a more general weak Hopf algebra, one can obtain any unitary fusion category in the role of ${\hbox{{$\mathcal C$}}}$\cite{Chang}. There remains a lot of work, therefore, to properly connect these theories to computer science and in particular to established methods for quantum circuits. A step here could be braided ZX-calculus\cite{Ma:fro}, although precisely how remains to be developed. These are some directions for further work. \section*{Data availability statement} Data sharing is not applicable to this article as no new data were created or analysed in this study. \input{appendix} \end{document}
\section{Finite transverse field} \label{transverse} In this section, we investigate the effects that a finite transverse field has on the validity of the effective XX model \begin{equation} H = \sum_{i < j } J_{ij} (1/2+s_i^z) (1/2+s_j^z) + B \sum_i s_i^x. \end{equation} Note that aside from the desired interactions, there is also a longitudinal field $B^\parallel_i \equiv \sum_j J_{ij}/2$ introduced by the Rydberg dressing. Although in the bulk of the system this is homogeneous, near the boundaries, this is no longer the case. In the limit of $B \gg N \overline{J}$, we also have $B \gg B^\parallel$, so the longitudinal field is dropped via the RWA. However, for finite transverse field, it is important to take into account its presence. There are two possible approaches to reducing the effects of the longitudinal field. In the first, a $\pi$ pulse is applied halfway through the evolution, effectively flipping the sign of the longitudinal field while leaving the interactions and transverse field unchanged. As a result, the evolution from the longitudinal field is removed in a spin-echo fashion. In the second, we can detune the drive used to generate the transverse field by the average $\frac{1}{N} \sum_i B_i^\parallel$. Since the longitudinal field is not homogeneous, this will not fully remove it, but it will drastically reduce its effect. Here, we will focus on the second approach. \begin{figure}[h!] \centering \includegraphics[scale=.4]{r2B.pdf} \qquad \qquad \includegraphics[scale=.4]{r4B.pdf} \caption{Effect of finite transverse field on the squeezing for (a) $r_b = 2$ and (b) $r_b = 4$. 
Squeezing is numerically calculated using DTWA with $10^4$ samples in a $14 \times 14$ lattice.} \label{fig:finiteB} \end{figure} In Fig.~\ref{fig:finiteB}, we investigate the squeezing in a $14 \times 14$ lattice with open boundary conditions and a vdW tail for blockade radii of $r_b=2,4$ and for $B/N\overline{J} = 2.5, 12.5$, and the limit of infinite transverse field where the RWA is valid. We see that when there is a finite transverse field, there are oscillations in the squeezing. As the transverse field is increased, these oscillations increase in frequency and decrease in magnitude. Note that each oscillation corresponds approximately to half a Rabi cycle, so the squeezing is maximal when the Bloch vector is near either of the two poles of the Bloch sphere. Interestingly, we see that at these maxima, the squeezing for the finite transverse field can potentially exceed that of the infinite transverse field. While this requires stopping the evolution at the right time, we note that for $B/N \overline{J} = 2.5$, the optimal squeezing time corresponds to approximately 12 Rabi cycles, so for transverse fields of the order of $10-100$ kHz, stopping the evolution near one of the maxima is feasible. Note that as the transverse field is increased, the number of Rabi cycles will increase accordingly.
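To illustrate the boundary inhomogeneity of $B_i^\parallel=\sum_j J_{ij}/2$ discussed above, one can tabulate it for a small chain. The soft-core form $J_{ij}=J_0/(1+(r_{ij}/r_b)^6)$ used below is our stand-in for the Rydberg-dressed interaction with a vdW tail, and the parameter values are illustrative:

```python
# Longitudinal field B_i^par = sum_j J_ij / 2 on an open chain of L sites.
# The soft-core form J_ij = J0 / (1 + (r_ij / r_b)^6) is a stand-in for the
# Rydberg-dressed interaction with a vdW tail; L, r_b, J0 are illustrative.
L, r_b, J0 = 14, 2.0, 1.0

def J(i, j):
    r = abs(i - j)
    return J0 / (1.0 + (r / r_b) ** 6)

B_par = [sum(J(i, j) for j in range(L) if j != i) / 2 for i in range(L)]
mean = sum(B_par) / L
residual = [b - mean for b in B_par]   # what survives detuning by the average

print("edge:", round(B_par[0], 3), " bulk:", round(B_par[L // 2], 3))
```

The edge value is noticeably smaller than the bulk value, which is why detuning by the mean reduces, but does not fully remove, the longitudinal field.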
We utilize the time-dependent variational principle (TDVP) to provide quasi-exact solutions to the time-dependent Schr{\"o}dinger equation for our MPS \cite{Haegeman2011,Haegeman2016,Wall2012,Jaschke2018}. In Fig.~\ref{fig:TDVP}, we compare results for the optimal squeezing generated by the XX model (see Eq.~(2) in the main text) in 1D. We generally find improved agreement for both the predicted squeezing and the time at which this occurs as the potential range increases, and the system becomes increasingly connected. In fact, for all $r_b > 1$ shown, we observe excellent agreement in the predicted amount of attainable squeezing. While we average these DTWA results over 10,000 trajectories, in the main text we utilize 20,000 trajectories for Fig.~2(a,b), and 40,000 trajectories for Fig.~3. For smaller $r_b$ where the most notable discrepancies arise, we observe that DTWA underestimates the attainable squeezing, as well as overestimates the time at which this occurs. Thus, in our examination of the effects of decoherence in Fig.~3, where the dynamics at longer times are increasingly susceptible to the degrading effects of the finite Rydberg lifetime, we expect that DTWA provides, at worst, a conservative estimate for the attainable squeezing. Overall however, our benchmarking suggests that such deviations, when they occur, should remain small. Furthermore, both the consideration of higher-dimensions (i.e. 2D) and the addition of a power-law tail lead to enhanced connectivity of our lattice, and we expect this to lead to further improvement of our results. In fact, similar benchmarks in power-law interacting systems in 2D demonstrate that DTWA yields reliable results for the spin squeezing dynamics \cite{Perlin2020,Muleady2022}. \begin{figure}[h!] \centering \includegraphics[scale=.6]{figS2_v1J.pdf} \caption{Benchmarking DTWA in 1D. 
Comparison between TDVP (x's) and DTWA (o's) results for the optimal squeezing, in decibels (left), and the corresponding squeezing time, scaled by the nearest-neighbor coupling $J_0$ (right), obtained with the XX model as given by Eq.~(2) in the main text for various system lengths $L$ and potential ranges $r_b$. We utilize a potential with a sharp cutoff, as well as open boundary conditions for various chain lengths $L = N$. DTWA results are averaged over 10,000 stochastic trajectories. For our TDVP results, we utilize a time step $J_0 dt = 0.02$, resulting in the discrete jumps observed in the corresponding values of $J_0 t_{\textrm{sq}}$.} \label{fig:TDVP} \end{figure} \section{Experimental parameters} \label{exp} In this section, we shall discuss the experimental parameters used to produce Fig.~3 of the main text. First, we note that we restrict the range of the parameters such that $\Omega \leq 2 \pi \times \SI{10}{\MHz}$ and $N \overline{J} \leq 2 \pi \times \SI{20}{\kHz}$, where the latter restriction is to ensure the validity of the RWA. We considered three commonly-studied Rydberg $S$ (i.e., zero angular momentum) atoms: $^{133}$Cs, $^{87}$Rb, and triplet $^{88}$Sr. To extract the interaction strengths and decay rates, we utilize the ARC code \cite{Robertson2021}. To determine the lattice spacing, there are two behaviors we take into account. First, we ensure that there are no significant level crossings due to the interactions that would lead to a weak Rydberg interaction, weakening the blockade effect. Second, we ensure that the dipole-dipole interactions are perturbative at twice the lattice spacing. Here, we take this to be the point at which the two-atom eigenstate is 95\% $|ss\rangle$, i.e., 5\% of the eigenstate involves other Rydberg states.
Although this implies that the eigenstate at shorter distances is strongly composed of additional Rydberg states, for most of the blockade radii considered in Fig.~3, this is well into the blockaded region, so as long as the effective Rydberg blockade interaction is not significantly reduced, the soft-core potential will not be strongly modified. In the case of $^{88}$Sr, the numerical methods for extracting the decay rates are inaccurate since it is an alkaline-earth atom. In this case, we rely on experimentally-measured values. In particular, we use Ref.~\cite{Kunze1993} to extract the spontaneous emission rate's scaling behavior $\gamma_{\text{se}} = a n^{*-3}$, where $n^*$ is the effective principal quantum number, for $n=19-23$. To incorporate the effect of blackbody radiation, we utilize measurements from an ongoing experiment at $n = 41$ that is consistent with a lifetime of at least \SI{20}{\micro\second} \cite{KaufmanExpt} and fit the total decay rate $\gamma = a n^{*-3} + b n^{*-2}$ to extrapolate to arbitrary $n^*$ at $T = 300$ K, where we have utilized the fact that the blackbody radiation rate will scale approximately as $n^{*-2}$. Fitting the experimental values, we find $a = \SI{2070}{\micro\second^{-1}}$, $b = \SI{15.8}{\micro\second^{-1}}$. For $n=80$ in the main text, this corresponds to a lifetime of \SI{137}{\micro\second}.
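The quoted lifetimes can be recovered from the fit. In the sketch below, the quantum defect $\delta\approx 3.37$ for Sr $(5sns)\,{}^3S_1$ states is our assumed value for converting $n$ to $n^*=n-\delta$; $a$ and $b$ are the fitted coefficients quoted above:

```python
# Total decay rate gamma = a / n*^3 + b / n*^2, with rates in 1/us, from the fit.
a, b = 2070.0, 15.8    # fitted coefficients quoted in the text
delta = 3.37           # assumed quantum defect for Sr (5sns) 3S1 states

def lifetime_us(n):
    n_star = n - delta
    gamma = a / n_star**3 + b / n_star**2   # spontaneous + blackbody parts
    return 1.0 / gamma

print(round(lifetime_us(41), 1), round(lifetime_us(80), 1))  # close to 20 and 137 us
```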
\begin{table}[b] \renewcommand{\arraystretch}{1.3} \centering \begin{tabular}{c|c|c|c} & $a$ & $C_6/2 \pi$ & $\tau$ \\ \hline $^{88}$Sr $41^3$S$_1$ & \SI{0.651}{\micro\metre} & \SI{1.5}{\GHz\ \micro\metre^6} & \SI{20}{\micro\second} \\ \hline $^{88}$Sr $60^3$S$_1$ & \SI{1.79}{\micro\metre} & \SI{156}{\GHz\ \micro\metre^6} & \SI{61.3}{\micro\second} \\ \hline $^{88}$Sr $80^3$S$_1$ & \SI{3.76}{\micro\metre} & \SI{4.8}{\THz\ \micro\metre^6} & \SI{137}{\micro\second} \\ \hline $^{87}$Rb 60S & \SI{1.74}{\micro\metre}& \SI{138}{\GHz\ \micro\metre^6} & \SI{101}{\micro\second} \\ \hline $^{133}$Cs 60S & \SI{1.62}{\micro\metre} & \SI{107}{\GHz\ \micro\metre^6} & \SI{95.6}{\micro\second} \\ \end{tabular} \caption{Lattice spacings $a$, vdW dispersion coefficients $C_6$, and lifetimes $\tau$ at 300 K used in Fig.~3 of the main text.} \label{tab:exptab} \end{table} The lattice spacing $a$, $C_6$, and lifetimes for the states considered in Fig.~3 of the main text are listed in Table \ref{tab:exptab}. Although we have focused on a particular set of Rydberg states, we can determine how the behavior changes for different $n$ through scaling arguments. First, we note that the energy difference between different Rydberg states scales like $n^{*-3}$, while the dipole-dipole interaction dispersion coefficient scales like $C_3 \propto n^{*4}$. The first of these two scaling behaviors implies that we should take $\Omega, \Delta \propto n^{*-3}$, implying $J_0 \propto n^{*-3}$. Additionally, in order for the dipole-dipole interactions to continue to remain perturbative, the dipole-dipole interactions must scale with the Rydberg state energy differences, i.e., $C_3/a^3 \propto n^{*-3}$, which implies $a \propto n^{*7/3}$. Since the vdW dispersion coefficient scales like $C_6 \propto n^{*11}$, we see that the vdW interactions scale like $C_6/a^6 \propto n^{*-3}$. 
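As a cross-check of the exponent bookkeeping in this scaling argument, the powers of $n^*$ can be tallied with exact fractions (the variable names are ours):

```python
from fractions import Fraction as F

# n*-exponents of each quantity, as quoted in the scaling argument above
C3 = F(4)              # dipole-dipole coefficient: C_3 ~ n*^4
C6 = F(11)             # vdW coefficient: C_6 ~ n*^11
spacing_exp = F(-3)    # Rydberg level spacings (and Omega, Delta, J_0) ~ n*^-3

# perturbativity fixes C_3 / a^3 ~ n*^-3, so the lattice-spacing exponent is
a_exp = (C3 - spacing_exp) / 3
assert a_exp == F(7, 3)

# the soft-core (vdW) interaction then scales as C_6 / a^6 ~ n*^-3
assert C6 - 6 * a_exp == F(-3)
print("a ~ n*^(7/3);  C6/a^6 ~ n*^-3")
```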
As a result, the blockade radius (in units of the lattice spacing) does not scale with $n^*$, and $N\overline{J} \propto n^{*-3}$, which implies smaller transverse fields are needed with increasing $n^*$. Moreover, we see that $\gamma/N \overline{J} \propto a+b n^*$, so the presence of blackbody radiation leads to worse decoherence with increasing $n^*$, although based on the values of $a,b$ for $^{88}$Sr above, we see that the effect is relatively small even at room temperature. \begin{figure} \centering \includegraphics[scale=.55]{figS3_v4J.pdf} \caption{Parameters used for the overlaid curves in Fig.~3 of the main text as a function of $r_b$, in addition to the achievable spin squeezing. We plot the values of (a-b) $N\overline{J}$, (c-d) $\Omega$, and (e-f) $\overline{J}\tau/f$ for each atom considered. The dotted continuations of each line denote parameters for which the maximum cutoff on $\Omega$ ($2\pi\times 10$ MHz) or $N\overline{J}$ ($2\pi\times 20$ kHz) has been exceeded. (g-h) We also show the spin squeezing achievable for these parameters, and compare to the corresponding Ising dynamics, denoted by the faded lines. For the Ising results, we only consider a cutoff on $\Omega$, as we do not require a restriction on $N\overline{J}$ for implementing a transverse field. We show results for Rydberg fractions $f=0.01$ (a,c,e,g) and $f=0.001$ (b,d,f,h). Note that the values of $N\overline{J}$ and $\Omega$ for $^{88}$Sr and $^{87}$Rb with $n=60$ lie virtually on top of each other.} \label{fig:Fig3_params} \end{figure} In Fig.~\ref{fig:Fig3_params}, we plot the values of $N\overline{J}$ and $\Omega$ as well as the dimensionless quantity $\overline{J}\tau/f$ as a function of $r_b$ for each curve shown in Fig.~3, corresponding to each Rydberg level and atom considered, and different fixed Rydberg fractions $f$; we additionally consider results for $^{88}$Sr with $n=60$, not shown in the main text.
We also plot the corresponding value of the spin squeezing in the presence of the relevant decoherence for the considered atom, Rydberg level, Rydberg fraction $f$, and blockade radius $r_b$. Values of $r_b$ for which either $\Omega$ or $N\overline{J}$ exceeds the restricted range, i.e. $\Omega > 2\pi\times 10$ MHz or $N\overline{J} > 2\pi\times 20$ kHz, are denoted by a dotted line; in Fig.~3, we simply terminate the curves when these conditions are violated. For the comparable Ising results shown in Fig.~\ref{fig:Fig3_params} (which we do not plot in Fig.~3), we only impose the restriction on the value of $\Omega$, since we do not need to implement a transverse field for this protocol. We note that for $f=0.01$, where the associated interaction timescales are typically long compared to those for $f=0.001$, the threshold on $N\overline{J}$ is exceeded before the threshold on $\Omega$ for the chosen examples, whereas the opposite tends to be the case for $f=0.001$. As also evident in Fig.~3 of the main text, we observe that for fixed $f$, an increase in $r_b$ is accompanied by a decrease in $N\overline{J}$, and the dynamics become increasingly susceptible to the effects of decoherence, leading to a comparably faster degradation of the achievable spin squeezing for the XX model versus the Ising model. \section{Dissipative DTWA} \label{DissDTWA} To treat incoherent processes in the system, we use the semiclassical dissipative discrete truncated Wigner approximation (DDTWA) \cite{Huber2022,Barberena2022} to simulate the dynamics of the master equation in Eq.~(5a) of the main text. We formulate a semiclassical description of our system, introducing classical variables $\mathcal{S}_i^\mu$ corresponding to the value of $s_i^\mu$, where $\mu = x,y,z$ and $1 \leq i \leq N$.
For an initial spin-polarized state along $+z$, we form a discrete Wigner function \begin{align} W(\vec{\mathcal{S}}_i) = \frac{1}{4}\Big[\delta(\mathcal{S}_i^x - 1/2) + \delta(\mathcal{S}_i^x + 1/2)\Big]\Big[\delta(\mathcal{S}_i^y - 1/2) + \delta(\mathcal{S}_i^y + 1/2)\Big]\delta(\mathcal{S}_i^z - 1/2). \end{align} For each spin, this amounts to the four phase space points $(\mathcal{S}_i^x,\mathcal{S}_i^y,\mathcal{S}_i^z) = (\pm 0.5, \pm 0.5, 0.5)$ each occurring with equal probability $1/4$. The coherent dynamics are then obtained by solving the associated classical equations of motion of the relevant Hamiltonian, in conjunction with randomly sampling initial values for $(\mathcal{S}_i^x,\mathcal{S}_i^y,\mathcal{S}_i^z)_{1\leq i\leq N}$ according to the above distribution. Incoherent terms in our master equation may be accounted for by the addition of stochastic noise terms to our classical equations of motion. For further details, see \cite{Gardiner2009,Huber2022,Barberena2022}. For an ensemble of dynamical trajectories with initial conditions sampled from our initial Wigner distributions, quantum expectation values may then be approximated via $\langle s_i^\mu(t)\rangle \approx \overline{\mathcal{S}_i^\mu(t)}$, where $\overline{\,\cdot\,}$ denotes averaging with respect to this ensemble. Likewise, symmetrically-ordered correlators may be obtained via $\langle(s_i^\mu s_j^\nu + s_j^\nu s_i^\mu)(t)\rangle/2 \approx \overline{\mathcal{S}_i^\mu(t)\mathcal{S}_j^\nu(t)}$. Given the generic nonlinear nature of our classical equations of motion, this averaging produces results beyond mean-field theory that take into account the effect of the quantum noise distribution on the dynamics \cite{Schachenmayer2015a,Schachenmayer2015b,Zhu2019}.
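The initial-state sampling step described above can be sketched with standard-library Python alone (the implementation details here are ours, not from the cited references):

```python
# Hedged sketch: Monte Carlo sampling of the discrete Wigner function above
# for a +z-polarized spin-1/2. Each spin takes (Sx, Sy, Sz) = (+-1/2, +-1/2, +1/2)
# with probability 1/4 each, which reproduces <s^z> = 1/2, <s^x> = <s^y> = 0
# and the quantum noise Var(s^x) = Var(s^y) = 1/4 of the coherent state.
import random

def sample_spin(rng):
    """Draw one of the four equally likely phase-space points."""
    return (rng.choice([-0.5, 0.5]), rng.choice([-0.5, 0.5]), 0.5)

rng = random.Random(0)
samples = [sample_spin(rng) for _ in range(20000)]
mean = [sum(s[mu] for s in samples) / len(samples) for mu in range(3)]
var_x = sum(s[0] ** 2 for s in samples) / len(samples) - mean[0] ** 2

print(mean, var_x)  # mean ~ [0, 0, 0.5]; var_x ~ 0.25 up to sampling error
```

In the full DDTWA scheme, each sampled tuple would then be evolved under the classical equations of motion (plus stochastic noise terms for the dissipators) and observables averaged over trajectories.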
\section{Introduction}\label{sec:intro} Historically, humans have performed inconsistently in judgemental forecasting \cite{Makridakis2010,TetlockExp2017}, which incorporates subjective opinion and probability estimates into predictions \cite{Lawrence2006}. Yet, human judgement remains essential in cases where pure statistical methods are not applicable, e.g. where historical data alone is insufficient or for one-off, more `unknowable' events \cite{Petropoulos2016,Arvan2019,deBaets2020}. Judgemental forecasting is widely relied upon for decision-making \cite{Nikolopoulos2021}, in myriad fields from epidemiology to national security \cite{Nikolopoulos2015,Litsiou2019}. Effective tools to help humans improve their predictive capabilities thus have enormous potential for impact. Two recent global events -- the COVID-19 pandemic and the US withdrawal from Afghanistan -- underscore this by highlighting the human and financial cost of predictive deficiency. A multi-purpose system which could improve our ability to predict the incidence and impact of events by as little as 5\% could save millions of lives and be worth trillions of dollars per year \cite{TetlockGard2016}. Research on judgemental forecasting (see \cite{Lawrence2006,Zellner2021} for overviews), including the recent\AX{,} groundbreaking `Superforecasting Experiment' \cite{TetlockGard2016}, is instructive in establishing the desired properties for systems for supporting forecasting. In addition to reaffirming the importance of fine-grained probabilistic reasoning \cite{Mellers2015}, this literature points to the benefits of some group techniques versus solo forecasting \cite{Landeta2011,Tetlock2014art}, of synthesising qualitative and quantitative information \cite{Lawrence2006}, of combating agents' irrationality \cite{Chang2016} and of high agent engagement with the forecasting challenge, e.g. robust debating \cite{Landeta2011} and frequent prediction updates \cite{Mellers2015}.
Meanwhile, \emph{computational argumentation} (see \cite{AImagazine17,handbook} for recent overviews) is a field of AI which involves reasoning with uncertainty and resolving conflicting information, e.g. in natural language debates. As such, it is an ideal candidate for aggregating the broad, polymorphous set of information involved in judgemental group forecasting. An extensive and growing literature is based on various argumentation frameworks -- rule-based systems for aggregating, representing and evaluating sets of arguments, such as those applied in the contexts of \emph{scheduling} \cite{Cyras_19}, \emph{fact checking} \cite{Kotonya_20} or in various instances of \emph{explainable AI} \cite{Cyras_21}. Subsets of the requirements for forecasting systems are addressed by individual formalisms, e.g. \emph{probabilistic argumentation} \AX{\cite{Dung2010,Thimm2012,Hunter2013,Fazzinga2018}} may effectively represent and analyse uncertain arguments about the future. However, we posit that a purpose-built argumentation framework for forecasting is essential to effectively utilise argumentation's reasoning capabilities in this context. \begin{figure*} \includegraphics[width=\textwidth]{images/FAF_diagram.png} \caption{The step-by-step process of a FAF over its lifetime.} \label{fig:FAFdiag} \end{figure*} In this paper, we attempt to cross-fertilise these two as yet unconnected academic areas. We draw from the forecasting literature to inform the design of a new computational argumentation approach: \emph{Forecasting Argumentation Frameworks} (FAFs). FAFs empower (human and artificial) agents to structure debates in real time and to deliver argumentation-based forecasting. They offer an approach in the spirit of \emph{deliberative democracy} \cite{Bessette1980} to respond to a forecasting problem over time.
The steps which underpin FAFs are depicted in Figure \ref{fig:FAFdiag} (referenced throughout) and can be described in simple terms \FT{as follows}: a FAF is initialised with a time limit \FT{(for the overall forecasting process and for each iteration therein)} and a pre-agreed `base-rate' forecast $\ensuremath{\mathcal{F}}$ (Stage 1), e.g. based on historical data. \FT{Then,} the forecast is revised by one or more (non-concurrent) debates, \BI{in the form of `update frameworks' (Stage 2)}, opened and resolved by participating agents \FT{(}until \FT{the} specified time limit is reached\FT{)}. Each update framework begins with a proposed revision to the current forecast (Stage 2a), and proceeds with a cycle of argumentation (Stage 2b) about the proposed forecast, voting on said argumentation and forecasting. Forecasts deemed `irrational' with a view to agents' argumentation and voting are blocked. Finally, the rational forecasts are aggregated and the result replaces the current group forecast (Stage 2c). This process may be repeated over time \BI{in an indefinite number of update frameworks} (thus continually \BI{revising} the group forecast) until the \FT{(overall)} time limit is reached. The composite nature of this process enables the appraisal of new information relevant to the forecasting question as and when it arrives. Rather than confronting an unbounded forecasting question with a diffuse set of possible debates open at once, all agents concentrate their argumentation on a single topic (a proposal) at any given time. After giving the necessary background on forecasting and argumentation (§\ref{sec:background}), we formalise our \FT{update} framework\FT{s for Step 2a} (§\ref{sec:fw}). We then give \FT{our} notion of rationality \FT{(Step 2b)}, along with \FT{our} new method for \FT{aggregating rational forecasts (Step 2c)} from a group of agents (§\ref{sec:forecasting}) \FT{and FAFs overall}. 
We explore the underlying properties of \FT{FAFs} (§\ref{sec:props}), before describing \AX{an experiment} with \FT{a prototype implementing} our approach (§\ref{sec:experiments}). Finally, we conclude and suggest potentially fruitful avenues for future work (§\ref{sec:conclusions}). \section{Background}\label{sec:background} \subsection{Forecasting} Studies on the efficacy of judgemental forecasting have shown mixed results \cite{Makridakis2010,TetlockExp2017,Goodwin2019}. Limitations of the judgemental approach are a result of well-documented cognitive biases \cite{Kahneman2012}, irrationalities in human probabilistic reasoning which lead to distortion of forecasts. Manifold methodologies have been explored to improve judgemental forecasting accuracy, with varying success \cite{Lawrence2006}. These methodologies include, but are not limited to, prediction intervals \cite{Lawrence1989}, decomposition \cite{MacGregorDonaldG1994Jdwd}, structured analogies \cite{Green2007,Nikolopoulos2015} and unaided judgement \cite{Litsiou2019}. Various group forecasting techniques have also been explored \cite{Linstone1975,Delbecq1986,Landeta2011}, although the risks of groupthink \cite{McNees1987} and the importance of maintaining the independence of each group member's individual forecast are well established \cite{Armstrong2001}. Recent advances in the field have been led by Tetlock and Mellers' superforecasting experiment \cite{TetlockGard2016}, which leveraged \AX{geopolitical} forecasting tournaments and a base of 5000 volunteer forecasters to identify individuals with consistently exceptional accuracy (top 2\%). The experiment\AR{'s} findings underline the effectiveness of group forecasting orientated around debating \cite{Tetlock2014art}, and demonstrate a specific cognitive-intellectual approach conducive to forecasting \cite{Mellers20151,Mellers2015}, but stop short of suggesting a concrete universal methodology for higher accuracy.
Instead, Tetlock draws on his own work and previous literature to crystallise a broad set of methodological principles by which superforecasters abide \cite[pg.144]{TetlockGard2016}: \begin{itemize} \item \emph{Pragmatic}: not wedded to any idea or agenda; \item \emph{Analytical}: capable of stepping back from the tip-of-your-nose perspective and considering other views; \item \emph{Dragonfly-eyed}: value diverse views and synthesise them into their own; \item \emph{Probabilistic}: judge using many grades of maybe; \item \emph{Thoughtful updaters}: when facts change, they change their minds; \item \emph{Good intuitive psychologists}: aware of the value of checking thinking for cognitive and emotional biases. \end{itemize} Subsequent research has explored further optimisation of forecasting tournament preparation \cite{penn_global_2021,Katsagounos2021} and extended Tetlock and Mellers' approach to answer broader, more time-distant questions \cite{georgetown}. It should be noted that there have been no recent advances in computational tool\AX{kits} for the field similar to that proposed in this paper. \subsection{Computational Argumentation} We posit that existing argumentation formalisms are not well suited for the aforementioned future-based arguments, which are necessarily semantically and structurally different from arguments about present or past concerns.
Specifically, forecasting arguments are inherently probabilistic and must deal with the passage of time and its implications for the outcomes at hand. Further, several other important characteristics can be drawn from the forecasting literature which render current argumentation formalisms unsuitable, e.g. the paramountcy of dealing with bias (in data and cognition), forming granular conclusions, fostering group debate and the co-occurrence of qualitative and quantitative arguing. Nonetheless, several of these characteristics have been previously explored in argumentation, and our formalisation draws from several existing approaches. First and foremost, it draws in spirit from abstract argumentation frameworks (AAFs) \cite{Dung1995}, in that the arguments' inner contents are ignored and the focus is on the relationships between arguments. However, we consider arguments of different types and \AX{an additional relation of} support (pro), \AX{rather than} attack (con) alone as in \cite{Dung1995}. Past work has also introduced probabilistic constraints into argumentation frameworks. \emph{Probabilistic AAFs} (prAAFs) propose two divergent ways for modelling uncertainty in abstract argumentation using probabilities -- the constellation approach \cite{Dung2010,Li2012} and the epistemic approach \cite{Hunter2013,Hunter2014,Hunter2020}. These formalisations use probability as a means to assess uncertainty over the validity of arguments (epistemic) or graph topology (constellation), but do not enable reasoning \emph{with} or \emph{about} probability, which is fundamental in forecasting. In exploring temporality, \cite{Cobo2010} augment AAFs by providing each argument with a limited lifetime. Temporal constraints have been extended in \cite{Cobo2012} and \cite{Baron2014}. Elsewhere, \cite{Rago2017} have used argumentation to model irrationality or bias in agents.
Finally, a wide range of gradual evaluation methods have gone beyond traditional qualitative semantics by measuring arguments' acceptability on a scale (normally [0,1]) \cite{Leite2011,Evripidou2012,Amgoud2017,Amgoud2018,Amgoud2016}. Many of these approaches have been unified as Quantitative Bipolar Argumentation Frameworks (QBAFs) in \cite{Baroni2018}. Amongst existing approaches, of special relevance in this paper are Quantitative Argumentation Debate (QuAD) frameworks \cite{Baroni2015}, i.e. 5-tuples ⟨$\mathcal{X}^a$, $\mathcal{X}^c$, $\mathcal{X}^p$, $\mathcal{R}$, $\ensuremath{\mathcal{\tau}}$⟩ where $\mathcal{X}^a$ is a finite set of \emph{answer} arguments (to implicit \emph{issues}); $\mathcal{X}^c$ is a finite set of \emph{con} arguments; $\mathcal{X}^p$ is a finite set of \emph{pro} arguments; $\mathcal{X}^a$, $\mathcal{X}^c$ and $\mathcal{X}^p$ are pairwise disjoint; $\mathcal{R} \subseteq (\mathcal{X}^c \cup \mathcal{X}^p) \times (\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p)$ is an acyclic binary relation; $\ensuremath{\mathcal{\tau}}$ : $(\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p) \rightarrow [0,1]$ is a total function: $\ensuremath{\mathcal{\tau}}(a)$ is the \emph{base score} of $a$. Here, attackers and supporters of arguments are determined by the pro and con arguments they are in relation with. Formally, for any $a\in\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p$, the set of \emph{con} (\emph{pro}\AX{)} \emph{arguments} of $a$ is $\mathcal{R}^-(a) = \{b\in\mathcal{X}^c|(b,a)\in\mathcal{R}\}$ ($\mathcal{R}^+(a) = \{b\in\mathcal{X}^p|(b,a)\in\mathcal{R}\}$, resp.). Arguments in QuAD frameworks are scored by the \emph{Discontinuity-Free QuAD} (DF-QuAD) algorithm \cite{Rago2016}, using the argument's intrinsic base score and the \emph{strengths} of its pro/con arguments. 
\FTn{Given that DF-QuAD is used to define our method (see Def.~\ref{def:conscore}), for completeness we define it formally here.} DF-QuAD's \emph{strength aggregation function} is defined as $\Sigma : [0,1]^* \rightarrow [0,1]$, where for $\mathcal{S} = (v_1,\ldots,v_n) \in [0,1]^*$: if $n=0$, $\Sigma(\mathcal{S}) = 0$; if $n=1$, $\Sigma(\mathcal{S}) = v_1$; if $n=2$, $\Sigma(\mathcal{S}) = f(v_1, v_2)$; if $n>2$, $\Sigma(\mathcal{S}) = f(\Sigma(v_1,\ldots,v_{n-1}), v_n)$; with the \emph{base function} $f : [0,1]\times [0,1] \rightarrow [0,1]$ defined, for $v_1, v_2\in [0,1]$, as: $f(v_1,v_2)=v_1+(1-v_1)\cdot v_2 = v_1 + v_2 - v_1\cdot v_2$. After separate aggregation of the argument's pro/con descendants, the combination function $c : [0,1]\times [0,1]\times [0,1]\rightarrow [0,1]$ combines $v^-$ and $v^+$ with the argument's base score ($v^0$): $c(v^0,v^-,v^+)=v^0-v^0\cdot\mid v^+ - v^-\mid\:if\:v^-\geq v^+$ and $c(v^0,v^-,v^+)=v^0+(1-v^0)\cdot\mid v^+ - v^-\mid\:if\:v^-< v^+$, respectively. The inputs for the combination function are provided by the \emph{score function}, $\ensuremath{\mathcal{\sigma}} : \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p\rightarrow [0,1]$, which gives the argument's strength, as follows: for any $\ensuremath{x} \in \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p$: $\ensuremath{\mathcal{\sigma}}(\ensuremath{x}) = c(\ensuremath{\mathcal{\tau}}(\ensuremath{x}),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^+(\ensuremath{x}))))$ where if $(\ensuremath{x}_1,\ldots,\ensuremath{x}_n)$ is an arbitrary permutation of the ($n \geq 0$) con arguments in $\mathcal{R}^-(\ensuremath{x})$, $\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))=(\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_1),\ldots,\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_n))$ (similarly for pro arguments).
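The DF-QuAD score function above admits a compact recursive implementation; the following sketch uses the same $f$, $\Sigma$ and $c$ on a toy tree (node names and base scores are illustrative, not from the paper):

```python
# Hedged sketch: the DF-QuAD score function sigma on a toy argument tree.
from dataclasses import dataclass, field

@dataclass
class Arg:
    base: float                                # tau(x), base score in [0,1]
    pros: list = field(default_factory=list)   # R^+(x)
    cons: list = field(default_factory=list)   # R^-(x)

def agg(vals):
    """Strength aggregation Sigma, folding f(v1, v2) = v1 + v2 - v1*v2."""
    out = 0.0
    for v in vals:
        out = out + v - out * v
    return out

def score(x: Arg) -> float:
    """sigma(x) = c(tau(x), Sigma(sigma(cons)), Sigma(sigma(pros)))."""
    v0 = x.base
    vm = agg(score(b) for b in x.cons)
    vp = agg(score(b) for b in x.pros)
    if vm >= vp:
        return v0 - v0 * (vm - vp)
    return v0 + (1 - v0) * (vp - vm)

# A pro argument weakened by one con, supporting an answer argument:
con = Arg(0.5)
pro = Arg(0.8, cons=[con])      # sigma(pro) = 0.8 - 0.8*0.5 = 0.4
answer = Arg(0.5, pros=[pro])   # sigma(answer) = 0.5 + 0.5*0.4 = 0.7
print(score(pro), score(answer))  # 0.4 0.7
```

The fold in `agg` reproduces the recursive cases of $\Sigma$ ($n=0,1,2,>2$) because $f$ is associative on this computation order.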
Note that the DF-QuAD notion of $\ensuremath{\mathcal{\sigma}}$ can be applied to any argumentation framework where arguments are equipped with base scores and pro/con arguments. We will do so later, for our novel formalism. \section{Update \AX{F}rameworks}\label{sec:fw} We begin by defining the individual components of our frameworks, starting with the fundamental notion of a \emph{forecast}. \FT{This} is a probability estimate for the positive outcome of a given (binary) question. \begin{definition} A \emph{forecast} $\ensuremath{\mathcal{F}}$ is the probability $P(\ensuremath{\mathcal{Q}}=true) \in [0,1]$ for a given \emph{forecasting question} $\ensuremath{\mathcal{Q}}$. \end{definition} \begin{example} \label{FAFEx} Consider the forecasting question $\ensuremath{\mathcal{Q}}$: \emph{`Will the Tokyo \AX{2020 Summer} Olympics be cancelled/postponed to another year?'}. \AX{Here, the $true$ outcome amounts to the Olympics being cancelled/postponed (and $false$ to it taking place in 2020 as planned).} Then, a forecast $\ensuremath{\mathcal{F}}$ may be $P(\ensuremath{\mathcal{Q}}=true)= 0.15$, which amounts to a 15\% probability of the Olympics \BIn{being cancelled/postponed}. \BI{Note that $\ensuremath{\mathcal{F}}$ may have been introduced as part of an update framework (herein described), or as an initial base rate at the outset of a FAF (Stage 1 in Figure \ref{fig:FAFdiag}).} \end{example} In the remainder, we will often drop $\ensuremath{\mathcal{Q}}$, implicitly assuming it is given, and use $P(true)$ to stand for $P(\ensuremath{\mathcal{Q}}=true)$.
In order to empower agents to reason about probabilities and thus support forecasting, we need, in addition to \emph{pro/con} arguments as in QuAD frameworks, two new argument types: \begin{itemize} \item \emph{proposal} arguments, each about some forecast (and its underlying forecasting question); each proposal argument $\ensuremath{\mathcal{P}}$ has a \emph{forecast} and, optionally, some supporting \emph{evidence}; and \item \emph{amendment} arguments, which \AX{suggest a modification to} some forecast\AX{'s probability} by increasing or decreasing it, and are accordingly separated into disjoint classes of \emph{increase} and \emph{decrease} (amendment) arguments.\footnote{Note that we decline to include a third type of amendment argument for arguing that $\ensuremath{\Forecast^\Proposal}$ is just right. This choice rests on the assumption that additional information always necessitates a change to $\ensuremath{\Forecast^\Proposal}$, however granular that change may be. This does not restrict individual agents arguing about $\ensuremath{\Forecast^\Proposal}$ from casting $\ensuremath{\Forecast^\Proposal}$ as their own final forecast. However, rather than cohering their argumentation around $\ensuremath{\Forecast^\Proposal}$, which we hypothesise would lead to high risk of groupthink~\cite{McNees1987}, agents are compelled to consider the impact of their amendment arguments on this more granular level.} \end{itemize} Note that amendment arguments are introduced specifically for arguing about a proposal argument, given that traditional QuAD pro/con arguments are of limited use when the goal is to judge the acceptability of a probability, and that in forecasting agents must not only decide \emph{if} they agree/disagree but also \emph{how} they agree/disagree (i.e. whether they believe the forecast is too low or too high considering, if available, the evidence). Amendment arguments, with their increase and decrease classes, provide for this.
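The argument taxonomy just introduced can be sketched as plain data types; all names below are ours, for illustration, since the paper treats arguments abstractly:

```python
# Hedged sketch: the proposal/amendment argument types described above.
# Names and fields are illustrative, not part of the paper's formalism.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Direction(Enum):
    INCREASE = "increase"   # argues the proposed forecast is too low
    DECREASE = "decrease"   # argues the proposed forecast is too high

@dataclass(frozen=True)
class Proposal:
    forecast: float                  # P(Q = true) in [0, 1]
    evidence: Optional[str] = None   # optional supporting evidence

@dataclass(frozen=True)
class Amendment:
    direction: Direction             # only a direction, never a magnitude
    text: str

p = Proposal(0.75, "A new poll shows 80% of the Japanese public want cancellation.")
d1 = Amendment(Direction.DECREASE, "The IOC and government will ignore the public.")
assert 0.0 <= p.forecast <= 1.0
print(d1.direction.value)  # decrease
```

Note that `Amendment` deliberately carries no magnitude, mirroring the point below that amendment arguments state only a direction with respect to the proposed forecast.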
\begin{example} \label{ProposalExample} A proposal argument $\ensuremath{\mathcal{P}}$ in the Tokyo Olympics setting may comprise the forecast: \emph{\AX{`}There is a 75\% chance that the Olympics will be cancelled/postponed to another year'}. It may also include evidence: \emph{`A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled. The Japanese government is likely to buckle under this pressure.'} This argument may aim to prompt updating the earlier forecast in Example~\ref{FAFEx}. A \emph{decrease} amendment argument may be $\ensuremath{\decarg_1}$: \emph{`The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'}. An \emph{increase} amendment argument may be $\ensuremath{\incarg_1}$: \emph{`Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation'}. \end{example} Intuitively, a proposal argument is the focal point of the argumentation. It typically suggests a new forecast to replace prior forecasts, argued on the basis of some new evidence (as in the earlier example). We will see that proposal arguments remain immutable through each debate (update framework), which takes place via amendment arguments and standard pro/con arguments. Note that, wrt QuAD argument types, proposal arguments replace issues and amendment arguments replace answers, in that the former are driving the debates and the latter are the options up for debate. Note also that amendment arguments merely state a direction wrt $\ensuremath{\Forecast^\Proposal}$ and do not contain any more information, such as \emph{how much} to alter $\ensuremath{\Forecast^\Proposal}$ by. We will see that alteration can be determined by \emph{scoring} amendment arguments.
Proposal and amendment arguments, alongside pro/con arguments, form part of our novel update frameworks \BI{(Stage 2 of Figure \ref{fig:FAFdiag})}, defined as follows: \begin{definition} An \emph{update framework} is a nonad ⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{\mathcal{A}}, \ensuremath{\mathcal{V}}, \ensuremath{\Forecast^\Agents}$⟩ such that: \begin{itemize} \item[$\bullet$] $\ensuremath{\mathcal{P}}$ is a single proposal argument with \emph{forecast} $\ensuremath{\Forecast^\Proposal}$ and, optionally, \emph{evidence} $\mathcal{E}^\ensuremath{\mathcal{P}}$ for this forecast; \item[$\bullet$] $\ensuremath{\mathcal{X}} = \ensuremath{\AmmArgs^\uparrow} \cup \ensuremath{\AmmArgs^\downarrow}$ is a finite set of \emph{amendment arguments} composed of subsets $\ensuremath{\AmmArgs^\uparrow}$ of \emph{increase} arguments and $\ensuremath{\AmmArgs^\downarrow}$ of \emph{decrease} arguments; \item[$\bullet$] $\ensuremath{\AmmArgs^-}$ is a finite set of \emph{con} arguments; \item[$\bullet$] $\ensuremath{\AmmArgs^+}$ is a finite set of \emph{pro} arguments; \item[$\bullet$] the sets $\{\ensuremath{\mathcal{P}}\}$, $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^-}$ and $\ensuremath{\AmmArgs^+}$ are pairwise disjoint; \item[$\bullet$] $\ensuremath{\Rels^p} \subseteq \ensuremath{\mathcal{X}} \times \{\ensuremath{\mathcal{P}}\}$ is a directed acyclic binary relation between amendment arguments and the proposal argument (we may refer to this relation informally as `probabilistic'); \item[$\bullet$] $\ensuremath{\Rels} \subseteq (\ensuremath{\AmmArgs^-} \cup \ensuremath{\AmmArgs^+}) \times (\ensuremath{\mathcal{X}} \cup \ensuremath{\AmmArgs^-} \cup \ensuremath{\AmmArgs^+})$ is a directed acyclic binary relation \FTn{from} pro/con arguments \FTn{to} amendment\FTn{/pro/con arguments} (we may refer to this relation informally as `argumentative'); \item[$\bullet$] $\ensuremath{\mathcal{A}} = \{ \ensuremath{a}_1, \ldots, \ensuremath{a}_n \}$ is a finite set of \emph{agents} ($n > 1$); \item[$\bullet$] $\ensuremath{\mathcal{V}} : \ensuremath{\mathcal{A}} \times (\ensuremath{\AmmArgs^-} \cup \ensuremath{\AmmArgs^+}) \rightarrow [0, 1]$ is a total function such that $\ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$ is the \emph{vote} of agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$ on argument $\ensuremath{x} \in \ensuremath{\AmmArgs^-} \cup \ensuremath{\AmmArgs^+}$; with an abuse of notation, we let $\ensuremath{\mathcal{V}}_\ensuremath{a} : (\ensuremath{\AmmArgs^-} \cup \ensuremath{\AmmArgs^+}) \rightarrow [0, 1]$ represent the votes of a \emph{single} agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$, e.g. $\ensuremath{\mathcal{V}}_\ensuremath{a}(\ensuremath{x}) = \ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$; \item[$\bullet$] $\ensuremath{\Forecast^\Agents} = \{ \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n} \}$ is such that $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}$, where $i \in \{ 1, \ldots, n \}$, is the \emph{forecast} of agent $\ensuremath{a}_i\in\ensuremath{\mathcal{A}}$. \end{itemize} \end{definition} \BIn{Note that pro \AX{(}con\AX{)} arguments can be seen as supporting (attacking, resp.)
other arguments via $\ensuremath{\mathcal{R}}$, as in the case of conventional QuAD frameworks~\cite{Baroni2015}.} \begin{example} \label{eg:tokyo} A possible update framework in our running setting may include $\ensuremath{\mathcal{P}}$ as in Example~\ref{ProposalExample} as well as (see Table \ref{table:tokyo}) $\ensuremath{\mathcal{X}}=\{\ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ensuremath{\incarg_1}\}$, $\ensuremath{\AmmArgs^-}=\{\ensuremath{\attarg_1}, \ensuremath{\attarg_2}, \ensuremath{\attarg_3}\}$, $\ensuremath{\AmmArgs^+}=\{\ensuremath{\supparg_1}, \ensuremath{\supparg_2}\}$, $\ensuremath{\Rels^p}=\{(\ensuremath{\decarg_1}, \ensuremath{\mathcal{P}}), (\ensuremath{\decarg_2}, \ensuremath{\mathcal{P}}), (\ensuremath{\incarg_1}, \ensuremath{\mathcal{P}})\}$, and $\ensuremath{\mathcal{R}}=\{(\ensuremath{\attarg_1}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_2}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_3}, \ensuremath{\incarg_1}), (\ensuremath{\supparg_1}, \ensuremath{\decarg_2}), (\ensuremath{\supparg_2}, \ensuremath{\incarg_1})\}$. Figure \ref{fig:tokyo} gives a graphical representation of these arguments and relations. \BIn{Assuming $\ensuremath{\mathcal{A}}=\{alice, bob, charlie\}$, $\ensuremath{\mathcal{V}}$ may be such that $\AX{\ensuremath{\mathcal{V}}_{alice}(\ensuremath{\attarg_1})} = 1$, $\AX{\ensuremath{\mathcal{V}}_{bob}(\ensuremath{\supparg_1})} = 0$, and so on.} \end{example} \begin{table}[t] \begin{tabular}{p{0.7cm}p{6.7cm}} \hline & Content \\ \hline $\ensuremath{\mathcal{P}}$ & `A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled owing to COVID-19, and the Japanese government is likely to buckle under this pressure ($\mathcal{E}^\ensuremath{\mathcal{P}}$). Thus, there is a 75\% chance that the Olympics will be cancelled/postponed to another year' ($\ensuremath{\Forecast^\Proposal}$).
\\ $\ensuremath{\decarg_1}$ & `The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'. \\ $\ensuremath{\decarg_2}$ & `This poll comes from an unreliable source.' \vspace{2mm}\\ $\ensuremath{\incarg_1}$ & `Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation.' \\ $\ensuremath{\attarg_1}$ & `The IOC is bluffing -- people are dying, Japan is experiencing a strike. They will not go ahead with the games if there is a risk of mass death.' \\ $\ensuremath{\attarg_2}$ & `The Japanese government may renege on its commitment to the IOC, and use legislative or immigration levers to block the event.' \\ $\ensuremath{\attarg_3}$ & `Japan's government has sustained a high approval rating in the last year and is strong enough to ward off opposition attacks.' \\ $\ensuremath{\supparg_1}$ & `This pollster has a track record of failure on Japanese domestic issues.' \\ $\ensuremath{\supparg_2}$ & `Rising anti-government sentiment on Japanese Twitter indicates that voters may be receptive to such arguments.' \\ \hline \end{tabular} \caption{Arguments in the update framework in Example~\ref{eg:tokyo}.} \label{table:tokyo} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{images/FAF1.png} \centering \caption{\BIn{A graphical representation of arguments and relations in the update framework from Example~\ref{eg:tokyo}. Nodes represent proposal ($\ensuremath{\mathcal{P}}$), increase ($\uparrow$), decrease ($\downarrow$), pro ($+$) and con ($-$) arguments, while \FTn{dashed/solid} edges indicate, resp., the $\ensuremath{\Rels^p}$/$\ensuremath{\mathcal{R}}$ relations.}} \label{fig:tokyo} \end{figure} Several considerations about update frameworks are in order.
First, they represent `stratified' debates: graphically, they can be represented as trees with the proposal argument as root, amendment arguments as children of the root, and pro/con arguments forming the lower layers, as shown in Figure \ref{fig:tokyo}. This tree structure serves to focus argumentation towards the proposal (i.e. the probability and, if available, evidence) it puts forward. Second, we have chosen to impose a `structure' on proposal arguments, whereby their forecast is distinct from their (optional) evidence. Here the forecast has special primacy over the evidence, because forecasts are the vital reference point and the drivers of debates in FAFs. They are, accordingly, both mandatory and required to `stand out' to participating agents. In the spirit of abstract argumentation \cite{Dung1995}, we nonetheless treat all arguments, including proposal arguments, as `abstract', and focus on relations between them rather than between their components. In practice, therefore, amendment arguments may relate to a proposal argument's forecast but also, if present, to its evidence. We opt for this abstract view on the assumption that the flexibility of this approach better suits judgmental forecasting, which has a diversity of use cases (e.g. politics, economics and sport) where different argumentative approaches may be deployed (e.g. quantitative or qualitative, directly attacking amendment nodes or raising alternative points of view) and wherein forecasters may lack even a basic knowledge of argumentation. We leave the study of structured variants of our framework (e.g. see the overview in \cite{structArg}) to future work: these may consider finer-grained representations of all arguments in terms of different components, and finer-grained notions of relations between components, rather than full arguments. Third, in update frameworks, voting is restricted to pro/con arguments.
Preventing agents from voting directly on amendment arguments mitigates the risk of arbitrary judgements: agents cannot make off-the-cuff estimations but can only express their beliefs via (pro/con) argumentation, thus ensuring a more rigorous process of appraisal for the proposal and amendment arguments. Note that, rather than facilitating voting on arguments using a two-valued perspective (i.e. positive/negative) or a three-valued perspective (i.e. positive/negative/neutral), $\ensuremath{\mathcal{V}}$ allows agents to cast more granular judgements of (pro/con) argument acceptability, the need for which has been highlighted in the literature \cite{Mellers2015}. Finally, although we envisage that arguments of all types are put forward by agents during debates, we do not capture this mapping in update frameworks. Thus, we do not capture who put forward which arguments, but instead only use votes to encode and understand agents' views. This enables more nuanced reasoning and full engagement on the part of agents with alternative viewpoints (i.e. an agent may freely argue both for and against a point before taking an explicit view with their voting). Such conditions are essential in a healthy forecasting debate \cite{Landeta2011,Mellers2015}. In the remainder of this paper, with an abuse of notation, we often use $\ensuremath{\Forecast^\Proposal}$ to denote, specifically, the probability advocated in $\ensuremath{\Forecast^\Proposal}$ (e.g. 0.75 in Example \ref{ProposalExample}). \section{Aggregating Rational Forecasts}\label{sec:forecasting} In this section we formally introduce (in \AX{§}\ref{subsec:rationality}) our notion of rationality and discuss how it may be used to identify\BI{, and subsequently `block',} undesirable behaviour in forecasters.
We then define (in \AX{§}\ref{subsec:aggregation}) a method for calculating a revised forecast \BI{(Stage 2c of Figure \ref{fig:FAFdiag})}, which aggregates the views of all agents in the update framework, whilst optimising their overall forecasting accuracy. \subsection{Rationality}\label{subsec:rationality} Characterising an agent’s view as irrational offers opportunities to refine the accuracy of their forecast (and thus the overall aggregated group forecast). Our definition of rationality is inspired by, but goes beyond, that of QuAD-V \cite{Rago2017}, which was introduced for the e-polling context. Whilst update frameworks eventually produce a single aggregated forecast on the basis of group deliberation, each agent is first evaluated for their rationality on an individual basis. Thus, as in QuAD-V, in order to define rationality for individual agents, we first reduce frameworks to \emph{delegate frameworks} for each agent, which are the restriction of update frameworks to a single agent. \begin{definition} A \emph{delegate framework} for an agent $\ensuremath{a}$ is $\ensuremath{u}_{\ensuremath{a}} =$ ⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{a}, \ensuremath{\mathcal{V}}_{\ensuremath{a}}, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩. \end{definition} Note that all arguments in an update framework are included in each agent's delegate framework, but only the agent's votes and forecast are carried over. Recognising the irrationality of an agent requires comparing the agent's forecast against (an aggregation of) their opinions on the amendment arguments and, by extension, on the proposal argument. To this end, we evaluate the different parts of the update framework as follows. 
We use the DF-QuAD algorithm \cite{Rago2016} to score each amendment argument for the agent, in the context of the pro/con arguments `linked' to the amendment argument via $\ensuremath{\mathcal{R}}$, within the agent's delegate framework. We refer to the DF-QuAD score function as $\ensuremath{\mathcal{\sigma}}$. This requires a choice of base scores for amendment arguments as well as pro/con arguments. We assume the same base score $\ensuremath{\mathcal{\tau}}(\ensuremath{x})=0.5$ for all $\ensuremath{x} \in \ensuremath{\mathcal{X}}$; in contrast, the base score of pro/con arguments is a result of the votes they received from the agent, in the spirit of QuAD-V \cite{Rago2017}. The intuition behind assigning a neutral (0.5) base score to amendment arguments is that an agent's estimation of their strength from the outset would be susceptible to bias and inaccuracy. Once each amendment argument has been scored (using $\ensuremath{\mathcal{\sigma}}$) for the agent, we aggregate the scores of all amendment arguments (for the same agent) to calculate the agent's \emph{confidence score} in the proposal argument (which underpins our rationality constraints), by weighing the mean strength of the increase amendment arguments against that of the decrease amendment arguments: \begin{definition}\label{def:conscore} Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩, let $\ensuremath{\AmmArgs^\uparrow} = \{ \ensuremath{\incarg_1}, \ensuremath{\incarg_2}, \ldots , \ensuremath{\arg^\uparrow}_i \}$ and $\ensuremath{\AmmArgs^\downarrow} = \{ \ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ldots , \ensuremath{\arg^\downarrow}_j \}$.
Then, $\ensuremath{a}$'s \emph{confidence score} is as follows: \begin{align} &\text{if } i\neq0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k) - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\ &\text{if } i\neq0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k); \nonumber \\ &\text{if } i=0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\ &\text{if } i=0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = 0. \nonumber \end{align} \end{definition} Note that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) \in [-1,1]$ denotes the overall view of the agent on the forecast $\ensuremath{\Forecast^\Proposal}$ (i.e. as to whether it should be \emph{increased} or \emph{decreased}, and by how much). A negative (positive) $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ indicates that an agent believes that $\ensuremath{\Forecast^\Proposal}$ should be amended down (up, resp.). The magnitude of $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ reflects the degree of the agent's certainty in either direction. In turn, we can constrain an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ if it contradicts this belief, as follows.
\begin{definition}\label{def:irrationality} Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩, $\ensuremath{a}$’s forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ is \emph{strictly rational} (wrt $\ensuremath{u}_{\ensuremath{a}}$) iff: \begin{align} &\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} < \ensuremath{\Forecast^\Proposal}; \nonumber \\ &\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} > \ensuremath{\Forecast^\Proposal}; \nonumber \\ &\mid\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\mid \geq \frac{\mid\ensuremath{\Forecast^\Proposal} - \ensuremath{\Forecast^\Agents}_\ensuremath{a}\mid}{\ensuremath{\Forecast^\Proposal}}. \nonumber \end{align} \end{definition} Hereafter, we refer to forecasts which violate the first two constraints as, resp., \emph{irrational increase} and \emph{irrational decrease} forecasts, and to forecasts which violate the final constraint as \emph{irrational scale} forecasts. This definition of rationality preserves the integrity of the group forecast in two ways.
First, it prevents agents from forecasting against their beliefs: an agent cannot increase $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0$, and an agent cannot decrease $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0$. Second, it ensures that agents cannot make forecasts disproportionate to their confidence score: \emph{how far} an agent $\ensuremath{a}$ deviates from the proposed forecast $\ensuremath{\Forecast^\Proposal}$ is restricted by $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, since $\mid\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\mid$ must be greater than or equal to the relative change to $\ensuremath{\Forecast^\Proposal}$ denoted in the agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$. Note that the \emph{irrational scale} constraint deals with just one direction of proportionality (i.e. providing only a maximum threshold for $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$'s deviation from $\ensuremath{\Forecast^\Proposal}$, but no minimum threshold). Here, we avoid bidirectional proportionality on the grounds that such a constraint would impose an arbitrary notion of arguments' `impact' on agents. An agent may have a very high $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, indicating \FT{their} belief that $\ensuremath{\Forecast^\Proposal}$ is too low, but \AX{may}, we suggest, rationally choose to increase $\ensuremath{\Forecast^\Proposal}$ by only a small amount (e.g. if, despite \FT{their} general agreement with the arguments, \FT{they} believe the overall issue at stake in $\ensuremath{\mathcal{P}}$ to be minor or of low impact on the overall forecasting question).
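The scoring pipeline above can be sketched in a few lines of Python (a minimal sketch: all function names are ours, and the DF-QuAD aggregation and combination functions are reproduced from our reading of \cite{Rago2016}, with the probabilistic sum as strength aggregation):

```python
from functools import reduce

def prob_sum(values):
    """Strength aggregation: probabilistic sum; 0 if there are no arguments."""
    return 1 - reduce(lambda acc, v: acc * (1 - v), values, 1.0)

def dfquad_score(base, attacker_scores, supporter_scores):
    """DF-QuAD combination of a base score with aggregated attack/support."""
    va, vs = prob_sum(attacker_scores), prob_sum(supporter_scores)
    if va >= vs:
        return base - base * (va - vs)
    return base + (1 - base) * (vs - va)

def confidence_score(inc_scores, dec_scores):
    """Mean strength of increase arguments minus mean strength of decrease
    arguments; an empty set contributes 0 (the confidence score C_a(P))."""
    up = sum(inc_scores) / len(inc_scores) if inc_scores else 0.0
    down = sum(dec_scores) / len(dec_scores) if dec_scores else 0.0
    return up - down

def strictly_rational(conf, proposal_forecast, agent_forecast):
    """The three strict-rationality constraints on an agent's forecast."""
    if conf < 0 and not agent_forecast < proposal_forecast:
        return False  # irrational increase
    if conf > 0 and not agent_forecast > proposal_forecast:
        return False  # irrational decrease
    # irrational scale: relative deviation from F^P may not exceed |C_a(P)|
    return abs(conf) >= abs(proposal_forecast - agent_forecast) / proposal_forecast
```

For instance, with a confidence score of $-0.5$ and a proposal forecast of $0.75$, exactly the forecasts in $[0.375, 0.75)$ pass all three constraints, i.e. decreases of up to 50\%.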
Our definition of rationality, which relies on notions of argument strength derived from DF-QuAD, thus informs but does not wholly dictate agents' forecasting, affording them considerable freedom. We leave alternative, stricter definitions of rationality, which may derive from probabilistic conceptions of argument strength, to future work. \begin{example} Consider our running Tokyo Olympics example, with the same arguments and relations from Example \ref{eg:tokyo} and an agent \BIn{$alice$} with a confidence score \BIn{$\ensuremath{\mathcal{C}}_{alice}(\ensuremath{\mathcal{P}}) = -0.5$}. From this we know that \BIn{$alice$} believes that the suggested $\ensuremath{\Forecast^\Proposal}$ in the proposal argument $\ensuremath{\mathcal{P}}$ should be decreased. Then, under our definition of rationality, \BIn{$alice$'s} forecast \BIn{$\ensuremath{\Forecast^\Agents}_{alice}$} is `rational' if it decreases $\ensuremath{\Forecast^\Proposal}$ by up to 50\%. \end{example} If an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ violates these rationality constraints then \BI{it is `blocked'} and the agent is prompted to return to the argumentation graph. From here, they may carry out one or more of the following actions to render their forecast rational: a. Revise their forecast; b. Revise their votes on arguments; c. Add new arguments (and vote on them). \iffalse \begin{enumerate}[label=\alph*.] \item Revise their forecast; \item Revise their votes on arguments; \item Add new arguments to the update framework (and vote on them). \end{enumerate} \fi Whilst a) and b) occur on an agent-by-agent basis, confined to each delegate framework, c) affects the shared update framework and requires special consideration. Each time new \AX{arguments} are added to the shared graph, every agent must vote on \AX{them}, even if they have already made a rational forecast.
In certain cases, after an agent has voted on a new argument, it is possible that their rational forecast is made irrational. In this instance, the agent must resolve their irrationality via the steps above. In this way, the update framework can be refined on an iterative basis until the graph is no longer being modified and all agents' forecasts are rational. At this stage, the update framework has reached a stable state and the agents $\ensuremath{\mathcal{A}}$ are collectively rational: \begin{definition} Given an update framework $\ensuremath{u}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩, $\ensuremath{\mathcal{A}}$ is \emph{collectively rational} (wrt \emph{u}) iff $\forall \ensuremath{a} \in \ensuremath{\mathcal{A}}$, $\ensuremath{a}$ is individually rational (wrt the delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩). \end{definition} When $\ensuremath{\mathcal{A}}$ is collectively rational, the update framework $u$ becomes immutable and the aggregation (defined next) \AX{produces} a group forecast $\ensuremath{\Forecast^g}$, which becomes the new $\ensuremath{\mathcal{F}}$. \subsection{Aggregating Forecasts}\label{subsec:aggregation} After all the agents have made a rational forecast, an aggregation function is applied to produce one collective forecast. One advantage of forecasting debates vis-\`a-vis \AX{the} many other forms of debate is that a ground truth always exists -- an event either happens or does not.
This means that, over time and after enough FAF instantiations, data on the forecasting success of different agents can be amassed. In turn, the relative historical performance of forecasting agents can inform the aggregation of group forecasts. In update frameworks, a weighted aggregation function based on Brier Scoring \cite{Brier1950} is used, such that more accurate forecasting agents have greater influence over the final forecast. Brier Scores are a widely used criterion to measure the accuracy of probabilistic predictions, effectively gauging the distance between a forecaster's predictions and the eventual outcomes, once known, as follows. \begin{definition} \label{def:bscore} Given an agent $\ensuremath{a}$, a non-empty series of forecasts $\ensuremath{\Forecast^\Agents}_\ensuremath{a}(1), \ldots, \ensuremath{\Forecast^\Agents}_\ensuremath{a}(\ensuremath{\mathcal{N}}_{\ensuremath{a}})$ with corresponding actual outcomes $\ensuremath{\mathcal{O}}_1, \ldots,$ $\ensuremath{\mathcal{O}}_{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \in \{true, false\}$ (where $\ensuremath{\mathcal{N}}_{\ensuremath{a}}>0$ is the number of forecasts $\ensuremath{a}$ has made in a non-empty sequence of as many update frameworks), $\ensuremath{a}$'s Brier Score $\ensuremath{b}_{\ensuremath{a}} \in [0, 1]$ is as follows: \begin{align} \ensuremath{b}_{\ensuremath{a}} = \frac{1}{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \sum_{t=1}^{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} (\ensuremath{\Forecast^\Agents}_\ensuremath{a}(t) - val(\ensuremath{\mathcal{O}}_t))^2 \nonumber \end{align} where $val(\ensuremath{\mathcal{O}}_t)=1$ if $\ensuremath{\mathcal{O}}_t=true$, and 0 otherwise. \end{definition} A Brier Score $\ensuremath{b}$ is effectively the mean squared error used to gauge forecasting accuracy, where a low $\ensuremath{b}$ indicates high accuracy and a high $\ensuremath{b}$ indicates low accuracy.
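Def.~\ref{def:bscore} translates directly into code (a Python sketch; the function name is ours):

```python
def brier_score(forecasts, outcomes):
    """Brier score: mean squared distance between probabilistic forecasts
    and the 0/1 values of the corresponding binary outcomes."""
    if not forecasts or len(forecasts) != len(outcomes):
        raise ValueError("need one outcome per forecast, at least one pair")
    return sum((f - (1.0 if o else 0.0)) ** 2
               for f, o in zip(forecasts, outcomes)) / len(forecasts)
```

For instance, a forecaster who assigned 0.75 to an event that happened and 0.2 to one that did not scores $((0.75-1)^2 + (0.2-0)^2)/2 = 0.05125$: close to 0, hence fairly accurate.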
This can be used in the update framework's aggregation function via the weighted arithmetic mean as follows. \AX{E}ach Brier Score is inverted to ensure that more (less, resp.) accurate forecasters have higher (lower, resp.) weighted influence\AX{s} on $\ensuremath{\Forecast^g}$: \begin{definition}\label{def:group} Given a set of agents $\ensuremath{\mathcal{A}} = \{\ensuremath{a}_1, \ldots,\ensuremath{a}_n\}$, their corresponding set of Brier Scores $\ensuremath{b} = \{\ensuremath{b}_{\ensuremath{a}_1}, \ldots,\ensuremath{b}_{\ensuremath{a}_n}\}$ and their forecasts $\ensuremath{\Forecast^\Agents} = \{\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots,\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n}\}$, and letting, for $i \!\!\in\!\! \{ 1, \ldots, n\}$, $w_{i} \!\!=\!\! 1-\ensuremath{b}_{\ensuremath{a}_i}$, the \emph{group forecast} $\ensuremath{\Forecast^g}$ is as follows: \begin{align} &\text{if } \sum_{i=1}^{n}w_{i} \neq 0: & &\ensuremath{\Forecast^g} = \frac{\sum_{i=1}^{n}w_{i}\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}}{\sum_{i=1}^{n}w_{i}}; \nonumber \\ &\text{otherwise}: & &\ensuremath{\Forecast^g} = 0. 
\nonumber \end{align} \end{definition} This group forecast could be `activated' after a fixed number of debates (with the mean average used prior), when sufficient data has been collected on the accuracy of participating agents, or after a single debate, in the context of our general \emph{Forecasting Argumentation Frameworks}: \begin{definition} A \emph{Forecasting Argumentation Framework} (FAF) is a triple ⟨$ \ensuremath{\mathcal{F}}, \ensuremath{\mathcal{U}}, \ensuremath{\mathcal{T}}$⟩ such that: \begin{itemize} \item[$\bullet$] $\ensuremath{\mathcal{F}}$ is a \emph{forecast}; \item[$\bullet$] $\ensuremath{\mathcal{U}}$ is a finite, non-empty sequence of update frameworks with \ensuremath{\mathcal{F}}\ the forecast of the proposal argument in the first update framework in the sequence\AR{;} the forecast of each subsequent update framework is the group forecast of the previous update framework's agents' forecasts; \item[$\bullet$] $\ensuremath{\mathcal{T}}$ is a preset time limit representing the lifetime of the FAF; \item[$\bullet$] each agent's forecast wrt the agent's delegate framework drawn from each update framework is strictly rational. \end{itemize} \end{definition} \begin{example} \BIn{Consider our running Tokyo Olympics example: the overall FAF may be composed of $\ensuremath{\mathcal{F}} = 0.15$, update frameworks $\ensuremath{\mathcal{U}} = \{ u_1, u_2, u_3 \}$ and time limit $\ensuremath{\mathcal{T}}=14\ days$, where $u_3$ is the latest (and therefore the only open) update framework after, for example, four days.} \end{example} \AX{T}he superforecasting literature explores a range of forecast aggregation algorithms, e.g. extremizing algorithms \cite{Baron2014} and variations on logistic \AX{and} Fourier $L_2E$ regression \cite{Cross2018}, with considerable success. \AX{T}hese approaches \AX{aim} at ensuring that less certain \AX{or less} accurate forecasts have a lesser influence over the final aggregated forecast.
We believe that FAFs apply a more intuitive algorithm \AX{since} much of the `work' needed to bypass inaccurate and erroneous forecasting is \AX{expedited} via argumentation. \section{Properties}\label{sec:props} We now undertake a theoretical analysis of FAFs by considering mathematical properties they satisfy. Note that the properties of the DF-QuAD algorithm (see \cite{Rago2016}) hold (for the amendment and pro/con arguments) here. For brevity, we focus on novel properties unique to FAFs which relate to our new argument types. These properties focus on aggregated group forecasts wrt a debate (update framework). They imply two broad and, we posit, desirable principles: \emph{balance} and \emph{unequal representation}. We assume for this section a generic update framework $\ensuremath{u} = $ ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩ with group forecast $\ensuremath{\Forecast^g}$. \paragraph{Balance.} The intuition for these properties is that differences between $\ensuremath{\Forecast^g}$ and $\ensuremath{\Forecast^\Proposal}$ correspond to imbalances between the \emph{increase} and \emph{decrease} amendment arguments. The first result states that $\ensuremath{\Forecast^g}$ only differs from $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\Forecast^\Proposal}$ is the dialectical target of amendment arguments. \begin{proposition} \label{prop:balance1} If $\ensuremath{\mathcal{X}}\!\!=\!\!\emptyset$ ($\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$), then $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Proposal}$.
\end{proposition} \begin{proof} \AX{If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!=\!0$ by Def.~\ref{def:conscore} and $\ensuremath{\Forecast^\Agents}_\ensuremath{a}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:group}.} \end{proof} \AX{T}his simple proposition conveys an important property for forecasting: for an agent to put forward a different forecast, amendment arguments must have been introduced. \begin{example} In the Olympics setting, the group of agents could only forecast higher or lower than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ after the addition of at least one of \AX{the} amendment arguments $\ensuremath{\decarg_1}$, $\ensuremath{\decarg_2}$ or $\ensuremath{\incarg_1}$. \end{example} In the absence of increase \FTn{(decrease)} amendment arguments, if there are decrease \FTn{(increase, resp.)} amendment arguments, then $\ensuremath{\Forecast^g}$ is not higher \FTn{(lower, resp.)} than $\ensuremath{\Forecast^\Proposal}$. \begin{proposition}\label{prop:balance2} If $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}=\emptyset$, then $\ensuremath{\Forecast^g} \leq\ensuremath{\Forecast^\Proposal}$. \FTn{\label{balance3prop} If $\ensuremath{\AmmArgs^\downarrow}=\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$, then $\ensuremath{\Forecast^g}\geq\ensuremath{\Forecast^\Proposal}$.} \end{proposition} \begin{proof} \AX{If $\ensuremath{\AmmArgs^\downarrow}\!\! \neq \!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\!
\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\leq\!0$ by Def.~\ref{def:conscore} and then $\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\leq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\leq\!\ensuremath{\Forecast^\Proposal}$. If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!\neq\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\geq\!0$ by Def.~\ref{def:conscore} and then $\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\geq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\geq\!\ensuremath{\Forecast^\Proposal}$.} \end{proof} This proposition demonstrates that, if a decrease \BIn{(increase)} amendment argument has an effect on the proposal argument, it can only be as its name implies. \begin{example} \BIn{In the Olympics setting, the agents could not forecast higher than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ if either of the decrease amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ \AX{had} been added, but the increase argument $\ensuremath{\incarg_1}$ \AX{had} not. Likewise, \AX{the} agents could not forecast lower than $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\incarg_1}$ \AX{had} been added, but \AX{neither} of $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ \AX{had}.} \end{example} If $\ensuremath{\Forecast^g}$ is lower \BIn{(higher)} than $\ensuremath{\Forecast^\Proposal}$, there is at least one decrease \BIn{(increase, resp.)} argument. \begin{proposition} \label{prop:balance4} If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$.
\BIn{If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$.} \end{proposition} \begin{proof} \AX{ If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}<\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})<0$. Then, irrespective of $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$. If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}>\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})>0$. Then, irrespective of \BIn{$\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$}. } \end{proof} We can see here that the only way an agent can decrease \BIn{(increase)} the forecast is \FT{by adding} decrease \BIn{(increase, resp.)} arguments, ensuring the debate is structured as \FT{intended}. \begin{example} \BIn{In the Olympics setting, the group of agents could only produce a group forecast $\ensuremath{\Forecast^g}$ lower than $\ensuremath{\Forecast^\Proposal}$ due to the presence of \emph{decrease} amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$. Likewise, the group of agents could only produce a $\ensuremath{\Forecast^g}$ higher than $\ensuremath{\Forecast^\Proposal}$ due to the presence of $\ensuremath{\incarg_1}$.} \end{example} \paragraph{Unequal representation.} FAFs exhibit instances of unequal representation in the final voting process.
In formulating the following properties, we distinguish between two forms of unequal representation. First, \emph{dictatorship}, where a single agent dictates $\ensuremath{\Forecast^g}$ with no input from other agents. Second, \emph{pure oligarchy}, where a group of agents dictates $\ensuremath{\Forecast^g}$ with no input from other agents outside the group. In the forecasting setting, these properties are desirable as they guarantee higher accuracy \AX{from} the group forecast $\ensuremath{\Forecast^g}$. An agent with a forecasting record of \emph{some} accuracy exercises \emph{dictatorship} over the group forecast $\ensuremath{\Forecast^g}$, if the rest of the participating \AX{agents} have a record of total inaccuracy. \begin{proposition}\label{prop:dictatorship} If $\ensuremath{a}_d\in\ensuremath{\mathcal{A}}$ has a Brier score $\ensuremath{b}_{\ensuremath{a}_d}<1$ and $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}} \setminus \{\ensuremath{a}_d\}$, $\ensuremath{b}_{\ensuremath{a}_z} = 1$, then $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$. \end{proposition} \begin{proof} \AX{ By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} \!\!\!=\!\! 1$ $\forall \ensuremath{a}_z\!\in\!\ensuremath{\mathcal{A}} \!\setminus\! \{\!\ensuremath{a}_d\!\}$, then $w_{\ensuremath{a}_z}\!\!\!=\!0$; and if $\ensuremath{b}_{\ensuremath{a}_d}\!\!<\!\!1$, then $w_{\ensuremath{a}_d}\!\!>\!\!0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$ is weighted at 100\% and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at 0\% so $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$. } \end{proof} This proposition demonstrates how we will disregard agents with total inaccuracy, even in \FT{the} extreme case where we allow one (more accurate) agent to dictate the forecast.
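Proposition \ref{prop:dictatorship} can be checked numerically against Def.~\ref{def:group} with a short sketch (Python; the function name is ours):

```python
def group_forecast(forecasts, brier_scores):
    """Weighted arithmetic mean of the agents' forecasts, with inverted
    Brier scores as weights; 0 when all weights vanish (Def. of F^g)."""
    weights = [1.0 - b for b in brier_scores]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * f for w, f in zip(weights, forecasts)) / total
```

Any agent with Brier score 1 receives weight 0, so when exactly one agent has a score below 1, the weighted mean collapses to that agent's forecast, as the proposition states.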
\begin{example} \BIn{In the running example, if \AX{alice, bob and charlie have Brier scores of 0.5, 1 and 1, resp., bob's and charlie's forecasts have} no impact on $\ensuremath{\Forecast^g}$, whilst \AX{alice's} forecast becomes the group forecast $\ensuremath{\Forecast^g}$.} \end{example} A group of agents with a forecasting record of \emph{some} accuracy exercises \emph{pure oligarchy} over $\ensuremath{\Forecast^g}$ if the rest of the \AX{agents} all have a record of total inaccuracy. \begin{proposition}\label{oligarchytotalprop} Let $\ensuremath{\mathcal{A}} = \ensuremath{\mathcal{A}}_o \cup \ensuremath{\mathcal{A}}_z$ where $\ensuremath{\mathcal{A}}_o \cap \ensuremath{\mathcal{A}}_z = \emptyset$, $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o \in \ensuremath{\mathcal{A}}_o$ and $\ensuremath{b}_{\ensuremath{a}_z}=1$ $\forall \ensuremath{a}_z \in \ensuremath{\mathcal{A}}_z$. Then, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $>0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$. \end{proposition} \begin{proof} \AX{ By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} = 1$ $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}}_z$, then $w_{\ensuremath{a}_z}=0$; and if $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o\in\ensuremath{\mathcal{A}}_o$, then $w_{\ensuremath{a}_o}>0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $> 0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$. } \end{proof} This proposition extends the behaviour from Proposition \ref{prop:dictatorship} to the (more desirable) case where fewer agents have a record of total inaccuracy.
\begin{example} \BIn{In the running example, if \AX{alice, bob and charlie have Brier scores of 1, 0.2 and 0.6, resp., alice's forecast} has no impact on $\ensuremath{\Forecast^g}$, whilst \AX{bob and charlie's} aggregated forecast becomes the group forecast $\ensuremath{\Forecast^g}$.} \end{example} \section{Evaluation}\label{sec:experiments} \BI{We conducted an experiment using a dataset obtained from the `Superforecasting' project, Good Judgment Inc \cite{GJInc}, to simulate four past forecasting debates in FAFs. This dataset contained 1770 datapoints (698 `forecasts' and 1072 `comments') posted by 242 anonymised users with a range of expertise. The original debates had occurred on the publicly available group forecasting platform, the Good Judgment Open (GJO)\footnote{https://www.gjopen.com/}, providing a suitable baseline against which to compare FAFs' accuracy.} \BI{For the experiment, we used a prototype implementation of FAFs in the form of the publicly available web platform called \emph{Arg\&Forecast} (see \cite{Irwin2022} for an introduction to the platform and an additional human experiment with FAFs). Python's Gensim topic modelling library \cite{rehurek2011gensim} was used to separate the datapoints for each debate into contextual-temporal groups that could form update frameworks.} In each update framework the proposal forecast was set to the mean average of forecasts made in the update framework window and each argument appeared only once. Gensim was further used to simulate voting, matching users to specific arguments they (dis)approved of.
Some 4,700 votes \AX{were then} generated with a three-valued system (where votes were taken from \{0,0.5,1\}) to ensure consistency: if a user voiced approval for an argument in the debate time window, their vote for the corresponding argument(s) was set to 1; disapproval for an argument led to a vote of 0, and (in the most common case) if a user did not mention an argument at all, their vote for the corresponding argument(s) defaulted to 0.5. With the views of all participating users wrt the proposal argument encoded in each update framework's votes, forecasts could then be simulated. If a forecast was irrational, violating any of the three constraints in Def.~\ref{def:irrationality} (referred to \AX{in the following} as \emph{increase}, \emph{decrease} and \emph{scale}, resp.), it was blocked and, to mimic real life use, an automatic `follow up' forecast was made. The `follow up' forecast would be the closest possible prediction (to their original choice) a user could make whilst remaining `rational'. \BI{Note that evaluation of the aggregation function described in \AX{§}\ref{subsec:aggregation} was outside this experiment, since the past forecasting accuracy of the dataset's 242 anonymised users was unavailable. Instead, we used \AX{the} mean average whilst adopting the GJO's method for scoring the accuracy of a user and/or group over the lifetime of the question \cite{roesch_2015}. This meant calculating a daily forecast and daily Brier score for each user, for every day of the question. After users made their first rational forecast, that forecast became their `daily forecast' until it was updated with a new forecast. 
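The GJO-style daily scoring just described can be sketched as follows. This is a minimal sketch with illustrative names, assuming the single-outcome Brier convention $(f-o)^2$, which matches the $[0,1]$ score range reported in the results; the GJO's exact scoring rule is not reproduced here.

```python
def daily_brier(forecasts_by_day, outcome, num_days):
    """Daily Brier scores over a question's lifetime: a user's first
    (rational) forecast becomes their 'daily forecast' until updated;
    days before the first forecast are skipped. Brier convention
    assumed: (f - outcome)^2, with outcome 1 (true) or 0 (false)."""
    scores, current = [], None
    for day in range(num_days):
        current = forecasts_by_day.get(day, current)  # carry last forecast forward
        if current is not None:
            scores.append((current - outcome) ** 2)
    return scores

# e.g. forecasts of 0.2 on day 0 and 0.6 on day 2; question resolved true
scores = daily_brier({0: 0.2, 2: 0.6}, outcome=1, num_days=4)
lifetime = sum(scores) / len(scores)
```

Averaging these daily scores per user (or over the aggregated group forecast) gives the lifetime Brier scores compared in the tables that follow.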
Average and range of daily Brier scores allowed reliable comparison between (individual and aggregated) performance of the GJO versus the FAF implementation.} \begin{table}[t] \begin{tabular}{@{}llllll@{}} \toprule Q & Group $\ensuremath{b}$ & $min(\ensuremath{b})$ & $max(\ensuremath{b})$ \\ \midrule Q1 & 0.1013 (0.1187) & 0.0214 (0) & 0.4054 (1) \\ Q2 & 0.216 (0.1741) & 0 (0) & 0.3853 (1) \\ Q3 & 0.01206 (0.0227) & 0.0003 (0) & 0.0942 (0.8281) \\ Q4 & 0.5263 (0.5518) & 0 (0) & 0.71 (1) \\ \midrule \textbf{All} & \textbf{0.2039 (0.217)} & \textbf{0 (0)} & \textbf{1 (1)} \\ \bottomrule \end{tabular} \caption{The accuracy of the platform group versus control, where \AX{`}Group $\ensuremath{b}$\AX{'} is the aggregated (mean) Brier score, `$min(\ensuremath{b})$' is the lowest individual Brier score and `$max(\ensuremath{b})$' is the highest individual Brier score. Q1-Q4 indicate the four simulated debates.} \label{accuracyExp1} \end{table} \begin{table}[t] \begin{tabular}{llllll} \hline \multirow{2}{*}{Q} & \multirow{2}{*}{$\overline{\ensuremath{\mathcal{C}}}$} & \multirow{2}{*}{Forecasts} & \multicolumn{3}{c}{Irrational Forecasts} \\ \cline{4-6} & & & \multicolumn{1}{c}{\emph{Increase} } \!\!\!\! & \multicolumn{1}{c}{\emph{Decrease} } \!\!\!\! & \multicolumn{1}{c}{\emph{Scale} }\!\! \!\! 
\\ \hline Q1 & -0.0418 & 366 & 63 & 101 & 170 \\ Q2 & 0.1827 & 84 & 11 & 15 & 34 \\ Q3 & -0.4393 & 164 & 53 & 0 & 86 \\ Q4 & 0.3664 & 84 & 4 & 19 & 15 \\ \hline All & -0.0891 & 698 & 131 & 135 & 305 \\ \hline \end{tabular} \caption{Auxiliary results from \FT{the experiment}, where $\overline{\ensuremath{\mathcal{C}}}$ is the average confidence score, `Forecasts' is number of forecasts made in each question and `Irrational Forecasts' the number in each question which violated each constraint in §\ref{subsec:rationality}.} \label{exp1auxinfo} \end{table} \paragraph{Results.} As Table \ref{accuracyExp1} demonstrates, simulating forecasting debates from GJO in \emph{Arg\&Forecast} led to predictive accuracy improvements in three of the four debates. \BIn{This is reflected in these debates by a substantial reduction in Brier scores versus control.} The greatest accuracy improvement in absolute terms was in Q4, which saw a Brier score decrease of 0.0255. In relative terms, Brier score decreases ranged from 5\% (Q4) to 47\% (Q3). \BIn{The average Brier score decrease was 33\%, representing a significant improvement in forecasting accuracy across the board}. \BIn{Table \ref{exp1auxinfo} demonstrates how \AX{our} rationality constraints drove forward this improvement}. 82\% of forecasts made across the four debates were classified as irrational \BIn{and subsequently moderated with a rational `follow up' forecast}. Notably, there were more \emph{irrational scale} forecasts than \emph{irrational increase} and \emph{irrational decrease} forecasts combined. These results demonstrate how argumentation-based rationality constraints can play an active role in facilitating higher forecasting accuracy, signalling the early promise of FAFs. \section{Conclusions}\label{sec:conclusions} We have introduced the Forecasting Argumentation Framework (FAF), a multi-agent argumentation framework which supports forecasting debates and probability estimates. 
FAFs are composite argumentation frameworks, comprising multiple non-concurrent update frameworks which themselves depend on three new argument types and a novel definition of rationality for the forecasting context. Our theoretical and empirical evaluation demonstrates the potential of FAFs, namely in increasing forecasting accuracy, holding intuitive properties, identifying irrational behaviour and driving higher engagement with the forecasting question (more arguments and responses, and more forecasts in the user study). These strengths align with requirements set out by previous research in the field of judgmental forecasting. There \AX{is} a multitude of possible directions for future work. First, FAFs are equipped to deal only with two-valued outcomes but, given the prevalence of forecasting issues with multi-valued outcomes (e.g. `Who will win the next UK election?'), expanding their capability would add value. Second, further work may focus on the rationality constraints, e.g. by introducing additional parameters to adjust their strictness, or \AX{by implementing} alternative interpretations of rationality. Third, future work could explore constraining agents' argumentation. This could involve using past Brier scores to limit the quantity or strength of agents' arguments and also to give them greater leeway wrt the rationality constraints. \FTn{Fourth, our method relies upon acyclic graphs: we believe that they are intuitive for users and note that all Good Judgment Open debates were acyclic; nonetheless, the inclusion of cyclic relations (e.g. to allow \AX{con} arguments that attack each other) could expand the scope of the argumentative reasoning \AX{in FAFs}.} Finally, there is an immediate need for larger scale human experiments. \newpage \section*{Acknowledgements} The authors would like to thank Prof. Anthony Hunter for his helpful contributions to discussions in the build up to this work. \BIn{Special thanks, in addition, go to Prof. Philip E.
Tetlock and the Good Judgment Project team for their warm cooperation and for providing datasets for the experiments.} \AX{Finally, the authors would like to thank the anonymous reviewers and meta-reviewer for their suggestions, which led to a significantly improved paper.} \bibliographystyle{kr} \section{Introduction}\label{sec:intro} Historically, humans have performed inconsistently in judgemental forecasting \cite{Makridakis2010,TetlockExp2017}, which incorporates subjective opinion and probability estimates into predictions \cite{Lawrence2006}. Yet, human judgement remains essential in cases where pure statistical methods are not applicable, e.g. where historical data alone is insufficient or for one-off, more `unknowable' events \cite{Petropoulos2016,Arvan2019,deBaets2020}. Judgemental forecasting is widely relied upon for decision-making \cite{Nikolopoulos2021}, in myriad fields from epidemiology to national security \cite{Nikolopoulos2015,Litsiou2019}. Effective tools to help humans improve their predictive capabilities thus have enormous potential for impact. Two recent global events -- the COVID-19 pandemic and the US withdrawal from Afghanistan -- underscore this by highlighting the human and financial cost of predictive deficiency. A multi-purpose system which could improve our ability to predict the incidence and impact of events by as little as 5\% could save millions of lives and be worth trillions of dollars per year \cite{TetlockGard2016}. Research on judgemental forecasting (see \cite{Lawrence2006,Zellner2021} for overviews), including the recent\AX{,} groundbreaking `Superforecasting Experiment' \cite{TetlockGard2016}, is instructive in establishing the desired properties for systems for supporting forecasting.
In addition to reaffirming the importance of fine-grained probabilistic reasoning \cite{Mellers2015}, this literature points to the benefits of some group techniques versus solo forecasting \cite{Landeta2011,Tetlock2014art}, of synthesising qualitative and quantitative information \cite{Lawrence2006}, of combating agents' irrationality \cite{Chang2016} and of high agent engagement with the forecasting challenge, e.g. robust debating \cite{Landeta2011} and frequent prediction updates \cite{Mellers2015}. Meanwhile, \emph{computational argumentation} (see \cite{AImagazine17,handbook} for recent overviews) is a field of AI which involves reasoning with uncertainty and resolving conflicting information, e.g. in natural language debates. As such, it is an ideal candidate for aggregating the broad, polymorphous set of information involved in judgemental group forecasting. An extensive and growing literature is based on various argumentation frameworks -- rule-based systems for aggregating, representing and evaluating sets of arguments, such as those applied in the contexts of \emph{scheduling} \cite{Cyras_19}, \emph{fact checking} \cite{Kotonya_20} or in various instances of \emph{explainable AI} \cite{Cyras_21}. Subsets of the requirements for forecasting systems are addressed by individual formalisms, e.g. \emph{probabilistic argumentation} \AX{\cite{Dung2010,Thimm2012,Hunter2013,Fazzinga2018}} may effectively represent and analyse uncertain arguments about the future. However, we posit that a purpose-built argumentation framework for forecasting is essential to effectively utilise argumentation's reasoning capabilities in this context. \begin{figure*} \includegraphics[width=\textwidth]{images/FAF_diagram.png} \caption{The step-by-step process of a FAF over its lifetime.} \label{fig:FAFdiag} \end{figure*} In this paper, we attempt to cross-fertilise these two as of yet unconnected academic areas. 
We draw from forecasting literature to inform the design of a new computational argumentation approach: \emph{Forecasting Argumentation Frameworks} (FAFs). FAFs empower (human and artificial) agents to structure debates in real time and to deliver argumentation-based forecasting. They offer an approach in the spirit of \emph{deliberative democracy} \cite{Bessette1980} to respond to a forecasting problem over time. The steps which underpin FAFs are depicted in Figure \ref{fig:FAFdiag} (referenced throughout) and can be described in simple terms \FT{as follows}: a FAF is initialised with a time limit \FT{(for the overall forecasting process and for each iteration therein)} and a pre-agreed `base-rate' forecast $\ensuremath{\mathcal{F}}$ (Stage 1), e.g. based on historical data. \FT{Then,} the forecast is revised by one or more (non-concurrent) debates, \BI{in the form of `update frameworks' (Stage 2)}, opened and resolved by participating agents \FT{(}until \FT{the} specified time limit is reached\FT{)}. Each update framework begins with a proposed revision to the current forecast (Stage 2a), and proceeds with a cycle of argumentation (Stage 2b) about the proposed forecast, voting on said argumentation and forecasting. Forecasts deemed `irrational' with a view to agents' argumentation and voting are blocked. Finally, the rational forecasts are aggregated and the result replaces the current group forecast (Stage 2c). This process may be repeated over time \BI{in an indefinite number of update frameworks} (thus continually \BI{revising} the group forecast) until the \FT{(overall)} time limit is reached. The composite nature of this process enables the appraisal of new information relevant to the forecasting question as and when it arrives. Rather than confronting an unbounded forecasting question with a diffuse set of possible debates open at once, all agents concentrate their argumentation on a single topic (a proposal) at any given time. 
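The staged process just described can be sketched as a loop. The helper functions here are hypothetical stand-ins for the rationality and aggregation machinery defined later in the paper, and the mean-based aggregation is an illustrative assumption.

```python
def run_faf(base_rate, proposals, total_rounds):
    """Illustrative FAF lifecycle (cf. Figure FAFdiag).
    Stage 1: initialise with a pre-agreed base-rate forecast.
    Stage 2: one (non-concurrent) update framework per round."""
    group_forecast = base_rate                                # Stage 1
    for round_no in range(total_rounds):
        proposal = proposals[round_no]                        # Stage 2a: proposed revision
        forecasts = collect_forecasts(proposal)               # Stage 2b: argue, vote, forecast
        rational = [f for f in forecasts if is_rational(f)]   # irrational forecasts blocked
        if rational:                                          # Stage 2c: aggregate, replace
            group_forecast = sum(rational) / len(rational)
    return group_forecast

def collect_forecasts(proposal):
    # hypothetical: agents' forecasts cluster around the proposed forecast
    return [proposal - 0.05, proposal, proposal + 0.05]

def is_rational(f):
    # hypothetical placeholder for the rationality constraints defined later
    return 0.0 <= f <= 1.0
```

For instance, starting from a base rate of 0.15, a single update framework whose proposal is 0.75 replaces the group forecast with the aggregate of the rational responses around that proposal.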
After giving the necessary background on forecasting and argumentation (§\ref{sec:background}), we formalise our \FT{update} framework\FT{s for Step 2a} (§\ref{sec:fw}). We then give \FT{our} notion of rationality \FT{(Step 2b)}, along with \FT{our} new method for \FT{aggregating rational forecasts (Step 2c)} from a group of agents (§\ref{sec:forecasting}) \FT{and FAFs overall}. We explore the underlying properties of \FT{FAFs} (§\ref{sec:props}), before describing \FT{\AX{an experiment} with \FT{a prototype implementing} our approach} (§\ref{sec:experiments}). Finally, we conclude and suggest potentially fruitful avenues for future work (§\ref{sec:conclusions}). \section{Background}\label{sec:background} \subsection{Forecasting} Studies on the efficacy of judgemental forecasting have shown mixed results \cite{Makridakis2010,TetlockExp2017,Goodwin2019}. Limitations of the judgemental approach are a result of well-documented cognitive biases \cite{Kahneman2012}, irrationalities in human probabilistic reasoning which lead to distortion of forecasts. Manifold methodologies have been explored to improve judgemental forecasting accuracy, with varying success \cite{Lawrence2006}. These methodologies include, but are not limited to, prediction intervals \cite{Lawrence1989}, decomposition \cite{MacGregorDonaldG1994Jdwd}, structured analogies \cite{Green2007,Nikolopoulos2015} and unaided judgement \cite{Litsiou2019}. Various group forecasting techniques have also been explored \cite{Linstone1975,Delbecq1986,Landeta2011}, although the risks of groupthink \cite{McNees1987} and the importance of maintaining the independence of each group member's individual forecast are well established \cite{Armstrong2001}.
Recent advances in the field have been led by Tetlock and Mellers' superforecasting experiment \cite{TetlockGard2016}, which leveraged \AX{geopolitical} forecasting tournaments and a base of 5000 volunteer forecasters to identify individuals with consistently exceptional accuracy (top 2\%). The experiment\AR{'s} findings underline the effectiveness of group forecasting orientated around debating \cite{Tetlock2014art}, and demonstrate a specific cognitive-intellectual approach conducive to forecasting \cite{Mellers20151,Mellers2015}, but stop short of suggesting a concrete universal methodology for higher accuracy. Instead, Tetlock draws on his own work and previous literature to crystallise a broad set of methodological principles by which superforecasters abide \cite[pg.144]{TetlockGard2016}: \begin{itemize} \item \emph{Pragmatic}: not wedded to any idea or agenda; \item \emph{Analytical:} capable of stepping back from the tip-of-your-nose perspective and considering other views; \item \emph{Dragonfly-eyed:} value diverse views and synthesise them into their own; \item \emph{Probabilistic:} judge using many grades of maybe; \item \emph{Thoughtful updaters:} when facts change, they change their minds; \item \emph{Good intuitive psychologists:} aware of the value of checking thinking for cognitive and emotional biases. \end{itemize} Research subsequent to the superforecasting experiment has explored optimal forecasting tournament preparation \cite{penn_global_2021,Katsagounos2021} and extended Tetlock and Mellers' approach to answer broader, more time-distant questions \cite{georgetown}. It should be noted that there have been no recent advances on computational tool\AX{kits} for the field similar to that proposed in this paper.
\subsection{Computational Argumentation} We posit that existing argumentation formalisms are not well suited to the aforementioned future-based arguments, which are necessarily semantically and structurally different from arguments about present or past concerns. Specifically, forecasting arguments are inherently probabilistic and must deal with the passage of time and its implications for the outcomes at hand. Further, several other important characteristics can be drawn from the forecasting literature which render current argumentation formalisms unsuitable, e.g. the paramountcy of dealing with bias (in data and in cognition), forming granular conclusions, fostering group debate and the co-occurrence of qualitative and quantitative arguing. Nonetheless, several of these characteristics have been previously explored in argumentation, and our formalisation draws from several existing approaches. First and foremost, it draws in spirit from abstract argumentation frameworks (AAFs) \cite{Dung1995}, in that the arguments' inner contents are ignored and the focus is on the relationships between arguments. However, we consider arguments of different types and \AX{an additional relation of} support (pro), \AX{rather than} attack (con) alone as in \cite{Dung1995}. Past work has also introduced probabilistic constraints into argumentation frameworks.
{Probabilistic AAFs} (prAAFs) propose two divergent ways for modelling uncertainty in abstract argumentation using probabilities - the constellation approach \cite{Dung2010,Li2012} and the epistemic approach \cite{Hunter2013,Hunter2014,Hunter2020}. These formalisations use probability as a means to assess uncertainty over the validity of arguments (epistemic) or graph topology (constellation), but do not enable reasoning \emph{with} or \emph{about} probability, which is fundamental in forecasting. In exploring temporality, \cite{Cobo2010} augment AAFs by providing each argument with a limited lifetime. Temporal constraints have been extended in \cite{Cobo2012} and \cite{Baron2014}. Elsewhere, \cite{Rago2017} have used argumentation to model irrationality or bias in agents. Finally, a wide range of gradual evaluation methods have gone beyond traditional qualitative semantics by measuring arguments' acceptability on a scale (normally [0,1]) \cite{Leite2011,Evripidou2012,Amgoud2017,Amgoud2018,Amgoud2016}. Many of these approaches have been unified as Quantitative Bipolar Argumentation Frameworks (QBAFs) in \cite{Baroni2018}. Amongst existing approaches, of special relevance in this paper are Quantitative Argumentation Debate (QuAD) frameworks \cite{Baroni2015}, i.e. 
5-tuples $\langle\mathcal{X}^a, \mathcal{X}^c, \mathcal{X}^p, \mathcal{R}, \ensuremath{\mathcal{\tau}}\rangle$ where $\mathcal{X}^a$ is a finite set of \emph{answer} arguments (to implicit \emph{issues}); $\mathcal{X}^c$ is a finite set of \emph{con} arguments; $\mathcal{X}^p$ is a finite set of \emph{pro} arguments; $\mathcal{X}^a$, $\mathcal{X}^c$ and $\mathcal{X}^p$ are pairwise disjoint; $\mathcal{R} \subseteq (\mathcal{X}^c \cup \mathcal{X}^p) \times (\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p)$ is an acyclic binary relation; $\ensuremath{\mathcal{\tau}} : (\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p) \rightarrow [0,1]$ is a total function: $\ensuremath{\mathcal{\tau}}(a)$ is the \emph{base score} of $a$. Here, attackers and supporters of arguments are determined by the pro and con arguments they are in relation with. Formally, for any $a\in\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p$, the set of \emph{con} (\emph{pro}\AX{)} \emph{arguments} of $a$ is $\mathcal{R}^-(a) = \{b\in\mathcal{X}^c|(b,a)\in\mathcal{R}\}$ ($\mathcal{R}^+(a) = \{b\in\mathcal{X}^p|(b,a)\in\mathcal{R}\}$, resp.). Arguments in QuAD frameworks are scored by the \emph{Discontinuity-Free QuAD} (DF-QuAD) algorithm \cite{Rago2016}, using the argument's intrinsic base score and the \emph{strengths} of its pro/con arguments. \FTn{Given that DF-QuAD is used to define our method (see Def.~\ref{def:conscore}), for completeness we define it formally here.} DF-QuAD's \emph{strength aggregation function} is defined as $\Sigma : [0,1]^* \rightarrow [0,1]$, where for $\mathcal{S} = (v_1,\ldots,v_n) \in [0,1]^*$: if $n=0$, $\Sigma(\mathcal{S}) = 0$; if $n=1$, $\Sigma(\mathcal{S}) = v_1$; if $n=2$, $\Sigma(\mathcal{S}) = f(v_1, v_2)$; if $n>2$, $\Sigma(\mathcal{S}) = f(\Sigma(v_1,\ldots,v_{n-1}), v_n)$; with the \emph{base function} $f : [0,1]\times [0,1] \rightarrow [0,1]$ defined, for $v_1, v_2 \in [0,1]$, as: $f(v_1,v_2)=v_1+(1-v_1)\cdot v_2 = v_1 + v_2 - v_1\cdot v_2$.
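A runnable sketch of DF-QuAD may be useful here: the base and aggregation functions just given, together with the combination and score functions that complete the algorithm (defined next). The dictionary-based graph representation is our illustrative assumption, not the paper's notation.

```python
from functools import reduce

def f(v1, v2):
    """DF-QuAD base function: f(v1, v2) = v1 + v2 - v1*v2."""
    return v1 + v2 - v1 * v2

def aggregate(strengths):
    """Strength aggregation Sigma; since f(0, v) = v, a fold from 0
    reproduces the case split (n = 0, 1, 2, > 2) in the definition."""
    return reduce(f, strengths, 0.0)

def combine(v0, v_minus, v_plus):
    """Combination function c on base score v0 and the aggregated
    con (v-) and pro (v+) strengths."""
    if v_minus >= v_plus:
        return v0 - v0 * abs(v_plus - v_minus)
    return v0 + (1 - v0) * abs(v_plus - v_minus)

def score(x, base, cons, pros):
    """Score function sigma, recursing over con/pro descendants.
    `base` maps argument -> base score; `cons`/`pros` map an
    argument to its con/pro arguments (R^- / R^+)."""
    v_minus = aggregate([score(b, base, cons, pros) for b in cons.get(x, [])])
    v_plus = aggregate([score(b, base, cons, pros) for b in pros.get(x, [])])
    return combine(base[x], v_minus, v_plus)
```

For example, an argument with base score 0.5, one con of strength 0.5 and one pro of strength 0.8 scores $0.5 + 0.5\cdot|0.8-0.5| = 0.65$.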
After separate aggregation of the argument's pro/con descendants, the combination function $c : [0,1]\times [0,1]\times [0,1]\rightarrow [0,1]$ combines $v^-$ and $v^+$ with the argument's base score ($v^0$): $c(v^0,v^-,v^+)=v^0-v^0\cdot\mid v^+ - v^-\mid$ if $v^-\geq v^+$, and $c(v^0,v^-,v^+)=v^0+(1-v^0)\cdot\mid v^+ - v^-\mid$ if $v^-< v^+$. The inputs for the combination function are provided by the \emph{score function}, $\ensuremath{\mathcal{\sigma}} : \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p\rightarrow [0,1]$, which gives the argument's strength, as follows: for any $\ensuremath{x} \in \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p$: $\ensuremath{\mathcal{\sigma}}(\ensuremath{x}) = c(\ensuremath{\mathcal{\tau}}(\ensuremath{x}),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^+(\ensuremath{x}))))$ where if $(\ensuremath{x}_1,\ldots,\ensuremath{x}_n)$ is an arbitrary permutation of the ($n \geq 0$) con arguments in $\mathcal{R}^-(\ensuremath{x})$, $\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))=(\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_1),\ldots,\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_n))$ (similarly for pro arguments). Note that the DF-QuAD notion of $\ensuremath{\mathcal{\sigma}}$ can be applied to any argumentation framework where arguments are equipped with base scores and pro/con arguments. We will do so later, for our novel formalism. \section{Update \AX{F}rameworks}\label{sec:fw} We begin by defining the individual components of our frameworks, starting with the fundamental notion of a \emph{forecast}. \FT{This} is a probability estimate for the positive outcome of a given (binary) question. \begin{definition} A \emph{forecast} $\ensuremath{\mathcal{F}}$ is the probability $P(\ensuremath{\mathcal{Q}}=true) \in [0,1]$ for a given \emph{forecasting question} $\ensuremath{\mathcal{Q}}$.
\end{definition} \begin{example} \label{FAFEx} Consider the forecasting question $\ensuremath{\mathcal{Q}}$: \emph{`Will the Tokyo \AX{2020 Summer} Olympics be cancelled/postponed to another year?'}. \AX{Here, the $true$ outcome amounts to the Olympics being cancelled/postponed (and $false$ to it taking place in 2020 as planned).} Then, a forecast $\ensuremath{\mathcal{F}}$ may be $P(\ensuremath{\mathcal{Q}}=true)= 0.15$, which amounts to a 15\% probability of the Olympics \BIn{being cancelled/postponed}. \BI{Note that $\ensuremath{\mathcal{F}}$ may have been introduced as part of an update framework (herein described), or as an initial base rate at the outset of a FAF (Stage 1 in Figure \ref{fig:FAFdiag}).} \end{example} In the remainder, we will often drop $\ensuremath{\mathcal{Q}}$, implicitly assuming it is given, and use $P(true)$ to stand for $P(\ensuremath{\mathcal{Q}}=true)$. In order to empower agents to reason about probabilities and thus support forecasting, we need, in addition to \emph{pro/con} arguments as in QuAD frameworks, two new argument types: \begin{itemize} \item \emph{proposal} arguments, each about some forecast (and its underlying forecasting question); each proposal argument $\ensuremath{\mathcal{P}}$ has a \emph{forecast} and, optionally, some supporting \emph{evidence}; and \item \emph{amendment} arguments, which \AX{suggest a modification to} some forecast\AX{'s probability} by increasing or decreasing it, and are accordingly separated into disjoint classes of \emph{increase} and \emph{decrease} (amendment) arguments.\footnote{Note that we decline to include a third type of amendment argument for arguing that $\ensuremath{\Forecast^\Proposal}$ is just right. This choice rests on the assumption that additional information always necessitates a change to $\ensuremath{\Forecast^\Proposal}$, however granular that change may be.
This does not restrict individual agents arguing about $\ensuremath{\Forecast^\Proposal}$ from casting $\ensuremath{\Forecast^\Proposal}$ as their own final forecast. However, rather than cohering their argumentation around $\ensuremath{\Forecast^\Proposal}$, which we hypothesise would lead to a high risk of groupthink~\cite{McNees1987}, agents are compelled to consider the impact of their amendment arguments on this more granular level.} \end{itemize} Note that amendment arguments are introduced specifically for arguing about a proposal argument, given that traditional QuAD pro/con arguments are of limited use when the goal is to judge the acceptability of a probability, and that in forecasting agents must not only decide \emph{if} they agree/disagree but also \emph{how} they agree/disagree (i.e. whether they believe the forecast is too low or too high considering, if available, the evidence). Amendment arguments, with their increase and decrease classes, provide for this. \begin{example}\label{ProposalExample} A proposal argument $\ensuremath{\mathcal{P}}$ in the Tokyo Olympics setting may comprise forecast: \emph{\AX{`}There is a 75\% chance that the Olympics will be cancelled/postponed to another year'}. It may also include evidence: \emph{`A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled. The Japanese government is likely to buckle under this pressure.'} This argument may aim to prompt updating the earlier forecast in Example~\ref{FAFEx}. A \emph{decrease} amendment argument may be $\ensuremath{\decarg_1}$: \emph{`The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'}. An \emph{increase} amendment argument may be $\ensuremath{\incarg_1}$: \emph{`Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation'}. \end{example} Intuitively, a proposal argument is the focal point of the argumentation.
It typically suggests a new forecast to replace prior forecasts, argued on the basis of some new evidence (as in the earlier example). We will see that proposal arguments remain immutable through each debate (update framework), which takes place via amendment arguments and standard pro/con arguments. Note that, wrt QuAD argument types, proposal arguments replace issues and amendment arguments replace answers, in that the former are driving the debates and the latter are the options up for debate. Note also that amendment arguments merely state a direction wrt $\ensuremath{\Forecast^\Proposal}$ and do not contain any more information, such as \emph{how much} to alter $\ensuremath{\Forecast^\Proposal}$ by. We will see that alteration can be determined by \emph{scoring} amendment arguments. Proposal and amendment arguments, alongside pro/con arguments, form part of our novel update frameworks \BI{(Stage 2 of Figure \ref{fig:FAFdiag})}, defined as follows: \begin{definition} An \emph{update framework} is a nonad ⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{\mathcal{A}}, \ensuremath{\mathcal{V}}, \ensuremath{\Forecast^\Agents}$⟩ such that: \item[$\bullet$] $\ensuremath{\mathcal{P}}$ is a single proposal argument with \emph{forecast} $\NewForecas $ and, optionally, \emph{evidence} $\mathcal{E}^\ensuremath{\mathcal{P}}$ for this forecast; \item[$\bullet$] $\ensuremath{\mathcal{X}} = \ensuremath{\AmmArgs^\uparrow} \cup \ensuremath{\AmmArgs^\downarrow}$ is a finite set of \emph{amendment arguments} composed of subsets $\ensuremath{\AmmArgs^\uparrow}$ of \emph{increase} arguments and $\ensuremath{\AmmArgs^\downarrow}$ of \emph{decrease} arguments; \item[$\bullet$] $\ensuremath{\AmmArgs^-}$ is a finite set of \emph{con} arguments; \item[$\bullet$] $\ensuremath{\AmmArgs^+}$ is a finite set of \emph{pro} arguments; \item[$\bullet$] the sets 
$\{\ensuremath{\mathcal{P}}\}$, $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^-}$ and $\ensuremath{\AmmArgs^+}$ are pairwise disjoint; \item[$\bullet$] $\ensuremath{\Rels^p}$ $\subseteq$ $\ensuremath{\mathcal{X}}$ $\times$ $\{\ensuremath{\mathcal{P}}\}$ is a directed acyclic binary relation between amendment arguments and the proposal argument (we may refer to this relation informally as `probabilistic'); \item[$\bullet$] $\ensuremath{\Rels}$ $\subseteq$ ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\times$ ($\ensuremath{\mathcal{X}}$ $\cup$ $\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) is a directed acyclic, binary relation \FTn{from} pro/con arguments \FTn{to} amendment\FTn{/pro/con arguments} (we may refer to this relation informally as `argumentative'); \item[$\bullet$] $\ensuremath{\mathcal{A}} = \{ \ensuremath{a}_1, \ldots, \ensuremath{a}_n \}$ is a finite set of \emph{agents} $(n >1$); \item[$\bullet$] $\ensuremath{\mathcal{V}}$ : $\ensuremath{\mathcal{A}}$ $\times$ ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\rightarrow$ [0, 1] is a total function such that $\ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$ is the \emph{vote} of agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$ on argument $\ensuremath{x} \in \ensuremath{\AmmArgs^-} \cup \ensuremath{\AmmArgs^+}$; with an abuse of notation, we let $\ensuremath{\mathcal{V}}_\ensuremath{a}$ : ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\rightarrow [0, 1]$ represent the votes of a \emph{single} agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$, e.g. 
$\ensuremath{\mathcal{V}}_\ensuremath{a}(\ensuremath{x}) = \ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$; \item[$\bullet$] $\ensuremath{\Forecast^\Agents} = \{ \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n} \}$ is such that $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}$, where $i \in \{ 1, \ldots, n \}$, is the \emph{forecast} of agent $\ensuremath{a}_i\in\ensuremath{\mathcal{A}}$. \end{definition} \BIn{Note that pro \AX{(}con\AX{)} arguments can be seen as supporting (attacking, resp.) other arguments via $\ensuremath{\mathcal{R}}$, as in the case of conventional QuAD frameworks~\cite{Baroni2015}.} \begin{example}\label{eg:tokyo} A possible update framework in our running setting may include $\ensuremath{\mathcal{P}}$ as in Example~\ref{ProposalExample} as well as (see Table \ref{table:tokyo}) $\ensuremath{\mathcal{X}}=\{\ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ensuremath{\incarg_1}\}$, $\ensuremath{\AmmArgs^-}=\{\ensuremath{\attarg_1}, \ensuremath{\attarg_2}, \ensuremath{\attarg_3}\}$, $\ensuremath{\AmmArgs^+}=\{\ensuremath{\supparg_1}, \ensuremath{\supparg_2}\}$, $\ensuremath{\Rels^p}=\{(\ensuremath{\decarg_1}, \ensuremath{\mathcal{P}})$, $(\ensuremath{\decarg_2}, \ensuremath{\mathcal{P}}), (\ensuremath{\incarg_1}, \ensuremath{\mathcal{P}})\}$, and $\ensuremath{\mathcal{R}}=\{(\ensuremath{\attarg_1}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_2}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_3}, \ensuremath{\incarg_1})$, $(\ensuremath{\supparg_1}, \ensuremath{\decarg_2})$, $(\ensuremath{\supparg_2}, \ensuremath{\incarg_1})\}$. Figure \ref{fig:tokyo} gives a graphical representation of these arguments and relations.
\BIn{Assuming $\ensuremath{\mathcal{A}}=\{alice, bob, charlie\}$, $\ensuremath{\mathcal{V}}$ may be such that $\AX{\ensuremath{\mathcal{V}}_{alice}(\ensuremath{\attarg_1})} = 1$, $\AX{\ensuremath{\mathcal{V}}_{bob}(\ensuremath{\supparg_1})} = 0$, and so on.} \end{example} \begin{table}[t] \begin{tabular}{p{0.7cm}p{6.7cm}} \hline & Content \\ \hline $\ensuremath{\mathcal{P}}$ & `A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled owing to COVID-19, and the Japanese government is likely to buckle under this pressure ($\mathcal{E}^\ensuremath{\mathcal{P}})$. Thus, there is a 75\% chance that the Olympics will be cancelled/postponed to another year' ($\ensuremath{\Forecast^\Proposal}$). \\ $\ensuremath{\decarg_1}$ & `The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'. \\ $\ensuremath{\decarg_2}$ & `This poll comes from an unreliable source.' \vspace{2mm}\\ $\ensuremath{\incarg_1}$ & `Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation.' \\ $\ensuremath{\attarg_1}$ & `The IOC is bluffing - people are dying, Japan is experiencing a strike. They will not go ahead with the games if there is a risk of mass death.' \\ $\ensuremath{\attarg_2}$ & `The Japanese government may renege on its commitment to the IOC, and use legislative or immigration levers to block the event.' \\ $\ensuremath{\attarg_3}$ & `Japan's government has sustained a high-approval rating in the last year and is strong enough to ward off opposition attacks.' \\ $\ensuremath{\supparg_1}$ & `This pollster has a track record of failure on Japanese domestic issues.' \\ $\ensuremath{\supparg_2}$ & `Rising anti-government sentiment on Japanese Twitter indicates that voters may be receptive to such arguments.' 
\\ \hline \end{tabular} \caption{Arguments in the update framework in Example~\ref{eg:tokyo}.} \label{table:tokyo} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{images/FAF1.png} \centering \caption{\BIn{A graphical representation of arguments and relations in the update framework from Example~\ref{eg:tokyo}. Nodes represent proposal ($\ensuremath{\mathcal{P}}$), increase ($\uparrow$), decrease ($\downarrow$), pro ($+$) and con ($-$) arguments, while \FTn{dashed/solid} edges indicate, resp., the $\ensuremath{\Rels^p}$/$\ensuremath{\mathcal{R}}$ relations. } } \label{fig:tokyo} \end{figure} Several considerations about update frameworks are in order. First, they represent `stratified' debates: graphically, they can be represented as trees with the proposal argument as root, amendment arguments as children of the root, and pro/con arguments forming the lower layers, as shown in Figure \ref{fig:tokyo}. This tree structure serves to focus argumentation towards the proposal (i.e. the probability and, if available, evidence) it puts forward. Second, we have chosen to impose a `structure' on proposal arguments, whereby their forecast is distinct from their (optional) evidence. Here the forecast has special primacy over the evidence, because forecasts are the vital reference point and the drivers of debates in FAFs. They are, accordingly, both mandatory and required to `stand out' to participating agents. In the spirit of abstract argumentation \cite{Dung1995}, we nonetheless treat all arguments, including proposal arguments, as `abstract', and focus on relations between them rather than between their components. In practice, therefore, amendment arguments may relate to a proposal argument's forecast but also, if present, to its evidence. We opt for this abstract view on the assumption that the flexibility of this approach better suits judgmental forecasting, which has a diversity of use cases (e.g.
including politics, economics and sport) where different argumentative approaches may be deployed (i.e. quantitative, qualitative, directly attacking amendment nodes or raising alternative POVs) and wherein forecasters may lack even a basic knowledge of argumentation. We leave the study of structured variants of our framework (e.g. see overview in \cite{structArg}) to future work: these may consider finer-grained representations of all arguments in terms of different components, and finer-grained notions of relations between components, rather than full arguments. Third, in update frameworks, voting is restricted to pro/con arguments. Preventing agents from voting directly on amendment arguments mitigates the risk of arbitrary judgements: agents cannot make off-the-cuff estimations but can only express their beliefs via (pro/con) argumentation, thus ensuring a more rigorous process of appraisal for the proposal and amendment arguments. Note that rather than facilitating voting on arguments using a two-valued perspective (i.e. positive/negative) or a three-valued perspective (i.e. positive/negative/neutral), $\ensuremath{\mathcal{V}}$ allows agents to cast more granular judgements of (pro/con) argument acceptability, the need for which has been highlighted in the literature \cite{Mellers2015}. Finally, although we envisage that arguments of all types are put forward by agents during debates, we do not capture this mapping in update frameworks. Thus, we do not capture who put forward which arguments, but instead only use votes to encode and understand agents' views. This enables more nuanced reasoning and full engagement on the part of agents with alternative viewpoints (i.e. an agent may freely argue both for and against a point before taking an explicit view with their voting). Such conditions are essential in a healthy forecasting debate \cite{Landeta2011,Mellers2015}.
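To make the nonad concrete, the following Python sketch collects the components of an update framework in a plain data structure. All identifiers are ours and purely illustrative (arguments are string labels, relations are sets of pairs); this is not part of the formal definition:

```python
from dataclasses import dataclass, field


@dataclass
class UpdateFramework:
    """Illustrative container mirroring the nonad of an update framework."""
    proposal_forecast: float                       # probability in P (F^P)
    proposal_evidence: str = ""                    # optional evidence E^P
    increase: frozenset = frozenset()              # X^up (increase arguments)
    decrease: frozenset = frozenset()              # X^down (decrease arguments)
    con: frozenset = frozenset()                   # A^- (con arguments)
    pro: frozenset = frozenset()                   # A^+ (pro arguments)
    rel_p: frozenset = frozenset()                 # R^p, subset of X x {P}
    rel: frozenset = frozenset()                   # R, pro/con -> X or pro/con
    votes: dict = field(default_factory=dict)      # V: (agent, arg) -> [0, 1]
    forecasts: dict = field(default_factory=dict)  # F^A: agent -> [0, 1]


# The update framework of Example eg:tokyo, with d/c/p labels for the
# decrease/con/pro arguments and i1 for the increase argument.
u = UpdateFramework(
    proposal_forecast=0.75,
    increase=frozenset({"i1"}), decrease=frozenset({"d1", "d2"}),
    con=frozenset({"c1", "c2", "c3"}), pro=frozenset({"p1", "p2"}),
    rel_p=frozenset({("d1", "P"), ("d2", "P"), ("i1", "P")}),
    rel=frozenset({("c1", "d1"), ("c2", "d1"), ("c3", "i1"),
                   ("p1", "d2"), ("p2", "i1")}),
)
```

Votes and forecasts would then be filled in per agent, e.g. `u.votes[("alice", "c1")] = 1`.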
In the remainder of this paper, with an abuse of notation, we often use $\ensuremath{\Forecast^\Proposal}$ to denote, specifically, the probability advocated in $\ensuremath{\Forecast^\Proposal}$ (e.g. 0.75 in Example \ref{ProposalExample}). \section{Aggregating Rational Forecasts }\label{sec:forecasting} In this section we formally introduce (in \AX{§}\ref{subsec:rationality}) our notion of rationality and discuss how it may be used to identify\BI{, and subsequently `block',} undesirable behaviour in forecasters. We then define (in \AX{§}\ref{subsec:aggregation}) a method for calculating a revised forecast \BI{(Stage 2c of Figure \ref{fig:FAFdiag})}, which aggregates the views of all agents in the update framework, whilst optimising their overall forecasting accuracy. \subsection{Rationality}\label{subsec:rationality} Characterising an agent’s view as irrational offers opportunities to refine the accuracy of their forecast (and thus the overall aggregated group forecast). Our definition of rationality is inspired by, but goes beyond, that of QuAD-V \cite{Rago2017}, which was introduced for the e-polling context. Whilst update frameworks eventually produce a single aggregated forecast on the basis of group deliberation, each agent is first evaluated for their rationality on an individual basis. Thus, as in QuAD-V, in order to define rationality for individual agents, we first reduce frameworks to \emph{delegate frameworks} for each agent, which are the restriction of update frameworks to a single agent. \begin{definition} A \emph{delegate framework} for an agent $\ensuremath{a}$ is $\ensuremath{u}_{\ensuremath{a}} =$ ⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{a}, \ensuremath{\mathcal{V}}_{\ensuremath{a}}, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩. 
\end{definition} Note that all arguments in an update framework are included in each agent's delegate framework, but only the agent's votes and forecast are carried over. Recognising the irrationality of an agent requires comparing the agent's forecast against (an aggregation of) their opinions on the amendment arguments and, by extension, on the proposal argument. To this end, we evaluate the different parts of the update framework as follows. We use the DF-QuAD algorithm \cite{Rago2016} to score each amendment argument for the agent, in the context of the pro/con arguments `linked' to it via $\ensuremath{\mathcal{R}}$ in the agent's delegate framework. We refer to the DF-QuAD score function as $\ensuremath{\mathcal{\sigma}}$. This requires a choice of base scores for amendment arguments as well as pro/con arguments. We assume the same base score $\ensuremath{\mathcal{\tau}}(\ensuremath{x})=0.5$ for all $\ensuremath{x} \in \ensuremath{\mathcal{X}}$; in contrast, the base score of a pro/con argument results from the votes it received from the agent, in the spirit of QuAD-V \cite{Rago2017}. The intuition behind assigning a neutral (0.5) base score to amendment arguments is that an agent's estimation of their strength from the outset would be susceptible to bias and inaccuracy.
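For concreteness, the sketch below shows the score function $\ensuremath{\mathcal{\sigma}}$ as we read DF-QuAD \cite{Rago2016}: attacker and supporter strengths are each aggregated by the probabilistic sum, and the base score is then moved towards 0 or 1 by the difference. Function and variable names are ours; base scores are supplied externally (0.5 for amendment arguments, vote-derived for pro/con arguments):

```python
def aggregate(scores):
    """Probabilistic sum: combined strength 1 - prod(1 - s_i); 0 if empty."""
    product = 1.0
    for s in scores:
        product *= 1.0 - s
    return 1.0 - product


def dfquad_score(arg, attackers, supporters, base):
    """Strength of `arg` in an acyclic graph, in the style of DF-QuAD.

    attackers/supporters map each argument to the (con/pro) arguments
    attacking/supporting it; base maps arguments to base scores."""
    va = aggregate(dfquad_score(a, attackers, supporters, base)
                   for a in attackers.get(arg, ()))
    vs = aggregate(dfquad_score(s, attackers, supporters, base)
                   for s in supporters.get(arg, ()))
    v0 = base[arg]
    if va > vs:                        # attackers dominate: move towards 0
        return v0 - v0 * (va - vs)
    if vs > va:                        # supporters dominate: move towards 1
        return v0 + (1.0 - v0) * (vs - va)
    return v0                          # balanced influence: keep base score
```

For instance, an amendment argument with base score 0.5 and a single maximally strong con argument scores 0, while a single maximally strong pro argument would raise it to 1.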
Once each amendment argument has been scored (using $\ensuremath{\mathcal{\sigma}}$) for the agent, we aggregate the scores of all amendment arguments (for the same agent) to calculate the agent's \emph{confidence score} in the proposal argument (which underpins our rationality constraints), by weighting the mean strength of the increase amendment arguments against that of the decrease amendment arguments: \begin{definition}\label{def:conscore} Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩, let $\ensuremath{\AmmArgs^\uparrow} = \{ \ensuremath{\incarg_1}, \ensuremath{\incarg_2}, \ldots , \ensuremath{\arg^\uparrow}_i \}$ and $\ensuremath{\AmmArgs^\downarrow} = \{ \ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ldots , \ensuremath{\arg^\downarrow}_j \}$. Then, $\ensuremath{a}$'s \emph{confidence score} is as follows: \begin{align} &\text{if } i\neq0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k) - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\ &\text{if } i\neq0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k); \nonumber \\ &\text{if } i=0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\ &\text{if } i=0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = 0.
\nonumber \end{align} \end{definition} Note that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) \in [-1,1]$, which denotes the overall views of the agent on the forecast $\ensuremath{\Forecast^\Proposal}$ (i.e. as to whether it should be \emph{increased} or \emph{decreased}, and by how much). A negative (positive) $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ indicates that an agent believes that $\ensuremath{\Forecast^\Proposal}$ should be amended down (up, resp.). The size of $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ reflects the degree of the agent's certainty in either direction. In turn, we can constrain an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ if it contradicts this belief as follows. \begin{definition}\label{def:irrationality} Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩, $\ensuremath{a}$'s forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ is \emph{strictly rational} (wrt $\ensuremath{u}_{\ensuremath{a}}$) iff: \begin{align} &\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} < \ensuremath{\Forecast^\Proposal}; \nonumber \\ &\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} > \ensuremath{\Forecast^\Proposal}; \nonumber \\ &\mid\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\mid \geq \frac{\mid\ensuremath{\Forecast^\Proposal} - \ensuremath{\Forecast^\Agents}_\ensuremath{a}\mid}{\ensuremath{\Forecast^\Proposal}}. \nonumber \end{align} \end{definition} Hereafter, we refer to forecasts which
violate the first two constraints as, resp., \emph{irrational increase} and \emph{irrational decrease} forecasts, and to forecasts which violate the final constraint as \emph{irrational scale} forecasts. This definition of rationality preserves the integrity of the group forecast in two ways. First, it prevents agents from forecasting against their beliefs: an agent cannot increase $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0$ and an agent cannot decrease $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0$. Second, it ensures that agents cannot make forecasts disproportionate to their confidence score: \emph{how far} an agent $\ensuremath{a}$ deviates from the proposed forecast $\ensuremath{\Forecast^\Proposal}$ is restricted by $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, since $\mid\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\mid$ must be greater than or equal to the relative change to $\ensuremath{\Forecast^\Proposal}$ denoted in the agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$. Note that the \emph{irrational scale} constraint deals with just one direction of proportionality (i.e. providing only a maximum threshold for $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$'s deviation from $\ensuremath{\Forecast^\Proposal}$, but no minimum threshold). Here, we avoid bidirectional proportionality on the grounds that such a constraint would impose an arbitrary notion of arguments' `impact' on agents. An agent may have a very high $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, indicating \FT{their} belief that $\ensuremath{\Forecast^\Proposal}$ is too low, but \AX{may}, we suggest, rationally choose to increase $\ensuremath{\Forecast^\Proposal}$ by only a small amount (e.g.
if, despite \FT{their} general agreement with the arguments, \FT{they} believe the overall issue at stake in $\ensuremath{\mathcal{P}}$ to be minor or low impact to the overall forecasting question). Our definition of rationality, which relies on notions of argument strength derived from DF-QuAD, thus informs but does not wholly dictate agents' forecasting, affording them considerable freedom. We leave alternative, stricter definitions of rationality, which may derive from probabilistic conceptions of argument strength, to future work. \begin{example} Consider our running Tokyo Olympics example, with the same arguments and relations from Example \ref{eg:tokyo} and an agent \BIn{$alice$} with a confidence score \BIn{$\ensuremath{\mathcal{C}}_{alice}(\ensuremath{\mathcal{P}}) = -0.5$}. From this we know that \BIn{$alice$} believes that the suggested $\ensuremath{\Forecast^\Proposal}$ in the proposal argument $\ensuremath{\mathcal{P}}$ should be decreased. Then, under our definition of rationality, \BIn{$alice$'s} forecast \BIn{$\ensuremath{\Forecast^\Agents}_{alice}$} is `rational' if it decreases $\ensuremath{\Forecast^\Proposal}$ by up to 50\%. \end{example} If an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ violates these rationality constraints then \BI{it is `blocked'} and the agent is prompted to return to the argumentation graph. From here, they may carry out one or more of the following actions to render their forecast rational: a. revise their forecast; b. revise their votes on arguments; c. add new arguments to the update framework (and vote on them). Whilst a) and b) occur on an agent-by-agent basis, confined to each delegate framework, c) affects the shared update framework and requires special consideration.
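Definitions \ref{def:conscore} and \ref{def:irrationality} translate directly into code. A minimal Python sketch (function names are ours; the amendment-argument strengths are assumed to come from the DF-QuAD scoring step $\ensuremath{\mathcal{\sigma}}$):

```python
def confidence_score(inc_strengths, dec_strengths):
    """C_a(P): mean strength of the increase arguments minus mean strength
    of the decrease arguments; an empty set contributes 0."""
    up = sum(inc_strengths) / len(inc_strengths) if inc_strengths else 0.0
    down = sum(dec_strengths) / len(dec_strengths) if dec_strengths else 0.0
    return up - down


def strictly_rational(conf, proposal_f, agent_f):
    """Check the three strict-rationality constraints on agent forecast
    agent_f, given confidence score conf and proposal forecast proposal_f."""
    if conf < 0 and not agent_f < proposal_f:
        return False                  # irrational increase
    if conf > 0 and not agent_f > proposal_f:
        return False                  # irrational decrease
    # irrational scale: |C_a(P)| must cover the relative change to F^P
    return abs(conf) >= abs(proposal_f - agent_f) / proposal_f
```

For the running example ($\ensuremath{\mathcal{C}}_{alice}(\ensuremath{\mathcal{P}}) = -0.5$, $\ensuremath{\Forecast^\Proposal} = 0.75$), the constraints admit any forecast in $[0.375, 0.75)$, matching the `up to 50\%' bound.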
Each time new \AX{arguments} are added to the shared graph, every agent must vote on \AX{them}, even if they have already made a rational forecast. In certain cases, after an agent has voted on a new argument, their previously rational forecast may become irrational. In this instance, the agent must resolve their irrationality via the steps above. In this way, the update framework can be refined on an iterative basis until the graph is no longer being modified and all agents' forecasts are rational. At this stage, the update framework has reached a stable state and the agents $\ensuremath{\mathcal{A}}$ are collectively rational: \begin{definition} Given an update framework $\ensuremath{u}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩, $\ensuremath{\mathcal{A}}$ is \emph{collectively rational} (wrt \emph{u}) iff $\forall \ensuremath{a} \in \ensuremath{\mathcal{A}}$, $\ensuremath{a}$ is individually rational (wrt the delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩). \end{definition} When $\ensuremath{\mathcal{A}}$ is collectively rational, the update framework $u$ becomes immutable and the aggregation (defined next) \AX{produces} a group forecast $\ensuremath{\Forecast^g}$, which becomes the new $\ensuremath{\mathcal{F}}$. \subsection{Aggregating Forecasts}\label{subsec:aggregation} After all the agents have made a rational forecast, an aggregation function is applied to produce one collective forecast.
One advantage of forecasting debates, vis-\`a-vis many other forms of debate, is that a ground truth always exists -- an event either happens or does not. This means that, over time and after enough FAF instantiations, data on the forecasting success of different agents can be amassed. In turn, the relative historical performance of forecasting agents can inform the aggregation of group forecasts. In update frameworks, a weighted aggregation function based on Brier Scoring \cite{Brier1950} is used, such that more accurate forecasting agents have greater influence over the final forecast. Brier Scores are a widely used criterion to measure the accuracy of probabilistic predictions, effectively gauging the distance between a forecaster's predictions and the outcomes once they are known, as follows. \begin{definition} \label{def:bscore} Given an agent $\ensuremath{a}$, a non-empty series of forecasts $\ensuremath{\Forecast^\Agents}_\ensuremath{a}(1), \ldots, \ensuremath{\Forecast^\Agents}_\ensuremath{a}(\ensuremath{\mathcal{N}}_{\ensuremath{a}})$ with corresponding actual outcomes $\ensuremath{\mathcal{O}}_1, \ldots,$ $\ensuremath{\mathcal{O}}_{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \in \{true, false\}$ (where $\ensuremath{\mathcal{N}}_{\ensuremath{a}}>0$ is the number of forecasts $\ensuremath{a}$ has made in a non-empty sequence of as many update frameworks), $\ensuremath{a}$'s Brier Score $\ensuremath{b}_{\ensuremath{a}} \in [0, 1]$ is as follows: \begin{align} \ensuremath{b}_{\ensuremath{a}} = \frac{1}{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \sum_{t=1}^{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} (\ensuremath{\Forecast^\Agents}_\ensuremath{a}(t) - val(\ensuremath{\mathcal{O}}_t))^2 \nonumber \end{align} where $val(\ensuremath{\mathcal{O}}_t)=1$ if $\ensuremath{\mathcal{O}}_t=true$, and 0 otherwise.
\end{definition} A Brier Score $\ensuremath{b}$ is effectively the mean squared error used to gauge forecasting accuracy, where a low $\ensuremath{b}$ indicates high accuracy and high $\ensuremath{b}$ indicates low accuracy. This can be used in the update framework's aggregation function via the weighted arithmetic mean as follows. \AX{E}ach Brier Score is inverted to ensure that more (less, resp.) accurate forecasters have higher (lower, resp.) weighted influence\AX{s} on $\ensuremath{\Forecast^g}$: \begin{definition}\label{def:group} Given a set of agents $\ensuremath{\mathcal{A}} = \{\ensuremath{a}_1, \ldots,\ensuremath{a}_n\}$, their corresponding set of Brier Scores $\ensuremath{b} = \{\ensuremath{b}_{\ensuremath{a}_1}, \ldots,\ensuremath{b}_{\ensuremath{a}_n}\}$ and their forecasts $\ensuremath{\Forecast^\Agents} = \{\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots,\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n}\}$, and letting, for $i \!\!\in\!\! \{ 1, \ldots, n\}$, $w_{i} \!\!=\!\! 1-\ensuremath{b}_{\ensuremath{a}_i}$, the \emph{group forecast} $\ensuremath{\Forecast^g}$ is as follows: \begin{align} &\text{if } \sum_{i=1}^{n}w_{i} \neq 0: & &\ensuremath{\Forecast^g} = \frac{\sum_{i=1}^{n}w_{i}\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}}{\sum_{i=1}^{n}w_{i}}; \nonumber \\ &\text{otherwise}: & &\ensuremath{\Forecast^g} = 0. 
\nonumber \end{align} \end{definition} This group forecast could be `activated' after a fixed number of debates (with the unweighted mean used prior), when sufficient data has been collected on the accuracy of participating agents, or after a single debate, in the context of our general \emph{Forecasting Argumentation Frameworks}: \begin{definition} A \emph{Forecasting Argumentation Framework} (FAF) is a triple ⟨$ \ensuremath{\mathcal{F}}, \ensuremath{\mathcal{U}}, \ensuremath{\mathcal{T}}$⟩ such that: \item[$\bullet$] $\ensuremath{\mathcal{F}}$ is a \emph{forecast}; \item[$\bullet$] $\ensuremath{\mathcal{U}}$ is a finite, non-empty sequence of update frameworks with \ensuremath{\mathcal{F}}\ the forecast of the proposal argument in the first update framework in the sequence\AR{;} the forecast of each subsequent update framework is the group forecast of the previous update framework's agents' forecasts; \item[$\bullet$] $\ensuremath{\mathcal{T}}$ is a preset time limit representing the lifetime of the FAF; \item[$\bullet$] each agent's forecast wrt the agent's delegate framework drawn from each update framework is strictly rational. \end{definition} \begin{example} \BIn{Consider our running Tokyo Olympics example: the overall FAF may be composed of $\ensuremath{\mathcal{F}} = 0.15$, update frameworks $\ensuremath{\mathcal{U}} = \{ u_1, u_2, u_3 \}$ and time limit $\ensuremath{\mathcal{T}}=14\ days$, where $u_3$ is the latest (and therefore the only open) update framework after, for example, four days.} \end{example} \AX{T}he superforecasting literature explores a range of forecast aggregation algorithms, e.g. extremizing algorithms \cite{Baron2014} and variations on logistic \AX{and} Fourier $L_2E$ regression \cite{Cross2018}, with considerable success. \AX{T}hese approaches \AX{aim} at ensuring that less certain \AX{or less} accurate forecasts have a lesser influence over the final aggregated forecast.
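The Brier Score of Def.~\ref{def:bscore} and the weighted aggregation of Def.~\ref{def:group} can be sketched as follows (Python, names ours):

```python
def brier_score(past_forecasts, outcomes):
    """b_a: mean squared error of past forecasts against binary outcomes."""
    n = len(past_forecasts)
    assert n == len(outcomes) and n > 0
    return sum((f - (1.0 if o else 0.0)) ** 2
               for f, o in zip(past_forecasts, outcomes)) / n


def group_forecast(agent_forecasts, agent_briers):
    """F^g: mean of agents' forecasts weighted by w_i = 1 - b_a_i,
    defaulting to 0 when all weights vanish."""
    weights = [1.0 - b for b in agent_briers]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * f for w, f in zip(weights, agent_forecasts)) / total
```

Note how an agent with a perfect record ($\ensuremath{b}_{\ensuremath{a}}=0$) receives weight 1, while a totally inaccurate agent ($\ensuremath{b}_{\ensuremath{a}}=1$) receives weight 0 and so no influence on $\ensuremath{\Forecast^g}$.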
We believe that FAFs apply a more intuitive algorithm \AX{since} much of the `work' needed to bypass inaccurate and erroneous forecasting is \AX{expedited} via argumentation. \section{Properties}\label{sec:props} We now undertake a theoretical analysis of FAFs by considering mathematical properties they satisfy. Note that the properties of the DF-QuAD algorithm (see \cite{Rago2016}) hold (for the amendment and pro/con arguments) here. For brevity, we focus on novel properties unique to FAFs which relate to our new argument types. These properties focus on aggregated group forecasts wrt a debate (update framework). They imply two broad and, we posit, desirable principles of \emph{balance} and \emph{unequal representation}. We assume for this section a generic update framework $\ensuremath{u} = $ ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩ with group forecast $\ensuremath{\Forecast^g}$. \paragraph{Balance.} The intuition for these properties is that differences between $\ensuremath{\Forecast^g}$ and $\ensuremath{\Forecast^\Proposal}$ correspond to imbalances between the \emph{increase} and \emph{decrease} amendment arguments. The first result states that $\ensuremath{\Forecast^g}$ only differs from $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\Forecast^\Proposal}$ is the dialectical target of amendment arguments. \begin{proposition} \label{prop:balance1} If $\ensuremath{\mathcal{X}}\!\!=\!\!\emptyset$ ($\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$), then $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Proposal}$.
\end{proposition} \begin{proof} \AX{If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!=\!0$ by Def.~\ref{def:conscore} and $\ensuremath{\Forecast^\Agents}_\ensuremath{a}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:group}.} \end{proof} \AX{T}his simple proposition conveys an important property for forecasting: for an agent to put forward a different forecast, amendment arguments must have been introduced. \begin{example} In the Olympics setting, the group of agents could only forecast higher or lower than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ after the addition of at least one of \AX{the} amendment arguments $\ensuremath{\decarg_1}$, $\ensuremath{\decarg_2}$ or $\ensuremath{\incarg_1}$. \end{example} In the absence of increase \FTn{(decrease)} amendment arguments, if there are decrease \FTn{(increase, resp.)} amendment arguments, then $\ensuremath{\Forecast^g}$ is not higher \FTn{(lower, resp.)} than $\ensuremath{\Forecast^\Proposal}$. \begin{proposition}\label{prop:balance2} If $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}=\emptyset$, then $\ensuremath{\Forecast^g} \leq\ensuremath{\Forecast^\Proposal}$. \FTn{\label{balance3prop} If $\ensuremath{\AmmArgs^\downarrow}=\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$, then $\ensuremath{\Forecast^g}\geq\ensuremath{\Forecast^\Proposal}$.} \end{proposition} \begin{proof} \AX{If $\ensuremath{\AmmArgs^\downarrow}\!\! \neq \!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\!
\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\leq\!0$ by Def.~\ref{def:conscore} and then $\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\leq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\leq\!\ensuremath{\Forecast^\Proposal}$. If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!\neq\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\geq\!0$ by Def.~\ref{def:conscore} and then $\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\geq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\geq\!\ensuremath{\Forecast^\Proposal}$.} \end{proof} This proposition demonstrates that, if a decrease \BIn{(increase)} amendment argument has an effect on the proposal argument, it can only be as its name implies. \begin{example} \BIn{In the Olympics setting, the agents could not forecast higher than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ if either of the decrease amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ \AX{had} been added, but the increase argument $\ensuremath{\incarg_1}$ \AX{had} not. Likewise, \AX{the} agents could not forecast lower than $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\incarg_1}$ \AX{had} been added, but \AX{neither} of $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ \AX{had}.} \end{example} If $\ensuremath{\Forecast^g}$ is lower \BIn{(higher)} than $\ensuremath{\Forecast^\Proposal}$, there is at least one decrease \BIn{(increase, resp.)} argument. \begin{proposition} \label{prop:balance4} If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$.
\BIn{If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$.} \end{proposition} \begin{proof} \AX{ If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}<\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})<0$. Then, irrespective of $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$. If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}>\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})>0$. Then, irrespective of \BIn{$\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$}. } \end{proof} We can see here that the only way an agent can decrease \BIn{(increase)} the forecast is \FT{by adding} decrease \BIn{(increase, resp.)} arguments, ensuring the debate is structured as \FT{intended}. \begin{example} \BIn{In the Olympics setting, the group of agents could only produce a group forecast $\ensuremath{\Forecast^g}$ lower than $\ensuremath{\Forecast^\Proposal}$ due to the presence of \emph{decrease} amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$. Likewise, the group of agents could only produce a $\ensuremath{\Forecast^g}$ higher than $\ensuremath{\Forecast^\Proposal}$ due to the presence of $\ensuremath{\incarg_1}$.} \end{example} \paragraph{Unequal representation.} FAFs exhibit instances of unequal representation in the final voting process.
In formulating the following properties, we distinguish between two forms of unequal representation. First, \emph{dictatorship}, where a single agent dictates $\ensuremath{\Forecast^g}$ with no input from other agents. Second, \emph{pure oligarchy}, where a group of agents dictates $\ensuremath{\Forecast^g}$ with no input from other agents outside the group. In the forecasting setting, these properties are desirable as they guarantee higher accuracy \AX{from} the group forecast $\ensuremath{\Forecast^g}$. An agent with a forecasting record of \emph{some} accuracy exercises \emph{dictatorship} over the group forecast $\ensuremath{\Forecast^g}$, if the rest of the participating \AX{agents} have a record of total inaccuracy. \begin{proposition}\label{prop:dictatorship} If $\ensuremath{a}_d\in\ensuremath{\mathcal{A}}$ has a Brier score $\ensuremath{b}_{\ensuremath{a}_d}<1$ and $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}} \setminus \{\ensuremath{a}_d\}$, $\ensuremath{b}_{\ensuremath{a}_z} = 1$, then $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$. \end{proposition} \begin{proof} \AX{ By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} \!\!\!=\!\! 1$ $\forall \ensuremath{a}_z\!\in\!\ensuremath{\mathcal{A}} \!\setminus\! \{\!\ensuremath{a}_d\!\}$, then $w_{\ensuremath{a}_z}\!\!\!=\!0$; and if $\ensuremath{b}_{\ensuremath{a}_d}\!\!<\!\!1$, then $w_{\ensuremath{a}_d}\!\!>\!\!0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$ is weighted at 100\% and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at 0\% so $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$. } \end{proof} This proposition demonstrates how we will disregard agents with total inaccuracy, even in \FT{the} extreme case where we allow one (more accurate) agent to dictate the forecast.
\begin{example} \BIn{In the running example, if \AX{alice, bob and charlie have Brier scores of 0.5, 1 and 1, resp., bob's and charlie's forecasts have} no impact on $\ensuremath{\Forecast^g}$, whilst \AX{alice's} forecast becomes the group forecast $\ensuremath{\Forecast^g}$.} \end{example} A group of agents with a forecasting record of \emph{some} accuracy exercises \emph{pure oligarchy} over $\ensuremath{\Forecast^g}$ if the rest of the \AX{agents} all have a record of total inaccuracy. \begin{proposition}\label{oligarchytotalprop} Let $\ensuremath{\mathcal{A}} = \ensuremath{\mathcal{A}}_o \cup \ensuremath{\mathcal{A}}_z$ where $\ensuremath{\mathcal{A}}_o \cap \ensuremath{\mathcal{A}}_z = \emptyset$, $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o \in \ensuremath{\mathcal{A}}_o$ and $\ensuremath{b}_{\ensuremath{a}_z}=1$ $\forall \ensuremath{a}_z \in \ensuremath{\mathcal{A}}_z$. Then, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $>0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$. \end{proposition} \begin{proof} \AX{ By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} = 1$ $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}}_z$, then $w_{\ensuremath{a}_z}=0$; and if $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o\in\ensuremath{\mathcal{A}}_o$, then $w_{\ensuremath{a}_o}>0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $> 0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$. } \end{proof} This proposition extends the behaviour from Proposition \ref{prop:dictatorship} to the (more desirable) case where fewer agents have a record of total inaccuracy.
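Propositions \ref{prop:dictatorship} and \ref{oligarchytotalprop} both follow from the zero-weighting of totally inaccurate agents. A minimal sketch in Python: the paper's exact weighting in Def.~\ref{def:group} is not reproduced here; as in the proofs, we only assume $w_a = 0$ iff $b_a = 1$, and take the hypothetical choice $w_a = 1 - b_a$ with the group forecast as the weighted mean.

```python
# Sketch of the dictatorship / pure-oligarchy behaviour of the group forecast.
# Assumption (not the paper's actual definition): w_a = 1 - b_a, which
# satisfies the only property the proofs need, namely w_a = 0 iff b_a = 1.

def group_forecast(forecasts, briers):
    """Weighted mean of agent forecasts; totally inaccurate agents get weight 0."""
    weights = [1.0 - b for b in briers]
    total = sum(weights)
    if total == 0.0:
        raise ValueError("all agents have a record of total inaccuracy")
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# Dictatorship: alice (b=0.5) dictates over bob and charlie (b=1).
print(group_forecast([0.3, 0.9, 0.1], [0.5, 1.0, 1.0]))  # -> 0.3
# Pure oligarchy: alice (b=1) is ignored; bob and charlie share the weight.
print(group_forecast([0.3, 0.9, 0.1], [1.0, 0.2, 0.6]))
```

With the illustrative Brier scores above, the first call reproduces the dictatorship behaviour and the second the pure-oligarchy behaviour.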
\begin{example} \BIn{In the running example, if \AX{alice, bob and charlie have Brier scores of 1, 0.2 and 0.6, resp., alice's forecast} has no impact on $\ensuremath{\Forecast^g}$, whilst \AX{bob and charlie's} aggregated forecast becomes the group forecast $\ensuremath{\Forecast^g}$.} \end{example} \section{Evaluation}\label{sec:experiments} \BI{We conducted an experiment using a dataset obtained from the `Superforecasting' project, Good Judgment Inc \cite{GJInc}, to simulate four past forecasting debates in FAFs. This dataset contained 1770 datapoints (698 `forecasts' and 1072 `comments') posted by 242 anonymised users with a range of expertise. The original debates had occurred on the publicly available group forecasting platform, the Good Judgment Open (GJO)\footnote{https://www.gjopen.com/}, providing a suitable baseline against which to compare FAFs' accuracy.} \BI{For the experiment, we used a prototype implementation of FAFs in the form of the publicly available web platform called \emph{Arg\&Forecast} (see \cite{Irwin2022} for an introduction to the platform and an additional human experiment with FAFs). Python's Gensim topic modelling library \cite{rehurek2011gensim} was used to separate the datapoints for each debate into contextual-temporal groups that could form update frameworks.} In each update framework the proposal forecast was set to the mean average of forecasts made in the update framework window and each argument appeared only once. Gensim was further used to simulate voting, matching users to specific arguments they (dis)approved of.
Some 4,700 votes \AX{were then} generated with a three-valued system (where votes were taken from \{0,0.5,1\}) to ensure consistency: if a user voiced approval for an argument in the debate time window, their vote for the corresponding argument(s) was set to 1; disapproval for an argument led to a vote of 0, and (in the most common case) if a user did not mention an argument at all, their vote for the corresponding argument(s) defaulted to 0.5. With the views of all participating users wrt the proposal argument encoded in each update framework's votes, forecasts could then be simulated. If a forecast was irrational, violating any of the three constraints in Def.~\ref{def:irrationality} (referred to \AX{in the following} as \emph{increase}, \emph{decrease} and \emph{scale}, resp.), it was blocked and, to mimic real life use, an automatic `follow up' forecast was made. The `follow up' forecast would be the closest possible prediction (to their original choice) a user could make whilst remaining `rational'. \BI{Note that evaluation of the aggregation function described in \AX{§}\ref{subsec:aggregation} was outside this experiment, since the past forecasting accuracy of the dataset's 242 anonymised users was unavailable. Instead, we used \AX{the} mean average whilst adopting the GJO's method for scoring the accuracy of a user and/or group over the lifetime of the question \cite{roesch_2015}. This meant calculating a daily forecast and daily Brier score for each user, for every day of the question. After users made their first rational forecast, that forecast became their `daily forecast' until it was updated with a new forecast. 
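The three-valued vote encoding described above can be sketched as follows (a simplified reconstruction: the argument identifiers and per-user approval/disapproval sets are hypothetical placeholders for the output of the Gensim matching step).

```python
# Three-valued vote encoding used in the simulation: voiced approval -> 1,
# voiced disapproval -> 0, argument not mentioned (most common case) -> 0.5.

def encode_votes(all_args, approved, disapproved):
    """Map every argument id to a vote in {0, 0.5, 1} for one user."""
    votes = {}
    for arg in all_args:
        if arg in approved:
            votes[arg] = 1.0
        elif arg in disapproved:
            votes[arg] = 0.0
        else:
            votes[arg] = 0.5
    return votes

# A user who approved "a1" and disapproved "a3" in the debate time window:
print(encode_votes(["a1", "a2", "a3"], {"a1"}, {"a3"}))
```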
Average and range of daily Brier scores allowed reliable comparison between (individual and aggregated) performance of the GJO versus the FAF implementation.} \begin{table}[t] \begin{tabular}{@{}llllll@{}} \toprule Q & Group $\ensuremath{b}$ & $min(\ensuremath{b})$ & $max(\ensuremath{b})$ \\ \midrule Q1 & 0.1013 (0.1187) & 0.0214 (0) & 0.4054 (1) \\ Q2 & 0.216 (0.1741) & 0 (0) & 0.3853 (1) \\ Q3 & 0.01206 (0.0227) & 0.0003 (0) & 0.0942 (0.8281) \\ Q4 & 0.5263 (0.5518) & 0 (0) & 0.71 (1) \\ \midrule \textbf{All} & \textbf{0.2039 (0.217)} & \textbf{0 (0)} & \textbf{1 (1)} \\ \bottomrule \end{tabular} \caption{The accuracy of the platform group versus control, where \AX{`}Group $\ensuremath{b}$\AX{'} is the aggregated (mean) Brier score, `$min(\ensuremath{b})$' is the lowest individual Brier score and `$max(\ensuremath{b})$' is the highest individual Brier score. Q1-Q4 indicate the four simulated debates.} \label{accuracyExp1} \end{table} \begin{table}[t] \begin{tabular}{llllll} \hline \multirow{2}{*}{Q} & \multirow{2}{*}{$\overline{\ensuremath{\mathcal{C}}}$} & \multirow{2}{*}{Forecasts} & \multicolumn{3}{c}{Irrational Forecasts} \\ \cline{4-6} & & & \multicolumn{1}{c}{\emph{Increase} } \!\!\!\! & \multicolumn{1}{c}{\emph{Decrease} } \!\!\!\! & \multicolumn{1}{c}{\emph{Scale} }\!\! \!\! 
\\ \hline Q1 & -0.0418 & 366 & 63 & 101 & 170 \\ Q2 & 0.1827 & 84 & 11 & 15 & 34 \\ Q3 & -0.4393 & 164 & 53 & 0 & 86 \\ Q4 & 0.3664 & 84 & 4 & 19 & 15 \\ \hline All & -0.0891 & 698 & 131 & 135 & 305 \\ \hline \end{tabular} \caption{Auxiliary results from \FT{the experiment}, where $\overline{\ensuremath{\mathcal{C}}}$ is the average confidence score, `Forecasts' is number of forecasts made in each question and `Irrational Forecasts' the number in each question which violated each constraint in §\ref{subsec:rationality}.} \label{exp1auxinfo} \end{table} \paragraph{Results.} As Table \ref{accuracyExp1} demonstrates, simulating forecasting debates from GJO in \emph{Arg\&Forecast} led to predictive accuracy improvements in three of the four debates. \BIn{This is reflected in these debates by a substantial reduction in Brier scores versus control.} The greatest accuracy improvement in absolute terms was in Q4, which saw a Brier score decrease of 0.0255. In relative terms, Brier score decreases ranged from 5\% (Q4) to 47\% (Q3). \BIn{The average Brier score decrease was 33\%, representing a significant improvement in forecasting accuracy across the board}. \BIn{Table \ref{exp1auxinfo} demonstrates how \AX{our} rationality constraints drove forward this improvement}. 82\% of forecasts made across the four debates were classified as irrational \BIn{and subsequently moderated with a rational `follow up' forecast}. Notably, there were more \emph{irrational scale} forecasts than \emph{irrational increase} and \emph{irrational decrease} forecasts combined. These results demonstrate how argumentation-based rationality constraints can play an active role in facilitating higher forecasting accuracy, signalling the early promise of FAFs. \section{Conclusions}\label{sec:conclusions} We have introduced the Forecasting Argumentation Framework (FAF), a multi-agent argumentation framework which supports forecasting debates and probability estimates. 
FAFs are composite argumentation frameworks, comprised of multiple non-concurrent update frameworks which themselves depend on three new argument types and a novel definition of rationality for the forecasting context. Our theoretical and empirical evaluation demonstrates the potential of FAFs, namely in increasing forecasting accuracy, holding intuitive properties, identifying irrational behaviour and driving higher engagement with the forecasting question (more arguments and responses, and more forecasts in the user study). These strengths align with requirements set out by previous research in the field of judgmental forecasting. There \AX{is} a multitude of possible directions for future work. First, FAFs are equipped to deal only with two-valued outcomes but, given the prevalence of forecasting issues with multi-valued outcomes (e.g. `Who will win the next UK election?'), expanding their capability would add value. Second, further work may focus on the rationality constraints, e.g. by introducing additional parameters to adjust their strictness, or \AX{by implementing} alternative interpretations of rationality. Third, future work could explore constraining agents' argumentation. This could involve using past Brier scores to limit the quantity or strength of agents' arguments and also to give them greater leeway wrt the rationality constraints. \FTn{Fourth, our method relies upon acyclic graphs: we believe that they are intuitive for users and note that all Good Judgment Open debates were acyclic; nonetheless, the inclusion of cyclic relations (e.g. to allow \AX{con} arguments that attack each other) could expand the scope of the argumentative reasoning \AX{in FAFs}. } Finally, there is an immediate need for larger scale human experiments. \newpage \section*{Acknowledgements} The authors would like to thank Prof. Anthony Hunter for his helpful contributions to discussions in the build up to this work. \BIn{Special thanks, in addition, go to Prof. Philip E.
Tetlock and the Good Judgment Project team for their warm cooperation and for providing datasets for the experiments.} \AX{Finally, the authors would like to thank the anonymous reviewers and meta-reviewer for their suggestions, which led to a significantly improved paper.} \bibliographystyle{kr}
\section{Introduction} \label{sec:0} White dwarf binaries are thought to be the most common binaries in the Universe, and in our Galaxy their number is estimated to be as high as 10$^8$. In addition, most stars are known to be part of binary systems, roughly half of which have orbital periods short enough that the evolution of the two stars is strongly influenced by the presence of a companion. Furthermore, it has become clear from observed close binaries that a large fraction of binaries that interacted in the past must have lost considerable amounts of angular momentum, thus forming compact binaries with compact stellar components. The details of the evolution leading to the loss of angular momentum are uncertain, but generally this is interpreted in the framework of the so-called ``common-envelope evolution'': the picture that in a mass-transfer phase between a giant and a more compact companion the companion quickly ends up inside the giant's envelope, after which frictional processes slow down the companion and the core of the giant, causing the ``common envelope'' to be expelled, as well as the orbital separation to shrink dramatically \cite{Taam and Sandquist (2000)}. Among the most compact binaries known, often called ultra-compact or ultra-short binaries, are those hosting two white dwarfs, classified into two types: \emph{detached} binaries, in which the two components are relatively widely separated, and \emph{interacting} binaries, in which mass is transferred from one component to the other. In the latter class a white dwarf is accreting from a white-dwarf-like object (we often refer to them as AM CVn systems, after the prototype of the class, the variable star AM CVn; \cite{warn95,Nelemans (2005)}). \begin{figure} \includegraphics[height=7.5cm,angle=0]{P_Mtot_SPYnew1} \caption{Period versus total mass of double white dwarfs. The points and arrows are observed systems \cite{Nelemans et al. (2005)}, the grey shade a model for the Galactic population.
Systems to the left of the dashed line will merge within a Hubble time, systems above the dotted line have a combined mass above the Chandrasekhar mass. The top left corner shows the region of possible type Ia supernova progenitors, where the grey shade has been darkened for better visibility (adapted from \cite{Nelemans (2007)}). } \label{fig:P_Mtot} \end{figure} In the past many authors have emphasised the importance of studying white dwarfs in DDBs. In fact, the study of ultra-short white dwarf binaries is relevant to some important astrophysical questions which have been outlined by several authors. Recently, \cite{Nelemans (2007)} listed the following ones: \begin{itemize} \item {\em Binary evolution} Double white dwarfs are excellent tests of binary evolution. In particular the orbital shrinkage during the common-envelope phase can be tested using double white dwarfs. The reason is that for giants there is a direct relation between the mass of the core (which becomes a white dwarf and so its mass is still measurable today) and the radius of the giant. The latter carries information about the (minimal) separation between the two components in the binary before the common envelope, while the separation after the common envelope can be estimated from the current orbital period. This enables a detailed reconstruction of the evolution leading from a binary consisting of two main sequence stars to a close double white dwarf \cite{Nelemans et al.(2000)}. The interesting conclusion of this exercise is that the standard schematic description of the common envelope -- in which the envelope is expelled at the expense of the orbital energy -- cannot be correct. An alternative scheme, based on the angular momentum, for the moment seems to be able to explain all the observations \cite{Nelemans and Tout (2005)}.
\item {\em Type Ia supernovae} Type Ia supernovae have peak brightnesses that are well correlated with the shape of their light curve \cite{Phillips (1993)}, making them ideal standard candles to determine distances. The measurement of the apparent brightness of far away supernovae as a function of redshift has led to the conclusion that the expansion of the universe is accelerating \cite{Perlmutter et al. (1998),Riess et al.(2004)}. This depends on the assumption that these far-away (and thus old) supernovae behave the same as their local cousins, which is a quite reasonable assumption. However, one of the problems is that we do not know what exactly explodes and why, so the likelihood of this assumption is difficult to assess \cite{Podsiadlowski et al. (2006)}. One of the proposed models for the progenitors of type Ia supernovae is massive close double white dwarfs that will explode when the two stars merge \cite{Iben and Tutukov (1984)}. In Fig.~\ref{fig:P_Mtot} the observed double white dwarfs are compared to a model for the Galactic population of double white dwarfs \cite{Nelemans et al. (2001)}, in which the merger rate of massive double white dwarfs is similar to the type Ia supernova rate. The grey shade in the relevant corner of the diagram is enhanced for visibility. The discovery of at least one system in this box confirms the viability of this model (in terms of event rates). \item {\em Accretion physics} The fact that in AM CVn systems the mass-losing star is an evolved, hydrogen-deficient star gives rise to a unique astrophysical laboratory, in which accretion discs are made of almost pure helium \cite{Marsh et al. (1991),Schulz et al.(2001),Groot et al. (2001),Roelofs et al. (2006),Werner et al. (2006)}. This opens the possibility to test the behaviour of accretion discs of different chemical composition.
\item {\em Gravitational wave emission} Until recently the DDBs with two NSs were considered among the best sources to look for gravitational wave emission, mainly due to the relatively high chirp mass expected for these sources. In fact, the strength of the gravitational wave amplitude can be simply inferred from \cite{Evans et al. (1987)}: \begin{equation} h = \left[ \frac{16 \pi G L_{GW}} {c^3 \omega^2_g 4 \pi d^2} \right] ^{1/2} = 10^{-21} \left( \frac{{\cal{M}}}{{\it M}_{\odot}} \right)^{5/3} \left ( \frac{P_{orb}}{\rm 1 hr} \right)^{-2/3} \left ( \frac{d}{\rm 1 kpc} \right)^{-1} \end{equation} where \begin{equation} L_{GW} = \frac{32}{5}\frac{G^4}{c^5}\frac{M^2 m^2 (m+M)}{a^5} ; \end{equation} \begin{equation} {\cal{M}}=\frac{(Mm)^{3/5}}{(M+m)^{1/5}} \end{equation} and where the frequency of the wave is given by $f = 2/P_{orb}$. It is evident that the strain signal $h$ from DDBs hosting neutron stars is a factor 5-20 higher than in the case of DDBs with white dwarfs as long as the orbital period is larger than approximately 10-20 minutes. In recent years, AM CVns have received great attention as they represent a large population of guaranteed sources for the forthcoming \textit{Laser Interferometer Space Antenna} \cite{2006astro.ph..5722N,2005ApJ...633L..33S}. Double WD binaries enter the \textit{LISA} observational window (0.1 $\div$ 100 mHz) at an orbital period $\sim$ 5 hrs and, as they evolve secularly through GW emission, they cross the whole \textit{LISA} band. They are expected to be so numerous ($\sim 10^3 \div 10^4$), close on average, and luminous in GWs as to create a stochastic foreground that dominates the \textit{LISA} observational window up to $\approx$ 3 mHz \cite{2005ApJ...633L..33S}. Detailed knowledge of the characteristics of their background signal would thus be needed to model it and study weaker background GW signals of cosmological origin.
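The scaling relation above is straightforward to evaluate numerically. A sketch in Python (the two 0.6 $M_{\odot}$ masses and the 0.5 kpc distance are illustrative assumptions, not measured values):

```python
# Strain amplitude h from the scaling relation quoted above (Evans et al. 1987):
#   h = 1e-21 * (Mchirp/Msun)**(5/3) * (Porb/1 hr)**(-2/3) * (d/1 kpc)**(-1)

def chirp_mass(m1, m2):
    """Chirp mass M = (m1*m2)**(3/5) / (m1+m2)**(1/5), masses in solar units."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def strain(m1, m2, porb_hr, d_kpc):
    """Dimensionless GW strain h of a circular binary, per the relation above."""
    return 1e-21 * chirp_mass(m1, m2) ** (5.0 / 3.0) * porb_hr ** (-2.0 / 3.0) / d_kpc

# Two (assumed) 0.6 Msun white dwarfs at the 321.5 s period of RX J0806.3+1527,
# for an assumed distance of 0.5 kpc:
print(f"{strain(0.6, 0.6, 321.5 / 3600.0, 0.5):.1e}")  # -> 3.4e-21
```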
\end{itemize} \begin{figure} \includegraphics[height=12cm,angle=-90]{Galactic_GWR1} \caption{Expected signals of ultra-compact binaries; the ones with error bars are observed systems (adapted from \cite{Roelofs et al. (2006),Nelemans (2007)}).} \label{fig:fh_HST} \end{figure} A relatively small number of ultracompact DDB systems is presently known. According to \cite{2006MNRAS.367L..62R} there exist 17 confirmed objects with orbital periods in the $10 \div 70$ min range in which a hydrogen-deficient mass donor, either a semi-degenerate star or a WD itself, is present. These are called AM CVn systems and are roughly characterized by optical emission modulated at the orbital period, X-ray emission showing no evidence for a significant modulation (from which a moderately magnetic primary is suggested, \cite{2006astro.ph.10357R}) and, in the few cases where timing analyses could be carried out, orbital spin-down consistent with GW emission-driven mass transfer. In addition there exist two peculiar objects, sharing a number of observational properties that partially match those of the ``standard'' AM CVn's. They are RX\,J0806.3+1527\ and RX\,J1914.4+2456, whose X-ray emission is $\sim$ 100\% pulsed, with on-phase and off-phase of approximately equal duration. The single modulations found in their lightcurves, both in the optical and in X-rays, correspond to periods of, respectively, 321.5 and 569 s (\cite{2004MSAIS...5..148I,2002ApJ...581..577S}) and were first interpreted as orbital periods. If so, these two objects are the binary systems with the shortest orbital period known and could belong to the AM CVn class. However, in addition to peculiar emission properties with respect to other AM CVn's, timing analyses carried out by the above-cited authors demonstrate that, in this interpretation, these two objects have shrinking orbits.
This is contrary to what is expected in mass-transferring double white dwarf systems (including AM CVn systems) and suggests the possibility that the binary is detached, with the orbit shrinking because of GW emission. The electromagnetic emission would have in turn to be caused by some other kind of interaction. Nonetheless, there are a number of alternative models to account for the observed properties, all of them based upon binary systems. The intermediate polar (IP) model (\cite{Motch et al.(1996),io99,Norton et al. (2004)}) is the only one in which the pulsation periods are not assumed to be orbital. In this model, the pulsations are likely due to the spin of a white dwarf accreting from a non-degenerate secondary star. Moreover, due to geometrical constraints the orbital period is not expected to be detectable. The other two models assume double white dwarf binaries in which the pulsation period is the orbital period. Each of them invokes semi-detached, accreting double white dwarfs: one is magnetic, the double degenerate polar model (\cite{crop98,ram02a,ram02b,io02a}), while the other is non-magnetic, the direct impact model (\cite{Nelemans et al. (2001),Marsh and Steeghs(2002),ram02a}), in which, due to the compact dimensions of these systems, the mass transfer stream is forced to hit directly onto the accreting white dwarf rather than to form an accretion disk. \begin{table}[!ht] \caption{Overview of observational properties of AM CVn stars (adapted from \cite{Nelemans (2005)})} \label{tab:overview} \smallskip \begin{center} \hspace*{-0.5cm} {\small \begin{tabular}{lllllllcc}\hline Name & $P_{\rm orb}^a$ & & $P_{\rm sh}^a$ & Spectrum & Phot.
var$^b$ & dist & X-ray$^c$ & UV$^d$ \\ & (s) & & (s) & & & (pc) & & \\ \hline \hline ES Cet & 621 &(p/s) & & Em & orb & 350& C$^3$X & GI \\ AM CVn & 1029 &(s/p) & 1051 & Abs & orb & 606$^{+135}_{-95}$ & RX & HI \\ HP Lib & 1103 &(p) & 1119 & Abs & orb & 197$^{+13}_{-12}$ & X & HI \\ CR Boo & 1471 &(p) & 1487 & Abs/Em? & OB/orb & 337$^{+43}_{-35}$& ARX & I \\ KL Dra & 1500 &(p) & 1530 & Abs/Em? & OB/orb & & & \\ V803 Cen & 1612 &(p) & 1618 & Abs/Em? & OB/orb & & Rx & FHI \\ SDSSJ0926+36 & 1698.6& (p) & & & orb & & & \\ CP Eri & 1701 &(p) & 1716 & Abs/Em & OB/orb & & & H \\ 2003aw & ? & & 2042 & Em/Abs? & OB/orb & & & \\ SDSSJ1240-01 & 2242 &(s) & & Em & n & & & \\ GP Com & 2794 &(s) & & Em & n & 75$\pm2$ & ARX & HI \\ CE315 & 3906 &(s) & & Em & n & 77? & R(?)X & H \\ & & & & & & & & \\ Candidates & & & & & & & & \\\hline\hline RXJ0806+15 & 321 &(X/p) & & He/H?$^{11}$ & ``orb'' & & CRX & \\ V407 Vul & 569 &(X/p) & & K-star$^{16}$ & ``orb'' & & ARCRxX & \\ \hline \end{tabular} } \end{center} {\small $a$ orb = orbital, sh = superhump, periods from \cite{ww03}, see references therein, (p)/(s)/(X) for photometric, spectroscopic, X-ray period.\\ $b$ orb = orbital, OB = outburst\\ $c$ A = ASCA, C = Chandra, R = ROSAT, Rx = RXTE, X = XMM-Newton \cite{kns+04}\\ $d$ F = FUSE, G = GALEX, H = HST, I = IUE } \end{table} After a brief presentation of the two X--ray selected double degenerate binary systems, we discuss the main scenario proposed for these systems, the Unipolar Inductor Model (UIM), introduced by \cite{2002MNRAS.331..221W} and further developed by \cite{2006A&A...447..785D,2006astro.ph..3795D}, and compare its predictions with the salient observed properties of these two sources. \subsection{RX\,J0806.3+1527} \label{0:j0806} RX\,J0806.3+1527\ was discovered in 1990 with the {\em ROSAT}\ satellite during the All-Sky Survey (RASS; \cite{beu99}). However, it was only in 1999 that a periodic signal at 321\,s was detected in its soft X-ray flux with the {\em ROSAT}\ HRI (\cite{io99,bur01}).
Subsequent deeper optical studies allowed the unambiguous identification of the optical counterpart of RX\,J0806.3+1527, a blue $V=21.1$ ($B=20.7$) star (\cite{io02a,io02b}). $B$, $V$ and $R$ time-resolved photometry revealed the presence of a $\sim 15$\% modulation at the $\sim 321$\,s X-ray period (\cite{io02b,ram02a}). \begin{figure*}[htb] \resizebox{16pc}{!}{\rotatebox[]{-90}{\includegraphics{new_spec_norm.ps}}} \caption{VLT FORS1 medium (6\AA; 3900--6000\AA) and low (30\AA; above 6000\AA) resolution spectra obtained for the optical counterpart of RX\,J0806.3+1527. Numerous faint emission lines of HeI and HeII (blended with H) are labeled (adapted from \cite{io02b}).} \label{spec} \end{figure*} \begin{figure*}[hbt] \centering \resizebox{20pc}{!}{\includegraphics{israel_f1.eps} } \caption{Left panel: Results of the phase fitting technique used to infer the P-\.P coherent solution for RX\,J0806.3+1527: the linear term (P component) has been corrected, while the quadratic term (the \.P component) has been kept for clarity. The best \.P solution inferred for the optical band is marked by the solid fit line. Right panel: 2001-2004 optical flux measurements at different wavelengths.} \label{timing} \end{figure*} The VLT spectral study revealed a blue continuum with no intrinsic absorption lines \cite{io02b}. Broad ($\rm FWHM\sim 1500~\rm km~s^{-1}$), low equivalent width ($EW\sim -2\div-6$ \AA) emission lines from the He~II Pickering series (plus additional emission lines likely associated with He~I, C~III, N~III, etc.; for a different interpretation see \cite{rei04}) were instead detected \cite{io02b}. These findings, together with the period stability and absence of any additional modulation in the 1\,min--5\,hr period range, were interpreted in terms of a double degenerate He-rich binary (a subset of the AM CVn class; see \cite{warn95}) with an orbital period of 321\,s, the shortest ever recorded.
Moreover, RX\,J0806.3+1527\ was noticed to have optical/X-ray properties similar to those of RX\,J1914.4+2456, a 569\,s modulated soft X-ray source proposed as a double degenerate system (\cite{crop98,ram00,ram02b}). In the past years the detection of spin--up was reported, at a rate of $\sim$6.2$\times$10$^{-11}$\, s~s$^{-1}$, for the 321\,s orbital modulation, based on optical data taken from the Nordic Optical Telescope (NOT) and the VLT archive, and by using incoherent timing techniques \cite{hak03,hak04}. Similar results were reported also for the X-ray data (ROSAT and Chandra; \cite{stro03}) of RX\,J0806.3+1527\ spanning over 10 years of incoherent observations and based on the NOT results \cite{hak03}. A Telescopio Nazionale Galileo (TNG) long-term project (started in 2000) devoted to the study of the long-term timing properties of RX\,J0806.3+1527\ found a slightly energy--dependent pulse shape with the pulsed fraction increasing toward longer wavelengths, from $\sim$12\% in the B-band to nearly 14\% in the I-band (see lower right panel of Figure~\ref{QU}; \cite{2004MSAIS...5..148I}). An additional variability of the optical pulse shape as a function of time, at a level of 4\%, was also detected (see upper right panel of Figure~\ref{QU}). The first coherent timing solution was also inferred for this source, firmly establishing that the source was spinning-up: P=321.53033(2)\,s, and \.P=-3.67(1)$\times$10$^{-11}$\,s~s$^{-1}$ (90\% uncertainties are reported; \cite{2004MSAIS...5..148I}). Reference \cite{2005ApJ...627..920S} obtained independently a phase-coherent timing solution for the orbital period of this source over a similar baseline, which is fully consistent with that of \cite{2004MSAIS...5..148I}. See \cite{2007MNRAS.374.1334B} for a similar coherent timing solution also including the covariance terms of the fitted parameters.
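The practical difference between incoherent and phase-coherent timing can be illustrated with the quadratic cycle count implied by the quoted P and \.P (a sketch; the 3 yr baseline is an assumption roughly matching the January 2001 - May 2004 interval):

```python
# Cycle count for a coherent P, Pdot timing solution:
#   N(t) = nu*t + 0.5*nudot*t**2, with nu = 1/P0 and nudot = -Pdot/P0**2.
P0 = 321.53033      # s, optical period from the coherent solution in the text
Pdot = -3.67e-11    # s/s, i.e. the source is spinning up

nu = 1.0 / P0
nudot = -Pdot / P0**2          # > 0 for a shrinking period

def cycles(t):
    """Orbital cycles elapsed t seconds after the reference epoch."""
    return nu * t + 0.5 * nudot * t**2

t = 3.0 * 365.25 * 86400.0     # ~3 yr baseline, in seconds
drift = cycles(t) - nu * t     # cycles gained over a constant-period model
print(f"{drift:.2f}")          # -> 1.59: ignoring Pdot loses phase coherency
```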
\begin{figure}[hbt] \centering \resizebox{33pc}{!}{\rotatebox[]{0}{\includegraphics{israel_f2.eps}}} \caption{Left Panel: The 1994--2002 phase coherently connected X--ray folded light curves (filled squares; 100\% pulsed fraction) of RX\,J0806.3+1527, together with the VLT-TNG 2001-2004 phase connected folded optical light curves (filled circles). Two orbital cycles are reported for clarity. A near anti-correlation was found. Right panels: Analysis of the phase variations induced by pulse shape changes in the optical band (upper panel), and the pulsed fraction as a function of optical wavelengths (lower panel). } \label{QU}% \end{figure} The relatively high accuracy obtained for the optical phase coherent P-\.P solution (in the January 2001 - May 2004 interval) was used to extend its validity backward to the ROSAT observations without losing phase coherency, i.e. with only one possible period cycle consistent with our P-\.P solution. The best X--ray phase coherent solution is P=321.53038(2)\,s, \.P=-3.661(5)$\times$10$^{-11}$\,s~s$^{-1}$ (for more details see \cite{2004MSAIS...5..148I}). Figure~\ref{QU} (left panel) shows the optical (2001-2004) and X--ray (1994-2002) light curves folded by using the above reported P-\.P coherent solution, confirming the amazing stability of the X--ray/optical anti-correlation first noted by \cite{2003ApJ...598..492I} (see inset of left panel of Figure\,\ref{QU}). \begin{figure} \centering \resizebox{20pc}{!}{\rotatebox[]{-90}{\includegraphics{israel_f3.eps}}} \caption{The results of the {\em XMM--Newton}\ phase-resolved spectroscopy (PRS) analysis for the absorbed blackbody spectral parameters: absorption, blackbody temperature, blackbody radius (assuming a distance of 500\,pc), and absorbed (triangles) and unabsorbed (asterisks) flux. Superposed is the folded X-ray light curve.
} \label{xmm}% \end{figure} \begin{figure} \centering \resizebox{16pc}{!}{\rotatebox[]{-90}{\includegraphics{israel_f4.eps}}} \caption{ Broad-band energy spectrum of RX\,J0806.3+1527\ as inferred from the {\em Chandra}, {\em XMM--Newton}, VLT and TNG measurements and {\it EUVE\/} upper limits. The dotted line represents one of the possible fitting blackbody models for the IR/optical/UV bands.} \label{xmm}% \end{figure} In 2001, a Chandra observation of RX\,J0806.3+1527, carried out simultaneously with time-resolved optical observations at the VLT, allowed for the first time the study of the details of the X-ray emission and the phase-shift between X-rays and optical band. The X-ray spectrum is consistent with a black body, occulted as a function of the modulation phase, with a temperature of $\sim$60\,eV \cite{2003ApJ...598..492I}. A 0.5 phase-shift between the X-rays and the optical band was reported \cite{2003ApJ...598..492I}. More recently, a 0.2 phase-shift was reported by analysing the whole historical X-ray and optical dataset: this latter result is considered the correct one \cite{2007MNRAS.374.1334B}. On 2002 November 1$^{\rm st}$ a second deep X-ray observation was obtained with the {\em XMM--Newton}\ instrumentation for about 26000\,s, providing an increased spectral accuracy (see left panel of Figure~\ref{xmm}). The {\em XMM--Newton}\ data show a lower value of the absorption column, a relatively constant black body temperature, a smaller black body size, and, correspondingly, a slightly lower flux. All these differences may be ascribed to the pile--up effect in the Chandra data, even though we cannot completely rule out the presence of real spectral variations as a function of time. In any case we note that this result is in agreement with the idea of a self-eclipsing (due only to a geometrical effect) small, hot and X--ray emitting region on the primary star.
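The ``small, hot spot'' picture can be checked with a back-of-the-envelope blackbody estimate (a sketch in cgs units; the $10^{32}$ erg s$^{-1}$ input is the reference luminosity adopted in the text, and the spherical geometry gives only an order-of-magnitude effective size, since the real emitter is a spot on the star):

```python
import math

# Effective blackbody radius R = sqrt(L / (4 pi sigma T^4)) for the ~60 eV
# X-ray emitting region, in cgs units (assumed spherical emitter).
SIGMA_SB = 5.670374e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
KB_EV = 8.617333e-5      # Boltzmann constant, eV / K

def blackbody_radius(L_erg_s, kT_eV):
    """Radius (cm) of a spherical blackbody of luminosity L and temperature kT."""
    T = kT_eV / KB_EV
    return math.sqrt(L_erg_s / (4.0 * math.pi * SIGMA_SB * T**4))

R = blackbody_radius(1e32, 60.0)   # L ~ 1e32 erg/s, kT ~ 60 eV
print(f"{R / 1e5:.1f} km")         # -> 7.7 km: far smaller than a white dwarf
```

A few-kilometre effective radius is indeed tiny compared to a white dwarf, consistent with a small self-eclipsing hot region.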
Timing analysis did not show any additional significant signal at periods longer or shorter than 321.5\,s (in the 200\,ms--5\,hr interval). By using the {\em XMM--Newton}\ OM, a first look at the source in the UV band (see right panel of Figure~\ref{xmm}) was obtained, confirming the presence of the blackbody component inferred from the IR/optical bands. Reference \cite{2003ApJ...598..492I} measured an on-phase X-ray luminosity (in the range 0.1-2.5 keV) $L_X = 8 \times 10^{31} (d/200~\mbox{pc})^2$ erg s$^{-1}$ for this source. These authors suggested that the bolometric luminosity might even be dominated by the (unseen) value of the UV flux, and reach values up to 5-6 times higher. The optical flux is only $\sim$ 15\% pulsed, indicating that most of it might not be associated with the same mechanism producing the pulsed X-ray emission (possibly the cooling luminosity of the WD plays a role). Given these uncertainties and, mainly, the uncertainty in the distance to the source, a luminosity $W\simeq 10^{32} (d/200~\mbox{pc})^2$ erg s$^{-1}$ will be assumed as a reference value.\\ \subsection{RX\,J1914.4+2456\ } \label{0:j1914} The luminosity and distance of this source have been subject to much debate in recent years. Reference \cite{2002MNRAS.331..221W} refers to earlier ASCA measurements that, for a distance of 200-500 pc, corresponded to a luminosity in the range ($4\times 10^{33} \div 2.5 \times 10^{34}$) erg s$^{-1}$. Reference \cite{2005MNRAS.357...49R}, based on more recent XMM-Newton observations and a standard blackbody fit to the X-ray spectrum, derived an X-ray luminosity of $\simeq 10^{35} d^2_{\mbox{\tiny{kpc}}}$ erg s$^{-1}$, where $d_{\mbox{\tiny{kpc}}}$ is the distance in kpc. The larger distance of $\sim$ 1 kpc was based on work by \cite{2006ApJ...649..382S}.
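All the luminosity figures quoted here scale with the assumed distance as $d^2$ (the observed flux being fixed). As a minimal sketch (the helper name is ours, not from the paper), one can check that the quoted ASCA range endpoints are a pure $d^2$ rescaling of one another:

```python
def rescale_L(L_ref, d_ref, d):
    """Rescale a luminosity quoted at distance d_ref to distance d,
    at fixed observed flux: L(d) = L_ref * (d / d_ref)**2."""
    return L_ref * (d / d_ref) ** 2

# The ASCA range quoted above, (4e33 .. 2.5e34) erg/s for 200-500 pc,
# corresponds to a single observed flux: 4e33 at 200 pc -> 2.5e34 at 500 pc.
L_500 = rescale_L(4e33, 200.0, 500.0)
```

The same scaling converts any of the luminosities below between the competing distance estimates.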
Still more recently, \cite{2006MNRAS.367L..62R} find that an optically thin thermal emission spectrum, with an edge at 0.83 keV attributed to O VIII, gives a significantly better fit to the data than a blackbody model. The optically thin thermal plasma model implies a much lower bolometric luminosity of L$_{\mbox{\tiny{bol}}} \simeq 10^{33}$ d$^2_{\mbox{\tiny{kpc}}}$ erg s$^{-1}$. \\ Reference \cite{2006MNRAS.367L..62R} also note that the determination of a 1 kpc distance is not free of uncertainties and that a minimum distance of $\sim 200$ pc might still be possible: the latter leads to a minimum luminosity of $\sim 3 \times 10^{31}$ erg s$^{-1}$. \\ Given these large discrepancies, interpretation of this source's properties remains ambiguous and dependent on assumptions. In the following, we refer to the more recent assessment by \cite{2006MNRAS.367L..62R} of a luminosity $L = 10^{33}$ erg s$^{-1}$ for a 1 kpc distance.\\ Reference \cite{2006MNRAS.367L..62R} also find possible evidence, at least in a few observations, of two secondary peaks in the power spectra. These lie very close to the strongest peak at $\sim 1.76 \times 10^{-3}$ Hz ($\Delta \nu \simeq 5\times 10^{-5}$ Hz) and are symmetrically distributed around it. References \cite{2006MNRAS.367L..62R} and \cite{2006ApJ...649L..99D} discuss the implications of this possible finding. \section{The Unipolar Inductor Model} \label{sec:1} The Unipolar Inductor Model (UIM) was originally proposed to explain the origin of bursts of decametric radiation received from Jupiter, whose properties appear to be strongly influenced by the orbital location of Jupiter's satellite Io \cite{1969ApJ...156...59G,1977Moon...17..373P}. \\ The model relies on Jupiter's spin period being different from the system's orbital period (Io's spin is tidally locked to the orbit). Jupiter has a surface magnetic field of $\sim$ 10 G, so that, given Io's good electrical conductivity ($\sigma$), the satellite experiences an e.m.f.
as it moves across the planet's field lines along the orbit. The e.m.f. accelerates free charges in the ambient medium, giving rise to a flow of current along the sides of the flux tube connecting the bodies. Flowing charges are accelerated to mildly relativistic energies and emit coherent cyclotron radiation through a loss cone instability (cf. \cite{Willes and Wu(2004)} and references therein): this is the basic framework in which Jupiter's decametric radiation and its modulation by Io's position are explained. Among the several confirmations of the UIM in this system, HST UV observations revealed the localized emission on Jupiter's surface due to flowing particles hitting the planet's surface - the so-called Io footprint \cite{Clarke et al. (1996)}. In recent years, the complex interaction between Io-related free charges (forming the Io torus) and Jupiter's magnetosphere has been understood in much greater detail \cite{Russ1998P&SS...47..133R,Russ2004AdSpR..34.2242R}. Despite these significant complications, the above scenario maintains its general validity, particularly in view of astrophysical applications.\\ Reference \cite{2002MNRAS.331..221W} considered the UIM in the case of close white dwarf binaries. They assumed a moderately magnetized primary, whose spin is not synchronous with the orbit, and a non-magnetic companion, whose spin is tidally locked. They particularly highlight the role of ohmic dissipation of currents flowing through the two WDs and show that this occurs essentially in the primary atmosphere. A small bundle of field lines leaving the primary surface threads the whole secondary. The orbital position of the latter is thus ``mapped'' onto a small region of the primary's surface; it is in this small region that ohmic dissipation - and the associated heating - mainly takes place. The resulting geometry, illustrated in Fig. \ref{fig:1}, naturally leads to mainly thermal, strongly pulsed X-ray emission, as the secondary moves along the orbit.
\begin{figure} \centering \includegraphics[height=6.18cm]{Wu2004.eps} \caption{Electric coupling between the asynchronous, magnetic primary star and the non-magnetic secondary, in the UIM (adapted from \cite{2002MNRAS.331..221W}).} \label{fig:1} \end{figure} The source of the X-ray emission is ultimately represented by the relative motion between primary spin and orbit, which powers the electric circuit. Because of resistive dissipation of currents, the relative motion is eventually expected to be cancelled. This in turn requires angular momentum to be redistributed between spin and orbit in order to synchronize them. The necessary torque is provided by the Lorentz force on cross-field currents within the two stars.\\ Reference \cite{2002MNRAS.331..221W} derived synchronization timescales $\tau_{\alpha} \sim$ a few 10$^3$ yrs for both RX\,J1914.4+2456\ and RX\,J0806.3+1527\ , less than 1\% of their orbital evolutionary timescales. This would imply a much larger Galactic population of such systems than predicted by population-synthesis models, a major difficulty of this version of the UIM. However, \cite{2006A&A...447..785D,2006astro.ph..3795D} have shown that the electrically active phase is actually long-lived because perfect synchronism is never reached. In a perfectly synchronous system the electric circuit would be turned off, while GWs would still cause orbital spin-up. Orbital motion and primary spin would thus go out of synchronism, which in turn would switch the circuit on. The synchronizing (magnetic coupling) and de-synchronizing (GWs) torques are thus expected to reach an equilibrium state at a sufficiently small degree of asynchronism.\\ We discuss in detail how the model works and how the major observed properties of RX\,J0806.3+1527\ and RX\,J1914.4+2456\ can be interpreted in the UIM framework. We refer to \cite{2005MNRAS.357.1306B} for a possible criticism of the model based on the shape of the pulsed profiles of the two sources.
Finally, we refer to \cite{2006ApJ...653.1429D,2006ApJ...649L..99D}, who have recently proposed alternative mass transfer models that can also account for long-lasting episodes of spin-up in Double White Dwarf systems. \section{UIM in Double Degenerate Binaries} \label{sec:1.1} Following \cite{2002MNRAS.331..221W}, we define the primary's asynchronism parameter $\alpha \equiv \omega_1/ \omega_o$, where $\omega_1$ and $\omega_o$ are the primary's spin and orbital frequencies. In a system with orbital separation $a$, the secondary star will move with the velocity $ v = a (\omega_o - \omega_1) = [G M_1 (1+q)]^{1/3}~\omega^{1/3}_o (1-\alpha)$ relative to field lines, where $G$ is the gravitational constant, $M_1$ the primary mass, and $q = M_2/M_1$ the system mass ratio. The electric field induced through the secondary is thus {\boldmath$E$} = $\frac{\mbox{{\boldmath$v \times B_2$}}}{c}$, with an associated e.m.f. $\Phi = 2R_2 E$, $R_2$ being the secondary's radius and {\boldmath$B$$_2$} the primary magnetic field at the secondary's location. The internal (Lorentz) torque redistributes angular momentum between spin and orbit conserving their sum (see below), while GW-emission causes a net loss of orbital angular momentum. Therefore, as long as the primary spin is not efficiently affected by other forces, \textit{e.g.} tidal forces (cf. App.A in \cite{2006A&A...447..785D}), it will lag behind the evolving orbital frequency, thus keeping electric coupling continuously active.\\ Since most of the power dissipation occurs at the primary atmosphere (cf. \cite{2002MNRAS.331..221W}), we slightly simplify our treatment by assuming no dissipation at all at the secondary. In this case, the binary system is wholly analogous to the elementary circuit of Fig. \ref{fig:2}. \begin{figure} \centering \includegraphics[width=5.8cm]{CircuitComplete.eps} \caption{Sketch of the elementary circuit envisaged in the UIM.
The secondary star acts as the battery, the primary star represents a resistance connected to the battery by conducting ``wires'' (field lines). Inclusion of the effect of GWs corresponds to connecting the battery to a plug, so that it is recharged at some given rate. Once the battery's initial energy reservoir is consumed, the bulb will be powered just by the energy fed through the plug. This corresponds to the ``steady-state'' solution.} \label{fig:2} \end{figure} Given the e.m.f. ($\Phi$) across the secondary star and the system's effective resistance ${\cal{R}} \approx(2\sigma R_2)^{-1}~(a/R_1)^{3/2}$, the dissipation rate of electric current ($W$) in the primary atmosphere is: \begin{equation} \label{dissipation} W = I^2 {\cal{R}} = I \Phi = k \omega^{17/3}_o (1-\alpha)^2 \end{equation} where $k = 2 (\mu_1/c)^2 \sigma R^{3/2}_1 R^3_2 / [GM_1(1+q)]^{11/6}$ is a constant of the system.\\ The Lorentz torque ($N_L$) has the following properties: \textit{i}) it acts with the same magnitude and opposite signs on the primary star and the orbit, $N_L = N^{(1)}_L = - N^{(\mbox{\tiny{orb}})}_L$.
Therefore, \textit{ii}) $N_L$ conserves the total angular momentum in the system, transferring all that is extracted from one component to the other one; \textit{iii}) $N_L$ is simply related to the energy dissipation rate: $W = N_L \omega_o (1-\alpha)$.\\ From the above, the evolution equation for $\omega_1$ is: \begin{equation} \label{omega1} \dot{\omega}_1 = (N_L/I_1) = \frac{W}{I_1 \omega_o (1-\alpha)} \end{equation} The orbital angular momentum is $L_o = I_o \omega_o$, so that the orbital evolution equation is: \begin{equation} \label{omegaevolve} \dot{\omega}_o = - 3 (N_{\mbox{\tiny{gw}}}+ N^{(\mbox{\tiny{orb}})}_L)/I_o = - 3 (N_{\mbox{\tiny{gw}}} - N_L)/I_o = -\frac{3}{I_o\omega_o}\left(\dot{E}_{\mbox{\tiny{gw}}} -\frac{W}{1-\alpha} \right) \end{equation} where $I_o = q (1+q)^{-1/3} G^{2/3} M^{5/3}_1 \omega^{-4/3}_o$ is the orbital moment of inertia and $N_{\mbox{\tiny{gw}}} = \dot{E}_{\mbox{\tiny{gw}}}/ \omega_o$ is the GW torque. \subsection{Energetics of the electric circuit} \label{sec:2} Let us focus on how energy is transferred and consumed by the electric circuit. We begin by considering the rate of work done by $N_L$ on the orbit \begin{equation} \label{Eorb} \dot{E}^{(orb)}_L = N^{(orb)}_L \omega_o = -N_L \omega_o = - \frac{W} {1-\alpha}, \end{equation} and that done on the primary: \begin{equation} \label{espin} \dot{E}_{spin} = N_L \omega_1 = \frac{\alpha}{1-\alpha} W = -\alpha \dot{E}^{(orb)}_L. \end{equation} The sum $\dot{E}_{spin} + \dot{E}^{(orb)}_L = -W$. Clearly, not all of the energy extracted from one component is transferred to the other one. The energy lost to ohmic dissipation represents the energetic cost of spin-orbit coupling.\\ The above formulae allow us to draw some further conclusions concerning the relation between $\alpha$ and the energetics of the electrical circuit. When $\alpha>1$, the circuit is powered at the expense of the primary's spin energy.
A fraction $\alpha^{-1}$ of this energy is transferred to the orbit, the rest being lost to ohmic dissipation. When $\alpha <1$, the circuit is powered at the expense of the orbital energy and a fraction $\alpha$ of this energy is transferred to the primary spin. Therefore, \textit{the parameter $\alpha$ represents a measure of the energy transfer efficiency of spin-orbit coupling}: the more asynchronous a system is, the less efficiently energy is transferred, most of it being dissipated as heat. \subsection{Stationary state: General solution} \label{sec:2.1} As long as the asynchronism parameter is sufficiently far from unity, its evolution will be essentially determined by the strength of the synchronizing (Lorentz) torque, the GW torque being of minor relevance. The evolution in this case depends on the initial values of $\alpha$ and $\omega_o$, and on stellar parameters. This evolutionary phase drives $\alpha$ towards unity, \textit{i.e.} spin and orbit are driven towards synchronism. It is in this regime that the GW torque becomes important in determining the subsequent evolution of the system. \\ Indeed, once the condition $\alpha =1$ is reached, GW emission drives a small angular momentum disequilibrium. The Lorentz torque is in turn switched on to transfer to the primary spin the amount of angular momentum required for it to keep up with the evolving orbital frequency. This translates to the requirement that $\dot{\omega}_1 = \dot{\omega}_o$. Using expressions (\ref{omega1}) and (\ref{omegaevolve}), it is found that this condition implies the following equilibrium value for $\alpha$ (we call it $\alpha_{\infty}$): \begin{equation} \label{alfainf} 1 - \alpha_{\infty} = \frac{I_1}{k} \frac{\dot{\omega}_o/\omega_o} {\omega^{11/3}_o} \end{equation} This is greater than zero if the orbit is shrinking ($\dot{\omega}_o >0$), which implies that $\alpha_{\infty} <1$. For a widening orbit, on the other hand, $\alpha_{\infty} > 1$.
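The energy bookkeeping of the previous subsection and the equilibrium value just derived can both be sanity-checked numerically. The sketch below uses placeholder cgs values (the torque, frequencies and the rough magnitudes of $I_1$ and $k$ are illustrative, not a fit to either source):

```python
# 1) Spin-orbit energy bookkeeping: dE_spin + dE_orb = -W for any alpha,
#    where W = N_L * omega_o * (1 - alpha) is the ohmic dissipation rate.
N_L, omega_o = 1e34, 2e-2                      # placeholder torque (erg), frequency (rad/s)
for alpha in (0.5, 0.992, 1.05):
    W = N_L * omega_o * (1.0 - alpha)
    dE_orb = -N_L * omega_o                    # rate of work done on the orbit
    dE_spin = alpha * N_L * omega_o            # rate of work done on the primary spin
    assert abs(dE_spin + dE_orb + W) < 1e-9 * abs(dE_orb)

# 2) Equilibrium asynchronism: with 1 - alpha_inf computed from the last
#    equation, the spin-up rate from the first two equations reproduces
#    omegadot_o, i.e. spin and orbit evolve in lockstep.
I1, k = 3e50, 7.7e45                           # g cm^2; cgs "constant of the system"
omegadot_o = 2.2e-15                           # rad/s^2, placeholder
one_minus_ainf = (I1 / k) * (omegadot_o / omega_o) / omega_o ** (11.0 / 3.0)
W = k * omega_o ** (17.0 / 3.0) * one_minus_ainf ** 2
omega1_dot = W / (I1 * omega_o * one_minus_ainf)
```

Both checks hold for any parameter choice, since they only exercise the algebraic identities stated in the text.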
However, this latter case does not correspond to a long-lived configuration. Indeed, define the electric energy reservoir as $E_{UIM} \equiv (1/2) I_1 (\omega^2_1 -\omega^2_o)$, which is negative when $\alpha <1$ and positive when $\alpha >1$. Differentiating this definition under the steady-state condition $\dot{\omega}_1 = \dot{\omega}_o$ and substituting eq. (\ref{alfainf}): \begin{equation} \label{stationary} \dot{E}_{UIM} = - W. \end{equation} If $\alpha = \alpha_{\infty} >1$, energy is consumed at the rate $W$: the circuit will eventually switch off ($\alpha_{\infty}=1$). At later times, the case $\alpha_{\infty}<1$ applies.\\ If $\alpha = \alpha_{\infty} <1$, condition (\ref{stationary}) means that the battery recharges at the rate $W$ at which electric currents dissipate energy: the electric energy reservoir is conserved as the binary evolves.\\ The latter conclusion can be reversed (cf. Fig. \ref{fig:2}): in steady-state, the rate of energy dissipation ($W$) is fixed by the rate at which power is fed to the circuit by the plug ($\dot{E}_{UIM}$). The latter is determined by GW emission and the Lorentz torque and, therefore, by component masses, $\omega_o$ and $\mu_1$.\\ Therefore the steady-state degree of asynchronism of a given binary system is uniquely determined, given $\omega_o$. Since both $\omega_o$ and $\dot{\omega}_o$ evolve secularly, the equilibrium state will be ``quasi-steady'', $\alpha_{\infty}$ evolving secularly as well. \subsection{Model application: equations of practical use} \label{application} We have discussed in previous sections the existence of an asymptotic regime in the evolution of binaries in the UIM framework. Given the definition of $\alpha_\infty$ and $W$ (eqs. \ref{alfainf} and \ref{dissipation}, respectively), we have: \begin{equation} \label{luminosity} W = I_1 \omega_o \dot{\omega}_o~\frac{(1-\alpha)^2}{1-\alpha_{\infty}}.
\end{equation} The quantity $(1-\alpha_{\infty})$ represents the \textit{actual} degree of asynchronism only for those systems that had enough time to evolve towards steady-state, \textit{i.e.} with a sufficiently short orbital period. In this case, the steady-state source luminosity can thus be written as: \begin{equation} \label{luminsteady} W_{\infty} = I_1 \dot{\omega}_o \omega_o (1-\alpha_{\infty}) \end{equation} Therefore - under the assumption that a source is in steady-state - the quantity $\alpha_{\infty}$ can be determined from the measured values of $W$, $\omega_o$, $\dot{\omega}_o$. Given its definition (eq. \ref{alfainf}), this gives an estimate of $k$ and, thus, $\mu_1$.\\ The equation for the orbital evolution (\ref{omegaevolve}) provides a further relation between the three measured quantities, component masses and degree of asynchronism. This can be written as: \begin{equation} \label{useful} \dot{E}_{\mbox{\tiny{gr}}} - \frac{1}{3} I_o \omega^2_o (\dot{\omega}_o / \omega_o) + \frac{W} {(1-\alpha)} = 0 \end{equation} that becomes, inserting the appropriate expressions for $\dot{E}_ {\mbox{\tiny{gr}}}$ and $I_o$: \begin{equation} \label{extended} \frac{32}{5}\frac{G^{7/3}}{c^5}~\omega^{10/3}_o X^2 -\frac{1}{3}~ G^{2/3} \frac{\dot{\omega}_o}{\omega^{1/3}_o} X + \frac{W}{1-\alpha} = 0~, \end{equation} where $X \equiv M^{5/3}_1 q/(1+q)^{1/3} = {\cal{M}}^{5/3}$, $\cal{M}$ being the system's chirp mass. \section{RX\,J0806.3+1527\ } \label{rxj08} We assume here the values of $\omega_o$, $\dot{\omega}_o$ and of the bolometric luminosity reported in \ref{sec:0} and refer to \cite{2006astro.ph..3795D} for a complete discussion of how our conclusions depend on these assumptions.\\ In Fig. \ref{fig3} (see caption for further details), the dashed line represents the locus of points in the $M_2$ vs.
$M_1$ plane, for which the measured $\omega_o$ and $\dot{\omega}_o$ are consistent with being due to GW emission only, \textit{i.e.} if spin-orbit coupling were absent ($\alpha =1$). This corresponds to a chirp mass ${\cal{M}} \simeq$ 0.3 M$_{\odot}$. \\ Inserting the measured quantities in eq. (\ref{luminosity}) and assuming a reference value of $I_1 = 3 \times 10^{50}$ g cm$^2$, we obtain: \begin{equation} \label{j08} \frac{(1-\alpha)^2} {1-\alpha_{\infty}} \simeq \frac{10^{32}d^2_{200}} {1.3 \times 10^{34}} \simeq 8 \times 10^{-3} d^2_{200}. \end{equation} In principle, the source may be in any regime, but our aim is to check whether it can be in steady-state, so as to avoid the short timescale problem mentioned in \ref{sec:1}. Indeed, the short orbital period strongly suggests it may have reached the asymptotic regime (cf. \cite{2006astro.ph..3795D}). If we assume $\alpha =\alpha_{\infty}$, eq. (\ref{j08}) implies $(1-\alpha_{\infty}) \simeq 8 \times 10^{-3}$.\\ Once UIM and spin-orbit coupling are introduced, the locus of allowed points in the M$_2$ vs. M$_1$ plane is somewhat sensitive to the exact value of $\alpha$: the solid curve of Fig. \ref{fig3} was obtained, from eq. (\ref{extended}), for $\alpha = \alpha_{\infty} = 0.992$. \\ From this we conclude that, if RX\,J0806.3+1527\ is interpreted as being in the UIM steady-state, $M_1$ must be smaller than 1.1 $M_{\odot}$ in order for the secondary not to fill its Roche lobe, thus avoiding mass transfer. \begin{figure}[h] \centering \includegraphics[height=10.18cm,angle=-90]{Massej08.ps} \caption{M$_2$ vs. M$_1$ plot based on the measured timing properties of RX\,J0806.3+1527\ . The dashed curve is the locus expected if orbital decay is driven by GW alone, with no spin-orbit coupling. The solid line describes the locus expected if the system is in a steady-state, with $(1-\alpha) = (1-\alpha_{\infty}) \simeq 8 \times 10^{-3}$.
The horizontal dot-dashed line represents the minimum mass for a degenerate secondary not to fill its Roche-lobe at an orbital period of 321.5 s. Dotted lines are the loci of fixed mass ratio.} \label{fig3} \end{figure} From $(1-\alpha_{\infty}) = 8\times 10^{-3}$ and from eq. (\ref{dissipation}), $k \simeq 7.7 \times 10^{45}$ (c.g.s.): from this, component masses and primary magnetic moment can be constrained. Indeed, $k = \hat{k}(\mu_1, M_1, q; \overline{\sigma})$ (eq. \ref{dissipation}) and a further constraint derives from the fact that $M_1$ and $q$ must lie along the solid curve of Fig. \ref{fig3}. Given the value of $\overline{\sigma}$, $\mu_1$ is obtained for each point along the solid curve. We assume an electrical conductivity of $\overline{\sigma} = 3\times 10^{13}$ e.s.u. \cite{2002MNRAS.331..221W,2006astro.ph..3795D}. \\ The values of $\mu_1$ obtained in this way, and the corresponding field at the primary's surface, are plotted in Fig. \ref{fig2}, from which $\mu_1 \sim$ a few $\times 10^{30}$ G cm$^3$ results, somewhat sensitive to the primary mass.\\ We note further that, along the solid curve of Fig. \ref{fig3}, the chirp mass is slightly variable, being: $X \simeq (3.4 \div 4.5) \times 10^{54}$ g$^{5/3}$, which implies ${\cal{M}} \simeq (0.26 \div 0.31)$ M$_{\odot}$. More importantly, $\dot{E}_{\mbox{\tiny{gr}}} \simeq (1.1 \div 1.9) \times 10^{35}$ erg s$^{-1}$ and, since $W/(1-\alpha_{\infty}) = \dot{E}^{(orb)}_L \simeq 1.25 \times 10^{34}$ erg s$^{-1}$, we have $\dot{E}_{\mbox{\tiny{gr}}} \simeq (9\div 15)~\dot{E}^{(orb)}_L $. Orbital spin-up is essentially driven by GW alone; indeed, the dashed and solid curves are very close in the M$_2$ vs. M$_1$ plane. 
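The chain of numbers quoted in this section ($k \rightarrow 1-\alpha_\infty \rightarrow W \rightarrow X \rightarrow \mathcal{M}$) can be reproduced end-to-end. In the sketch below the component masses and radii are illustrative assumptions of ours (typical white-dwarf values, not unique); only $P$, $\dot P$, $I_1$, $\overline\sigma$ and $\mu_1$ come from the text, so agreement with the quoted $k \simeq 7.7\times10^{45}$, $(1-\alpha_\infty)\simeq 8\times10^{-3}$, $W \simeq 10^{32}$ erg s$^{-1}$ and $\mathcal{M}\simeq 0.26$--$0.31\,M_\odot$ is approximate:

```python
import math

G, c, MSUN = 6.674e-8, 2.998e10, 1.989e33      # cgs constants

# Measured timing of RX J0806.3+1527 (X-ray coherent solution)
P, Pdot = 321.53038, 3.661e-11
omega_o = 2.0 * math.pi / P
omegadot_o = 2.0 * math.pi * Pdot / P**2       # orbit shrinking: omega_o increasing

# From the text: I1, conductivity, magnetic moment. Masses/radii are assumed.
I1, sigma, mu1 = 3e50, 3e13, 1e30
M1, q = 0.9 * MSUN, 0.3                        # assumed primary mass and mass ratio
R1, R2 = 6e8, 2e9                              # assumed WD radii (cm)

# k = 2 (mu1/c)^2 sigma R1^(3/2) R2^3 / [G M1 (1+q)]^(11/6)
k = 2.0 * (mu1 / c)**2 * sigma * R1**1.5 * R2**3 / (G * M1 * (1.0 + q))**(11.0 / 6.0)
# -> close to the quoted 7.7e45 (cgs)

one_minus_ainf = (I1 / k) * (omegadot_o / omega_o) / omega_o**(11.0 / 3.0)
W = k * omega_o**(17.0 / 3.0) * one_minus_ainf**2

# Quadratic in X = Mchirp^(5/3): a X^2 - b X + C = 0; take the physical root
a = (32.0 / 5.0) * G**(7.0 / 3.0) / c**5 * omega_o**(10.0 / 3.0)
b = G**(2.0 / 3.0) * omegadot_o / (3.0 * omega_o**(1.0 / 3.0))
C = W / one_minus_ainf
X = (b + math.sqrt(b * b - 4.0 * a * C)) / (2.0 * a)
Mchirp = X**0.6 / MSUN                         # chirp mass in solar masses
```

With these assumed radii the sketch lands within a few percent of the quoted values, which is as close as the approximate inputs allow.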
\\ \begin{figure}[h] \centering \includegraphics[height=10.18cm,angle=-90]{Muj08.ps} \caption{The value of the primary magnetic moment $\mu_1$, and the corresponding surface B-field, as a function of the primary mass M$_1$, for $(1-\alpha) = (1-\alpha_{\infty}) = 8\times 10^{-3}$.} \label{fig2} \end{figure} Summarizing, the observational properties of RX\,J0806.3+1527\ can be well interpreted in the UIM framework, assuming it is in steady-state. This requires the primary to have $\mu_1 \sim 10^{30}$ G cm$^3$ and a spin just slightly slower than the orbital motion (the difference being less than $\sim 1$\%). \\ The expected value of $\mu_1$ can in principle be tested by future observations, through studies of polarized emission at optical and/or radio wavelengths \cite{Willes and Wu(2004)}. \section{RX\,J1914.4+2456\ } \label{rxj19} As for the case of RX\,J0806.3+1527\ , we adopt the values discussed in \ref{sec:0} and refer to \cite{2006astro.ph..3795D} for a discussion of all the uncertainties in these values and their implications for the model.\\ Application of the scheme used for RX\,J0806.3+1527\ to this source is not as straightforward. The inferred luminosity of this source seems inconsistent with steady-state. With the measured values of\footnote{again assuming $I_1= 3 \times 10^{50}$ g cm$^2$} $\omega_o$ and $\dot{\omega}_o$, the system steady-state luminosity should be $ < 2 \times 10^{32}$ erg s$^{-1}$ (eq. \ref{luminsteady}). This is hardly consistent even with the smallest possible luminosity referred to in \ref{sec:0}, unless allowing for a large value of $(1-\alpha_{\infty}) \geq 0.15$. \\ From eq.
(\ref{luminosity}) a relatively high ratio between the actual asynchronism parameter and its steady-state value appears unavoidable: \begin{equation} \label{j19} |1-\alpha| \simeq 2.2 (1-\alpha_{\infty})^{1/2} \end{equation} \subsubsection{The case for $\alpha>1$} \label{thecase} The low rate of orbital shrinking measured for this source and its relatively high X-ray luminosity put interesting constraints on the primary spin. Indeed, a high value of $N_L$ is associated with $W\sim 10^{33}$ erg s$^{-1}$. \\ If $\alpha<1$, this torque adds to the GW torque: the resulting orbital evolution would thus be faster than if driven by GW alone. In fact, for $\alpha < 1$, the smallest possible value of $N_L$ is obtained for $\alpha = 0$, from which $N^{(\mbox{\tiny{min}})}_L = 9 \times 10^{34}$ erg. This implies an absolute minimum to the rate of orbital shrinking (eq. \ref{omegaevolve}), $3~N^{(\mbox{\tiny{min}})}_L / I_o$, so close to the measured one that implausibly small component masses would be required for $\dot{E}_{\mbox{\tiny{gr}}}$ to be negligible. We conclude that $\alpha <1$ is essentially ruled out in the UIM discussed here. \\ If $\alpha > 1$, the primary spin is faster than the orbital motion and the situation is different. Spin-orbit coupling has an opposite sign with respect to the GW torque. The small torque on the orbit implied by the measured $\dot{\omega}_o$ could result from two larger torques of opposite signs partially cancelling each other.\\ This point has been overlooked by \cite{Marsh and Nelemans (2005)}, who estimated the GW luminosity of the source from its measured timing parameters and, based on this estimate, claimed the failure of the UIM for RX\,J1914.4+2456\ .
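The minimum Lorentz torque quoted for the $\alpha<1$ case follows directly from property (iii), $W = N_L\,\omega_o(1-\alpha)$, evaluated at $\alpha = 0$ with the $1.76\times10^{-3}$ Hz orbital frequency given earlier; a one-line check:

```python
import math

W = 1.0e33                          # erg/s, luminosity adopted for RX J1914.4+2456
omega_o = 2.0 * math.pi * 1.76e-3   # rad/s, from the strongest power-spectrum peak
N_L_min = W / omega_o               # alpha = 0 in W = N_L * omega_o * (1 - alpha)
# -> about 9e34 erg, as quoted in the text
```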
In discussing this and other misinterpretations of the UIM in the literature, \cite{2006astro.ph..3795D} show that the argument by \cite{Marsh and Nelemans (2005)} actually leads to the same conclusion as ours: in the UIM framework, the orbital evolution of this source must be affected significantly by spin-orbit coupling, being slowed down by the transfer of angular momentum and energy from the primary spin to the orbit. The source GW luminosity must accordingly be larger than indicated by its timing parameters. \subsection{Constraining the asynchronous system} \label{lifetime} Given that the source is not compatible with steady-state, we constrain system parameters in order to match the measured values of $W, \omega_o$ and $\dot{\omega}_o$ and meet the requirement that the resulting state has a sufficiently long lifetime.\\ \begin{figure}[h] \centering \includegraphics[height=10.18cm,angle=-90]{solnew.ps} \caption{M$_2$ vs. M$_1$ plot based on measured timing properties of RX\,J1914.4+2456\ . The dot-dashed line corresponds to the minimum mass for a degenerate secondary not to fill its Roche-lobe. The dashed curve represents the locus expected if orbital decay were driven by GW alone, with no spin-orbit coupling. This curve is consistent with a detached system only for extremely low masses. The solid lines describe the loci expected if spin-orbit coupling is present (the secondary spin being always tidally locked) and gives a \textit{negative} contribution to $\dot{\omega}_o$. The four curves are obtained for $W = 10^{33}$ erg s$^{-1}$ and four different values of $\alpha = 1.025, 1.05, 1.075$, $1.1$, respectively, from top to bottom, as reported in the plot.} \label{fig5} \end{figure} Since system parameters cannot all be determined uniquely, we adopt the following scheme: given a value of $\alpha$, eq.
(\ref{extended}) allows one to determine, for each value of M$_1$, the corresponding value of M$_2$ (or $q$) that is compatible with the measured $W, \omega_o$ and $\dot{\omega}_o$. This yields the solid curves of Fig. \ref{fig5}.\\ As these curves show, the larger $\alpha$ is, the smaller the upward shift of the corresponding locus. This is not surprising, since these curves are obtained at fixed luminosity $W$ and $\dot{\omega}_o$. Recalling that $(1/\alpha)$ gives the efficiency of energy transfer in systems with $\alpha >1$ (cf. \ref{sec:2}), a higher $\alpha$ at a given luminosity implies that less energy is being transferred to the orbit. Accordingly, GWs need to be emitted at a smaller rate to match the measured $\dot{\omega}_o$.\\ The values of $\alpha$ in Fig. \ref{fig5} were chosen arbitrarily and are just illustrative: note that the resulting curves are similar to those obtained for RX\,J0806.3+1527\ . Given $\alpha$, one can also estimate $k$ from the definition of $W$ (eq. \ref{dissipation}). The information given by the curves of Fig. \ref{fig5} determines all quantities contained in $k$, apart from $\mu_1$. Therefore, proceeding as in the previous section, we can determine the value of $\mu_1$ along each of the four curves of Fig. \ref{fig5}. As in the case of RX\,J0806.3+1527\ , derived values are in the $\sim 10^{30}$ G cm$^3$ range. Plots and discussion of these results are reported by \cite{2006astro.ph..3795D}.\\ We finally note that the curves of Fig. \ref{fig5} define the value of $X$ for each (M$_1$,M$_2$), from which the system GW luminosity $\dot{E}_ {\mbox{\tiny{gr}}}$ can be calculated, along with its ratio to the spin-orbit coupling term. According to the above curves, the expected GW luminosity of this source is in the range $(4.6 \div 1.4) \times 10^{34}$ erg s$^{-1}$.
The corresponding ratios $\dot{E}_{\mbox{\tiny{gr}}}/ \dot{E}^{(orb)}_L$ are $1.15, 1.21, 1.29$ and $1.4$, respectively, for $\alpha =1.025, 1.05, 1.075$ and $1.1$.\\ Since the system cannot be in steady-state, the question of the duration of this transient phase naturally arises. \begin{figure}[h] \centering \includegraphics[height=10.18cm,angle=-90]{tauj19.ps} \caption{The evolution timescale $\tau_{\alpha}$ as a function of the primary mass for the same values of $\alpha$ used previously, reported on the curves. Given the luminosity $W\sim 10^{33}$ erg s$^{-1}$ and a value of $\alpha$, $\tau_{\alpha}$ is calculated as a function of M$_1$.} \label{fig8} \end{figure} The synchronization timescale $\tau_{\alpha} = \alpha / \dot{\alpha}$ can be estimated by combining eqs. (\ref{omega1}) and (\ref{omegaevolve}). With the measured values of $W$, $\omega_o$ and $\dot{\omega}_o$, $\tau_{\alpha}$ can be calculated as a function of $I_1$ and, thus, of M$_1$, given a particular value of $\alpha$. Fig. \ref{fig8} shows results obtained for the same four values of $\alpha$ assumed previously. The resulting timescales range from a few $\times 10^4$ yrs to a few $\times 10^5$ yrs, tens to hundreds of times longer than previously obtained and compatible with constraints derived from the expected population of such objects in the Galaxy. Reference \cite{2006astro.ph..3795D} discusses this point and its implications in more detail. \section{Conclusions} \label{conclusions} The observational properties of the two DDBs with the shortest orbital periods known to date have been discussed in relation to their physical nature. \\ The Unipolar Inductor Model and its coupling to GW emission have been introduced to explain a number of puzzling features that these two sources have in common and that are difficult to reconcile with most, if not all, models of mass transfer in such systems.\\ Emphasis was put on the relevant new physical features that characterize the model.
In particular, the role of spin-orbit coupling through the Lorentz torque and the role of GW emission in keeping the electric interaction active at all times have been thoroughly discussed, together with their implications. It has been shown that the model does work over arbitrarily long timescales.\\ Application of the model to both RX\,J0806.3+1527\ and RX\,J1914.4+2456\ accounts in a natural way for their main observational properties. Constraints on physical parameters are derived in order for the model to work, and can be verified by future observations.\\ It is concluded that the components in these two binaries may be much more similar than would appear from their timing properties and luminosities. The significant observational differences could essentially be due to the two systems being caught in different evolutionary stages. RX\,J1914.4+2456\ would be in a luminous, transient phase that precedes its settling into the dimmer steady-state, a regime already reached by the shorter-period RX\,J0806.3+1527\ . Although the more luminous phase is transient, its lifetime can be as long as $ \sim 10^5$ yrs, one or two orders of magnitude longer than previously estimated.\\ The GW luminosity of RX\,J1914.4+2456\ could be much larger than previously expected since its orbital evolution could be largely slowed down by an additional torque, apart from GW emission.\\ Finally, we stress that further developments and refinements of the model are required to address more specific observational issues and to assess the consequences that this new scenario might have on evolutionary scenarios and population synthesis models.
\section{Introduction and notation} Let us first make some remarks concerning notation. $\alpha ,$ $\beta ,\ldots $ will denote positive measures on the real line. We will assume that all of these measures have infinite supports. In order to be able to use probabilistic notation at times, we will assume that all considered measures are normalized. Integrals of an integrable function $f$ with respect to a measure, say $\alpha ,$ will be denoted by any of the following notations
\begin{equation*}
\int f(x)d\alpha (x),~~\int fd\alpha ,~~Ef,~~Ef(Z),~~E_{\alpha }f(Z),
\end{equation*}
depending on the context and the need to specify details. In the above formulae $Z$ denotes a random variable with distribution $\alpha .$ Probability theory assures that such a $Z$ always exists. Matrices and vectors (always columns) will generally be denoted by bold type letters. The most important vector and matrix are $\mathbf{X}_{n}\allowbreak =\allowbreak (1,x,\ldots ,x^{n})^{T}$ ($T$ denotes transposition) and
\begin{equation}
\mathbf{M}_{n}(\alpha )\allowbreak =\allowbreak \left[ m_{i+j}(\alpha )\right] _{i,j=0,\ldots ,n},  \label{matrix}
\end{equation}
where $m_{n}(\alpha )\allowbreak =\allowbreak \int x^{n}d\alpha (x).$ In other words $\mathbf{M}_{n}(\alpha )\allowbreak =\allowbreak E_{\alpha }\mathbf{X}_{n}\mathbf{X}_{n}^{T}.$ Matrices of this form, i.e. having the same elements on counter-diagonals, are called Hankel matrices.
\begin{definition}
We will say that the moment problem is determinate if the moment sequence $\left\{ m_{n}\left( \alpha \right) \right\} _{n\geq 0}$ is defined by only one measure $\alpha .$ Otherwise we say that the moment problem is indeterminate.
\end{definition}
\begin{remark}
There exist criteria allowing one to check if the moment problem is determinate or not. E.g. Carleman's criterion states that if $\sum_{n\geq 1}m_{2n}^{-1/(2n)}=\infty $ then the problem is determinate.
Or if $\int \exp \left( \left\vert x\right\vert \right) d\alpha \left( x\right) <\infty $ then the problem is determinate.
\end{remark}
In the sequel we will generally assume that our moment problem is determinate.
$\left( \mathbf{A}\right) _{j,k}$ will denote the $(j,k)$-th entry of the matrix $\mathbf{A}.$
The infinite support assumption assures that for every $n$ one can always find $n+1$ linearly independent vectors of the form $(1,x_{k},\ldots ,x_{k}^{n})^{T},$ where $x_{k}\in \limfunc{supp}\alpha ,$ $k\allowbreak =\allowbreak 1,\ldots ,n+1$. Besides, we know that if $\limfunc{supp}\alpha $ is infinite then the matrices $\mathbf{M}_{n}(\alpha )$ are non-singular for every $n.$ Let us remark immediately that the matrix $\mathbf{M}_{n}$ is the leading principal submatrix of the matrix $\mathbf{M}_{n+1}.$ Let us introduce the sequence
\begin{equation}
\Delta _{n}(\alpha )\allowbreak =\allowbreak \det \mathbf{M}_{n}(\alpha ),  \label{det}
\end{equation}
$n\geq 1$, of determinants of the matrices $\mathbf{M}_{n}(\alpha ).$ Let us also introduce vectors consisting of successive moments
\begin{equation*}
\mathbf{m}_{n}^{T}(\alpha )\allowbreak =\allowbreak (1,m_{1}(\alpha ),\ldots ,m_{n}(\alpha )).
\end{equation*}
The vector $\mathbf{m}_{n}(\alpha )$ is the first column of the matrix $\mathbf{M}_{n}(\alpha ).$
In order to avoid repetition of assumptions we will assume that the matrices $\mathbf{M}_{n}(\alpha )$ exist for all $n\geq 0.$ In other words we assume that all moments of the measure $\alpha $ exist.
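The notation above can be made concrete numerically. Below is a minimal sketch (our own illustration, assuming only \texttt{numpy}; the choice of the standard Gaussian as $\alpha $ and the helper names are ours) that builds the Hankel moment matrix $\mathbf{M}_{n}(\alpha )$ of (\ref{matrix}) and checks that the determinants $\Delta _{n}$ of (\ref{det}) are positive, as they must be for a measure with infinite support.

```python
import numpy as np

def hankel_moment_matrix(m, n):
    # Build M_n = [m_{i+j}]_{i,j=0..n} from a list of moments m_0, ..., m_{2n}.
    return np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])

def gaussian_moments(k_max):
    # Moments of the standard Gaussian: odd moments vanish, m_{2k} = (2k-1)!!.
    m = [1.0]
    for k in range(1, k_max + 1):
        m.append(0.0 if k % 2 else m[-2] * (k - 1))
    return m

m = gaussian_moments(8)          # m_0, ..., m_8
M4 = hankel_moment_matrix(m, 4)  # the 5x5 matrix M_4(alpha)

# Determinants Delta_n of the leading principal submatrices; all are
# positive because the support of the Gaussian is infinite.
deltas = [np.linalg.det(M4[: n + 1, : n + 1]) for n in range(5)]
```

For the Gaussian the values $\Delta _{n}$ agree with the classical evaluation $\Delta _{n}=\prod_{k=1}^{n}k!$ (e.g. $\Delta _{2}=2,$ $\Delta _{4}=288$), which gives a convenient sanity check on the construction.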
Obviously the $(0,0)$ entry of the matrix $\mathbf{M}_{n}$ is equal to $1.$
We know that, given a measure $\alpha $ such that all moments exist, one can define the set of polynomials $\left\{ p_{n}(x,\alpha )\right\} _{n\geq -1}$ with $p_{-1}(x,\alpha )\allowbreak =\allowbreak 0,$ $p_{0}(x,\alpha )\allowbreak =\allowbreak 1$ and such that $p_{n}$ is of degree $n$ and for $n+m\neq -2$:
\begin{equation*}
\int p_{n}\left( x,\alpha \right) p_{m}(x,\alpha )d\alpha (x)\allowbreak =\allowbreak \delta _{n,m},
\end{equation*}
where $\delta _{n,m}$ denotes Kronecker's delta. Moreover if we declare that all leading coefficients of the polynomials $p_{n}(x,\alpha )$ are positive then the coefficients $\pi _{n,i}(\alpha )$ of the expansion
\begin{equation}
p_{n}\left( x,\alpha \right) \allowbreak =\allowbreak \sum_{i=0}^{n}\pi _{n,i}\left( \alpha \right) x^{i},  \label{p_o}
\end{equation}
are defined uniquely by the measure $\alpha .$ According to our convention, later we will drop the dependence on $\alpha $ if the measure $\alpha $ is clearly specified. Let us define vectors $\mathbf{P}_{n}(x)\allowbreak =\allowbreak (p_{0}(x),\ldots ,p_{n}(x))^{T}$ and the lower triangular matrix $\mathbf{\Pi }_{n}$ with entries $\pi _{i,j}.$ Of course we set $\pi _{i,j}\allowbreak =\allowbreak 0$ for $j>i.$ We obviously have
\begin{equation}
\mathbf{P}_{n}(x)=\mathbf{\Pi }_{n}\mathbf{X}_{n}.  \label{_p}
\end{equation}
To continue the introduction of notation, let $\lambda _{n,i}(\alpha )$ denote the coefficients in the following expansions
\begin{equation}
x^{n}\allowbreak =\allowbreak \sum_{i=0}^{n}\lambda _{n,i}(\alpha )p_{i}\left( x,\alpha \right) .  \label{inv_p}
\end{equation}
Consequently let us introduce the lower triangular matrices $\mathbf{\Lambda }_{n}$ with entries $\lambda _{i,j}$ if $i\geq j$ and $0$ otherwise.
We obviously have
\begin{equation}
\mathbf{X}_{n}\allowbreak =\allowbreak \mathbf{\Lambda }_{n}\mathbf{P}_{n}(x),~~\mathbf{\Pi }_{n}\mathbf{\Lambda }_{n}\allowbreak =\allowbreak \mathbf{\Lambda }_{n}\mathbf{\Pi }_{n}=\mathbf{I}_{n},  \label{_x}
\end{equation}
where $\mathbf{I}_{n}$ denotes the $(n+1)\times (n+1)$ identity matrix.
Since the polynomials $\left\{ p_{n}\right\} $ are orthonormal, there exist two number sequences $\left\{ a_{n}\right\} ,$ $\left\{ b_{n}\right\} $ such that the polynomials $\left\{ p_{n}\right\} $ satisfy the following 3-term recurrence
\begin{equation}
xp_{n}(x)\allowbreak =\allowbreak a_{n+1}p_{n+1}(x)+b_{n}p_{n}(x)+a_{n}p_{n-1}(x),  \label{3tr}
\end{equation}
with $a_{0}\allowbreak =\allowbreak 0$ and $n\geq 0.$ We know also that
\begin{equation}
a_{n}\allowbreak =\allowbreak \frac{\pi _{n-1,n-1}}{\pi _{n,n}},~~b_{n}\allowbreak =\allowbreak \int xp_{n}^{2}\left( x\right) d\alpha \left( x\right) ,  \label{coeff}
\end{equation}
and consequently $b_{0}\allowbreak =\allowbreak m_{1}.$ For details see e.g. \cite{Akhizer65} or \cite{Nev86}. Combining (\ref{3tr}) and (\ref{p_o}) we get the following set of recursive equations to be satisfied by the coefficients $\pi _{n,j}$.
\begin{eqnarray}
a_{n+1}\pi _{n+1,0}+b_{n}\pi _{n,0}+a_{n}\pi _{n-1,0}\allowbreak &=&\allowbreak 0,  \label{_f} \\
a_{n+1}\pi _{n+1,j}+b_{n}\pi _{n,j}+a_{n}\pi _{n-1,j}\allowbreak &=&\allowbreak \pi _{n,j-1},  \label{_s}
\end{eqnarray}
for $n\geq 0,$ $j\allowbreak =\allowbreak 1,\ldots ,n$, remembering that $\pi _{n,j}\allowbreak =\allowbreak 0$ for $j>n.$ Further, combining (\ref{3tr}) and (\ref{inv_p}) we get the following set of equations to be satisfied by the coefficients $\lambda _{n,i}$:
\begin{eqnarray}
\lambda _{n+1,n+1}\allowbreak &=&\allowbreak \lambda _{n,n}a_{n+1},  \label{lam1} \\
\lambda _{n+1,0}\allowbreak &=&\allowbreak \lambda _{n,0}b_{0}+\lambda _{n,1}a_{1},  \label{lam2} \\
\lambda _{n+1,i} &=&\lambda _{n,i-1}a_{i}+\lambda _{n,i}b_{i}+\lambda _{n,i+1}a_{i+1},  \label{lam3}
\end{eqnarray}
with $\lambda _{0,0}\allowbreak =\allowbreak 1,$ so that $\lambda _{n,n}\allowbreak =\allowbreak \prod_{j=1}^{n}a_{j},$ $n\geq 1.$
\begin{remark}
As follows from formula (2.1.6) of \cite{IA}, the coefficients $\pi _{n,i}$ can be expressed as determinants of certain submatrices built of the moment matrix $\mathbf{M}_{n}.$ In particular, denoting by $D_{n}^{(i,j)}$ the determinant of the submatrix obtained by removing row number $i+1$ and column number $j+1$ of the matrix $\mathbf{M}_{n},$ we have $\pi _{n,i}\allowbreak =\allowbreak (-1)^{n-i}D_{n}^{(i,n)}/\sqrt{\Delta _{n}\Delta _{n-1}},$ the so called Heine representation of orthogonal polynomials (see formula (2.2.6) in \cite{IA}).
\end{remark}
Let us also consider the family of associated polynomials $\left\{ q_{n}(x)\right\} _{n\geq -1}.$ As follows from, say, \cite{Akhizer65} or \cite{Sim05}, they satisfy the same 3-term recurrence but with different initial values. Namely we assume that $q_{-1}(x)\allowbreak =\allowbreak -1$ and $q_{0}(x)\allowbreak =\allowbreak 0.$ Following (\ref{3tr}) we see that then $q_{1}(x)\allowbreak =\allowbreak 1/a_{1}.$ One knows also (see e.g.
\cite{Sim05}) that
\begin{equation*}
q_{n}(x)=\int \frac{p_{n}(x)-p_{n}(y)}{x-y}d\alpha (y).
\end{equation*}
Now since $(x^{k}-y^{k})/(x-y)\allowbreak =\allowbreak x^{k-1}\allowbreak +x^{k-2}y\allowbreak +\allowbreak \ldots \allowbreak +\allowbreak y^{k-1},$ $p_{n}(x)\allowbreak =\allowbreak \sum_{j=0}^{n}\pi _{n,j}x^{j}$ and $\int y^{k}d\alpha \left( y\right) \allowbreak =\allowbreak m_{k},$ we deduce that
\begin{equation}
q_{n}\left( x\right) =\sum_{j=1}^{n}\pi _{n,j}\sum_{k=0}^{j-1}m_{j-1-k}x^{k}\allowbreak =\allowbreak \sum_{k=0}^{n-1}x^{k}\sum_{j=k+1}^{n}\pi _{n,j}m_{j-1-k}.  \label{ass_pol}
\end{equation}
Let us also define the $(n+1)$-vector $\mathbf{Q}_{n}(x)\allowbreak =\allowbreak (0,q_{1}(x),\ldots ,q_{n}(x))^{T}.$
It should be stressed that the approach presented above, of treating the first $n+1$ elements of the sequence $\left\{ p_{j}\left( x\right) \right\} _{j\geq -1}$ as an $(n+1)$-vector resulting from a certain matrix multiplication, is very fruitful although not original. Traces of it appear in \cite{Bre80} or \cite{Chih79} and can be traced even earlier. Its relation to the Cholesky decomposition of the moment matrix and to the inverse of the moment matrix is also not original. However this view appears in the literature as 'yet another possibility' of looking at the main results. In this paper this approach is a basic tool for obtaining known results and new ones, mostly concerning connection and linearization formulae.
Most of our results concern the so called "truncated moment problem"; that is, we in fact assume that we know a finite number (say $2n+1$, including the moment of order $0$) of moments of some distribution. That is, in many cases the assumption that the matrix $\mathbf{M}_{n}$ exists for all $n$ will not be needed. Then we derive the $n+1$ polynomials $\{p_{i}\}_{i=0}^{n}$ that are mutually orthogonal, we find the coefficients of the expansion of $x^{i}$ in terms of these polynomials, and we derive all of the so called linearization coefficients, i.e.
the coefficients of the expansions of $p_{i}(x)p_{j}(x)$ in the polynomials $\{p_{i}\}_{i=0}^{n}$ for all $i+j\leq n.$ Given two distributions (say $\alpha $ and $\delta )$ and two respective moment sequences, we are able to derive all the so called "connection coefficients", i.e. the coefficients of the expansion of, say, $p_{j}\left( x,\delta \right) $ in $\{p_{i}(x,\alpha )\}_{i=0}^{j}$ and conversely. Due to very efficient numerical algorithms for Cholesky decomposition and for the inversion of lower triangular matrices, all these calculations can be done within seconds using today's computers.
Of course we also present results that require the existence of all moments. These are some limit properties of arithmetic averages of orthogonal polynomials and, more importantly, results concerning expansions of Radon--Nikodym derivatives of one distribution with respect to the other (see (\ref{rozkl})).
The paper is organized as follows. In the next Section \ref{chol} we present consequences of our approach and derive with its help mostly known results. We do this basically to illustrate the usefulness of our approach. At the end of this Section, in Subsection \ref{recurr}, we relate the coefficients of the power series expansion of the polynomials $\left\{ p_{n}\right\} $ to the coefficients of the 3-term recurrence satisfied by these polynomials. More precisely we partially solve the systems of equations (\ref{_f}), (\ref{_s}) and (\ref{lam1}), (\ref{lam2}), (\ref{lam3}). The solution is exact in the case of symmetric measures $\alpha .$ Particularly interesting and new, as we think, are the results concerning connection coefficients presented in Section \ref{con-coef}, containing not only a formula for the connection coefficients between two sets of orthogonal polynomials related to two measures but also an expansion of the Radon--Nikodym derivative of one measure with respect to the other in a Fourier series of orthogonal polynomials related to one of the measures.
Also interesting and new seems Section \ref{Line}, presenting a general formula for the linearization coefficients. Longer proofs are deferred to Section \ref{dow}.
\section{Cholesky decomposition and its consequences\label{chol}}
Our basic tool in what follows is the so called Cholesky decomposition of a symmetric, positive definite matrix. Below we collect some of the properties of the Cholesky decomposition in the following simple proposition.
\begin{proposition}
\label{Cholesky}Suppose a normalized positive measure $\alpha $ has support of infinite cardinality and $\int x^{2N}d\alpha <\infty $ for some $N\geq 0$. Then
i) there exists a unique real, non-singular lower triangular matrix $\mathbf{L}_{N}(\alpha )$ such that $\mathbf{M}_{N}(\alpha )\allowbreak =\allowbreak \mathbf{L}_{N}(\alpha )\mathbf{L}_{N}^{T}(\alpha ),$
ii) the entries of the matrix $\mathbf{L}_{N}$ can be calculated recursively:
\begin{equation}
l_{n,n}=\sqrt{m_{2n}-\sum_{j=0}^{n-1}l_{n,j}^{2}},~~l_{n+1,k}=(m_{n+k+1}-\sum_{j=0}^{k-1}l_{n+1,j}l_{k,j})/l_{k,k},  \label{gen}
\end{equation}
with $l_{0,0}\allowbreak =\allowbreak 1,$ for $n\allowbreak =\allowbreak 0,\ldots ,N.$ The entries $l_{n,n}$ also have the following interpretation:
\begin{equation}
l_{n,n}^{2}\allowbreak =\allowbreak \frac{\Delta _{n}}{\Delta _{n-1}},  \label{l2n}
\end{equation}
where the sequence $\left\{ \Delta _{n}\right\} $ is defined by (\ref{det}).
In particular we have:
\begin{eqnarray}
l_{1,1}\allowbreak &=&\allowbreak \sqrt{m_{2}-m_{1}^{2}},~~l_{2,2}=\sqrt{m_{4}\allowbreak -\allowbreak m_{2}^{2}-\frac{(m_{3}-m_{1}m_{2})^{2}}{m_{2}-m_{1}^{2}}},  \label{_odch} \\
l_{i,0}\allowbreak &=&\allowbreak m_{i},~~l_{i,1}\allowbreak =\allowbreak \frac{(m_{i+1}-m_{i}m_{1})}{l_{1,1}},  \label{mom} \\
l_{i,2}\allowbreak &=&\allowbreak \frac{1}{l_{2,2}}\left( m_{i+2}-m_{i}m_{2}\allowbreak -\allowbreak \frac{(m_{3}-m_{2}m_{1})(m_{i+1}-m_{i}m_{1})}{m_{2}-m_{1}^{2}}\right) ,  \label{li2}
\end{eqnarray}
$i\allowbreak =\allowbreak 1,\ldots ,N,$
iii) $\forall ~0\leq i,j\leq N$:
\begin{equation*}
m_{i+j}\allowbreak =\allowbreak \sum_{k=0}^{\min (i,j)}l_{i,k}l_{j,k}.
\end{equation*}
\end{proposition}
\begin{proof}
i) Follows from the existence and uniqueness of the Cholesky decomposition (see e.g. Theorem 8.2.1 of \cite{Serre}) and the fact that if the support of a positive measure is infinite then the matrix $\mathbf{M}_{N}$ exists, is symmetric and positive definite. Besides, by the Cauchy theorem on determinants of products we have $\Delta _{n}\allowbreak =\allowbreak \left( \det \mathbf{L}_{n}\right) ^{2}\allowbreak =\allowbreak \prod_{i=0}^{n}l_{i,i}^{2}.$ Since $\Delta _{n-1}\allowbreak =\allowbreak \prod_{i=0}^{n-1}l_{i,i}^{2}$ we get our assertion.
ii) Follows from one of the algorithms of obtaining the Cholesky decomposition (the so called Cholesky--Banachiewicz algorithm that can be found in e.g. \cite{Fox64}).
\end{proof}
We have obvious observations that we collect in the next proposition:
\begin{proposition}
\label{interpretacja}Let $\mathbf{M}_{n}$ and $\mathbf{M}_{n}^{-1},$ $n\geq 0,$ be respectively the sequence of moment matrices and the sequence of their inverses for some measure $\alpha .$ Let us denote $\mathbf{M}_{n}^{-1}\allowbreak =\allowbreak \lbrack \mu _{i,j}^{(n)}]_{0\leq i,j\leq n},$ i.e.
that $\mu _{i,j}^{(n)}$ is the $(i,j)$ entry of the matrix $\mathbf{M}_{n}^{-1}.$ Let $\mathbf{L}_{n}$ be defined by the sequence of lower triangular matrices forming the Cholesky decomposition of the matrices $\mathbf{M}_{n}$; then
i) $\forall n\geq 0,$ $\mathbf{\Pi }_{n}=\mathbf{L}_{n}^{-1},$ $\mathbf{\Lambda }_{n}=\mathbf{L}_{n}.$ That is $\mathbf{\Lambda }_{n}\mathbf{\Lambda }_{n}^{T}\allowbreak =\allowbreak \mathbf{M}_{n}$ and $\mathbf{\Pi }_{n}^{T}\mathbf{\Pi }_{n}\allowbreak =\allowbreak \mathbf{M}_{n}^{-1},$ in particular $\sum_{k=\max (i,j)}^{n}\pi _{k,i}\pi _{k,j}\allowbreak =\allowbreak \mu _{i,j}^{(n)}$ and $\sum_{k=0}^{\min (i,j)}\lambda _{i,k}\lambda _{j,k}\allowbreak =\allowbreak m_{i+j}.$
ii) $\mathbf{P}_{n}^{T}(x)\mathbf{P}_{n}(y)\allowbreak =\allowbreak \sum_{i=0}^{n}p_{i}(x)p_{i}(y)\allowbreak =\allowbreak \mathbf{X}_{n}^{T}\mathbf{M}_{n}^{-1}\mathbf{Y}_{n}$, thus $\mathbf{X}_{n}^{T}\mathbf{M}_{n}^{-1}\mathbf{Y}_{n}$ is the reproducing kernel and $1/\mathbf{X}_{n}^{T}\mathbf{M}_{n}^{-1}\mathbf{X}_{n}$ is the Christoffel function of the measure $\alpha .$ Consequently
\begin{eqnarray*}
\left\vert \mathbf{X}_{n}^{T}\mathbf{M}_{n}^{-1}\mathbf{Y}_{n}\right\vert &\leq &\frac{1}{\xi _{0,n}}\sqrt{(1+\ldots +x^{2n})(1+\ldots +y^{2n})}, \\
\frac{\xi _{0,n}}{1+\ldots +x^{2n}}\, &\leq &\frac{1}{\mathbf{X}_{n}^{T}\mathbf{M}_{n}^{-1}\mathbf{X}_{n}}\leq \frac{\xi _{n,n}}{1+\ldots +x^{2n}},
\end{eqnarray*}
where $\xi _{0,n}\leq \xi _{1,n}\leq \ldots \leq \xi _{j,n}\leq \ldots \leq \xi _{n,n}$ denote the eigenvalues of the matrix $\mathbf{M}_{n}$ in non-decreasing order.
iii) $\int \mathbf{P}_{n}^{T}(x)\mathbf{M}_{n}\mathbf{P}_{n}(x)\allowbreak d\alpha (x)=\allowbreak \sum_{i=0}^{n}m_{2i}\allowbreak =\allowbreak \xi _{0,n}+\ldots +\xi _{n,n}.$
iv) $\frac{1}{2\pi }\int_{0}^{2\pi }\mathbf{P}_{n}(e^{it})\mathbf{P}_{n}^{T}(e^{-it})dt\allowbreak =\allowbreak \mathbf{\Pi }_{n}\mathbf{\Pi }_{n}^{T}$, consequently $\frac{1}{2\pi }\sum_{j=0}^{n}\int_{0}^{2\pi }\left\vert p_{j}(e^{it})\right\vert
^{2}dt\allowbreak =\allowbreak \func{tr}(\mathbf{M}_{n}^{-1})\allowbreak =\allowbreak \sum_{j=0}^{n}1/\xi _{j,n},$
v) $\frac{1}{\xi _{n,n}}\leq \sum_{j=0}^{n}\left\vert p_{j}(0)\right\vert ^{2}\allowbreak =\allowbreak \mu _{0,0}^{(n)}\allowbreak \leq \allowbreak \frac{1}{\xi _{0,n}},$
vi) $(\sum_{j=1}^{n-1}m_{j}^{2})/\xi _{n,n}\allowbreak \leq \allowbreak \sum_{j=1}^{n}\left\vert q_{j}\left( 0\right) \right\vert ^{2}\allowbreak =\allowbreak \sum_{1\leq i,j\leq n}\mu _{i,j}^{(n)}m_{i-1}m_{j-1}\allowbreak \leq \allowbreak (\sum_{j=1}^{n-1}m_{j}^{2})/\xi _{0,n},$ and $\sum_{j=1}^{n}q_{j}(0)p_{j}(0)\allowbreak =\allowbreak \sum_{j=1}^{n}m_{j-1}\mu _{0,j}^{(n)}$,
vii) $\frac{1}{\sqrt{n+1}\log ^{2}(n+2)}\sum_{i=0}^{n}p_{i}(x,\alpha )\longrightarrow 0,$ $\alpha -$a.s. as $n\longrightarrow \infty .$
\end{proposition}
\begin{proof}
Is shifted to Section \ref{dow}.
\end{proof}
\begin{remark}
Part of assertion i), namely the statement $\mathbf{\Pi }_{n}^{T}\mathbf{\Pi }_{n}\allowbreak =\allowbreak \mathbf{M}_{n}^{-1},$ and assertion iv) were shown in \cite{Berg11}. We present these statements for the completeness of the paper.
\end{remark}
\begin{remark}
Notice that from vi) it follows that if the moment problem is indeterminate, i.e. when $\sum_{j=1}^{\infty }\left\vert q_{j}\left( 0\right) \right\vert ^{2}<\infty $ (see e.g. Theorem 2.17 of \cite{Sim98}), we get an estimate of the speed of divergence of $\xi _{n,n}$ to infinity.
\end{remark}
\begin{remark}
Assertion vii) of Proposition \ref{interpretacja} gives in fact an estimate of the speed of convergence in the Law of Large Numbers that the sequence of orthogonal polynomials satisfies. Namely this assertion can be written in the form $\frac{\sqrt{n+1}}{\log ^{2}(n+2)}\frac{1}{n+1}\sum_{i=0}^{n}p_{i}(x,\alpha )\longrightarrow 0,$ $\alpha -$a.s. as $n\longrightarrow \infty .$ This result is in the spirit of \cite{Morg55} and his followers.
\end{remark}
As a corollary we have the following observations:
\begin{corollary}
The coefficients $a_{n}$ and $b_{n},$ $n\geq 0,$ defining the 3-term recurrence are related to the moment matrix by the formulae
\begin{equation}
a_{n}^{2}\allowbreak =\allowbreak \frac{\Delta _{n}\Delta _{n-2}}{\Delta _{n-1}^{2}},~~b_{n}\allowbreak =\allowbreak \frac{\Delta _{n-1}}{\Delta _{n}}l_{n+1,n}l_{n,n}-\frac{\Delta _{n-2}}{\Delta _{n-1}}l_{n,n-1}l_{n-1,n-1},  \label{wsp}
\end{equation}
for $n\geq 2,$ with $a_{0}=0,$ $a_{1}^{2}\allowbreak =\allowbreak \Delta _{1}\allowbreak =\allowbreak m_{2}-m_{1}^{2}.$
\end{corollary}
\begin{proof}
Following (\ref{coeff}) and Proposition \ref{interpretacja} i) we deduce $a_{n}^{2}\allowbreak =\allowbreak \pi _{n-1,n-1}^{2}/\pi _{n,n}^{2}.$ Since $\pi _{n,n}\allowbreak =\allowbreak l_{n,n}^{-1}$ we apply (\ref{l2n}). To get the formula for $b_{n}$ we first observe that the $(i,i-1)$ entry of the inverse of the lower triangular matrix $\mathbf{L}_{n}\allowbreak =\allowbreak \lbrack l_{i,j}]_{i=0,\ldots ,n,j=0,\ldots ,i}$ is equal to $-\frac{l_{i,i-1}}{l_{i,i}l_{i-1,i-1}}.$ Besides, dividing both sides of (\ref{_s}) with $j\allowbreak =\allowbreak n$ by $\pi _{n,n}$ we get
\begin{equation*}
\frac{\pi _{n+1,n}}{\pi _{n+1,n+1}}+b_{n}=\frac{\pi _{n,n-1}}{\pi _{n,n}}.
\end{equation*}
Now we have $\frac{\pi _{n+1,n}}{\pi _{n+1,n+1}}\allowbreak =\allowbreak -\frac{l_{n+1,n}l_{n+1,n+1}}{l_{n+1,n+1}l_{n,n}}\allowbreak =\allowbreak -\frac{l_{n+1,n}}{l_{n,n}}$ and similarly $\frac{\pi _{n,n-1}}{\pi _{n,n}}\allowbreak =\allowbreak -\frac{l_{n,n-1}}{l_{n-1,n-1}}.$ Finally we use (\ref{l2n}).
\end{proof}
\begin{remark}
Notice that if $\alpha $ is a symmetric measure, then its moments of odd order are equal to zero; consequently, following (\ref{gen}) and (\ref{wsp}), we deduce that the coefficients $b_{n}$ are all equal to zero. On the other hand the coefficients $a_{n}$ can be expressed as functions of determinants of the moment matrix, as follows from (\ref{wsp}).
In this case the coefficients $a_{n}$ completely determine the orthogonal polynomials $p_{i}(x),$ $i\allowbreak =\allowbreak 1,\ldots ,n,$ by (\ref{_f}), (\ref{_s}) and (\ref{_p}), and consequently the matrices $\mathbf{L}_{i}(\alpha ),$ $i\allowbreak =\allowbreak 1,\ldots ,n,$ which determine the moment matrix $\mathbf{M}_{n}(\alpha )$ as shown by Proposition \ref{interpretacja} i). In other words we have an algorithm for regaining the moments from the sequence of determinants of the leading principal submatrices of the moment matrix. This is a particular property of special Hankel matrices with zero as $(i,j)$-entries for $i+j$ odd.
\end{remark}
\subsection{Coefficients of the 3-term recurrence\label{recurr}}
Below we will formulate a sequence of observations concerning the two systems of equations (\ref{_f})-(\ref{lam3}) which relate the coefficients of the 3-term recurrence to the coefficients $\pi _{n,i}$ and $\lambda _{n,j}.$
\begin{proposition}
\label{3t-r}i) $\forall n\geq 1:a_{n}>0.$
ii) Let us denote $\eta _{n,i}\allowbreak =\allowbreak \pi _{n,i}\prod_{j=1}^{n}a_{j}$, $\tau _{n,i}\allowbreak =\allowbreak \lambda _{n,i}/\prod_{j=1}^{i}a_{j}\allowbreak =\allowbreak \lambda _{n,i}/\lambda _{i,i},$ and by $\tilde{p}_{n}(x)$ denote the monic version of the polynomial $p_{n}(x)$; then
\begin{eqnarray*}
\tilde{p}_{n}(x) &=&\sum_{k=0}^{n}\eta _{n,k}x^{k}, \\
x^{n}\allowbreak &=&\allowbreak \sum_{k=0}^{n}\tau _{n,k}\tilde{p}_{k}(x).
\end{eqnarray*}
The coefficients $\left\{ \eta _{n,j},\tau _{n,j}\right\} _{n\geq 0,0\leq j\leq n}$ satisfy the following system of equations
\begin{eqnarray}
\eta _{n+1,0} &=&-b_{n}\eta _{n,0}\allowbreak -a_{n}^{2}\eta _{n-1,0},  \label{_1} \\
\eta _{n+1,j} &=&\eta _{n,j-1}-b_{n}\eta _{n,j}-a_{n}^{2}\eta _{n-1,j},  \label{_2} \\
\tau _{n+1,0}\allowbreak &=&\allowbreak b_{0}\tau _{n,0}+\tau _{n,1}a_{1}^{2},  \label{_3} \\
\tau _{n+1,j} &=&\tau _{n,j-1}+b_{j}\tau _{n,j}+a_{j+1}^{2}\tau _{n,j+1},  \label{_4}
\end{eqnarray}
$n\geq 0,$ $j\leq n,$ with $\eta _{0,0}\allowbreak =\allowbreak 1,$ $\eta _{n,n}\allowbreak =\allowbreak 1,$ and $\tau _{0,0}\allowbreak =\allowbreak \tau _{n,n}\allowbreak =\allowbreak 1$ for $n>0.$
iii) $\forall i<j:\sum_{k=i}^{j}\eta _{j,k}\tau _{k,i}\allowbreak =\allowbreak 0\allowbreak =\allowbreak \sum_{k=i}^{j}\tau _{j,k}\eta _{k,i}.$
\end{proposition}
\begin{proof}
Is shifted to Section \ref{dow}.
\end{proof}
\begin{proposition}
\label{aux}Let us consider $4$ auxiliary number sequences $\left\{ \xi _{n,j}^{\left( i\right) }\right\} _{n,j\geq 0},$ $\left\{ \zeta _{n,j}^{\left( i\right) }\right\} _{n,j\geq 0}$, $i\allowbreak =\allowbreak 1,2,$ satisfying the following systems of recurrences for $n\geq 0$:
\begin{eqnarray}
\xi _{n+1,0}^{(1)}\allowbreak &=&\allowbreak -a_{n}^{2}\xi _{n-1,0}^{(1)},~\xi _{n+1,0}^{(2)}=-b_{n}\xi _{n,0}^{(2)},  \label{_11} \\
\xi _{n+1,j}^{(1)}\allowbreak &=&\allowbreak \xi _{n,j-1}^{(1)}-a_{n}^{2}\xi _{n-1,j}^{(1)},~\xi _{n+1,j}^{(2)}=\xi _{n,j-1}^{(2)}-b_{n}\xi _{n,j}^{(2)},  \label{_22} \\
\zeta _{n+1,0}^{(1)}\allowbreak &=&\allowbreak a_{1}^{2}\zeta _{n,1}^{(1)},~\zeta _{n+1,0}^{(2)}=b_{0}\zeta _{n,0}^{(2)},  \label{_33} \\
\zeta _{n+1,j}^{(1)}\allowbreak &=&\allowbreak \zeta _{n,j-1}^{(1)}+a_{j+1}^{2}\zeta _{n,j+1}^{(1)},~\zeta _{n+1,j}^{(2)}=\zeta _{n,j-1}^{(2)}+b_{j}\zeta _{n,j}^{(2)}.
\label{_44}
\end{eqnarray}
with $\xi _{n,j}^{(i)}\allowbreak =0$ when $j>n,$ for $i=1,2,$ and $\xi _{0,0}^{(1)}\allowbreak =\allowbreak \zeta _{0,0}^{(2)}\allowbreak =\allowbreak 1$. Then
\begin{gather}
\xi _{n+j,n}^{(1)}=\left\{
\begin{array}{ccc}
0 & \text{if} & j=2k+1 \\
(-1)^{k}\sum_{\substack{ 1\leq j_{1}<\ldots <j_{k}\leq n+j-1 \\ j_{m+1}-j_{m}\geq 2,m=1,\ldots ,k-1}}\prod_{m=1}^{k}a_{j_{m}}^{2} & \text{if} & j=2k
\end{array}
\right. ,  \label{sol11} \\
\xi _{n+j,n}^{(2)}=(-1)^{j}\sum_{0\leq k_{1}<\ldots <k_{j}\leq n+j-1}\prod_{m=1}^{j}b_{k_{m}},  \label{sol12} \\
\zeta _{n+l,n}^{(1)}\allowbreak =\allowbreak \left\{
\begin{array}{ccc}
0 & \text{if} & l=2k+1 \\
\sum_{j_{1}=1}^{n+1}a_{j_{1}}^{2}\sum_{j_{2}=1}^{j_{1}+1}a_{j_{2}}^{2}\ldots \sum_{j_{k}=1}^{j_{k-1}+1}a_{j_{k}}^{2} & \text{if} & l=2k
\end{array}
\right. ,  \label{sol21} \\
\zeta _{n+j,n}^{(2)}\allowbreak =\allowbreak \sum_{k_{1}=0}^{n}b_{k_{1}}\sum_{k_{2}=k_{1}}^{n}b_{k_{2}}\ldots \sum_{k_{j}=k_{j-1}}^{n}b_{k_{j}}.  \label{sol22}
\end{gather}
\end{proposition}
\begin{proof}
Is shifted to Section \ref{dow}.
\end{proof}
\begin{proposition}
\label{part} i) Let us denote $\hat{\eta}_{n,k}\allowbreak =\allowbreak \eta _{n,k}-\xi _{n,k}^{(1)}-\xi _{n,k}^{(2)}$ and $\hat{\tau}_{n,k}\allowbreak =\allowbreak \tau _{n,k}-\zeta _{n,k}^{(1)}-\zeta _{n,k}^{(2)},$ for $n\geq k\geq 0.$ We have
\begin{eqnarray}
\hat{\eta}_{n+1,k} &=&\hat{\eta}_{n,k-1}-b_{n}\hat{\eta}_{n,k}-a_{n}^{2}\hat{\eta}_{n-1,k}-b_{n}\xi _{n,k}^{(1)}-a_{n}^{2}\xi _{n-1,k}^{(2)},  \label{aux1} \\
\hat{\tau}_{n+1,k} &=&\hat{\tau}_{n,k-1}+b_{k}\hat{\tau}_{n,k}+a_{k+1}^{2}\hat{\tau}_{n,k+1}+a_{k+1}^{2}\zeta _{n,k+1}^{(2)}+b_{k}\zeta _{n,k}^{(1)},  \label{aux2}
\end{eqnarray}
with $\hat{\eta}_{0,0}\allowbreak =\allowbreak \hat{\tau}_{0,0}\allowbreak =\allowbreak -1$.
In particular we have:
ii) $\eta _{n+1,n}\allowbreak =\allowbreak \xi _{n+1,n}^{(2)}\allowbreak =\allowbreak -\tau _{n+1,n}$ for $n\geq 0.$
iii) $\eta _{n+2,n}\allowbreak =\allowbreak \xi _{n+2,n}^{(2)}\allowbreak +\xi _{n+2,n}^{(1)},$ $\tau _{n+2,n}\allowbreak =\allowbreak \zeta _{n+2,n}^{(1)}+\zeta _{n+2,n}^{(2)}$ for $n\geq 0.$
iv) $\tau _{n+3,n}\allowbreak =\allowbreak \zeta _{n+3,n}^{(2)}+\zeta _{n+2,n}^{(1)}\zeta _{n+1,n}^{(2)}+\sum_{j=1}^{n+1}a_{j}^{2}(b_{j-1}+b_{j});$ $\eta _{n+3,n}\allowbreak =\allowbreak \xi _{n+3,n}^{(2)}+\sum_{j=1}^{n+2}a_{j}^{2}\sum_{k=0,k\neq j,j-1}^{n+2}b_{k},$ \newline
$\eta _{n+4,n}\allowbreak =\allowbreak \xi _{n+4,n}^{(1)}+\xi _{n+4,n}^{(2)}+\sum_{k=1}^{n+3}a_{k}^{2}\sum_{0\leq i<j\leq n+3,~i,j\neq k,k-1}b_{i}b_{j},$ \newline
$\tau _{n+4,n}\allowbreak =\allowbreak -\eta _{n+4,n}-\eta _{n+4,n+1}\tau _{n+1,n}-\eta _{n+4,n+2}\tau _{n+2,n}-\eta _{n+4,n+3}\tau _{n+3,n}.$
Assume that $b_{n}\allowbreak =\allowbreak 0$ for all $n\geq 0;$ then:
v) $\eta _{n,0}\allowbreak =\allowbreak \left\{
\begin{array}{ccc}
0 & \text{if} & n=2k-1 \\
(-1)^{k}\prod_{j=1}^{k}a_{2j-1}^{2} & \text{if} & n=2k
\end{array}
\right. ,$ $k\allowbreak =\allowbreak 1,2,\ldots ,$ and
\begin{equation}
\eta _{n+l,n}\allowbreak =\xi _{n+l,n}^{(1)},~~\tau _{n+l,n}\allowbreak =\allowbreak \zeta _{n+l,n}^{(1)},  \label{sol2}
\end{equation}
for $l,n\geq 0.$
\end{proposition}
\begin{proof}
Is shifted to Section \ref{dow}.
\end{proof}
\begin{remark}
As pointed out in Proposition \ref{3t-r} ii), the coefficients $\eta _{i,j}$ are the power coefficients of the monic orthogonal polynomials, i.e. the orthogonal polynomials with leading coefficient equal to $1.$ A formula similar to (\ref{sol2}) for orthogonal polynomials on the unit circle was proved in \cite{Gol07}. The formulae given in assertions ii) and iii) were given in \cite{Chih79} (Thm. 4.2 (d) and ibidem Exercise 4.1, p. 24). We present them for the completeness of the paper.
\end{remark}
\begin{remark}
Notice that $(-1)^{k}\sum_{\substack{ 1\leq j_{1}<\ldots <j_{k}\leq n-1 \\ j_{m+1}-j_{m}\geq 2,m=1,\ldots ,k-1}}\prod_{m=1}^{k}a_{j_{m}}^{2}$ can also be written as
\begin{equation*}
(-1)^{k}\sum_{j_{1}=1}^{n-2k+1}a_{j_{1}}^{2}\sum_{j_{2}=j_{1}+2}^{n-2k+3}a_{j_{2}}^{2}\ldots \sum_{j_{k}=j_{k-1}+2}^{n-1}a_{j_{k}}^{2}.
\end{equation*}
\end{remark}
As a corollary we also get the following recursive formula expressing the moments in terms of the coefficients $a_{n}$ and $b_{n}$ of the 3-term recurrence.
\begin{proposition}
\label{moments}i) $m_{j}\allowbreak =\allowbreak -\sum_{k=1}^{j-1}\eta _{j-1,k-1}m_{k}.$
If we assume that all coefficients $b_{n}\allowbreak =\allowbreak 0,$ $n\geq 0,$ then we have a simplified version of the previous statement:
ii) $m_{2k-1}\allowbreak =\allowbreak 0,$ $k\allowbreak =\allowbreak 1,2,\ldots ,$ $m_{4}\allowbreak =\allowbreak a_{1}^{2}(a_{1}^{2}+a_{2}^{2}),$ $m_{2k}\allowbreak =\allowbreak (\sum_{j=1}^{2k-2}a_{j}^{2})m_{2k-2}-\sum_{j=2}^{k-1}\eta _{2k-1,2k-1-2j}m_{2k-2j},$ $k\geq 3.$
\end{proposition}
\begin{proof}
i) We use (\ref{_x}) and (\ref{mom}), which leads to the identity $\forall j\geq 1$:
\begin{equation*}
\sum_{k=0}^{j}\eta _{j,k}m_{k}\allowbreak =\allowbreak 0.
\end{equation*}
Consequently $m_{j}\allowbreak =\allowbreak -\sum_{k=0}^{j-1}\eta _{j,k}m_{k}.$ Now we utilize (\ref{_2}) and get:
\begin{eqnarray*}
m_{j}\allowbreak &=&\allowbreak -\sum_{k=0}^{j-1}(-b_{j-1}\eta _{j-1,k}-a_{j-1}^{2}\eta _{j-2,k}+\eta _{j-1,k-1})m_{k}\allowbreak \\
&=&b_{j-1}\sum_{k=0}^{j-1}\eta _{j-1,k}m_{k}+a_{j-1}^{2}\sum_{k=0}^{j-2}\eta _{j-2,k}m_{k}-\sum_{k=0}^{j-1}\eta _{j-1,k-1}m_{k} \\
&=&-\sum_{k=1}^{j-1}\eta _{j-1,k-1}m_{k}.
\end{eqnarray*}
ii) By i) we have $m_{2j}\allowbreak =\allowbreak -\sum_{k=1}^{2j-1}\eta _{2j-1,k-1}m_{k}\allowbreak =\allowbreak -\sum_{n=1}^{j-1}\eta _{2j-1,2n-1}m_{2n},$ since $m_{j}$ with odd $j$ are equal to zero.
Now we recall that $\eta _{2j-1,2j-3}\allowbreak =\allowbreak -\sum_{i=1}^{2j-2}a_{i}^{2}$ by Proposition \ref{3t-r}, iv).
\end{proof}
\section{Connection coefficients and Radon--Nikodym derivatives\label{con-coef}}
In this section we will express the so called connection coefficients between two sets of $N$-orthogonal polynomials. So let us assume that we have two moment matrices $\mathbf{M}_{N}\left( \alpha \right) $ and $\mathbf{M}_{N}\left( \delta \right) .$ Let $\mathbf{L}_{N}\left( \alpha \right) $ and $\mathbf{L}_{N}(\delta )$ be their Cholesky decomposition matrices and $\left\{ \mathbf{P}_{N}\left( x,\alpha \right) \right\} $ and $\left\{ \mathbf{P}_{N}(x,\delta )\right\} $ the respective sets of $N$-orthogonal polynomials. Then we have
\begin{lemma}
\label{con-c}We have
\begin{equation*}
\mathbf{P}_{N}(x,\delta )\allowbreak =\allowbreak \mathbf{L}_{N}^{-1}(\delta )\mathbf{L}_{N}(\alpha )\mathbf{P}_{N}\left( x,\alpha \right) ,
\end{equation*}
or more precisely, for all $n\allowbreak =\allowbreak 1,\ldots ,N$:
\begin{equation*}
p_{n}(x,\delta )\allowbreak =\allowbreak \sum_{k=0}^{n}\gamma _{n,k}(\delta ,\alpha )p_{k}\left( x,\alpha \right) ,
\end{equation*}
where
\begin{equation}
\gamma _{n,k}(\delta ,\alpha )\allowbreak =\allowbreak \sum_{j=k}^{n}\pi _{n,j}\left( \delta \right) \lambda _{j,k}(\alpha ).  \label{conn}
\end{equation}
Moreover, if the polynomials $\left\{ \tilde{p}_{n}(x,\delta ),\tilde{p}_{n}(x,\alpha )\right\} _{n=0}^{N}$ are assumed to be monic, then we have the same formula with the coefficients $\pi $ replaced by $\eta $ and $\lambda $ by $\tau ,$ both defined in Proposition \ref{3t-r}.
\end{lemma}
\begin{proof}
This formula follows from the simple observation that
\begin{equation*}
\mathbf{X}_{n}=\mathbf{L}_{n}(\alpha )\mathbf{P}_{n}(x,\alpha ).
\end{equation*}
Then we apply Proposition \ref{interpretacja} i).
The fact that the same formula is satisfied by the $\eta $'s and $\tau $'s instead of the $\pi $'s and $\lambda $'s follows from the identity $\mathbf{X}_{n}=\mathbf{\tilde{L}}_{n}(\alpha )\mathbf{\tilde{P}}_{n}(x,\alpha ),$ where $\mathbf{\tilde{P}}_{n}(x,\alpha )$ denotes the vector $(1,\tilde{p}_{1}(\alpha ),\ldots ,\tilde{p}_{n}(\alpha ))^{T},$ while $\mathbf{\tilde{L}}_{n}(\alpha )$ denotes the lower triangular matrix with $(i,j)$ entry equal to $\tau _{i,j}(\alpha ).$
\end{proof}

\begin{corollary}
\label{f_few}Let $\left\{ b_{n}(\iota ),a_{n+1}(\iota )\right\} _{n\geq 0},$ $\iota =\delta ,\alpha ,$ denote the coefficients of the 3-term recurrences of the polynomials $\left\{ \tilde{p}_{n}(x,\delta )\right\} _{n\geq -1}$ and $\left\{ \tilde{p}_{n}(x,\alpha )\right\} _{n\geq -1},$ respectively. Then

i)
\begin{equation*}
\gamma _{n,n-1}(\delta ,\alpha )=\sum_{k=0}^{n-1}(b_{k}(\alpha )-b_{k}(\delta )),
\end{equation*}

ii)
\begin{gather*}
\gamma _{n,n-2}(\delta ,\alpha )=\sum_{k=1}^{n-1}(a_{k}^{2}(\alpha )-a_{k}^{2}(\delta ))+\frac{1}{2}\Big(\sum_{j=0}^{n-2}(b_{j}(\alpha )-b_{j}(\delta ))\Big)^{2} \\
+\frac{1}{2}\sum_{j=0}^{n-2}(b_{j}^{2}(\alpha )-b_{j}^{2}(\delta ))-b_{n-1}(\delta )\sum_{j=0}^{n-2}(b_{j}(\alpha )-b_{j}(\delta )).
\end{gather*}
\end{corollary}

\begin{proof}
i) We use (\ref{conn}) with $\pi $ replaced by $\eta $ and $\lambda $ replaced by $\tau $ and get $\gamma _{n,n-1}(\delta ,\alpha )=\eta _{n,n-1}(\delta )\tau _{n,n}(\alpha )+\eta _{n,n}(\delta )\tau _{n,n-1}(\alpha )=\sum_{k=0}^{n-1}(b_{k}(\alpha )-b_{k}(\delta ))$ by Proposition \ref{part}, ii).

ii) $\gamma _{n,n-2}(\delta ,\alpha )=\eta _{n,n-2}(\delta )\tau _{n-2,n-2}(\alpha )+\eta _{n,n-1}(\delta )\tau _{n-1,n-2}(\alpha )+\eta _{n,n}(\delta )\tau _{n,n-2}(\alpha )=\eta _{n,n-2}(\delta )+\tau _{n,n-2}(\alpha )+\eta _{n,n-1}(\delta )\tau _{n-1,n-2}(\alpha ).$ Now we apply Proposition \ref{part}, iii) and do some algebra.
\end{proof}

\begin{corollary}
\label{con_symm}Let us assume that both distributions $\alpha $ and $\delta $ are symmetric and that the coefficients of the 3-term recurrences satisfied by the monic polynomials orthogonal with respect to $\alpha $ and $\delta $ are respectively $\left\{ a_{n}(\alpha )\right\} _{n\geq 0}$ and $\left\{ a_{n}(\delta )\right\} _{n\geq 0}.$ Then
\begin{equation*}
\tilde{p}_{n}(x,\delta )=\sum_{k=0}^{\left\lfloor n/2\right\rfloor }\gamma _{n,n-2k}(\delta ,\alpha )\tilde{p}_{n-2k}(x,\alpha ),
\end{equation*}
where
\begin{gather*}
\gamma _{n,n-2k}(\delta ,\alpha )= \\
\sum_{m=0}^{k}(-1)^{m}\sum_{\substack{ 1\leq j_{1}<j_{2}<\ldots <j_{m}\leq n-1, \\ j_{i+1}-j_{i}\geq 2, \\ i=1,\ldots ,m-1}}a_{j_{1}}^{2}(\alpha )\ldots a_{j_{m}}^{2}(\alpha )\sum_{i_{1}=1}^{k-m}a_{i_{1}}^{2}(\delta )\ldots \sum_{i_{k-m}=1}^{i_{k-m-1}+1}a_{i_{k-m}}^{2}(\delta ).
\end{gather*}
In particular
\begin{eqnarray*}
\gamma _{n,n}(\delta ,\alpha ) &=&1, \\
\gamma _{n,0}(\delta ,\alpha ) &=&\left\{
\begin{array}{ccc}
0 & if & n\text{ is odd,} \\
\chi _{k} & if & n=2k,
\end{array}
\right. \\
\gamma _{n,n-2}(\delta ,\alpha ) &=&\sum_{k=1}^{n-1}(a_{k}^{2}(\alpha )-a_{k}^{2}(\delta )),
\end{eqnarray*}
where
\begin{gather*}
\chi _{k}= \\
\sum_{m=0}^{k}(-1)^{m}\sum_{\substack{ 1\leq j_{1}<j_{2}<\ldots <j_{m}\leq n-1, \\ j_{i+1}-j_{i}\geq 2, \\ i=1,\ldots ,m-1}}a_{j_{1}}^{2}(\alpha )\ldots a_{j_{m}}^{2}(\alpha )\sum_{i_{1}=1}^{k-m}a_{i_{1}}^{2}(\delta )\ldots \sum_{i_{k-m}=1}^{i_{k-m-1}+1}a_{i_{k-m}}^{2}(\delta ).
\end{gather*}
\end{corollary}

\begin{remark}
It turns out that pairs of systems of orthogonal polynomials with the property that the connection coefficients between them are nonnegative are important. Based on Corollaries \ref{f_few} and \ref{con_symm}, we see that a necessary condition for the coefficients $\gamma _{n,j}(\delta ,\alpha )$ to be nonnegative is that $\forall n\geq 0:\sum_{j=0}^{n}b_{j}(\alpha )\geq \sum_{j=0}^{n}b_{j}(\delta ).$ If the measures that orthogonalize those systems of polynomials are such that $\forall n\geq 0:b_{n}(\delta )=b_{n}(\alpha ),$ then a necessary condition for the coefficients $\gamma _{n,j}(\delta ,\alpha )$ to be nonnegative is $\forall n\geq 0:\sum_{j=1}^{n}a_{j}^{2}(\alpha )\geq \sum_{j=1}^{n}a_{j}^{2}(\delta ).$ (Both conditions are read off the formulae for $\gamma _{n,n-1}$ and $\gamma _{n,n-2}$ above.) A discussion of why the nonnegativity of connection coefficients is important, and what the consequences of this fact are, is given in \cite{Szwarc92}.
\end{remark}

Following a slight modification (the ratio of densities is replaced by the Radon--Nikodym derivative of the respective measures) of Proposition 1 iii) of \cite{Szab10}, we deduce the following general statement:

\begin{corollary}
If $\frac{d\alpha }{d\delta }(x)=1/Q_{r}(x),$ where $Q_{r}$ is a polynomial of order $r$ (positive on $\limfunc{supp}\delta ),$ then for $N\geq r+1$ the symmetric matrix
\begin{equation*}
\mathbf{L}_{N}^{-1}(\alpha )\mathbf{M}_{N}(\delta )\left( \mathbf{L}_{N}^{-1}(\alpha )\right) ^{T}
\end{equation*}
is an `$r$-ribbon' matrix, i.e. its $(i,j)$ entries such that $\left\vert i-j\right\vert >r$ are zero.
\end{corollary}

\begin{proof}
By the above-mentioned proposition we deduce that the lower triangular matrix $\mathbf{L}_{N}^{-1}(\alpha )\mathbf{L}_{N}(\delta )$ is an `$r$-ribbon' matrix.
Then we have $\mathbf{L}_{N}^{-1}(\alpha )\mathbf{L}_{N}(\delta )(\mathbf{L}_{N}^{-1}(\alpha )\mathbf{L}_{N}(\delta ))^{T}=\mathbf{L}_{N}^{-1}(\alpha )\mathbf{M}_{N}(\delta )\left( \mathbf{L}_{N}^{-1}(\alpha )\right) ^{T}.$ Then we use the fact that if $\mathbf{A}$ is a lower triangular `$r$-ribbon' matrix, then $\mathbf{AA}^{T}$ is also an `$r$-ribbon' matrix.
\end{proof}

As a more interesting consequence of Lemma \ref{con-c} we have an important expansion of the Radon--Nikodym derivative of two measures $\alpha \ll \delta .$

\begin{theorem}
\label{expansion}Let the two measures $\alpha $ and $\delta ,$ both having all moments, be such that $\alpha \ll \delta $ and $\int (\frac{d\alpha }{d\delta }(x))^{2}d\delta (x)<\infty ,$ where $\frac{d\alpha }{d\delta }(x)$ denotes their Radon--Nikodym derivative. Then
\begin{equation}
\frac{d\alpha }{d\delta }(x)=\sum_{j=0}^{\infty }E_{\alpha }p_{j}(Z,\delta )p_{j}(x,\delta ),  \label{rozkl}
\end{equation}
in $L_{2}(\limfunc{supp}\delta ,\mathcal{F},d\delta ),$ where $\mathcal{F}$ denotes the Borel sigma field of $\limfunc{supp}\delta .$ In particular we have (Parseval's formula)
\begin{equation}
\int (\frac{d\alpha }{d\delta }(x))^{2}d\delta (x)=\sum_{j\geq 0}(E_{\alpha }p_{j}(Z,\delta ))^{2}.  \label{bessel}
\end{equation}
Additionally, when $\sum_{j\geq 0}(E_{\alpha }p_{j}(Z,\delta ))^{2}\ln ^{2}(j+1)<\infty ,$ we have $\delta $-almost everywhere convergence.
\end{theorem}

\begin{proof}
Although the idea of this in fact simple theorem appeared in \cite{Szab10}, where its numerous nontrivial applications were also presented, we give its simple proof for the completeness of the paper. \newline
The Radon--Nikodym derivative $\frac{d\alpha }{d\delta }(x)$ is square integrable with respect to the measure $\delta ;$
hence it can be expanded in a Fourier series with respect to the system of orthogonal polynomials $\left\{ p_{j}(x,\delta )\right\} _{j\geq 0}:$
\begin{equation*}
\frac{d\alpha }{d\delta }(x)=\sum_{j\geq 0}\omega _{j}p_{j}(x,\delta ).
\end{equation*}
Now let us multiply both sides of this expansion by $p_{k}(x,\delta )$ and integrate with respect to $\delta (dx).$ On the right hand side we get $\omega _{k},$ while on the left hand side $\int p_{k}(x,\delta )\alpha (dx)=E_{\alpha }p_{k}(Z,\delta ).$ (\ref{bessel}) follows from the Parseval equality for orthogonal series. If $\sum_{j\geq 0}(E_{\alpha }p_{j}(Z,\delta ))^{2}\ln ^{2}(j+1)<\infty ,$ then we apply the Rademacher--Menshov theorem (see e.g. \cite{Alexits}) and get almost everywhere convergence.
\end{proof}

\begin{remark}
Let us notice that if we write $p_{n}(x,\delta )=\sum_{i=0}^{n}\gamma _{n,i}(\delta ,\alpha )p_{i}(x,\alpha ),$ then $\gamma _{n,0}(\delta ,\alpha )=E_{\alpha }p_{n}(Z,\delta ),$ as one sees after integrating both sides with respect to $\alpha (dx).$
\end{remark}

\begin{example}
As a corollary we will get the famous Poisson--Mehler expansion formula ((\ref{P-M}) below). In order not to repeat too many known details we refer the reader to \cite{Szab10}, \cite{SzablAW} as far as the ideas and calculations are concerned, and to \cite{IA} for more properties of the families of orthogonal polynomials mentioned below. Namely, we will consider the so-called $q$-Hermite polynomials defined for $\left\vert q\right\vert <1$ as $H_{n}(x|q)/\sqrt{[n]_{q}!},$ where $H_{n}(x|q)$ are monic polynomials satisfying the 3-term recurrence given by (2.3) of \cite{SzablAW}.
We used here the traditional notation common in the so-called $q$-series theory: $[n]_{q}=(1-q^{n})/(1-q)$ for $\left\vert q\right\vert <1$ and $[n]_{1}=n,$ $[n]_{q}!=\prod_{j=1}^{n}[j]_{q}$ with $[0]_{q}!=1,$ and $(a)_{n}=\prod_{i=0}^{n-1}(1-aq^{i})$ (the so-called $q$-Pochhammer symbol). One can also consider the case $q=1,$ obtaining similar results, but for the sake of simplicity let us consider only the case $\left\vert q\right\vert <1.$ Let us mention only that for $q=1$ the $q$-Hermite polynomials are in fact equal to the classical Hermite polynomials, more precisely the ones that are orthogonal with respect to the measure with density $\exp (-x^{2}/2)/\sqrt{2\pi }.$ It is known that the $q$-Hermite polynomials are orthogonal for $\left\vert q\right\vert <1,$ $x\in S(q)=\{x\in \mathbb{R}:\left\vert x\right\vert \leq 2/\sqrt{1-q}\},$ with respect to the measure with density $f_{N}(x|q),$ whose exact formula is not very important here and which is given e.g. in \cite{SzablAW} (formula (2.10)).
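Both the $q$-notation and the monic $q$-Hermite polynomials are easy to compute. The sketch below assumes the standard monic recurrence $H_{n+1}(x|q)=xH_{n}(x|q)-[n]_{q}H_{n-1}(x|q)$ (our reading of (2.3) of \cite{SzablAW}; the helper names are ours) and checks that at $q=1$ it reproduces the classical probabilists' Hermite polynomials mentioned above.

```python
from numpy.polynomial import hermite_e as He  # classical probabilists' Hermite He_n

def q_bracket(n, q):
    """[n]_q = (1-q^n)/(1-q) for q != 1, and [n]_1 = n."""
    return n if q == 1 else (1 - q**n) / (1 - q)

def q_factorial(n, q):
    """[n]_q! = prod_{j=1}^n [j]_q, with [0]_q! = 1."""
    out = 1.0
    for j in range(1, n + 1):
        out *= q_bracket(j, q)
    return out

def q_hermite(n, x, q):
    """Monic q-Hermite H_n(x|q) via the (assumed) standard recurrence
    H_{n+1} = x H_n - [n]_q H_{n-1}, with H_{-1} = 0, H_0 = 1."""
    h_prev, h = 0.0, 1.0
    for k in range(n):
        h_prev, h = h, x * h - q_bracket(k, q) * h_prev
    return h

# At q = 1 the recurrence is He_{n+1} = x He_n - n He_{n-1}, i.e. the
# classical (monic, probabilists') Hermite polynomials:
x = 0.7
print(q_hermite(4, x, 1.0))                 # equals He_4(x) = x^4 - 6 x^2 + 3
print(He.hermeval(x, [0, 0, 0, 0, 1]))      # same value, via numpy's He basis
```

This is an illustrative sketch only; for the genuinely $q$-dependent objects ($f_{N},$ $f_{CN}$) we refer to the cited formulas.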
The measure with the density $f_{N}(x|q)$ is our measure $\delta .$ It is also known (see the same references) that the measure with the density
\begin{equation*}
f_{CN}(x|y,\rho ,q)=f_{N}(x|q)\prod_{k=0}^{\infty }\frac{(1-\rho ^{2}q^{k})}{w_{k}(x,y|\rho ,q)},
\end{equation*}
where
\begin{equation*}
w_{k}(x,y|\rho ,q)=(1-\rho ^{2}q^{2k})^{2}-(1-q)\rho q^{k}(1+\rho ^{2}q^{2k})xy+(1-q)\rho ^{2}(x^{2}+y^{2})q^{2k},
\end{equation*}
for $x,y\in S(q),$ $\left\vert \rho \right\vert <1,$ $\left\vert q\right\vert <1,$ has orthonormal polynomials equal to the so-called Al-Salam--Chihara polynomials $P_{n}(x|y,\rho ,q)$ satisfying the 3-term recurrence given by formula (2.6) of \cite{SzablAW} divided by $\sqrt{(\rho ^{2})_{n}[n]_{q}!}$ (to get orthonormality), as follows from Proposition 1, iii) of \cite{SzablAW}. The measure with the density $f_{CN}$ is our measure $\alpha .$ Following formula (4.7) in \cite{IRS99} we deduce that
\begin{equation*}
E_{\alpha }H_{n}(Z|q)=\rho ^{n}H_{n}(y|q),
\end{equation*}
where $y\in S(q)$ and $\left\vert \rho \right\vert <1$ are some parameters. Details are in \cite{SzablAW}, but they can be traced to the earlier work of Bryc, Matysiak and Szab\l owski \cite{bms}. Consequently,
\begin{equation*}
\frac{d\alpha }{d\delta }(x)=\prod_{k=0}^{\infty }\frac{(1-\rho ^{2}q^{k})}{w_{k}(x,y|\rho ,q)}I_{S(q)}(x).
\end{equation*}
Notice also that this function is bounded from above and as such is square integrable with respect to any finite measure on $S(q).$ Again, details of the proof of this simple fact are in \cite{SzablAW}.
Now following (\ref{rozkl}) we get
\begin{equation}
\prod_{k=0}^{\infty }\frac{(1-\rho ^{2}q^{k})}{w_{k}(x,y|\rho ,q)}=\sum_{j\geq 0}\frac{\rho ^{j}}{[j]_{q}!}H_{j}(x|q)H_{j}(y|q),  \label{P-M}
\end{equation}
for every $y\in S(q)$ and almost all $x\in S(q).$ Notice that (\ref{P-M}) is also true for $q=1,$ but this requires some more properties of Hermite polynomials.
\end{example}

\begin{remark}
The situation described above illustrates a situation often met in the theory of Markov processes. Namely, suppose that we have a process $\mathbf{X}=\{X_{t}:t\in T\},$ where $T$ is some ordered set of infinite cardinality and, for every $t\in T,$ $X_{t}$ is a random variable with support of infinite cardinality. Suppose $dP_{t}$ is the distribution of $X_{t}$ and that $E_{t}\left\vert X_{t}\right\vert ^{n}$ is finite for all $t$ and $n.$ Suppose also that $\left\{ p_{n}^{(t)}\right\} $ are the polynomials orthogonal with respect to $dP_{t}.$ Further, suppose that the conditional distribution $dC_{s,t}$ of $X_{s}$ given $X_{t}=x,$ for $s>t,$ is absolutely continuous with respect to $dP_{s}$ and that $\frac{dC_{s,t}}{dP_{s}}(x)$ is square integrable with respect to $dP_{s}$ for every $s>t$ and $x\in \limfunc{supp}X_{t}.$ Then, as follows from Theorem \ref{expansion}, in $L_{2}(\limfunc{supp}X_{s},\mathcal{F},dP_{s})$ we have
\begin{equation*}
dC_{s,t}=\Big(\sum_{j\geq 0}E_{s,t}p_{j}^{(s)}(X_{s})p_{j}^{(s)}(x)\Big)dP_{s},
\end{equation*}
where, as usual in the theory of Markov processes, $E_{s,t}(p_{j}^{(s)}(X_{s}))$ denotes expectation with respect to the distribution $C_{s,t},$ i.e. the conditional expectation of $p_{j}^{(s)}(X_{s})$ given $X_{t}=x.$ In other words, we get an expansion of the transition function of our process.
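In the classical Hermite case ($q=1$) such an expansion is the Mehler kernel $\sum_{j\geq 0}\frac{\rho ^{j}}{j!}He_{j}(x)He_{j}(y)=(1-\rho ^{2})^{-1/2}\exp \big(\frac{\rho xy-\rho ^{2}(x^{2}+y^{2})/2}{1-\rho ^{2}}\big).$ A quick numerical sanity check of this case (a sketch only, with arbitrarily chosen values of $x,y,\rho $):

```python
import math
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials He_n

def mehler_lhs(x, y, rho, terms=60):
    """Truncated series sum_{j < terms} rho^j / j! * He_j(x) * He_j(y)."""
    total = 0.0
    for j in range(terms):
        basis = [0.0] * j + [1.0]  # coefficient vector selecting He_j
        total += rho**j / math.factorial(j) * He.hermeval(x, basis) * He.hermeval(y, basis)
    return total

def mehler_rhs(x, y, rho):
    """Closed-form Mehler kernel for probabilists' Hermite polynomials."""
    arg = (rho * x * y - rho**2 * (x * x + y * y) / 2) / (1 - rho**2)
    return math.exp(arg) / math.sqrt(1 - rho**2)

x, y, rho = 0.3, -0.5, 0.4  # arbitrary test values with |rho| < 1
print(abs(mehler_lhs(x, y, rho) - mehler_rhs(x, y, rho)))  # difference at machine-precision level
```

For $|\rho |<1$ the truncated series converges geometrically, so a few dozen terms suffice here.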
\end{remark}

\section{Linearization coefficients\label{Line}}

Notice that Propositions \ref{Cholesky} and \ref{interpretacja} allow us to formulate an algorithm to obtain the so-called `linearization coefficients'. Let us recall that `linearization formula' is the popular name for expansions of the form
\begin{equation*}
p_{n}(x)p_{m}(x)=\sum_{j=0}^{m+n}c_{n,m,j}p_{j}(x).
\end{equation*}
The problem is to find the coefficients $c_{n,m,j}$ for all $n,m\geq 1.$ We have the following lemma.

\begin{lemma}
\label{Lin_c}For all $n,m\geq 0$ and $s=0,\ldots ,m+n$
\begin{equation*}
c_{n,m,s}=\sum_{\substack{ 0\leq j\leq n, \\ 0\leq k\leq m,j+k\geq s}}\pi _{n,j}\pi _{m,k}\lambda _{j+k,s}.
\end{equation*}
\end{lemma}

\begin{proof}
For $N\geq \max (m,n)$ we have:
\begin{gather*}
p_{n}(x)p_{m}(x)=(\mathbf{P}_{N}(x)\mathbf{P}_{N}^{T}(x))_{n,m}=(\mathbf{\Pi }_{N}\mathbf{X}_{N}\mathbf{X}_{N}^{T}\mathbf{\Pi }_{N}^{T})_{n,m} \\
=\sum_{j,k=0}^{N}\left( \mathbf{\Pi }_{N}\right) _{n,j}\left( \mathbf{X}_{N}\mathbf{X}_{N}^{T}\right) _{j,k}\left( \mathbf{\Pi }_{N}^{T}\right) _{k,m}=\sum_{j,k=0}^{N}\pi _{n,j}x^{j+k}\pi _{m,k} \\
=\sum_{s=0}^{2N}p_{s}(x)\Big(\sum_{j,k=0}^{N}\pi _{n,j}\pi _{m,k}\lambda _{j+k,s}\Big).
\end{gather*}
We now use the fact that $\pi _{n,j}=0$ for $n<j$ and $\lambda _{k,j}=0$ for $k<j.$
\end{proof}

\begin{remark}
Following general properties of orthogonal polynomials we deduce that for all $k<|n-m|:$ $\sum_{\substack{ 0\leq i\leq n, \\ 0\leq j\leq m,j+i\geq k}}\pi _{n,i}\pi _{m,j}\lambda _{i+j,k}=0.$ More precisely, $c_{n,m,s}=0$ for $s=0,\ldots ,\left\vert n-m\right\vert -1.$
\end{remark}

\begin{remark}
By Proposition \ref{3t-r} we deduce that a similar formula holds for the monic versions of the polynomials $p_{n}.$
More precisely, let $\tilde{p}_{n}(x)$ be the monic version of the polynomial $p_{n}(x);$ then
\begin{equation*}
\tilde{p}_{n}(x)\tilde{p}_{m}(x)=\sum_{s=0}^{n+m}\tilde{c}_{n,m,s}\tilde{p}_{s}(x),
\end{equation*}
where
\begin{equation}
\tilde{c}_{n,m,s}=\sum_{\substack{ 0\leq j\leq n, \\ 0\leq k\leq m,j+k\geq s}}\eta _{n,j}\eta _{m,k}\tau _{j+k,s}.  \label{lin}
\end{equation}
This is so since $\left( \prod_{j=1}^{n}a_{j}\right) \pi _{n,k}\left( \prod_{j=1}^{m}a_{j}\right) \pi _{m,l}\frac{1}{\prod_{j=1}^{s}a_{j}}\lambda _{l+k,s}=\eta _{n,k}\eta _{m,l}\tau _{k+l,s}.$
\end{remark}

\begin{corollary}
We have

i)
\begin{equation*}
c_{n,m,n+m-1}=\sum_{j=\max (n,m)}^{n+m-1}(b_{j}-b_{j-\max (n,m)}),
\end{equation*}

ii)
\begin{gather*}
c_{n,m,n+m-2}=\sum_{j=\max (m,n)}^{m+n-1}a_{j}^{2}-\sum_{j=1}^{\min (n,m)-1}a_{j}^{2}-\frac{1}{2}\Big(\sum_{j=\max (n,m)}^{m+n-2}b_{j} \\
-\sum_{j=0}^{\min (n,m)-1}b_{j}\Big)^{2}-\frac{1}{2}\Big(\sum_{j=\max (n,m)}^{m+n-2}b_{j}^{2}-\sum_{j=0}^{\min (n,m)-1}b_{j}^{2}\Big).
\end{gather*}
\end{corollary}

\begin{proof}
i) By (\ref{lin}) we have $c_{n,m,n+m-1}=\eta _{n,n}\eta _{m,m}\tau _{n+m,n+m-1}+\eta _{n,n-1}\eta _{m,m}\tau _{n+m-1,n+m-1}+\eta _{n,n}\eta _{m,m-1}\tau _{n+m-1,n+m-1}=\sum_{k=0}^{n+m-1}b_{k}-\sum_{k=0}^{n-1}b_{k}-\sum_{k=0}^{m-1}b_{k}.$

ii) By (\ref{lin}) we have $c_{n,m,n+m-2}=\eta _{n,n}\eta _{m,m}\tau _{n+m,n+m-2}+\eta _{n,n-2}\eta _{m,m}\tau _{n+m-2,n+m-2}+\eta _{n,n}\eta _{m,m-2}\tau _{n+m-2,n+m-2}+\eta _{n,n-1}\eta _{m,m}\tau _{n+m-1,n+m-2}+\eta _{n,n}\eta _{m,m-1}\tau _{n+m-1,n+m-2}+\eta _{n,n-1}\eta _{m,m-1}\tau _{n+m-2,m+n-2}=\sum_{k=1}^{n+m-1}a_{k}^{2}-\sum_{k=1}^{n-1}a_{k}^{2}-\sum_{k=1}^{m-1}a_{k}^{2}-\Big(\sum_{j=0}^{n-1}b_{j}+\sum_{j=0}^{m-1}b_{j}\Big)\sum_{j=0}^{n+m-2}b_{j}+\sum_{j=0}^{n-1}b_{j}\sum_{j=0}^{m-1}b_{j}.$ After a little algebra we get the desired form.
\end{proof}

\begin{remark}
As in the case of the connection coefficients, the fact that the linearization coefficients are nonnegative is important. Why it is so, what the straightforward consequences of this fact are, and in what particular situations it happens is again discussed in \cite{Szwarc92}. From the above-mentioned corollary one can in fact derive a necessary condition for the linearization coefficients to be nonnegative.
\end{remark}

\section{Proofs\label{dow}}

\begin{proof}[Proof of Proposition \protect\ref{interpretacja}]
i) Follows from the uniqueness of both the Cholesky decomposition and the orthonormal polynomials, once the sign of the leading coefficient is selected.
ii) We have
\begin{eqnarray*}
\int \mathbf{P}_{n}(x,\alpha )\mathbf{P}_{n}^{T}(x,\alpha )d\alpha (x) &=&\mathbf{L}_{n}^{-1}\int \mathbf{X}_{n}\mathbf{X}_{n}^{T}d\alpha (x)\left( \mathbf{L}_{n}^{-1}\right) ^{T} \\
&=&\mathbf{L}_{n}^{-1}\mathbf{M}_{n}\left( \mathbf{L}_{n}^{-1}\right) ^{T}=\mathbf{I}_{n}.
\end{eqnarray*}
Further we have
\begin{equation*}
\mathbf{P}_{n}^{T}(x,\alpha )\mathbf{P}_{n}(y,\alpha )=\mathbf{X}_{n}^{T}\left( \mathbf{L}_{n}^{-1}\right) ^{T}\mathbf{L}_{n}^{-1}\mathbf{Y}_{n}=\mathbf{X}_{n}^{T}(\mathbf{L}_{n}\mathbf{L}_{n}^{T})^{-1}\mathbf{Y}_{n}.
\end{equation*}
Thus obviously we have
\begin{equation*}
\left\vert \mathbf{X}_{n}\right\vert ^{2}/\xi _{n,n}\leq \mathbf{X}_{n}^{T}\mathbf{M}_{n}^{-1}\mathbf{X}_{n}\leq \left\vert \mathbf{X}_{n}\right\vert ^{2}/\xi _{0,n},\quad \text{where }\left\vert \mathbf{X}_{n}\right\vert ^{2}=\sum_{i=0}^{n}x^{2i}.
\end{equation*}
iii)
\begin{eqnarray*}
\int \mathbf{P}_{n}^{T}(x)\mathbf{M}_{n}\mathbf{P}_{n}(x)d\alpha &=&\int \func{tr}(\mathbf{M}_{n}\mathbf{P}_{n}(x)\mathbf{P}_{n}^{T}(x))d\alpha \\
&=&\func{tr}(\mathbf{M}_{n}\mathbf{L}_{n}^{-1}\mathbf{M}_{n}\left( \mathbf{L}_{n}^{-1}\right) ^{T})=\func{tr}\mathbf{M}_{n}.
\end{eqnarray*}
iv) Denote $\mathbf{e}_{n}^{T}(t)=(1,e^{it},\ldots ,e^{int}).$ By Proposition \ref{interpretacja}, i) we have $\mathbf{P}_{n}(e^{it})=\mathbf{\Pi }_{n}\mathbf{e}_{n}(t),$ hence $\frac{1}{2\pi }\int_{0}^{2\pi }\mathbf{P}_{n}(e^{it})\mathbf{P}_{n}^{T}(e^{-it})dt=\mathbf{\Pi }_{n}(\frac{1}{2\pi }\int_{0}^{2\pi }\mathbf{e}_{n}(t)\mathbf{e}_{n}^{T}(-t)dt)\mathbf{\Pi }_{n}^{T}.$ Secondly, notice that the $(k,j)$-th entry of the matrix $\mathbf{e}_{n}(t)\mathbf{e}_{n}^{T}(-t)$ is equal to $e^{it(k-j)};$ consequently $\frac{1}{2\pi }\int_{0}^{2\pi }\mathbf{e}_{n}(t)\mathbf{e}_{n}^{T}(-t)dt$ is equal to the identity matrix. The second statement follows from the fact that $\frac{1}{2\pi }\sum_{j=0}^{n}\int_{0}^{2\pi }\left\vert p_{j}(e^{it})\right\vert ^{2}dt$ is the trace of $\frac{1}{2\pi }\int_{0}^{2\pi }\mathbf{P}_{n}(e^{it})\mathbf{P}_{n}^{T}(e^{-it})dt.$ But $\func{tr}(\mathbf{\Pi }_{n}\mathbf{\Pi }_{n}^{T})=\func{tr}(\mathbf{\Pi }_{n}^{T}\mathbf{\Pi }_{n})=\func{tr}\mathbf{M}_{n}^{-1}.$

v) Apply Proposition \ref{interpretacja}, ii) with $x=y=0.$ We get $\sum_{i=0}^{n}\left\vert p_{i}(0)\right\vert ^{2}=\mathbf{0}_{n}^{T}\mathbf{M}_{n}^{-1}\mathbf{0}_{n},$ where $\mathbf{0}_{n}^{T}=(1,0,\ldots ,0),$ which means that $\sum_{i=0}^{n}\left\vert p_{i}(0)\right\vert ^{2}$ is the $(0,0)$ entry of $\mathbf{M}_{n}^{-1}.$

vi) Following (\ref{ass_pol}) we see that $q_{n}(0)=\sum_{j=1}^{n}\pi _{n,j}m_{j-1}.$ In other words, $\mathbf{Q}_{n}(0)=\mathbf{\Pi }_{n}\mathbf{\mu }_{n}.$ Now $\sum_{j=1}^{n}\left\vert q_{j}(0)\right\vert ^{2}=\mathbf{Q}_{n}^{T}(0)\mathbf{Q}_{n}(0).$ On the way we utilize the fact that $\mathbf{\Pi
}_{n}^{T}\mathbf{\Pi }_{n}=\mathbf{M}_{n}^{-1}.$ Following similar arguments we have $\sum_{j=1}^{n}q_{j}(0)p_{j}(0)=(1,0,\ldots ,0)\mathbf{M}_{n}^{-1}\mathbf{\mu }_{n}.$

vii) Let us denote $\bar{p}_{n}(x,\alpha )=\frac{1}{\sqrt{n+1}\log ^{2}(n+2)}\sum_{i=0}^{n}p_{i}(x,\alpha ).$ It satisfies the recursion
\begin{equation*}
\bar{p}_{n+1}(x,\alpha )=\sqrt{\frac{n+1}{n+2}}\frac{\log ^{2}(n+2)}{\log ^{2}(n+3)}\bar{p}_{n}(x)+\frac{p_{n+1}(x)}{\sqrt{n+2}\log ^{2}(n+3)}.
\end{equation*}
Since we have $\sum_{n\geq 0}\frac{\log ^{2}(n+2)}{(n+2)\log ^{4}(n+2)}<\infty ,$ we deduce by the Rademacher--Menshov theorem that the series $\sum_{n\geq 0}\frac{p_{n}(x)}{\sqrt{n+1}\log ^{2}(n+2)}$ converges $\alpha $-a.s. Further, we apply \cite{Szab87} (Thm. 5).
\end{proof}

\begin{proof}[Proof of Proposition \protect\ref{3t-r}]
i) Multiplying both sides of (\ref{_f}) and (\ref{_s}) by $\prod_{i=1}^{n}a_{i}$ and dividing both sides of (\ref{lam1}) and (\ref{lam3}) by $\prod_{i=1}^{n}a_{i},$ we see that the quantities $\eta $ and $\tau $ satisfy the required system of equations. ii) Follows immediately from formulae (\ref{wsp}). iii) Follows from the fact that for $j>i$ we have $\sum_{k=i}^{j}\pi _{j,k}\lambda _{k,i}=\sum_{k=i}^{j}\lambda _{j,k}\pi _{k,i}=0,$ and from the fact that $\lambda _{j,k}\pi _{k,i}=\tau _{j,k}\eta _{k,i},$ and similarly for the product $\eta _{j,k}\tau _{k,i}.$
\end{proof}

\begin{proof}[Proof of Proposition \protect\ref{aux}]
First let us consider the sequences with upper index $(1).$ We have $\xi _{n+1,0}^{(1)}=-a_{n}^{2}\xi _{n-1,0}^{(1)}.$ Recall that $\xi _{0,0}^{(1)}=1$ and $\xi _{1,0}^{(1)}=0.$ So we see that $\eta _{n,0}$ with odd $n$ must be equal to zero.
To see that $\xi _{n,n-2k+1}^{(1)}=0$ and $\zeta _{n,n-2k-1}^{(1)}=0,$ $k=1,2,\ldots ,$ $n\geq 2k+1,$ is easy, since then our formulae (\ref{_22}) and (\ref{_44}) become:
\begin{eqnarray}
\xi _{n+1,n+1-2k-1}^{(1)} &=&-a_{n}^{2}\xi _{n-1,n-1-(2k-1)}^{(1)}+\xi _{n,n-2k-1}^{(1)},  \label{pom} \\
\zeta _{n+1,n+1-(2k+1)}^{(1)} &=&\zeta _{n,n-(2k+1)}^{(1)}+a_{n-1-(2k-1)}^{2}\zeta _{n-1,n-1-(2k-1)}^{(1)}.  \label{pom1}
\end{eqnarray}
We argue in the case of (\ref{pom}) by induction, assuming $\eta _{n-1,n-2k}=0$ and having $\eta _{2k+1,0}=0$ as shown above. In the case of (\ref{pom1}) we first notice that from (\ref{_33}) with $n=0$ we deduce that $\zeta _{1,0}=0.$ Then, taking $n=2$ and $k=1$ in (\ref{pom1}), we deduce that $\zeta _{3,0}=0.$ We use induction in a similar way and deduce that $\zeta _{2k+1,0}=0,$ $k=0,1,\ldots $. Now, taking $k=0,$ we get
\begin{equation*}
\zeta _{n+1,n}=\zeta _{n,n-1}+a_{n}^{2}\zeta _{n-1,n}=\zeta _{n,n-1},
\end{equation*}
from which we deduce that $\zeta _{n,n-1}=0$ for $n\geq 1.$ Now take $k=1;$ we get
\begin{equation*}
\zeta _{n+1,n-2}=\zeta _{n,n-3}+a_{n-2}^{2}\zeta _{n-1,n-2}=\zeta _{n,n-3},
\end{equation*}
from which we deduce that $\zeta _{n,n-3}=0$ for all $n\geq 3.$ In a similar way we show that $\zeta _{n,n-2k-1}=0$ for all $n\geq 2k+1.$ Now let us consider the case of even differences of the indices $i$ and $j$ in $\xi _{i,j}^{(1)}.$ The proofs will be by induction. Let us prove (\ref{sol11}) first.
We will prove it for the indices $(n,n-2k).$ The recursive formula (\ref{_2}) now becomes
\begin{equation}
\eta _{n+1,n+1-2k}=-a_{n}^{2}\eta _{n-1,n-1-2(k-1)}+\eta _{n,n-2k}.  \label{eqn}
\end{equation}
First notice that, since the sign of $\eta _{n,n-2k}$ is $(-1)^{k}$ and that of $\eta _{n-1,n-2k+1}$ is $(-1)^{k-1}$ by the induction assumption, we deduce that the sign of $\eta _{n+1,n+1-2k}$ is $(-1)^{k},$ as claimed. Secondly, notice that (\ref{sol11}) can be interpreted as a sum of products of elements of $k$-combinations drawn from the set $\left\{ a_{1}^{2},\ldots ,a_{n-1}^{2}\right\} $ such that the distance between the numbers of the chosen elements is greater than $1.$ For example, from $3$ elements we can select only one such $2$-combination. After a little reflection one sees that there are $\binom{n-k}{k}$ such combinations, and consequently that $\eta _{n,n-2k}$ contains $\binom{n-k}{k}$ products. Equation (\ref{eqn}) states that the sum of such products of $k$-combinations chosen from the set $\{1,\ldots ,n\}$ can be decomposed into the sum of such products chosen from the set with indices $\left\{ 1,\ldots ,n-1\right\} $ plus a sum of products containing the element $a_{n}^{2}$ times products of similarly chosen $(k-1)$-combinations, but from the set $\left\{ 1,\ldots ,n-2\right\} .$ There are $\binom{n-k}{k}$ summands of the first type and $\binom{n-1-(k-1)}{k-1}$ summands of the second type (i.e. containing $a_{n}^{2}).$ The total number of summands in $\eta _{n+1,n+1-2k}$ is just
\begin{equation*}
\binom{n-1-(k-1)}{k-1}+\binom{n-k}{k}=\binom{n+1-k}{k},
\end{equation*}
by the well-known property of the Pascal triangle, as it should be.

The proof of (\ref{sol21}). Let us denote by $\beta _{n,l}$ the right hand side of (\ref{sol2}). We have:
\begin{eqnarray*}
\beta _{n+2k,n}-\beta _{n-1+2k,n-1} &=&a_{n+1}^{2}\sum_{j_{2}=1}^{n+2}a_{j_{2}}^{2}\ldots \sum_{j_{k}=1}^{j_{k-1}+1}a_{j_{k}}^{2} \\
&=&a_{n+1}^{2}\beta _{n+1+2k-2,n+1}.
\end{eqnarray*}
Further, we have $\beta _{n+2,n}=\sum_{j_{1}=1}^{n+1}a_{j_{1}}^{2}$ by direct calculation. Now notice that the sequences $\zeta ^{(1)}$ and $\beta $ satisfy the same difference equations and have the same initial conditions. Hence they are identical.

Now let us consider the sequences with upper index $(2).$ First of all, notice that (\ref{sol22}) can also be written as
\begin{equation}
\zeta _{n+j,n}^{(2)}=\sum_{k_{1}=0}^{n}b_{k_{1}}\sum_{k_{2}=0}^{k_{1}}b_{k_{2}}\ldots \sum_{k_{j}=0}^{k_{j-1}}b_{k_{j}}.  \label{pp}
\end{equation}
Let us denote by $\gamma _{n+j,n}$ the right hand side of (\ref{pp}). From this form we can easily deduce that
\begin{equation*}
\gamma _{n+1+j,n+1}-\gamma _{n+j,n}=b_{n+1}\gamma _{n+j,n+1}.
\end{equation*}
Thus $\gamma _{n+j,n}$ satisfies the same recurrence as $\zeta _{n+j,n}^{(2)},$ with the same initial condition. Consequently $\zeta _{n+j,n}^{(2)}=\gamma _{n+j,n}.$

Now it remains to prove (\ref{sol12}). First of all, notice that the left hand side of (\ref{sol12}) is a sum of products over all subsets of the set $\left\{ b_{0},\ldots ,b_{n+j-1}\right\} $ of size $j$. Let us denote it by $\delta _{n+j,n}.$ From what was stated earlier it follows that $\delta _{n+1+j,n+1}-\delta _{n+j,n}$ is equal to the sum of products over the subsets of the set $\left\{ b_{0},\ldots ,b_{n+j}\right\} $ of size $j$ that contain the element $b_{n+j}.$ In other words, it is equal to $-b_{n+j}\delta _{n+j,n+1}.$ Hence $\delta _{n+j,n}$ and $\xi _{n+j,n}^{(2)}$ satisfy the same recurrence with the same initial condition.
\end{proof}

\begin{proof}[Proof of Proposition \protect\ref{part}]
i) Combining (\ref{_1}), (\ref{_2}) with (\ref{_11}) and (\ref{_22}) we get $\hat{\eta}_{n+1,j}=\eta _{n+1,j}-\xi _{n+1,j}^{(1)}-\xi _{n+1,j}^{(2)}=\eta _{n,j-1}-b_{n}\eta _{n,j}-a_{n}^{2}\eta _{n-1,j}-(\xi _{n,j-1}^{(1)}-a_{n}^{2}\xi _{n-1,j}^{(1)})-(\xi _{n,j-1}^{(2)}-b_{n}\xi _{n,j}^{(2)})=\hat{\eta}_{n,j-1}-b_{n}(\eta _{n,j}-\xi _{n,j}^{(2)}-\xi _{n,j}^{(1)})-b_{n}\xi _{n,j}^{(1)}-a_{n}^{2}(\eta _{n-1,j}-\xi _{n-1,j}^{(1)}-\xi _{n-1,j}^{(2)})-a_{n}^{2}\xi _{n-1,j}^{(2)},$ which is the first of the equations in i). Now let us consider (\ref{_3}), (\ref{_4}), (\ref{_33}) and (\ref{_44}). We get $\hat{\tau}_{n+1,j}=\tau _{n+1,j}-\zeta _{n+1,j}^{(1)}-\zeta _{n+1,j}^{(2)}=\tau _{n,j-1}+b_{j}\tau _{n,j}+a_{j+1}^{2}\tau _{n,j+1}-(\zeta _{n,j-1}^{(1)}+a_{j+1}^{2}\zeta _{n,j+1}^{(1)})-(\zeta _{n,j-1}^{(2)}+b_{j}\zeta _{n,j}^{(2)})=\hat{\tau}_{n,j-1}+a_{j+1}^{2}(\tau _{n,j+1}-\zeta _{n,j+1}^{(1)}-\zeta _{n,j+1}^{(2)})+a_{j+1}^{2}\zeta _{n,j+1}^{(2)}+b_{j}(\tau _{n,j}-\zeta _{n,j}^{(2)}-\zeta _{n,j}^{(1)})+b_{j}\zeta _{n,j}^{(1)}.$

ii) Consider (\ref{aux1}) and (\ref{aux2}) with $k=n.$ We then get $\hat{\eta}_{n+1,n}=\hat{\eta}_{n,n-1}$ and $\hat{\tau}_{n+1,n}=\hat{\tau}_{n,n-1}.
$ Since for $n=1$ we have $\hat{\eta}_{1,0}=\hat{\tau}_{1,0}=0$, we get the assertion. iii) Take $k=n-1$ in (\ref{aux1}) and (\ref{aux2}). We then get $\hat{\eta}_{n+1,n-1}=\hat{\eta}_{n,n-2}-b_{n}\hat{\eta}_{n,n-1}-a_{n}^{2}\hat{\eta}_{n-1,n-1}-b_{n}\xi _{n,n-1}^{(1)}-a_{n}^{2}\xi _{n-1,n-1}^{(2)}=\hat{\eta}_{n,n-2},$ since $\hat{\eta}_{n,n-1}=\xi _{n,n-1}^{(1)}=0$ and $\hat{\eta}_{n-1,n-1}=-\xi _{n-1,n-1}^{(2)}=-1.$ Further, since $\hat{\eta}_{2,0}=0$, we get the assertion. iv) As before we take $k=n-2$ in (\ref{aux1}) and (\ref{aux2}). We then get $\hat{\tau}_{n+1,n-2}=\hat{\tau}_{n,n-3}+b_{n-2}\hat{\tau}_{n,n-2}+a_{n-1}^{2}\hat{\tau}_{n,n-1}+a_{n-1}^{2}\zeta _{n,n-1}^{(2)}+b_{n-2}\zeta _{n,n-2}^{(1)}=\hat{\tau}_{n,n-3}+a_{n-1}^{2}\zeta _{n,n-1}^{(2)}+b_{n-2}\zeta _{n,n-2}^{(1)},$ since $\hat{\tau}_{n,n-2}=\hat{\tau}_{n,n-1}=0$ as shown above. Besides, $\zeta _{n,n-1}^{(2)}=\sum_{k=0}^{n-1}b_{k}$ and $\zeta _{n,n-2}^{(1)}=\sum_{k=1}^{n-1}a_{k}^{2}$, as shown in (\ref{sol21}) and (\ref{sol22}). Now it is a matter of algebra.
We reason similarly in the case of $\eta _{n+3,n}$. So now let us consider $\eta _{n+4,n}$. By taking $k=n-3$ in (\ref{aux1}) we get: $\hat{\eta}_{n+1,n-3}=\hat{\eta}_{n,n-4}-b_{n}\hat{\eta}_{n,n-3}-a_{n}^{2}\hat{\eta}_{n-1,n-3}-b_{n}\xi _{n,n-3}^{(1)}-a_{n}^{2}\xi _{n-1,n-3}^{(2)}=\hat{\eta}_{n,n-4}-b_{n}\hat{\eta}_{n,n-3}-a_{n}^{2}\sum_{0\leq k_{1}<k_{2}\leq n-2}b_{k_{1}}b_{k_{2}}$, since $\hat{\eta}_{n-1,n-3}=\xi _{n,n-3}^{(1)}=0$, as shown above and by (\ref{sol11}). Further we use (\ref{sol12}) and some algebra. v) To see that (\ref{sol2}) holds true it is enough to apply (\ref{aux1}) and (\ref{aux2}) with $b_{k}=0$, $k\geq 0$, which results in $\xi _{n,k}^{(2)}=\zeta _{n,k}^{(2)}=0$ for $n>0$, $k\geq 0$, and which leads to the relationships $\hat{\eta}_{n+1,k}=\hat{\eta}_{n,k-1}-a_{n}^{2}\hat{\eta}_{n-1,k}$ and $\hat{\tau}_{n+1,k}=\hat{\tau}_{n,k-1}+a_{k+1}^{2}\hat{\tau}_{n,k+1}$, with $\hat{\eta}_{n,n}=\hat{\tau}_{n,n}=0$ for $n>0$ and $\hat{\eta}_{i,0}=\hat{\tau}_{i,0}=0$ for $i=1,2.$ Now it is elementary to see that we must have $\hat{\eta}_{n,k}=\hat{\tau}_{n,k}=0$ for all $n>0$, $k\geq 0.$ \end{proof}
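The key recurrence for the nested sums used above admits a quick numerical sanity check. The sketch below is ours, not part of the paper; `gamma(n, j)` encodes $\gamma_{n+j,n}$, the depth-$j$ nested sum of (\ref{pp}), and we test the identity $\gamma_{n+1+j,n+1}-\gamma_{n+j,n}=b_{n+1}\gamma_{n+j,n+1}$ on arbitrary rational data.

```python
# Sanity check of the recurrence for the nested sums gamma_{n+j,n}
# (ours, not from the paper): gamma(n, j) below realizes
#   sum_{k1=0}^{n} b_{k1} sum_{k2=0}^{k1} b_{k2} ... sum_{kj=0}^{k_{j-1}} b_{kj},
# and the difference gamma(n+1, j) - gamma(n, j) keeps only the k1 = n+1
# term, i.e. b_{n+1} * gamma(n+1, j-1), which is gamma_{n+j,n+1}.
from fractions import Fraction

def gamma(n, j, b):
    """Depth-j nested sum with the outermost index running over 0..n."""
    if j == 0:
        return Fraction(1)
    return sum(b[k] * gamma(k, j - 1, b) for k in range(n + 1))

# arbitrary rational test data
b = [Fraction(i + 2, 3) for i in range(12)]
for n in range(1, 8):
    for j in range(1, 4):
        assert gamma(n + 1, j, b) - gamma(n, j, b) == b[n + 1] * gamma(n + 1, j - 1, b)
```

Exact rational arithmetic is used so the assertions test the identity itself rather than floating-point cancellation.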
\section{Introduction} Helicoidal surfaces in $3$-dimensional space forms arise as a natural generalization of rotational surfaces in such spaces. These surfaces are invariant by a subgroup of the group of isometries of the ambient space, called the helicoidal group, whose elements can be seen as a composition of a translation with a rotation around a given axis. In the Euclidean space $\mathbb{R}^3$, do Carmo and Dajczer \cite{docarmo} describe the space of all helicoidal surfaces that have constant mean curvature or constant Gaussian curvature. This space behaves as a circular cylinder, where a given generator corresponds to the rotational surfaces and each parallel corresponds to a periodic family of helicoidal surfaces. Helicoidal surfaces with prescribed mean or Gaussian curvature are obtained by Baikoussis and Koufogiorgos \cite{BK}. More precisely, they obtain a closed form of such a surface by integrating the second-order ordinary differential equation satisfied by the generating curve of the surface. Helicoidal surfaces in $\mathbb{R}^3$ are also considered by Perdomo \cite{P1} in the context of minimal surfaces, and by Palmer and Perdomo \cite{PP2}, where the mean curvature is related to the distance to the $z$-axis. In the context of constant mean curvature, helicoidal surfaces are considered by Solomon and Edelen in \cite{edelen1}. In the $3$-dimensional hyperbolic space $\mathbb{H}^3$, Mart\'inez, the second author and Tenenblat \cite{MST} give a complete classification of the helicoidal flat surfaces in terms of meromorphic data, which extends the results obtained by Kokubu, Umehara and Yamada \cite{KUY} for rotational flat surfaces. Moreover, the classification is also given by means of linear harmonic functions, characterizing the flat fronts in $\mathbb{H}^3$ that correspond to linear harmonic functions.
Namely, it is well known that for flat surfaces in $\mathbb{H}^3$, on a neighbourhood of a non-umbilical point, there is a curvature line parametrization such that the first and second fundamental forms are given by \begin{equation} \begin{array}{rcl} I &=& \cosh^2 \phi(u,v) (du)^2 + \sinh^2 \phi(u,v) (dv)^2, \\ II &=& \sinh \phi(u,v) \cosh \phi(u,v) \left((du)^2 + (dv)^2 \right), \end{array} \label{firstff} \end{equation} where $\phi$ is a harmonic function, i.e., $\phi_{uu}+\phi_{vv}=0$. In this context, the main result states that a surface in $\mathbb{H}^3$, parametrized by curvature lines, with fundamental forms as in \eqref{firstff} and $\phi(u,v)$ linear, i.e., $\phi(u,v) = a u + b v + c$, is flat if and only if the surface is a helicoidal surface or a {\em peach front}, the latter corresponding to the case $(a,b,c) = (0,\pm1,0)$. Helicoidal minimal surfaces were studied by Ripoll \cite{ripoll}, and helicoidal constant mean curvature surfaces in $\mathbb{H}^3$ are considered by Edelen \cite{edelen2}, as well as the cases where such invariant surfaces belong to $\mathbb{R}^3$ and $\mathbb{S}^3$. Similarly to the hyperbolic space, for a given flat surface in the $3$-dimensional sphere $\mathbb{S}^3$ there exists a parametrization by asymptotic lines, where the first and the second fundamental forms are given by \begin{equation} \begin{array}{rcl} I &=& du^2 + 2 \cos \omega du dv + dv^2, \\ II &=& 2 \sin \omega du dv \label{primeiraffs} \end{array} \end{equation} for a smooth function $\omega$, called the {\em angle function}, that satisfies the homogeneous wave equation $\omega_{uv} = 0$. Therefore, one can ask which surfaces are related to linear solutions of this equation.
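As a sanity check on the role of the wave equation here, one can verify symbolically that the metric $I = du^2 + 2\cos\omega\,du\,dv + dv^2$ has vanishing Gaussian curvature whenever $\omega(u,v)=\omega_1(u)+\omega_2(v)$, i.e. whenever $\omega_{uv}=0$. The snippet below is our own illustration (not from the paper), using the Brioschi formula in sympy.

```python
# Symbolic check (ours): for I = du^2 + 2 cos(omega) du dv + dv^2 with
# omega(u,v) = omega1(u) + omega2(v), the Gaussian curvature computed by
# the Brioschi formula vanishes identically.
import sympy as sp

u, v = sp.symbols('u v')
w1 = sp.Function('omega1')(u)
w2 = sp.Function('omega2')(v)
E, F, G = sp.Integer(1), sp.cos(w1 + w2), sp.Integer(1)

# Brioschi formula: K = (det M1 - det M2) / (EG - F^2)^2
M1 = sp.Matrix([
    [-E.diff(v, 2)/2 + F.diff(u, v) - G.diff(u, 2)/2, E.diff(u)/2, F.diff(u) - E.diff(v)/2],
    [F.diff(v) - G.diff(u)/2, E, F],
    [G.diff(v)/2, F, G]])
M2 = sp.Matrix([
    [0, E.diff(v)/2, G.diff(u)/2],
    [E.diff(v)/2, E, F],
    [G.diff(u)/2, F, G]])
K = (M1.det() - M2.det()) / (E*G - F**2)**2
assert sp.simplify(K) == 0
```

The same computation with a generic $\omega(u,v)$ leaves a term proportional to $\omega_{uv}$, which is the content of the Gauss equation quoted above.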
The aim of this paper is to give a complete classification of helicoidal flat surfaces in $\mathbb{S}^3$, established in Theorems 1 and 2, by means of asymptotic lines coordinates, with first and second fundamental forms given by \eqref{primeiraffs}, where the angle function is linear. In order to do this, one uses the Bianchi-Spivak construction for flat surfaces in $\mathbb{S}^3$. This construction and the Kitagawa representation \cite{kitagawa1} are important tools used in the recent developments of flat surface theory. Examples of applications of such representations can be seen in \cite{galvezmira1} and \cite{aledogalvezmira}. Our classification also makes use of a representation for constant angle surfaces in $\mathbb{S}^3$, which comes from a characterization of constant angle surfaces in the Berger spheres obtained by Montaldo and Onnis \cite{MO}. This paper is organized as follows. In Section 2 we give a brief description of helicoidal surfaces in $\mathbb{S}^3$, as well as an ordinary differential equation that characterizes those with zero intrinsic curvature. In Section 3, the Bianchi-Spivak construction is introduced. It will be used to prove Theorem 1, which states that a flat surface in $\mathbb{S}^3$, with asymptotic parameters and linear angle function, is invariant under helicoidal motions. In Section 4, Theorem 2 establishes the converse of Theorem 1, that is, a helicoidal flat surface admits a local parametrization, given by asymptotic parameters, where the angle function is linear. Such a local parametrization is obtained by using a characterization of constant angle surfaces in Berger spheres, which is a consequence of the fact that a helicoidal flat surface is a constant angle surface in $\mathbb{S}^3$, i.e., it has a unit normal that makes a constant angle with the Hopf vector field. In Section 5 we present an application to conformally flat hypersurfaces in $\mathbb{R}^4$.
The classification result obtained is used to give a geometric characterization for special conformally flat surfaces in $4$-dimensional space forms. It is known that conformally flat hypersurfaces in $4$-dimensional space forms are associated with solutions of a system of equations, known as Lam\'e's system (see \cite{Jeromin1} and \cite{santos} for details). In \cite{santos}, Tenenblat and the second author obtained solutions invariant under symmetry groups of Lam\'e's system. A class of those solutions is related to flat surfaces in $\mathbb{S}^3$, parametrized by asymptotic lines with linear angle function. Thus a geometric description of the corresponding conformally flat hypersurfaces is given in terms of helicoidal flat surfaces in $\mathbb{S}^3$. \section{Helicoidal flat surfaces} Given any $\beta\in\mathbb{R}$, let $\{\varphi_\beta(t)\}$ be the one-parameter subgroup of isometries of $\mathbb{S}^3$ given by \[ \varphi_\beta(t) = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos \beta t & -\sin \beta t \\ 0 & 0 & \sin \beta t & \cos \beta t \\ \end{array} \right). \] When $\beta\neq0$, this group fixes the set $l=\{(z,0)\in\mathbb{S}^3\}$, which is a great circle called the {\em axis of rotation}. In this case, the orbits are circles centered on $l$, i.e., $\{\varphi_\beta(t)\}$ consists of rotations around $l$. Given another number $\alpha\in\mathbb{R}$, consider now the translations $\{\psi_\alpha(t)\}$ along $l$, \[ \psi_\alpha(t)=\left( \begin{array}{cccc} \cos \alpha t & -\sin \alpha t & 0 & 0 \\ \sin \alpha t & \cos \alpha t & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right).
\] \begin{defi}\label{def:helicoidal} {\em A {\em helicoidal} surface in $\mathbb{S}^3$ is a surface invariant under the action of the helicoidal 1-parameter group of isometries \begin{equation}\label{eq:movhel} \phi_{\alpha,\beta}(t)=\psi_\alpha(t)\circ\varphi_\beta(t)= \left( \begin{array}{cccc} \cos \alpha t & -\sin \alpha t & 0 & 0 \\ \sin \alpha t & \cos \alpha t & 0 & 0 \\ 0 & 0 & \cos \beta t & -\sin \beta t \\ 0 & 0 & \sin \beta t & \cos \beta t \\ \end{array} \right), \end{equation}} {\em given by a composition of a translation $\psi_\alpha(t)$ and a rotation $\varphi_\beta(t)$ in $\mathbb{S}^3$. } \end{defi} \begin{rem} {\em When $\alpha=\beta$, these isometries are usually called {\em Clifford translations}. In this case, the orbits are all great circles, and they are equidistant from each other. In fact, the orbits of this action coincide with the fibers of the Hopf fibration $h:\mathbb{S}^3\to\mathbb{S}^2$. We note that, when $\alpha=-\beta$, these isometries are also, up to a rotation in $\mathbb{S}^3$, Clifford translations. For this reason we will consider in this paper only the cases $\alpha\neq\pm\beta$.} \end{rem} With these basic properties in mind, a helicoidal surface can be locally parametrized by \begin{equation}\label{eq:param-helicoidal} X(t,s) = \phi_{\alpha,\beta}(t)\cdot\gamma(s), \end{equation} where $\gamma:I\subset\mathbb{R}\to\mathbb{S}^2_+$ is a curve parametrized by the arc length, called the {\em profile curve} of the parametrization $X$. Here, $\mathbb{S}^2_+$ is the half totally geodesic sphere of $\mathbb{S}^3$ given by \[ \mathbb{S}^2_+ = \left\{(x_1, x_2, x_3, 0)\in\mathbb{S}^3: x_3>0\right\}. \] Then we have \[ \begin{array}{rcl} X_t &=& \phi_{\alpha,\beta}(t)\cdot(-\alpha x_2,\alpha x_1, 0,\beta x_3), \\ X_s &=& \phi_{\alpha,\beta}(t)\cdot\gamma'(s).
\end{array} \] Moreover, a unit normal vector field associated to the parametrization $X$ is given by $N=\tilde N/ \|\tilde N\|$, where $\tilde N$ is explicitly given by \begin{equation}\label{normal-field} \tilde{N} = \phi_{\alpha,\beta}(t)\cdot \big(\beta x_3(x_2'x_3-x_2 x_3'),\beta x_3(x_1x_3'-x_1'x_3), \beta x_3 (x_1'x_2-x_1 x_2'),-\alpha x_3' \big). \end{equation} Let us now consider a parametrization by the arc length of $\gamma$ given by \begin{equation} \label{eq:param-gamma} \gamma(s) = \big(\cos\varphi(s)\cos\theta(s),\cos\varphi(s)\sin\theta(s), \sin\varphi(s),0\big). \end{equation} We finish this section by discussing the flatness of helicoidal surfaces in $\mathbb{S}^3$. Recall that a simple way to obtain flat surfaces in $\mathbb{S}^3$ is by means of the Hopf fibration $h:\mathbb{S}^3\to\mathbb{S}^2$. More precisely, if $c$ is a regular curve in $\mathbb{S}^2$, then $h^{-1}(c)$ is a flat surface in $\mathbb{S}^3$ (cf. \cite{spivak}). Such surfaces are called {\em Hopf cylinders}. The next result provides a necessary and sufficient condition for a helicoidal surface, parametrized as in \eqref{eq:param-helicoidal}, to be flat. \begin{prop}\label{prop:HSF} A helicoidal surface locally parametrized as in \eqref{eq:param-helicoidal}, where $\gamma$ is given by \eqref{eq:param-gamma}, is a flat surface if and only if the following equation \begin{equation} \beta^2\varphi''\sin^3\varphi\cos\varphi - \beta^2(\varphi')^2 \sin^4\varphi +\alpha^2(\varphi')^4 \cos^4 \varphi = 0 \label{ode-helicoidal} \end{equation} \label{prop-ode-helicoidal} is satisfied. \end{prop} \begin{proof} Since $\phi_{\alpha,\beta}(t)\in O(4)$ and $\gamma$ is parametrized by the arc length, the coefficients of the first fundamental form are given by \[ \begin{array}{rcl} E &=& \alpha^2\cos^2\varphi + \beta^2\sin^2\varphi, \\ F &=& \alpha\theta'\cos^2\varphi,\\ G &=& (\varphi')^2 + (\theta')^2 \cos^2 \varphi =1.
\end{array} \] Moreover, the Gauss curvature $K$ is given by \[ \begin{array}{rcl} 4(EG - F^2)^2 K &=& E \left[ E_s G_s - 2 F_t G_s + (G_t)^2 \right] + G \left[E_t G_t - 2 E_t F_s + (E_s)^2 \right] \\ &&+ F (E_t G_s - E_s G_t - 2 E_s F_s + 4 F_t F_s - 2 F_t G_t ) \\ && - 2 (EG-F^2)(E_{ss} - 2 F_{st} + G_{tt} ). \end{array} \] Thus, it follows from the expression of $K$ and from the coefficients of the first fundamental form that the surface is flat if, and only if, \begin{equation}\label{eq:gauss} E_s (EG - F^2)_s - 2 (EG-F^2)E_{ss}=0. \end{equation} When $\alpha=\pm\beta$, the equation \eqref{eq:gauss} is trivially satisfied, regardless of the chosen curve $\gamma$. For the case $\alpha\neq\pm\beta$, since \[ EG-F^2 =\beta^2\sin^2\varphi+\alpha^2(\varphi')^2\cos^2\varphi, \] a straightforward computation shows that the equation \eqref{eq:gauss} is equivalent to \[ (\beta^2-\alpha^2)\big(\beta^2\varphi''\sin^3\varphi\cos\varphi -\beta^2(\varphi')^2\sin^4\varphi+\alpha^2(\varphi')^4\cos^4\varphi\big) = 0, \] and this concludes the proof. \end{proof} \section{The Bianchi-Spivak construction} A nice way to understand the fundamental equations of a flat surface $M$ in $\mathbb{S}^3$ is by parameters whose coordinate curves are asymptotic curves on the surface. As $M$ is flat, its intrinsic curvature vanishes identically. Thus, by the Gauss equation, the extrinsic curvature of $M$ is constant and equal to $-1$. In this case, as the extrinsic curvature is negative, it is well known that there exist Tschebycheff coordinates around every point. This means that we can choose local coordinates $(u,v)$ such that the coordinate curves are asymptotic curves of $M$ and these curves are parametrized by the arc length.
In this case, the first and second fundamental forms are given by \begin{eqnarray}\label{eq:forms} \begin{aligned} I &= du^2 + 2 \cos \omega dudv + dv^2, \\ II& = 2 \sin \omega du dv, \end{aligned} \end{eqnarray} for a certain smooth function $\omega$, usually called the {\em angle function}. This function $\omega$ has two basic properties. The first one is that, as $I$ is regular, we must have $0<\omega<\pi$. Secondly, it follows from the Gauss equation that $\omega_{uv}=0$. In other words, $\omega$ satisfies the homogeneous wave equation, and thus it can be locally decomposed as $\omega(u,v) = \omega_1(u) + \omega_2(v)$, where $\omega_1$ and $\omega_2$ are smooth real functions (cf. \cite{galvez1} and \cite{spivak} for further details). \vspace{.2cm} Given a flat isometric immersion $f:M\to\mathbb{S}^3$ and a local smooth unit normal vector field $N$ along $f$, let us consider coordinates $(u,v)$ such that the first and the second fundamental forms of $M$ are given as in \eqref{eq:forms}. The aim of this work is to characterize the flat surfaces when the angle function $\omega$ is linear, i.e., when $\omega=\omega_1+\omega_2$ is given by \begin{eqnarray}\label{eq:linear} \omega_1(u) + \omega_2 (v) = \lambda_1 u + \lambda_2 v + \lambda_3 \end{eqnarray} where $\lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}$. In order to do this, let us first construct flat surfaces in $\mathbb{S}^3$ whose first and second fundamental forms are given by \eqref{eq:forms} and with linear angle function. This construction is due to Bianchi \cite{bianchi} and Spivak \cite{spivak}. \vspace{.2cm} We will use here the division algebra of the quaternions, a very useful approach to describe explicitly flat surfaces in $\mathbb{S}^3$. More precisely, we identify the sphere $\mathbb{S}^3$ with the set of the unit quaternions $\{q\in\mathrm{H}: q\overline q=1\}$ and $\mathbb{S}^2$ with the unit sphere in the subspace of $\mathrm{H}$ spanned by $1$, $i$ and $j$.
\begin{prop}[Bianchi-Spivak representation]\label{teo:BS} Let $c_a,c_b:I\subset\mathbb{R}\to\mathbb{S}^3$ be two curves parametrized by the arc length, with curvatures $\kappa_a$ and $\kappa_b$, and whose torsions are given by $\tau_a=1$ and $\tau_b=-1$. Suppose that $0 \in I$, $c_a(0)=c_b(0)=(1,0,0,0)$ and $c_a'(0) \wedge c_b '(0) \neq 0$. Then the map \[ X(u,v) = c_a (u) \cdot c_b (v) \] is a local parametrization of a flat surface in $\mathbb{S}^3$, whose first and second fundamental forms are given as in \eqref{eq:forms}, where the angle function satisfies $\omega_1'(u) = -\kappa_a (u)$ and $\omega_2'(v) =\kappa_b (v)$. \end{prop} Since the goal here is to find a parametrization such that $\omega$ can be written as in \eqref{eq:linear}, it follows from Proposition \ref{teo:BS} that the curves of the representation must have constant curvatures. Therefore, we will use the Frenet-Serret formulas in order to obtain curves with torsion $\pm 1$ and with constant curvatures. \vspace{.2cm} Given a real number $r>1$, let us consider the curve $\gamma_r:\mathbb{R}\to\mathbb{S}^3$ given by \begin{equation}\label{eq:base} \gamma_r(u) = \frac{1}{\sqrt{1+r^2}} \left(r\cos\frac{u}{r},r\sin\frac{u}{r}, \cos ru,\sin ru\right). \end{equation} A straightforward computation shows that $\gamma_r(u)$ is parametrized by the arc length, has constant curvature $\kappa=\frac{r^2-1}{r}$ and its torsion $\tau$ satisfies $\tau^2=1$. Observe that $\gamma_r(u)$ is periodic if and only if $r^2\in\mathbb Q$. When $r$ is a positive integer, $\gamma_r(u)$ is a closed curve of period $2\pi r$. A curve $\gamma$ as in \eqref{eq:base} will be called a {\em base curve}. \vspace{.2cm} Now we just have to apply rigid motions to a base curve in order to satisfy the remaining requirements of the Bianchi-Spivak construction.
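The stated properties of the base curve $\gamma_r$ can be double-checked symbolically. In the sketch below (ours, not part of the paper), the curvature of a unit-speed curve $\gamma$ in $\mathbb{S}^3$ is computed as $\kappa=\|\gamma''+\gamma\|$, the norm of the covariant acceleration $\nabla_{\gamma'}\gamma'=\gamma''-\langle\gamma'',\gamma\rangle\gamma=\gamma''+\gamma$; the torsion check is omitted.

```python
# Symbolic check (ours) that the base curve gamma_r of (eq:base) is
# unit speed and has constant curvature (r^2 - 1)/r in S^3.
import sympy as sp

u, r = sp.symbols('u r', positive=True)
g = sp.Matrix([r*sp.cos(u/r), r*sp.sin(u/r),
               sp.cos(r*u), sp.sin(r*u)]) / sp.sqrt(1 + r**2)

# gamma_r lies on S^3 and is parametrized by arc length
assert sp.simplify(g.dot(g) - 1) == 0
assert sp.simplify(g.diff(u).dot(g.diff(u)) - 1) == 0

# curvature in S^3: kappa^2 = |gamma'' + gamma|^2 (covariant acceleration)
acc = g.diff(u, 2) + g
kappa2 = sp.simplify(acc.dot(acc))
assert sp.simplify(kappa2 - ((r**2 - 1)/r)**2) == 0
```

The check confirms in particular that $\kappa$ is independent of $u$, which is what the Bianchi-Spivak construction needs to produce a linear angle function.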
It is easy to verify that the curves \begin{eqnarray}\label{eq:curves-condition} \begin{array}{rcl} c_a(u) &=& \dfrac{1}{\sqrt{1+a^2}}(a,0,-1,0) \cdot \gamma_a (u), \\ c_b(v) &=& \dfrac{1}{\sqrt{1+b^2}}T ( \gamma_b (v)) \cdot (b,0,0,-1), \end{array} \end{eqnarray} are base curves, and satisfy $c_a(0)=c_b(0)=(1,0,0,0)$ and $c_a'(0)\wedge c_b'(0)\neq0$, where \begin{equation} \begin{array}{rcl} T &=& \left( \begin{array}{cccc} 1&0&0 & 0\\ 0&1&0& 0 \\ 0&0&0&1\\ 0&0&1&0 \end{array} \right). \\ \end{array} \end{equation} Therefore we can establish our first main result: \begin{thm}\label{teo:main1} The map $X:U\subset\mathbb{R}^2\to\mathbb{S}^3$ given by \[ X(u,v) = c_a(u) \cdot c_b(v), \] where $c_a$ and $c_b$ are the curves given in \eqref{eq:curves-condition}, is a parametrization of a flat surface in $\mathbb{S}^3$, whose first and second fundamental forms are given by \[ \begin{array}{rcl} I &=& du^2 +2 \cos \left( \left( \frac{1-a^2}{a} \right) u + \left( \frac{b^2-1}{b} \right) v + c \right) du dv + dv^2, \\ II &=& 2 \sin \left(\left( \frac{1-a^2}{a} \right) u + \left( \frac{b^2-1}{b} \right) v + c \right) du dv, \end{array} \] where $c$ is a constant. Moreover, up to rigid motions, $X$ is invariant under helicoidal motions. \end{thm} \begin{proof} The statement about the fundamental forms follows directly from the Bianchi-Spivak construction. For the second statement, note that the parametrization $X(u,v)$ can be written as \[ X(u,v) = g_a \cdot Y(u,v) \cdot g_b, \] where \[ \begin{array}{rcl} g_a &=& \dfrac{1}{\sqrt{1+a^2}}(a,0,-1,0), \\ g_b &=& \dfrac{1}{\sqrt{1+b^2}}(b,0,0,-1), \end{array} \] and \[ Y(u,v) = \gamma_a(u) \cdot T(\gamma_b(v)). \] To conclude the proof, it suffices to show that $Y(u,v)$ is invariant by helicoidal motions. To do this, we have to find $\alpha$ and $\beta$ such that \[ \phi_{\alpha,\beta}(t)\cdot Y(u,v)=Y\big(u(t),v(t)\big), \] where $u(t)$ and $v(t)$ are smooth functions. 
Observe that $Y(u,v)$ can be written as \begin{eqnarray}\label{eq:expY} Y(u,v) = \dfrac{1}{\sqrt{(1+a^2)(1+b^2)}} (y_1, y_2, y_3, y_4), \end{eqnarray} where \begin{equation*} \begin{array}{rcl} y_1(u,v) &=& ab \cos \left(\dfrac{u}{a} + \dfrac{v}{b} \right) - \sin (au+ bv), \\ y_2(u,v) &=& ab \sin \left(\dfrac{u}{a} + \dfrac{v}{b} \right) + \cos (au+ bv), \\ y_3(u,v) &=& b \cos \left(au- \dfrac{v}{b} \right) - a\sin\left(\dfrac{u}{a}-bv\right), \\ y_4(u,v) &=& b\sin\left(au-\dfrac{v}{b} \right)+a\cos\left(\dfrac{u}{a}-bv\right). \\ \end{array} \label{parametrizacao-y} \end{equation*} A straightforward computation shows that if $\phi_{\alpha,\beta}(t)$ is given by \eqref{eq:movhel}, we have \[ u(t) =u+z(t) \quad\textrm{and}\quad v(t)=v+w(t), \] where \begin{eqnarray}\label{eq:z(t)w(t)} z(t)=\frac{a(b^2-1)}{a^2b^2-1}\beta t \quad\text{and}\quad w(t)=\frac{b(1-a^2)}{a^2b^2-1}\beta t, \end{eqnarray} with \begin{eqnarray}\label{eq:alpha-beta} \alpha=\dfrac{b^2-a^2}{a^2b^2-1}\beta, \end{eqnarray} showing that $Y(u,v)$ is invariant by helicoidal motions. Observe that when $a=\pm b$ we have $\alpha=0$, i.e., $X$ is a rotational surface in $\mathbb{S}^3$. \end{proof} \begin{rem} {\em It is important to note that the constants $a$ and $b$ in \eqref{eq:curves-condition} were considered in $(1,+\infty)$ in order to obtain non-zero constant curvatures with well-defined torsions, and then to apply the Bianchi-Spivak construction. This is not a strong restriction since the curvature function $\kappa(t)=\frac{t^2-1}{t}$ assumes all values in $\mathbb{R} \setminus\{0\}$ when $t \in (1, + \infty)$.
However, by taking $a=1$ and $b>1$ in \eqref{eq:curves-condition}, a long but straightforward computation gives a unit normal vector field \[ N(u,v) = \dfrac{1}{\sqrt{2(1+b^2)}} (n_1, n_2, n_3, n_4), \] where \[ \begin{array}{rcl} n_1(u,v) &=& -b \sin \left(u + \dfrac{v}{b} \right) + \cos (u+ bv), \\ n_2(u,v) &=& b \cos \left(u + \dfrac{v}{b} \right) + \sin (u+ bv), \\ n_3(u,v) &=& b \sin \left(u- \dfrac{v}{b} \right) - \cos\left(u-bv\right), \\ n_4(u,v) &=& -b\cos \left(u-\dfrac{v}{b} \right) - \sin\left(u-bv\right). \\ \end{array} \] One then shows that this parametrization is also by asymptotic lines, where the angle function is given by $\omega(u,v) =\frac{1-b^2}{b}v -\frac{\pi}{2}$. Moreover, this is a parametrization of a Hopf cylinder, since the unit normal vector field $N$ makes a constant angle with the Hopf vector field (see Section 4).} \end{rem} We will use the parametrization $Y(u,v)$ given in \eqref{eq:expY}, composed with the stereographic projection into $\mathbb{R}^3$, to visualize some examples with the corresponding constants $a$ and $b$.
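Before turning to the pictures, the invariance claim in the proof of Theorem \ref{teo:main1} can be tested numerically. The snippet below (ours, not from the paper) builds $Y(u,v)$ and $\phi_{\alpha,\beta}(t)$ for the sample values $a=2$, $b=3$ and checks $\phi_{\alpha,\beta}(t)\cdot Y(u,v)=Y(u+z(t),v+w(t))$ with $z$, $w$, $\alpha$ as in \eqref{eq:z(t)w(t)} and \eqref{eq:alpha-beta}.

```python
# Numerical check (ours) of the helicoidal invariance of Y(u, v).
import numpy as np

def Y(u, v, a, b):
    """The map Y(u,v) of (eq:expY), as a unit vector in R^4."""
    y1 = a*b*np.cos(u/a + v/b) - np.sin(a*u + b*v)
    y2 = a*b*np.sin(u/a + v/b) + np.cos(a*u + b*v)
    y3 = b*np.cos(a*u - v/b) - a*np.sin(u/a - b*v)
    y4 = b*np.sin(a*u - v/b) + a*np.cos(u/a - b*v)
    return np.array([y1, y2, y3, y4]) / np.sqrt((1 + a**2)*(1 + b**2))

def phi(t, alpha, beta):
    """The helicoidal motion phi_{alpha,beta}(t) of (eq:movhel)."""
    ca, sa = np.cos(alpha*t), np.sin(alpha*t)
    cb, sb = np.cos(beta*t), np.sin(beta*t)
    return np.array([[ca, -sa, 0, 0], [sa, ca, 0, 0],
                     [0, 0, cb, -sb], [0, 0, sb, cb]])

a, b, beta, t = 2.0, 3.0, 1.0, 0.7
alpha = (b**2 - a**2)/(a**2*b**2 - 1) * beta        # (eq:alpha-beta)
z = a*(b**2 - 1)/(a**2*b**2 - 1) * beta * t          # (eq:z(t)w(t))
w = b*(1 - a**2)/(a**2*b**2 - 1) * beta * t
u, v = 0.3, -1.1
assert np.allclose(phi(t, alpha, beta) @ Y(u, v, a, b), Y(u + z, v + w, a, b))
```

Changing the sample values of $a$, $b$, $t$, $u$, $v$ leaves the assertion valid, in agreement with the proof.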
\begin{figure}[!htb] \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura1.jpg} \end{minipage} \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura2.jpg} \end{minipage} \caption{$a=2$ and $b=3$.} \end{figure} \begin{figure}[!htb] \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura3.jpg} \end{minipage} \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura4.jpg} \end{minipage} \caption{$a=\sqrt2$ and $b=3$.} \end{figure} \begin{figure}[!htb] \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura5.jpg} \end{minipage} \begin{minipage}[]{0.45\linewidth} \includegraphics[width=\linewidth]{figura6.jpg} \end{minipage} \caption{$a=\sqrt3$ and $b=\sqrt2$.} \end{figure} \section{Constant angle surfaces} In this section we will complete our classification of helicoidal flat surfaces in $\mathbb{S}^3$ by establishing our second main theorem, which can be seen as a converse of Theorem \ref{teo:main1}. It is well known that the Hopf map $h:\mathbb{S}^3\to\mathbb{S}^2$ is a Riemannian submersion and the standard orthogonal basis of $\mathbb{S}^3$ \[ E_1(z,w)=i(z,w), \ \ E_2(z,w)=i(-\overline w,\overline z), \ \ E_3(z,w)=(-\overline w,\overline z) \] has the property that $E_1$ is vertical and $E_2$, $E_3$ are horizontal. The vector field $E_1$, usually called the Hopf vector field, is a unit Killing vector field. \vspace{.2cm} Constant angle surfaces in $\mathbb{S}^3$ are those whose unit normal vector field makes a constant angle with the Hopf vector field $E_1$. The next result states that, for a helicoidal surface in $\mathbb{S}^3$, flatness turns out to be equivalent to being a constant angle surface. \begin{prop}\label{prop:CAS} A helicoidal surface in $\mathbb{S}^3$, locally parametrized by \eqref{eq:param-helicoidal} and with the profile curve $\gamma$ parametrized by \eqref{eq:param-gamma}, is a flat surface if and only if it is a constant angle surface.
\end{prop} \begin{proof} Let us consider the Hopf vector field \[ E_1(x_1, x_2, x_3, x_4) = (-x_2, x_1, -x_4, x_3), \] and let us denote by $\nu$ the angle between $E_1$ and the normal vector field $N$ along the surface given in \eqref{normal-field}. Along the parametrization \eqref{eq:param-helicoidal}, we can write the vector field $E_1$ as \[ E_1(X(t,s)) = \phi_{\alpha,\beta}(t)(-x_2, x_1,0,x_3). \] Then, since $\phi_{\alpha,\beta}(t)\in O(4)$, we have \[ \langle N,E_1\rangle(t,s)=\langle N,E_1\rangle(s)= (\beta-\alpha) \frac{x_3 x_3'}{\sqrt{\beta^2x_3^2 + \alpha^2 (x_3')^2}}. \] By considering the parametrization \eqref{eq:param-gamma} for the profile curve $\gamma$, the angle $\nu=\nu(s)$ between $N$ and $E_1$ is given by \begin{eqnarray}\label{eq:cosnu} \cos \nu (s) = (\beta-\alpha) \frac{\varphi' \sin\varphi \cos\varphi} {\sqrt{\beta^2\sin^2 \varphi + \alpha^2 (\varphi')^2 \cos^2 \varphi}}. \end{eqnarray} By taking the derivative in \eqref{eq:cosnu}, we have \[ \frac{d}{ds}(\cos\nu(s))=\frac{(\beta-\alpha)\big(\beta^2\varphi''\sin^3 \varphi\cos\varphi - \beta^2(\varphi')^2 \sin^4 \varphi + \alpha^2 (\varphi')^4 \cos^4 \varphi\big)} {\big(\beta^2\sin^2\varphi+\alpha^2(\varphi')^2 \cos^2 \varphi \big)^{\frac{3}{2}}}, \] and the conclusion follows from Proposition \ref{prop:HSF}. \end{proof} Given a number $\epsilon>0$, let us recall that the Berger sphere $\mathbb{S}^3_\epsilon$ is defined as the sphere $\mathbb{S}^3$ endowed with the metric \begin{eqnarray}\label{eq:berger} \langle X,Y\rangle_\epsilon=\langle X,Y\rangle+(\epsilon^2-1)\langle X,E_1\rangle\langle Y,E_1\rangle, \end{eqnarray} where $\langle,\rangle$ denotes the canonical metric of $\mathbb{S}^3$. We define constant angle surfaces in $\mathbb{S}^3_\epsilon$ in the same way as in $\mathbb{S}^3$. Constant angle surfaces in the Berger spheres were characterized by Montaldo and Onnis \cite{MO}.
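The derivative computation in the proof of Proposition \ref{prop:CAS} can be verified symbolically: writing $\cos\nu = N/\sqrt{D}$, the quotient rule gives the numerator $N'D - ND'/2$ over $D^{3/2}$, and this numerator must equal $(\beta-\alpha)$ times the left hand side of \eqref{ode-helicoidal}. The sympy sketch below is ours, not part of the paper.

```python
# Symbolic check (ours) of the derivative in the proof of Proposition
# prop:CAS: with
#   cos(nu) = N / sqrt(D),
#   N = (beta - alpha) phi' sin(phi) cos(phi),
#   D = beta^2 sin^2(phi) + alpha^2 phi'^2 cos^2(phi),
# the numerator N'D - N D'/2 of d/ds(cos nu) equals (beta - alpha) times
# the left hand side of the flatness ODE (ode-helicoidal).
import sympy as sp

s, alpha, beta = sp.symbols('s alpha beta')
phi = sp.Function('varphi')(s)
dphi = phi.diff(s)

N = (beta - alpha)*dphi*sp.sin(phi)*sp.cos(phi)
D = beta**2*sp.sin(phi)**2 + alpha**2*dphi**2*sp.cos(phi)**2
ode = (beta**2*phi.diff(s, 2)*sp.sin(phi)**3*sp.cos(phi)
       - beta**2*dphi**2*sp.sin(phi)**4
       + alpha**2*dphi**4*sp.cos(phi)**4)

numerator = N.diff(s)*D - N*D.diff(s)/2
assert sp.expand(numerator - (beta - alpha)*ode) == 0
```

The cancellation is purely algebraic (no trigonometric identities are needed), so `sp.expand` suffices.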
More precisely, if $M$ is a constant angle surface in the Berger sphere, with constant angle $\nu$, then there exists a local parametrization $F(u,v)$ given by \begin{eqnarray}\label{eq:paramMO} F(u,v)=A(v)b(u), \end{eqnarray} where \begin{eqnarray}\label{eq:curveb} b(u)=\big(\sqrt{c_1}\cos(\alpha_1u),\sqrt{c_1}\sin(\alpha_1u), \sqrt{c_2}\cos(\alpha_2u),\sqrt{c_2}\sin(\alpha_2u)\big) \end{eqnarray} is a geodesic curve in the torus $\mathbb{S}^1(\sqrt{c_1})\times\mathbb{S}^1(\sqrt{c_2})$, with \[ c_{1,2}=\frac{1}{2}\mp\frac{\epsilon\cos\nu}{2\sqrt{B}},\ \alpha_1=\frac{2B}{\epsilon}c_2, \ \alpha_2=\frac{2B}{\epsilon}c_1, \ B=1+(\epsilon^2-1)\cos^2\nu, \] and \begin{eqnarray}\label{eq:xi_i} A(v)=A(\xi,\xi_1,\xi_2,\xi_3)(v) \end{eqnarray} is a $1$-parameter family of $4\times4$ orthogonal matrices given by \[ A(v)=A(\xi)\cdot\tilde A(v), \] where \[ A(\xi)=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \sin\xi & \cos\xi \\ 0 & 0 & -\cos\xi & \sin\xi \end{array} \right) \] and \[ \tilde A(v)=\left( \begin{array}{rrrr} \cos\xi_1\cos\xi_2 & -\cos\xi_1\sin\xi_2 & \sin\xi_1\cos\xi_2 & -\sin\xi_1\sin\xi_3 \\ \cos\xi_1\sin\xi_2 & -\cos\xi_1\cos\xi_2 & \sin\xi_1\sin\xi_3 & \sin\xi_1\cos\xi_3 \\ -\sin\xi_1\cos\xi_3 & \sin\xi_1\sin\xi_3 & \cos\xi_1\cos\xi_2 & \cos\xi_1\sin\xi_2 \\ \sin\xi_1\sin\xi_3 & -\sin\xi_1\cos\xi_3 & -\cos\xi_1\sin\xi_2 & \cos\xi_1\cos\xi_2 \end{array} \right), \] $\xi$ is a constant and the functions $\xi_i(v)$, $1\leq i\leq 3$, satisfy \begin{eqnarray}\label{eq:xis} \cos^2(\xi_1(v))\xi_2'(v)-\sin^2(\xi_1(v))\xi_3'(v)=0. \end{eqnarray} In the next result we obtain another relation between the functions $\xi_i$, given in \eqref{eq:xi_i}, and the angle function $\nu$.
\begin{prop} {\em The functions $\xi_i(v)$, given in \eqref{eq:xi_i}, satisfy the following relation: \begin{eqnarray}\label{eq:relxi_inu} (\xi_1'(v))^2+(\xi_2'(v))^2\cos^2(\xi_1(v))+(\xi_3'(v))^2\sin^2(\xi_1(v))= \sin^2\nu, \end{eqnarray} where $\nu$ is the angle function of the surface $M$.} \end{prop} \begin{proof} With respect to the parametrization $F(u,v)$, given in \eqref{eq:paramMO}, we have \[ F_v=A'(v)\cdot b(u)=A(\xi)\cdot\tilde A'(v)\cdot b(u). \] We have $\langle F_v,F_v\rangle=\sin^2\nu$ (cf. \cite{MO}). On the other hand, if we denote by $c_1$, $c_2$, $c_3$, $c_4$ the columns of $\tilde A$, we have \[ \langle F_v,F_v\rangle\vert_{u=0}=g_{11}\langle c_1',c_1'\rangle+g_{33}\langle c_3',c_3'\rangle. \] As $\langle c_1',c_1'\rangle=\langle c_3',c_3'\rangle$, $\langle c_1',c_3'\rangle=0$ and $g_{11}+g_{33}=1$, a straightforward computation gives \begin{eqnarray*} \sin^2\nu &=& \langle F_v,F_v\rangle = (g_{11}+g_{33})\langle c_1',c_1'\rangle \\ &=& (\xi_1'(v))^2+(\xi_2'(v))^2\cos^2(\xi_1(v))+(\xi_3'(v))^2\sin^2(\xi_1(v)), \end{eqnarray*} and we conclude the proof. \end{proof} \begin{thm} Let $M$ be a helicoidal flat surface in $\mathbb{S}^3$, locally parametrized by \eqref{eq:param-helicoidal}, and whose profile curve $\gamma$ is given by \eqref{eq:param-gamma}. Then $M$ admits a new local parametrization such that the fundamental forms are given as in \eqref{eq:forms} and $\omega$ is a linear function. \end{thm} \begin{proof} Consider the unit normal vector field $N$ associated to the local parametrization $X$ of $M$ given in \eqref{eq:param-helicoidal}. From Proposition \ref{prop:CAS}, the angle between $N$ and the Hopf vector field $E_1$ is constant. Hence, it follows from \cite{MO} (Theorem 3.1) that $M$ can be locally parametrized as in \eqref{eq:paramMO}. By taking $\epsilon=1$ in \eqref{eq:berger}, we can reparametrize the curve $b$ given in \eqref{eq:curveb} in such a way that the new curve is a base curve $\gamma_a$.
In fact, by taking $\epsilon=1$, we obtain $B=1$, and so $\alpha_1=2c_2$ and $\alpha_2=2c_1$. This implies that $\|b'(u)\|=2\sqrt{c_1c_2}$, because $c_1+c_2=1$. Thus, by writing $s=2\sqrt{c_1c_2}\,u$, the new parametrization of $b$ is given by \[ b(s)=\frac{1}{\sqrt{1+a^2}} \left(a\cos\frac{s}{a},a\sin\frac{s}{a}, \cos(as),\sin(as)\right), \] where $a=\sqrt{c_1/c_2}$. On the other hand, we have \[ A(v)\cdot b(s)=A(\xi)X(v,s), \] where $X(v,s)$ can be written as \[ X(v,s)=\frac{1}{\sqrt{1+a^2}}(x_1,x_2,x_3,x_4), \] with \begin{eqnarray}\label{eq:coef_xi} \begin{array}{rcl} x_1 &=& a\cos\xi_1\cos\left(\dfrac{s}{a}+\xi_2\right)+\sin\xi_1\cos(as+\xi_3), \\ x_2 &=& a\cos\xi_1\sin\left(\dfrac{s}{a}+\xi_2\right)+\sin\xi_1\sin(as+\xi_3), \\ x_3 &=& -a\sin\xi_1\cos\left(\dfrac{s}{a}-\xi_3\right)+\cos\xi_1\cos(as-\xi_2), \\ x_4 &=& -a\sin\xi_1\sin\left(\dfrac{s}{a}-\xi_3\right)+\cos\xi_1\sin(as-\xi_2). \\ \end{array} \end{eqnarray} On the other hand, the product $\phi_{\alpha,\beta}(t)\cdot X(v,s)$ can be written as \[ \phi_{\alpha,\beta}(t)\cdot X(v,s)=\frac{1}{\sqrt{1+a^2}}(z_1,z_2,z_3,z_4), \] where \begin{eqnarray}\label{eq:coef_zi} \begin{array}{rcl} z_1 &=& a\cos\xi_1\cos\left(\dfrac{s}{a}+\xi_2+\alpha t\right) +\sin\xi_1\cos\left(as+\xi_3+\alpha t\right), \\ z_2 &=& a\cos\xi_1\sin\left(\dfrac{s}{a}+\xi_2+\alpha t\right) +\sin\xi_1\sin\left(as+\xi_3+\alpha t\right), \\ z_3 &=& -a\sin\xi_1\cos\left(\dfrac{s}{a}-\xi_3+\beta t\right) +\cos\xi_1\cos\left(as-\xi_2+\beta t\right), \\ z_4 &=& -a\sin\xi_1\sin\left(\dfrac{s}{a}-\xi_3+\beta t\right) +\cos\xi_1\sin\left(as-\xi_2+\beta t\right). 
\end{array} \end{eqnarray} As the surface is helicoidal, we have \[ \phi_{\alpha,\beta}(t)\cdot X(v,s)=X(v(t),s(t)), \] for some smooth functions $v(t)$ and $s(t)$, which satisfy the following equations: \begin{eqnarray} \xi_2(v(t)) + \dfrac{s(t)}{a} = \xi_2(v)+ \dfrac{s}{a}+\alpha t, \label{eq:xi2-alpha} \\ \xi_3(v(t)) + a s(t) = \xi_3 (v) + a s + \alpha t, \label{eq:xi3-alpha} \\ \dfrac{s(t)}{a} - \xi_3(v(t)) = \dfrac{s}{a} - \xi_3(v) + \beta t, \label{eq:xi3-beta} \\ a s(t) - \xi_2(v(t)) = as - \xi_2(v) + \beta t. \label{eq:xi2-beta} \end{eqnarray} It follows directly from \eqref{eq:xi2-alpha} and \eqref{eq:xi2-beta} that \begin{equation} s(t) = s + \dfrac{a(\alpha+\beta)}{a^2+1} t. \label{eq:s(t)} \end{equation} Note that the same conclusion is obtained by using \eqref{eq:xi3-alpha} and \eqref{eq:xi3-beta}. By substituting the expression of $s(t)$ given in \eqref{eq:s(t)} into the equations \eqref{eq:xi2-alpha} -- \eqref{eq:xi2-beta}, one has \begin{eqnarray} \xi_2(v(t))= \xi_2(v) + \left( \dfrac{a^2 \alpha - \beta}{a^2+1} \right) t, \label{eq:xi2(v(t))} \\ \xi_3(v(t))= \xi_3(v) + \left( \dfrac{\alpha - a^2 \beta}{a^2+1} \right) t. \label{eq:xi3(v(t))} \end{eqnarray} From now on we assume that $v'(t)\neq0$ since, otherwise, we would have \[ \frac{s(t)}{a}=\frac{s}{a}+\alpha t=\frac{s}{a}+\beta t \quad\text{and}\quad as(t)=as+\alpha t=as+\beta t. \] But the equalities above imply that $a^2=1$, which contradicts the definition of base curve in \eqref{eq:base}. Thus, it follows from \eqref{eq:xi2(v(t))} and \eqref{eq:xi3(v(t))} that \begin{eqnarray}\label{eq:xi_2exi_3} \xi_2'=\frac{a^2 \alpha-\beta}{a^2+1}\cdot\frac{1}{v'} \quad\text{and}\quad \xi_3'=\frac{\alpha-a^2 \beta}{a^2+1}\cdot\frac{1}{v'}. \end{eqnarray} Therefore, from \eqref{eq:xis} and \eqref{eq:xi_2exi_3} we obtain \begin{eqnarray}\label{eq:relxi_1} \cos^2(\xi_1(v))(a^2\alpha-\beta)=\sin^2(\xi_1(v))(\alpha-a^2\beta). 
\end{eqnarray} As $a>1$, one has $a^2\alpha-\beta\neq0$ or $\alpha-a^2\beta\neq0$, and we conclude from \eqref{eq:relxi_1} that $\xi_1(v)$ is constant. In this case, there is a constant $b>1$ such that $\cos^2\xi_1=\dfrac{b^2}{1+b^2}$ and $\sin^2 \xi_1 = \dfrac{1}{1+b^2}$. Therefore, it follows from \eqref{eq:xis} that \begin{eqnarray}\label{eq:xi2-xi3} \xi_2(v)= \dfrac{1}{b^2} \xi_3(v)+d, \end{eqnarray} for some constant $d$. On the other hand, if $\cos\xi_1\neq0$, it follows from \eqref{eq:xis} that \begin{eqnarray}\label{eq:xi_2xi_3} (\xi_2'(v))^2=\tan^4\xi_1\cdot(\xi_3'(v))^2. \end{eqnarray} By substituting \eqref{eq:xi_2xi_3} in \eqref{eq:relxi_inu} we obtain \[ \tan^2\xi_1\cdot(\xi_3'(v))^2=\sin^2\nu, \] and this implies that we can choose $\xi_3(v)=bv$, and from \eqref{eq:xi3(v(t))} we obtain \begin{eqnarray}\label{eq:v(t)} v(t)=v+\frac{\alpha-a^2\beta}{b(a^2+1)}t. \end{eqnarray} Moreover, from \eqref{eq:xi2-xi3}, the equation $\xi_2(v(t)) = \dfrac{1}{b^2} \xi_3(v(t))+d$ implies that \begin{eqnarray}\label{eq:b-alpha-beta} \dfrac{1}{b^2} = \dfrac{a^2 \alpha - \beta}{\alpha-a^2 \beta}, \end{eqnarray} and from \eqref{eq:b-alpha-beta} we obtain the same relation \eqref{eq:alpha-beta} between $\alpha$ and $\beta$. This relation, when substituted in \eqref{eq:s(t)} and \eqref{eq:v(t)}, gives \[ s(t)=s+\frac{a(b^2-1)}{a^2b^2-1}\beta t \quad\text{and}\quad v(t)=v+\frac{b(1-a^2)}{a^2b^2-1}\beta t, \] which coincide with the expressions in \eqref{eq:z(t)w(t)}. Finally, from the relation \eqref{eq:xi2-xi3} we obtain $\xi_2(v)=\frac{v}{b}-\frac{\pi}{2}$. By taking $\xi=\frac{\pi}{2}$ and $\xi_1(v)=\arcsin\left(\frac{1}{\sqrt{1+b^2}}\right)$, the new parametrization $F(u,v)$ thus obtained coincides with $Y(u,v)$ given in \eqref{eq:expY}, up to isometries of $\mathbb{S}^3$ and linear reparametrization. The conclusion follows from Theorem \ref{teo:main1}. 
\end{proof} \section{Conformally flat hypersurfaces} In this section, we present an application of the classification result for helicoidal flat surfaces in $\mathbb{S}^3$ to a geometric description of conformally flat hypersurfaces in four-dimensional space forms. The problem of classifying conformally flat hypersurfaces in space forms has been investigated for a long time, with special attention to $4$-dimensional space forms. In fact, any surface in $\mathbb{R}^3$ is conformally flat, since it can be parametrized by isothermal coordinates. On the other hand, Cartan \cite{Cartan} gave a complete classification of conformally flat hypersurfaces in an $(n+1)$-dimensional space form, with $n+1\geq5$. Such hypersurfaces are quasi-umbilic, i.e., one of the principal curvatures has multiplicity at least $n-1$. In the same paper, Cartan showed that quasi-umbilic hypersurfaces are conformally flat, but the converse does not hold. Since then, there has been an effort to obtain a classification of hypersurfaces with three distinct principal curvatures. Lafontaine \cite{Lafontaine} considered hypersurfaces of type $M^3 = M^2 \times I\subset\mathbb{R}^4$ and obtained the following classes of conformally flat hypersurfaces: (a) $M^3$ is a cylinder over a surface $M^2 \subset \mathbb{R}^3$ of constant curvature; (b) $M^3$ is a cone over a surface $M^2 \subset\mathbb{S}^3$ of constant curvature; (c) $M^3$ is obtained by rotating a constant curvature surface $M^2 \subset \mathbb{H}^3 \subset \mathbb{R}^4$ of the hyperbolic space, where $\mathbb{H}^3$ is the half space model (see \cite{Suyama2} for more details). 
Hertrich-Jeromin \cite{Jeromin1} established a correspondence between conformally flat hypersurfaces in space forms, with three distinct principal curvatures, and solutions $(l_1, l_2, l_3) : U \subset \mathbb{R}^3 \rightarrow \mathbb{R}^3$ of Lam\'e's system \cite{reflame} \begin{equation} \begin{array}{rcl} l_{i,x_jx_k} - \dfrac{l_{i,x_j} l_{j,x_k}}{l_j} - \dfrac{l_{i,x_k} l_{k,x_j}}{l_k} &=& 0, \\ \left( \dfrac{l_{i,x_j}}{l_j} \right)_{,x_j} + \left( \dfrac{l_{j,x_i}}{l_i} \right)_{,x_i} + \dfrac{l_{i,x_k} l_{j,x_k}}{l_k^2} &=& 0, \end{array} \label{lame} \end{equation} where $i, j, k$ are distinct indices, satisfying the condition \begin{equation} \label{guich} l_1^2 - l_2^2 + l_3^2 = 0 \end{equation} known as the Guichard condition. In this case, the corresponding conformally flat hypersurface in $M^4_K$ is parametrized by curvature lines, with induced metric given by \[ g = e^{2u} \left\{ l_1^2 (dx_1)^2 + l_2^2 (dx_2)^2 + l_3^2 (dx_3)^2 \right\}. \] In \cite{santos} the second author and Tenenblat obtained solutions of Lam\'e's system (\ref{lame}) that are invariant under symmetry groups. 
Among the solutions, there are those that are invariant under the action of the 2-dimensional subgroup of translations and dilations and depend only on two variables: \begin{enumerate} \item[(a)] $l_1 = \lambda_1, \; l_2 = \lambda_1 \cosh (b\xi + \xi_0), \; l_3 = \lambda_1 \sinh (b\xi + \xi_0)$, where $\xi = \alpha_2 x_2 + \alpha_3 x_3$, $\alpha_2^2 + \alpha_3^2 \neq 0$ and $b,\, \xi_0\in \mathbb{R}$ ; \item[(b)] $ l_2 = \lambda_2, \; l_1 = \lambda_2 \cos \varphi(\xi) , \; l_3 = \lambda_2 \sin \varphi(\xi) $, where $\xi = \alpha_1 x_1 + \alpha_3 x_3$, $\alpha_1^2 + \alpha_3^2 \neq 0$ and $\varphi$ is one of the following functions: \begin{enumerate} \item[(b.1)] $\varphi(\xi) = b \xi + \xi_0$, if $\alpha_1^2 \neq \alpha_3^2$, where $\xi_0, \,b\in \mathbb{R}$; \item[(b.2)] $\varphi$ is any function of $\xi$, if $\alpha_1^2 = \alpha_3^2$; \end{enumerate} \item[(c)] $l_3 = \lambda_3, \; l_2 = \lambda_3 \cosh (b\xi + \xi_0), \; l_1 = \lambda_3 \sinh (b\xi + \xi_0)$, where $\xi = \alpha_1 x_1 + \alpha_2 x_2$, $\alpha_1^2 + \alpha_2^2 \neq 0$ and $b,\,\xi_0\in\mathbb{R} $. \end{enumerate} It is known (see \cite{Suyama2}) that the solutions that do not depend on one of the variables are associated to the products given by Lafontaine. For the solutions given in (b), a further geometric description can be obtained from the classification result for helicoidal flat surfaces in $\mathbb{S}^3$. 
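A one-line computation confirms that the families above satisfy the Guichard condition \eqref{guich}; the following snippet (our own check, with arbitrary test values) makes it explicit:

```python
import math

# Check the Guichard condition l1^2 - l2^2 + l3^2 = 0 for the families above;
# lam, b, xi, xi0 are arbitrary test values.
lam, b, xi, xi0 = 1.3, 2.0, 0.8, 0.1

# family (a): l1 = lam, l2 = lam*cosh(b*xi + xi0), l3 = lam*sinh(b*xi + xi0),
# so l1^2 - l2^2 + l3^2 = lam^2 * (1 - cosh^2 + sinh^2) = 0
l1, l2, l3 = lam, lam * math.cosh(b * xi + xi0), lam * math.sinh(b * xi + xi0)
assert abs(l1 ** 2 - l2 ** 2 + l3 ** 2) < 1e-9

# family (b): l2 = lam, l1 = lam*cos(phi), l3 = lam*sin(phi),
# so l1^2 - l2^2 + l3^2 = lam^2 * (cos^2 - 1 + sin^2) = 0
phi = b * xi + xi0
l1, l2, l3 = lam * math.cos(phi), lam, lam * math.sin(phi)
assert abs(l1 ** 2 - l2 ** 2 + l3 ** 2) < 1e-9
```

Family (c) is identical to (a) with the roles of $l_1$ and $l_3$ exchanged.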
These solutions are associated to conformally flat hypersurfaces that are conformal to the products $M^2 \times I \subset \mathbb{R}^4$ given by \[ M^2 \times I = \left\{ tp:0<t<\infty, p\in M^2 \subset \mathbb{S}^3 \right\}, \] where $M^2$ is a flat surface in $\mathbb{S}^3$, parametrized by lines of curvature, whose first and second fundamental forms are given by \begin{equation} \begin{array}{rcl} \label{firstffsphere} I &=& \sin^2 (\xi + \xi_0) dx_1^2 + \cos^2 (\xi + \xi_0) dx_3^2, \\ II &=& \sin (\xi + \xi_0) \cos (\xi + \xi_0) (dx_1^2 - dx_3^2), \end{array} \end{equation} which are, up to a linear change of variables, the fundamental forms that are considered in this paper. Therefore, as an application of the characterization of helicoidal flat surfaces in terms of first and second fundamental forms, one has the following theorem: \begin{thm} \label{characterization.helicoidal} Let $ l_2 = \lambda_2, \; l_1 = \lambda_2 \cos (\xi + \xi_0) , \; l_3 = \lambda_2 \sin (\xi + \xi_0) $ be a solution of Lam\'e's system, where $\xi = \alpha_1 x_1 + \alpha_3 x_3$ and $\alpha_1, \, \alpha_3, \, \lambda_2, \, \xi_0$ are real constants with $\alpha_1\cdot\alpha_3 \neq 0$. Then the associated conformally flat hypersurfaces are conformal to the product $M^2 \times I$, where $M^2 \subset \mathbb{S}^3$ is locally congruent to a helicoidal flat surface. \end{thm} \bibliographystyle{acm}
\section{Introduction} The Regularity Lemma of Szemer\'edi \cite{Sz} has proved to be a very useful tool in graph theory. It was initially developed as an auxiliary lemma to prove a long-standing conjecture of Erd\H{o}s and Tur\'{a}n \cite{ET} on arithmetic progressions, which stated that sequences of integers with positive upper density must contain arbitrarily long arithmetic progressions. Now the Regularity Lemma by itself has become an important tool and found numerous other applications (see \cite{KSSS}). Based on the Regularity Lemma and the Blow-up Lemma \cite{KSS5}, the Regularity method has been developed, which has been quite successful in a number of applications in graph theory (e.g. \cite{GRSS1}). However, one major disadvantage of these applications and the Regularity Lemma is that they are mainly theoretical: they work only for astronomically large graphs, as the Regularity Lemma can be applied only to such large graphs. Indeed, to find the $\varepsilon$-regular partition in the Regularity Lemma, the number of vertices must be a tower of 2's with height proportional to $\varepsilon^{-5}$. Furthermore, Gowers demonstrated \cite{G} that a tower bound is necessary. The basic content of the Regularity Lemma could be described by saying that every graph can, in some sense, be partitioned into random graphs. Since random graphs of a given edge density are much easier to treat than all graphs of the same edge density, the Regularity Lemma helps us to carry over results that are trivial for random graphs to the class of all graphs with a given number of edges. We are especially interested in harnessing the power of the Regularity Lemma for clustering data. Graph partitioning methods for clustering and segmentation have become quite popular in the past decade because of the ease of representing data with graphs and the strong theoretical underpinnings that accompany such methods. 
In this paper we propose a general methodology to make the Regularity Lemma more useful in practice. To make it truly applicable, instead of constructing a provably regular partition we construct an {\em approximately} regular partition. This partition behaves just like a regular partition (especially for graphs appearing in practice) and yet it does not require the large number of vertices as mandated by the original Regularity Lemma. Then this approximately regular partition is used for performing clustering. We call the resulting new clustering technique {\em Regularity clustering}. We present comparisons with standard clustering methods such as $k$-means and spectral clustering and the results are very encouraging. To present our attempt and the results obtained, the paper is organized as follows: In Section \ref{prior} we discuss briefly some prior attempts to apply the Regularity Lemma in practical settings and place our work in contrast to those. In Section \ref{clust} we discuss clustering in general and also present a popular spectral clustering algorithm that is used later on the reduced graph. We also point out possible ways to improve its running time. In Section \ref{not} we give some definitions and general notation. In Section \ref{reg} we present two constructive versions of the Regularity Lemma (the original lemma was non-constructive). Furthermore, in this section we point out the various problems arising when we attempt to apply the lemma in real-world applications. In Section \ref{mod} we discuss how the constructive Regularity Lemmas could be modified to make them truly applicable for real-world problems where the graphs typically are much smaller, say have a few thousand vertices only. In Section \ref{app} we show how this Practical Regularity partitioning algorithm can be applied to develop a new clustering technique. In Section \ref{test}, we present an extensive empirical validation of our method. 
Section \ref{future} discusses possible directions for future work. \section{Prior Applications of the Regularity Lemma}\label{prior} As we discussed above, so far the Regularity Lemma has been ``well beyond the realms of any practical applications" \cite{Abel}; the existing applications have been theoretical, {\em mathematical}. The only attempt at a practical application of the Regularity Lemma, to the best of our knowledge, is by Sperotto and Pelillo \cite{SP}, where they use the Regularity Lemma as a pre-processing step. They give some interesting ideas on how the Regularity Lemma might be used; however, they do not give many details. Taking cues from some of their ideas, we give a much more thorough analysis of the modifications needed in order to make the lemma applicable in practice. Furthermore, they only give results for the constructive version by Alon {\em et al.} \cite{AD}; here we implement the version proposed by Frieze and Kannan \cite{FK} as well. We also give a far more extensive empirical validation; we use 12 datasets instead of 3. \section{Clustering}\label{clust} Out of the various modern clustering techniques, spectral clustering has become one of the most popular. This is due not only to its superior performance over the traditional clustering techniques, but also to the strong theoretical underpinnings in spectral graph theory and its ease of implementation. It has many advantages over the more traditional clustering methods such as $k$-means and expectation maximization (EM). The most important is its ability to handle datasets that have arbitrarily shaped clusters. Methods such as $k$-means and EM are based on estimating explicit models of the data. Such methods fail spectacularly when the data is organized in very irregular and complex clusters. Spectral clustering, on the other hand, does not work by estimating explicit models of the data but by analysing the spectrum of the Graph Laplacian. 
This is useful as the top few eigenvectors can unfold the data manifold to form meaningful clusters. In this work we employ spectral clustering on the reduced graph (which is an essence of the original graph), even though any other pairwise clustering method could be used. The algorithm that we employ is due to Ng, Jordan and Weiss \cite{NJW}. Despite various advantages of spectral clustering, one major problem is that for large datasets it is very computationally intensive. And understandably this has received a lot of attention recently. As originally stated, the spectral clustering pipeline has two main bottlenecks: First, computing the affinity matrix of the pairwise distances between datapoints, and second, once we have the affinity matrix, finding the eigendecomposition. Many ways have been suggested to solve these problems more efficiently. One approach is not to use an all-connected graph but a $k$-nearest-neighbor graph in which each data point is typically connected to $\log{n}$ neighboring datapoints (where $n$ is the number of datapoints). This considerably speeds up the process of finding the affinity matrix; however, it has the drawback that by taking nearest neighbors we might miss something interesting in the global structure of the data. A method to remedy this is the Nystr\"{o}m method, which takes a random sample of the entire dataset (thus preserving the global structure in a sense) and then performs spectral clustering on this much smaller sample. The results are then extended to all other points in the data set \cite{FowlChung}. Our work is quite different from such methods. The speed-up is primarily in the second stage where eigendecomposition is to be done. The original graph is represented by a reduced graph which is much smaller and hence eigendecomposition of this reduced graph can significantly ease the computational load. 
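For concreteness, the Ng-Jordan-Weiss pipeline used here can be sketched in a few lines (an illustrative sketch of ours, not the implementation used in the experiments; the deterministic farthest-point-seeded $k$-means loop stands in for a production $k$-means):

```python
import numpy as np

def njw_spectral_clustering(W, k):
    """Sketch of the Ng-Jordan-Weiss algorithm on an affinity matrix W."""
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = Dinv @ W @ Dinv                     # normalized affinity D^{-1/2} W D^{-1/2}
    _, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    X = vecs[:, -k:]                        # top-k eigenvectors as columns
    Y = X / np.linalg.norm(X, axis=1, keepdims=True)    # row-normalize
    # deterministic farthest-point seeding, then a few Lloyd iterations
    centers = [Y[0]]
    for _ in range(1, k):
        d2 = np.min([((Y - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(Y[int(np.argmax(d2))])
    centers = np.array(centers)
    for _ in range(20):
        labels = np.argmin(((Y[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([Y[labels == j].mean(axis=0) for j in range(k)])
    return labels
```

In Regularity clustering this routine is run on the (small) reduced graph rather than on the full affinity matrix, which is where the speed-up comes from.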
Further work on a practical variant of the sparse Regularity Lemma could be useful in a speed-up in the first stage, too. \section{Notation and Definitions}\label{not} Below we introduce some notation and definitions for describing the Regularity Lemma and our methodology. Let $G = (V,E)$ denote a graph, where $V$ is the set of vertices and $E$ is the set of edges. When $A, B$ are disjoint subsets of $V$, the number of edges with one endpoint in $A$ and the other in $B$ is denoted by $e(A,B)$. When $A$ and $B$ are nonempty, we define the {\em density} of edges between $A$ and $B$ as $d(A,B) = \frac{e(A,B)}{|A||B|}$. The most important concept is the following. \begin{definition}\hspace{-6pt}{\bf.\ } The bipartite graph $G=(A,B,E)$ is $\varepsilon$-{\em regular} if for every $X\subset A$, $Y\subset B$ satisfying: $\ |X|>\varepsilon|A|,\ |Y|>\varepsilon|B|,$ we have $|d(X,Y)-d(A,B)|<\varepsilon,$ otherwise it is $\varepsilon$-{\em irregular}. \end{definition} \noindent Roughly speaking this means that in an $\varepsilon$-regular bipartite graph the edge density between {\em any} two relatively large subsets is about the same as the original edge density. In effect this implies that all the edges are distributed almost uniformly. \begin{definition}\hspace{-6pt}{\bf.\ }\label{defn} A partition $P$ of the vertex set $V=V_0\cup V_1\cup \ldots \cup V_k$ of a graph $G = (V,E)$ is called an {\em equitable partition} if all the classes $V_i, 1\leq i\leq k$, have the same cardinality. $V_0$ is called the exceptional class. \end{definition} \begin{definition}\hspace{-6pt}{\bf.\ }\label{potential} For an equitable partition $P$ of the vertex set $V=V_0\cup V_1\cup \ldots \cup V_k$ of $G = (V,E)$, we associate a measure called the {\em index} of $P$ (or the potential) which is defined by $$ind(P) = \frac{1}{k^2}\sum_{s=1}^k \sum_{t=s+1}^k d(V_s,V_t)^2.$$ \end{definition} This will measure the progress towards an $\varepsilon$-regular partition. 
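The definitions above translate directly into code; the following brute-force sketch (our own, usable only on tiny graphs since the regularity check enumerates all subsets) may help fix the notation. Here \texttt{classes} is the list of non-exceptional classes $V_1,\ldots,V_k$:

```python
from itertools import combinations

def density(adj, A, B):
    """Edge density d(A,B) between disjoint vertex sets; adj is a 0/1 matrix."""
    return sum(adj[a][b] for a in A for b in B) / (len(A) * len(B))

def is_eps_regular(adj, A, B, eps):
    """Brute-force eps-regularity test, straight from the definition.
    Exponential in |A| + |B|: for tiny illustrative graphs only."""
    d0 = density(adj, A, B)
    for ma in range(1, 2 ** len(A)):
        X = [a for i, a in enumerate(A) if ma >> i & 1]
        if len(X) <= eps * len(A):
            continue
        for mb in range(1, 2 ** len(B)):
            Y = [b for i, b in enumerate(B) if mb >> i & 1]
            if len(Y) > eps * len(B) and abs(density(adj, X, Y) - d0) >= eps:
                return False          # (X, Y) is a certificate of irregularity
    return True

def index_of_partition(adj, classes):
    """ind(P) = (1/k^2) * sum_{s<t} d(V_s, V_t)^2, the potential defined above."""
    k = len(classes)
    return sum(density(adj, classes[s], classes[t]) ** 2
               for s, t in combinations(range(k), 2)) / k ** 2
```

Note that $ind(P)\leq 1/2$ always, since there are fewer than $k^2/2$ pairs and each density is at most $1$; this bound drives the halting argument later on.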
\begin{definition}\hspace{-6pt}{\bf.\ } An equitable partition $P$ of the vertex set $V=V_0\cup V_1\cup \ldots \cup V_k$ of $G = (V,E)$ is called $\varepsilon$-{\em regular} if $|V_0| < \varepsilon |V| $ and all but $\varepsilon k^2$ of the pairs $(V_i,V_j)$ are $\varepsilon$-regular where $1 \leq i < j \leq k$. \end{definition} With these definitions we are now in a position to state the Regularity Lemma. \section{The Regularity Lemma}\label{reg} \begin{theorem}[Regularity Lemma \cite{Sz}]\hspace{-6pt}{\bf.\ }\label{p1} For every positive $\varepsilon > 0$ and positive integer $t$ there is an integer $T = T(\varepsilon,t)$ such that every graph with $n > T$ vertices has an $\varepsilon$-regular partition into $k+1$ classes, where $t \leq k \leq T$. \end{theorem} In applications of the Regularity Lemma the concept of the {\em reduced graph} plays an important role. \begin{definition}\hspace{-6pt}{\bf.\ }\label{reduced} Given an $\varepsilon$-regular partition of a graph $G = (V, E)$ as provided by Theorem \ref{p1}, we define the {\em reduced graph} $G^R$ as follows. The vertices of $G^R$ are associated to the classes in the partition and the edges are associated to the $\varepsilon$-regular pairs between classes with density above $d$. \end{definition} The most important property of the reduced graph is that many properties of $G$ are inherited by $G^R$. Thus $G^R$ can be treated as a representation of the original graph $G$ albeit with a much smaller size, an ``essence'' of $G$. Then if we run any algorithm on $G^R$ instead of $G$ we get a significant speed-up. \subsection{Algorithmic Versions of the Regularity Lemma} \label{RegAlgo} The original proof of the Regularity Lemma \cite{Sz} does not give a method to construct a regular partition but only shows that one must exist. To apply the Regularity Lemma in practical settings, we need a constructive version. Alon {\em et al.} \cite{AD} were the first to give an algorithmic version. 
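Constructing the reduced graph $G^R$ from a partition is straightforward once a regularity test is available; a sketch of ours follows, where \texttt{regular(s, t)} is a hypothetical caller-supplied test (e.g. a brute-force check on tiny graphs):

```python
from itertools import combinations

def reduced_graph(adj, classes, regular, d):
    """Sketch of building the reduced graph G^R: one vertex per partition
    class, an edge for each regular pair whose density exceeds the threshold d.
    `regular(s, t)` is a caller-supplied regularity test (hypothetical API)."""
    def density(A, B):
        return sum(adj[a][b] for a in A for b in B) / (len(A) * len(B))
    k = len(classes)
    edges = {(s, t) for s, t in combinations(range(k), 2)
             if regular(s, t) and density(classes[s], classes[t]) > d}
    return list(range(k)), edges
```

Any pairwise clustering algorithm can then be run on this much smaller graph in place of $G$.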
Since then a few other algorithmic versions have also been proposed \cite{FK}, \cite{KRT}. Below we present the details of the Alon {\em et al.} algorithm. \subsubsection{Alon {\em et al.} Version} \begin{theorem}[Algorithmic Regularity Lemma \cite{AD}]\hspace{-6pt}{\bf.\ }\label{al} For every $\varepsilon > 0$ and every positive integer $t$ there is an integer $T = T(\varepsilon, t)$ such that every graph with $n > T$ vertices has an $\varepsilon$-regular partition into $k + 1$ classes, where $t \le k \le T$. For every fixed $\varepsilon > 0$ and $t \ge 1$ such a partition can be found in $O(M(n))$ sequential time, where $M(n)$ is the time for multiplying two $n$ by $n$ matrices with $0, 1$ entries over the integers. The algorithm can be parallelized and implemented in $NC^1$. \end{theorem} This result is somewhat surprising from a computational complexity point of view since, as proved in \cite{AD}, the corresponding decision problem (checking whether a given partition is $\varepsilon$-regular) is co-NP-complete. Thus the search problem is easier than the decision problem. To describe this algorithm, we need a couple of lemmas. \begin{lemma}[Alon {\em et al.} \cite{AD}]\hspace{-6pt}{\bf.\ }\label{lnew1} Let $H$ be a bipartite graph with equally sized classes $|A| = |B| = n$. Let $2n^{-1/4} < \varepsilon <\frac{1}{16}$. There is an $O(M(n))$ algorithm that verifies that $H$ is $\varepsilon$-regular or finds two subsets $A' \subset A$, $B' \subset B$, $|A'| \ge \frac{{\varepsilon}^4}{16}n$, $|B'| \ge \frac{{\varepsilon}^4}{16}n$, such that $|d(A, B) - d(A', B')| \ge \varepsilon^4$. The algorithm can be parallelized and implemented in $NC^1$. \end{lemma} This lemma basically says that we can either verify that the pair is $\varepsilon$-regular or we provide certificates that it is not. The certificates are the subsets $A', B'$ and they help to proceed to the next step in the algorithm. 
The next lemma describes the procedure to do the refinement from these certificates. \begin{lemma}[Szemer\'edi \cite{Sz}]\hspace{-6pt}{\bf.\ }\label{lnew2} Let $G = (V,E)$ be a graph with $n$ vertices. Let $P$ be an equitable partition of the vertex set $V=V_0\cup V_1\cup \ldots \cup V_k$. Let $\gamma >0$ and let $k$ be a positive integer such that $4^k > 600\gamma^{-5}$. If more than $\gamma k^2$ pairs $(V_s, V_t)$, $1 \le s < t \le k$, are $\gamma$-irregular then there is an equitable partition $Q$ of $V$ into $1 + k4^k$ classes, with the cardinality of the exceptional class being at most $|V_0| + \frac{n}{4^k}$ and such that $ind(Q) > ind(P) + \frac{\gamma^5}{20}.$ \end{lemma} This lemma implies that whenever we have a partition that is not $\gamma$-regular, we can refine it into a new partition which has a better index (or potential) than the previous partition. The refinement procedure to do this is described below. {\bf Refinement Algorithm:} {\em Given a $\gamma$-irregular equitable partition $P$ of the vertex set $V=V_0\cup V_1\cup \ldots \cup V_k$ with $\gamma = \frac{\varepsilon^4}{16}$, construct a new partition $Q$.\\ For each pair $(V_s, V_t)$, $1 \leq s < t \leq k$, we apply Lemma \ref{lnew1} with $A=V_s$, $B=V_t$ and $\varepsilon$. If $(V_s, V_t)$ is found to be $\varepsilon$-regular we do nothing. Otherwise, the certificates partition $V_s$ and $V_t$ into two parts (namely the certificate and the complement). For a fixed $s$ we do this for all $t\not= s$. In $V_s$, these sets define the obvious equivalence relation with at most $2^{k-1}$ classes, namely two elements are equivalent if they lie in the same partition part for every $t\not= s$. The equivalence classes will be called atoms. Set $m = \lfloor \frac{|V_i|}{4^k}\rfloor$, $1 \le i \le k$. 
Then we construct our new partition $Q$ by choosing a maximal collection of pairwise disjoint subsets of $V$ such that every subset has cardinality $m$ and every atom $A$ contains exactly $\lfloor \frac{|A|}{m}\rfloor$ subsets; all other vertices are put in the exceptional class. The collection $Q$ is an equitable partition of $V$ into at most $1+k4^k$ classes and the cardinality of its exceptional class is at most $|V_0| + \frac{n}{4^k}$. } Now we are ready to present the main algorithm. {\bf Regular Partitioning Algorithm:} {\em Given a graph $G$ and $\varepsilon$, construct an $\varepsilon$-regular partition. \begin{enumerate} \item {\bf Initial partition:} Arbitrarily divide the vertices of $G$ into an equitable partition $P_1$ with classes $V_0, V_1, \ldots, V_b$, where $|V_1| = \lfloor \frac{n}{b} \rfloor$ and hence $|V_0| < b$. Denote $k_1 = b$. \item {\bf Check regularity:} For every pair $(V_s,V_t)$ of $P_i$, verify if it is $\varepsilon$-regular or find $X \subset V_s, Y \subset V_t, |X| \ge \frac{\varepsilon^4}{16}|V_s|,|Y| \ge \frac{\varepsilon^4}{16}|V_t|$, such that $|d(X,Y) - d(V_s,V_t)| \ge \varepsilon^4$. \item {\bf Count regular pairs:} If there are at most $\varepsilon k_i^2$ pairs that are not verified as $\varepsilon$-regular, then halt. $P_i$ is an $\varepsilon$-regular partition. \item {\bf Refinement:} Otherwise apply the Refinement Algorithm and Lemma \ref{lnew2}, where $P = P_i, k = k_i, \gamma = \frac{\varepsilon^4}{16}$, and obtain a partition $Q$ with $1 + k_i4^{k_i}$ classes. \item {\bf Iteration:} Let $k_{i+1} = k_i4^{k_i}, P_{i+1} = Q, i = i+1$, and go to step 2. \end{enumerate} } Since the index cannot exceed $1/2$, the algorithm must halt after at most $\lceil 10\gamma^{-5} \rceil$ iterations (see \cite{AD}). Unfortunately, in each iteration the number of classes increases to $k4^k$ from $k$. This implies that the graph $G$ must be indeed astronomically large (a tower function) to ensure the completion of this procedure. 
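To see concretely why the procedure is impractical, the halting bound just quoted can be evaluated directly (a small illustration of ours):

```python
import math

# The halting argument in numbers: the index is bounded above by 1/2 and each
# refinement round increases it by at least gamma^5/20, where gamma = eps^4/16,
# so the number of rounds is at most ceil(10 / gamma^5).
def max_refinement_rounds(eps):
    gamma = eps ** 4 / 16
    return math.ceil(10 / gamma ** 5)

# Even for a modest eps the bound is astronomical; since the number of classes
# is multiplied by 4^k in every round, the tower-type size requirement follows.
assert max_refinement_rounds(0.5) == 10 * 256 ** 5    # about 1.1 * 10^13 rounds
```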
As mentioned before, Gowers \cite{G} proved that indeed this tower function is necessary in order to guarantee an $\varepsilon$-regular partition for {\em all} graphs. The size requirement of the algorithm above makes it impractical for real world situations where the number of vertices typically is a few thousand. \subsubsection{Frieze-Kannan Version} The Frieze-Kannan constructive version is quite similar to the above, the only difference is how to check regularity of the pairs in Step 2. Instead of Lemma \ref{lnew1}, another lemma is used based on the computation of singular values of matrices. For the sake of completeness we present the details. \begin{lemma}[Frieze-Kannan \cite{FK}]\hspace{-6pt}{\bf.\ }\label{singularFK} Let $W$ be an $R \times C$ matrix with $|R| = p$ and $|C| = q$ and $\|W\|_\infty\leq 1$ and $\gamma$ be a positive real. \begin{enumerate} \item[a] If there exist $S \subseteq R, T \subseteq C $ such that $|S| \geq \gamma p, |T| \geq \gamma q$ and $|W(S,T)| \geq \gamma |S| |T|$ then $\sigma_1(W) \geq \gamma^3 \sqrt{pq}$ (where $\sigma_1$ is the first singular value). \item[b] If $\sigma_1(W) \geq \gamma \sqrt{pq}$ then there exist $S \subseteq R, T \subseteq C $ such that $|S| \geq \gamma'p, |T| \geq \gamma'q$ and $W(S,T) \geq \gamma' |S| |T|$, where $\gamma' = \frac{\gamma^3}{108}$. Furthermore, $S,T$ can be constructed in polynomial time. \end{enumerate} \end{lemma} Combining Lemmas \ref{lnew2} and \ref{singularFK}, we get an algorithm for finding an $\varepsilon$-regular partition, quite similar to the Alon {\em et al.} version \cite{AD}, which we present below: {\bf Regular Partitioning Algorithm (Frieze-Kannan):} {\em Given a graph $G$ and $\varepsilon$, construct an $\varepsilon$-regular partition. 
\begin{enumerate} \item {\bf Initial partition:} Arbitrarily divide the vertices of $G$ into an equitable partition $P_1$ with classes $V_0, V_1, \ldots, V_b$, where $|V_1| = \lfloor \frac{n}{b} \rfloor$ and hence $|V_0| < b$. Denote $k_1 = b$. \item {\bf Check regularity:} For every pair $(V_s,V_t)$ of $P_i$, compute $\sigma_1(W_{s,t})$. If the pair $(V_s, V_t)$ is not $\varepsilon$-regular then by Lemma \ref{singularFK} we obtain a proof that it is not $\gamma$-regular, where $\gamma = \varepsilon^9/108$. \item {\bf Count regular pairs:} If there are at most $\varepsilon k_i^2$ pairs that produce proofs of non $\gamma$-regularity, then halt. $P_i$ is an $\varepsilon$-regular partition. \item {\bf Refinement:} Otherwise apply the Refinement Algorithm and Lemma \ref{lnew2}, where $P = P_i, k = k_i, \gamma = \frac{\varepsilon^9}{108}$, and obtain a partition $P'$ with $1 + k_i4^{k_i}$ classes. \item {\bf Iteration:} Let $k_{i+1} = k_i4^{k_i}, P_{i+1} = P', i = i+1$, and go to step 2. \end{enumerate} } This algorithm is guaranteed to finish in at most $\varepsilon^{-45}$ steps with an $\varepsilon$-regular partition. \section{Modifications to the Constructive Version}\label{mod} We see that even the constructive versions are not directly applicable to real world scenarios. We note that the above algorithms have such restrictions because their aim is to be applicable to {\em all} graphs. Thus, to make the Regularity Lemma truly applicable we would have to give up our goal that the lemma should work for {\em every} graph and should be content with the fact that it works for {\em most} graphs. To ensure that this happens, we modify the Regular Partitioning Algorithm(s) so that instead of constructing a regular partition, we find an {\em approximately} regular partition, which should be much easier to construct. We have the following three major modifications to the Regular Partitioning Algorithm. 
{\bf Modification 1:} We want to decrease the cardinality of atoms in each iteration. In the above Refinement Algorithm the cardinality of the atoms may be $2^{k-1}$, where $k$ is the number of classes in the current partition. This is because the algorithm tries to find all the possible $\varepsilon$-irregular pairs, so that this information can then be embedded into the subsequent refinement procedure. Hence potentially each class may be involved in up to $(k-1)$ $\varepsilon$-irregular pairs. One way to avoid this problem is to bound this number. To do so, instead of using all the $\varepsilon$-irregular pairs, we only use some of them. Specifically, in this paper, for each class we consider at most one $\varepsilon$-irregular pair that involves the given class. By doing this we reduce the number of atoms to at most $2$. We observe that, in spite of the crude approximation, this seems to work well in practice. {\bf Modification 2:} We want to bound the rate at which the class size decreases in each iteration. As we have at most $2$ atoms for each class, we can significantly increase the $m$ used in the Refinement Algorithm by setting $m = \frac{|V_i|}{l}$, where a typical value of $l$ could be $3$ or $4$, much smaller than $4^k$. We call this user-defined parameter $l$ the refinement number. {\bf Modification 3:} Modification 2 might cause the size of the exceptional class to increase too fast. Indeed, by using a smaller $l$, we risk putting a $\frac{1}{l}$ portion of all vertices into $V_0$ after each iteration. To overcome this drawback, we ``recycle'' most of $V_0$, i.e. we move most of the vertices of $V_0$ back into the classes. Here is the modified Refinement Algorithm.
{\bf Modified Refinement Algorithm:} {\em Given a $\gamma$-irregular equitable partition $P$ of the vertex set $V=V_0\cup V_1\cup \ldots \cup V_k$ with $\gamma = \frac{\varepsilon^4}{16}$ and refinement number $l$, construct a new partition $Q$.\\ For each pair $(V_s, V_t)$, $1 \leq s < t \leq k$, we apply Lemma \ref{lnew1} with $A=V_s$, $B=V_t$ and $\varepsilon$. For a fixed $s$, if $(V_s, V_t)$ is found to be $\varepsilon$-regular for all $t\not= s$ we do nothing, i.e. $V_s$ is one atom. Otherwise, we select one $\varepsilon$-irregular pair $(V_s, V_t)$ randomly and the corresponding certificate partitions $V_s$ into two atoms. Set $m = \lfloor \frac{|V_i|}{l}\rfloor$, $1 \le i \le k$. First we choose a maximal collection $Q'$ of pairwise disjoint subsets of $V$ such that every member of $Q'$ has cardinality $m$ and every atom $A$ contains exactly $\lfloor \frac{|A|}{m}\rfloor$ members of $Q'$. Then we unite the leftover vertices in each $V_s$; if there are at least $m$ of them, we select one more subset of size $m$ from these vertices. We add these sets to $Q'$ and finally add all remaining vertices to the exceptional class, resulting in the partition $Q$. The collection $Q$ is an equitable partition of $V$ into at most $1+lk$ classes. } \noindent Now, we present our modified Regular Partitioning Algorithm. There are three main parameters to be selected by the user: $\varepsilon$, the refinement number $l$, and $h$, the minimum class size at which we must halt the refinement procedure. The parameter $h$ ensures that the procedure does not continue once the classes become too small. {\bf Modified Regular Partitioning Algorithm (or the Practical Regularity Partitioning Algorithm):} {\em Given a graph $G$ and parameters $\varepsilon$, $l$, $h$, construct an approximate $\varepsilon$-regular partition.
\begin{enumerate} \item {\bf Initial partition:} Arbitrarily divide the vertices of $G$ into an equitable partition $P_1$ with classes $V_0, V_1, \ldots, V_l$, where $|V_1| = \lfloor \frac{n}{l} \rfloor$ and hence $|V_0| < l$. Denote $k_1 = l$. \item {\bf Check size and regularity:} If $|V_i| < h$, $1\leq i \leq k$, then halt. Otherwise for every pair $(V_s,V_t)$ of $P_i$, verify if it is $\varepsilon$-regular or find $X \subset V_s, Y \subset V_t, |X| \ge \frac{\varepsilon^4}{16}|V_s|,|Y| \ge \frac{\varepsilon^4}{16}|V_t|$, such that $|d(X,Y) - d(V_s,V_t)| \ge \varepsilon^4$. \item {\bf Count regular pairs:} If there are at most $\varepsilon k_i^2$ pairs that are not verified as $\varepsilon$-regular, then halt. $P_i$ is an $\varepsilon$-regular partition. \item {\bf Refinement:} Otherwise apply the Modified Refinement Algorithm, where $P = P_i, k = k_i, \gamma = \frac{\varepsilon^4}{16}$, and obtain a partition $Q$ with $1 + lk_i$ classes. \item {\bf Iteration:} Let $k_{i+1} = l k_i, P_{i+1} = Q, i = i+1$, and go to step 2. \end{enumerate} } The Frieze-Kannan version is modified in an identical way. \section{Application to Clustering}\label{app} To make the regularity lemma applicable in clustering settings, we adopt the following two phase strategy (as in \cite{SP}): \begin{enumerate} \item {\bf Application of the Practical Regularity Partitioning Algorithm:} In the first stage we apply the Practical Regularity partitioning algorithm as described in the previous section to obtain an approximately regular partition of the graph representing the data. Once such a partition has been obtained, the reduced graph as described in Definition \ref{reduced} could be constructed from the partition. \item {\bf Clustering the Reduced Graph:} The reduced graph as constructed above would preserve most of the properties of the original graph (see \cite{KSSS}). This implies that any changes made in the reduced graph would also reflect in the original graph. 
Thus, clustering the reduced graph would also yield a clustering of the original graph. We apply spectral clustering (though any other pairwise clustering technique could be used, e.g. in \cite{SP} the dominant-set algorithm is used) on the reduced graph to get a partitioning and then project it back to the original graph. Recall that vertices in the exceptional set $V_0$ are leftovers from the refinement process and must be assigned to the clusters obtained. Thus, in the end, these leftover vertices are redistributed amongst the clusters using a k-nearest neighbor classifier to get the final grouping. \end{enumerate} \section{Empirical Validation}\label{test} In this section we present extensive experimental results to indicate the efficacy of this approach by employing it for clustering on a number of benchmark datasets. We compare the results with spectral clustering in terms of accuracy, and we also report results that indicate the amount of compression obtained by constructing the reduced graph. As discussed later, the results also directly point to a number of promising directions for future work. We first review the datasets considered and the metrics used for comparisons. \subsection{Datasets and Metrics Used} \label{datasets} The datasets considered for empirical validation were taken from the University of California, Irvine machine learning repository \cite{UCI}. A total of 12 datasets were used for validation. We considered datasets with real-valued features and associated labels or ground truth. In some datasets (as described below) that had a large number of real-valued features, we removed categorical features to make it easier to cluster. Unless otherwise mentioned, the number of clusters was chosen to equal the number of classes in the dataset (i.e. if the number of classes in the ground truth is 4, then the clustering results are for k = 4). An attempt was made to pick a wide variety of datasets, i.e.
with integer features, binary features, synthetic datasets and of course real-world datasets of both very high and very low dimensionality. The following datasets were considered (for details about the datasets see \cite{UCI}): (1) Red Wine (R-Wine) and (2) White Wine (W-Wine), (3) The Arcene dataset (Arcene), (4) The Blood Transfusion Dataset (Blood-T), (5) The Ionosphere dataset (Ionos), (6) The Wisconsin Breast cancer dataset (Cancer), (7) The Pima Indian diabetes dataset (Pima), (8) The Vertebral Column dataset (Vertebral-1), whose second task (9) (Vertebral-2) is treated as a separate dataset, (10) The Steel Plates Faults Dataset (Steel), (11) The Musk 2 (Musk) dataset and (12) Haberman's Survival (Haberman) data. Next we discuss the metric used for comparison with other clustering algorithms. For evaluating the quality of clustering, we follow the approach of \cite{WuScho} and use the cluster accuracy as a measure. The measure is defined as: $$\displaystyle Accuracy = 100* \biggl(\frac{\sum_{i=1}^{n} \delta (y_i, map(c_i))}{n} \biggr ), $$ where $n$ is the number of data-points considered, $y_i$ represents the true label (ground truth) while $c_i$ is the obtained cluster label of data-point $x_i$. The function $\delta(y,c)$ equals 1 if the true and the obtained labels match ($y=c$) and 0 otherwise. The function $map$ is a permutation function that maps each cluster label to a true label. An optimal match can be found by using the Hungarian Method for the assignment problem \cite{Kuhn}. \subsection{Case Study} \label{casestudy} Before reporting comparative results on benchmark datasets, we first consider one dataset as a case study. While the experiments reported in this case study were carried out on all the benchmark datasets considered, the purpose here is to illustrate the investigations conducted at each stage of application of the regularity lemma.
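The accuracy measure above, including the label-matching function $map$, can be sketched as follows; this is an illustrative implementation that uses a brute-force search over label permutations, which solves the same assignment problem as the Hungarian method \cite{Kuhn} and is adequate for small numbers of clusters:

```python
import numpy as np
from itertools import permutations

def cluster_accuracy(y_true, y_pred):
    """Clustering accuracy: best agreement between cluster labels and
    true labels over all relabelings (the `map` permutation)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    best = 0
    for perm in permutations(labels):
        mapping = dict(zip(labels, perm))     # candidate map: cluster -> true
        mapped = np.array([mapping[c] for c in y_pred])
        best = max(best, int(np.sum(mapped == y_true)))
    return 100.0 * best / len(y_true)
```

For example, `cluster_accuracy([0, 0, 1, 1], [1, 1, 0, 0])` returns 100.0, since the cluster labels match the ground truth perfectly after relabeling.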
An auxiliary purpose is to outline a set of guidelines on which changes to the practical regularity partitioning algorithm proved useful. For this task we consider the Red Wine dataset, which has 1599 instances with 11 attributes each; the number of classes involved is six. It must be noted, though, that the class distribution in this dataset is quite skewed (with the various classes having 10, 53, 681, 638, 199 and 18 datapoints, respectively); this makes clustering this dataset quite difficult when k = 6. We however consider both k = 6 and k = 3 to compare results with spectral clustering. Recall that our method has two meta-parameters that need to be user-specified (or estimated by cross-validation): $\varepsilon$ and $l$. Note that $h$ is usually chosen so that it is at least as big as $\frac{1}{\varepsilon}$. The first set of experiments thus explores the accuracy landscape of regularity clustering spanned over these two parameters. We consider 25 linearly spaced values of $\varepsilon$ between 0.15 and 0.50. The refinement number $l$, as noted in Section \ref{mod}, cannot be too large. Since it can only take integer values, we consider six values from 2 to 7. For the sake of comparison, we also obtain clustering results on the same dataset with spectral clustering with self-tuning \cite{selftuned} (using both the fully connected and the k-nearest neighbour graph versions) and k-means clustering. Figure \ref{fig:CaseStudy} gives the accuracy of the Regularity Clustering on a grid of $\varepsilon$ and $l$. Even though this plot is only for exploratory purposes, it shows that the accuracy landscape is in general much better than the accuracy obtained by spectral clustering for this dataset.
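The parameter sweep just described amounts to the following loop; here \texttt{regularity\_clustering} and \texttt{accuracy} are hypothetical placeholders for the full two-phase pipeline of Section \ref{app} and the accuracy measure of Section \ref{datasets}:

```python
import numpy as np
from itertools import product

def explore_landscape(data, labels, regularity_clustering, accuracy):
    """Sweep the (epsilon, l) grid used in the case study and record the
    clustering accuracy at each grid point. Both callables are placeholders
    supplied by the caller; they are not defined in this paper's text."""
    eps_grid = np.linspace(0.15, 0.50, 25)   # 25 linearly spaced epsilons
    l_grid = range(2, 8)                     # refinement numbers 2..7
    scores = {}
    for eps, l in product(eps_grid, l_grid):
        pred = regularity_clustering(data, eps=eps, l=l)
        scores[(eps, l)] = accuracy(labels, pred)
    return scores
```

The resulting dictionary of 150 grid points is what is visualized as the accuracy landscape.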
\begin{figure} \centering \includegraphics[width=2in]{AlonK6}% \hspace{0.5in}% \includegraphics[width=2in]{AlonK3} \caption{Accuracy landscape for Regularity Clustering on the Red Wine Dataset for different values of $\varepsilon$ and refinement number $l$ (with k = 6 on the left and k = 3 on the right). The plane cutting through in blue represents the accuracy obtained by running self-tuned spectral clustering using the fully connected similarity graph.} \label{fig:CaseStudy} \end{figure} An important aspect of the Regularity Clustering method is that by using a modified constructive version of the Regularity Lemma we obtain a much reduced representation of the original data. The size of the reduced graph depends on both $\varepsilon$ and $l$. However, in our observation it is more sensitive to changes in $l$, and understandably so. From the grid for $\varepsilon$ and $l$ we take three rows to illustrate the obtained sizes of the reduced graph (more precisely, the dimensions of the affinity matrix of the reduced graph). We compare these numbers with the original dataset size. As we note in the results over the benchmark datasets in Section \ref{benchmark}, this compression is quite substantial for larger datasets. \begin{table} \caption{Reduced Graph Sizes.
Original Affinity Matrix size : 1599 $\times$ 1599} \centering \begin{tabular}{| c | c | c | c | c | c | c |} \hline \hline \backslashbox{{\bf $\varepsilon$}}{{\bf $l$}} & {\bf 2} & {\bf 3} & {\bf 4} & {\bf 5} & {\bf 6} & {\bf 7} \\ \hline {\bf 0.15} & 16 $\times$ 16 & 27 $\times$ 27 & 27 $\times$ 27 & 27 $\times$ 27 & 36 $\times$ 36 & 49 $\times$ 49 \\ \hline {\bf 0.33} & 49 $\times$ 49 & 49 $\times$ 49 & 66 $\times$ 66 & 66 $\times$ 66 & 66 $\times$ 66 & 66 $\times$ 66 \\ \hline {\bf 0.50} & 66 $\times$ 66 & 66 $\times$ 66 & 66 $\times$ 66 & 66 $\times$ 66 & 66 $\times$ 66 & 66 $\times$ 66 \\ \hline \end{tabular} \label{redgraphsizes} \end{table} The proof of the Regularity Lemma uses a potential function, the index of the partition, defined earlier in Definition \ref{potential}. In each refinement step the index increases significantly. Surprisingly, this remains true in our modified refinement algorithm, even though the number of partition classes does not increase as fast as in the original version; see Table \ref{potentialincrease}. Another interesting observation is that if we take $\varepsilon$ sufficiently high, we do get a regular partition in just a few iterations. A few examples where this was noticed in the Red Wine dataset are mentioned in Table \ref{regpartitions}. \begin{table} \caption{Illustration of Increase in Potential} \centering \begin{tabular}{| c | c | c | c | c |} \hline \hline \backslashbox{{ \bf ($\varepsilon$,$l$)}}{{\bf $ind(P)$}} & {\bf $ind(P_1)$} & {\bf $ind(P_2)$} & {\bf $ind(P_3)$} & {\bf $ind(P_4)$} \\ \hline {\bf 0.15, 2} & 0.1966 & 0.2892 & 0.3321 & 0.3539 \\ \hline {\bf 0.33, 2} & 0.1966 & 0.2883 & 0.3321 & 0.3683 \\ \hline {\bf 0.50, 2} & 0.1965 & 0.2968 & 0.3411 & 0.3657 \\ \hline \end{tabular} \label{potentialincrease} \end{table} \begin{table} \caption{Regular Partitions with the required number of regular pairs and the actual number
present} \centering \begin{tabular}{| c | c | c | c |} \hline \hline ($\varepsilon$, $l$) & \# for $\varepsilon$-regularity & \# of Reg. Pairs & \# Iterations \\ \hline {\bf 0.6, 2} & 1180 & 1293 & 6 \\ \hline {\bf 0.7, 6} & 352 & 391 & 2 \\ \hline {\bf 0.7, 7} & 506 & 671 & 2 \\ \hline \end{tabular} \label{regpartitions} \end{table} Finally, before reporting results we must make a comment on constructing the reduced graph, which was defined in Definition \ref{reduced}; note that there is some ambiguity in our case when it comes to its construction. The reduced graph $G^R$ is constructed such that the vertices correspond to the classes in the partition and the edges are associated with the $\varepsilon$-regular pairs between classes with density above $d$. However, in many cases the number of regular pairs is quite small (especially when $\varepsilon$ is small), making the matrix too sparse and the eigenvectors difficult to find. Thus, for technical reasons, we added all pairs to the reduced graph. We contend that this approach works well because the classes that we consider (and thus the densities between them) are obtained after the modified refinement procedure, and thus enough information is already embedded in the reduced graph. \subsection{Clustering Results on Benchmark Datasets}\label{benchmark} In this section we report results on a number of datasets described earlier in Section \ref{datasets}. We do a five-fold cross-validation on each of the datasets, where a validation set is used to learn the meta-parameters for the data. The accuracy reported is the average clustering quality on the rest of the data after using the parameters learned on the validation set. We use a grid-search to learn the meta-parameters. A coarse grid is initialized with a set of 25 linearly spaced values for $\varepsilon$ between 0.15 and 0.50 (we do not want $\varepsilon$ to be outside this range).
For $l$ we pick values from 2 to 7, since that is the only practical range we consider. \begin{table} \caption{Clustering Results on UCI Datasets. Regular1 and Regular2 represent the results by the versions due to Alon {\em et al.} and Frieze-Kannan, respectively. Spect1 and Spect2 give the results for spectral clustering with a k-nearest neighbour graph and a fully connected graph, respectively. See the text for more details.} \centering \begin{tabular}{llllllll} \hline\noalign{\smallskip} {\bf Dataset} & {\bf \# Feat.} & {\bf Comp.} & {\bf Regular1} & {\bf Regular2} & {\bf Spect1} & {\bf Spect2} & {\bf k-means}\\ \noalign{\smallskip} \hline \noalign{\smallskip} R-Wine & 11 & 1599-49 & {\bf 47.0919} & {\bf 46.8342} & 23.9525 & 23.9524 & 23.8899 \\ W-Wine & 11 & 4898-125 &{\bf 44.7509} &{\bf 44.9121} & 23.1319 & 20.5798 & 23.8465 \\ Arcene & 10000 & 200-9 & {\bf 68} & {\bf 68} & 61 & 62 & 59 \\ Blood-T & 4 & 748-49 & {\bf 76.2032} & {\bf 75.1453} & 65.1070 & 66.2331 & 72.3262 \\ Ionos & 34 & 351-25 & {\bf 74.0741} & {\bf 74.6787} & 70.0855 & 70.6553 & 71.2251 \\ Cancer & 9 & 683-52 & 93.5578 & 93.5578 & {\bf 97.2182} & 97.2173 & 96.0469 \\ Pima & 8 & 768-52 & {\bf 65.1042 } & {\bf 64.9691} & 51.5625 & 60.8073 & 63.0156 \\ Vertebral-1 & 6 & 310-25 &67.7419 & 67.8030 & {\bf 74.5161} & 71.9355 & 67.0968 \\ Vertebral-2 & 6 & 310-25 &{\bf 70 } &{\bf 69.9677} & 49.3948 & 48.3871 & 65.4839 \\ Steel & 27 & 1941-54 & {\bf 42.5554} & {\bf 43.0006} & 29.0057 & 34.7244 & 29.7785 \\ Musk & 166 & 6598-126 & {\bf 84.5862} & {\bf 81.4344} & 53.9103 & 53.6072 & 53.9861 \\ Haberman & 3 & 306-16 & {\bf 73.5294} & {\bf 70.6899} & 52.2876 & 51.9608 & 52.2876 \\ \hline \end{tabular}\label{UCIResultTable} \end{table} We compare our results with fixed-$\sigma$ spectral clustering with both a fully connected graph (Spect2) and a k-nearest neighbour graph (Spect1). For the sake of comparison we also include results for k-means on the entire dataset.
We also report results on the compression that was achieved on each dataset in Table \ref{UCIResultTable} (the compression is indicated in the format x-y, where x represents one dimension of the adjacency matrix of the dataset and y that of the reduced graph). In the results we observe that the Regularity Clustering method, as indicated by the clustering accuracies, is quite powerful: it gave significantly better results on 10 of the 12 datasets. It was also observed that the regularity clustering method did not appear to work very well on synthetic datasets. This seems understandable given the quasi-random aspect of the Regularity method. We also report that the results obtained by the Alon {\em et al.} and by the Frieze-Kannan versions are virtually identical, which is not surprising. \section{Future Directions} \label{future} We believe that this work opens up many potential research problems. First and foremost would be establishing theoretical results quantifying the approximation obtained by our modifications to the Regularity Lemma. Also, the original Regularity Lemma is applicable only when working with dense graphs. However, there are sparse versions of the Regularity Lemma. These sparse versions could be used in the first phase of our method so that even sparse graphs (k-nearest neighbor graphs) could be used for clustering, thus enhancing its practical utility even further. A natural generalization of pairwise clustering methods leads to hypergraph partitioning problems \cite{Bulo}, \cite{Zhou}. There are a number of results that extend the Regularity Lemma to hypergraphs \cite{ChungHyper}, \cite{GowersHyper}, \cite{RodlHyper2}. It is thus natural that our methodology could be extended to hypergraphs and then used for hypergraph clustering. In summary, our work gives a way to harness the Regularity Lemma for the task of clustering. We report results on a number of benchmark datasets which strongly indicate that the method is quite powerful.
Based on this work we also suggest a number of possible avenues for future work towards improving and generalizing this methodology.
\section{Introduction} \label{intro} According to the Standard Model of particle physics and to some of its extensions, spontaneous symmetry breaking and phase transitions constitute a crucial aspect of the early evolution of the universe. When the temperature starts decreasing because of the expansion of the universe, spontaneous symmetry breaking can be triggered, so that the interactions among elementary particles undergo (dis)continuous jumps from one phase to another. New phases with a broken symmetry will form in many regions at the same time, and in each of them a single vacuum state will be spontaneously chosen. Sufficiently separated spatial regions may not be in causal contact, so that it is quite natural to assume that the early universe is divided into many causally disconnected patches whose size is roughly given by the Hubble radius\footnote{The Hubble radius is given by $R_{\rm H}\sim H^{-1}=a(t)/\dot{a}(t)$ where $a(t)$ is the scale factor depending on the cosmic time $t$. More precisely, the vacuum is chosen over regions whose size is set by the correlation length of the fields at that time.}, in each of which the vacuum is independently determined. As the universe expands, it can eventually happen that patches with different vacua collide, in such a way that boundaries begin to form between adjacent regions with different vacuum states. Since the field associated with the spontaneous breaking has to vary continuously between different vacua, it must interpolate smoothly from one vacuum to another via the hill of the potential. This implies that finite-energy field configurations must form at the boundaries separating patches with different vacua, and must persist even after the phase transition is completed. These objects are called \textit{topological defects}~\cite{Vilenkin:2000jqa}, and their formation mechanism (in a cosmological context) is known as the \textit{Kibble mechanism}~\cite{Kibble:1976sj,Kibble:1980mv}.
Topological defects can be of several types and of different spatial dimensions, and their existence is in one-to-one correspondence with the topology of the vacuum manifold~\cite{Vilenkin:2000jqa}. Domain walls are two-dimensional objects that form when a discrete symmetry is broken, so that the associated vacuum manifold is disconnected. Strings are one-dimensional objects associated with a symmetry breaking whose corresponding vacuum manifold is not simply connected, and their formation is predicted both by some extensions of the Standard Model of particle physics and by some classes of Grand Unified Theory (GUT). Monopoles are zero-dimensional objects whose existence is ensured when the vacuum manifold is characterized by non-contractible two-spheres, and they constitute an inevitable prediction of GUT. Moreover, there exist other topological objects called textures that can form when larger groups are broken and whose vacuum-manifold topology is more complicated. Since the existence of topological defects is intrinsically related to the particular topology of the vacuum manifold, they can naturally appear in several theories beyond the Standard Model that predict a spontaneous symmetry breaking at some high-energy scale. For instance, the spontaneous breaking of $SU(5)$ symmetry in GUT leads to the formation of various topological defects. Therefore, observations and phenomenology of topological defects are very important, and should be regarded as unique test-benches for constraining theories of particle physics and of the early universe. This also means that for any alternative theory, e.g. one that aims at giving a complete ultraviolet (UV) description of the fundamental interactions, it is worth studying the existence of topological defects, investigating how their properties differ with respect to other models/theories, and putting them to the test with current and future experiments.
In this paper we discuss for the first time topological defects in the context of \textit{nonlocal field theories} in which the Lagrangians contain infinite-order differential operators. In particular, we will make a very detailed analysis of domain wall solutions. The type of differential operator that we will consider does not lead to ghost degrees of freedom in the particle spectrum, despite the presence of higher-order time derivatives in the Lagrangian. The work is organized as follows: \begin{enumerate} \item[\textbf{Sec.~\ref{sec:nonlocal-review}:}] we introduce nonlocal field theories by discussing the underlying motivations and their main properties. \item[\textbf{Sec.~\ref{sec-review}:}] we briefly review the domain wall solution in the context of standard (local) two-derivative theories by highlighting various features whose mathematical and physical meanings will be important for the subsequent sections. \item[\textbf{Sec.~\ref{sec-nft-dw}:}] we analyze for the first time domain wall solutions in the context of ghost-free nonlocal field theories by focusing on the simplest choice for the infinite-order differential operator in the Lagrangian. Despite the high complexity of non-linear and infinite-order differential equations, we will be able to find an approximate analytic solution by relying on the fact that the topological structure of the vacuum manifold ensures the existence of an exact domain wall configuration. Firstly, we analytically study the asymptotic behavior of the solution close to the two symmetric vacua. Secondly, we find a linearized nonlocal solution by perturbing around the local domain wall configuration. We show that the linearized treatment agrees with the asymptotic analysis, and make remarks on the peculiar behavior close to the origin. We perform an order-of-magnitude estimation of the width and energy per unit area of the domain wall.
Furthermore, we derive a theoretical lower bound on the scale of nonlocality for the specific domain wall configuration under investigation. \item[\textbf{Sec.~\ref{sec-other}:}] we briefly comment on other topological defects like strings and monopoles. \item[\textbf{Sec.~\ref{sec-dis}:}] we summarize our results, and discuss both theoretical and phenomenological future tasks. \item[\textbf{App.~\ref{sec-corr}:}] we develop a formalism to confirm the validity of the linearized solution close to the origin. \item[\textbf{App.~\ref{app-em}:}] we find a compact expression for the energy-momentum tensor in a generic nonlocal (infinite-derivative) field theory, and apply it to the specific nonlocal scalar model analyzed in the main text. \end{enumerate} We adopt the mostly positive convention for the metric signature, $\eta=\diag(-,+,+,+),$ and work with the natural units system, $c=\hbar=1.$ \section{Nonlocal field theories}\label{sec:nonlocal-review} The wording `nonlocal theories' is quite generic and, in principle, can refer to very different theories, since the nonlocal nature of fields can manifest itself in various ways. In this work, by `nonlocality' we specifically mean that the Lagrangians are made up of certain non-polynomial differential operators containing infinite-order derivatives. A generic nonlocal Lagrangian contains both polynomial and non-polynomial differential operators, i.e. given a field $\phi(x)$ one can have \begin{equation} \mathcal{L}\equiv\mathcal{L}\left(\phi,\partial\phi,\partial^2\phi,\dots,\partial^n\phi,\frac{1}{\Box}\phi,{\rm ln}\left(- \Box/M_s^2\right)\phi,e^{\Box/M_s^2}\phi,\dots\right),\label{nonlocal-lagr} \end{equation} where $\Box=\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}$ is the flat d'Alembertian and $M_s$ is the energy scale at which nonlocal effects are expected to become important. Non-analytic differential operators like $1/\Box$ and ${\rm log}(-\Box)$ are usually important at infrared (IR) scales, e.g.
they can appear as contributions in the finite part of the quantum effective action in standard perturbative quantum field theories~\cite{Barvinsky:2014lja,Woodard:2018gfj}. Analytic operators like $e^{\Box/M_s^2}$, on the other hand, are usually responsible for UV modifications and do not affect the IR physics. Such a transcendental differential operator typically appears in the context of string field theory~\cite{Witten:1985cc,Gross:1987kza,Eliezer:1989cr,Tseytlin:1995uq,Siegel:2003vt,Pius:2016jsl,Erler:2020beb} and p-adic string~\cite{Freund:1987kt,Brekke:1988dg,Freund:1987ck,Frampton:1988kr,Dragovich:2020uwa}. We are interested in alternative theories that extend the Standard Model in the UV regime; therefore, we will focus on analytic differential operators. In general, we can consider a scalar Lagrangian of the following type\footnote{To keep the formula simpler we do not write the scale of nonlocality in the argument of $F(-\Box)$ which, to be more precise, should read $F(-\Box/M_s^2).$}: \begin{equation} \mathcal{L}=-\frac{1}{2}\phi F(-\Box)\phi - V(\phi)\,,\label{analytic-Lag} \end{equation} where $V(\phi)$ is a potential term, and the kinetic operator can be defined through its Taylor expansion \begin{equation} F(-\Box)=\sum\limits_{n=0}^\infty f_n(-\Box)^n\,,\label{nl-oper} \end{equation} where $f_n$ are constant coefficients. It should now be clear that the type of nonlocality under investigation manifests through the presence of infinite-order derivatives. To recover the correct low-energy limit and avoid IR modifications, it is sufficient to require that the function $F(z),$ with $z\in \mathbb{C},$ does not contain any poles in the complex plane. Thus, we choose $F(-\Box)$ to be an \textit{entire function} of the d'Alembertian $\Box$.
By making use of the Weierstrass factorization theorem for entire functions we can write \begin{equation} F(-\Box)=e^{\gamma(-\Box)}\prod\limits_{i=1}^{N}(-\Box+m_i^2)^{r_i}\,, \end{equation} where $\gamma(-\Box)$ is another entire function, $m_i^2$ are the zeroes of the kinetic operator $F(-\Box),$ and $r_i$ is the multiplicity of the $i$-th zero. The integer $N\geq 0$ counts the number of zeroes and, in general, can be either finite or infinite. To prevent the appearance of ghost degrees of freedom, it is sufficient to exclude the possibility to have extra zeroes besides the standard two-derivative one\footnote{It is worth mentioning that this is \textit{not} the unique possibility for ghost-free higher derivative theories. In fact, we can allow additional pairs of complex conjugate poles and still avoid ghost degrees of freedom and respect unitarity, in both local~\cite{Modesto:2015ozb,Modesto:2016ofr,Anselmi:2017yux,Anselmi:2017lia} and nonlocal theories~\cite{Buoninfante:2018lnh,Buoninfante:2020ctr}. Moreover, tree-level unitarity was shown to be satisfied also if one admits branch cuts in the bare propagator~\cite{Abel:2019ufz,Abel:2019zou}.}. We impose that the kinetic operator does not contain any additional zeroes, so that the effects induced by new physics are entirely captured by the differential operator $e^{\gamma(-\Box)}.$ Therefore, we consider the following Lagrangian: \begin{equation} \mathcal{L}=\frac{1}{2}\phi\, e^{\gamma(-\Box)}(\Box-m^2)\,\phi -V(\phi)\,,\label{exp-nonlocal} \end{equation} whose propagator reads \begin{equation} \Pi(p^2)=-i\frac{e^{-\gamma(p^2)}}{p^2+m^2}\,.\label{propag-nonlocal} \end{equation} From the last equation it is evident that no additional pole appears other than $p^2=-m^2,$ because $e^{-\gamma(p^2)}$ is an exponential of an entire function and as such does not have poles in the complex plane. 
More generally, under the assumption that the transcendental function $e^{-\gamma(p^2)}$ is convergent in the limits $p^0\rightarrow\pm i\infty,$ it was shown that an S-matrix can be well defined for the Lagrangian~\eqref{exp-nonlocal}, and it can be proven to satisfy perturbative unitarity at any loop order~\cite{Pius:2016jsl,Briscese:2018oyx,Chin:2018puw,Koshelev:2021orf}. Moreover, the presence of the exponential function can make loop integrals convergent, so that the scalar theory in Eq.~\eqref{exp-nonlocal} turns out to be finite in the high-energy regime~\cite{Krasnikov:1987yj,Moffat:1990jj,Tomboulis:2015gfa,Buoninfante:2018mre}. Very interestingly, for such nonlocal theories, despite the presence of infinite-order time derivatives, a consistent initial value problem can be formulated in terms of a finite number of initial conditions~\cite{Barnaby:2007ve,Calcagni:2018lyd,Erbin:2021hkf}. Transcendental operators of this type, with some entire function $\gamma(-\Box)$, have been intensively studied in the past years, not only in the context of quantum field theories in flat space~\cite{Krasnikov:1987yj,Moffat:1990jj,Tomboulis:2015gfa,Buoninfante:2018mre,Boos:2018kir,Boos:2019fbu}, but also to formulate ghost-free infinite-derivative theories of gravity~\cite{Krasnikov:1987yj,Kuzmin:1989sp,Tomboulis:1997gg,Biswas:2005qr,Modesto:2011kw,Biswas:2011ar,Frolov:2015bia,Frolov:2015bta,Buoninfante:2018xiw,Buoninfante:2018stt,Buoninfante:2018xif,Koshelev:2016xqb,Koshelev:2017tvv,Koshelev:2020foq,Kolar:2020max}. In this work we assume that fundamental interactions are intrinsically nonlocal, and that nonlocality becomes relevant in the UV regime. Thus, we consider nonlocal quantum field theories as possible candidates for UV-complete theories beyond the Standard Model.
With this in mind, we will analyze topological defects in infinite-derivative field theories, and investigate the physical implications induced by nonlocality in comparison to standard (local) two-derivative theories. In what follows, we work with the simplest ghost-free nonlocal model for which the entire function is given by \begin{equation} \gamma(-\Box)=-\frac{\Box}{M_s^2}\,.\label{entire-box^1} \end{equation} \section{Standard domain wall: a brief review} \label{sec-review} Before discussing domain walls in the context of nonlocal quantum field theories, it is worth recalling some of their basic properties in standard two-derivative field theories, which will then be useful for the main part of this work. In the presence of a domain wall one has to deal with a static scalar field that only depends on one spatial coordinate, e.g. $x,$ and whose Lagrangian reads~\cite{Vilenkin:2000jqa,Saikawa:2017hiv} \begin{align} {\cal L} = \frac{1}{2}\qty(\partial_x\phi)^2 -U(\phi)\,,\qquad U(\phi)=\frac{\lambda}{4}(\phi^2-v^2)^2\,,\label{local-Lag-wall} \end{align} which is $\mathbb{Z}_2$-symmetric as it is invariant under the transformation $\phi\rightarrow-\phi;$ $\lambda>0$ is a dimensionless coupling constant and $v>0$ is related to the symmetry-breaking energy scale. The quartic potential has two degenerate minima at $\phi=\pm v$ ($U(\pm v)=0$). As mentioned in the Introduction, the discrete symmetry $\mathbb{Z}_2$ can be spontaneously broken, for instance, in the early universe because of thermal effects. As a consequence, causally disconnected regions of the universe can be characterized by a different choice of the vacuum (i.e. $\phi=+v$ or $\phi=-v$), and when two regions with different vacua collide a continuous two-dimensional object -- called domain wall -- must form at the boundary of these two regions. 
Let us now determine explicitly such a finite-energy configuration interpolating between $\pm v.$ First of all, we impose the asymptotic boundary conditions \begin{align} \phi(-\infty) = -v\,, \qquad \phi(\infty) = v\,.\label{boundary conditions} \end{align} The field configuration must be non-singular and of finite energy; therefore, $\phi(x)$ must interpolate smoothly between the two vacua, which implies that there exists a point $x_0\in \mathbb{R}$ such that $\phi(x_0)=0.$ Without any loss of generality, we can choose the reference frame such that the centre of the wall is at the origin $x_0=0,$ i.e. $\phi(0)=0.$ The energy density can be computed as \begin{align} \mathcal{E}(x) \equiv T_{0}^0(x)= \frac{1}{2}(\del_x\phi)^2 + \frac{\lambda}{4}(\phi^2-v^2)^2\,,\label{local-e-dens} \end{align} from which it follows that, at the centre of the wall, $\mathcal{E}(x_0)\geq U(0)=\lambda v^4/4,$ and this implies that there exists a solution that does not dissipate at infinity. Hence, the topological structure of the vacuum manifold -- which is disconnected in the case of $\mathbb{Z}_2$ symmetry -- ensures the existence of a non-trivial field configuration of finite energy. We can determine qualitatively the behavior of this field configuration by making an order-of-magnitude estimation of the width $R$ (along the $x$-direction), and of the energy per unit area $E$ of the wall. In fact, we can define the width of the wall in three ways. The first one is to use the energy density in Eq.~\eqref{local-e-dens}. The lowest energy configuration interpolating between the two vacua can be found by balancing the kinetic and potential terms in the energy density $\mathcal{E}(x)$. 
By approximating the gradient with the inverse of the width, $\partial_x\sim 1/R,$ and the field value with $\phi\sim v,$ Eq.~\eqref{local-e-dens} gives \begin{align} \frac{1}{2} \frac{1}{R^2} v^2 \sim \frac{\lambda}{4} v^4\quad \Rightarrow\quad R \sim \sqrt{\frac{2}{\lambda}} \frac{1}{v}\,,\label{local-radius-estim} \end{align} from which it follows that the width of the wall is of the same order as the Compton wavelength $R\sim(\sqrt{\lambda}v)^{-1}\sim m^{-1}$. The energy per unit area, instead, can be estimated as \begin{eqnarray} E&=& \int_\mathbb{R}{{\rm d}x}\qty[\frac{1}{2}(\del_x\phi)^2+\frac{\lambda}{4}(\phi^2-v^2)^2] \nonumber\\[2mm] &\sim& (\text{width of the wall}) \times (\text{energy density}) \nonumber\\[2mm] & \sim& R \times \lambda v^4 \nonumber\\[2mm] &\sim&\sqrt{\lambda}v^3\,.\label{local-energ-estim} \end{eqnarray} The other two ways are to use the exact configuration (solution) of the domain wall. The field equation \begin{align} \partial_x^2 \phi(x) = \lambda \phi(\phi^2-v^2)\label{local-field-eq} \end{align} can be solved by quadrature; an exact analytic solution can be found, and it satisfies all the qualitative properties discussed above. The exact solution is sometimes called a `kink', and it reads~\cite{Vilenkin:2000jqa} \begin{align} \phi(x) = v \tanh \left(\sqrt{\frac{\lambda}{2}}vx\right)\,.\label{local-dom-wal-sol} \end{align} Its asymptotic behavior is given by \begin{align} |x|\rightarrow \infty\quad \Rightarrow \quad \phi(x)\sim \pm v\qty(1-2e^{-\sqrt{2\lambda}v|x|})\,. \label{eq:asymp sol in local case} \end{align} Through the exact solution~\eqref{local-dom-wal-sol}, we can define the width of the wall in two ways. One way is to identify it with the typical length scale over which $\phi(x)$ changes in proximity of the origin, that is, the length scale $\ell$ defined as the inverse of the gradient at the origin, i.e. $\ell\sim v/(\partial_x\phi|_{x=0}),$ where the scale $v$ is introduced for dimensional reasons. 
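The kink profile can be verified directly; the following short symbolic check (a sketch with sympy, not part of the original derivation) confirms that it solves the field equation above.

```python
# Sanity check: phi(x) = v*tanh(sqrt(lam/2)*v*x) solves the local field
# equation  phi'' = lam * phi * (phi^2 - v^2).
import sympy as sp

x = sp.symbols('x', real=True)
lam, v = sp.symbols('lam v', positive=True)

phi = v * sp.tanh(sp.sqrt(lam / 2) * v * x)
residual = sp.diff(phi, x, 2) - lam * phi * (phi**2 - v**2)
print(sp.simplify(residual))  # -> 0
```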
From Eq.~\eqref{local-dom-wal-sol} we have \begin{equation} \del_x\phi(x)|_{x=0}=v^2\sqrt{\frac{\lambda}{2}}, \end{equation} which yields \begin{equation} \ell\sim \sqrt{\frac{2}{\lambda}}\frac{1}{v} = R\,.\label{ell-linearized-local} \end{equation} The last way is to use the asymptotic behavior given in Eq.~\eqref{eq:asymp sol in local case}. The width of the wall, $\widetilde{R}$, can be defined as \begin{align} |x|\rightarrow \infty\quad \Rightarrow \quad \phi(x)\sim \pm v\qty(1-2e^{-\frac{2|x|}{\widetilde{R}}})\,, \label{eq:asymp radius} \end{align} which yields $\widetilde{R} \sim \sqrt{\frac{2}{\lambda}} \frac{1}{v} = R = \ell\,.$ In the local case all three definitions give the same expression, so there is no need to distinguish between them. As we will show, however, in the nonlocal case the three definitions give different expressions at sub-leading order, although two of them ($R$ and $\widetilde{R}$) share a similar feature. In fact, $R$ and/or $\widetilde{R}$ might be more appropriate as the definition of the width (or the radius) of a domain wall, because $\ell$ is related to the behavior of the solution close to the origin and far from the vacuum. We can obtain the energy per unit area as \begin{eqnarray} E = \int_{\mathbb{R}} {\rm d}x\mathcal{E}(x) =\int_{\mathbb{R}} {\rm d}x\qty(\frac{{\rm d}\phi}{{\rm d}x})^2 = \frac{4}{3} \sqrt{\frac{\lambda}{2}}v^3\,, \end{eqnarray} where in the second equality we used that $\frac{1}{2}(\del_x\phi)^2=U(\phi)$ holds on the solution~\eqref{local-dom-wal-sol}; the result is consistent with the estimation in Eq.~\eqref{local-energ-estim} up to an order-one numerical factor. The discussion in this Section was performed for a local two-derivative theory in one spatial dimension. However, the essential concepts and methods, like the condition for the existence of a solution related to the non-trivial topology of the vacuum manifold, and the order-of-magnitude estimations, can be applied to nonlocal field theories and to higher dimensional cases (e.g. strings and monopoles). 
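The exact surface energy of the kink can also be cross-checked numerically; the sketch below (illustrative parameter values $\lambda=2$, $v=1$ of our own choosing) integrates $(\partial_x\phi)^2$ over the real line and compares with the closed-form value $\frac{4}{3}\sqrt{\lambda/2}\,v^3$.

```python
# Numerical cross-check of the kink surface energy E = int dx (dphi/dx)^2.
import numpy as np
from scipy.integrate import quad

lam, v = 2.0, 1.0  # illustrative values

# gradient of the kink: dphi/dx = sqrt(lam/2) * v^2 / cosh^2(sqrt(lam/2) v x)
dphi = lambda x: np.sqrt(lam / 2) * v**2 / np.cosh(np.sqrt(lam / 2) * v * x)**2

E, _ = quad(lambda x: dphi(x)**2, -np.inf, np.inf)
E_exact = 4.0 / 3.0 * np.sqrt(lam / 2) * v**3
print(E, E_exact)  # both ~ 1.3333 for lam = 2, v = 1
```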
\section{Domain wall in nonlocal field theories} \label{sec-nft-dw} In this Section we analyze the domain wall solution for the nonlocal field theory~\eqref{exp-nonlocal} with the simplest choice of entire function given in~\eqref{entire-box^1}. Hence, we consider a nonlocal generalization of the Lagrangian~\eqref{local-Lag-wall} given by \begin{align} {\cal L} = \frac{1}{2}\phi e^{-\partial_x^2/M_s^2}(\partial_x^2+\lambda v^2)\phi -\frac{\lambda}{4}\qty(\phi^4+v^4)\,, \label{eq:Lagrangian for DW} \end{align} whose field equation reads \begin{align} e^{-\partial_x^2/M_s^2}(\partial_x^2+\lambda v^2)\phi = \lambda\phi^3\,. \label{eq:the equation of the motion for DW} \end{align} We can easily verify that in the limit $\partial_x^2/M_s^2\rightarrow 0$ we consistently recover the local two-derivative case, i.e. Eqs.~\eqref{local-Lag-wall} and~\eqref{local-field-eq}. In this case the differential equation is non-linear and highly nonlocal, and finding a solution seems to be very difficult not only analytically but even numerically. However, despite the complexity of the scalar field equation, we can still find a domain wall configuration. First of all, the existence of such a solution is still guaranteed by the non-trivial topological structure of the vacuum manifold, which is disconnected in the case of $\mathbb{Z}_2$ symmetry. Indeed, also in this case a finite-energy field configuration that smoothly interpolates between the two vacua $\phi=\pm v$ must exist. Furthermore, by relying on the fact that the presence of the exponential operator $e^{-\partial_x^2/M_s^2}$ should not change the number of degrees of freedom and of initial conditions~\cite{Barnaby:2007ve}, we can impose the same boundary conditions as in the two-derivative case, Eq.~\eqref{boundary conditions}, i.e. $\phi(\pm \infty)=\pm v.$ We can set $x_0=0$ to be the centre of the wall such that $\phi(0)=0,$ and expect $\phi(x)$ to be a smooth odd function. 
\begin{figure}[t] \centering \includegraphics[scale=0.6]{schematic-illustration.jpg} \caption{Schematic illustration of the analysis made in this Section to study the domain wall configuration in nonlocal field theory, whose qualitative behavior is drawn consistently with the boundary conditions $\phi(\pm\infty)=\pm v$, and with the choice of the origin $\phi(0)=0.$ We analyze the behavior of the domain wall configuration in several regimes, and use different methods to find approximate analytic solutions. (i) We study the asymptotic behavior of the domain wall solution in the limit $|x|\rightarrow \infty.$ (ii) We find a linearized nonlocal solution by perturbing around the local domain wall configuration treated as a background, and analyse its behavior not only at infinity but also close to the origin. (iii) We make an order-of-magnitude estimation for the width and the energy per unit area of the wall, and verify the consistency with the analytic approximate solutions.} \label{fig:conceptual figure for analysis of nonlocal domain wall} \end{figure} Very interestingly, knowing that a domain wall solution must exist, and equipped with a set of boundary conditions, we can still find an approximate analytic solution by working perturbatively in some regime. We will proceed as follows. In Sec.~\ref{subsec-asym} we will determine the solution asymptotically close to the vacua $\pm v$ (i.e. for $|x|\rightarrow \infty$). In Sec.~\ref{subsec-pert} we will find a linearized nonlocal solution by perturbing around the known local `kink' configuration treated as a background. Finally, in Sec.~\ref{subsec-order} we will make an order-of-magnitude estimation for the width and the energy of the nonlocal domain wall, and check the consistency with the approximate analytic solutions. See Fig.~\ref{fig:conceptual figure for analysis of nonlocal domain wall} for a schematic illustration of our analysis. 
\subsection{Asymptotic solution for $|x|\rightarrow \infty$} \label{subsec-asym} As a first step we analyze the asymptotic behavior of the solution close to the two vacua $\phi=\pm v$, i.e. in the regime $|x|\rightarrow \infty.$ Let us first consider the perturbation around $\phi=+v,$ i.e. we write \begin{equation} \phi(x)=v+\delta\phi(x)\,,\qquad \frac{|\delta\phi|}{v}\ll 1\,, \end{equation} so that the linearized field equation reads \begin{align} e^{-\del_x^2/M_s^2} (\del_x^2+\lambda v^2) \delta\phi = 3\lambda v^2 \delta\phi\,. \end{align} Taking inspiration from the asymptotic behavior of the domain wall in the local case (see Eq.~\eqref{eq:asymp sol in local case}), as an ansatz we assume that $\phi(x)$ approaches the vacuum exponentially, i.e. we take \begin{align} \delta\phi = \phi - v = A e^{-B x} \label{eq:def for asymp sol} \end{align} where $A$ and $B>0$ are two constants. Since the exponential is an eigenfunction of the kinetic operator, we can easily obtain an equation for $B,$ \begin{align} e^{-B^2/M_s^2} (B^2+\lambda v^2) = 3\lambda v^2\,.\label{eq-for-B} \end{align} By using the principal branch $W_0(x)$ of the Lambert-W function (defined as the inverse function of $f(x)=xe^x$) we can solve Eq.~\eqref{eq-for-B} as follows \begin{align} B^2 = - M_s^2\, W_0\qty(-\frac{3\lambda v^2}{M_s^2}e^{-\lambda v^2/M_s^2})-\lambda v^2\,. \label{eq:coeff B in asymp sol} \end{align} Before continuing let us make some remarks on the Lambert-W function. It is a multivalued function that has an infinite number of branches $W_n$ with $n\in \mathbb{Z}.$ The only real solutions are given by the branch $W_0(x)$ for $x\geq -1/e,$ and an additional real solution comes from the branch $W_{-1}(x)$ for $-1/e\leq x<0.$ In the equations above we have taken the so-called principal branch $W_0,$ and we will do the same in the rest of the paper. However, we will also comment on the branch $W_{-1}$ and the physical implications associated with it. 
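The Lambert-W solution for $B$ can be checked numerically; the sketch below (using the same illustrative parameter values $\lambda=2$, $v=1$, $M_s^2=14.3$ that appear in the figures of this Section) verifies that $B^2$ obtained from the principal branch $W_0$ satisfies the transcendental relation above, and that $B$ exceeds the local value $B_{\rm L}=\sqrt{2\lambda}\,v$.

```python
# Check that B^2 = -Ms^2*W_0(-(3 lam v^2/Ms^2) exp(-lam v^2/Ms^2)) - lam v^2
# satisfies  exp(-B^2/Ms^2) * (B^2 + lam*v^2) = 3*lam*v^2.
import numpy as np
from scipy.special import lambertw

lam, v, Ms2 = 2.0, 1.0, 14.3  # illustrative values used in this Section

arg = -(3 * lam * v**2 / Ms2) * np.exp(-lam * v**2 / Ms2)
B2 = float(np.real(-Ms2 * lambertw(arg, 0) - lam * v**2))  # branch n = 0
B = np.sqrt(B2)

lhs = np.exp(-B2 / Ms2) * (B2 + lam * v**2)
print(lhs, 3 * lam * v**2)  # both equal 6.0 for these values

B_L = np.sqrt(2 * lam) * v
print(B, B_L)  # B > B_L: the nonlocal wall approaches the vacuum faster
```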
The higher-order branches with $n>0$ generate non-physical complex values and, in some cases, do not even recover the local limit; therefore, we discard such solutions. Note that the asymptotic analysis above cannot determine the coefficient $A$, since it factors out of the linearized field equation; however, as we will explain in Sec.~\ref{subsec-pert}, we will be able to determine it up to order $\mathcal{O}(1/M_s^2)$. \subsubsection{A theoretical constraint on the scale of nonlocality} \label{subsubsec-cond} The use of the Lambert-W function to obtain the solution for $B$ in Eq.~\eqref{eq:coeff B in asymp sol} relied on the fact that Eq.~\eqref{eq-for-B} could be inverted. This inversion is valid if and only if the argument $c=-\frac{3\lambda v^2}{M_s^2}e^{-\lambda v^2/M_s^2}$ belongs to the range $\{xe^x\,|\,x\in\mathbb{R}\}$, i.e. if and only if $c\geq -1/e$. This condition reads \begin{align} -\frac{3\lambda v^2}{M_s^2}e^{-\lambda v^2/M_s^2}\geq -\frac{1}{e}\,,\label{condition} \end{align} and solving it in terms of the (principal branch) Lambert-W function we obtain the following \textit{theoretical constraint}: \begin{align} M_s^2\geq -\frac{\lambda v^2}{W_0\qty(-1/3e)}\,, \label{eq:neccessary cond for asymp sol} \end{align} where we have used the fact that $W_0(x)$ is a monotonically increasing function. The inequality~\eqref{eq:neccessary cond for asymp sol} means that the energy scale of nonlocality must be greater than the symmetry-breaking scale $\sqrt{\lambda} v.$ We can evaluate $W_0\qty(-1/3e)\simeq -0.14,$ so that the lower bound reads $M_s^2\gtrsim 7.1\, \lambda v^2.$ One usually obtains constraints on the free parameters of a theory by using experimental data. In the present work, instead, we found a purely theoretical constraint. See Sec.~\ref{sec-dis} for further discussions on this feature. 
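The numerical value of the bound is easily evaluated; the following one-liner sketch computes $W_0(-1/3e)$ and the resulting lower bound on $M_s^2$ in units of $\lambda v^2$.

```python
# Evaluate the theoretical constraint  Ms^2 >= -lam*v^2 / W_0(-1/(3e)).
import numpy as np
from scipy.special import lambertw

w0 = float(np.real(lambertw(-1.0 / (3.0 * np.e), 0)))  # principal branch
print(w0)          # ~ -0.141
print(-1.0 / w0)   # lower bound on Ms^2 in units of lam*v^2, ~ 7.1
```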
Given the fact that $\lambda v^2/M_s^2< 0.14,$ we can expand~\eqref{eq:coeff B in asymp sol} and obtain \begin{align} B^2 = 2 \lambda v^2 \qty(1+\frac{3\lambda v^2}{M_s^2}) + {\cal O}\qty(\qty(\frac{\lambda v^2}{M_s^2})^2)\,, \label{eq:series expansion for coeff B} \end{align} or, by taking the square root, \begin{align} B = \sqrt{2 \lambda} v \qty(1+\frac{3}{2}\frac{\lambda v^2}{M_s^2}) + {\cal O}\qty(\qty(\frac{\lambda v^2}{M_s^2})^2)\,. \label{eq:series expansion for coeff B-sqrt} \end{align} From this expression we can obtain the width of the wall, $\widetilde{R}$, defined in Eq.~\eqref{eq:asymp radius}, as \begin{equation} \widetilde{R} \sim \frac{2}{B}\sim \sqrt{\frac{2}{\lambda}}\frac{1}{v}\left(1-\frac32\frac{\lambda v^2}{M_s^2}\right)\,.\label{eq:tildeR} \end{equation} Also, from Eq.~\eqref{eq:series expansion for coeff B-sqrt} we can check that in the local limit $M_s\rightarrow \infty$ we recover the two-derivative case in Eq.~\eqref{eq:asymp sol in local case}: \begin{align} \lim_{M_s\to\infty} B^2 = 2 \lambda v^2 = \qty(\sqrt{2\lambda}v)^2\equiv B_{\rm L}^2\,. \end{align} Furthermore, from~\eqref{eq:series expansion for coeff B} we can notice that $B\geq B_{\rm L}=\sqrt{2\lambda}\,v.$ This physically means that the nonlocal domain wall solution approaches the vacuum $\phi=+v$ \textit{faster} as compared to the local two-derivative case. This feature, which is manifest in Eq.~\eqref{eq:tildeR}, also suggests that the width of the nonlocal domain wall is \textit{smaller} as compared to the local case; this will indeed be confirmed by the expression for $R$ in the next Subsections. So far we have only focused on the asymptotic solution for $x\rightarrow +\infty$ ($\phi(+\infty)=+v$), but the same analysis can be applied to the other asymptotic region $x\rightarrow -\infty$ ($\phi(-\infty)=-v$), and the same results hold because of the $\mathbb{Z}_2$ symmetry. 
\paragraph{Remark 1.} Before concluding this Subsection it is worth commenting on the validity of the asymptotic solution we determined. On one hand, we know that the existence of the domain wall is ensured by the topological structure of the vacuum manifold, and this should not depend on the value of $M_s.$ On the other hand, it appears that the asymptotic solution we found is only valid for values of $M_s$ satisfying the inequality in Eq.~\eqref{eq:neccessary cond for asymp sol}, which seems to imply that the domain wall solution does not exist for other values of $M_s$. Is this a contradiction? The answer is no, all is consistent, and in fact the domain wall solution exists for any value of $M_s.$ First of all, we should note that to solve Eq.~\eqref{condition} in terms of $M_s$ we have used the principal branch $W_0(x)$, which is a monotonically increasing function, but an additional real solution can be found by using the branch $W_{-1}(x)$, which is, instead, a monotonically decreasing function. Thus, given the opposite monotonicity behavior of $W_{-1}$ as compared to $W_{0},$ if we solve Eq.~\eqref{condition} by means of $W_{-1}$ we get $M_s^2\leq -\lambda v^2/W_{-1}(-1/3e)\simeq 0.30\,\lambda v^2.$ Moreover, in the intermediate range of values $0.30\, \lambda v^2 \lesssim M_s^2\lesssim 7.1\, \lambda v^2$ the functional form in Eq.~\eqref{eq:def for asymp sol} does not represent a valid asymptotic behavior for the domain wall. In this case the domain wall configuration may be characterized by a completely different profile, but its existence is still guaranteed by the non-trivial topology. In any case, as already mentioned above, in this paper we only work with $W_0,$ and therefore with values of $M_s$ satisfying the inequality~\eqref{eq:neccessary cond for asymp sol}. See also Sec.~\ref{sec-dis} for more discussion on this point in relation to physical implications. 
\subsection{Perturbation around the local solution} \label{subsec-pert} Let us now implement an alternative method to determine the behavior of the nonlocal domain wall not only at infinity but also close to the origin. We consider a linear perturbation around the standard two-derivative domain wall configuration $\phi_{\rm L}(x)=v\tanh(\sqrt{\lambda/2} vx)$. Let us define the deviation from the local solution as \begin{align} \delta\phi (x) = \phi(x) - \phi_{\rm L}(x)\,, \qquad \left|\frac{\delta\phi}{\phi_{\rm L}}\right|\ll 1\,, \label{pert-loc-nonloc} \end{align} in terms of which we can linearize the field equation~\eqref{eq:the equation of the motion for DW}: \begin{eqnarray} \left[e^{-\partial_x^2/M_s^2}(\partial_x^2+\lambda v^2)-3\lambda\phi_{\rm L}^2\right]\delta\phi=\lambda (1-e^{-\partial_x^2/M_s^2})\phi_{\rm L}^3\,.\label{linear-lnl} \end{eqnarray} Since the nonlocal scale appears squared in~\eqref{eq:the equation of the motion for DW}, we expect that $\delta\phi\sim \mathcal{O}(1/M_s^2)$, such that the local limit is consistently recovered, i.e. $\delta\phi\to0$ when $M_s^2\to\infty$. We now expand Eq.~\eqref{linear-lnl} up to order $\mathcal{O}(1/M_s^2)$ in order to extract the leading nonlocal correction to $\phi_{\rm L}.$ By expanding the nonlocal terms as follows \begin{eqnarray} e^{-\del_x^2/M_s^2} \delta\phi&=& \delta\phi + {\cal O}\qty(\frac{\del_x^2}{M_s^2}) \delta\phi\,,\\[2mm] e^{-\del_x^2/M_s^2} \phi_{\rm L}^3&=& \phi_{\rm L}^3-\frac{\partial_x^2}{M_s^2}\phi_{\rm L}^3 +{\cal O}\qty(\qty(\frac{\del_x^2}{M_s^2})^2)\phi_{\rm L}^3\,, \end{eqnarray} we can write~\eqref{linear-lnl} up to order $\mathcal{O}(1/M_s^2):$ \begin{align} \qty[\partial_x^2+\lambda v^2-3\lambda\phi_{\rm L}^2(x)] \delta\phi = \frac{\lambda}{M_s^2}\del_x^2(\phi_{\rm L}(x)^3)\,. 
\label{lin-eq-nln-2} \end{align} This expansion is valid as long as the following inequality holds: \begin{equation} \frac{1}{|\delta\phi|}\left|\frac{\del_x^2}{M_s^2}\delta\phi\right| \ll 1\,.\label{ineq-2} \end{equation} We now introduce the dimensionless variable $s=\sqrt{\lambda/2}\,vx$ and the function $f(s)=\delta\phi(x)/v,$ so that we can recast Eq.~\eqref{lin-eq-nln-2} as \begin{align} f^{\prime\prime}(s)+2(1-3\tanh^2 s)f(s) = \frac{\lambda v^2}{M_s^2}(\tanh^3 s)^{\prime\prime}\label{diff-eq-s} \end{align} where the prime $'$ denotes the derivative with respect to $s$. The above differential equation can be solved analytically, and its general solution reads \begin{eqnarray} \!\!f(s)&=& \frac{C_1}{\cosh^2s} + \frac{3C_2/2+27\lambda v^2/32M_s^2}{\cosh^2s} \log\frac{1+\tanh s}{1-\tanh s} \nonumber\\[2mm] &&+\qty[ \qty(2C_2+\frac{\lambda v^2}{8M_s^2})\cosh^2s + \qty(3C_2-\frac{61\lambda v^2}{16M_s^2}) + \frac{2\lambda v^2}{M_s^2}(1+\tanh^2 s) ] \tanh s\,,\,\,\, \end{eqnarray} where $C_1$ and $C_2$ are two integration constants to be determined. \begin{figure}[t] \centering \includegraphics[scale=0.48]{perturb-sol.pdf} \caption{ % In this figure we show the linearized nonlocal domain wall solution $\phi(x)=\phi_{\rm L}(x)+\delta\phi(x)$ (solid blue line) in comparison with the local domain wall (orange dashed line). The nonlocal configuration approaches the asymptotic vacua at $x\rightarrow \pm \infty$ faster as compared to the local case. In the smaller plot we show the behavior of the two solutions over a smaller interval in order to make the differences between the local and nonlocal cases more evident. We can notice that when going from $x=0$ to $x\rightarrow \pm \infty$ the nonlocal curve slightly oscillates around the local one. We set $\lambda=2$, $v=1$ and $M_s^2=14.3,$ which are consistent with the theoretical constraint $M_s^2\geq -\lambda v^2/W_0(-1/3e)$ in Eq.~\eqref{eq:neccessary cond for asymp sol}. 
} \label{fig:illustration for the perturbative solution} \end{figure} The boundary conditions $\phi(\pm \infty)=\pm v$ in terms of the linearized deviation read $\delta\phi(\pm \infty)=0,$ or equivalently $f(\pm \infty)=0$. These are satisfied if and only if the algebraic relation $2C_2+\lambda v^2/8M_s^2=0$ holds true, which means that $C_2=-\lambda v^2/16M_s^2$. Moreover, the constant $C_1$ must be zero because of the $\mathbb{Z}_2$-symmetry. Thus, the solution for $\delta \phi$ is given by \begin{align} \delta\phi = vf(s) = \frac{\lambda v^3}{M_s^2}\frac{1}{\cosh^2\sqrt{\frac{\lambda}{2}}vx} \qty[\frac{3}{4}\log\frac{1+\tanh\sqrt{\frac{\lambda}{2}}vx}{1-\tanh\sqrt{\frac{\lambda}{2}}vx} -2\tanh\sqrt{\frac{\lambda}{2}}vx]\,. \label{eq:exact sol for delta phi} \end{align} In Fig.~\ref{fig:illustration for the perturbative solution} we show the behavior of the nonlocal domain wall solution $\phi=\phi_{\rm L}+\delta\phi$ in comparison with the local two-derivative one $\phi_{\rm L};$ we have set values for $v,$ $\lambda$ and $M_s$ consistent with the theoretical lower bound in Eq.~\eqref{eq:neccessary cond for asymp sol}. From the plot we can notice that the nonlocal solution approaches the vacua $\pm v$ faster as compared to the local case, which is in agreement with the asymptotic analysis in the previous Subsection. 
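The profile above can be verified against the linearized equation; the sketch below (with sympy, keeping $\kappa=\lambda v^2/M_s^2$ symbolic and then evaluating at sample points) checks that the residual of $f''+2(1-3\tanh^2 s)f=\kappa(\tanh^3 s)''$ vanishes.

```python
# Consistency check: the perturbation f(s) (the solution above divided by v)
# solves  f'' + 2(1 - 3 tanh^2 s) f = kappa * (tanh^3 s)''.
import sympy as sp

s, kappa = sp.symbols('s kappa', real=True)
t = sp.tanh(s)

f = kappa / sp.cosh(s)**2 * (sp.Rational(3, 4) * sp.log((1 + t) / (1 - t)) - 2 * t)
residual = sp.diff(f, s, 2) + 2 * (1 - 3 * t**2) * f - kappa * sp.diff(t**3, s, 2)

# the residual should vanish identically; evaluate it at a few sample points
for s0 in (0.3, 1.0, 2.7):
    print(residual.subs({kappa: 1, s: s0}).evalf(chop=True))  # -> 0
```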
Indeed, we can expand the solution $\phi=\phi_{\rm L}+\delta\phi$ in the regime $|x|\rightarrow \infty$, and obtain \begin{eqnarray} \phi(x)&\simeq& \pm v\left[1-2\left(\frac{4v^2\lambda}{M^2_s}\right)e^{-\sqrt{2\lambda}vx}-2e^{-\sqrt{2\lambda}vx}\left(1-\frac{3}{2}\sqrt{2\lambda}\frac{\lambda v^3}{M_s^2}x\right)\right]\nonumber\\[2mm] &\simeq& \pm v\left[1-2\left(1+\frac{4v^2\lambda}{M^2_s}\right)e^{-\sqrt{2\lambda}v(1+3\lambda v^2/2M_s^2)x}\right]\nonumber\\[2mm] &\simeq& \pm v\left[1-2\left(1+\frac{4v^2\lambda}{M^2_s}\right)e^{-B x}\right]\,, \end{eqnarray} where to go from the first to the second line we have used the freedom to add negligible terms of order higher than $\mathcal{O}(1/M_s^2),$ i.e. $4v^2\lambda/M_s^2\simeq 4v^2\lambda/M_s^2(1-3\sqrt{2\lambda}\lambda v^3x/2M_s^2)$ and $1-3\sqrt{2\lambda}\lambda v^3/2M_s^2x\simeq e^{-(3\sqrt{2\lambda}\lambda v^3/2M_s^2)x}.$ Remarkably, the asymptotic behavior of the linearized solution perfectly matches the result obtained in Eq.~\eqref{eq:series expansion for coeff B-sqrt}; indeed, the coefficient $B$ in the exponent turns out to be exactly the same in both approaches. Moreover, from the linearized solution we can also determine the coefficient $A$ up to order $\mathcal{O}(1/M_s^2)$, i.e. $A=-2v-8v^3\lambda/M_s^2,$ which could not be determined through the asymptotic analysis in Sec.~\ref{subsec-asym} (see Eq.~\eqref{eq:def for asymp sol}). Furthermore, the behavior of the linearized solution close to the origin is quite peculiar, as the nonlocal domain wall profile slightly oscillates around the local one. In other words, when going from $x=0$ to $x\rightarrow \infty$ the perturbation $\delta\phi$ is initially negative and then becomes positive; whereas the opposite happens when going from $x=0$ to $x\rightarrow-\infty.$ This property may suggest that the typical length scale $\ell$ over which $\phi(x)$ changes in proximity of the origin is larger as compared to the local case. 
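The sign of the correction to the gradient at the origin can be checked directly; the following sketch (with sympy, using the identity $\log\frac{1+\tanh s}{1-\tanh s}=2s$ to simplify the profile) shows that the slope of the nonlocal correction at $s=0$ is $f'(0)=-\kappa/2$ with $\kappa=\lambda v^2/M_s^2$, i.e. the perturbation lowers the gradient of $\phi$ at the centre of the wall and hence enlarges the length scale $\ell$.

```python
# Slope of the nonlocal correction at the origin:
# f(s) = kappa*(3*s/2 - 2*tanh s)/cosh^2 s  (log identity already applied)
import sympy as sp

s, kappa = sp.symbols('s kappa', real=True)
f = kappa / sp.cosh(s)**2 * (sp.Rational(3, 2) * s - 2 * sp.tanh(s))

fprime0 = sp.simplify(sp.diff(f, s).subs(s, 0))
print(fprime0)  # -> -kappa/2
```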
As done for the local domain wall in Sec.~\ref{sec-review}, we can estimate such a length scale as the inverse of the gradient at the origin times the energy scale $v$, i.e. $\ell\sim v/(\partial_x\phi|_{x=0}).$ By doing so, up to order $\mathcal{O}(1/M_s^2)$ we get \begin{equation} \del_x\phi(x)|_{x=0}=v^2\sqrt{\frac{\lambda}{2}}\left(1-\frac{\lambda v^2}{2M_s^2}\right)+\mathcal{O}\left(\frac{1}{M_s^4}\right)\,,\label{grad-zero} \end{equation} which yields \begin{equation} \ell\sim \sqrt{\frac{2}{\lambda}}\frac{1}{v}\left(1+\frac{\lambda v^2}{2M_s^2}\right)\,.\label{ell-linearized} \end{equation} Note that such a length scale does \textit{not} coincide with the width of the wall, because it is related to the behavior of the solution close to the origin and far from the vacuum. In standard two-derivative theories the above computation would give a result for $\ell$ that coincides with the size of the wall, but this is just a coincidence. We will comment more on this in Sec.~\ref{subsec-order}. \subsubsection{Validity of the linearized solution} The above linearized solution was found perturbatively, and it is valid as long as the two inequalities in Eqs.~\eqref{pert-loc-nonloc} and~\eqref{ineq-2} hold. We now check when these conditions are satisfied. By working with the variable $s=\sqrt{\lambda/2} vx$ and the field redefinition $\delta\phi=vf(s),$ the inequality~\eqref{pert-loc-nonloc} reads: \begin{equation} |H(s)|= \left|\frac{f(s)}{\tanh s}\right|\ll 1\,, \end{equation} where $H(s):=f(s)/\tanh s\,.$ By analyzing the behavior of $|H(s)|$ we can notice that it is always less than unity, thus supporting the validity of the linearized solution in Eq.~\eqref{eq:exact sol for delta phi}; see the left panel in Fig.~\ref{fig:graph for verifivation of perturbative condition}. Let us now focus on the inequality~\eqref{ineq-2}. 
By introducing also in this case the variable $s=\sqrt{\lambda/2}vx$ we can write \begin{eqnarray} \frac{\partial_x^2}{M_s^2}\delta\phi &=&\frac{\lambda v^2/2}{M_s^2} v f^{\prime\prime}(s) \nonumber\\[2mm] &=&\frac{\lambda v^3/2}{M_s^2} \frac{\lambda v^2}{M_s^2} \frac{{\rm d}^2}{{\rm d}s^2} \qty{ \frac{1}{\cosh^2 s} \left( \frac{3}{4} \log\frac{1+\tanh s}{1-\tanh s} - 2\tanh s \right) } \nonumber\\[2mm] &=&v \times \underbrace{\frac{1}{2}\qty(\frac{\lambda v^2}{M_s^2})^2\frac{{\rm d}^2}{{\rm d}s^2} \qty{ \frac{1}{\cosh^2 s} \left( \frac{3}{4} \log\frac{1+\tanh s}{1-\tanh s} - 2\tanh s \right) } }_{=:\,g(s)}\,, \end{eqnarray} where $g(s):=\partial_x^2\delta\phi(x)/(M_s^2v)$. In terms of the dimensionless functions $f(s)$ and $g(s)$ the inequality~\eqref{ineq-2} becomes \begin{align} \left|\frac{\partial_x^2}{M_s^2}\delta\phi\right| \ll |\delta\phi| \quad\Leftrightarrow\quad |g(s)| \ll |f(s)|\,. \end{align} Therefore, we have to analyze the function \begin{align} h(s):=\frac{g(s)}{f(s)} =\frac{\lambda v^2}{2M_s^2} \frac{\dps\frac{{\rm d}^2}{{\rm d}s^2}\qty[\frac{1}{\cosh^2 s}\qty(\frac{3}{4}\log\frac{1+\tanh s}{1-\tanh s}-2\tanh s)]} {\dps\frac{1}{\cosh^2 s}\qty(\frac{3}{4}\log\frac{1+\tanh s}{1-\tanh s}-2\tanh s)}\,, \end{align} and check for which values of $s$ its modulus $|h(s)|$ is less than unity. In the right panel of Fig.~\ref{fig:graph for verifivation of perturbative condition} we have shown the behavior of $h(s);$ we have only plotted the region $s\geq 0$ as $h(s)$ is an even function in $s$. We can notice that $h(s)$ diverges at the point $s\sim 1.03402$ where $f(s)$ vanishes; therefore, in the proximity of this point the linearized solution $\delta\phi(x)$ might not be valid. 
However, for $|s|\rightarrow \infty$ and $s\rightarrow 0$ the inequality~\eqref{ineq-2} can be satisfied: \begin{figure}[t] \centering \includegraphics[scale=0.315]{validity-phi_L-delta-phi}\quad\includegraphics[scale=0.305]{validity-del-2-delta-phi} \caption{ % (Left panel) Behavior of $H(s)=f(s)/\tanh s$ as a function of $s=\sqrt{\lambda/2}vx.$ The modulus of the function is always less than unity, i.e. $|H(s)|< 1,$ supporting the validity of the linearized solution $\delta \phi (x) =v f(s)$ in Eq.~\eqref{eq:exact sol for delta phi}. $H(s)$ becomes smaller and smaller for larger values of $M_s.$ (Right panel) Behavior of the function $h(s)=g(s)/f(s).$ As long as $|h(s)|\ll 1$ the linearized solution~\eqref{eq:exact sol for delta phi} can be trusted as a good approximation of the true behavior of the nonlocal domain wall. We can notice that close to the asymptotics ($s\rightarrow \infty$) the function can be kept less than one, but there is a singularity at $s\sim 1.03402$ caused by the fact that $f(s)$ vanishes at this point. Moreover, the linearized approximation close to the origin and at infinity becomes better for larger values of $M_s.$ In both panels we only show the behavior for $s\geq 0$ because both functions $h(s)$ and $H(s)$ are even in $s.$ We set $\lambda=2,$ $v=1$ and $M_s^2=14.3,$ which are consistent with the theoretical constraint $M_s^2\geq -\lambda v^2/W_0(-1/3e)$ in Eq.~\eqref{eq:neccessary cond for asymp sol}. } \label{fig:graph for verifivation of perturbative condition} \end{figure} \begin{align} h(0)=\lim_{s\to0} h(s) = -7\frac{\lambda v^2}{M_s^2}\,, \qquad h(\infty)=\lim_{s\to\infty} h(s) = 2\frac{\lambda v^2}{M_s^2}\,,\label{limits} \end{align} and by using the theoretical lower bound in Eq.~\eqref{eq:neccessary cond for asymp sol}, i.e. $\lambda v^2/M_s^2\leq-W_0(-1/3e)\simeq 0.14,$ it follows that both asymptotic limits are always less than unity, i.e. 
$|h(0)|< 0.98$ and $h(\infty)< 0.28,$ and the approximation becomes better for larger values of the scale of nonlocality $M_s$. Let us now make two important remarks. \paragraph{Remark 2.} In light of the remark at the end of Sec.~\ref{subsec-asym}, we now understand that the linearized perturbative solution~\eqref{eq:exact sol for delta phi} would not have been valid if we had instead used $W_{-1}$ and the corresponding upper bound $M_s^2\leq -\lambda v^2/W_{-1}(-1/3e).$ In such a case a different domain wall solution with the same functional form for the asymptotic behavior is still guaranteed to exist, but we are not interested in it in the current work. Therefore, we emphasize again that we only work with a domain wall configuration consistent with the bound~\eqref{eq:neccessary cond for asymp sol}. \paragraph{Remark 3.} We have noticed that the linearized approximation breaks down in the proximity of $s\sim 1.03402.$ This means that the boundary condition $\delta\phi(\pm\infty)=0$ imposed on the perturbation cannot by itself be used to connect the behavior of the solution from $x=\pm\infty$ down to $x=0$, because the linearized approximation fails in between. This might suggest that the linearized solution obtained in Eq.~\eqref{eq:exact sol for delta phi} can be trusted only close to the vacua, and that the above analysis is not enough to justify the behavior close to the origin. However, by using a different and reliable perturbative expansion in the intermediate region around $s\sim 1.03402,$ and imposing junction conditions to glue the different pieces of the solution defined in three different regions, we checked and confirmed that the behavior close to the origin found above is well justified. See App.~\ref{sec-corr} for more details. 
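The two limits quoted in Eq.~\eqref{limits} can be cross-checked numerically; the sketch below (with sympy, in units $\kappa=\lambda v^2/M_s^2=1$, and using the identity $\log\frac{1+\tanh s}{1-\tanh s}=2s$ to write the bracketed profile) evaluates $h(s)=\tfrac{1}{2}w''(s)/w(s)$ near the origin and deep in the tail, where the limit $2$ is approached slowly, like $1/s$.

```python
# Cross-check of h(0) = -7 and h(inf) = 2 (in units kappa = lam*v^2/Ms^2 = 1).
import sympy as sp

s = sp.symbols('s', positive=True)
w = (sp.Rational(3, 2) * s - 2 * sp.tanh(s)) / sp.cosh(s)**2  # bracketed profile

h = sp.Rational(1, 2) * sp.diff(w, s, 2) / w  # h(s) for kappa = 1

# arbitrary-precision evaluation near the origin and deep in the tail
print(sp.N(h.subs(s, sp.Rational(1, 10000))))  # ~ -7
print(sp.N(h.subs(s, 500)))                    # ~ 2 (approached like 1/s)
```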
\subsection{Estimation of width and energy} \label{subsec-order} We now estimate the width and the energy per unit area of the nonlocal domain wall by performing an analysis analogous to the one made for the local two-derivative case in Sec.~\ref{sec-review}. Let us approximate the field as $\phi\sim v,$ and the gradient as $\del_x\sim R^{-1},$ where $R$ is the width of the wall. From the expression of the energy-momentum tensor~\eqref{final-stress-exp} (with $m^2=-\lambda v^2$), we can obtain the energy density of the wall: \begin{align} \mathcal{E}(x) \equiv T_0^0(x)= -\frac{1}{2}\phi e^{-\partial_x^2/M_s^2}(\partial_x^2+\lambda v^2)\phi +\frac{\lambda}{4}(\phi^4+v^4)\,. \label{eq:energy density for DW} \end{align} The next step would be to impose the balance between the kinetic and the potential energy in~\eqref{eq:energy density for DW} for the lowest-energy configuration, and solve the resulting equation for the width $R.$ The presence of the infinite-derivative operator makes the procedure less straightforward as compared to the local case because we should first understand how to estimate $e^{-\del_x^2/M_s^2},$ i.e. whether to replace the exponent with $-1/(M_sR)^2$ or $+1/(M_sR)^2.$ The strategy to follow in order to avoid any ambiguities is to Taylor expand, recast the infinite-derivative pieces in terms of an infinite number of squared quantities, replace $\del_x\sim 1/R,$ $\phi\sim v,$ and then re-sum the series.
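This re-summation step can be illustrated numerically (a sketch of ours, with arbitrary illustrative values for $v$, $R$ and $M_s$): after the replacements $\phi\sim v$ and $\del_x\sim 1/R$, the expanded operator becomes the ordinary exponential series with a \emph{positive} argument:

```python
import math

# Each term of the expanded kinetic operator contributes, after the
# replacements phi ~ v and d/dx ~ 1/R,
#   (v^2/R^2) * (1/n!) * (1/(M_s R)^2)^n,
# so the series re-sums to (v^2/R^2) * exp(+1/(M_s R)^2).
v, R, Ms = 1.0, 0.7, 3.8                      # illustrative values only
x = 1.0 / (Ms * R) ** 2

partial = (v**2 / R**2) * sum(x**n / math.factorial(n) for n in range(30))
resummed = (v**2 / R**2) * math.exp(x)
print(abs(partial - resummed) < 1e-12)        # prints: True
```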
By Taylor expanding in powers of $\partial_x^2/M_s^2$ the infinite-derivative terms in Eq.~\eqref{eq:energy density for DW}, and neglecting total derivatives, we can write \begin{align} \!\phi e^{-\del_x^2/M_s^2}\del_x^2\phi&= -\del_x\phi e^{-\del_x^2/M_s^2}\del_x\phi\nonumber\\[2mm] &= -\left[(\del_x\phi)^2-\frac{1}{M_s^2}\del_x\phi\del_x^2\del_x\phi+\frac{1}{2!M_s^4}\del_x\phi\del_x^4\del_x\phi-\cdots+\frac{(-1)^n}{n!M_s^{2n}}\del_x\phi\del_x^{2n}\del_x\phi+\cdots\right] \nonumber\\[2mm] &=-\left[(\del_x\phi)^2+\frac{1}{M_s^2}(\del_x^2\phi)^2+\frac{1}{2!M_s^4}(\del_x^3\phi)^2+\cdots+\frac{1}{n!M_s^{2n}}(\del_x^{n+1}\phi)^2 +\cdots\right]\nonumber\\[2mm] &= -\sum\limits_{n=0}^\infty \frac{1}{n!}\left(\frac{1}{M_s^2}\right)^n \left(\del_x^{n+1}\phi\right)^2\,, \end{align} and \begin{equation} \lambda v^2\phi e^{-\del_x^2/M_s^2}\phi=\lambda v^2\sum\limits_{n=0}^\infty \frac{1}{n!}\left(\frac{1}{M_s^2}\right)^n \left(\del_x^{n}\phi\right)^2\,. \end{equation} Then, by using $\phi\sim v$ and $\del_x\sim 1/R,$ we get \begin{align} \phi e^{-\del_x^2/M_s^2}\del_x^2\phi\sim -\frac{v^2}{R^2}\sum\limits_{n=0}^\infty\frac{1}{n!}\left(\frac{1}{M_s^2R^2}\right)^n = -\frac{v^2}{R^2}e^{1/(M_sR)^2}\,, \end{align} and \begin{equation} \lambda v^2\phi e^{-\del_x^2/M_s^2}\phi\sim\lambda v^4 e^{1/(M_sR)^2}\,. \end{equation} Thus, we have shown that the correct sign in the exponent when making the estimation is the positive one\footnote{To further remove any possible ambiguity and/or confusion, it is worth mentioning that the same result would have been obtained if we had started with a positive definite expression for the kinetic energy, for instance with the expression $\del_x\phi e^{-\del_x^2/M_s^2}\del_x\phi=(e^{-\del_x^2/2M_s^2}\del_x\phi)^2\geq 0,$ where we integrated by parts and neglected total derivatives. Also in this case one can show (up to total derivatives) that $(e^{-\del_x^2/2M_s^2}\del_x\phi)^2=\sum_{k,l=0}^\infty 1/(k!\,l!)
(1/2M_s^2)^{k+l}\left(\del_x^{(k+l+1)}\phi\right)^2\sim (v^2/R^2)\left[\sum_{k=0}^\infty 1/k!(1/2M_s^2R^2)^k\right]^2= (v^2/R^2)e^{1/(M_sR)^2}$.}. To make the consistency with the low-energy limit $M_s \to \infty$ more manifest, it is convenient to separate the kinetic and the potential contributions in~\eqref{eq:energy density for DW} as follows: \begin{align} \mathcal{E}(x)=& \qty[ -\frac{1}{2} \phi e^{-\del_x^2/M_s^2}\partial_x^2 \phi - \frac{1}{2} \lambda v^2 \phi \qty(e^{-\del_x^2/M_s^2}-1) \phi ] + \qty[ \frac{\lambda}{4} \qty(\phi^2-v^2)^2 ]\,, \end{align} so that the balance equation between kinetic and potential energies reads \begin{align} \frac{1}{2} \frac{v^2}{R^2} e^{1/(M_s R)^2} - \frac{1}{2} \lambda v^4 \qty(e^{1/(M_s R)^2}-1) \sim \frac{\lambda}{4} v^4\,. \label{eq:full-balance-eq} \end{align} We are mainly interested in the leading nonlocal correction, thus we expand for $M_sR\gg 1$ up to the first relevant nonlocal contribution: \begin{eqnarray} \frac{1}{2}\frac{v^2}{R^2}\left(1+\frac{1}{M_s^2R^2}\right)-\frac{1}{2}\frac{\lambda v^4}{M_s^2R^2}\sim \frac{1}{4}\lambda v^4\,. \label{eq:full-balance-eq-leading} \end{eqnarray} The solution up to order $\mathcal{O}(1/M_s^2)$ is given by \begin{eqnarray} \frac{1}{R^2}\sim\frac{\lambda v^2}{2}\left(1+\frac{\lambda v^2}{2M_s^2} \right)\,, \end{eqnarray} from which we obtain \begin{eqnarray} R\sim\sqrt{\frac{2}{\lambda }}\frac{1}{v}\left(1-\frac{\lambda v^2}{4M_s^2} \right)\,. \end{eqnarray} Therefore, the nonlocal domain wall turns out to be \textit{thinner} (its width $R$ smaller) as compared to the local two-derivative case. This is consistent with both the asymptotic analysis in Sec.~\ref{subsec-asym} and the linearized solution in Sec.~\ref{subsec-pert}. In fact, in the previous subsections we found that the nonlocal configuration approaches the vacua $\pm v$ \textit{faster} as compared to the local case, i.e.
the coefficient $B$ in Eq.~\eqref{eq:series expansion for coeff B-sqrt} is larger than the corresponding local one. Then, as shown in Eq.~\eqref{eq:tildeR}, $\widetilde{R}$ becomes smaller in the nonlocal case, which is consistent with the behavior of $R$. That is, the coefficient $B$ and the width of the wall $R$ are inversely related: if $B$ increases then $R$ must decrease, and this is indeed what we showed. We can also estimate the energy per unit area \begin{eqnarray} E&=& \int_\mathbb{R}{{\rm d}x}\qty[-\frac{1}{2}\phi e^{-\partial_x^2/M_s^2}(\partial_x^2+\lambda v^2)\phi +\frac{\lambda}{4}(\phi^4+v^4)] \nonumber\\[2mm] &\sim& (\text{width of the wall}) \times (\text{energy density}) \nonumber\\[2mm] & \sim& R \times \lambda v^4 \nonumber\\[2mm] &\sim& \sqrt{\frac{\lambda}{2}}v^3\left(1-\frac{\lambda v^2}{4M_s^2}\right) \,,\label{energ-estim} \end{eqnarray} which is also \textit{decreased} as compared to the local case. It is worth emphasizing that the expansion for small $\lambda v^2/M_s^2$ is well justified for the domain wall solution satisfying the bound in Eq.~\eqref{eq:neccessary cond for asymp sol}, which was obtained by using the principal branch $W_0$ of the Lambert-W function. \paragraph{Remark 4.} In Sec.~\ref{subsec-pert} we estimated a scale $\ell$ in addition to the width ($\ell> R, \widetilde{R}$). In standard local theories all three scales coincide because there is only one physical scale, $\ell_{\rm L}=R_{\rm L}=\widetilde{R}_{\rm L} \sim (\sqrt{\lambda}v)^{-1}.$ In general, however, $\ell$ and $R$ ($\widetilde{R}$) represent two different physical scales, and this becomes manifest in the nonlocal theory under investigation. The length scale $R$ ($\widetilde{R}$) is the one that contains the information about the size of the wall because it is proportional to $1/B,$ and it is related to how fast the field configuration approaches the vacuum.
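As a consistency check (again our own numerical sketch, with illustrative parameter values), plugging $1/R^2\sim(\lambda v^2/2)\left(1+\lambda v^2/2M_s^2\right)$ back into the truncated balance equation leaves only a residual that is negligible compared to the retained $\mathcal{O}(1/M_s^2)$ correction:

```python
lam, v = 2.0, 1.0  # illustrative values only

def residual(Ms2):
    # Kinetic minus potential side of the truncated balance equation,
    # evaluated on the perturbative solution 1/R^2 = (lam*v^2/2)*(1 + lam*v^2/(2*Ms2)).
    u = 0.5 * lam * v**2 * (1.0 + 0.5 * lam * v**2 / Ms2)  # u = 1/R^2
    lhs = 0.5 * v**2 * u * (1.0 + u / Ms2) - 0.5 * lam * v**4 * u / Ms2
    rhs = 0.25 * lam * v**4
    return lhs - rhs

Ms2 = 1.0e2
correction = lam**2 * v**6 / (4.0 * Ms2)   # size of the kept O(1/Ms^2) term
print(abs(residual(Ms2)) < 1e-3 * correction)   # prints: True
```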
The scale $\ell$, instead, is related to how fast the field changes in the proximity of the origin: indeed, it is inversely proportional to the gradient at $x=0$ (see Eqs.~\eqref{grad-zero} and~\eqref{ell-linearized}), and we have $\ell>\ell_{\rm L}.$ The difference between $\ell$ and $R$ ($\widetilde{R}$) is caused by the oscillatory behavior of the nonlocal solution around the local one. \section{Comments on other topological defects} \label{sec-other} So far we have only focused on the domain wall configuration. However, it would be very interesting to repeat the same analysis for other topological defects, like \textit{strings} and \textit{monopoles}, which can appear in nonlocal models characterized by continuous-symmetry breaking. A full study of these topological defects in nonlocal field theories goes beyond the scope of this paper; however, we can make some important comments. First of all, the existence of such finite-energy configurations is always guaranteed by the non-trivial topological structure of the vacuum manifold.\footnote{Of course, they can exist only dynamically in the local case because of Derrick's theorem~\cite{Vilenkin:2000jqa}. But, in a nonlocal case, even Derrick's theorem might be circumvented. This issue will be left for future work.} Knowing that a solution must exist, we can ask how some of its properties would be affected by nonlocality. Indeed, an order-of-magnitude estimation similar to the one carried out in Sec.~\ref{subsec-order} can be applied to these other global topological defects. In particular, by imposing the balance between kinetic and potential energy one would obtain that the radii of both strings and monopoles are \textit{smaller} as compared to the corresponding ones in the local case. We leave a more detailed investigation of higher-dimensional topological defects, including stabilizing gauge fields, for future work.
\section{Discussion \& conclusions}\label{sec-dis} \paragraph{Summary.} In this paper, we studied for the first time topological defects in the context of nonlocal field theories. In particular, we mainly focused on the domain wall configuration associated to the $\mathbb{Z}_2$-symmetry breaking in the simplest nonlocal scalar field theory with nonlocal differential operator $e^{-\Box/M_s^2}$. Despite the complexity of non-linear infinite-order differential equations, we managed to find an approximate analytic solution. Indeed, we were able to understand how nonlocality affects the behavior of the domain wall both asymptotically close to the vacua and around the origin. Let us briefly highlight our main results: \begin{itemize} \item We showed that the nonlocal domain wall approaches the asymptotic vacua $\pm v$ faster as compared to the local two-derivative case. We confirmed this feature in two ways: (i) studying the behavior of the solution towards infinity ($|x|\rightarrow \infty$); (ii) analyzing a linearized nonlocal solution found through perturbations around the local domain wall configuration. \item Such a faster asymptotic behavior also means that the width of the wall, $R$ ($\widetilde{R}$), is smaller than the corresponding local one. This physically means that the boundary separating two adjacent causally disconnected spatial regions with two different vacua (i.e. $+v$ and $-v$) becomes thinner as compared to the local case. We confirmed this property by making an order-of-magnitude estimation involving the balance equation between kinetic and potential energy. As a consequence, the energy per unit area is also smaller. \item We noticed that the nonlocal domain wall has a very peculiar behavior around the origin, i.e. in the proximity of $x\sim 0$.
We found that the linearized nonlocal solution, $\phi=\phi_{\rm L}+\delta \phi,$ oscillates around the local domain wall when going from $x=0$ to $|x|\rightarrow \infty.$ In other words, the perturbation $\delta\phi$ changes sign: when going from $x=0$ to $x=+\infty$ it is first negative and then positive, and vice versa when going from $x=0$ to $x=-\infty$. We confirmed the validity of the solution close to the origin in App.~\ref{sec-corr}. \item The specific nonlocal domain wall solution analyzed in this paper can exist only if the nonlocal scale $M_s$ satisfies the lower bound $M_s\gtrsim \sqrt{\lambda} v,$ namely if the energy scale of nonlocality is larger than the symmetry-breaking scale. \end{itemize} \paragraph{Discussion \& Outlook.} Here we have only dealt with nonlocal field theories in flat spacetime, without assuming any specific physical scenario. However, it might be interesting to understand how to embed our analysis in a cosmological context, where we could expect gravity to be nonlocal as well; see Refs.~\cite{Biswas:2005qr,Koshelev:2016xqb,Koshelev:2017tvv,Koshelev:2020foq} and references therein. In particular, in Refs.~\cite{Koshelev:2017tvv,Koshelev:2020foq} inflationary cosmology in nonlocal (infinite-derivative) gravity was investigated, and the following experimental bound on the scale of nonlocality was obtained: $M_s\gtrsim H,$ where $H$ is the Hubble constant during inflation, i.e. $H\sim 10^{14}\,$GeV. Our theoretical lower bound is consistent with the experimental constraint derived in~\cite{Koshelev:2017tvv,Koshelev:2020foq} for the gravity sector. Indeed, some symmetry breaking is expected to happen after inflation, i.e. at energies $v\lesssim H,$ which is consistent with the theoretical lower bound $M_s\gtrsim \sqrt{\lambda} v$ in Eq.~\eqref{eq:neccessary cond for asymp sol}.
Hence, in a cosmological context one would expect the following hierarchy of scales\footnote{We are implicitly assuming that there exists only one scale of nonlocality $M_s$ for both gravity and matter sectors.}: \begin{equation} M_s\gtrsim H\gtrsim v\,.\label{set-ineq} \end{equation} Very interestingly, this cosmological scenario can be used to rule out some topological-defect solutions in nonlocal field theory. For instance, in light of the discussions at the end of Sec.~\ref{subsec-asym} and Sec.~\ref{subsec-pert}, there must exist at least one other domain wall configuration that is valid for $M_s^2\lesssim \lambda v^2.$ In such a case the set of inequalities~\eqref{set-ineq} would be replaced by $v\gtrsim M_s\gtrsim H,$ which implies that the symmetry breaking would happen before inflation. Thus, if we are interested in domain wall formation after inflation, then we can surely discard any configuration valid in the regime $M_s^2\lesssim \lambda v^2.$ It would be very interesting to consider other topological defects, like strings and monopoles, which can appear in nonlocal models characterized by continuous-symmetry breaking. In fact, global strings might play important roles, such as axion emission in an expanding universe. In the local case, the topological defects that are formed from global-symmetry breaking should be unstable because of Derrick's theorem~\cite{Vilenkin:2000jqa}, which excludes the existence of stationary stable configurations in dimensions greater than one. Then, they can exist only dynamically, e.g. in an expanding universe. However, Derrick's theorem might not apply in the nonlocal case. It would also be interesting to investigate whether such stationary stable configurations could exist in a nonlocal theory or not. As another potential future direction, we can consider another class of models characterized by gauge symmetries as well as global ones.
In fact, among the possible physical applications that can be studied in relation to topological defects, we have gravitational waves, e.g. the ones emitted by cosmic strings. We would expect that the presence of nonlocality changes the dynamics in such a way as to modify the gravitational waveform non-trivially. This type of investigation will provide powerful test-benches for nonlocal field theories, and will help to further constrain the structure of the nonlocal differential operators in the Lagrangian and the value of the nonlocal scale. In this work we only focused on the simplest model with $F(-\Box)=e^{-\Box/M_s^2},$ but one could work with more generic operators. Actually, the class of viable differential operators is huge (e.g. see~\cite{Buoninfante:2020ctr}), and it would be interesting to reduce it by means of new phenomenological studies. As yet another future work we also wish to consider other types of field-theoretical objects. An important phenomenon is the \textit{false vacuum decay}~\cite{Coleman:1977py,Callan:1977pt,Coleman:1980aw}, according to which the false vacuum -- which corresponds to a local minimum of the potential -- has a non-zero probability to decay through quantum tunneling into the true vacuum -- which corresponds to a global minimum. This tunneling process consists in interpolating between the false and the true vacuum through an instanton (a bounce solution). It would be very interesting to generalize the standard analysis done for a local two-derivative theory to the context of nonlocal field theories, and to understand how nonlocality would affect this phenomenon, e.g. how the tunneling probability would change. Another interesting direction is to consider non-topological solitons like Q-balls and oscillons/I-balls. The existence of these objects is also related to the presence of a bounce solution.
However, one should note that, unlike for topological defects, the existence of these bounce solutions is not guaranteed in nonlocal theories: it is very difficult to prove the existence of such a bounce solution in a nonlocal theory, in contrast to the local case. Therefore, even if one somehow obtained (possible) approximate solutions, one could not make any argument based on them without a proof of the existence of exact bounce solutions. This is the reason why we dealt only with topological defects in this paper, and left a study of non-topological field-theoretical objects for future work. Finally, we should emphasize that non-linear and infinite-order differential equations are difficult to solve not only analytically but also numerically. Indeed, to the best of our knowledge no numerical technique to find domain wall solutions in nonlocal theories is currently known. Some techniques to solve nonlinear equations involving infinite-order derivatives have been developed in the last decades~\cite{Moeller:2002vx,Arefeva:2003mur,Volovich2003,Joukovskaya:2008cr,Calcagni:2008nm,Frasca:2020ojd}, but none of them seems to be useful for the type of field equations considered in this paper. Therefore, as a future task it will be extremely interesting to develop new numerical and analytic methods to find topological-defect solutions. This will also be important to investigate the stability of topological defects in nonlocal field theory, something that we have not done in this paper. \subsection*{Acknowledgements} The authors are grateful to Sravan Kumar for useful discussions. Y.~M. acknowledges the financial support from Advanced Research Center for Quantum Physics and Nanoscience, Tokyo Institute of Technology. M.~Y. acknowledges financial support from JSPS Grants-in-Aid for Scientific Research Nos. JP18K18764, JP21H01080, and JP21H00069.
\section{#1}\setcounter{equation}{0}\setcounter{theorem}{0}} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \theoremstyle{definition} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newcommand{\lie}[1]{\mathfrak{#1}} \newcommand{\gpd}{\mathcal{G}} \newcommand{\source}{\mathsf{s}} \newcommand{\target}{{\mathsf{t}}} \newcommand{\tg}{\mathsf t} \newcommand{\s}{\mathsf s} \newcommand{\core}{\mathsf c} \newcommand{\rr}{\rightrightarrows} \newcommand{\Cour}[1]{[\![#1]\!]} \DeclareMathOperator{\ann}{ann} \newcommand{\D}{{\mathcal{D}}} \newcommand{\TT}{{\mathbb{T}}} \newcommand{\m}{{\mathsf{m}}} \newcommand{\T}{{\mathcal{T}}} \newcommand{\bxi}{\boldsymbol{\xi}} \newcommand{\inv}{^{-1}} \newcommand{\smoo}{C^\infty(G)} \newcommand{\dr}{\mathbf{d}} \newcommand{\tH}{{\tilde H}} \newcommand{\ldr}[1]{{{\pounds}}_{#1}} \newcommand{\ip}[1]{{\mathbf{i}}_{#1}} \newcommand{\an}[1]{\arrowvert_{#1}} \newcommand{\pair}[1]{\langle {#1}\rangle}
\newcommand{\poi}[1]{\{#1\}} \newcommand{\rb}{\rrbracket} \DeclareMathOperator{\rad}{rad} \DeclareMathOperator{\rk}{rank} \DeclareMathOperator{\dom}{Dom} \DeclareMathOperator{\erz}{span} \DeclareMathOperator{\Exp}{Exp} \DeclareMathOperator{\pr}{pr} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\graphe}{graph} \DeclareMathOperator{\Skew}{Skew} \DeclareMathOperator{\Id}{Id} \begin{document} \title{Dorfman connections and Courant algebroids} \author{M. Jotz} \address{Georg-August-Universit\"at G\"ottingen\\ Mathematisches Institut\\ Bunsenstr. 3-5\\ 37073 G\"ottingen\\ Germany} \thanks{Supported by the \emph{Dorothea Schl\"ozer Programme} of the University of G\"ottingen.} \email{[email protected]} \subjclass[2010]{Primary ; Secondary } \begin{abstract} A linear connection $\nabla:\mathfrak{X}(M)\times\Gamma(E)\to\Gamma(E)$ on a vector bundle $E$ over a smooth manifold $M$ is tantamount to a splitting $TE\xrightarrow{\sim} V\oplus H_\nabla$, where $V$ is the set of vectors tangent to the fibres of $E$. Furthermore, the curvature of the connection measures the failure of the horizontal space $H_\nabla$ to be involutive. In this paper, we show that splittings $TE\oplus T^*E\xrightarrow{\sim} (V\oplus V^\circ)\oplus L$ of the Pontryagin bundle over the vector bundle $E$ can be described in the same manner via a certain class of maps $\Delta:\Gamma(TM\oplus E^*)\times\Gamma(E\oplus T^*M)\to\Gamma(E\oplus T^*M)$. Similarly to the tangent case, we find that, after the choice of a splitting, the Courant algebroid structure of $TE\oplus T^*E\to E$ can be completely described by properties of the map $\Delta$.
The maps $\Delta$ in this correspondence theorem are particular examples of connection-like maps that we define in this paper and name Dorfman connections. Roughly said, these objects are to Courant algebroids what connections are to Lie algebroids. In the second part of this paper, we study splittings of $TA\oplus T^*A$ over a Lie algebroid $A$, and we show how $\mathcal{LA}$-Dirac structures on $A$ are in bijection with a class of Manin triples over the base manifold $M$. This has as special cases the correspondences between Lie bialgebroids and Poisson algebroids, and between IM-$2$-forms and presymplectic algebroids. \end{abstract} \maketitle \tableofcontents \section{Introduction} Let us start with a simple observation. Take a subbundle $F\subseteq TM$ of the tangent space of a smooth manifold $M$. Then the $\mathbb{R}$-bilinear map \[\tilde \nabla: \Gamma(F)\times \mathfrak{X}(M)\to \Gamma(TM/F),\] \[\tilde \nabla_fX=\overline{[f, X]}\] measures the failure of vector fields on $M$ to preserve $F$. The subbundle $F$ is involutive if and only if $\tilde\nabla_ff'=0$ for all $f,f'\in\Gamma(F)$, and in this case, $\tilde \nabla$ induces a flat connection \[\nabla:\Gamma(F)\times\Gamma(TM/F)\to\Gamma(TM/F),\] \[\nabla_f\bar X=\overline{[f, X]},\] the \textbf{Bott connection} associated to $F$ \cite{Bott72}. In the same manner, given a Courant algebroid $\mathsf E\to M$ with bracket $\llbracket\cdot\,,\cdot\rrbracket$, anchor $\rho$ and pairing $\langle\cdot\,,\cdot\rangle$, and a subbundle $K\subseteq \mathsf E$, one can define an $\mathbb{R}$-bilinear map \[\tilde\Delta:\Gamma(K)\times\Gamma(\mathsf E)\to \Gamma(\mathsf E/K),\] \[\tilde\Delta_ke=\overline{\llbracket k,e\rrbracket}.\] Again, we have $\tilde\Delta_kk'=0$ for all $k,k'\in\Gamma(K)$ if and only if $K$ is a subalgebroid of $\mathsf E$.
If $K$ is in addition isotropic, it is a Lie algebroid over $M$ and the pairing on $\mathsf E$ induces a pairing $K\times_M(\mathsf E/K)\to \mathbb{R}$. The $\mathbb{R}$-bilinear map \[\Delta:\Gamma(K)\times\Gamma(\mathsf E/K)\to\Gamma(\mathsf E/K),\] \[\Delta_k\bar e=\overline{\llbracket k,e\rrbracket}\] that is induced by $\tilde \Delta$ is not a connection because it is not $C^\infty(M)$-homogeneous in the first argument, but the obstruction to this is, as we will see, measured by the pairing, the anchor of the Courant algebroid and the de Rham derivative on $C^\infty(M)$. This map is an example of what is called in this paper a Dorfman connection, namely the \textbf{Bott--Dorfman connection} associated to $K$. In this paper, Dorfman connections are defined, and some of their properties and applications are studied. Dorfman connections appear naturally in several situations related to Courant algebroids and play a role similar to the one that connections play for tangent bundles and Lie algebroids. \medskip It is for instance well-known that a linear $TM$-connection $\nabla$ on a vector bundle $q_E\colon E\to M$ corresponds to a splitting $TE\xrightarrow{\sim} T^{q_E}E\oplus H_\nabla$. The failure of the horizontal space $H_\nabla$ to be involutive is measured by the curvature of the connection, and the connection itself is, roughly said, nothing else than the Lie bracket of horizontal and vertical vector fields, since it can be seen as a projection of the Bott connection \[\nabla^{H_\nabla}: \Gamma(H_\nabla)\times\Gamma(TE/H_\nabla)\to \Gamma(TE/H_\nabla)\] in a sense that we will explain.
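Coming back to the obstruction mentioned above: in any Courant algebroid, the axioms recalled in the background section combine to the standard anomalous Leibniz rule in the first argument, \[\llbracket \varphi k, e\rrbracket=\varphi\llbracket k, e\rrbracket-(\rho(e)\varphi)\,k+\langle k, e\rangle\,\mathcal D\varphi \qquad \text{for } \varphi\in C^\infty(M),\; k,e\in\Gamma(\mathsf E),\] where $\mathcal D=\rho^*\circ\dr$. The three correction terms are precisely the pairing, anchor and de Rham contributions by which the maps above fail to be $C^\infty(M)$-linear in their first argument.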
The first main result of this paper is the one-to-one correspondence in the same spirit between splittings of the standard Courant algebroid over $E$ as \[TE\oplus T^*E\xrightarrow{\sim} (T^{q_E}E\oplus(T^{q_E}E)^\circ)\oplus L_\Delta\] and $TM\oplus E^*$-Dorfman connections $\Delta$ on $E\oplus T^*M$. The failure of $L$ to be isotropic (and so Lagrangian) relative to the canonical pairing on $TE\oplus T^*E$ is equivalent to the failure of the dual of the Dorfman connection (in the sense of connections) to be antisymmetric, and the failure of $L$ to be closed under the Courant algebroid bracket is measured by the curvature of the Dorfman connection. The Dorfman connection is the same as the Courant--Dorfman bracket on horizontal and vertical sections. We then characterize double vector subbundles of $TE\oplus T^*E$ over $E$ and a subbundle $U\subseteq TM\oplus E^*$. Such double vector bundles can be described by triples $U$, $K$ and $\Delta$, where $\Delta$ is a Dorfman connection and $K$ a subbundle of $E\oplus T^*M$. For instance, we study Dirac structures on $E$ that define double vector subbundles of the Pontryagin bundle. We prove that maximal isotropy and integrability of the Dirac structure depend only on (simple) properties of the corresponding triple. \bigskip If the vector bundle $E=:A$ has a Lie algebroid structure $(q_A:A\to M, \rho, [\cdot\,,\cdot])$, then the Pontryagin bundle $TA\oplus T^*A$ has a naturally induced Lie algebroid structure over $TM\oplus A^*$. Given a $TM\oplus A^*$-Dorfman connection $\Delta$ on $A\oplus T^*M$, we compute the representation up to homotopy that corresponds to the splitting $TA\oplus T^*A\xrightarrow{\sim} (T^{q_A}A\oplus (T^{q_A}A)^\circ)\oplus L_\Delta$ and describes the VB-algebroid $TA\oplus T^*A\to TM\oplus A^*$ \cite{GrMe10a}. This representation up to homotopy is in general not the product of the two representations up to homotopy describing $TA\to TM$ and $T^*A\to A^*$.
Knowing this, one can ask when a Dirac structure on $A$, that is a double vector subbundle of $TA\oplus T^*A$, is at the same time a Lie subalgebroid of $TA\oplus T^*A\to TM\oplus A^*$ over its base $U\subseteq TM\oplus A^*$. In that case, the Dirac structure has the induced structure of a double Lie algebroid, and is called an \textbf{$\mathcal{LA}$-Dirac structure} on $A$. A known example of an $\mathcal{LA}$-Dirac structure on a Lie algebroid $A$ is the graph of $\pi_A^\sharp:T^*A\to TA$, where $\pi_A$ is the linear Poisson bivector field defined on $A$ by a Lie algebroid structure on $A^*$ such that $(A,A^*)$ is a Lie bialgebroid \cite{MaXu00}. In that case, we know that the $\mathcal{LA}$-Dirac structure is completely encoded in the Lie bialgebroid, which is itself equivalent to a Courant algebroid with two transverse Dirac structures \cite{LiWeXu97}. The second standard example is that of the graph of a linear presymplectic form $\sigma^*\omega_{\rm can}\in \Omega^2(A)$, where $\sigma:A\to T^*M$ is an IM-$2$-form \cite{BuCrWeZh04, BuCaOr09}. Here also, the $\mathcal{LA}$-Dirac structure is equivalent to the IM-$2$-form, and any $\mathcal{LA}$-Dirac structure that is the graph of a presymplectic form on $A$ arises in this manner. The final, simplest, example is $F_A\oplus F_A^\circ$, where $F_A\to A$ is an involutive subbundle that has at the same time a Lie algebroid structure over some subbundle $F_M\subseteq TM$. Here we know that the $\mathcal{LA}$-Dirac structure corresponds to an infinitesimal ideal system, i.e. a triple $(C,F_M,\nabla)$, where $C\subseteq A$ is a subalgebroid, $F_M\subseteq TM$ an involutive subbundle, and $\nabla:\Gamma(F_M)\times\Gamma(A/C)\to\Gamma(A/C)$ a flat connection satisfying some properties that allow one to make sense of a quotient Lie algebroid $(A/C)/\nabla\to M/F_M$ \cite{JoOr12}.
\medskip The second main result of this paper is a classification of $\mathcal{LA}$-Dirac structures on Lie algebroids via a certain class of Manin pairs \cite{BuIgSe09} on the base space of the Lie algebroid. This result unifies the three correspondences summarized above. \subsubsection*{Outline of the paper} Some background on Courant algebroids and Dirac structures, connections, and double vector bundles is collected in the first section. In the second section, Dorfman connections and dull algebroids are defined, and examples are listed. In the third section, splittings of the standard Courant algebroid $TE\oplus T^*E$ over a vector bundle $E$ are shown to be equivalent to a certain class of $TM\oplus E^*$-Dorfman connections on $E\oplus T^*M$. Linear Dirac structures on the vector bundle $E\to M$ are studied via Dorfman connections. In the fourth section, the geometric structures on the two sides of the standard $\mathcal{LA}$-Courant algebroid $TA\oplus T^*A$ over a Lie algebroid $A\to M$ are expressed via splittings of $TA\oplus T^*A$, and $\mathcal{LA}$-Dirac structures on $A$ are classified via Dorfman connections and some adequate vector bundles over the base $M$. Finally, it is shown that this data is equivalent to a Manin pair over $M$ that is in a sense compatible with the Lie algebroid $A$. \subsubsection*{Notation and conventions} Let $M$ be a smooth manifold. We will denote by $\mathfrak{X}(M)$ and $\Omega^1(M)$ the spaces of (local) smooth sections of the tangent and the cotangent bundle, respectively. For an arbitrary vector bundle $E\to M$, the space of (local) sections of $E$ will be written $\Gamma(E)$. We write $p_M:TM\to M$, $c_M:T^*M \to M$ and $\pi_M:TM\oplus T^*M\to M$ for the canonical projections, and $q_E:E\to M$ for general vector bundle projections. The flow of a vector field $X\in\mathfrak{X}(M)$ will be written $\phi^X_\cdot$, unless specified otherwise.
Let $f:M\to N$ be a smooth map between two smooth manifolds $M$ and $N$. Then two vector fields $X\in\mathfrak{X}(M)$ and $Y\in\mathfrak{X}(N)$ are said to be \textbf{$f$-related} if $Tf\circ X=Y\circ f$ on $\dom(X)\cap f\inv(\dom(Y))$. We then write $X\sim_f Y$. Given a section $\xi$ of $E^*$, we will always write $\ell_\xi:E\to \mathbb{R}$ for the linear function associated to it, i.e. the function defined by $e_m\mapsto \langle \xi(m), e_m\rangle$ for all $e_m\in E$. Given a section $e\in\Gamma(E)$, we write $e^\uparrow\in\mathfrak{X}(E)$ for the vertical vector field defined by $e$, i.e. the vector field with complete flow $\mathbb{R}\times E\to E$, $(t,e_m')\mapsto e_m'+te(m)$. Note that the vector fields $e^\uparrow$, for all $e\in\Gamma(E)$, span the subbundle $V:=T^{q_E}E\subseteq TE$, and $e^\uparrow$ is completely determined by $e^\uparrow(\ell_\xi)=q_E^*\langle \xi, e\rangle$ and $e^\uparrow(q_E^*\varphi)=0$ for all $\varphi\in C^\infty(M)$ and $\xi\in\Gamma(E^*)$. Note furthermore that $e^\uparrow(f_m)$ depends only on $e(m)$ and $f_m$, so the expression $(e(m))^\uparrow(f_m)$ makes sense. In the same manner, given a vector bundle endomorphism $\phi\in\Gamma(\Hom(E;E))$, we can define $\phi^\uparrow\in\mathfrak{X}(E)$ by $\phi^\uparrow(e_m)=(\phi(e_m))^\uparrow(e_m)$ for all $e_m\in E$. \subsubsection*{Acknowledgement} The author wishes to thank Thiago Drummond and Cristian Ortiz for many discussions that considerably influenced some parts of this paper, which grew out of a project with them. Many thanks also go to David Li-Bland, Rajan Mehta and Marco Zambon for interesting discussions and remarks, and especially to Kirill Mackenzie for interesting discussions, advice and encouragement. 
\section{Background definitions} We first recall some necessary background notions on Courant algebroids, on the double vector bundle structures on the tangent and cotangent spaces $TE$ and $T^*E$ of a vector bundle $E$, and on connections. \subsection{Courant algebroids and Dirac structures} A Courant algebroid \cite{LiWeXu97,Roytenberg99} over a manifold $M$ is a vector bundle $\mathsf E\to M$ equipped with a fibrewise nondegenerate symmetric bilinear form $\langle\cdot\,,\cdot\rangle$, a bilinear bracket $[\cdot\,,\cdot]$ on the smooth sections $\Gamma(\mathsf E)$, and a vector bundle map $\rho: \mathsf E\to TM$ over the identity called the anchor, which satisfy the following conditions \begin{enumerate} \item $[e_1, [e_2, e_3]] = [[e_1, e_2], e_3] + [e_2, [e_1, e_3]]$, \item $\rho(e_1 )\langle e_2, e_3\rangle = \langle[e_1, e_2], e_3\rangle + \langle e_2, [e_1 , e_3]\rangle$, \item $[e_1, e_2] +[e_2, e_1] =\mathcal D\langle e_1 , e_2\rangle$ \end{enumerate} for all $e_1, e_2, e_3\in\Gamma(\mathsf E)$. Here, we use the notation $\mathcal D := \rho^*\circ\dr : C^\infty(M)\to\Gamma(\mathsf E)$, using $\langle\cdot\,,\cdot\rangle$ to identify $\mathsf E$ with $\mathsf E^*$: \[\langle \mathcal D\varphi, e\rangle=\rho(e)(\varphi) \] for all $\varphi\in C^\infty(M)$ and $e\in\Gamma(\mathsf E)$. The following conditions \begin{enumerate} \setcounter{enumi}{3} \item $\rho([e_1, e_2]) = [\rho(e_1), \rho(e_2)]$, \item $[e_1, \varphi e_2] = \varphi [e_1 , e_2] + (\rho(e_1 )\varphi )e_2$ \end{enumerate} for all $e_1,e_2\in\Gamma(\mathsf E)$ and $\varphi\in C^\infty(M)$ are then also satisfied. They are often part of the definition in the literature, but it was already observed in \cite{Uchino02} that they follow from $(1)-(3)$.\footnote{Actually, they both follow immediately from $(2)$. 
To get $(5)$, replace $e_2$ by $\varphi e_2$ in $(2)$; to get $(4)$, replace $e_1$ by $[e_1,e_1']$: an easy computation then yields that $\rho[e_1,e_1']\langle e_2, e_3\rangle=[\rho(e_1), \rho(e_1')]\langle e_2, e_3\rangle$ for all $e_2,e_3\in\Gamma(\mathsf E)$.} \begin{example}\label{ex_pontryagin} The direct sum $TM\oplus T^*M$ endowed with the projection on $TM$ as anchor map, $\rho=\pr_{TM}$, the symmetric pairing $\langle\cdot\,,\cdot\rangle$ given by \begin{equation} \langle(v_m,\alpha_m), (w_m,\beta_m)\rangle=\alpha_m(w_m)+\beta_m(v_m) \label{sym_bracket} \end{equation} for all $m\in M$, $v_m,w_m\in T_mM$ and $\alpha_m,\beta_m\in T_m^*M$ and the \textbf{Courant-Dorfman bracket} given by \begin{align} [(X,\alpha), (Y,\beta)]&=\left([X,Y], \ldr{X}\beta-\ip{Y}\dr\alpha\right)\label{wrong_bracket} \end{align} for all $(X,\alpha), (Y, \beta)\in\Gamma(TM\oplus T^*M)$, yield the standard example of a Courant algebroid (often called the \emph{standard Courant algebroid over $M$}). The map $\mathcal D: C^\infty(M)\to \Gamma(TM\oplus T^*M)$ is given by $\mathcal D f=(0, \dr f)$. \end{example} A \textbf{Dirac structure} $\mathsf D\subseteq \mathsf E$ is a subbundle satisfying \begin{enumerate} \item $\mathsf D^\perp=\mathsf D$ relative to the pairing on $\mathsf E$, \item $[\Gamma(\mathsf D), \Gamma(\mathsf D)]\subseteq \Gamma(\mathsf D)$. \end{enumerate} The rank of the Dirac bundle $\mathsf D$ is then half the rank of $\mathsf E$, and the triple \linebreak $(\mathsf D\to M, \rho\an{\mathsf D}, [\cdot\,,\cdot]\an{\Gamma(\mathsf D)\times\Gamma(\mathsf D)})$ is a Lie algebroid over $M$. Dirac structures appear naturally in several contexts in geometry and geometric mechanics (see for instance \cite{Bursztyn11} for an introduction to the geometry and applications of Dirac structures). \subsection{The double vector bundles $TE$ and $T^*E$} Consider a vector bundle $q_E:E\to M$. Then the tangent space $TE$ of $E$ has two vector bundle structures. 
First, there is the usual vector bundle structure of the tangent space, $p_E:TE\to E$, and second the vector bundle structure $Tq_E:TE\to TM$, with the addition defined as follows. If $x_{e_m}$ and $x_{e_m'}$ are such that $Tq_E(x_{e_m})=Tq_E(x_{e_m'})=:x_m\in TM$, then there exist curves $c,c':(-\varepsilon, \varepsilon)\to E$ such that $\dot c(0)=x_{e_m}$, $\dot c'(0)=x_{e_m'}$ and $q_E\circ c=q_E\circ c'$. The sum $x_{e_m}+_{Tq_E}x_{e_m'}$ is then defined as \[x_{e_m}+_{Tq_E}x_{e_m'}=\left.\frac{d}{dt}\right\an{t=0}c(t)+_{q_E}c'(t)\in T_{e_m+e_m'}E. \] We get a double vector bundle \begin{align*} \begin{xy} \xymatrix{ TE\ar[r]^{p_E}\ar[d]_{Tq_E} &E\ar[d]^{q_E}\\ TM\ar[r]_{p_M}&M }\end{xy}, \end{align*} that is, the structure maps of each vector bundle structure are vector bundle morphisms relative to the other structure \cite{Mackenzie05}. \bigskip Dualizing $TE$ over $E$, we get the double vector bundle \begin{align*} \begin{xy} \xymatrix{ T^*E\ar[r]^{c_E}\ar[d]_{r_E} &E\ar[d]^{q_E}\\ E^*\ar[r]_{q_{E^*}}&M }\end{xy}. \end{align*} The map $r_E$ is given as follows. For $\alpha_{e_m}\in T^*_{e_m}E$, the element $r_E(\alpha_{e_m})\in E^*_m$ is defined by \[\langle r_E(\alpha_{e_m}), f_m\rangle=\langle \alpha_{e_m}, \left.\frac{d}{dt}\right\an{t=0}e_m+tf_m\rangle \] for all $f_m\in E_m$. The addition in $T^*E\to E^*$ is defined as follows. If $\alpha_{e_m}$ and $\beta_{e_m'}$ are such that $r_E(\alpha_{e_m})=r_E(\beta_{e_m'})=\xi_m\in E^*_m$, then the sum $\alpha_{e_m}+_{r_E}\beta_{e_m'}\in T_{e_m+e_m'}^*E$ is given by \[\langle \alpha_{e_m}+_{r_E}\beta_{e_m'}, x_{e_m}+_{Tq_E}x_{e_m'}\rangle =\langle \alpha_{e_m}, x_{e_m}\rangle+\langle\beta_{e_m'}, x_{e_m'}\rangle. \] Note that $T^*_{e_m}E$ is generated by $\dr_{e_m}\ell_\xi$ and $\dr_{e_m}(q_E^*\varphi)$ for $\xi\in\Gamma(E^*)$ and $\varphi\in C^\infty(M)$. We have $r_E(\dr_{e_m}\ell_\xi)=\xi(m)$ and $r_E(\dr_{e_m}(q_E^*\varphi))=0^{E^*}_m$. The sum $\dr_{e_m}\ell_\xi+_{r_E}\dr_{e_m'}\ell_\xi$ equals $\dr_{e_m+e_m'}\ell_\xi$. 
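As a check of the definitions, the equality $r_E(\dr_{e_m}\ell_\xi)=\xi(m)$ stated above follows by a direct computation: for all $f_m\in E_m$, \[\langle r_E(\dr_{e_m}\ell_\xi), f_m\rangle=\dr_{e_m}\ell_\xi\left(\left.\frac{d}{dt}\right\an{t=0}e_m+tf_m\right) =\left.\frac{d}{dt}\right\an{t=0}\langle \xi(m), e_m+tf_m\rangle=\langle \xi(m), f_m\rangle. \]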
\subsection{Basic facts about connections} In this paper, connections will not only be defined on Lie algebroids, but more generally on \emph{dull algebroids}. We make the following definition. \begin{definition} A \textbf{dull algebroid} is a vector bundle $Q\to M$ endowed with an \emph{anchor}, i.e. a vector bundle morphism $\rho_Q:Q\to TM$ over the identity on $M$, and a bracket $[\cdot\,,\cdot]_Q$ on $\Gamma(Q)$ with \begin{equation}\label{anchor_preserves_bracket} \rho_Q[q,q']_Q=[\rho_Q(q),\rho_Q(q')] \end{equation} for all $q, q'\in\Gamma(Q)$, and satisfying the Leibniz identity \begin{equation*} [\varphi_1 q_1, \varphi_2 q_2]_Q=\varphi_1\varphi_2[q_1, q_2]_Q+\varphi_1\rho_Q(q_1)(\varphi_2)q_2-\varphi_2\rho_Q(q_2)(\varphi_1)q_1 \end{equation*} for all $\varphi_1,\varphi_2\in C^\infty(M)$, $q_1, q_2\in\Gamma(Q)$. \end{definition} In other words, a dull algebroid is a \textbf{Lie algebroid} as soon as its bracket is in addition skew-symmetric and satisfies the Jacobi identity. \medskip Let $(Q\to M,\rho_Q, [\cdot,\cdot]_Q)$ be a dull algebroid and $B\to M$ a vector bundle. A $Q$-connection on $B$ is an $\mathbb{R}$-bilinear map \[\nabla:\Gamma(Q)\times\Gamma(B)\to\Gamma(B), \] with the usual properties, i.e. $\nabla_{\varphi q}b=\varphi\nabla_qb$ and $\nabla_q(\varphi b)=\varphi\nabla_qb+\rho_Q(q)(\varphi)b$ for all $\varphi\in C^\infty(M)$, $q\in\Gamma(Q)$ and $b\in\Gamma(B)$. By the properties of a dull algebroid, one can still make sense of the curvature $R_\nabla$ of the connection, $R_\nabla(q,q')b=\nabla_q\nabla_{q'}b-\nabla_{q'}\nabla_qb-\nabla_{[q,q']_Q}b$, which is an element of $\Gamma(Q^*\otimes Q^*\otimes B^*\otimes B)$. The dual connection $\nabla^*:\Gamma(Q)\times\Gamma(B^*)\to\Gamma(B^*)$ is defined by \[\langle \nabla^*_q\xi, b\rangle=\rho_Q(q)\langle \xi, b\rangle- \langle\xi, \nabla_qb\rangle \] for all $q\in\Gamma(Q)$, $b\in\Gamma(B)$ and $\xi\in\Gamma(B^*)$. \subsubsection{The Bott connection associated to a subbundle $F\subseteq TM$} Recall the definition of the Bott connection associated to an involutive subbundle of $TM$. Let $F\subseteq TM$ be a subbundle; then the Lie bracket on vector fields on $M$ induces a map \[\tilde\nabla^F:\Gamma(F)\times\Gamma(TM)\to\Gamma(TM/F), \] defined by \[\tilde\nabla^F_XY=\overline{[X,Y]}. 
\] The subbundle $F$ is involutive if and only if \[\tilde\nabla^F_XX'=0\quad \text{ for all }\quad X,X'\in\Gamma(F). \] In that case, the map $\tilde \nabla^F$ quotients to a flat connection \[\nabla^F:\Gamma(F)\times\Gamma(TM/F)\to\Gamma(TM/F),\] the Bott connection. \subsubsection{The basic connections associated to a connection on a dull algebroid}\label{basic_connections} Consider here a dull algebroid $(Q, \rho_Q, [\cdot\,,\cdot])$ together with a connection $\nabla:\mathfrak{X}(M)\times\Gamma(Q)\to\Gamma(Q)$. Then there are $Q$-connections on $Q$ and $TM$, called the \textbf{basic connections} and defined as follows \cite{CrFe05,ArCr11a}. \[\nabla^{\rm bas}=\nabla^{{\rm bas},Q}:\Gamma(Q)\times\Gamma(Q)\to\Gamma(Q),\] \[\nabla^{\rm bas}_qq'=[q,q']+\nabla_{\rho_Q(q')}q\] and \[\nabla^{\rm bas}=\nabla^{{\rm bas},TM}:\Gamma(Q)\times\mathfrak{X}(M)\to\mathfrak{X}(M),\] \[\nabla^{\rm bas}_qX=[\rho_Q(q),X]+\rho_Q(\nabla_{X}q).\] The \textbf{basic curvature} is the map \[R_\nabla^{\rm bas}:\Gamma(Q)\times\Gamma(Q)\times\mathfrak{X}(M)\to\Gamma(Q), \] \[R_\nabla^{\rm bas}(q,q')(X)=-\nabla_X[q,q']+[\nabla_Xq,q']+[q,\nabla_Xq']+\nabla_{\nabla_{q'}^{\rm bas}X}q -\nabla_{\nabla^{\rm bas}_qX}q'. \] The basic curvature is tensorial and we have the identities \[ \nabla^{{\rm bas},TM}\circ \rho_Q=\rho_Q\circ \nabla^{{\rm bas},Q}, \qquad \rho_Q\circ R_\nabla^{\rm bas}=R_{\nabla^{{\rm bas},TM}}\quad \text{ and }\quad R_\nabla^{\rm bas}\circ\rho_Q=R_{\nabla^{{\rm bas},Q}}. \] \subsubsection{Connections on a vector bundle $E$, splittings of $TE$ and the Lie bracket on $\mathfrak{X}(E)$.}\label{connections_and_splittings_of_TE} We recall here the relation between a connection on a vector bundle $E$ and the Lie bracket of vector fields on $E$. 
\medskip Let $q_E:E\to M$ be a vector bundle and $\nabla:\mathfrak{X}(M)\times\Gamma(E)\to\Gamma(E)$ a connection. Then for each $e_m\in E$ and $v_m\in TM$, we can define the vector \[ \widetilde{v_m,e_m}=T_mev_m-\left.\frac{d}{dt}\right\an{t=0}e_m+t\nabla_{v_m}e\in T_{e_m}E \] for any section $e\in\Gamma(E)$ such that $e(m)=e_m$. We have \[\widetilde{v_m,e_m}(\ell_\xi)=v_m\langle \xi, e\rangle-\langle \xi(m), \nabla_{v_m}e\rangle=\ell_{\nabla_{v_m}^*\xi}(e_m)\] and \[\widetilde{v_m,e_m}(q_E^*\varphi)=v_m(\varphi) \] for all $\varphi\in C^\infty(M)$ and $\xi\in\Gamma(E^*)$. The set of all vectors in $TE$ defined in this manner is a subbundle $H_\nabla$ of $p_E:TE\to E$ that is in direct sum with the vertical space $V:=T^{q_E}E=\{v_{e_m}\in TE\mid T_{e_m}q_Ev_{e_m}=0\}$: \[TE\cong V\oplus H_\nabla\to E.\] For each section $X\in\mathfrak{X}(M)$, the vector field $\tilde X\in\Gamma(H_\nabla)\subseteq\mathfrak{X}(E)$ is defined by $\tilde X(e_m)=\widetilde{X(m), e_m}$. For all functions $\varphi\in C^\infty(M)$ and sections $e\in\Gamma(E)$, $\xi\in\Gamma(E^*)$, we have \begin{equation}\label{ableitungen} \tilde X(\ell_\xi)=\ell_{\nabla_{X}^*\xi}, \qquad \tilde X(q_E^*\varphi)=q_E^*(X(\varphi)), \qquad e^\uparrow(\ell_\xi)=q_E^*\langle\xi, e\rangle, \qquad e^\uparrow(q_E^*\varphi)=0. \end{equation} Conversely, consider a splitting $TE\cong V\oplus H$ of $TE\to E$. Then, since $H\cong TE/V$ is isomorphic to the pullback $q_E^!TM\to E$, we find for each vector field $X\in\mathfrak{X}(M)$ a unique section $\tilde X$ of $H$ such that $\tilde X\sim_{q_E}X$. 
Using this uniqueness, one can show easily that the pair $(X,\tilde X)$ defines a vector bundle morphism \begin{align*} \begin{xy} \xymatrix{ E\ar[d]_{q_E}\ar[r]^{\tilde X}&TE\ar[d]^{Tq_E}\\ M\ar[r]_{X}&TM } \end{xy} \end{align*} and that we have the equality \[\widetilde{\varphi\cdot X}=q_E^*\varphi\cdot \tilde X \] for all $\varphi\in C^\infty(M)$ and $X\in\mathfrak{X}(M)$. Using this and $\ell_{\varphi\cdot\xi}=q_E^*\varphi\cdot \ell_\xi$ for all $\varphi\in C^\infty(M)$ and $\xi\in\Gamma(E^*)$, one can then \emph{define} a connection $\nabla^H:\mathfrak{X}(M)\times\Gamma(E)\to \Gamma(E)$ by setting \[\tilde X(\ell_\xi)=:\ell_{{\nabla^H}^*_X\xi}\] for all $X\in\mathfrak{X}(M)$, $\xi\in\Gamma(E^*)$. \medskip This shows the correspondence of the two definitions of a connection; the first as the map \[\nabla:\mathfrak{X}(M)\times\Gamma(E)\to\Gamma(E),\] the second as a splitting \[TE\cong V\oplus H\to E.\] Given $\nabla$ or $H_\nabla$, it is easy to see using the equalities in \eqref{ableitungen} that \begin{align*} \left[\tilde X, \tilde Y\right]&=\widetilde{[X,Y]}-R_\nabla(X,Y)^\uparrow,\\ \left[\tilde X, e^\uparrow\right]&=(\nabla_Xe)^\uparrow,\\ \left[e^\uparrow,f^\uparrow\right]&=0 \end{align*} for all $X,Y\in\mathfrak{X}(M)$ and $e,f\in\Gamma(E)$. That is, the Lie bracket of vector fields on $E$ can be described using the connection. The connection itself can also be seen as a suitable quotient of the Bott connection $\nabla^{H_\nabla}$: \[\nabla^{H_\nabla}_{\tilde X}\overline{e^\uparrow}=\overline{(\nabla_Xe)^\uparrow} \] for all $e\in\Gamma(E)$ and $X\in\mathfrak{X}(M)$, i.e. the Bott connection associated to $H_\nabla$ restricts well to linear (horizontal) and vertical sections. 
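For instance, the second of these bracket relations can be verified directly from the equalities in \eqref{ableitungen}: both sides vanish on functions of the form $q_E^*\varphi$, and on linear functions we compute \[ \left[\tilde X, e^\uparrow\right](\ell_\xi)=\tilde X\left(q_E^*\langle \xi, e\rangle\right)-e^\uparrow\left(\ell_{\nabla_X^*\xi}\right) =q_E^*\left(X\langle \xi, e\rangle\right)-q_E^*\langle \nabla_X^*\xi, e\rangle=q_E^*\langle \xi, \nabla_Xe\rangle=(\nabla_Xe)^\uparrow(\ell_\xi) \] for all $\xi\in\Gamma(E^*)$. Since a vector field on $E$ is completely determined by its action on the functions $\ell_\xi$ and $q_E^*\varphi$, the relation follows.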
\medskip \begin{remark} To any derivation $D:\Gamma(E)\to \Gamma(E)$ over a vector field $X\in\mathfrak{X}(M)$, we can associate a vector field $\widehat{D}\in\mathfrak{X}(E)$ as follows: \[\widehat{D}(q_E^*\varphi)=q_E^*(X(\varphi))\] for all $\varphi\in C^\infty(M)$, and \[\widehat{D}(\ell_\xi)=\ell_{D^*\xi} \] for all $\xi\in\Gamma(E^*)$. Here, $D^*:\Gamma(E^*)\to\Gamma(E^*)$ is the dual of the derivation $D$, i.e. \[\langle D^*(\xi), e\rangle=X\langle \xi,e\rangle-\langle \xi, D(e)\rangle \] for all $e\in\Gamma(E)$ and $\xi\in\Gamma(E^*)$. We will use this notation in the paper. Given a connection $\nabla:\mathfrak{X}(M)\times\Gamma(E)\to\Gamma(E)$, the vector field $\tilde X$ defined as above is just $\widehat{\nabla_X}$. \end{remark} \section{Dorfman connections: definition and examples} \begin{definition} Let $(Q\to M,\rho_Q,[\cdot\,,\cdot]_Q)$ be a dull algebroid. Let $B\to M$ be a vector bundle with a fibrewise pairing $\langle\cdot\,,\cdot\rangle:Q\times_M B\to \mathbb{R}$ and a map $\dr_B: C^\infty(M)\to \Gamma(B)$ such that \begin{equation}\label{compatibility_anchor_pairing} \langle q, \dr_B\varphi\rangle=\rho_Q(q)(\varphi) \end{equation} for all $q\in\Gamma(Q)$ and $\varphi\in C^\infty(M)$. Then $(B,\dr_B, \langle\cdot\,,\cdot\rangle)$ will be called a \textbf{pre-dual} of $Q$, and $Q$ and $B$ are said to be \textbf{paired by $\langle\cdot\,,\cdot\rangle$}. \end{definition} \begin{remark} Note that if the pairing is nondegenerate, then $(B\to M,\dr_B, \langle\cdot\,,\cdot\rangle)$ is the \textbf{dual} of $(Q\to M,\rho_Q,[\cdot\,,\cdot]_Q)$ and $\dr_{Q^*}:C^\infty(M)\to\Gamma(Q^*)$ is \emph{defined} by \eqref{compatibility_anchor_pairing}. We have then $\dr_{Q^*}\varphi=\rho_Q^*\dr\varphi$, i.e. $\langle \dr_{Q^*}\varphi, q\rangle =\rho_Q(q)(\varphi)$ for all $q\in\Gamma(Q)$ and $\varphi\in C^\infty(M)$. 
\end{remark} The main definition of this paper is the following. \begin{definition}\label{the_def} Let $(Q\to M,\rho_Q,[\cdot\,,\cdot]_Q)$ be a dull algebroid and $(B\to M, \dr_B, \langle\cdot\,,\cdot\rangle)$ a pre-dual of $Q$. \begin{enumerate} \item A \textbf{Dorfman ($Q$-)connection on $B$} is an $\mathbb{R}$-bilinear map \begin{equation*} \Delta:\Gamma(Q)\times\Gamma(B)\to\Gamma(B) \end{equation*} such that \begin{enumerate} \item $\Delta_{\varphi q}b=\varphi\Delta_qb+\langle q, b\rangle \cdot \dr_B \varphi$, \item $\Delta_q(\varphi b)=\varphi\Delta_qb+\rho_Q(q)(\varphi)b$, \item $\rho_Q(q)\langle q', b\rangle=\langle[q,q']_Q, b\rangle+\langle q', \Delta_qb\rangle$ \end{enumerate} for all $\varphi\in C^\infty(M)$, $q,q'\in\Gamma(Q)$, $b\in\Gamma(B)$. \item The curvature of $\Delta$ is the map \[R_\Delta:\Gamma(Q)\times\Gamma(Q)\to\Gamma(B^*\otimes B),\] defined on $q,q'\in\Gamma(Q)$ by $R_\Delta(q,q'):=\Delta_q\Delta_{q'}-\Delta_{q'}\Delta_q-\Delta_{[q,q']_Q}$. \end{enumerate} \end{definition} The failure of a Dorfman connection to be a connection is hence measured by the map $\dr_B$ and the pairing of $Q$ with $B$. Before we go on with examples, we have to check that $R_\Delta(q,q')$ is an element of $\Gamma(B^*\otimes B)$ for all $q,q'\in\Gamma(Q)$. But this is a straightforward computation, and we omit the proof of the following proposition. \begin{proposition}\label{curvature_tensor} Let $(Q\to M,\rho_Q,[\cdot\,,\cdot]_Q)$ be a dull algebroid and $(B,\dr_B,\langle\cdot\,,\cdot\rangle)$ a pre-dual of $Q$. Let $\Delta$ be a Dorfman $Q$-connection on $B$. Then: \begin{enumerate} \item For all $\varphi\in C^\infty(M)$, $q,q'\in \Gamma(Q)$ and $b\in \Gamma(B)$, we have \[R_\Delta(q,q')(\varphi\cdot b)=\varphi\cdot R_\Delta(q,q')(b). \] \item For all $q_1,q_2,q_3\in\Gamma(Q)$ and $b\in\Gamma(B)$, we have \[\langle R_\Delta(q_1,q_2)(b),q_3\rangle=\langle[[q_1,q_2]_Q,q_3]_Q+[q_2,[q_1,q_3]_Q]_Q-[q_1,[q_2,q_3]_Q]_Q, b\rangle. 
\] In particular, if $Q$ is a Lie algebroid, the Dorfman connection is always flat in this sense. \end{enumerate} \end{proposition} \begin{remark} Note that this does not mean that the curvature of a Dorfman connection vanishes everywhere if $Q$ is a Lie algebroid, since the pairing of $Q$ and $B$ can be degenerate. We will see a trivial example for this phenomenon in Example \ref{trivial}. \end{remark} \begin{example}\label{trivial} Let $(Q\to M, \rho_Q, [\cdot\,,\cdot]_Q)$ be a dull algebroid and $B\to M$ a vector bundle. Take the pairing $\langle\cdot\,,\cdot\rangle:Q\times_M B\to \mathbb{R}$ and the map $\dr_B:C^\infty(M)\to \Gamma(B)$ to be trivial. Then any $Q$-connection on $B$ is also a Dorfman connection. \end{example} \begin{example} The easiest non-trivial example of a Dorfman connection is the map \[\ldr{}:\Gamma(Q)\times\Gamma(Q^*)\to\Gamma(Q^*), \] \[\langle\ldr{q}\xi,q'\rangle=\rho_Q(q)\langle \xi, q'\rangle-\langle\xi, [q,q']_Q\rangle, \] for a dull algebroid $(Q\to M, \rho_Q,[\cdot\,,\cdot]_Q)$ and its dual $(Q^*,\dr_{Q^*})$, i.e. with the canonical pairing $Q\times_MQ^*\to\mathbb{R}$ and $\dr_{Q^*}=\rho_Q^*\dr:C^\infty(M)\to\Gamma(Q^*)$. The third property of a Dorfman connection is immediate by definition of $\ldr{}$ and the first two properties are easily verified. The curvature vanishes if and only if $(Q, \rho_Q, [\cdot\,,\cdot]_Q)$ is a Lie algebroid. \end{example} \bigskip Let $(\mathsf E\to M, \rho:\mathsf E\to TM, \langle\cdot\,,\cdot\rangle, [\cdot\,,\cdot])$ be a Courant algebroid. 
If $K$ is a subalgebroid of $\mathsf E$, the (in general singular) distribution $S:=\rho(K)\subseteq TM$ is algebraically involutive and we can define the ``singular'' Bott connection \[\nabla^S:\Gamma(S)\times\frac{\mathfrak{X}(M)}{\Gamma(S)}\to\frac{\mathfrak{X}(M)}{\Gamma(S)} \] by \[\nabla^S_{s}\bar X=\overline{[s,X]}\] for all $X\in\mathfrak{X}(M)$ and $s\in\Gamma(S)$. The anchor $\rho:\mathsf E\to TM$ induces a map $\bar\rho:\Gamma(\mathsf E/K)\to \mathfrak{X}(M)/\Gamma(S)$, $\bar\rho(\bar e)=\rho(e)+\Gamma(S)$. \begin{proposition}\label{Bott_Dorfman} Let $\mathsf E\to M$ be a Courant algebroid and $K\subseteq \mathsf E$ an isotropic subalgebroid. Then the map \begin{align*} \Delta:\Gamma(K)\times\Gamma(\mathsf E/K)&\to\Gamma(\mathsf E/K)\\ \Delta_k\bar e&=\overline{[k,e]} \end{align*} is a Dorfman connection. The dull algebroid structure on $K$ is its induced Lie algebroid structure, the map $\dr_{\mathsf E/K}$ is just $\mathcal D+\Gamma(K)$ and the pairing $\langle\cdot\,,\cdot\rangle:K\times_M (\mathsf E/K)\to \mathbb{R}$ is the natural pairing induced by the pairing on $\mathsf E$. We have \[\bar\rho(\Delta_k\bar e)=\nabla^S_{\rho(k)}\bar\rho(\bar e) \] for all $k\in\Gamma(K)$ and $\bar e\in\Gamma(\mathsf E/K)$. \end{proposition} \begin{remark} \begin{enumerate} \item Because of the analogy of the Dorfman connection in the last proposition with the Bott connection defined by involutive subbundles of $TM$, we name this Dorfman connection the \emph{Bott--Dorfman connection associated to $K$}. \item Note that if $K$ is a Dirac structure $\mathsf D$ in $\mathsf E$, then $\mathsf E/\mathsf D\simeq \mathsf D^*$ and the Dorfman connection is just the Lie derivative of the Lie algebroid $\mathsf D$ acting on $\Gamma(\mathsf D^*)$. 
\end{enumerate} \end{remark} \subsection{Example: reduction of Courant algebroids} Let $\mathsf E\to M$ be a Courant algebroid and $K\subseteq \mathsf E$ an isotropic subalgebroid. Choose $k,k'\in\Gamma(K)$ and $e\in\Gamma(K^\perp)$. Then the equality \[\langle [k,e], k'\rangle=-\langle e, [k,k']\rangle+\rho(k)\langle e, k'\rangle=0 \] shows that $[k,e]\in\Gamma(K^\perp)$. Thus, the Dorfman connection in Proposition \ref{Bott_Dorfman} restricts to a flat connection \[\nabla:\Gamma(K)\times\Gamma(K^\perp/K)\to \Gamma(K^\perp/K). \] Assume that $\rho(K)\subseteq TM$ is simple, i.e. it has constant rank (and is hence, being involutive, Frobenius integrable) and its space of leaves is a smooth manifold such that the canonical projection is a smooth surjective submersion. The $\nabla$-parallel sections of $K^\perp/K$ are the sections that project to the quotient $(K^\perp/K)/\nabla\to M/\rho(K)$, and the properties of $\nabla$: \begin{enumerate} \item $\nabla$ is flat, \item $\rho$ intertwines $\nabla$ with the Bott connection $\nabla^{\rho(K)}$, \end{enumerate} can be used to show as in \cite{Zambon08} that, under suitable regularity conditions, the quotient has the structure of a Courant algebroid over $M/\rho(K)$. (The $\nabla$-parallel sections of $K^\perp/K$ are exactly the sections that are called basic in \cite{Zambon08}.) \begin{example} Consider the standard Courant algebroid $TM\oplus T^*M\to M$ over a smooth manifold $M$ and an involutive subbundle $F\subseteq TM$. The Dorfman connection associated to $F\oplus \{0\}\subseteq TM\oplus T^*M$ is given by \[\Delta_X\overline{(Y,\beta)}=\overline{(\ldr{X}Y,\ldr{X}\beta)} \] for all $X\in\Gamma(F)$ and $(Y,\beta)\in\mathfrak{X}(M)\times\Omega^1(M)$. Hence, from the preceding considerations, we recover the fact that the sections of $TM\oplus T^*M$ that project to the reduced Courant algebroid $T(M/F)\oplus T^*(M/F)\to M/F$ are the sections $(Y,\beta)$ of $TM\oplus F^\circ$ that are invariant under $F$, i.e. 
such that $\ldr{X}(Y,\beta)\in\Gamma(F\oplus\{0\})$ for all $X\in\Gamma(F)$ (e.g. \cite{JoRa11b}). Note that $(TM\oplus F^\circ)/(F\oplus 0)\simeq (TM/F)\oplus F^\circ$ and $(TM/F)^*\simeq F^\circ$. The connection \[\nabla:\Gamma(F)\times \Gamma(TM/F\oplus F^\circ)\to \Gamma(TM/F\oplus F^\circ) \] is, modulo these identifications, just the product of the Bott connection $\nabla^F$ and its dual \[{\nabla^F}^*:\Gamma(F)\times \Gamma((TM/F)^*)\to \Gamma((TM/F)^*). \] \end{example} \subsection{Generalized complex structures and Dorfman connections} Let $V$ be a vector space. Consider a linear endomorphism $\mathcal J$ of $V\oplus V^*$ such that $\mathcal J^2=-\Id_{V\oplus V^*}$ and $\mathcal J$ is orthogonal with respect to the inner product $$(X+\xi, Y+\eta)_V =\xi(Y)+\eta(X), \quad \forall X,Y\in V, \; \xi, \eta \in V^* .$$ Such a linear map is called a \emph{linear generalized complex structure} by Hitchin \cite{Hitchin03}. The complexified vector space $(V\oplus V^*)\otimes \mathbb{C}$ decomposes as the direct sum $$(V\oplus V^* )\otimes \mathbb{C}=E_+\oplus E_-$$ of the eigenspaces of $\mathcal J$ corresponding to the eigenvalues $\pm \mathrm{i}$ respectively, i.e., $$E_{\pm}=\left\{(X+\xi)\mp \mathrm{i} \mathcal J(X+\xi) \mid X+\xi\in V\oplus V^* \right\}.$$ Both eigenspaces are maximal isotropic with respect to $(\cdot\,,\cdot)_V$ and they are complex conjugate to each other. The linear generalized complex structures are in 1-1 correspondence with the splittings $(V\oplus V^*)\otimes \mathbb{C}=E_+\oplus E_-$ with $E_{\pm}$ maximal isotropic and $E_-=\overline{E_+}$. \medskip Now, let $M$ be a manifold and $\mathcal J$ a bundle endomorphism of $TM\oplus T^*M$ such that $\mathcal J^2=-\Id_{TM\oplus T^*M}$, and $\mathcal J$ is orthogonal with respect to $\langle \cdot\,,\cdot\rangle_M$. Then $\mathcal J$ is a \emph{generalized almost complex structure}. 
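Two standard classes of examples are worth keeping in mind here; the sign conventions vary in the literature (cf. \cite{Gualtieri03}). A complex structure $J$ on $V$ and a linear symplectic form $\omega$ on $V$ (viewed as an isomorphism $\omega:V\to V^*$, $X\mapsto \omega(X,\cdot)$) define linear generalized complex structures \[\mathcal J_J(X+\xi)=-JX+J^*\xi \qquad \text{ and }\qquad \mathcal J_\omega(X+\xi)=-\omega^{-1}(\xi)+\omega(X); \] a direct computation shows that $\mathcal J_J^2=\mathcal J_\omega^2=-\Id_{V\oplus V^*}$ and that both maps are orthogonal with respect to the inner product above.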
In the associated eigenbundle decomposition $$T_\mathbb{C} M\oplus T_\mathbb{C}^*M =E_+\oplus E_- ,$$ if $\Gamma(E_+)$ is closed under the (complexified) Courant bracket, then $E_+$ is a (complex) Dirac structure on $M$ and one says that $\mathcal J$ is a \emph{generalized complex structure} \cite{Hitchin03, Gualtieri03}. In this case, $E_-$ must also be a Dirac structure since $E_-=\overline{E_+}$. Indeed, $(E_+, E_-)$ is a complex Lie bialgebroid in the sense of Mackenzie--Xu \cite{MaXu94}, in which $E_+$ and $E_-$ are complex conjugate to each other. \medskip Since $E_-=\overline{E_+}$ and $E_-\cap E_+=\{0\}$, we have a vector bundle isomorphism \[ \frac{T_{\mathbb C}M\oplus T_{\mathbb C}^*M}{E_-}\to E_+ \] that is given by\footnote{To avoid confusion, we write in this subsection $\hat d$ for the class of $d\in\Gamma(T_{\mathbb C}M\oplus T_{\mathbb C}^*M)$ in $(T_{\mathbb C}M\oplus T_{\mathbb C}^*M)/E_-$.} \[ \hat d\mapsto \frac{1}{2} (d-\mathrm{i}\mathcal J(d)). \] The Dorfman connection \[\Delta^{E_-}:\Gamma(E_-)\times\Gamma(E_+)\to\Gamma(E_+) \] is then simply given by \[\Delta^{E_-}_{d_-}d_+=\frac{1}{2}\left([d_-,d_+]-\mathrm{i}\mathcal J[d_-,d_+]\right). \] In the same manner, \[\Delta^{E_+}:\Gamma(E_+)\times\Gamma(E_-)\to\Gamma(E_-) \] is given by \[\Delta^{E_+}_{d_+}d_-=\frac{1}{2}\left([d_+,d_-]+\mathrm{i}\mathcal J[d_+,d_-]\right) \] for all $d_-\in\Gamma(E_-)$ and $d_+\in\Gamma(E_+)$. It would be interesting to study in more detail the properties of these two (flat!) Dorfman connections, maybe in the spirit of the results in \cite{LaStXu08}. \section{Splittings of $TE\oplus T^*E$} Consider a vector bundle $q_E:E\to M$. In this section, the vector bundle $TM\oplus E^*$ will always be anchored by the projection $\pr_{TM}:TM\oplus E^*\to TM$ and the dual $E\oplus T^*M$ will always be paired with $TM\oplus E^*$ by the canonical pairing. 
The map $\dr_{E\oplus T^*M}:C^\infty(M)\to \Gamma(E\oplus T^*M)$ will consequently always be given by \[\dr_{E\oplus T^*M}=\pr_{TM}^*\circ \dr, \] i.e. \[\dr_{E\oplus T^*M}\varphi =(0,\dr \varphi ) \] for all $\varphi \in C^\infty(M)$. A Dorfman connection $\Delta$ will here always be a $TM\oplus E^*$-Dorfman connection on $E\oplus T^*M$, with dual $\llbracket\cdot\,,\cdot\rb_\Delta$. Note that since the pairing is nondegenerate, the Dorfman connection is completely determined by its dual structure, the associated bracket $\llbracket\cdot\,,\cdot\rb_\Delta$, and vice versa. Hence, we can say here that a Dorfman connection is equivalent to a dull algebroid $(TM\oplus E^*, \pr_{TM},\llbracket\cdot\,,\cdot\rb_\Delta)$. It is easy to see, using Proposition \ref{curvature_tensor}, that the curvature $R_\Delta$ will always vanish on $(TM\oplus E^*)\otimes(TM\oplus E^*)\otimes(0\oplus T^*M)$. \bigskip Recall from \S \ref{connections_and_splittings_of_TE} that an ordinary connection \[\nabla:\mathfrak{X}(M)\times\Gamma(E)\to\Gamma(E) \] corresponds to a splitting \[TE=T^{q_E}E\oplus H_\nabla\to E. \] We show in this section that a Dorfman connection \[\Delta:\Gamma(TM\oplus E^*)\times\Gamma(E\oplus T^*M)\to \Gamma(E\oplus T^*M) \] is the same as a splitting \[TE\oplus T^*E=(V\oplus V^\circ)\oplus L_\Delta \] where, as before, $V:=T^{q_E}E$. The subbundle $L_\Delta$ is an almost Dirac structure on $E$ if and only if the bracket $\llbracket\cdot\,,\cdot\rb_\Delta$ dual to the Dorfman connection is skew-symmetric, and the failure of $\Gamma(L_\Delta)$ to be closed under the Dorfman bracket is measured by the curvature $R_\Delta$. 
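In other words, since the pairing is nondegenerate, the dull bracket is recovered from $\Delta$ via the third defining property of a Dorfman connection (Definition \ref{the_def}): \[\langle \llbracket (X,\xi), (Y,\eta)\rb_\Delta, (e,\theta)\rangle=X\langle (Y,\eta), (e,\theta)\rangle-\langle (Y,\eta), \Delta_{(X,\xi)}(e,\theta)\rangle \] for all $(X,\xi), (Y,\eta)\in\Gamma(TM\oplus E^*)$ and $(e,\theta)\in\Gamma(E\oplus T^*M)$.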
\medskip In the following, for any section $(f,\theta)$ of $E\oplus T^*M$, the section $(f,\theta)^\uparrow$ of $V\oplus V^\circ$ is the pair defined by \[(f,\theta)^\uparrow(e_m)=\left(\left.\frac{d}{dt}\right\an{t=0}e_m+tf(m), (T_{e_m}q_E)^*\theta(m)\right) \] for all $e_m\in E$. Note that the pairs $(f,\theta)^\uparrow(e_m)$, for all $(f,\theta)\in\Gamma(E\oplus T^*M)$, span by construction the fiber $(V\oplus V^\circ)(e_m)$. \subsection{The standard almost Dorfman connection associated to an ordinary connection on $E$.}\label{sec_standard_almost_dorfman} We start in this subsection with a simple, motivating example. \begin{definition}\label{standard_almost_dorfman} Let $E\to M$ be a vector bundle with a connection $\nabla:\mathfrak{X}(M)\times\Gamma(E)\to\Gamma(E)$. Then the standard Dorfman connection associated to $\nabla$ is the map \[\Delta:\Gamma(TM\oplus E^*)\times\Gamma(E\oplus T^*M)\to\Gamma(E\oplus T^*M),\] \begin{equation*} \Delta_{(X,\xi)}(e,\theta)=(\nabla_Xe,\ldr{X}\theta+\langle \nabla^*_\cdot \xi, e\rangle). \end{equation*} The dual bracket is in this case defined by \[\llbracket(X,\xi), (Y,\eta)\rb_\Delta=([X,Y], \nabla_X^*\eta-\nabla_Y^*\xi) \] for all $(X,\xi), (Y,\eta)\in \Gamma(TM\oplus E^*)$. \end{definition} \begin{proposition} Let $E\to M$ be a vector bundle endowed with a connection $\nabla$. \begin{enumerate} \item The curvature of the standard Dorfman connection $\Delta$ associated to $\nabla$ is given by \[R_\Delta((X,\xi),(Y,\eta))=(R_\nabla(X,Y), R_{\nabla^*}(\cdot,X)(\eta)-R_{\nabla^*}(\cdot,Y)(\xi)).\] \item $(TM\oplus E^*, \operatorname{pr}_{TM}, \llbracket\cdot\,,\cdot\rb_\Delta)$ is a Lie algebroid if and only if $\nabla$ is flat. \end{enumerate} \end{proposition} \begin{proof} The first claim is proved by a straightforward computation. 
The second statement then follows, using Proposition \ref{curvature_tensor} and the fact that the pairing is here nondegenerate. \end{proof} \begin{proposition} Let $E\to M$ be a vector bundle endowed with a connection $\nabla$, and let $\Delta$ be the standard Dorfman connection associated to $\nabla$. For any section $(X,\xi)\in\Gamma(TM\oplus E^*)$, set $\widetilde{(X,\xi)}\in\mathfrak{X}(E)\times\Omega^1(E)$, \begin{align*} \widetilde{(X,\xi)}(e_m)&=\left(T_meX(m), \dr_{e_m}\ell_\xi\right)-\left(\Delta_{(X,\xi)}(e,0)\right)^\uparrow(e_m)\\ &=\left(T_meX(m), \dr_{e_m}\ell_\xi\right)-\left(\left.\frac{d}{dt}\right\an{t=0}e_m+t\nabla_Xe,(T_{e_m}q_E)^*\langle \nabla^*_\cdot \xi, e\rangle\right). \end{align*} The subbundle $L_\Delta$ spanned by these sections is equal to $H_\nabla\oplus H_\nabla^\circ$. Hence, the standard Dorfman connection associated to a connection $\nabla$ is the same as a splitting \[TE\oplus T^*E\cong(V\oplus V^\circ)\oplus(H_\nabla\oplus H_\nabla^\circ), \] the sum of a Dirac structure and an almost Dirac structure. \end{proposition} \begin{proof} We just have to check that the space spanned by the sections $\widetilde{(X,\xi)}$ is equal to $H_\nabla\oplus H_\nabla^\circ$. But since $\Delta_{(X,0)}(e,0)=(\nabla_Xe,0)$ and $\Delta_{(0,\xi)}(e,0)=(0,\langle\nabla_\cdot^*\xi,e\rangle)$, the subbundle that we are considering is spanned by the sections $\widetilde{(X,0)}=(\widetilde X, 0)$ and $\widetilde{(0,\xi)}=(0,\widetilde\xi)$, i.e. it is the direct sum of a subbundle of $TE$ and a subbundle of $T^*E$. The tangent part is obviously equal to $H_\nabla$ and the cotangent part is easily seen to be $H_\nabla^\circ$. \end{proof} \subsection{Almost Dorfman connection associated to a splitting $TE\oplus T^*E\cong(V\oplus V^\circ)\oplus L$} Consider now a splitting \[TE\oplus T^*E\cong(V\oplus V^\circ)\oplus L\to E, \] where, as before, $V:=T^{q_E}E$. 
Recall that the vector bundle morphism \begin{equation*} \begin{xy} \xymatrix{ \Phi_E:=({q_E}_*, r_E):&TE\oplus T^*E\ar[r]\ar[d]& TM\oplus E^*\ar[d]\\ & E\ar[r]_{q_E}&M } \end{xy} \end{equation*} is a fibration of vector bundles over the projection $q_E:E\to M$. That is, \begin{equation*} \begin{xy} \xymatrix{ TE\oplus T^*E\ar[d]\ar[rr]^{q_E^!\Phi_E}& &q_E^!(TM\oplus E^*)\ar[d]\\ E\ar[rr]&&E } \end{xy} \end{equation*} is a surjective vector bundle morphism over the identity on $E$. Since $V\oplus V^\circ$ is the kernel of $\Phi_E$, the diagram above factors as \begin{equation*} \begin{xy} \xymatrix{ L\simeq(TE\oplus T^*E)/(V\oplus V^\circ) \ar[d]\ar[rr]^{\qquad q_E^!\Phi_E}& &q_E^!(TM\oplus E^*)\ar[d]\\ E\ar[rr]&&E } \end{xy} \end{equation*} and we find that for any section $(X,\xi)$ of $TM\oplus E^*$, there exists a unique section $\widetilde{(X,\xi)}$ of $L$ such that \[\Phi_E\circ \widetilde{(X,\xi)}=(X,\xi)\circ q_E. \] Note that by the uniqueness of the section $\widetilde{(X,\xi)}$ of $L$ over $(X,\xi)$, we have $\widetilde{\varphi \cdot(X,\xi)}=q_E^*\varphi \cdot \widetilde{(X,\xi)}$ for all $\varphi \in C^\infty(M)$. We start by proving the following observation. \begin{lemma} Choose $(X,\xi), (Y,\eta)\in\Gamma(TM\oplus E^*)$. Then \[e_m\mapsto \langle \widetilde{(X,\xi)}(e_m), (T_meY(m),\mathbf{d}_{e_m} \ell_\eta)\rangle-Y(m)\langle\xi,e\rangle,\] where $e\in\Gamma(E)$ is such that $e(m)=e_m$, defines a linear function on $E$. \end{lemma} \begin{proof} Choose first $e,f\in \Gamma(E)$. Then we have \[ \Phi_E\bigl(\widetilde{(X,\xi)}(e(m))+_{\Phi_E}\widetilde{(X,\xi)}(f(m))\bigr)=(X,\xi)(m) \] and \[ \Phi_E\bigl(\widetilde{(X,\xi)}(e(m)+f(m))\bigr)=(X,\xi)(m).
\] Since $\widetilde{(X,\xi)}(e(m))+_{\Phi_E}\widetilde{(X,\xi)}(f(m))$ and $\widetilde{(X,\xi)}(e(m)+f(m))$ are both elements of $L_{e_m+f_m}$, we find by definition of $\widetilde{(X,\xi)}$ that \[\widetilde{(X,\xi)}(e(m))+_{\Phi_E}\widetilde{(X,\xi)}(f(m))=\widetilde{(X,\xi)}(e(m)+f(m)). \] By definition of the addition in the $\Phi_E$-fibers, we find also \[(T_meY(m),\mathbf{d}_{e_m} \ell_\eta)+_{\Phi_E}(T_mfY(m),\mathbf{d}_{f_m} \ell_\eta)=(T_m(e+f)Y(m),\mathbf{d}_{e_m+f_m} \ell_\eta) \] if $e,f\in\Gamma(E)$ are such that $e(m)=e_m$ and $f(m)=f_m$. Again by definition of the addition in the $\Phi_E$-fibers, we hence get \begin{align*} &\langle \widetilde{(X,\xi)}(e(m)+f(m)), (T_m(e+f)Y(m),\mathbf{d}_{e_m+f_m} \ell_\eta)\rangle\\ =&\langle \widetilde{(X,\xi)}(e(m)), (T_meY(m),\mathbf{d}_{e_m} \ell_\eta)\rangle + \langle \widetilde{(X,\xi)}(f(m)), (T_mfY(m),\mathbf{d}_{f_m} \ell_\eta)\rangle. \end{align*} In particular, we find \[\langle \widetilde{(X,\xi)}(r\cdot e(m)), (T_m(r\cdot e)Y(m),\mathbf{d}_{r\cdot e_m} \ell_\eta)\rangle =r\cdot\langle \widetilde{(X,\xi)}(e(m)), (T_meY(m),\mathbf{d}_{e_m} \ell_\eta)\rangle \] for all $r\in\mathbb{N}$. The same equality follows then for $r\in\mathbb{Q}$ and by continuity for all $r\in\mathbb{R}$. Choose $\varphi \in C^\infty(M)$ and set $\varphi (m)=\alpha$. Then \begin{align*} &\langle \widetilde{(X,\xi)}(\alpha e_m), (T_m(\varphi \cdot e)Y(m),\mathbf{d}_{\alpha e_m} \ell_\eta)\rangle-Y(m)\langle\xi,\varphi \cdot e\rangle\\ =&\langle \widetilde{(X,\xi)}(\alpha e_m), (T_m(\alpha e)Y(m)+Y(m)(\varphi )e^\uparrow(\alpha e_m),\mathbf{d}_{\alpha e_m} \ell_\eta)\rangle -Y(m)\langle\xi,\varphi \cdot e\rangle\\ =&\langle \widetilde{(X,\xi)}(\alpha e_m), (T_m(\alpha e)Y(m),\mathbf{d}_{\alpha e_m} \ell_\eta) \rangle -\alpha Y(m)\langle\xi, e\rangle\\ =&\alpha\cdot \bigl( \langle \widetilde{(X,\xi)}(e_m), (T_m eY(m),\mathbf{d}_{e_m} \ell_\eta) \rangle -Y(m)\langle\xi, e\rangle \bigr).
\end{align*} Thus, we have shown that the function \[e_m\mapsto \langle \widetilde{(X,\xi)}(e_m), (T_meY(m),\mathbf{d}_{e_m} \ell_\eta)\rangle-Y(m)\langle\xi,e\rangle\] is well-defined, i.e. it doesn't depend on the choice of the section $e$, and is linear. \end{proof} Thus, we can consider the map \[\llbracket\cdot\,,\cdot\rrbracket_L:\Gamma(TM\oplus E^*)\times\Gamma(TM\oplus E^*)\to \Gamma(TM\oplus E^*)\] defined by \[\langle\llbracket(X,\xi), (Y,\eta)\rrbracket_L,(e_m,0)\rangle=\langle \widetilde{(X,\xi)}(e_m), (T_meY(m),\mathbf{d}_{e_m} \ell_\eta)\rangle-Y(m)\langle\xi,e\rangle,\] for any section $e\in\Gamma(E)$ such that $e(m)=e_m$, and \[ \pr_{TM}\llbracket(X,\xi), (Y,\eta)\rrbracket_L=[X,Y]. \] \begin{theorem} Let $E\to M$ be a vector bundle and consider a splitting \[TE\oplus T^*E\cong(V\oplus V^\circ)\oplus L\to E. \] The triple $(TM\oplus E^*, \pr_{TM}, \llbracket\cdot\,,\cdot\rrbracket_L)$, where $\llbracket\cdot\,,\cdot\rrbracket_L$ is defined as above, is a dull algebroid.
\end{theorem} \begin{proof} We compute for $e\in\Gamma(E)$, setting $\alpha:=\varphi (m)$: \begin{align*} \langle\llbracket(X,\xi), \varphi\cdot(Y,\eta)\rrbracket_L, (e,0)\rangle=&\langle \widetilde{(X,\xi)}(e_m), (T_me(\alpha Y(m)),\mathbf{d}_{e_m} (q_E^*\varphi \cdot \ell_\eta))\rangle-\alpha Y(m)\langle\xi,e\rangle\\ =&\alpha\langle \widetilde{(X,\xi)}(e_m), (T_meY(m),\mathbf{d}_{e_m} \ell_\eta)\rangle-\alpha Y(m)\langle\xi,e\rangle +X(\varphi )\langle \eta, e\rangle\\ =&\langle \varphi \llbracket(X,\xi), (Y,\eta)\rrbracket_L+X(\varphi )(Y,\eta), (e,0)\rangle(m),\\ \langle\llbracket \varphi\cdot (X,\xi), (Y,\eta)\rrbracket_L, (e,0)\rangle&=\langle (q_E^*\varphi \widetilde{(X,\xi)})(e_m), (T_meY(m),\mathbf{d}_{e_m}\ell_\eta)\rangle\\ &\qquad\qquad-Y(m)(\varphi )\langle\xi,e\rangle(m)-\varphi (m)Y(m)\langle\xi,e\rangle\\ =&\langle \varphi \llbracket(X,\xi), (Y,\eta)\rrbracket_L-Y(\varphi )(X,\xi), (e,0)\rangle(m). \end{align*} Since \[\langle\llbracket(X,\xi), \varphi\cdot(Y,\eta)\rrbracket_L, (0,\theta)\rangle=\langle \varphi \llbracket(X,\xi), (Y,\eta)\rrbracket_L+X(\varphi )(Y,\eta), (0,\theta)\rangle\] and \[\langle\llbracket \varphi\cdot (X,\xi), (Y,\eta)\rrbracket_L, (0,\theta)\rangle =\langle \varphi \llbracket(X,\xi), (Y,\eta)\rrbracket_L-Y(\varphi )(X,\xi), (0,\theta)\rangle \] hold by construction for all $\theta\in\Omega^1(M)$, we have shown the two Leibniz equalities. The compatibility of $\pr_{TM}$ with $\llbracket\cdot\,,\cdot\rrbracket_L$ holds by construction. \end{proof} \begin{corollary}\label{great} Let $E\to M$ be a vector bundle and consider a splitting \[TE\oplus T^*E\cong(V\oplus V^\circ)\oplus L\to E.
\] Define the map \[\Delta^L:\Gamma(TM\oplus E^*)\times\Gamma(E\oplus T^*M)\to\Gamma(E\oplus T^*M), \] \[\langle\Delta^L_{(X,\xi)}(e,\theta), (Y,\eta)\rangle=X\langle(e,\theta), (Y,\eta)\rangle -\langle \llbracket(X,\xi), (Y,\eta)\rrbracket_L, (e,\theta)\rangle. \] Then $\Delta^L$ is a Dorfman connection. \end{corollary} \begin{remark}\label{dorfman_boring} Note that, by definition, we have $\Delta^L_{(X,\xi)}(0,\theta)=(0,\ldr{X}\theta)$ for all $X\in \mathfrak{X}(M)$ and $\theta\in\Omega^1(M)$. To see this, choose $Y\in\mathfrak{X}(M)$ and compute \[\langle \Delta^L_{(X,\xi)}(0,\theta), (Y,0)\rangle=X\langle\theta, Y\rangle-\langle \theta, [X,Y]\rangle. \] \end{remark} \begin{proof}[Proof of Corollary \ref{great}] By construction, $\Delta^L$ is dual to the dull bracket $\llbracket\cdot\,,\cdot\rrbracket_L$ on $\Gamma(TM\oplus E^*)$. \end{proof} We end this subsection with a proposition directly relating the Dorfman connection $\Delta^L$ to the subbundle $L\subseteq TE\oplus T^*E$. \begin{proposition}\label{relation_Delta_L} Let $E\to M$ be a vector bundle and consider a splitting \[TE\oplus T^*E\cong(V\oplus V^\circ)\oplus L\to E. \] Choose $(X,\xi)\in\Gamma(TM\oplus E^*)$. Then the corresponding section of $L$ is given by \[\widetilde{(X,\xi)}(e_m)=\left( T_meX(m), \mathbf{d}_{e_m}\ell_\xi\right)-\Delta^L_{(X,\xi)}(e,0)^\uparrow(e_m) \] for all $e_m\in E$ and $e\in\Gamma(E)$ such that $e(m)=e_m$. \end{proposition} \begin{proof} Since $\Phi_E\widetilde{(X,\xi)}(e_m)=(X,\xi)(m)=\Phi_E(T_meX(m), \mathbf{d}_{e_m}\ell_\xi)$ for $e_m\in E$, there exists a pair $(f,\theta)\in\Gamma(E\oplus T^*M)$ such that \[\widetilde{(X,\xi)}(e_m)=(T_meX(m), \mathbf{d}_{e_m}\ell_\xi)+(f,\theta)^\uparrow(e_m).
\] We compute for $(Y,\eta)\in\Gamma(TM\oplus E^*)$: \begin{align*} \langle (f,\theta), (Y,\eta)\rangle(m)&=\langle (f,\theta)^\uparrow(e_m), (T_meY(m), \mathbf{d}_{e_m}\ell_\eta)\rangle\\ &=\langle\widetilde{(X,\xi)}(e_m)-(T_meX(m), \mathbf{d}_{e_m}\ell_\xi) , (T_meY(m), \mathbf{d}_{e_m}\ell_\eta)\rangle\\ &=\langle \llbracket(X,\xi), (Y,\eta)\rrbracket_L,(e_m,0)\rangle +Y(m)\langle \xi, e\rangle-X(m)\langle \eta,e\rangle -Y(m)\langle \xi, e\rangle\\ &=\langle \llbracket(X,\xi), (Y,\eta)\rrbracket_L,(e_m,0)\rangle-X(m)\langle \eta,e\rangle \\ &=-\langle \Delta^L_{(X,\xi)}(e,0), (Y,\eta)\rangle(m). \end{align*} \end{proof} \subsection{The converse construction} Let $E\to M$ be a vector bundle and consider a Dorfman connection \[\Delta:\Gamma(TM\oplus E^*)\times\Gamma(E\oplus T^*M)\to \Gamma(E\oplus T^*M).\] We define the subset \[L_\Delta\subseteq TE\oplus T^*E\] by \[L_\Delta(e_m)=\left\{\left.\left(T_me X(m), \mathbf{d} \ell_\xi(e_m)\right)-\Delta_{(X,\xi)}(e,0)^\uparrow(e_m)\right| (X,\xi)\in\Gamma(TM\oplus E^*)\right\} \] for any section $e\in\Gamma(E)$ such that $e(m)=e_m$. For any pair $(X,\xi)\in\Gamma(TM\oplus E^*)$, we will write $\widetilde{(X,\xi)}\in\Gamma_E(TE\oplus T^*E)=\mathfrak{X}(E)\times\Omega^1(E)$ for the section defined by \[\widetilde{(X,\xi)}(e_m)=\left(T_me X(m), \mathbf{d} \ell_\xi(e_m)\right)-\Delta_{(X,\xi)}(e,0)^\uparrow(e_m) \] for all $e_m\in E$. Note that $\Phi_E\circ \widetilde{(X,\xi)}=(X,\xi)\circ q_E$. \begin{proposition} $L_\Delta$ is a well-defined subbundle of $TE\oplus T^*E\to E$ such that $(V\oplus V^\circ)\oplus L_\Delta\xrightarrow{\sim} TE\oplus T^*E$. \end{proposition} \begin{proof} We show that the fiber over $e_m$ doesn't depend on the choice of the section $e\in\Gamma(E)$ such that $e(m)=e_m$.
Note first that for any pair $(f,\theta)\in\Gamma(E\oplus T^*M)$, we have \begin{align*} \left\langle \left(T_me X(m), \mathbf{d} \ell_\xi(e_m)\right)-\Delta_{(X,\xi)}(e,0)^\uparrow(e_m), (f,\theta)^\uparrow(e_m)\right\rangle &= \langle \theta(m), X(m)\rangle +\langle \xi(m), f(m)\rangle. \end{align*} This pairing doesn't depend on $e$. Choose then any connection $\nabla$ on $E$ and a pair $(Y,\eta)\in\Gamma(TM\oplus E^*)$. Consider the pair \[(Y_\nabla,\eta_\nabla)(e_m):=\left(T_meY(m)-\left.\frac{d}{dt}\right|_{t=0}e_m+t\nabla_{Y(m)}e, \mathbf{d} \ell_\eta(e_m)-(T_{e_m}q_E)^*\langle \nabla^*_\cdot\eta, e\rangle\right).\] Then \begin{align*} &\left\langle \left(T_me X(m), \mathbf{d} \ell_\xi(e_m)\right)-\Delta_{(X,\xi)}(e,0)^\uparrow(e_m), (Y_\nabla,\eta_\nabla)(e_m)\right\rangle\\ =\,& X(m)\langle \eta, e\rangle -\langle \nabla^*_{X(m)}\eta, e\rangle -\langle \eta(m), \pr_{E}\Delta_{(X,\xi)}(e,0)\rangle\\ &+Y(m)\langle \xi, e\rangle -\langle \pr_{T^*M}\Delta_{(X,\xi)}(e,0), Y(m)\rangle +\langle \xi(m), \nabla_{Y(m)}e\rangle\\ =\,& X(m)\langle \eta, e\rangle -\langle \nabla^*_{X(m)}\eta, e(m)\rangle -\langle (Y,\eta)(m), \Delta_{(X,\xi)}(e,0)\rangle+\langle \nabla_{Y(m)}^*\xi, e(m)\rangle\\ =\,& \langle \llbracket(X,\xi), (Y,\eta)\rrbracket_\Delta, (e(m), 0)\rangle-\langle \nabla^*_{X(m)}\eta, e(m)\rangle +\langle \nabla_{Y(m)}^*\xi, e(m)\rangle. \end{align*} Again, this depends only on the value $e_m$ of $e$ at $m$. Since the pairs $(Y_\nabla,\eta_\nabla)$ and $(f,\theta)^\uparrow$ span the whole of $TE\oplus T^*E$ and the pairing is nondegenerate, we have shown that $L_\Delta$ is well-defined. The second claim is immediate, using the fact that $\Phi_E\circ \widetilde{(X,\xi)}=(X,\xi)\circ q_E$ for all $(X,\xi)\in\Gamma(TM\oplus E^*)$ and that $V\oplus V^\circ$ is spanned by the sections $(e,\theta)^\uparrow$ for $(e,\theta)\in\Gamma(E\oplus T^*M)$. \end{proof} The results in the last two subsections are summarized in the following theorem.
\begin{theorem} Let $q_E:E\to M$ be a vector bundle. The maps \begin{align*} \Delta&\mapsto L_\Delta,\\ \Delta^L&\mapsfrom L \end{align*} define a bijection \begin{equation*} \left\{\begin{array}{c} (TM\oplus E^*)\text{-Dorfman connections }\\ \Delta \text{ on } E\oplus T^*M \end{array} \right\} \leftrightarrow \left\{\begin{array}{c} \text{ Splittings }\\ TE\oplus T^*E\cong(V\oplus V^\circ)\oplus L \end{array}\right\}. \end{equation*} \end{theorem} Since a $(TM\oplus E^*)$-Dorfman connection $\Delta$ on $E\oplus T^*M$ is the same as a dull algebroid structure $(\pr_{TM}, \llbracket\cdot\,,\cdot\rrbracket_\Delta)$ on $TM\oplus E^*$, we can reformulate this bijection as follows: \begin{equation*} \left\{\begin{array}{c} \text{Dull algebroids }\\ (TM\oplus E^*, \pr_{TM}, \llbracket\cdot\,,\cdot\rrbracket) \end{array} \right\} \leftrightarrow \left\{\begin{array}{c} \text{ Splittings }\\ TE\oplus T^*E\cong(V\oplus V^\circ)\oplus L \end{array}\right\}. \end{equation*} \subsection{The canonical pairing and the Courant-Dorfman bracket on $TE\oplus T^*E$} We show in this section that the failure of a splitting $L$ of $TE\oplus T^*E$ to be Lagrangian is equivalent to the failure of $\llbracket\cdot\,,\cdot\rrbracket_L$ to be skew-symmetric, and that the failure of its set of sections to be closed under the Courant-Dorfman bracket is measured by the curvature of $\Delta$. Here and later, we will need the following notation. Let $E\to M$ be a vector bundle and consider a Dorfman connection $\Delta:\Gamma(TM\oplus E^*)\times\Gamma(E\oplus T^*M)\to\Gamma(E\oplus T^*M)$. We call $\Skew_\Delta\in\Gamma((TM\oplus E^*)^*\otimes(TM\oplus E^*)^*\otimes E^*)$ the tensor defined by \[\Skew_\Delta(v_1,v_2)=\pr_{E^*}(\llbracket v_1, v_2\rrbracket_\Delta+\llbracket v_2, v_1\rrbracket_\Delta) \] for all $v_1,v_2\in\Gamma(TM\oplus E^*)$.
By the Leibniz identity, this is indeed $C^\infty(M)$-linear in both arguments. Note that the $TM$-part of $\llbracket v_1, v_2\rrbracket_\Delta+\llbracket v_2, v_1\rrbracket_\Delta$ always vanishes since the Lie bracket of vector fields is skew-symmetric. \begin{theorem}\label{Lagrangian} Let $\Delta:\Gamma(TM\oplus E^*)\times\Gamma(E\oplus T^*M)\to\Gamma(E\oplus T^*M)$ be a Dorfman connection and choose $v, v_1,v_2\in\Gamma(TM\oplus E^*)$ and $\sigma,\sigma_1, \sigma_2\in\Gamma(E\oplus T^*M)$. Then \begin{enumerate} \item $\left\langle \tilde v_1, \tilde v_2\right\rangle=\ell_{ \Skew_\Delta(v_1, v_2)}$, \item $\left\langle \tilde v, \sigma^\uparrow\right\rangle=q_E^*\langle v, \sigma\rangle$, \item $\left\langle \sigma_1^\uparrow, \sigma_2^\uparrow\right\rangle=0$. \end{enumerate} \end{theorem} \begin{proof} Since the second and third equalities are immediate, we prove only the first one. We write $v_1=(X,\xi)$, $v_2=(Y,\eta)$ and compute for any section $e\in\Gamma(E)$: \begin{align*} &\left\langle \left(T_me X(m), \mathbf{d} \ell_\xi(e_m)\right)-\Delta_{(X,\xi)}(e,0)^\uparrow(e_m), \left(T_me Y(m), \mathbf{d} \ell_\eta(e_m)\right)-\Delta_{(Y,\eta)}(e,0)^\uparrow(e_m)\right\rangle\\ =\,&X(m)\langle \eta, e\rangle -\langle\pr_{T^*M}\Delta_{(Y,\eta)}(e,0), X(m)\rangle- \langle \eta(m), \pr_E\Delta_{(X,\xi)}(e,0)\rangle\\ &+ Y(m)\langle \xi, e\rangle -\langle\pr_{T^*M}\Delta_{(X,\xi)}(e,0), Y(m)\rangle- \langle \xi(m), \pr_E\Delta_{(Y,\eta)}(e,0)\rangle\\ =\,&\left(X\langle \eta, e\rangle -\langle\Delta_{(Y,\eta)}(e,0), (X,\xi)\rangle+ Y\langle \xi, e\rangle -\langle\Delta_{(X,\xi)}(e,0), (Y,\eta)\rangle\right)(m)\\ =\,&\langle(e,0), \llbracket v_2, v_1\rrbracket_\Delta+\llbracket v_1, v_2\rrbracket_\Delta\rangle(m).
\end{align*} \end{proof} \begin{corollary} The bracket $\llbracket\cdot\,,\cdot\rrbracket_\Delta$ associated to a Dorfman connection $\Delta$ is skew-symmetric if and only if $L_\Delta$ is Lagrangian. The corresponding splitting \[ TE\oplus T^*E\cong(V\oplus V^\circ)\oplus L_\Delta\] is then the direct sum of the Dirac structure $V\oplus V^\circ$ and the almost Dirac structure $L_\Delta$. \end{corollary} \begin{proof} Since the rank of $L_\Delta$ is equal to the dimension of $E$ as a manifold, we only have to show that $L_\Delta$ is isotropic if and only if $\llbracket\cdot\,,\cdot\rrbracket_\Delta$ is skew-symmetric. But this is immediate by the preceding theorem. \end{proof} \bigskip Next, we will see how the Dorfman connection encodes the Courant-Dorfman bracket on linear and core sections. The next theorem shows how the integrability of $L_\Delta$ is related to the curvature $R_\Delta$ of the Dorfman connection. \begin{theorem}\label{super} Choose $v, v_1, v_2$ in $\Gamma(TM\oplus E^*)$ and $\sigma, \sigma_1, \sigma_2\in\Gamma(E\oplus T^*M)$. Then \begin{enumerate} \item $\left[\sigma_1^\uparrow, \sigma_2^\uparrow\right]=0$, \item $\left[\widetilde{v}, \sigma^\uparrow\right]=\left(\Delta_{v}\sigma\right)^\uparrow$, \item $ \left[\widetilde{v_1},\widetilde{v_2}\right] =\widetilde{\llbracket v_1, v_2\rrbracket_\Delta}-R_\Delta(v_1, v_2)(\cdot,0)^\uparrow$. \end{enumerate} \end{theorem} The proof of this theorem is quite long and technical; it can be found in Appendix \ref{proof_of_super}. \begin{remark}\label{twisted_shit} \begin{enumerate} \item If the Courant-Dorfman bracket is twisted by a linear closed $3$-form $H$ over a map $\bar H:TM\wedge TM\to E^*$ \cite{BuCa12}, then the bracket $[\tilde v_1,\tilde v_2]$ will be linear over $\llbracket v_1, v_2\rrbracket_{\bar H}=\llbracket v_1, v_2\rrbracket+(0,\bar H(X_1,X_2))$.
Note that the Dorfman connection dual to this bracket is $\Delta^{\bar H}_v\sigma=\Delta_v\sigma+(0,\langle \bar H(X,\cdot), e\rangle)$. A more careful study of general exact Courant algebroids \cite{Roytenberg99} over vector bundles and of the corresponding twistings of the Dorfman connections and dull algebroids corresponding to splittings of $TE\oplus T^*E$ will be done later. \item The \emph{Courant bracket}, i.e. the anti-symmetric counterpart of the Courant-Dorfman bracket, is given by \begin{enumerate} \item $\left[\sigma_1^\uparrow, \sigma_2^\uparrow\right]_C=0$, \item $\left[\widetilde{v}, \sigma^\uparrow\right]_C=\left[\widetilde{v}, \sigma^\uparrow\right]-(0, \frac{1}{2}q_E^*\mathbf{d}\langle v,\sigma\rangle)=\left(\Delta_{v}\sigma-(0, \frac{1}{2}\mathbf{d}\langle v,\sigma\rangle)\right)^\uparrow$, \item $ \left[\widetilde{v_1},\widetilde{v_2}\right]_C =\widetilde{\llbracket v_1, v_2\rrbracket_\Delta}-R_\Delta(v_1, v_2)(\cdot,0)^\uparrow-(0, \frac{1}{2}\mathbf{d}\ell_{\Skew_\Delta(v_1,v_2)})$. \end{enumerate} We choose to work with non-anti-symmetric Courant algebroids because Dorfman connections naturally describe the Courant-Dorfman bracket: recall Remark \ref{dorfman_boring} and Proposition \ref{relation_Delta_L}. Here, we see from (b) that working with a ``Courant connection'' obtained from a splitting as we did would yield much more complicated formulas and axioms for the connection. This is also why we chose to name these connections after I. Dorfman. \end{enumerate} \end{remark} The following corollary of Theorem \ref{super} is immediate. \begin{corollary} Let $E\to M$ be a vector bundle and consider a splitting $TE\oplus T^*E=(V\oplus V^\circ)\oplus L$. Then the horizontal space $L$ is a Dirac structure if and only if the corresponding dull algebroid $(TM\oplus E^*,\pr_{TM},\llbracket\cdot\,,\cdot\rrbracket_L)$ is a Lie algebroid.
\end{corollary} We will study more general (non-horizontal) Dirac structures on $E$ in the next section. Before that, we end this subsection with some examples. \begin{example}\label{example_easy} Recall that an ordinary connection on a vector bundle $E\to M$ is a splitting $TE\cong V\oplus H_\nabla$. We have seen in Section \ref{sec_standard_almost_dorfman} that the Dorfman connection \[\Delta:\Gamma(TM\oplus E^*)\times\Gamma(E\oplus T^*M)\to\Gamma(E\oplus T^*M), \] \[\Delta_{(X,\xi)}(e,\theta)=(\nabla_Xe,\ldr{X}\theta+\langle\nabla_\cdot^*\xi,e\rangle)\] corresponds to the splitting \[TE\oplus T^*E\cong (V\oplus V^\circ)\oplus(H_\nabla\oplus H_\nabla^\circ). \] \end{example} \begin{example}\label{lie_algebroid_dual} Consider a dull algebroid $(A,\rho, [\cdot\,,\cdot])$ with \emph{skew-symmetric bracket}. We construct a $(TM\oplus A)$-connection $\Delta$ on $A^*\oplus T^*M$, hence corresponding to a splitting $TA^*\oplus T^*A^*=(V\oplus V^\circ)\oplus L_\Delta$ of the Pontryagin bundle of $A^*$. Take any connection $\nabla:\mathfrak{X}(M)\times \Gamma(A)\to\Gamma(A)$ and recall the definition of the basic connection $\nabla^{\rm bas}:\Gamma(A)\times\Gamma(A)\to\Gamma(A)$ associated to $\nabla$ and the dull algebroid structure on $A$: \[\nabla_a^{\rm bas}b=[a,b]+\nabla_{\rho(b)}a \] for all $a,b\in\Gamma(A)$. The Dorfman connection \[\Delta: \Gamma(TM\oplus A)\times\Gamma(A^*\oplus T^*M)\to \Gamma(A^*\oplus T^*M)\] is defined by \[\Delta_{(X,a)}(\xi,\theta)=\left(\langle \xi, \nabla_\cdot^{\rm bas}a\rangle+\nabla_X^*\xi-\rho^*\langle \nabla_\cdot a, \xi\rangle, \ldr{X}\theta+ \langle \nabla_\cdot a, \xi\rangle \right). \] The bracket $\llbracket\cdot\,,\cdot\rrbracket_\Delta$ on sections of $TM\oplus A$ is then given by \[\llbracket(X,a), (Y,b)\rrbracket_\Delta=\left([X,Y], \nabla_Xb-\nabla_Ya+\nabla_{\rho(b)}a-\nabla_{\rho(a)}b+[a,b] \right).
\] Since it is skew-symmetric, the horizontal space $L_\Delta$ is in this case Lagrangian. The projection $\pr_{TM}$ obviously intertwines this bracket with the Lie bracket of vector fields. The curvature of this Dorfman connection is given by \begin{align} \langle R_\Delta((X,a),(Y,b))(\xi,\theta), (Z,c)\rangle =&- \langle \llbracket(X,a),\llbracket(Y,b), (Z,c)\rrbracket_\Delta\rrbracket_\Delta+{\rm c.p.}, (\xi,\theta)\rangle\nonumber\\ =&-\langle \bigl(R_\nabla(X,Y)c- R_\nabla(\rho(a),Y)c\bigr) +{\rm c.p.}, \xi\rangle\label{curvature_poisson}\\ &-\langle \bigl(R_\nabla(\rho(a),\rho(b))c-R_\nabla(X,\rho(b))c\bigr) +{\rm c.p.}, \xi\rangle\nonumber\\ &-\langle \bigl(R^{\rm bas}_\nabla(a,b)Z- R^{\rm bas}_\nabla(a,b)\rho(c)\bigr) +{\rm c.p.}, \xi\rangle\nonumber\\ &-\langle [a,[b,c]]+[b,[c,a]]+[c,[a,b]], \xi\rangle.\nonumber \end{align} The proof of this formula is a rather long, but straightforward computation that is omitted here. We will see in the next subsection the significance of this example in terms of the linear almost Poisson structure defined on $A^*$ by the skew-symmetric dull algebroid structure. \end{example} \begin{example}\label{IM_2_form} Consider a vector bundle $E\to M$ endowed with a vector bundle morphism $\sigma:E\to T^*M$ over the identity and a connection $\nabla:\mathfrak{X}(M)\times\Gamma(E)\to\Gamma(E)$. Define the Dorfman connection \[\Delta: \Gamma(TM\oplus E^*)\times\Gamma(E\oplus T^*M)\to \Gamma(E\oplus T^*M) \] by \[\Delta_{(X,\xi)}(e,\theta)=(\nabla_Xe, \ldr{X}(\theta-\sigma(e))+\langle \nabla_\cdot^*(\sigma^*X+\xi), e\rangle +\sigma(\nabla_Xe)). \] The bracket $\llbracket\cdot\,,\cdot\rrbracket_\Delta$ on sections of $TM\oplus E^*$ is here given by \[\llbracket (X,\xi), (Y,\eta)\rrbracket_\Delta=([X,Y], \nabla_X^*(\eta+\sigma^*Y)-\nabla_Y^*(\xi+\sigma^*X)-\sigma^*[X,Y]).
\] In this case also, $L_\Delta$ is Lagrangian. Here also, we give the curvature of the Dorfman connection in terms of the Jacobiator of the associated bracket: \begin{equation}\label{curvature_sigma} \llbracket(X,\xi), \llbracket(Y,\eta), (Z,\gamma)\rrbracket_\Delta\rrbracket_\Delta+{\rm c.p.}= \Bigl(0, R_{\nabla^*}(X,Y)(\gamma+\sigma^*Z)+{\rm c.p.}\Bigr). \end{equation} We will see in the next section how this Dorfman connection is related to the $2$-form $\sigma^*\omega_{\rm can}\in\Omega^2(E)$, where $\omega_{\rm can}$ is the canonical symplectic form on $T^*M$. \end{example} \subsection{Dirac structures and Dorfman connections} In this subsection, we will consider sub-double vector bundles \begin{equation*} \begin{xy} \xymatrix{ D\ar[d]\ar[r]&U\ar[d]\\ E\ar[r]&M }\end{xy} \qquad \text{ of } \qquad \begin{xy} \xymatrix{ TE\oplus T^*E\ar[d]\ar[r]&TM\oplus E^*\ar[d]\\ E\ar[r]&M }\end{xy} \end{equation*} The intersection of such a sub-double vector bundle $D$ with the vertical space $V\oplus V^\circ$ always has constant rank on $E$, and there is a subbundle $K\subseteq E\oplus T^*M$ such that $D\cap (V\oplus V^\circ)$ is spanned over $E$ by the sections $k^\uparrow$ for all $k\in \Gamma(K)$. To see this, use for instance \cite{Mackenzie05}. We will call $K$ the \textbf{core} of $D$. The following proposition follows from this observation. \begin{proposition} Let $E$ be a vector bundle endowed with a sub-double vector bundle $D\subseteq TE\oplus T^*E$ over $U\subseteq TM\oplus E^*$ and with core $K\subseteq E\oplus T^*M$. Then there exists a Dorfman connection $\Delta$ such that $D$ is spanned by the sections $k^\uparrow$ for all $k\in\Gamma(K)$ and $\widetilde{u}$ for all $u\in\Gamma(U)$. \end{proposition} The Dorfman connection $\Delta$ is then said to be \emph{adapted} to $D$.
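Two extreme cases illustrate this notion. The vertical Dirac structure $V\oplus V^\circ$ is a sub-double vector bundle over $U=0$ with core $K=E\oplus T^*M$; since it is spanned by the sections $k^\uparrow$ for $k\in\Gamma(E\oplus T^*M)$ alone, every Dorfman connection is adapted to it. At the other extreme, each Dorfman connection $\Delta$ is adapted to its own horizontal space $L_\Delta$, viewed as a sub-double vector bundle over $U=TM\oplus E^*$ with trivial core $K=0$, since $L_\Delta$ is by definition spanned by the sections $\widetilde{(X,\xi)}$ for $(X,\xi)\in\Gamma(TM\oplus E^*)$.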
Conversely, given a Dorfman connection and two subbundles $U\subseteq TM\oplus E^*$ and $K\subseteq E\oplus T^*M$, we call $D_{U,K,\Delta}$ the sub-double vector bundle that is spanned by $\sigma^\uparrow$ for all $\sigma\in\Gamma(K)$ and $\widetilde{u}$ for all $u\in\Gamma(U)$. \begin{definition} Two Dorfman connections $\Delta,\Delta'$ are said to be $(U,K)$-\emph{equivalent} if $(\Delta-\Delta')(\Gamma(U)\times\Gamma(E\oplus 0))\subseteq\Gamma(K)$. \end{definition} The following proposition shows that this defines a $(U,K)$-equivalence relation on the set of Dorfman connections. We will write $[\Delta]_{U,K}$, or simply $[\Delta]$ since there will never be a risk of confusion, for the $(U,K)$-class of the Dorfman connection $\Delta$. The triple $(U,K,[\Delta])$ will be called a VB-triple in the following. By the next proposition, VB-triples are in one-one correspondence with sub-double vector bundles of $TE\oplus T^*E\to E$. \begin{proposition} Choose two Dorfman connections $\Delta, \Delta'$ and assume that $\Delta$ is adapted to $D$. Then $\Delta'$ is adapted to $D$ if and only if $\Delta$ and $\Delta'$ are $(U,K)$-equivalent. \end{proposition} \begin{proof} Assume that $\Delta$ is adapted to $D$. Then $D$ is spanned by the sections $\widetilde{u}^\Delta$ and $\sigma^\uparrow$ for all $\sigma\in\Gamma(K)$ and $u\in\Gamma(U)$. If $\Delta$ and $\Delta'$ are $(U,K)$-equivalent, we have $\widetilde u^\Delta-\widetilde u^{\Delta'}=k^\uparrow$ for some $k\in\Gamma(K)$. This immediately implies that $\Delta'$ is adapted to $D$. The converse implication can be proved in a similar manner. \end{proof} The following theorem follows immediately from the results in the preceding subsection. \begin{theorem} Let $D$ be a sub-double vector bundle of $TE\oplus T^*E$ over $U\subseteq TM\oplus E^*$ and $K\subseteq E\oplus T^*M$ and choose a Dorfman connection $\Delta$ that is adapted to $D$.
Then \begin{enumerate} \item $D$ is isotropic if and only if $\Skew_\Delta|_{U\otimes U}=0$ and $K\subseteq U^\circ$. \item $D$ is Lagrangian if and only if $\Skew_\Delta|_{U\otimes U}=0$ and $K=U^\circ$. \item $\Gamma(D)$ is closed under the Courant-Dorfman bracket if and only if \begin{enumerate} \item $\Delta_uk\in\Gamma(K)$ for all $u\in \Gamma(U)$, $k\in\Gamma(K)$, \item $\llbracket\Gamma(U), \Gamma(U)\rrbracket_\Delta\subseteq \Gamma(U)$, \item $R_\Delta\Bigl(U\otimes U\otimes(E\oplus T^*M)\Bigr)\subseteq K$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} This is an immediate corollary of the results in the preceding subsection, using the fact that $R_\Delta\Bigl((TM\oplus E^*)\otimes (TM\oplus E^*)\otimes(0\oplus T^*M)\Bigr)=0$. (To see this, use Proposition \ref{curvature_tensor} and the fact that the anchor is $\pr_{TM}$.) \end{proof} \begin{corollary}\label{Dirac_triples} Let $D$ be a sub-double vector bundle of $TE\oplus T^*E$ over $U\subseteq TM\oplus E^*$ and $K\subseteq E\oplus T^*M$ and choose a Dorfman connection $\Delta$ that is adapted to $D$. Then \begin{enumerate} \item $D$ is an isotropic subalgebroid of $TE\oplus T^*E\to E$ if and only if \begin{enumerate} \item $U\subseteq K^\circ$, \item $\Delta_uk\in\Gamma(K)$ for all $u\in \Gamma(U)$, $k\in\Gamma(K)$, \item $(U, \pr_{TM}|_{U}, \llbracket\cdot\,, \cdot\rrbracket_\Delta|_{\Gamma(U)\times\Gamma(U)})$ is a skew-symmetric dull algebroid, \item the induced Dorfman connection \[\bar\Delta:\Gamma(U)\times\Gamma((E\oplus T^*M)/K)\to \Gamma((E\oplus T^*M)/K) \] is flat. \end{enumerate} \item $D$ is a Dirac structure if and only if \begin{enumerate} \item $U=K^\circ$, \item $\Delta_uk\in\Gamma(K)$ for all $u\in \Gamma(U)$, $k\in\Gamma(K)$, \item $(U,\pr_{TM}|_{U}, \llbracket\cdot\,,\cdot\rrbracket_\Delta|_{\Gamma(U)\times\Gamma(U)})$ is a Lie algebroid.
\end{enumerate} \end{enumerate} \end{corollary} Note that in the second situation, the induced Dorfman connection $\bar\Delta$ is just the Lie derivative \[\ldr{}=\bar\Delta:\Gamma(U)\times\Gamma(U^*)\to \Gamma(U^*), \] whose flatness is equivalent to the fact that the restriction of $\llbracket\cdot\,,\cdot\rrbracket_\Delta$ to $\Gamma(U)$ satisfies the Jacobi identity. \begin{remark} \begin{enumerate} \item Note that $K=U^\circ$ and $\Delta_uk\in\Gamma(K)$ for all $u\in\Gamma(U)$, $k\in\Gamma(K)$ together imply that the dull bracket restricts to a bracket on $\Gamma(U)$. \item Using Proposition \ref{brackets_equal_on_U} below and Remark \ref{dorfman_boring}, it can be checked directly that if the conditions in (2) of Corollary \ref{Dirac_triples} are satisfied for $\Delta$, then they are also satisfied for any $\Delta'$ that is $(U,K)$-equivalent to $\Delta$. \item We will say that $(U,K,[\Delta])$ is a Dirac triple if the corresponding sub-double vector bundle $D_{(U,K,[\Delta])}$ is a Dirac structure on $E$. By the considerations above, we find that Dirac sub-double vector bundles of $TE\oplus T^*E\to E$ are in one-one correspondence with Dirac triples. \end{enumerate} \end{remark} \begin{proposition}\label{brackets_equal_on_U} Let $E\to M$ be a vector bundle and choose a VB-triple $(U,K,[\Delta]_{U,K})$ such that $U=K^\circ$. Then for any two representatives $\Delta,\Delta'\in [\Delta]_{U,K}$, we have \[\llbracket u_1, u_2\rrbracket_\Delta=\llbracket u_1, u_2\rrbracket_{\Delta'} \] for all $u_1,u_2\in\Gamma(U)$.
\end{proposition} \begin{proof} Since $\pr_{TM}\llbracket u_1, u_2\rrbracket_\Delta=[\pr_{TM}u_1, \pr_{TM}u_2]=\pr_{TM}\llbracket u_1, u_2\rrbracket_{\Delta'}$, we only need to check that \[\langle \llbracket u_1, u_2\rrbracket_\Delta, (e,0)\rangle=\langle\llbracket u_1, u_2\rrbracket_{\Delta'}, (e,0)\rangle \] for all $e\in\Gamma(E)$. But this is immediate by the hypothesis, the duality of $\Delta$ and $\llbracket\cdot\,,\cdot\rrbracket_\Delta$, and the definition of $(U,K)$-equivalence. \end{proof} \bigskip We conclude this section with a study of our recurrent examples. \begin{example}\label{foliation_example} In the situation of Example \ref{example_easy}, choose two subbundles $F_M\subseteq TM$ and $C\subseteq E$. Set $U:=F_M\oplus C^\circ$ and $K:=C\oplus F_M^\circ=U^\circ$. The sub-double vector bundle $D_{U,K,\Delta}$ corresponding to $U$, $K$ and the standard Dorfman connection associated to $\nabla$ is then the direct sum of a subbundle $F_E\subseteq TE$ with a subbundle $C_E\subseteq T^*E$. Since $U=K^\circ$, we get immediately that $C_E=F_E^\circ$ and $D_{(U,K,[\Delta])}$ is the trivial almost Dirac structure $F_E\oplus F_E^\circ$. An application of Corollary \ref{Dirac_triples} to this situation yields that $F_E\oplus F_E^\circ$ is Dirac if and only if \begin{enumerate} \item $F_M$ is involutive, \item $\nabla_Xc\in\Gamma(C)$ for all $X\in\Gamma(F_M)$ and $c\in\Gamma(C)$ and \item the induced connection $\tilde \nabla:\Gamma(F_M)\times \Gamma(E/C)\to\Gamma(E/C)$ is flat. \end{enumerate} Since $F_E\oplus F_E^\circ$ is Dirac if and only if $F_E\subseteq TE$ is involutive, we recover one of the results in \cite{JoOr12}. \end{example} \begin{example}\label{lie_algebroid_dual_2} In the situation of Example \ref{lie_algebroid_dual}, consider $U=\graphe(\rho:A\to TM)$ and $K=\graphe(-\rho^*:T^*M\to A^*)=U^\circ$.
A straightforward computation shows that $\Delta_{(\rho(a), a)}(-\rho^*(\omega),\omega)=(-\rho^*({\nabla^{\rm bas}_a}^*\omega), {\nabla^{\rm bas}_a}^*\omega)\in\Gamma(K)$ for all $ a\in\Gamma(A)$ and $\omega\in\Omega^1(M)$. Furthermore, we have \[ \llbracket(\rho(a),a), (\rho(b),b)\rrbracket_\Delta=(\rho([a,b]), [a,b]) \] for all $a,b\in\Gamma(A)$, which shows that $(U,\pr_{TM}, \llbracket\cdot\,,\cdot\rrbracket_\Delta)$ is a Lie algebroid if and only if $A$ is a Lie algebroid. We have: \begin{align*} \bar\Delta_{(\rho(a),a)}\overline{(\xi,0)}&=\overline{\left(\langle \xi, \nabla_\cdot^{\rm bas}a\rangle+\nabla_{\rho(a)}^*\xi-\rho^*\langle \nabla_\cdot a, \xi\rangle, \langle \nabla_\cdot a, \xi\rangle \right)}\\ &=\overline{\left(\langle \xi, \nabla_\cdot^{\rm bas}a\rangle+\nabla_{\rho(a)}^*\xi, 0 \right)}=\overline{\left(\ldr{a}\xi, 0 \right)}. \end{align*} Finally, the right-hand side of \eqref{curvature_poisson} vanishes for $(\rho(a), a), (\rho(b), b), (\rho(c),c)\in\Gamma(U)$ and arbitrary $(\xi,\theta)$ if and only if $A$ is a Lie algebroid. Hence, we find that the sub-double vector bundle $D$ of $TA^*\oplus T^*A^*$ associated to $U$, $K$ and $\Delta$ is an almost Dirac structure on $A^*$, and a Dirac structure if and only if $A$ is a Lie algebroid. The vector bundle $D\to A^*$ is in fact the graph of the vector bundle morphism \[\pi_{A}^\sharp:T^*A^*\to TA^* \] associated to the linear almost Poisson structure defined on $A^*$ by the skew-symmetric dull algebroid structure on $A$.
Indeed, $D$ is spanned by the sections $k^\uparrow$ for $k\in \Gamma(K)$ and $\widetilde{u}$ for $u\in\Gamma(U)$, or, equivalently, by the sections \[ (-\rho^*\theta^\uparrow, q_{A^*}^*\theta) \] for $\theta\in\Omega^1(M)$ and \[ (\widetilde{\rho(a)}, \tilde a) \] for $a\in \Gamma(A)$, where \begin{align*} \widetilde{\rho(a)}(\xi_m)&=T_m\xi(\rho(a)(m))-\left.\frac{d}{dt}\right\an{t=0}\left(\xi_m+t\cdot\ldr{a}\xi(m)\right)\\ \tilde a(\xi_m)&=\dr_{\xi_m} l_a. \end{align*} But by Appendix \ref{appendix_linear_Poisson}, these are exactly the sections $(\pi_A^\sharp(q_{A^*}^*\theta), q_{A^*}^*\theta)$ and $(\pi_A^\sharp(\dr l_a), \dr l_a)$. \end{example} \begin{example}\label{IM_2_form_2} Consider, in the situation of Example \ref{IM_2_form}, $U:=\graphe(-\sigma^*:TM\to E^*)$ and $K:=\graphe(\sigma:E\to T^*M)$. Then $U=K^\circ$ by definition and since \[\Delta_{(X,-\sigma^*X)}(e,\sigma(e))=(\nabla_Xe,\sigma(\nabla_Xe)) \] by definition, we find that $\Delta_uk\in\Gamma(K)$ for all $u\in\Gamma(U)$ and $k\in\Gamma(K)$. Furthermore, we have $\llbracket (X,-\sigma^*X), (Y,-\sigma^*Y)\rrbracket_\Delta=([X,Y], -\sigma^*[X,Y])$ for all $X,Y\in\mathfrak{X}(M)$ and $U$ is a Lie algebroid (isomorphic to $TM$ with the Lie bracket of vector fields). Alternatively, the Jacobiator in \eqref{curvature_sigma} is easily seen to vanish on this type of sections. This shows that the double vector subbundle $D\subseteq TE\oplus T^*E$ defined by $U$, $K$ and $\Delta$ is a Dirac structure. By the considerations in Appendix \ref{pullback_canonical_symplectic}, $D$ is the graph of the vector bundle morphism $TE\to T^*E$ defined by the closed $2$-form $\sigma^*\omega_{\rm can}$. \end{example} \begin{example} We now combine Examples \ref{foliation_example} and \ref{IM_2_form_2} to recover an example in \cite{JoRa12a}.
We consider the vector bundle $T^*M\to M$ endowed with a $TM$-connection $\nabla$ and the Dorfman connection \[\Delta:\Gamma(TM\oplus TM)\times \Gamma(T^*M\oplus T^*M)\to \Gamma(T^*M\oplus T^*M),\] \[\Delta_{(X,Y)}(\theta,\omega)=(\nabla_X\theta, \ldr{X}(\omega-\theta)+\langle \nabla^*_\cdot(X+Y),\omega\rangle+\nabla_X\theta). \] Consider a subbundle $F\subseteq TM$ and $U:=\{(x,-x)\mid x\in F\}\subseteq TM\oplus TM$. The annihilator $K=U^\circ$ is then given by $K=\{(\alpha,\beta)\in T^*M\oplus T^*M\mid \alpha-\beta\in F^\circ\}$. Note that by Example \ref{IM_2_form}, the dull bracket on $TM\oplus TM$ is skew-symmetric. In fact, it is easy to see that its restriction to $U$ is just the Lie bracket of vector fields \[ \llbracket (X,-X), (Y,-Y)\rrbracket_\Delta=([X,Y],-[X,Y]) \] for all $X,Y\in\Gamma(F)$. Hence, we know already that the sub-double vector bundle $D_{(U,K,[\Delta])}$ is an almost Dirac structure on $T^*M$. An easy computation using Appendix \ref{pullback_canonical_symplectic} yields that \[D_{(U,K,[\Delta])} (\alpha) =\{(x_\alpha, \omega_{\rm can}^\flat(x_\alpha)+\theta_\alpha)\mid x_\alpha\in\mathcal F(\alpha), \theta_\alpha\in \mathcal F^\circ(\alpha) \} \] for all $\alpha\in T^*M$, where $\mathcal F=(Tc_M)\inv(F)$. Assume that $M$ is the configuration space of a nonholonomic mechanical system and $F$ the constraint distribution. If $L$ is the Lagrangian of the system, then the pullback of the Dirac structure that we find to the constraint submanifold $\mathbb FL(F)\subseteq T^*M$ is one of the frameworks proposed in \cite{JoRa12a} for the study of the nonholonomic system.
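As a quick sanity check of the last displayed formula (an observation added here, not a claim of \cite{JoRa12a}): in the unconstrained case $F=TM$ one has $\mathcal F=T(T^*M)$ and $\mathcal F^\circ=0$, so the description of $D_{(U,K,[\Delta])}$ collapses to

```latex
% Unconstrained case F = TM (sanity check; obtained from the displayed
% formula for D_{(U,K,[\Delta])}(\alpha) with \mathcal{F} = T(T^*M)
% and \mathcal{F}^\circ = 0):
\[
D_{(U,K,[\Delta])}(\alpha)
   =\bigl\{\bigl(x_\alpha,\,\omega_{\rm can}^\flat(x_\alpha)\bigr)\;\big|\;
      x_\alpha\in T_\alpha(T^*M)\bigr\},
\]
% i.e. D_{(U,K,[\Delta])} is the graph of
% \omega_{\rm can}^\flat : T(T^*M) -> T^*(T^*M).
```

that is, the graph of $\omega_{\rm can}^\flat$, the Dirac structure defined by the canonical symplectic form on $T^*M$, as one expects for a system without constraints.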
\end{example} \section{Dorfman connections and Lie algebroids} We consider in this section a Lie algebroid $(A\to M, \rho, [\cdot\,,\cdot])$ and a Dorfman connection \[\Delta: \Gamma(TM\oplus A^*)\times\Gamma(A\oplus T^*M)\to \Gamma(A\oplus T^*M) \] with corresponding dull bracket $\llbracket\cdot\,,\cdot\rrbracket_{\Delta}$ and anchor $\pr_{TM}$ on $TM\oplus A^*$. We will work here with the map \[\Omega:\Gamma(TM\oplus A^*)\times\Gamma(A)\to\Gamma(A\oplus T^*M), \] \[\Omega_{(X,\xi)}a=\Delta_{(X,\xi)}(a,0)-(0,\dr\langle\xi, a\rangle). \] The map $\Omega$ has the following properties: \begin{enumerate} \item $\Omega_{\varphi (X,\xi)}a=\varphi \Omega_{(X,\xi)}a$, \item $\Omega_{(X,\xi)}(\varphi a)=\varphi \Omega_{(X,\xi)}a+X(\varphi )(a,0)-\langle \xi, a\rangle (0,\dr \varphi )$ \end{enumerate} for all $\varphi\in C^\infty(M)$, $a\in\Gamma(A)$ and $(X,\xi)\in\Gamma(TM\oplus A^*)$. For each $a\in\Gamma(A)$, we have two derivations over $\rho(a)$: \[\ldr{a}:\Gamma(A\oplus T^*M)\to\Gamma(A\oplus T^*M),\] \[\ldr{a}(b,\theta)=([a,b], \ldr{\rho(a)}\theta)\] and \[\ldr{a}:\Gamma(TM\oplus A^*)\to\Gamma(TM\oplus A^*),\] \[\ldr{a}(X,\xi)=([\rho(a),X], \ldr{a}\xi).\] Note that \begin{align*} \ldr{\varphi a}(b,\theta)=\varphi \ldr{a}(b,\theta) +(-\rho(b)(\varphi)a,\langle \theta, \rho(a)\rangle \dr\varphi). \end{align*} Finally, note that there is a ``Dorfman-like'' bracket $[\cdot\,,\cdot]_D$ on sections of $A\oplus T^*M$: \[ [(a,\theta), (b,\omega)]_D=([a,b], \ldr{\rho(a)}\omega-\ip{\rho(b)}\dr\theta) \] for $(a,\theta), (b,\omega)\in\Gamma(A\oplus T^*M)$. We have \begin{equation}\label{Dorfman_skew-sym} [\sigma_1, \sigma_2]_D+[\sigma_2, \sigma_1]_D=(0, \dr\langle \sigma_1, (\rho, \rho^*)\sigma_2\rangle)\footnote{We will write $\langle \cdot\,,\cdot\rangle_D$ for this pairing of $A\oplus T^*M$ with itself, i.e.
$\langle\sigma_1,\sigma_2\rangle_D=\langle \sigma_1, (\rho, \rho^*)\sigma_2\rangle$ for all $\sigma_1,\sigma_2\in\Gamma(A\oplus T^*M)$.} \end{equation} and the Jacobi identity in Leibniz form \begin{equation}\label{Dorfman_Jacobi} [\sigma_1, [\sigma_2, \sigma_3]] = [[\sigma_1, \sigma_2], \sigma_3] + [\sigma_2, [\sigma_1, \sigma_3]] \end{equation} for all $\sigma_1,\sigma_2,\sigma_3\in\Gamma(A\oplus T^*M)$. Note that $(A\oplus T^*M, \rho\circ\pr_{A}, [\cdot\,,\cdot]_D,\langle \cdot\,,\cdot\rangle_D)$ is not a Courant algebroid because $\langle \cdot\,,\cdot\rangle_D$ is in general degenerate. \subsection{The basic connections associated to $\Delta$} \begin{proposition}\label{prop_def_basic_connections} The two maps \begin{align*} \nabla^{\rm bas}: \Gamma(A)\times\Gamma(TM\oplus A^*)&\to \Gamma(TM\oplus A^*)\\ \nabla^{\rm bas}_a(X,\xi) &=(\rho,\rho^*)(\Omega_{(X,\xi)}a)+\ldr{a}(X,\xi) \end{align*} and \begin{align*} \nabla^{\rm bas}: \Gamma(A)\times\Gamma(A\oplus T^*M)&\to \Gamma(A\oplus T^*M)\\ \nabla^{\rm bas}_a(b,\theta) &=\Omega_{(\rho,\rho^*)(b,\theta)}a+\ldr{a}(b,\theta) \end{align*} are connections in the usual sense. \end{proposition} \begin{proof} Recall the notation $\dr_{A^*} \varphi :=\rho^*\dr \varphi $ for all $\varphi \in C^\infty(M)$.
We compute for $\varphi\in C^\infty(M)$, $a, b\in\Gamma(A)$, $\xi\in\Gamma(A^*)$, $\theta\in\Omega^1(M)$ and $X\in\mathfrak{X}(M)$: \begin{align*} \nabla^{\rm bas}_{\varphi a}(X,\xi)&=(\rho,\rho^*)(\Omega_{(X,\xi)}(\varphi a))+([\varphi \rho(a),X], \ldr{\varphi a}\xi)\\ &=\varphi \cdot \nabla^{\rm bas}_a(X,\xi)+(\rho,\rho^*)(X(\varphi ) a,- \langle\xi, a\rangle\dr \varphi )+ (-X(\varphi )\rho(a), \langle \xi, a\rangle \dr_{A^*} \varphi )\\ &=\varphi \cdot \nabla^{\rm bas}_a(X,\xi),\\ \nabla^{\rm bas}_{\varphi a}(b,\theta)&=\Omega_{(\rho,\rho^*)(b,\theta)}(\varphi a) +\ldr{\varphi a}(b,\theta)\\ &=\varphi \cdot\nabla^{\rm bas}_a(b,\theta)+ (\rho(b)(\varphi )\cdot a, 0)-\langle \theta, \rho(a)\rangle(0,\dr \varphi )- (\rho(b)(\varphi )a,0)+\langle\theta, \rho(a)\rangle(0,\dr \varphi )\\ &=\varphi \cdot\nabla^{\rm bas}_a(b,\theta),\\ \nabla^{\rm bas}_{a}\left(\varphi (X,\xi)\right)&=(\rho,\rho^*)(\Omega_{\varphi (X,\xi)}a)+\ldr{a}(\varphi (X, \xi))=\varphi \nabla^{\rm bas}_{a}(X,\xi) +\rho(a)(\varphi )(X, \xi),\\ \nabla^{\rm bas}_{a}\left(\varphi (b,\theta)\right)&=\Omega_{(\rho,\rho^*)\varphi (b,\theta)}a+\ldr{a}(\varphi (b,\theta)) =\varphi \nabla^{\rm bas}_{a}(b,\theta)+\rho(a)(\varphi )(b, \theta). \end{align*} \end{proof} The following proposition is easily checked and shows that the connections are in general not dual to each other. \begin{proposition}\label{prop_basic_connections} We have \begin{equation}\label{not_dual} \langle \nabla_a^{\rm bas}v, \sigma\rangle +\langle v, \nabla_a^{\rm bas}\sigma\rangle =\rho(a)\langle v, \sigma\rangle-\langle \Skew_\Delta(v, (\rho,\rho^*)\sigma), a\rangle \end{equation} and \begin{equation}\label{intertwined} \nabla_a^{\rm bas}(\rho,\rho^*)\sigma=(\rho,\rho^*)\nabla_a^{\rm bas}\sigma \end{equation} for all $a\in \Gamma(A)$, $v\in\Gamma(TM\oplus A^*)$ and $\sigma\in\Gamma(A\oplus T^*M)$.
\end{proposition} \begin{definition} The connections in Proposition \ref{prop_def_basic_connections} will be called the \textbf{basic connections} associated to $\Delta$. We will sometimes also write $\nabla_\sigma^{\rm bas}:=\nabla_{\pr_A\sigma}^{\rm bas}$ for a section $\sigma\in\Gamma(A\oplus T^*M)$. \end{definition} \begin{proposition}\label{prop_tensoriality} The map \[R_\Delta^{\rm bas}:\Gamma(A)\times\Gamma(A)\times\Gamma(TM\oplus A^*)\to\Gamma(A\oplus T^*M) \] given by \begin{align*} R_\Delta^{\rm bas}(a,b)(X,\xi)=-\Omega_{(X,\xi)}[a,b] +\ldr{a}\left(\Omega_{(X,\xi)}b\right)-\ldr{b}\left(\Omega_{(X,\xi)}a\right) + \Omega_{\nabla^{\rm bas}_b(X,\xi)}a-\Omega_{\nabla^{\rm bas}_a(X,\xi)}b \end{align*} is tensorial, i.e.~a section of \[A^*\otimes A^*\otimes (A\oplus T^*M)\otimes (A\oplus T^*M).\] \end{proposition} \begin{definition} The tensor $R_\Delta^{\rm bas}$ will be called the \textbf{basic curvature} associated to $\Delta$. \end{definition} \begin{proof}[Proof of Proposition \ref{prop_tensoriality}] We compute for $v=(X,\xi)\in\Gamma(TM\oplus A^*)$ and $a,b\in\Gamma(A)$: \begin{align*} R_\Delta(\varphi a, b)v=&-\Omega_{v}\left(\varphi[a,b]-\rho(b)(\varphi)a\right) +\ldr{\varphi a}\Omega_{v}b\\ &-\ldr{b}\left(\varphi\Omega_{v}a+X(\varphi)(a,0)-\langle \xi, a\rangle(0,\dr \varphi)\right)+ \Omega_{\nabla^{\rm bas}_bv}(\varphi a)-\Omega_{\varphi\nabla^{\rm bas}_{a}v}b\\ =&\varphi R_\Delta(a, b)v-X(\varphi)([a,b],0)+\langle \xi, [a,b]\rangle (0,\dr\varphi)+\rho(b)(\varphi)\Omega_{v}a\\ &+X(\rho(b)(\varphi))(a,0)-\langle \xi, a\rangle (0, \dr (\rho(b)(\varphi)))\\ & -\left((\rho\circ\pr_A)(\Omega_vb)(\varphi)a,-\langle \Omega_vb, (a,0)\rangle\dr\varphi\right)\\ &-\rho(b)(\varphi)\Omega_{v}a-\rho(b)(X(\varphi))(a,0)-X(\varphi)([b,a],0)\\ &+\rho(b)\langle \xi, a\rangle(0,\dr \varphi)+\langle \xi, a\rangle(0,\ldr{\rho(b)}\dr \varphi)\\ & +((\rho\circ\pr_A)(\Omega_vb)+[\rho(b),X])(\varphi) (a,0)-\langle \Omega_vb, (a,0)\rangle(0, \dr\varphi)\\ &-\langle
\ldr{b}\xi,a\rangle(0, \dr\varphi)\\ =&\varphi R_\Delta(a, b)v,\\ R_\Delta(a, b)(\varphi v)=&\varphi R_\Delta(a, b)v -\rho(a)(\varphi)\Omega_{v}b+\rho(b)(\varphi)\Omega_{v}a-\rho(b)(\varphi)\Omega_{v}a+\rho(a)(\varphi)\Omega_{v}b\\ =&\varphi R_\Delta(a, b)v. \end{align*} \end{proof} \begin{proposition}\label{basic_curvatures} The basic curvature has the following properties: \begin{enumerate} \item $R_{\nabla^{\rm bas}}=R^{\rm bas}_\Delta\circ (\rho,\rho^*)$, \item $R_{\nabla^{\rm bas}}=(\rho,\rho^*)\circ R^{\rm bas}_\Delta$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item For $\sigma\in\Gamma(A\oplus T^*M)$ and $a,b\in\Gamma(A)$, we have \begin{align*} R_\Delta(a,b)((\rho,\rho^*)\sigma)=&-\Omega_{(\rho,\rho^*)\sigma}[a,b] +\ldr{a}\left(\Omega_{(\rho,\rho^*)\sigma}b\right)-\ldr{b}\left(\Omega_{(\rho,\rho^*)\sigma}a\right)\\ & + \Omega_{\nabla^{\rm bas}_b(\rho,\rho^*)\sigma}a-\Omega_{\nabla^{\rm bas}_a(\rho,\rho^*)\sigma}b\\ =&-\Omega_{(\rho,\rho^*)\sigma}[a,b] -\ldr{[a,b]}\sigma+\ldr{a}\left(\Omega_{(\rho,\rho^*)\sigma}b+\ldr{b}\sigma\right)-\ldr{b}\left(\Omega_{(\rho,\rho^*)\sigma}a+\ldr{a}\sigma\right)\\ & + \Omega_{\nabla^{\rm bas}_b(\rho,\rho^*)\sigma}a-\Omega_{\nabla^{\rm bas}_a(\rho,\rho^*)\sigma}b\\ =&-\nabla_{[a,b]}^{\rm bas}\sigma+\nabla_a^{\rm bas}\nabla_b^{\rm bas}\sigma-\nabla_b^{\rm bas}\nabla_a^{\rm bas}\sigma=R_{\nabla^{\rm bas}}(a,b)\sigma. \end{align*} \item The second equality is shown in the same manner. \end{enumerate} \end{proof} \subsection{The Lie algebroid structure on $TA\oplus T^*A\to TM\oplus A^*$} Consider a Lie algebroid $A$ and a Dorfman connection \[\Delta:\Gamma(TM\oplus A^*)\times\Gamma(A\oplus T^*M)\to \Gamma(A\oplus T^*M). 
\] Then, for any section $a\in\Gamma(A)$, we define \[\Sigma_a\in\Gamma_{TM\oplus A^*}(TA\oplus T^*A) \] by \begin{align*} \Sigma_a(v_m,\xi_m)&=(T_mav_m,\dr_{a_m} \ell_\xi)-\Delta_{(X,\xi)}(a,0)^\uparrow(a_m) \end{align*} for any choice of section $(X,\xi)\in\Gamma(TM\oplus A^*)$ such that $(X,\xi)(m)=(v_m,\xi_m)$. That is, we have \[\Sigma_a=(Ta, R(\dr \ell_a))-\Omega_\cdot a^\dagger=\tilde a-\Omega_\cdot a^\dagger \] for all $a\in \Gamma(A)$ (see the description of the Lie algebroid structure on $T^*A\to A^*$ in Appendix \ref{big_lie_algebroids}). \begin{theorem}\label{rep_up_to_hom} The Lie algebroid structure on $TA\oplus T^*A\to TM\oplus A^*$ can be characterized as follows, where we write $\Theta: TA\oplus T^*A\to T(TM\oplus A^*)$ for its anchor: \begin{enumerate} \item $[\Sigma_a,\Sigma_b]=\Sigma_{[a,b]}-R_\Delta^{\rm bas}(a,b)^\dagger$, \item $[\Sigma_a,\sigma^\dagger]=(\nabla_a^{\rm bas}\sigma)^\dagger$, \item $[\sigma_1^\dagger, \sigma_2^\dagger]=0$, \item $\Theta(\Sigma_a)=\widehat{\nabla_a^{\rm bas}}\in\mathfrak{X}(TM\oplus A^*)$, \item $\Theta(\sigma^\dagger)=((\rho,\rho^*)\sigma)^\uparrow \in\mathfrak{X}(TM\oplus A^*)$. \end{enumerate} \end{theorem} \begin{remark} In other words, $(\rho,\rho^*):A\oplus T^*M\to TM\oplus A^*$, the basic connections $\nabla^{\rm bas}$ and the basic curvature $R_\Delta^{\rm bas}$ define the representation up to homotopy describing the VB-Lie algebroid structure on $TA\oplus T^*A\to TM\oplus A^*$ in terms of the splitting given by $\Delta$ (see \cite{GrMe10a}). \end{remark} \begin{proof}[Proof of Theorem \ref{rep_up_to_hom}] The proof of this theorem is a direct verification of the formulas, using the description of the Lie algebroid structure on $TA\oplus T^*A\to TM\oplus A^*$ that can be found in Appendix \ref{big_lie_algebroids}. We start with the Lie algebroid brackets. Choose $a,b\in\Gamma(A)$ and $\sigma\in\Gamma(A\oplus T^*M)$.
We have, using Proposition \ref{structure_of_TAT*A}: \begin{enumerate} \item \begin{align*} [\Sigma_a, \Sigma_b]&=\left[\tilde a-\Omega_\cdot a^\dagger, \tilde b-\Omega_\cdot b^\dagger\right]\\ &=\widetilde{[a,b]}-(\ldr{a}\Omega_\cdot b)^\dagger+(\ldr{b}\Omega_\cdot a)^\dagger+(\Omega_\cdot b\circ(\rho,\rho^*)\circ \Omega_\cdot a- \Omega_\cdot a\circ(\rho,\rho^*)\circ \Omega_\cdot b)^\dagger\\ &=\Sigma_{[a,b]}-R_\Delta(a,b)^\dagger, \end{align*} since we have, for all $v\in \Gamma(TM\oplus A^*)$: \begin{align*} &-(\ldr{a}\Omega_\cdot b)(v)+(\ldr{b}\Omega_\cdot a)(v)+(\Omega_\cdot b\circ(\rho,\rho^*)\circ \Omega_\cdot a)(v)- (\Omega_\cdot a\circ(\rho,\rho^*)\circ \Omega_\cdot b)(v)\\ =&-\ldr{a} \Omega_vb+\Omega_{\ldr{a}v}b+\ldr{b} \Omega_va-\Omega_{\ldr{b}v}a +\Omega_{(\rho,\rho^*)\Omega_v a}b- \Omega_{(\rho,\rho^*)\Omega_v b}a\\ =&-\ldr{a} \Omega_vb+\ldr{b} \Omega_va +\Omega_{\nabla_a^{\rm bas}v}b- \Omega_{\nabla_b^{\rm bas}v}a =-R_\Delta(a,b)v-\Omega_v[a,b]. \end{align*} \item $[\Sigma_a,\sigma^\dagger]=(\ldr{a}\sigma)^\dagger+\Omega_{(\rho,\rho^*)\sigma}a^\dagger=(\nabla_a^{\rm bas}\sigma)^\dagger$. 
\end{enumerate} For the anchor map, we compute: \begin{enumerate} \setcounter{enumi}{3} \item $\Theta(\Sigma_a)(\ell_\sigma)=\ell_{\ldr{a}\sigma-(\Omega_\cdot a)^*((\rho,\rho^*)\sigma)}$, which yields the desired equality since \begin{align*} \langle(\Omega_\cdot a)^*((\rho,\rho^*)\sigma),v\rangle &=\langle\Omega_v a, (\rho,\rho^*)\sigma\rangle=\langle(\rho,\rho^*)\Omega_v a, \sigma\rangle=\langle\nabla_a^{\rm bas}v-\ldr{a}v, \sigma\rangle\\ &=\rho(a)\langle v, \sigma\rangle-\langle\Skew_\Delta(v,(\rho,\rho^*)\sigma),a\rangle-\langle v, \nabla_a^{\rm bas}\sigma\rangle-\langle\ldr{a}v,\sigma\rangle\\ &=-\langle\Skew_\Delta(v,(\rho,\rho^*)\sigma),a\rangle-\langle v, \nabla_a^{\rm bas}\sigma\rangle+\langle v, \ldr{a}\sigma\rangle \end{align*} and consequently \begin{align*} \langle v, \ldr{a}\sigma-(\Omega_\cdot a)^*((\rho,\rho^*)\sigma)\rangle= \langle v, {\nabla_a^{\rm bas}}^*\sigma\rangle \end{align*} by \eqref{not_dual} in Proposition \ref{prop_basic_connections}. \end{enumerate} The remaining equalities follow from Proposition \ref{structure_of_TAT*A}. \end{proof} \begin{theorem}\label{subalgebroids} Consider a Lie algebroid $A$ and a Dorfman connection \[\Delta:\Gamma(TM\oplus A^*)\times\Gamma(A\oplus T^*M)\to \Gamma(A\oplus T^*M). \] Let $U\subseteq TM\oplus A^*$ and $K\subseteq A\oplus T^*M$ be subbundles. Then the sub-double vector bundle $D_{(U,K,[\Delta])}$ is a subalgebroid of $TA\oplus T^*A\to TM\oplus A^*$ over $U$ if and only if: \begin{enumerate} \item $(\rho,\rho^*)(K)\subseteq U$, \item $\nabla_a^{\rm bas}k\in\Gamma(K)$ for all $a\in\Gamma(A)$ and $k\in\Gamma(K)$, \item $\nabla_a^{\rm bas}u\in\Gamma(U)$ for all $a\in\Gamma(A)$ and $u\in\Gamma(U)$, \item $R_\Delta^{\rm bas}(a,b)u\in\Gamma(K)$ for all $u\in\Gamma(U)$, $a,b\in\Gamma(A)$. \end{enumerate} \end{theorem} \begin{proof} Assume that $D_{(U,K,[\Delta])}\to U$ is a subalgebroid of $TA\oplus T^*A\to TM\oplus A^*$.
Then we have $((\rho,\rho^*)k)^\uparrow\an{U}=\Theta(k^\dagger\an{U})\in\mathfrak{X}(U)$ and $\widehat{\nabla_a^{\rm bas}}\an{U}=\Theta(\tilde a\an{U})\in\mathfrak{X}(U)$ for all $a\in\Gamma(A)$ and $k\in\Gamma(K)$. This is the case if and only if $((\rho,\rho^*)k)^\uparrow(\ell_{l})\an{U}=0$ and $\widehat{\nabla_a^{\rm bas}}(\ell_{l})\an{U}=0$ for all $l\in\Gamma(U^\circ)$. Since $((\rho,\rho^*)k)^\uparrow(\ell_{l})=\pi^*\langle (\rho,\rho^*)k, l\rangle$ and $\widehat{\nabla_a^{\rm bas}}(\ell_{l})=\ell_{{\nabla_a^{\rm bas}}^*l}$, we find that $(\rho,\rho^*)k$ must be a section of $U$ and ${\nabla_a^{\rm bas}}^*l\in\Gamma(U^\circ)$ for all $l\in\Gamma(U^\circ)$. But the latter is equivalent to $\nabla_a^{\rm bas}u\in\Gamma(U)$ for all $u\in\Gamma(U)$. We have in the same manner $(\nabla_a^{\rm bas}k)^\dagger\an{U}=[\tilde a, k^\dagger]\an{U}\in\Gamma(D_{(U,K,[\Delta])})$ and $\bigl(\widetilde{[a,b]}-R_\Delta^{\rm bas}(a,b)^\dagger\bigr)\an{U}=[\tilde a, \tilde b]\an{U}\in\Gamma(D_{(U,K,[\Delta])})$ for all $a,b\in\Gamma(A)$ and $k\in\Gamma(K)$. But this is only the case if $\nabla_a^{\rm bas}k\in\Gamma(K)$ and, since $\widetilde{[a,b]}\an{U}\in\Gamma(D_{(U,K,[\Delta])})$, $R_\Delta^{\rm bas}(a,b)^\dagger\an{U}\in\Gamma(D_{(U,K,[\Delta])})$. The latter holds only if $R_\Delta^{\rm bas}(a,b)u\in\Gamma(K)$ for all $u\in\Gamma(U)$. \medskip The converse implication is shown in the same manner. \end{proof} \subsection{$\mathcal {LA}$-Dirac structures in $TA\oplus T^*A$} In this subsection and the next, we will study in more detail the triples $(U,K,[\Delta]_{U,K})$ associated to Dirac structures on $A$ that are at the same time Lie subalgebroids of $TA\oplus T^*A\to TM\oplus A^*$. We call such a Dirac structure an $\mathcal {LA}$-Dirac structure on $A$.
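Before turning to $\mathcal{LA}$-Dirac structures, it may help to collect the identities established above; writing $\partial:=(\rho,\rho^*)$, they are precisely the structure equations of the $2$-term representation up to homotopy mentioned in the remark following Theorem \ref{rep_up_to_hom} (this summary only restates \eqref{intertwined} and Proposition \ref{basic_curvatures}; nothing new is claimed):

```latex
% Structure equations of the 2-term representation up to homotopy,
% with \partial := (\rho,\rho^*) : A \oplus T^*M \to TM \oplus A^*.
% This block only collects \eqref{intertwined} and
% Proposition \ref{basic_curvatures}.
\[
\nabla^{\rm bas}_a\circ\partial=\partial\circ\nabla^{\rm bas}_a,
\qquad
R_{\nabla^{\rm bas}}=R^{\rm bas}_\Delta\circ\partial
\ \text{on } \Gamma(A\oplus T^*M),
\qquad
R_{\nabla^{\rm bas}}=\partial\circ R^{\rm bas}_\Delta
\ \text{on } \Gamma(TM\oplus A^*).
\]
```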
\begin{theorem}\label{morphic_Dirac_triples} Consider a Lie algebroid $A$ and a Dorfman connection \[\Delta:\Gamma(TM\oplus A^*)\times\Gamma(A\oplus T^*M)\to \Gamma(A\oplus T^*M). \] Let $U\subseteq TM\oplus A^*$ and $K\subseteq A\oplus T^*M$ be subbundles. Then $D_{(U,K,[\Delta])}$ is a Dirac structure in $TA\oplus T^*A\to A$ and a subalgebroid of $TA\oplus T^*A\to TM\oplus A^*$ over $U$ if and only if $(U,K,[\Delta])$ is an $\mathcal {LA}$-Dirac triple, i.e. if and only if: \begin{enumerate} \item $K=U^\circ$, \item $(\rho,\rho^*)(K)\subseteq U$, \item $(U,\pr_{TM}, \llbracket\cdot\,,\cdot\rrbracket_\Delta)$ is a Lie algebroid, \item $\nabla_a^{\rm bas}k\in\Gamma(K)$ for all $a\in\Gamma(A)$ and $k\in\Gamma(K)$, \item $R_\Delta^{\rm bas}(a,b)u\in\Gamma(K)$ for all $u\in\Gamma(U)$, $a,b\in\Gamma(A)$. \end{enumerate} \end{theorem} \begin{proof} This theorem follows from (2) in Corollary \ref{Dirac_triples} and Theorem \ref{subalgebroids}. Note that if $U=K^\circ$, $(\rho,\rho^*)K\subseteq U$ and $(U,\pr_{TM}, \llbracket\cdot\,,\cdot\rrbracket_\Delta)$ is a Lie algebroid, then $\nabla^{\rm bas}_a$ preserves $\Gamma(U)$ if and only if $\nabla^{\rm bas}_a$ preserves $\Gamma(K)$. So (2) and (3) in Theorem \ref{subalgebroids} become the same condition. \end{proof} \begin{remark} A triple $(U,K,[\Delta]_{U,K})$ satisfying (1)--(5) in Theorem \ref{morphic_Dirac_triples} will be called an $\mathcal {LA}$-Dirac triple on $A$. We have found a one-to-one correspondence between $\mathcal {LA}$-Dirac triples on $A$ and $\mathcal {LA}$-Dirac structures on the Lie algebroid $A$. \end{remark} \begin{example}\label{lie_algebroid_dual_3} Consider again Examples \ref{lie_algebroid_dual} and \ref{lie_algebroid_dual_2}. Assume that $A^*$ has itself also a Lie algebroid structure with anchor $\rho_*$ and bracket $[\cdot\,,\cdot]_*$. For simplicity, we switch the roles of $A$ and $A^*$ in Examples \ref{lie_algebroid_dual} and \ref{lie_algebroid_dual_2}.
Here, the second condition is equivalent to \begin{equation}\label{antisym} \rho_*\circ\rho^*=-\rho\circ\rho_*^* \end{equation} We assume in the following that this condition is satisfied. We have also: \[\Omega_{(\rho_*(\xi),\xi)}a=\left(\ldr{\xi}a-\rho_*^*\langle \nabla^*_\cdot \xi, a\rangle,\langle \nabla^*_\cdot \xi, a\rangle \right)-(0,\dr\langle\xi,a \rangle) =\left(\ip{\xi}\dr_Aa+\rho_*^*\langle \xi, \nabla_\cdot a\rangle,-\langle \xi, \nabla_\cdot a\rangle \right), \] for all $\xi\in\Gamma(A^*)$ and $a\in\Gamma(A)$ and so \begin{align*} \Omega_{(\rho,\rho^*)(-\rho_*^*\theta,\theta)}a&=\Omega_{(\rho_*(\rho^*\theta),\rho^*\theta)}a=\left(\ip{\rho^*\theta}\dr_Aa+\rho_*^*\langle \rho^*\theta, \nabla_\cdot a\rangle,-\langle \rho^*\theta, \nabla_\cdot a\rangle \right). \end{align*} for all $\theta\in\Omega^1(M)$. In particular, if $\theta=\dr\varphi$ for some $\varphi\in C^\infty(M)$, we get: \begin{align*} \nabla^{\rm bas}_a(-\rho_*^*\dr\varphi, \dr\varphi) &=\Omega_{(\rho,\rho^*)(-\rho_*^*\dr\varphi,\dr\varphi)}a+\ldr{a}(-\rho_*^*\dr\varphi,\dr\varphi)\\ &=\left(\ip{\rho^*\dr\varphi}\dr_Aa+\rho_*^*\langle \rho^*\dr\varphi, \nabla_\cdot a\rangle -[a,\rho_*^*\dr\varphi], -\langle \rho^*\dr\varphi, \nabla_\cdot a\rangle +\ldr{\rho(a)}(\dr\varphi) \right)\\ &=\left(\ip{\dr_{A^*}\varphi}\dr_Aa+\rho_*^*\langle \dr_{A^*}\varphi, \nabla_\cdot a\rangle -[a,\dr_A\varphi], -\langle \dr_{A^*}\varphi, \nabla_\cdot a\rangle +\dr(\rho(a)(\varphi)) \right). \end{align*} Thus, using the first condition, $\nabla^{\rm bas}_a(-\rho_*^*\dr\varphi, \dr\varphi)\in\Gamma(K)$ if and only if \[\langle \left(\ip{\dr_{A^*}\varphi}\dr_Aa+\rho_*^*\langle \dr_{A^*}\varphi, \nabla_\cdot a\rangle -[a,\dr_A\varphi], -\langle \dr_{A^*}\varphi, \nabla_\cdot a\rangle +\dr(\rho(a)(\varphi)) \right), (\rho_*\xi,\xi)\rangle=0 \] for all $\xi\in\Gamma(A^*)$. 
But this pairing equals \begin{align*} &\langle \left(\ip{\dr_{A^*}\varphi}\dr_Aa+\rho_*^*\langle \dr_{A^*}\varphi, \nabla_\cdot a\rangle -[a,\dr_A\varphi], -\langle \dr_{A^*}\varphi, \nabla_\cdot a\rangle +\dr(\rho(a)(\varphi)) \right), (\rho_*\xi,\xi)\rangle\\ =\,&-\langle \dr_{A^*}\varphi, \nabla_{\rho_*(\xi)} a\rangle+\rho_*(\xi)\rho(a)(\varphi) +(\dr_Aa)(\dr_{A^*}\varphi,\xi) +\langle \dr_{A^*}\varphi, \nabla_{\rho_*(\xi)} a\rangle -\langle\xi, [a,\dr_A\varphi]\rangle\\ =\,&\rho_*(\xi)\rho(a)(\varphi)+\rho_*(\dr_{A^*}\varphi)\langle \xi,a\rangle -\rho_*(\xi)\langle\dr_{A^*}\varphi, a\rangle-\langle a, [\dr_{A^*}\varphi,\xi]_*\rangle-\langle\xi, [a,\dr_A\varphi]\rangle\\ =\,&\langle(\rho_*\circ\rho^*)(\dr\varphi),\dr(\langle \xi,a\rangle)\rangle +\rho_*(\xi)\langle a,\dr_{A^*}\varphi\rangle-\langle\ldr{\xi}a,\dr_{A^*}\varphi\rangle-\rho(a)\langle\xi, \dr_A\varphi\rangle +\langle \ldr{a}\xi,\dr_A\varphi\rangle \\ =\,&\langle\dr\varphi,(\rho\circ\rho^*_*)\dr(\langle \xi,a\rangle)\rangle +\rho_*(\xi)\rho(a)(\varphi)-\rho(\ldr{\xi}a)(\varphi)-\rho(a)\rho_*(\xi)(\varphi) +\rho_*(\ldr{a}\xi)(\varphi)\\ =\,&\bigl([\rho_*(\xi),\rho(a)]+\rho_*(\ldr{a}\xi)-\rho(\ldr{\xi}a)+\rho(\dr_A\langle \xi,a\rangle)\bigr)(\varphi). \end{align*} Since $\varphi$ was arbitrary, we have shown that the fourth condition is satisfied if and only if \begin{equation}\label{compatibility_of_derivations} [\rho(a),\rho_*(\xi)]-\rho_*(\ldr{a}\xi)+\rho(\ldr{\xi}a)=\rho(\dr_A\langle \xi,a\rangle) \end{equation} for all $a\in\Gamma(A)$ and $\xi\in\Gamma(A^*)$. Thus, so far we have obtained \eqref{antisym} and \eqref{compatibility_of_derivations}, which are shown in \cite[Theorem 12.1.9]{Mackenzie05} to imply that $(A,A^*)$ is a Lie bialgebroid. We conclude by showing that the last condition, on the basic curvature, follows as well from \eqref{antisym} and \eqref{compatibility_of_derivations}.
Since $\Omega_{(\rho_*(\xi),\xi)}a=(\ip{\xi}\dr_Aa, 0)-(-\rho_*^*\langle \xi, \nabla_\cdot a\rangle,\langle \xi, \nabla_\cdot a\rangle)$ and $K=U^\circ$, we find \[\langle \Omega_{(\rho_*(\xi),\xi)}a, (\rho_*\eta,\eta)\rangle=(\dr_Aa)(\xi,\eta) \] for all $a\in\Gamma(A)$, $\xi,\eta\in\Gamma(A^*)$. The fourth condition together with the first identity in Proposition \ref{prop_basic_connections} and the first and third conditions imply that $\nabla_a^{\rm bas}u\in\Gamma(U)$ for all $u\in\Gamma(U)$. Hence, \begin{align*} \nabla_a^{\rm bas}(\rho_*(\xi),\xi)&=(\rho,\rho^*)\left(\ip{\xi}\dr_Aa+\rho_*^*\langle \xi, \nabla_\cdot a\rangle,-\langle \xi, \nabla_\cdot a\rangle\right)+\ldr{a}(\rho_*(\xi),\xi)\\ &=\left(\rho_*(-\rho^*\langle \xi, \nabla_\cdot a\rangle+\ldr{a}\xi),-\rho^*\langle \xi, \nabla_\cdot a\rangle+\ldr{a}\xi\right) \end{align*} and \[\langle \Omega_{\nabla^{\rm bas}_a(\rho_*\xi,\xi)}b, (\rho_*\eta,\eta)\rangle= (\dr_Ab)(-\rho^*\langle \xi, \nabla_\cdot a\rangle+\ldr{a}\xi,\eta) \] for all $a,b\in\Gamma(A)$, $\xi,\eta\in\Gamma(A^*)$.
We get hence that \begin{align*} \langle R_\Delta^{\rm bas}(a,b)(\rho_*\xi,\xi), (\rho_*\eta,\eta)\rangle =\,&(\dr_A[a,b])(\xi,\eta) -\langle\ldr{a}(\ip{\xi}\dr_Ab+\rho_*^*\langle \xi, \nabla_\cdot b\rangle,-\langle \xi, \nabla_\cdot b\rangle), (\rho_*\eta,\eta)\rangle\\ &\,+\langle\ldr{b}(\ip{\xi}\dr_Aa+\rho_*^*\langle \xi, \nabla_\cdot a\rangle,-\langle \xi, \nabla_\cdot a\rangle), (\rho_*\eta,\eta)\rangle\\ &\,-(\dr_Aa)(-\rho^*\langle \xi, \nabla_\cdot b\rangle+\ldr{b}\xi,\eta) +(\dr_Ab)(-\rho^*\langle \xi, \nabla_\cdot a\rangle+\ldr{a}\xi,\eta)\\ =\,&(\dr_A[a,b])(\xi,\eta) -\rho(a)(\dr_Ab(\xi,\eta))+\dr_Ab(\xi,\ldr{a}\eta)\\ &+\langle(\rho_*^*\langle \xi, \nabla_\cdot b\rangle,-\langle \xi, \nabla_\cdot b\rangle), \ldr{a}(\rho_*\eta,\eta)\rangle\\ &+\rho(b)(\dr_Aa(\xi,\eta))-\dr_Aa(\xi,\ldr{b}\eta) -\langle(\rho_*^*\langle \xi, \nabla_\cdot a\rangle,-\langle \xi, \nabla_\cdot a\rangle), \ldr{b}(\rho_*\eta,\eta)\rangle\\ &\,-(\dr_Aa)(-\rho^*\langle \xi, \nabla_\cdot b\rangle+\ldr{b}\xi,\eta) +(\dr_Ab)(-\rho^*\langle \xi, \nabla_\cdot a\rangle+\ldr{a}\xi,\eta)\\ =\,&(\dr_A[a,b]-[a,\dr_Ab]+[b,\dr_Aa])(\xi,\eta)\\ &+\langle \xi,\nabla_{\rho_*(\ldr{a}\eta)} b\rangle-\langle \xi, \nabla_{[\rho(a),\rho_*\eta]} b\rangle\\ &-\langle \xi,\nabla_{\rho_*(\ldr{b}\eta)} a\rangle+\langle \xi, \nabla_{[\rho(b),\rho_*\eta]} a\rangle\\ &\,+\rho_*\rho^*\langle \xi, \nabla_\cdot b\rangle(\langle a,\eta\rangle) -\rho_*\eta\langle \xi, \nabla_{\rho(a)} b\rangle+\langle a,[\eta,\rho^*\langle \xi, \nabla_\cdot b\rangle]_*\rangle\\ &\,-\rho_*\rho^*\langle \xi, \nabla_\cdot a\rangle(\langle b,\eta\rangle) +\rho_*\eta\langle \xi, \nabla_{\rho(b)} a\rangle-\langle b,[\eta, \rho^*\langle \xi, \nabla_\cdot a\rangle]_*\rangle\\ =\,&(\dr_A[a,b]-[a,\dr_Ab]+[b,\dr_Aa])(\xi,\eta)\\ &+\langle \xi,\nabla_{\rho_*(\ldr{a}\eta)} b\rangle-\langle \xi, \nabla_{[\rho(a),\rho_*\eta]} b\rangle\\ &-\langle \xi,\nabla_{\rho_*(\ldr{b}\eta)} a\rangle+\langle \xi, \nabla_{[\rho(b),\rho_*\eta]} a\rangle\\ 
&\,+\langle \xi, \nabla_{\rho(\dr_A\langle a,\eta\rangle)} b\rangle -\langle \xi, \nabla_{\rho(\ldr{\eta}a)} b\rangle\\ &\,-\langle \xi, \nabla_{\rho(\dr_A\langle b,\eta\rangle)} a\rangle +\langle \xi, \nabla_{\rho(\ldr{\eta}b)} a\rangle. \end{align*} By \eqref{compatibility_of_derivations}, the second and the fourth lines, and the third and the fifth lines cancel each other. We find hence that the last condition is satisfied if and only if $(A,A^*)$ is a Lie bialgebroid. Hence, $(U,K,[\Delta])$ is an $\mathcal{LA}$-Dirac triple if and only if $(A,A^*)$ is a Lie bialgebroid, and so the graph of $\pi_A$ is morphic and Dirac if and only if $(A,A^*)$ is a Lie bialgebroid. This recovers a result in \cite{MaXu00}. \end{example} \begin{example}\label{IM_2_form_3} In the situation of Examples \ref{IM_2_form} and \ref{IM_2_form_2}, assume furthermore that $E=:A$ is a Lie algebroid. To avoid confusion, we write $\nabla^{A}$ for the $A$-basic connections induced on $A$ and $TM$ by the Lie algebroid structure on $A$ and the connection $\nabla$, and $R_{\nabla}^A$ for the basic curvature associated to it. \medskip The second condition of the last theorem reads here \[(\rho,\rho^*)(a,\sigma(a))=(\rho(a),-\sigma^*\rho(a)) \] for all $a\in\Gamma(A)$, i.e. \[ \rho^*\circ\sigma=-\sigma^*\circ\rho. \] This is exactly the first axiom defining an IM-$2$-form $\sigma:A\to T^*M$ \cite{BuCrWeZh04,BuCaOr09}: \[\langle \sigma(a),\rho(b)\rangle=-\langle\rho(a), \sigma(b)\rangle \] for all $a,b\in\Gamma(A)$. We next compute $\nabla^{\rm bas}_a(b,\sigma(b))$. We have \[\Omega_{(X,-\sigma^*X)}a=(\nabla_Xa,-\ldr{X}\sigma(a)+\sigma(\nabla_Xa))+(0,\dr\langle\sigma(a),X\rangle) =(\nabla_Xa,-\ip{X}\dr\sigma(a)+\sigma(\nabla_Xa)) \] and as a consequence \[\nabla_a^{\rm bas}(b,\sigma(b))=\Omega_{(\rho,\rho^*)(b,\sigma(b))}a+\ldr{a}(b,\sigma(b)) =(\nabla_{\rho(b)}a+[a,b], \ldr{\rho(a)}\sigma(b)-\ip{\rho(b)}\dr\sigma(a)+\sigma(\nabla_{\rho(b)}a)).
\] We find hence that $\nabla_a^{\rm bas}(b,\sigma(b))\in\Gamma(K)$ if and only if $([a,b], \ldr{\rho(a)}\sigma(b)-\ip{\rho(b)}\dr\sigma(a))\in\Gamma(K)$, i.e. if and only if \[ \sigma([a,b])=\ldr{\rho(a)}\sigma(b)-\ip{\rho(b)}\dr\sigma(a). \] Since this is the second axiom in the definition of an IM-$2$-form, we recover the fact that the graph of $(\sigma^*\omega_{\rm can})^\flat:TA\to T^*A$ is a subalgebroid of $TA\oplus T^*A\to TM\oplus A^*$ over $U=\graphe(-\sigma^*)$ only if $\sigma:A\to T^*M$ is an IM-$2$-form. To show the equivalence, we show that the last condition in the last theorem follows here again from the other four. We have, for $a,b\in\Gamma(A)$ and $X,Y\in\mathfrak{X}(M)$: \begin{align*} \Omega_{(X,-\sigma^*X)}a &=-(0,\ip{X}\dr\sigma(a))+(\nabla_Xa,\sigma(\nabla_Xa)), \end{align*} hence \begin{align*} \ldr{b}\Omega_{(X,-\sigma^*X)}a&=-(0,\ldr{\rho(b)}\ip{X}\dr\sigma(a))+([b,\nabla_Xa],\ldr{\rho(b)}\sigma(\nabla_Xa))\\ &=-(0,\ip{[\rho(b),X]}\dr\sigma(a)+\ip{X}\ldr{\rho(b)}\dr\sigma(a))+([b,\nabla_Xa],\sigma([b,\nabla_Xa])+\ip{\rho(\nabla_Xa)}\dr\sigma(b)) \end{align*} and \begin{align*} \nabla_a^{\rm bas}(X,-\sigma^*X)=-(\rho,\rho^*)(0,\ip{X}\dr\sigma(a))+(\rho,\rho^*)(\nabla_Xa,\sigma(\nabla_Xa)) +\ldr{a}(X,-\sigma^*X) \end{align*} which equals $(\nabla_a^AX,-\sigma^*(\nabla_a^AX))$ since $\nabla_a^{\rm bas}u\in\Gamma(U)$ for all $u\in\Gamma(U)$.
We get hence \begin{align*} R_\Delta^{\rm bas}(a,b)(X,-\sigma^*X) =&\bigl( R_\nabla^A(a,b)(X), \sigma(R_\nabla^A(a,b)(X))\bigr)\\ &+(0,-\ip{X}\dr\sigma([a,b]) +\ip{[\rho(a),X]}\dr\sigma(b)+\ip{X}\ldr{\rho(a)}\dr\sigma(b)-\ip{\rho(\nabla_Xb)}\dr\sigma(a)\\ &\quad -\ip{[\rho(b),X]}\dr\sigma(a)-\ip{X}\ldr{\rho(b)}\dr\sigma(a)+\ip{\rho(\nabla_Xa)}\dr\sigma(b) +\ip{\nabla^A_bX}\dr\sigma(a)-\ip{\nabla^A_aX}\dr\sigma(b) )\\ =&\bigl( R_\nabla^A(a,b)(X), \sigma(R_\nabla^A(a,b)(X))\bigr)-(0,\ip{X}\dr(\sigma([a,b])-\ldr{\rho(a)}\sigma(b) +\ip{\rho(b)}\dr\sigma(a) ))\\ =&\bigl( R_\nabla^A(a,b)(X), \sigma(R_\nabla^A(a,b)(X))\bigr)\in\Gamma(K). \end{align*} \end{example} \bigskip We continue with the study of $\mathcal{LA}$-Dirac triples. We first observe that if $(U,K,\Delta)$ is a $\mathcal {LA}$-Dirac triple, then $K$ inherits a Lie algebroid structure. \begin{theorem}\label{K_algebroid} Consider an $\mathcal{LA}$-Dirac triple $(U,K,[\Delta]_{U,K})$. Then \[(K, \rho_K:=\rho\circ\pr_A, [\cdot\,,\cdot]_D\an{\Gamma(K)\times\Gamma(K)})\] is a Lie algebroid and the map $(\rho, \rho^*):K\to U$ is a Lie algebroid morphism. \end{theorem} We need the following two lemmas, which will also be useful later. \begin{lemma}\label{basic_like_eq} The equality \begin{equation}\label{basicLike_eqq} \nabla_{\sigma_1}^{\rm bas}\sigma_2=-[\sigma_2,\sigma_1]_D+\Delta_{(\rho,\rho^*)(\sigma_2)}\sigma_1 \end{equation} holds for all $\sigma_1,\sigma_2\in\Gamma(A\oplus T^*M)$. \end{lemma} \begin{proof}Write $\sigma_1=(a_1,\theta_1)$ and $\sigma_2=(a_2,\theta_2)\in\Gamma(A\oplus T^*M)$. Then: \begin{align*} \nabla_{\sigma_1}^{\rm bas}\sigma_2&=\nabla_{a_1}^{\rm bas}\sigma_2=\Omega_{(\rho,\rho^*)\sigma_2}a_1+\ldr{a_1}\sigma_2\\ &=\Delta_{(\rho,\rho^*)\sigma_2}\sigma_1-(0,\dr\langle\theta_2,\rho(a_1)\rangle)-(0,\ldr{\rho(a_2)}\theta_1)+([a_1,a_2], \ldr{\rho(a_1)}\theta_2)\\ &=\Delta_{(\rho,\rho^*)\sigma_2}\sigma_1-[\sigma_2, \sigma_1]_D.
\end{align*} \end{proof} \begin{lemma}\label{complicated_eq} Let $(U,K,[\Delta]_{U,K})$ be an $\mathcal{LA}$-Dirac triple. Then, for all $v\in\Gamma(TM\oplus A^*)$ and $\tau,\sigma\in\Gamma(A\oplus T^*M)$: \begin{equation}\label{complicated_eqq} \langle (\rho,\rho^*)\Delta_{v}\tau-\llbracket v,(\rho,\rho^*)\tau\rrbracket_\Delta-\nabla_{\tau}^{\rm bas}v,\sigma\rangle=\langle\nabla_{\sigma}^{\rm bas}v, \tau\rangle. \end{equation} \end{lemma} This yields the following corollary. \begin{corollary}\label{eq_for_morphism} Let $(U,K,[\Delta]_{U,K})$ be an $\mathcal{LA}$-Dirac triple. Then, for all $u\in\Gamma(U)$ and $k\in\Gamma(K)$: \begin{equation*} (\rho,\rho^*)\Delta_{u}k=\llbracket u,(\rho,\rho^*)k\rrbracket_\Delta+\nabla_{k}^{\rm bas}u. \end{equation*} \end{corollary} \begin{proof} By Lemma \ref{complicated_eq}, we have \[\langle (\rho,\rho^*)\Delta_{u}k-\llbracket u,(\rho,\rho^*)k\rrbracket_\Delta-\nabla_{k}^{\rm bas}u,\sigma\rangle=\langle\nabla_{\sigma}^{\rm bas}u, k\rangle\] for all $\sigma\in\Gamma(A\oplus T^*M)$. Since $\nabla_\sigma^{\rm bas}$ preserves $\Gamma(U)$ by Theorem \ref{morphic_Dirac_triples}, this vanishes. \end{proof} \begin{proof}[Proof of Lemma \ref{complicated_eq}] We write $\tau=(b,\theta)$ and $v=(X,\xi)$.
Then we have for any $\sigma=(a,\omega)\in\Gamma(A\oplus T^*M)$: \begin{align*} &\langle (\rho,\rho^*)\Delta_{v}\tau-\llbracket v,(\rho,\rho^*)\tau\rrbracket_\Delta-\nabla_{b}^{\rm bas}v, \sigma\rangle\nonumber\\ =&\langle \Delta_v\tau-\Omega_vb, (\rho,\rho^*)\sigma\rangle -\langle \ldr{b}v,\sigma\rangle +\langle \llbracket(\rho,\rho^*)\tau, v\rrbracket_\Delta, \sigma\rangle-\langle \Skew_\Delta((\rho,\rho^*)\tau, v), a\rangle\nonumber\\ =&\langle (0, \ldr{X}\theta+\dr\langle\xi, b\rangle), (\rho,\rho^*)\sigma\rangle -\langle \ldr{b}v,\sigma\rangle +\rho(b)\langle v, \sigma\rangle-\langle v, \Delta_{(\rho,\rho^*)\tau}\sigma\rangle-\langle \Skew_\Delta((\rho,\rho^*)\tau, v), a\rangle\nonumber \\ =&\langle \ldr{X}\theta, \rho(a)\rangle+\rho(a)\langle\xi, b\rangle +\langle v, \ldr{b}\sigma\rangle-\langle v, \nabla_{\sigma}^{\rm bas}\tau-[\sigma,\tau]_D+(0,\dr\langle\sigma,(\rho^*,\rho)\tau\rangle)\rangle\nonumber\\ &-\langle \Skew_\Delta((\rho,\rho^*)\tau, v), a\rangle\hspace*{7cm} \text{by Lemma \ref{basic_like_eq}}\nonumber\\ =&-\langle v, \nabla_{\sigma}^{\rm bas}\tau\rangle+\langle \ldr{X}\theta, \rho(a)\rangle+\rho(a)\langle\xi, b\rangle +\langle v, \ldr{b}\sigma\rangle\nonumber\\ &+\langle\xi,[a,b]\rangle +\langle \ldr{\rho(a)}\theta-\ldr{\rho(b)}\omega, X\rangle+X\langle\omega, \rho(b)\rangle -X\langle\sigma,(\rho^*,\rho)\tau\rangle-\langle \Skew_\Delta((\rho,\rho^*)\tau, v), a\rangle\nonumber\\ =&-\langle v, \nabla_{\sigma}^{\rm bas}\tau\rangle-\langle \theta, [X,\rho(a)]\rangle+\rho(a)\langle v, \tau\rangle +\langle v, \ldr{b}\sigma\rangle\nonumber\\ &+\langle\xi,[a,b]\rangle -\langle\theta, [\rho(a), X]\rangle-\langle \ldr{\rho(b)}\omega, X\rangle-\langle \Skew_\Delta((\rho,\rho^*)\tau, v), a\rangle\nonumber\\ =&-\langle v, \nabla_{\sigma}^{\rm bas}\tau\rangle+\rho(a)\langle v, \tau\rangle -\langle \Skew_\Delta((\rho,\rho^*)\tau, v), a\rangle=\langle\nabla_{\sigma}^{\rm bas}v, \tau\rangle.
\end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{K_algebroid}] By \eqref{Dorfman_skew-sym}, the equality $U=K^\circ$ and the inclusion $(\rho,\rho^*)(K)\subseteq U$, the bracket $[\cdot,\cdot]_D$ is skew-symmetric on sections of $K$. Choose $k_1=(a_1,\theta_1),k_2\in\Gamma(K)$. Then, by Lemma \ref{basic_like_eq}, we have \[[k_1, k_2]_D=\nabla_{a_1}^{\rm bas}k_2-\Delta_{(\rho,\rho^*)k_2}k_1. \] Since by Theorem \ref{morphic_Dirac_triples}, the sections $\nabla_{a_1}^{\rm bas}k_2$ and $\Delta_{(\rho,\rho^*)k_2}k_1$ are elements of $\Gamma(K)$, we find that $[k_1, k_2]_D\in\Gamma(K)$. The Jacobi identity follows directly from \eqref{Dorfman_Jacobi}. \bigskip We show next that $(\rho,\rho^*):K\to U$ is a Lie algebroid morphism. We have \[\rho_U\circ(\rho,\rho^*)=\pr_{TM}\circ(\rho,\rho^*)=\rho\circ\pr_A=\rho_K\] and, for all $k_1,k_2\in\Gamma(K)$, using Corollary \ref{eq_for_morphism}, Lemma \ref{basic_like_eq} and Proposition \ref{prop_basic_connections}: \begin{align*} \llbracket(\rho,\rho^*)k_1, (\rho,\rho^*)k_2\rrbracket_\Delta&=(\rho,\rho^*)\Delta_{(\rho,\rho^*)k_1}k_2-\nabla_{k_2}^{\rm bas}(\rho,\rho^*)k_1\\ &=(\rho,\rho^*)\left(\Delta_{(\rho,\rho^*)k_1}k_2-\nabla_{k_2}^{\rm bas}k_1\right)\\ &=(\rho,\rho^*)\left([k_1,k_2]_D\right). \end{align*} \end{proof} \begin{example} \begin{enumerate} \item In the case of a $\mathcal {LA}$-Dirac triple as in Example \ref{lie_algebroid_dual_3}, we have $K=\{(-\rho_*^*\theta,\theta)\mid \theta\in T^*M\}$ and $U=\{(\rho_*(\xi),\xi)\mid \xi\in A^*\}$. The Lie algebroid structure on $K$ is just the Lie algebroid defined by the graph of the anchor of the Lie algebroid $(T^*M)_\pi$ with $\pi^\sharp=-\rho\circ \rho_*^*=\rho_*\circ\rho^*$, and the fact that \[ (\rho,\rho^*):K\to U \] is a Lie algebroid morphism recovers the fact that, for a Lie bialgebroid, the map $\rho^*:T^*M\to A^*$ is a Lie algebroid morphism (see for instance \cite{Mackenzie05}, Proposition 12.1.13).
\item In the case of a $\mathcal {LA}$-Dirac triple as in Example \ref{IM_2_form_3}, we have $[(a,\sigma(a)), (b,\sigma(b))]_D=([a,b], \ldr{\rho(a)}\sigma(b)-\ip{\rho(b)}\dr\sigma(a)) =([a,b],\sigma([a,b]))\in\Gamma(K)$ by the results in Example \ref{IM_2_form_2}. Hence, $K$ is a Lie algebroid. The map $(\rho,\rho^*)$ sends $(a,\sigma(a))$ to $(\rho(a), -\sigma^*\rho(a))\in\Gamma(U)$ since $\rho^*\circ\sigma=-\sigma^*\circ\rho$. The fact that $(\rho,\rho^*)$ is a Lie algebroid morphism also follows from this equality and the fact that $\rho$ is a Lie algebroid morphism. \end{enumerate} \end{example} \section{The Manin pair associated to an $\mathcal{LA}$-Dirac structure} We conclude this paper by showing that the infinitesimal description of an $\mathcal{LA}$-Dirac structure on a Lie algebroid is a Manin pair \cite{BuIgSe09} $(C,U)$, where $C\to M$ is a Courant algebroid that is in a particular sense compatible with $A$ and such that the Dirac structure $U$ in $C$ can be seen as a subbundle of $TM\oplus A^*$ with anchor $\pr_{TM}$. We start by describing the representations up to homotopy describing the two sides of a Dirac algebroid. \subsection{The two VB-Lie algebroid structures on an $\mathcal{LA}$-Dirac structure}\label{rep_up_to} Let $A\to M$ be a Lie algebroid and $D_A$ a $\mathcal {LA}$-Dirac structure on $A$, hence corresponding to a $\mathcal{LA}$-Dirac triple $(U,K,[\Delta]_{U,K})$. The Lie algebroid structure on $D_{(U,K,[\Delta])}\to A$ is described by the representation up to homotopy \cite{GrMe10a} defined by the vector bundle morphism $\pr_A:K\to A$, the connections \[(\pr_A\circ\,\Omega):\Gamma(U)\times\Gamma(A)\to\Gamma(A) \] and \[\Delta:\Gamma(U)\times\Gamma(K)\to\Gamma(K),\] and the curvature $R_\Delta\in\Gamma(U^*\otimes U^*\otimes A^*\otimes K)$.
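For the reader's convenience, we recall schematically the structure equations that a $2$-term representation up to homotopy $(\partial\colon C_1\to C_0,\nabla^0,\nabla^1,\omega_2)$ is required to satisfy in \cite{GrMe10a} (in the situation above, $\partial=\pr_A$, $\nabla^0=\pr_A\circ\,\Omega$, $\nabla^1=\Delta$ and $\omega_2=R_\Delta$; this is only a reminder of the general formalism, not a new statement):
\begin{align*}
\partial\circ\nabla^1_u&=\nabla^0_u\circ\partial,\\
R_{\nabla^0}(u,v)&=\partial\circ\omega_2(u,v), \qquad R_{\nabla^1}(u,v)=\omega_2(u,v)\circ\partial,\\
\dr_{\nabla}\omega_2&=0.
\end{align*}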
The Lie algebroid structure of the other side $D_{(U,K,[\Delta])}\to U$ is described by the complex $(\rho,\rho^*):K\to U$ with the basic connections $\nabla^{\rm bas}:\Gamma(A)\times\Gamma(U)\to \Gamma(U)$ and $\nabla^{\rm bas}:\Gamma(A)\times\Gamma(K)\to \Gamma(K)$ and the basic curvature $R_\Delta^{\rm bas}\in\Gamma(A^*\otimes A^*\otimes U^*\otimes K)$. \begin{theorem}\label{difficult_computation} Let $A\to M$ be a Lie algebroid and $(K,U,\Delta)$ a $\mathcal {LA}$-Dirac triple. Then: \begin{enumerate} \item Condition (5) of Theorem \ref{morphic_Dirac_triples} is equivalent to: \begin{equation*}\label{bialgebroid1} \nabla^{\rm bas}_\sigma\llbracket u, v\rrbracket_\Delta-\llbracket \nabla^{\rm bas}_\sigma u, v\rrbracket_\Delta-\llbracket u, \nabla^{\rm bas}_\sigma v\rrbracket_\Delta+\nabla^{\rm bas}_{\Delta_u\sigma}v-\nabla^{\rm bas}_{\Delta_v\sigma}u=-(\rho,\rho^*)R_\Delta(u,v)\sigma \end{equation*} for all $u,v\in\Gamma(U)$ and $\sigma\in\Gamma(A\oplus T^*M)$. \item The equality \begin{align*}\label{bialgebroid2} &\Delta_u[\sigma_1,\sigma_2]_D-[\Delta_u\sigma_1,\sigma_2]_D-[\sigma_1,\Delta_u\sigma_2]_D +\Delta_{\nabla_{a_1}^{\rm bas}u}\sigma_2-\Delta_{\nabla_{a_2}^{\rm bas}u}\sigma_1+(0,\dr\langle\sigma_1,\nabla^{\rm bas}_{a_2}u\rangle)\\ =\,&-R_\Delta^{\rm bas}(a_1,a_2)u \end{align*} holds for all $\sigma_1,\sigma_2\in\Gamma(A\oplus T^*M)$ with $\pr_A(\sigma_i)=:a_i$ and all $u\in\Gamma(U)$. \end{enumerate} \end{theorem} The proof of these formulas is quite long, but straightforward. It can be found in Appendix \ref{proof_courant}. A double Lie algebroid is not only a double vector bundle with VB-algebroid structures on both sides such that the VB-structure maps on each side are Lie algebroid morphisms of the other side.
There is an additional, more complicated criterion that one needs to check, namely the existence of a Lie bialgebroid over the dual of the core of the double vector bundle \cite{Mackenzie11}. The two formulas in the last theorem seem to be part of the compatibility conditions on the two representations up to homotopy that are necessary for this Lie bialgebroid condition to be satisfied. \subsection{The Courant algebroid associated to an $\mathcal {LA}$-Dirac triple}\label{the_courant_alg} Assume that the triple $(U,K,\Delta)$ is an $\mathcal {LA}$-Dirac triple and consider the vector bundle \begin{equation}\label{def_of_C} C:=\frac{U\oplus(A\oplus T^*M)}{\operatorname{graph}(-(\rho,\rho^*)\an{K})}\to M. \end{equation} We will write $u\oplus \sigma$ for the class in $C$ of a pair $(u,\sigma)\in\Gamma(U\oplus(A\oplus T^*M))$. It is easy to check that \begin{equation}\label{pairing} \langle\langle u_1\oplus \sigma_1, u_2\oplus\sigma_2\rangle\rangle_C:=\langle u_1,\sigma_2\rangle+\langle u_2,\sigma_1\rangle+\langle\sigma_1,(\rho,\rho^*)\sigma_2\rangle \end{equation} defines a symmetric fiberwise pairing $\langle\langle\cdot\,,\cdot\rangle\rangle_C$ on $C$. \begin{proposition} The pairing $\langle\langle\cdot\,,\cdot\rangle\rangle_C$ is nondegenerate and the vector bundle $C$ is isomorphic to $U\oplus U^*$. \end{proposition} \begin{proof} The nondegeneracy of $\langle\langle\cdot\,,\cdot\rangle\rangle_C$ is easy to check. Define the maps $\iota: U\to C$, $u\mapsto u\oplus 0$ and $\pi: C\to U^*$, $\pi(u\oplus \sigma)(v)=\langle\langle u\oplus \sigma, v\oplus 0\rangle\rangle_C$ for all $u\oplus\sigma\in C$, $v\in U$. The map $\iota$ is obviously injective. Assume that $\pi(u\oplus\sigma)=0$. Then $\langle \sigma, v\rangle =0$ for all $v\in U$. Hence, $\sigma\in K$ and $u\oplus\sigma=(u+(\rho,\rho^*)\sigma)\oplus 0\in\operatorname{im}(\iota)$.
A dimension count yields that $\pi$ is surjective, and since the sequence \[ 0\rightarrow U\rightarrow C\rightarrow U^*\rightarrow 0\] is exact, we are done. \end{proof} Set \[c:C\to TM,\qquad c(u\oplus\sigma)=\pr_{TM}(u)+\rho\circ\pr_{A}(\sigma).\] \begin{theorem}\label{Courant_algebroid} Assume that $(U,K,[\Delta])$ is an $\mathcal {LA}$-Dirac triple. Then $C$ is a Courant algebroid with anchor $c$, pairing $\langle\langle\cdot\,,\cdot\rangle\rangle_C$ and bracket \[\llbracket\cdot\,,\cdot\rrbracket:\Gamma(C)\times\Gamma(C)\to\Gamma(C), \] \begin{align}\label{bracket_on_C} &\llbracket u_1\oplus\sigma_1, u_2\oplus\sigma_2\rrbracket\\ =\,&\left(\llbracket u_1,u_2\rrbracket_\Delta+\nabla_{\sigma_1}^{\rm bas}u_2-\nabla_{\sigma_2}^{\rm bas}u_1\right)\oplus\left( [\sigma_1,\sigma_2]_D+\Delta_{u_1}\sigma_2-\Delta_{u_2}\sigma_1 +(0,\dr\langle \sigma_1,u_2\rangle)\right).\nonumber \end{align} The map \[\mathcal D=\,c^*\circ \dr:C^\infty(M)\to \Gamma(C)\] is given by \[f\mapsto 0\oplus(0,\dr f). \] \end{theorem} The proof of this theorem can be found in the appendix. \begin{remark} \begin{enumerate} \item It is easy to check that the Courant algebroid structure only depends on the $(U,K)$-equivalence class of $\Delta$. \item This construction has some similarities with the one of matched pairs of Courant algebroids in \cite{GrSt12}. It would be interesting to understand the relation between the two constructions. \end{enumerate} \end{remark} \begin{definition} \begin{enumerate} \item A Manin pair over a manifold $M$ is a pair $(E,D)$ of vector bundles over $M$, such that $E$ is a Courant algebroid and $D$ a Dirac structure in $E$ \cite{BuIgSe09}. \item Let $(A\to M, \rho, [\cdot\,,\cdot])$ be a Lie algebroid.
An $A$-Manin pair is a Manin pair $(C,U)$ over $M$, where \begin{enumerate} \item $U\subseteq TM\oplus A^*$ is a subbundle such that $(\rho,\rho^*)(U^\circ)\subseteq U$, \item $C$ is the vector bundle \[\frac{U\oplus(A\oplus T^*M)}{\operatorname{graph}(-(\rho,\rho^*)\an{U^\circ})}\to M \] endowed with the anchor $c=\pr_{TM}\oplus\rho\circ\pr_A$ and the pairing $\langle\langle\cdot\,,\cdot\rangle\rangle_C$. \item The bracket of $C$ satisfies $\llbracket 0\oplus \sigma_1, 0\oplus\sigma_2\rrbracket=0\oplus[\sigma_1,\sigma_2]_D$ for all $\sigma_1,\sigma_2\in\Gamma(A\oplus T^*M)$. \end{enumerate} \end{enumerate} \end{definition} The final theorem and second main result of this paper is the following: \begin{theorem} Let $A$ be a Lie algebroid over a manifold $M$. Then there is a one-one correspondence between $\mathcal{LA}$-Dirac structures on $A$ and $A$-Manin pairs. \end{theorem} \begin{proof} We have already seen that $\mathcal{LA}$-Dirac structures on a Lie algebroid $A$ are in bijection with $\mathcal{LA}$-Dirac triples on $A$. We show here that there is a one-one correspondence between $\mathcal{LA}$-Dirac triples on $A$ and $A$-Manin pairs. We have seen in Theorem \ref{Courant_algebroid} how to associate an $A$-Manin pair to an $\mathcal{LA}$-Dirac triple on $A$. Conversely, choose an $A$-Manin pair $(C,U)$ and set $K:=U^\circ\subseteq A\oplus T^*M$. Then $U\simeq U\oplus 0$ is a Dirac structure in $C$ and there is hence an induced Dorfman connection \[\Delta^U:\Gamma(U)\times\Gamma(C/U)\to \Gamma(C/U). \] It is easy to verify that the map $C/U\to (A\oplus T^*M)/U^\circ$ sending $\overline{u\oplus\sigma}\in C/U$ to $\bar\sigma\in (A\oplus T^*M)/U^\circ$ is an isomorphism of vector bundles. Using the Leibniz equality in both arguments, extend the Lie algebroid bracket of $U$ to a dull algebroid structure on $TM\oplus A^*$ with anchor $\pr_{TM}$.
It is easy to see that the corresponding $TM\oplus A^*$-Dorfman connection $\Delta$ on $A\oplus T^*M$ satisfies $\Delta_uk\in\Gamma(K)$ for all $k\in\Gamma(K)$ and $u\in\Gamma(U)$, and that the induced $U$-Dorfman connection on the quotient $(A\oplus T^*M)/K$ is equal to $\Delta^U$. Furthermore, for two dull extensions of $[\cdot\,,\cdot]_U$, we find that the corresponding Dorfman connections are $(U,K)$-equivalent. Hence, we can write $\Delta^U=[\Delta]$. We check that $(U,K,\Delta^U)$ is an $\mathcal{LA}$-Dirac triple. For this, we only check that the Courant bracket of $C$ is defined as in \eqref{bracket_on_C}. The proof of Theorem \ref{Courant_algebroid} shows that all the conditions in Theorem \ref{morphic_Dirac_triples} are then satisfied. Choose $\tau=(a,\theta)\in\Gamma(A\oplus T^*M)$, $\sigma=(b,\omega)\in\Gamma(A\oplus T^*M)$ and $u=(X,\xi)\in\Gamma(U)$. We want to compute $v=v(\tau,u)\in\Gamma(U)$ such that $\llbracket u\oplus 0, 0\oplus \tau\rrbracket=v\oplus\Delta_u\tau$. Note first that \[\llbracket u\oplus 0, 0\oplus \tau\rrbracket+\llbracket 0\oplus \tau, u\oplus 0\rrbracket =\mathcal D\langle\langle u\oplus 0, 0\oplus \tau\rangle\rangle_C=\mathcal D\langle \tau,u\rangle. \] The map $\mathcal D:C^\infty(M)\to\Gamma(C)$ is given by \[\langle\langle u\oplus\sigma, \mathcal D \varphi\rangle\rangle_C= (\pr_{TM}(u)+\rho\circ\pr_A(\sigma))\varphi\] for all $u\oplus \sigma\in\Gamma(C)$, i.e. $\mathcal D\varphi=0\oplus(0,\dr\varphi)$.
Then, by the Leibniz property of the Courant algebroid bracket on $C$, we find \begin{align*} \rho(a)\langle u, \sigma\rangle&=c(0\oplus\tau)\langle\langle u\oplus 0, 0\oplus\sigma\rangle\rangle_C\\ &=\langle\langle\llbracket 0\oplus\tau, u\oplus 0\rrbracket, 0\oplus\sigma\rangle\rangle_C +\langle\langle u\oplus 0, \llbracket 0\oplus\tau, 0\oplus\sigma\rrbracket\rangle\rangle_C\\ &=\langle\langle (-v)\oplus(-\Delta_u\tau+(0,\dr\langle \tau, u\rangle)) , 0\oplus\sigma\rangle\rangle_C +\langle\langle u\oplus 0, 0\oplus(\ldr{a}\sigma+(0,-\ip{\rho(b)}\dr\theta))\rangle\rangle_C\\ &=-\langle v,\sigma\rangle-\langle (\rho,\rho^*)\Omega_ua,\sigma\rangle-\langle\ldr{X}\theta,\rho(b)\rangle+\rho(b)\langle\theta,X\rangle+\langle u, \ldr{a}\sigma\rangle -\dr\theta(\rho(b),X). \end{align*} This leads to \[ -\langle v,\sigma\rangle=\langle (\rho,\rho^*)\Omega_ua,\sigma\rangle+\langle \ldr{a}u, \sigma\rangle \] and, since $\sigma$ was arbitrary, we have shown that \[\llbracket u\oplus 0, 0\oplus \tau\rrbracket=(-\nabla^{\rm bas}_\tau u)\oplus\Delta_u\tau.\] \end{proof} \begin{example} Consider an $\mathcal{LA}$-Dirac triple as in Example \ref{lie_algebroid_dual_3}. The vector bundle morphisms \[\Psi: C\to A\oplus A^*, \qquad \Psi((\rho_*\xi,\xi)\oplus(a,\theta))=( a+\rho_*^*\theta, \xi+\rho^*\theta) \] and \[\Phi:A\oplus A^*\to C, \qquad \Phi(a,\xi)=(\rho_*\xi,\xi)\oplus(a,0) \] are well-defined and inverse to each other. A straightforward computation using the considerations in Examples \ref{lie_algebroid_dual} and \ref{lie_algebroid_dual_3} shows that $C$ is isomorphic to the Courant algebroid structure on $A\oplus A^*$ induced by the Lie bialgebroid $(A,A^*)$ \cite{LiWeXu97}\footnote{In \cite{LiWeXu97}, the authors work with the definition of Courant algebroids with antisymmetric brackets.
Here, we obtain the corresponding Courant algebroid with the bracket convention that we chose.} \end{example} \begin{example} Consider now an $\mathcal{LA}$-Dirac triple as in Example \ref{IM_2_form_3}. Again, we find that the vector bundle morphisms \[\Pi: C\to TM\oplus T^*M, \qquad \Pi((X,-\sigma^*X)\oplus(a,\theta))=(X+\rho(a), \theta+\sigma(a)) \] and \[\Theta:TM\oplus T^*M\to C, \qquad \Theta(X,\theta)=(X,-\sigma^*X)\oplus (0,\theta) \] are well-defined and inverse to each other. Here, one gets immediately that $C$ and the standard Courant algebroid $TM\oplus T^*M$ are isomorphic via these maps. \end{example}
\section{Introduction} Given a data set and a model with some unknown parameters, the inverse problem aims to find the values of the model parameters that best fit the data. In this work, in which we focus on systems of interacting elements, the inverse problem concerns the statistical inference of the underlying interaction network and of its coupling coefficients from observed data on the dynamics of the system. Versions of this problem are encountered in physics, biology (e.g., \cite{Balakrishnan11,Ekeberg13,Christoph14}), social sciences and finance (e.g., \cite{Mastromatteo12,yamanaka_15}), neuroscience (e.g., \cite{Schneidman06,Roudi09a,tyrcha_13}), just to cite a few, and are becoming more and more important due to the increase in the amount of data available from these fields.\\ \indent A standard approach used in statistical inference is to predict the interaction couplings by maximizing the likelihood function. This technique, however, requires the evaluation of the partition function that, in the most general case, involves a number of computations scaling exponentially with the system size. Boltzmann machine learning uses Monte Carlo sampling to compute the gradients of the log-likelihood, looking for stationary points \cite{Murphy12}, but this method is computationally manageable only for small systems. A series of faster approximations, such as naive mean-field, independent-pair approximation \cite{Roudi09a, Roudi09b}, inversion of TAP equations \cite{Kappen98,Tanaka98}, small correlations expansion \cite{Sessak09}, adaptive TAP \cite{Opper01}, adaptive cluster expansion \cite{Cocco12} or Bethe approximations \cite{Ricci-Tersenghi12, Nguyen12} have since been developed. These techniques take as input means and correlations of observed variables and most of them assume a fully connected graph as the underlying connectivity network, or expand around it by perturbative dilution.
In most cases, network reconstruction turns out to be inaccurate for small data sizes and/or when couplings are strong or, else, if the original interaction network is sparse.\\ \indent A further method, substantially improving performances for small data, is the so-called Pseudo-Likelihood Method (PLM) \cite{Ravikumar10}. In Ref. \cite{Aurell12} Aurell and Ekeberg performed a comparison between PLM and some of the just mentioned mean-field-based algorithms on the pairwise interacting Ising-spin ($\sigma = \pm 1$) model, showing how PLM performs significantly better, especially on sparse graphs and in the high-coupling limit, i.e., for low temperature. In this work, we aim at performing statistical inference on a model whose interacting variables are continuous $XY$ spins, i.e., $\sigma \equiv \left(\cos \phi,\sin \phi\right)$ with $\phi \in [0, 2\pi )$. The developed tools can, actually, also be straightforwardly applied to the $p$-clock model \cite{Potts52}, where the phase $\phi$ takes discretely equispaced $p$ values in the $2 \pi$ interval, $\phi_a = a 2 \pi/p$, with $a= 0,1,\dots,p-1$. The $p$-clock model, else called vector Potts model, gives a hierarchy of discretizations of the $XY$ model as $p$ increases. For $p=2$, one recovers the Ising model, for $p=4$ the Ashkin-Teller model \cite{Ashkin43}, for $p=6$ the ice-type model \cite{Pauling35,Baxter82} and for $p=8$ the eight-vertex model \cite{Sutherland70,Fan70,Baxter71}. It turns out to be very useful also for numerical implementations of the continuous $XY$ model. Recent analysis of the multi-body $XY$ model has shown that for a limited number of discrete phase values ($p\sim 16, 32$) the thermodynamic critical properties of the $p\to\infty$ $XY$ limit are promptly recovered \cite{Marruzzo15, Marruzzo16}.
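As a minimal numerical illustration of this hierarchy (a sketch of ours, not part of the original analysis; system size, couplings and the random configuration are arbitrary), one can compare the pairwise $XY$ energy of a configuration with its $p$-clock approximation, obtained by rounding each phase to the nearest multiple of $2\pi/p$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
# symmetric random couplings J_ik with zero diagonal
J = rng.normal(size=(N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)

phi = rng.uniform(0, 2 * np.pi, size=N)        # a continuous XY configuration

def energy(angles):
    """H = -sum_{ik} J_ik cos(phi_i - phi_k), as in the pairwise XY Hamiltonian."""
    return -np.sum(J * np.cos(angles[:, None] - angles[None, :]))

e_xy = energy(phi)
for p in (2, 4, 8, 16, 32):
    phi_p = np.round(phi / (2 * np.pi / p)) * (2 * np.pi / p)   # p-clock phases
    print(p, abs(energy(phi_p) - e_xy))        # discrepancy generally shrinks with p
```

For $p\sim 16, 32$ the clock energy is already very close to the continuous one, consistently with the critical properties being promptly recovered at moderate $p$.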
Our main motivation to study statistical inference is that these kinds of models have recently turned out to be rather useful in describing the behavior of optical systems, including standard mode-locking lasers \cite{Gordon02,Gat04,Angelani07,Marruzzo15} and random lasers \cite{Angelani06a,Leuzzi09a,Antenucci15a,Antenucci15b,Marruzzo16}. In particular, the inverse problem on the pairwise XY model analyzed here might be of help in recovering images from light propagated through random media. This paper is organized as follows: in Sec. \ref{sec:model} we introduce the general model and we discuss its derivation also as a model for light transmission through random scattering media. In Sec. \ref{sec:plm} we introduce the PLM with $l_2$ regularization and with decimation, two variants of the PLM introduced, respectively, in Refs. \cite{Wainwright06} and \cite{Aurell12} for the inverse Ising problem. Here, we analyze these techniques for continuous $XY$ spins and we test them on thermalized data generated by Exchange Monte Carlo numerical simulations of the original model dynamics. In Sec. \ref{sec:res_reg} we present the results related to the PLM-$l_2$. In Sec. \ref{sec:res_dec} the results related to the PLM with decimation are reported and its performance is compared to the PLM-$l_2$ and to a variational mean-field method analyzed in Ref. \cite{Tyagi15}. In Sec. \ref{sec:conc}, we outline concluding remarks and perspectives.
\section{The leading $XY$ model} \label{sec:model} The leading model we are considering is defined, for a system of $N$ angular $XY$ variables, by the Hamiltonian \begin{equation} \mathcal{H} = - \sum_{ik}^{1,N} J_{ik} \cos{\left(\phi_i-\phi_k\right)} \label{eq:HXY} \end{equation} The $XY$ model is well known in statistical mechanics, displaying important physical insights, starting from the Berezinskii-Kosterlitz-Thouless transition in two dimensions \cite{Berezinskii70,Berezinskii71,Kosterlitz72} and moving to, e.g., the transition of liquid helium to its superfluid state \cite{Brezin82} and the roughening transition of the interface of a crystal in equilibrium with its vapor \cite{Cardy96}. In the presence of disorder and frustration \cite{Villain77,Fradkin78} the model has been adopted to describe synchronization problems, as in the Kuramoto model \cite{Kuramoto75}, and in the theoretical modeling of Josephson junction arrays \cite{Teitel83a,Teitel83b} and arrays of coupled lasers \cite{Nixon13}. Besides several derivations and implementations of the model in quantum and classical physics, equilibrium or out of equilibrium, ordered or fully frustrated systems, Eq. (\ref{eq:HXY}), in its generic form, has found applications also in other fields. A rather fascinating example is the behavior of starling flocks \cite{Reynolds87,Deneubourg89,Huth90,Vicsek95, Cavagna13}. Our interest in the $XY$ model resides, though, in optics. Phasor and phase models with pairwise and multi-body interaction terms can, indeed, describe the behavior of electromagnetic modes in both linear and nonlinear optical systems in the analysis of problems such as light propagation and lasing \cite{Gordon02, Antenucci15c, Antenucci15d}.
As couplings are strongly frustrated, these models turn out to be especially useful for the study of optical properties in random media \cite{Antenucci15a,Antenucci15b}, as in the noticeable case of random lasers \cite{Wiersma08,Andreasen11,Antenucci15e}, and they might as well be applied to linear scattering problems, e.g., propagation of waves in opaque systems or disordered fibers. \subsection{A propagating wave model} We briefly mention a derivation of the model as a proxy for the propagation of light through random linear media. Scattering of light is responsible for obstructing our view and making objects opaque. Light rays, once they enter the material, only exit after being scattered multiple times within it. In such a disordered medium, both the direction and the phase of the propagating waves are random. Transmitted light yields a disordered interference pattern typically having low intensity, random phase and almost no resolution, called a speckle. Nevertheless, in recent years it has been realized that disorder is rather a blessing in disguise \cite{Vellekoop07,Vellekoop08a,Vellekoop08b}. Several experiments have made it possible to control the behavior of light and other optical processes in a given random disordered medium, by exploiting, e.g., the tools developed for wavefront shaping to control the propagation and the confinement of light \cite{Yilmaz13,Riboli14}. \\ \indent In a linear dielectric medium, light propagation can be described through a part of the scattering matrix, the transmission matrix $\mathbb{T}$, linking the outgoing to the incoming fields. Consider the case in which there are $N_I$ incoming channels and $N_O$ outgoing ones; we can indicate with $E^{\rm in,out}_k$ the input/output electromagnetic field phasors of channel $k$.
In the most general case, i.e., without making any particular assumptions on the field polarizations, each light mode and its polarization state can be represented by means of the $4$-dimensional Stokes vector. Each element $t_{ki}$ of $\mathbb{T}$, thus, is a $4 \times 4$ M{\"u}ller matrix. If, on the other hand, we know that the source is polarized and the observation is made on the same polarization, one can use a scalar model and adopt Jones calculus \cite{Goodman85,Popoff10a,Akbulut11}: \begin{eqnarray} E^{\rm out}_k = \sum_{i=1}^{N_I} t_{ki} E^{\rm in}_i \qquad \forall~ k=1,\ldots,N_O \label{eq:transm} \end{eqnarray} We recall that the elements of the transmission matrix are random complex coefficients \cite{Popoff10a}. For the case of completely unpolarized modes, we can also use a scalar model similar to Eq. \eqref{eq:transm}, but whose variables are the intensities of the outgoing/incoming fields, rather than the fields themselves.\\ In the following, for simplicity, we will consider Eq. (\ref{eq:transm}) as our starting point, where $E^{\rm out}_k$, $E^{\rm in}_i$ and $t_{ki}$ are all complex scalars. If Eq. \eqref{eq:transm} holds for any $k$, we can write: \begin{eqnarray} \int \prod_{k=1}^{N_O} dE^{\rm out}_k \prod_{k=1}^{N_O}\delta\left(E^{\rm out}_k - \sum_{j=1}^{N_I} t_{kj} E^{\rm in}_j \right) = 1 \nonumber \\ \label{eq:deltas} \end{eqnarray} Observed data are a noisy representation of the true values of the fields. Therefore, in inference problems it is statistically more meaningful to take that noise into account in a probabilistic way, rather than looking at the precise solutions of the exact equations (whose parameters are unknown). To this aim we can introduce Gaussian distributions whose zero-variance limits are the Dirac deltas in Eq. (\ref{eq:deltas}). Moreover, we consider the ensemble of all possible solutions of Eq. (\ref{eq:transm}) at given $\mathbb{T}$, looking at all configurations of input fields.
We, thus, define the function: \begin{eqnarray} Z &\equiv &\int_{{\cal S}_{\rm in}} \prod_{j=1}^{N_I} dE^{\rm in}_j \int_{{\cal S}_{\rm out}}\prod_{k=1}^{N_O} dE^{\rm out}_k \label{def:Z} \\ \times &&\prod_{k=1}^{N_O} \frac{1}{\sqrt{2\pi \Delta^2}} \exp\left\{-\frac{1}{2 \Delta^2}\left| E^{\rm out}_k -\sum_{j=1}^{N_I} t_{kj} E^{\rm in}_j\right|^2 \right\} \nonumber \end{eqnarray} We stress that the integral of Eq. \eqref{def:Z} is not exactly a Gaussian integral. Indeed, starting from Eq. \eqref{eq:deltas}, two constraints on the electromagnetic field intensities must be taken into account. The space of solutions is delimited by the total power ${\cal P}$ received by the system, i.e., ${\cal S}_{\rm in}: \{E^{\rm in} |\sum_k I^{\rm in}_k = \mathcal{P}\}$, also implying a constraint on the total amount of energy that is transmitted through the medium, i.e., ${\cal S}_{\rm out}:\{E^{\rm out} |\sum_k I^{\rm out}_k=c\mathcal{P}\}$, where the attenuation factor $c<1$ accounts for total losses. As we will see in more detail in the following, being interested in inferring the transmission matrix through the PLM, we can omit to explicitly include these constraints in Eq. \eqref{eq:H_J}, since they do not depend on $\mathbb{T}$ and, hence, do not add any information on the gradients with respect to the elements of $\mathbb{T}$.
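The Gaussian weight in the exponent of Eq. (\ref{def:Z}) can be made concrete with a short NumPy sketch (an illustration of ours: sizes, seed and the noise level $\Delta$ are arbitrary, and variable names are not from the original analysis). A random complex transmission matrix propagates a phase-only input of fixed total power, as in Eq. (\ref{eq:transm}); the cost is the squared residual divided by $2\Delta^2$, vanishing exactly on noiseless data:

```python
import numpy as np

rng = np.random.default_rng(1)
N_I = N_O = 8
# random complex transmission matrix t_{ki}
T = (rng.normal(size=(N_O, N_I)) + 1j * rng.normal(size=(N_O, N_I))) / np.sqrt(2 * N_I)

# phase-only input of fixed total power P (uniform unit intensities)
P = float(N_I)
E_in = np.exp(1j * rng.uniform(0, 2 * np.pi, N_I))       # |E_in_k| = 1 for all k

E_out_exact = T @ E_in                                   # noiseless propagation
Delta = 0.05                                             # noise strength
noise = Delta * (rng.normal(size=N_O) + 1j * rng.normal(size=N_O)) / np.sqrt(2)
E_out_obs = E_out_exact + noise                          # noisy observation

# quadratic cost in the exponent of Z: sum_k |E_out_k - (T E_in)_k|^2 / (2 Delta^2)
cost = np.sum(np.abs(E_out_obs - T @ E_in) ** 2) / (2 * Delta ** 2)
```

For noiseless data the cost is zero and the weight is maximal; as $\Delta\to 0$ the weight concentrates on the exact solutions, as discussed above.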
Taking the same number of incoming and outgoing channels, $N_I=N_O=N/2$, and ordering the input fields in the first $N/2$ mode indices and the output fields in the last $N/2$ indices, we can drop the ``in'' and ``out'' superscripts and formally write $Z$ as a partition function \begin{eqnarray} \label{eq:z} && Z =\int_{\mathcal S} \prod_{j=1}^{N} dE_j \left( \frac{1}{\sqrt{2\pi \Delta^2}} \right)^{N/2} \hspace*{-.4cm} \exp\left\{ -\frac{ {\cal H} [\{E\};\mathbb{T}] }{2\Delta^2} \right\} \\ &&{\cal H} [\{E\};\mathbb{T}] = - \sum_{k=1}^{N/2}\sum_{j=N/2+1}^{N} \left[E^*_j t_{jk} E_k + E_j t^*_{jk} E_k^* \right] \nonumber \\ &&\qquad\qquad \qquad + \sum_{j=N/2+1}^{N} |E_j|^2+ \sum_{k,l}^{1,N/2}E_k U_{kl} E_l^* \nonumber \\ \label{eq:H_J} &&\hspace*{1.88cm } = - \sum_{nm}^{1,N} E_n J_{nm} E_m^* \end{eqnarray} where ${\cal H}$ is a real-valued function by construction, we have introduced the effective input-input coupling matrix \begin{equation} U_{kl} \equiv \sum_{j=N/2+1}^{N}t^*_{lj} t_{jk} \label{def:U} \end{equation} and the whole interaction matrix reads (here $\mathbb{T} \equiv \{ t_{jk} \}$) \begin{equation} \label{def:J} \mathbb J\equiv \left(\begin{array}{ccc|ccc} \phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}\\ \phantom{()}&-\mathbb{U} \phantom{()}&\phantom{()}&\phantom{()}&{\mathbb{T}}&\phantom{()}\\ \phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}\\ \hline \phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}\\ \phantom{()}& \mathbb T^\dagger&\phantom{()}&\phantom{()}& - \mathbb{I} &\phantom{()}\\ \phantom{a}&\phantom{a}&\phantom{a}&\phantom{a}&\phantom{a}&\phantom{a}\\ \end{array}\right) \end{equation} Determining the complex amplitude configurations that minimize the {\em cost function} ${\cal H}$, Eq. (\ref{eq:H_J}), amounts to maximizing the overall probability distribution, which is peaked around the solutions of the transmission Eqs. (\ref{eq:transm}).
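The block structure of Eq. (\ref{def:J}) and the reality of ${\cal H}$ can be checked numerically. In the sketch below (assuming \texttt{numpy}; sizes are illustrative) we take $\mathbb U = \mathbb T^\dagger \mathbb T$, which is Hermitian by construction and consistent with Eq. (\ref{def:U}) up to index conventions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3  # N_I = N_O = n, so N = 2n modes in total (illustrative size)

T = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
U = T.conj().T @ T  # effective input-input coupling, Hermitian by construction

# Interaction matrix of Eq. (def:J): inputs in the first block, outputs in the last.
J = np.block([[-U, T], [T.conj().T, -np.eye(n)]])

# H = -sum_nm E_n J_nm E*_m is real for any field configuration, since J is Hermitian.
E = rng.standard_normal(2 * n) + 1j * rng.standard_normal(2 * n)
H = -np.sum(E[:, None] * J * E.conj()[None, :])
```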
As the variance $\Delta^2\to 0$, the initial set of Eqs. (\ref{eq:transm}) is eventually recovered. The function ${\cal H}$ thus plays the role of a Hamiltonian and $\Delta^2$ the role of a noise-inducing temperature. The exact numerical problem corresponds to the zero temperature limit of the statistical mechanical problem. Working with real, noisy data, though, a finite ``temperature'' allows for a better representation of the ensemble of solutions of the sets of equations of continuous variables. Now, we can express every phasor in Eq. \eqref{eq:z} as $E_k = A_k e^{\imath \phi_k}$. As a working hypothesis we will consider the intensities $A_k^2$ as either homogeneous or \textit{quenched} with respect to phases. The first condition holds, for instance, for the input intensities $|E^{\rm in}_k|$ produced by a phase-only spatial light modulator (SLM) with homogeneous illumination \cite{Popoff11}. By \textit{quenched} we mean, instead, that the intensity of each mode is the same for every solution of Eq. \eqref{eq:transm} at fixed $\mathbb T$. We stress that including intensities in the model does not preclude the inference analysis, but it is outside the focus of the present work and will be considered elsewhere. If all intensities are uniform in input and in output, this amounts to a constant rescaling of each of the four sectors of the matrix $\mathbb J$ in Eq. (\ref{def:J}), which does not change the properties of the matrices. For instance, if the original transmission matrix is unitary, so is the rescaled one, and the matrix $\mathbb U$ is diagonal. Otherwise, if the intensities are \textit{quenched}, i.e., they can be considered as constants in Eq. (\ref{eq:transm}), they are inhomogeneous with respect to phases.
The generic Hamiltonian element will, therefore, rescale as \begin{eqnarray} E_n J_{nm} E^*_m = J_{nm} A_n A_m e^{\imath (\phi_n-\phi_m)} \to J_{nm} e^{\imath (\phi_n-\phi_m)} \nonumber \end{eqnarray} and the properties of the original $J_{nm}$ components are not conserved in the rescaled ones. In particular, we no longer have any argument to set the rescaled $U_{nm}\propto \delta_{nm}$. Eventually, we end up with the $XY$ model with complex couplings, whose real-valued Hamiltonian is written as \begin{eqnarray} \mathcal{H}& = & - \frac{1}{2} \sum_{nm} J_{nm} e^{-\imath (\phi_n - \phi_m)} + \mbox{c.c.} \label{eq:h_im} \\ &=& - \frac{1}{2} \sum_{nm} \left[J^R_{nm} \cos(\phi_n - \phi_m)+ J^I_{nm}\sin (\phi_n - \phi_m)\right] \nonumber \end{eqnarray} where $J_{nm}^R$ and $J_{nm}^I$ are the real and imaginary parts of $J_{nm}$. Since $\mathbb J$ is Hermitian, the real part is symmetric, $J^R_{nm}=J^R_{mn}$, and the imaginary part is skew-symmetric, $J_{nm}^I=-J_{mn}^I$. \section{Pseudolikelihood Maximization} \label{sec:plm} The inverse problem consists in reconstructing the parameters $J_{nm}$ of the Hamiltonian, Eq. (\ref{eq:h_im}).
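The equivalence between the complex and the trigonometric forms of Eq. (\ref{eq:h_im}), and the reality of ${\cal H}$ for Hermitian $\mathbb J$, can be verified directly (a sketch assuming \texttt{numpy}; the couplings below are random illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6  # illustrative number of spins

# Hermitian couplings: symmetric real part, skew-symmetric imaginary part.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
J = (A + A.conj().T) / 2
np.fill_diagonal(J, 0)

phi = rng.uniform(0, 2 * np.pi, N)
d = phi[:, None] - phi[None, :]  # phase differences phi_n - phi_m

# Trigonometric form: -1/2 sum_nm [J^R cos(d) + J^I sin(d)]
H = -0.5 * np.sum(J.real * np.cos(d) + J.imag * np.sin(d))

# Complex form: -1/2 sum_nm J_nm e^{-i d_nm}; real because J is Hermitian.
H_c = -0.5 * np.sum(J * np.exp(-1j * d))
```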
Given a set of $M$ data configurations of $N$ spins, $\bm\sigma = \{ \cos \phi_i^{(\mu)},\sin \phi_i^{(\mu)} \}$, $i = 1,\dots,N$ and $\mu=1,\dots,M$, we want to \emph{infer} the couplings: \begin{eqnarray} \bm \sigma \rightarrow \mathbb{J} \nonumber \end{eqnarray} With this purpose in mind, in the rest of this section we implement the working equations for the techniques used. In order to test our methods, we generate the input data, i.e., the configurations, by Monte-Carlo simulations of the model. The joint probability distribution of the $N$ variables $\bm{\phi}\equiv\{\phi_1,\dots,\phi_N\}$ follows the Gibbs-Boltzmann distribution: \begin{equation}\label{eq:p_xy} P(\bm{\phi}) = \frac{1}{Z} e^{-\beta \mathcal{H}\left(\bm{\phi}\right)} \quad \mbox{ where } \quad Z = \int \prod_{k=1}^N d\phi_k e^{-\beta \mathcal{H}\left(\bm{\phi}\right)} \end{equation} where, with respect to the formalism of Eq. (\ref{def:Z}), $\beta=\left( 2\Delta^2 \right)^{-1}$. In order to stick to the usual statistical inference notation, in the following we will rescale the couplings by a factor $\beta / 2$: $\beta J_{ij}/2 \rightarrow J_{ij}$.
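A minimal Metropolis sampler of the distribution in Eq. (\ref{eq:p_xy}) can be sketched as follows (assuming \texttt{numpy}; for brevity we use a small ferromagnetic ring with real couplings and no parallel tempering, unlike the production runs described below):

```python
import numpy as np

rng = np.random.default_rng(4)
N, beta, sweeps = 8, 1.0, 300  # illustrative sizes

# Hermitian couplings on a ring: real nearest-neighbour bonds J = 1.
J = np.zeros((N, N), dtype=complex)
for k in range(N):
    J[k, (k + 1) % N] = J[(k + 1) % N, k] = 1.0

def site_energy(phi, k):
    # Energy terms of Eq. (eq:h_im) involving site k (the 1/2 cancels
    # because row and column terms contribute equally; J_kk = 0).
    d = phi[k] - phi
    return -np.sum(J[k].real * np.cos(d) + J[k].imag * np.sin(d))

phi = rng.uniform(0, 2 * np.pi, N)
samples = []
for _ in range(sweeps):
    for k in range(N):
        old, e_old = phi[k], site_energy(phi, k)
        phi[k] = rng.uniform(0, 2 * np.pi)      # propose a fresh angle
        dE = site_energy(phi, k) - e_old
        if dE > 0 and rng.random() >= np.exp(-beta * dE):
            phi[k] = old                        # Metropolis rejection
    samples.append(phi.copy())
samples = np.asarray(samples)
```

After a short burn-in, neighbouring phases align on average, as expected for a ferromagnetic chain at this temperature.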
The main idea of the PLM is to work with the conditional probability distribution of one variable $\phi_i$ given all other variables, $\bm{\phi}_{\backslash i}$: \begin{eqnarray} \nonumber P(\phi_i | \bm{\phi}_{\backslash i}) &=& \frac{1}{Z_i} \exp \left \{ {H_i^x (\bm{\phi}_{\backslash i}) \cos \phi_i + H_i^y (\bm{\phi}_{\backslash i}) \sin \phi_i } \right \} \\ \label{eq:marginal_xy} &=&\frac{e^{H_i(\bm{\phi}_{\backslash i}) \cos{\left(\phi_i-\alpha_i(\bm{\phi}_{\backslash i})\right)}}}{2 \pi I_0(H_i)} \end{eqnarray} where $H_i^x$ and $H_i^y$ are defined as \begin{eqnarray} H_i^x (\bm{\phi}_{\backslash i}) &=& \sum_{j (\neq i)} J^R_{ij} \cos \phi_j - \sum_{j (\neq i) } J_{ij}^{I} \sin \phi_j \phantom{+ h^R_i} \label{eq:26} \\ H_i^y (\bm{\phi}_{\backslash i}) &=& \sum_{j (\neq i)} J^R_{ij} \sin \phi_j + \sum_{j (\neq i) } J_{ij}^{I} \cos \phi_j \phantom{ + h_i^{I} }\label{eq:27} \end{eqnarray} and $H_i= \sqrt{(H_i^x)^2 + (H_i^y)^2}$, $\alpha_i = \arctan H_i^y/H_i^x$ and we introduced the modified Bessel function of the first kind: \begin{equation} \nonumber I_k(x) = \frac{1}{2 \pi}\int_{0}^{2 \pi} d \phi e^{x \cos{ \phi}}\cos{k \phi} \end{equation} Given $M$ observation samples $\bm{\phi}^{(\mu)}=\{\phi^\mu_1,\ldots,\phi^\mu_N\}$, $\mu = 1,\dots, M$, the pseudo-loglikelihood for the variable $i$ is given by the logarithm of Eq. (\ref{eq:marginal_xy}), \begin{eqnarray} \label{eq:L_i} L_i &=& \frac{1}{M} \sum_{\mu = 1}^M \ln P(\phi_i^{(\mu)}|\bm{\phi}^{(\mu)}_{\backslash i}) \\ \nonumber & =& \frac{1}{M} \sum_{\mu = 1}^M \left[ H_i^{(\mu)} \cos( \phi_i^{(\mu)} - \alpha_i^{(\mu)}) - \ln 2 \pi I_0\left(H_i^{(\mu)}\right)\right] \, . \end{eqnarray} The underlying idea of PLM is that an approximation of the true parameters of the model is obtained for values that maximize the functions $L_i$. The specific maximization scheme differentiates the different techniques. 
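As an illustration, the sketch below (assuming \texttt{numpy}; toy data of independent uniform phases, with an arbitrary target site and step size) evaluates $L_i$ of Eq. (\ref{eq:L_i}) and maximizes it by plain numerical-gradient ascent, including a small $\ell_2$ penalty of the kind discussed in the next subsection; \texttt{numpy.i0} provides the Bessel function $I_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, i = 4, 200, 0             # toy sizes and target site (illustrative)
lam, lr, eps = 0.01, 0.2, 1e-5  # l2 strength, step size, finite-difference step

# Toy samples: independent uniform phases (stand-in for Monte-Carlo data).
phis = rng.uniform(0, 2 * np.pi, (M, N))

def pseudo_loglik(JR_row):
    """L_i of Eq. (eq:L_i) for site i, real couplings only (J^I = 0)."""
    Hx = np.cos(phis) @ JR_row           # Eq. (26)
    Hy = np.sin(phis) @ JR_row           # Eq. (27)
    H = np.hypot(Hx, Hy)
    alpha = np.arctan2(Hy, Hx)
    return np.mean(H * np.cos(phis[:, i] - alpha) - np.log(2 * np.pi * np.i0(H)))

def plf(JR_row):
    """l2-regularized pseudolikelihood at site i."""
    return pseudo_loglik(JR_row) - lam * np.sum(JR_row ** 2)

JR_row = np.zeros(N)  # JR_row[i] stays 0: no self-coupling
for _ in range(100):
    g = np.zeros(N)
    for j in range(N):
        if j != i:
            e = np.zeros(N)
            e[j] = eps
            g[j] = (plf(JR_row + e) - plf(JR_row - e)) / (2 * eps)
    JR_row += lr * g  # gradient ascent on the concave PLF
```

For independent data the maximizer lies near zero; on real samples the same ascent recovers the couplings of site $i$, and the site-wise estimates are then symmetrized.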
\subsection{PLM with $l_2$ regularization} Especially for the case of sparse graphs, it is useful to add a regularizer, which prevents the maximization routine from moving towards high values of $J_{ij}$ and $h_i$ without converging. We adopt an $l_2$ regularization, so that the pseudolikelihood function (PLF) at site $i$ reads: \begin{equation}\label{eq:plf_i} {\cal L}_i = L_i - \lambda \sum_{i \neq j} \left(J_{ij}^R\right)^2 - \lambda \sum_{i \neq j} \left(J_{ij}^I\right)^2 \end{equation} with $\lambda>0$. Note that the value of $\lambda$ is chosen arbitrarily, though it must not be too large, lest the regularizer overwhelm $L_i$. The standard implementation of the PLM consists in maximizing each ${\cal L}_i$, for $i=1\dots N$, separately. The expected values of the couplings are then: \begin{equation} \{ J_{i j}^*\}_{j\in \partial i} := \mbox{arg max}_{ \{ J_{ij} \}} \left[{\cal L}_i\right] \end{equation} In this way, we obtain two estimates for the coupling $J_{ij}$: one from the maximization of ${\cal L}_i$, say $J_{ij}^{(i)}$, and another one from ${\cal L}_j$, say $J_{ij}^{(j)}$. Since the original Hamiltonian of the $XY$ model is Hermitian, we know that the real part of the couplings is symmetric while the imaginary part is skew-symmetric. The final estimate for $J_{ij}$ can then be obtained by averaging the two results: \begin{equation}\label{eq:symm} J_{ij}^{\rm inferred} = \frac{J_{ij}^{(i)} + \bar{J}_{ij}^{(j)}}{2} \end{equation} where $\bar{J}$ denotes the complex conjugate. It is worth noting that the pseudolikelihood $L_i$, Eq. \eqref{eq:L_i}, is characterized by the following properties: (i) the normalization term of Eq. \eqref{eq:marginal_xy} can be computed analytically, at odds with the {\em full} likelihood case, which in general requires a computational time scaling exponentially with the system size; (ii) the $\ell_2$-regularized pseudolikelihood defined in Eq. \eqref{eq:plf_i} is strictly concave (i.e.
it has a single maximizer) \cite{Ravikumar10}; (iii) it is consistent, i.e., if $M$ samples are generated by a model $P(\phi | J^*)$, the maximizer tends to $J^*$ for $M\rightarrow\infty$ \cite{besag1975}. Note also that (iii) guarantees that $|J^{(i)}_{ij}-J^{(j)}_{ij}| \rightarrow 0$ for $M\rightarrow \infty$. In Secs. \ref{sec:res_reg} and \ref{sec:res_dec} we report the results obtained and analyze the performance of the PLM on configurations taken from Monte-Carlo simulations of models whose details are known. \subsection{PLM with decimation} Even though the PLM with $l_2$-regularization extends the inference towards the low temperature region and the low sampling regime with better performance than mean-field methods, in some situations some couplings are overestimated and far from symmetric. Moreover, the technique carries the bias of the $l_2$ regularizer. To overcome these problems, Decelle and Ricci-Tersenghi introduced a new method \cite{Decelle14}, known as PLM + decimation: the algorithm maximizes the sum of the $L_i$, \begin{eqnarray} {\cal L}\equiv \frac{1}{N}\sum_{i=1}^N \mbox{L}_i \end{eqnarray} and then recursively sets to zero the couplings that are estimated to be very small. As long as only couplings that are unnecessary to fit the data are set to zero, ${\cal L}$ is not expected to change much. As decimation proceeds, a point is reached where ${\cal L}$ decreases abruptly, indicating that relevant couplings are being decimated and under-fitting is taking place. Let $x$ denote the fraction of non-decimated couplings.
To have a quantitative measure for the halt criterion of the decimation process, a tilted ${\cal L}$ is defined as \begin{eqnarray} \mathcal{L}_t &\equiv& \mathcal{L} - x \mathcal{L}_{\textup{max}} - (1-x) \mathcal{L}_{\textup{min}} \label{$t$PLF} \end{eqnarray} where \begin{itemize} \item $\mathcal{L}_{\textup{min}}$ is the pseudolikelihood of a model with independent variables; in the XY case, $\mathcal{L}_{\textup{min}}=-\ln{2 \pi}$. \item $\mathcal{L}_{\textup{max}}$ is the pseudolikelihood of the fully-connected model, maximized over all the $N(N-1)/2$ possible couplings. \end{itemize} At the first step, when $x=1$, $\mathcal{L}$ takes the value $\mathcal{L}_{\rm max}$ and $\mathcal{L}_t=0$. At the last step, for an empty graph, i.e., $x=0$, $\mathcal{L}$ takes the value $\mathcal{L}_{\rm min}$ and, hence, again $\mathcal{L}_t =0$. In the intermediate steps of the decimation procedure, as $x$ decreases from $1$ to $0$, one first observes that $\mathcal{L}_t$ increases linearly and then displays an abrupt decrease, indicating that from this point on relevant couplings are being decimated \cite{Decelle14}. In Fig. \ref{Jor1-$t$PLF} we give an instance of this behavior for the 2D short-range XY model with ordered couplings. We notice that the maximum point of $\mathcal{L}_t$ coincides with the minimum point of the reconstruction error, the latter defined as \begin{eqnarray}\label{eq:errj} \mbox{err}_J \equiv \sqrt{\frac{\sum_{i<j} (J^{\rm inferred}_{ij} -J^{\rm true}_{ij})^2}{N(N-1)/2}} \label{err} \end{eqnarray} We stress that the ${\cal L}_t$ maximum is obtained ignoring the underlying graph, while the err$_J$ minimum can only be evaluated once the true graph has been reconstructed. \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{Jor1_dec_tPLF_new.eps} \caption{The tilted pseudolikelihood ${\cal L}_t$ curve and the reconstruction error vs the number of decimated couplings for the 2D XY model with $N=64$ spins and ordered, real-valued $J$.
The peak of ${\cal L}_t$ coincides with the dip of the error.} \label{Jor1-$t$PLF} \end{figure} In the next sections we will show the results obtained on the $XY$ model, analyzing the performance of the two methods and comparing them with a mean-field method \cite{Tyagi15}. \section{Inferred couplings with PLM-$l_2$} \label{sec:res_reg} \subsection{$XY$ model with real-valued couplings} In order to obtain the vector of couplings $J_{ij}^{\rm inferred}$, the function $-\mathcal{L}_i$ is minimized using the vector of derivatives ${\partial \mathcal{L}_i}/{\partial J_{ij}}$. The process is repeated for all sites, thus obtaining a fully connected adjacency matrix. The results presented here are obtained with $\lambda = 0.01$. For the minimization we have used the MATLAB routine \emph{minFunc\_2012} \cite{min_func}. \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{Jor11_2D_l2_JR_soJR_TPJR} \caption{Top panels: instances of single site coupling reconstruction for the case of $N=64$ XY spins on a 2D lattice with ordered $J$ (left column) and bimodal distributed $J$ (right column). Bottom panels: sorted couplings.} \label{PL-Jor1} \end{figure} To produce the data by means of Monte Carlo numerical simulations, a system of $N=64$ spin variables is considered on a deterministic 2D lattice with periodic boundary conditions. Each spin thus has connectivity $4$, i.e., we expect to infer an adjacency matrix with $N c = 256$ couplings different from zero. The dynamics of the simulated model is based on the Metropolis algorithm, and parallel tempering \cite{earl05} is used to speed up the thermalization of the system. The thermalization is tested by looking at the average energy over logarithmic time windows, and the acquisition of independent configurations starts only after the system is well thermalized. For the values of the couplings we considered two cases: an ordered case, indicated in the figures as $J$ ordered (e.g., left column of Fig.
\ref{PL-Jor1}) where the couplings can take values $J_{ij}=0,J$, with $J=1$, and a quenched disordered case, indicated in the figures as $J$ disordered (e.g., right column of Fig. \ref{PL-Jor1}) where the couplings can also take negative values, i.e., $J_{ij}=0,J,-J$, with a certain probability. The results presented here were obtained with bimodal distributed $J$s: $P(J_{ij}=J)=P(J_{ij}=-J)=1/2$. The performance of the PLM turns out not to depend on $P(J)$. We recall that in Sec. \ref{sec:plm} we used the temperature-rescaled notation, i.e., $J_{ij}$ stands for $J_{ij}/T$. To analyze the performance of the PLM, in Fig. \ref{PL-Jor1} the inferred couplings, $\mathbb{J}^R_{\rm inf}$, are shown on top of the original couplings, $\mathbb{J}^R_{\rm true}$. The first figure (from top) in the left column shows the $\mathbb{J}^R_{\rm inf}$ (black) and the $\mathbb{J}^R_{\rm tru}$ (green) for a given spin at temperature $T/J=0.7$ and number of samples $M=1024$. The PLM appears to reconstruct the correct couplings, though zero couplings are always given a small non-zero inferred value. In the bottom-left panel of Fig. \ref{PL-Jor1}, both the $\mathbb{J}^R_{\rm{inf}}$ and the $\mathbb{J}^R_{\rm{tru}}$ are sorted in decreasing order and plotted on top of each other. We can clearly see that $\mathbb{J}^R_{\rm inf}$ reproduces the expected step function. Even though the jump is smeared, the difference between the inferred couplings corresponding to the set of non-zero couplings and those corresponding to the set of zero couplings can be clearly appreciated. Similarly, the plots in the right column of Fig. \ref{PL-Jor1} show the results obtained for the case with bimodal disordered couplings, for the same working temperature and number of samples. In particular, note that the algorithm infers half positive and half negative couplings, as expected. \begin{figure} \centering \includegraphics[width=1\linewidth]{Jor11_2D_l2_errJ_varT_varM} \caption{Reconstruction error $\mbox{err}_J$, cf. Eq.
(\ref{eq:errj}), plotted as a function of temperature (left) for three values of the number of samples $M$, and as a function of $M$ (right) for three values of the temperature in the ordered system, i.e., $J_{ij}=0,1$. The system size is $N=64$.} \label{PL-err-Jor1} \end{figure} In order to analyze the effects of the number of samples and of the temperature regime, we plot in Fig. \ref{PL-err-Jor1} the reconstruction error, Eq. (\ref{err}), as a function of temperature for three different sample sizes $M=64,128$ and $512$. The error is seen to rise sharply at low temperature, incidentally, in the ordered case, for $T<T_c \sim 0.893$, which is the Kosterlitz-Thouless transition temperature of the 2D XY model \cite{Olsson92}. However, we can see that if only $M=64$ samples are considered, $\mbox{err}_J$ remains high independently of the working temperature. In the right plot of Fig. \ref{PL-err-Jor1}, $\mbox{err}_J$ is plotted as a function of $M$ for three different working temperatures $T/J=0.4,0.7$ and $1.3$. As expected, $\mbox{err}_J$ decreases as $M$ increases. This effect was also observed with mean-field inference techniques on the same model \cite{Tyagi15}. To better understand the performance of the algorithms, in Fig. \ref{PL-varTP-Jor1} we show several True Positive (TP) curves obtained for various values of $M$ at three different temperatures $T$. When $M$ is large and/or the temperature is not too small, we are able to reconstruct correctly all the couplings present in the system (see bottom plots). The True Positive curve displays how many times the inference method finds a true link of the original network, as a function of the index of the vector of reconstructed couplings $J_{ij}^{\rm inf}$ sorted by absolute value. The index $n_{(ij)}$ labels the corresponding spin pairs $(ij)$.
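The construction of the TP curve amounts to a running average of the indicator $J^{\rm true}_{ij}\neq 0$ along the pairs sorted by $|J^{\rm inf}_{ij}|$; a minimal sketch with a toy $N=4$ network (assuming \texttt{numpy}; the coupling values are illustrative):

```python
import numpy as np

def tp_curve(J_inf, J_true):
    """TP[n]: fraction of truly nonzero couplings among the n pairs (i < j)
    with the largest |J_inf|; equivalent to the recursion given in the text."""
    iu = np.triu_indices(J_true.shape[0], 1)
    order = np.argsort(-np.abs(J_inf[iu]))
    hits = (J_true[iu][order] != 0).astype(float)
    return np.cumsum(hits) / np.arange(1, hits.size + 1)

# Toy network: true couplings only on the pairs (0,1) and (2,3).
J_true = np.zeros((4, 4))
J_true[0, 1] = J_true[1, 0] = 1.0
J_true[2, 3] = J_true[3, 2] = 1.0

# A good inference: true couplings recovered, zeros get small spurious values.
J_inf = J_true.copy()
J_inf[0, 2] = J_inf[2, 0] = 0.05
J_inf[1, 3] = J_inf[3, 1] = -0.03

tp = tp_curve(J_inf, J_true)  # stays at 1 until the true links are exhausted
```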
The TP curve is obtained as follows: first, the values $|J^{\rm inf}_{ij}|$ are sorted in descending order and the spin pairs $(ij)$ are ordered according to the sorting position of $|J^{\rm inf}_{ij}|$. Then, a cycle over the ordered set of pairs $(ij)$, indexed by $n_{(ij)}$, is performed, comparing with the original network coupling $J^{\rm true}_{ij}$ and verifying whether it is zero or not. The true positive curve is computed as \begin{equation} \mbox{TP}[n_{(ij)}]= \frac{\mbox{TP}\left[n_{(ij)}-1\right] (n_{(ij)}-1)+ 1 -\delta_{J^{\rm true}_{ij},0}}{n_{(ij)}} \end{equation} As long as $J^{\rm true}_{ij} \neq 0$, TP$=1$. As soon as the true coupling of a given $(ij)$ pair in the sorted list is zero, the TP curve departs from one. In our case, where the connectivity per spin of the original system is $c=4$ and there are $N=64$ spins, we know that we will have $256$ non-zero couplings. If the inverse problem is successful, hence, we expect a steep decrease of the TP curve once $n_{(ij)}=256$ is exceeded. In Fig. \ref{PL-varTP-Jor1} it is shown that, almost independently of $T/J$, the TP score improves as $M$ increases. Results are plotted for three different temperatures, $T=0.4,1$ and $2.2$, with increasing number of samples $M = 64, 128,512$ and $1024$ (clockwise). We can clearly appreciate the role of temperature when the size of the data-set is not very large: for small $M$, $T=0.4$ performs better. When $M$ is high enough (e.g., $M=1024$), instead, the TP curves do not appear to be strongly influenced by the temperature. \begin{figure}[t!]
\centering \includegraphics[width=1\linewidth]{Jor11_2D_l2_TPJR_varT_varM} \caption{TP curves for the 2D short-range ordered $XY$ model with $N=64$ spins at three different values of $T/J$ with increasing - clockwise from top - $M$.} \label{PL-varTP-Jor1} \end{figure} \subsection{$XY$ model with complex-valued couplings} For the complex $XY$ model we have to simultaneously infer two separate coupling matrices, $J^R_{i j}$ and $J^I_{i j}$. As before, a system of $N=64$ spins is considered on a 2D lattice. For the couplings we have considered both the ordered and the bimodal disordered case. In Fig. \ref{PL-Jor3}, a single row of the matrix $J$ (top) and the whole set of sorted couplings (bottom) are displayed for the ordered model (same legend as in Fig. \ref{PL-Jor1}), for the real part, $J^R$ (left column), and the imaginary part, $J^I$ (right column). \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{Jor3_l2_JRJI_soJRJI_TPJRJI} \caption{Results for the ordered complex XY model with $N=64$ spins on a 2D lattice. Top: instances of single site reconstruction for the real, JR (left column), and the imaginary, JI (right column), part of $J_{ij}$. Bottom: sorted values of JR (left) and JI (right).} \label{PL-Jor3} \end{figure} \section{PLM with Decimation} \label{sec:res_dec} \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{Jor1_dec_tPLF_varT_varM} \caption{Tilted pseudolikelihood, ${\cal L}_t$, plotted as a function of the number of decimated couplings. Top: Different ${\cal L}_t$ curves obtained for different values of $M$ plotted on top of each other. Here $T=1.3$. The black line indicates the expected number of decimated couplings, $x^*=(N (N-1) - N c)/2=1888$. As we can see, as $M$ increases, the maximum point of ${\cal L}_t$ approaches $x^*$. Bottom: Different ${\cal L}_t$ curves obtained for different values of $T$ with $M=2048$.
We can see that, for this value of $M$, no differences can be appreciated among the maximum points of the different ${\cal L}_t$ curves.} \label{var-$t$PLF} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{Jor1_dec_tPLF_peak_statistics_varM_prob.eps} \caption{Number of most likely decimated couplings, estimated from the maximum point of $\mathcal{L}_t$, as a function of the number of samples $M$. We can clearly see that the maximum point of $\mathcal{L}_t$ tends toward $x^*$, the expected number of zero couplings in the system.} \label{PLF_peak_statistics} \end{figure} For the ordered real-valued XY model we show in Fig. \ref{var-$t$PLF}, top panel, the outcome of the progressive decimation on the tilted pseudolikelihood, $\mathcal{L}_t$ of Eq. \eqref{$t$PLF}: from a fully connected lattice down to an empty lattice. The figure shows the behaviour of $\mathcal{L}_t$ for three different data sizes $M$. A clear data-size dependence of the maximum point of $\mathcal{L}_t$, signalling the most likely value for decimation, is shown. For small $M$ the most likely number of couplings is overestimated, and for increasing $M$ it tends to the true value, as displayed in Fig. \ref{PLF_peak_statistics}. In the bottom panel of Fig. \ref{var-$t$PLF} we display instead different $\mathcal{L}_t$ curves obtained for three different values of $T$. Even though the values of $\mathcal{L}_t$ decrease with increasing temperature, the most likely number of decimated couplings appears to be quite independent of $T$ for $M=2048$ samples. In Fig. \ref{fig:Lt_complex} we eventually display the tilted pseudolikelihood for a 2D network with complex-valued ordered couplings, where the decimation of the real and imaginary coupling matrices proceeds in parallel, that is, when a real coupling is small enough to be decimated its imaginary part is also decimated, and vice versa.
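The halt criterion is inexpensive to implement; the sketch below (assuming \texttt{numpy}; the value of $\mathcal{L}_{\rm max}$ is an arbitrary toy number) encodes the tilted pseudolikelihood and the reconstruction error defined above and checks the boundary values $\mathcal{L}_t=0$ at $x=1$ and $x=0$:

```python
import numpy as np

L_MIN = -np.log(2 * np.pi)  # pseudolikelihood of independent XY variables

def tilted_L(L, x, L_max, L_min=L_MIN):
    """L_t of the halt criterion: L minus the linear interpolation between
    the fully connected (x = 1) and the empty graph (x = 0) values."""
    return L - x * L_max - (1 - x) * L_min

def err_J(J_inf, J_true):
    """Reconstruction error over the N(N-1)/2 pairs i < j."""
    iu = np.triu_indices(J_true.shape[0], 1)
    return np.sqrt(np.mean((J_inf[iu] - J_true[iu]) ** 2))

# Boundary checks quoted in the text: L_t vanishes at both ends.
L_max = -1.0  # illustrative value of the fully connected maximum
assert tilted_L(L_max, 1.0, L_max) == 0.0
assert tilted_L(L_MIN, 0.0, L_max) == 0.0
```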
One can see that, though the separate errors for the real and imaginary parts differ in absolute value, they display the same dip, which coincides with the maximum point of $\mathcal{L}_t$. \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{Jor3_dec_tPLF_new} \caption{Tilted pseudolikelihood, ${\cal L}_t$, plotted with the reconstruction errors for the XY model with $N=64$ spins on a 2D lattice. These results refer to the case of ordered and complex valued couplings. The full (red) line indicates ${\cal L}_t$. The dashed (green) and the dotted (blue) lines show the reconstruction errors (Eq. \eqref{eq:errj}) obtained for the real and the imaginary couplings, respectively. We can see that both ${\rm err_{JR}}$ and ${\rm err_{JI}}$ have a minimum at $x^*$.} \label{fig:Lt_complex} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{Jor1_dec_JR_soJR_TPJR} \caption{XY model on a 2D lattice with $N=64$ sites and real valued couplings. The graphs show the inferred (dashed black lines) and true couplings (full green lines) plotted on top of each other. The left and right columns refer to the cases of ordered and bimodal disordered couplings, respectively. Top figures: single site reconstruction, i.e., one row of the matrix $J$. Bottom figures: couplings are plotted sorted in descending order.} \label{Jor1_dec} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{Jor3_dec_JRJI_soJRJI_TPJRJI} \caption{XY model on a 2D lattice with $N=64$ sites and ordered complex-valued couplings. The inferred and true couplings are plotted on top of each other. The left and right columns show the real and imaginary parts, respectively, of the couplings. Top figures refer to a single site reconstruction, i.e., one row of the matrix $J$. Bottom figures report the couplings sorted in descending order.} \label{Jor3_dec} \end{figure} \begin{figure}[t!]
\centering \includegraphics[width=1\linewidth]{MF_PL_Jor1_2D_TPJR_varT} \caption{True Positive curves obtained with the three techniques: PLM with decimation, (blue) dotted line, PLM with $l_2$ regularization, (green) dashed line, and mean-field, (red) full line. These results refer to real valued ordered couplings with $N=64$ spins on a 2D lattice. The temperature is here $T=0.7$ while the four graphs refer to different sample sizes: $M$ increases clockwise.} \label{MF_PL_TP} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{MF_PL_Jor1_2D_errJ_varT_varM} \caption{Variation of the reconstruction error, ${\rm err_J}$, with temperature as obtained with the three different techniques, see Fig. \ref{MF_PL_TP}, for four different sample sizes: clockwise from top, $M=512,1024, 2048$ and $4096$.} \label{MF_PL_err} \end{figure} Once the most likely network has been identified through the decimation procedure, we perform the same analysis displayed in Fig. \ref{Jor1_dec} for ordered and quenched disordered real-valued couplings, and in Fig. \ref{Jor3_dec} for complex-valued ordered couplings. In comparison to the results shown in Sec. \ref{sec:res_reg}, the PLM with decimation leads to noticeably cleaner results. In Figs. \ref{MF_PL_err} and \ref{MF_PL_TP} we compare the performance of the PLM with decimation with that of the PLM with $l_2$-regularization. These two techniques are also compared with a mean-field technique previously implemented on the same XY systems \cite{Tyagi15}. Concerning the network of connecting links, in Fig. \ref{MF_PL_TP} we compare the TP curves obtained with the three techniques. The results refer to the case of ordered, real-valued couplings, but similar behaviours were obtained for the other cases analysed. The four graphs correspond to different sample sizes, with $M$ increasing clockwise. When $M$ is high enough, all techniques reproduce the true network.
However, for lower values of $M$ the PLM with $l_2$ regularization and the PLM with decimation drastically outperform the previous mean-field technique. In particular, for $M=256$ the PLM techniques still reproduce the original network, while the mean-field method fails to find more than half of the couplings. When $M=128$, the network is clearly reconstructed only through the PLM with decimation, while the PLM with $l_2$ regularization underestimates the couplings. Furthermore, we notice that the PLM method with decimation is able to clearly infer the network of interactions even when $M=N$, suggesting that it could also be considered in the under-sampling regime $M<N$. In Fig. \ref{MF_PL_err} we compare the temperature behaviour of the reconstruction error. It can be observed that for all temperatures and for all sample sizes the reconstruction error, ${\rm err_J}$ (plotted here in log-scale), obtained with PLM+decimation is always smaller than that obtained with the other techniques. The temperature behaviour of ${\rm err_J}$ agrees with the one already observed for Ising spins in \cite{Nguyen12b} and for XY spins in \cite{Tyagi15} with a mean-field approach: ${\rm err_J}$ displays a minimum around $T\simeq 1$ and then increases as $T$ is lowered further; the error obtained with the PLM with decimation, however, remains several times smaller than the error estimated by the other methods. \section{Conclusions} \label{sec:conc} Different statistical inference methods have been applied to the inverse problem of the XY model. After a short review of techniques based on pseudo-likelihood and their formal generalization to the model, we have tested their performances against data generated by means of Monte Carlo numerical simulations of known instances with diluted, sparse interactions. The main outcome is that the best performances are obtained by means of the pseudo-likelihood method combined with decimation.
By putting to zero (i.e., decimating) very weak bonds, this technique turns out to be very precise for problems whose true underlying interaction network is sparse, i.e., where the number of couplings per variable does not scale with the number of variables. The PLM + decimation method is compared to the PLM with $\ell_2$ regularization and to a mean-field-based method. The quality of the network reconstruction is analyzed in all three approaches by looking at the overall sorted couplings and at the single site couplings, comparing them with the real network, and at the true positive curves. In the PLM + decimation method, moreover, the identification of the number of decimated bonds at which the tilted pseudo-likelihood is maximum allows for a precise estimate of the total number of bonds. Concerning this technique, it is also shown that the network with the most likely number of bonds is also the one with the least reconstruction error, where not only the presence of a bond is predicted but also its value. The behavior of the inference quality with temperature and with the size of the data samples is also investigated, basically confirming the low-$T$ behavior hinted at by Nguyen and Berg \cite{Nguyen12b} for the Ising model. In temperature, in particular, the reconstruction error curve displays a minimum at a low temperature, close to the critical point in those cases in which a critical behavior occurs, and a sharp increase as the temperature goes to zero. The decimation method, once again, appears to reduce this minimum of the reconstruction error by almost an order of magnitude with respect to the other methods. The techniques displayed and the results obtained in this work can be of use in any of the many systems whose theoretical representation is given by Eq. \eqref{eq:HXY} or Eq. \eqref{eq:h_im}, some of which are recalled in Sec. \ref{sec:model}.
In particular, a possible application can be the field of light-wave propagation through random media and the corresponding problem of reconstructing an object seen through an opaque medium or a disordered optical fiber \cite{Vellekoop07,Vellekoop08a,Vellekoop08b, Popoff10a,Akbulut11,Popoff11,Yilmaz13,Riboli14}.
\section{Introduction} Instrumental variable methods are powerful tools for causal inference with unmeasured treatment-outcome confounding. \citet{angrist1996identification} use potential outcomes to clarify the role of a binary instrumental variable in identifying causal effects. They show that the classic two-stage least squares estimator is consistent for the complier average causal effect under the monotonicity and exclusion restriction assumptions. Measurement error, which for discrete variables is also called misclassification, is common in empirical research. \citet{black2003measurement} study the returns to a possibly misreported education status. \citet{boatman2017estimating} study the effect of a self-reported smoking status. In those settings, the treatments are endogenous and mismeasured. \citet{chalak2017instrumental} considers the measurement error of an instrumental variable. \citet{pierce2012effect} consider a continuous treatment and either a continuous or a binary outcome with measurement errors. The existing literature often relies on modeling assumptions \citep{schennach2007instrumental, pierce2012effect}, auxiliary information \citep{black2003measurement, kuroki2014measurement, chalak2017instrumental,boatman2017estimating}, or repeated measurements of the unobserved variables \citep{battistin2014misreported}. With binary variables, we study all possible scenarios of measurement errors of the instrumental variable, treatment and outcome. Under non-differential measurement errors, we show that the measurement error of the instrumental variable does not result in bias, the measurement error of the treatment moves the estimate away from zero, and the measurement error of the outcome moves the estimate toward zero. This differs from the result for the total effect \citep{bross1954misclassification}, where measurement errors of the treatment and outcome both move the estimate toward zero. 
For non-differential measurement errors, we focus on qualitative analysis and nonparametric bounds. For differential measurement errors, we focus on sensitivity analysis. In both cases, we do not impose modeling assumptions or require auxiliary information. \section{Notation and assumptions for the instrumental variable estimation} For unit $i$, let $Z_i$ denote the treatment assigned, $D_i$ the treatment received, and $Y_i$ the outcome. Assume that $ (Z_i,D_i, Y_i)$ are all binary taking values in $\{0,1\}$. We ignore pretreatment covariates without loss of generality, because all the results hold within strata of covariates. We use potential outcomes to define causal effects. Define the potential values of the treatment received and the outcome as $D_{zi}$ and $Y_{zi}$ if unit $i$ were assigned to treatment arm $z$ ($z=0, 1$). The observed values are $D_i = Z_iD_{1i} + (1-Z_i)D_{0i}$ and $Y_i = Z_iY_{1i} + (1 - Z_i)Y_{0i}$. \citet{angrist1996identification} classify the units into four latent strata based on the joint values of $(D_{1i}, D_{0i} ) $. They define $U_i=a$ if $(D_{1i}, D_{0i} ) =(1,1)$, $U_i=n$ if $(D_{1i}, D_{0i} ) =(0,0)$, $U_i=c$ if $(D_{1i}, D_{0i} ) =(1,0)$, and $U_i=d$ if $(D_{1i}, D_{0i} ) =(0,1)$. The stratum with $U_i=c$ consists of compliers. For notational simplicity, we drop the subscript $i$. We invoke the following assumption for the instrumental variable model. \begin{assumption} \label{asm:iv} Under the instrumental variable model, (a) $Z \mbox{$\perp\!\!\!\perp$} (Y_1, Y_0, D_1, D_0)$, (b) $D_1 \geq D_0$, and (c) $\pr(Y_1=1 \mid U=u) =\pr(Y_0=1 \mid U=u)$ for $u=a$ and $n$. \end{assumption} Assumption~\ref{asm:iv}(a) holds in randomized experiments. 
Assumption~\ref{asm:iv}(b) means that the treatment assigned has a monotonic effect on the treatment received for all units, which rules out the latent stratum $U=d.$ Assumption~\ref{asm:iv}(c) implies that the treatment assigned affects the outcome only through the treatment received, which is called the exclusion restriction. Define $\textsc{RD}_{R\mid Q}= \pr(R=1 \mid Q=1)-\pr(R=1 \mid Q=0)$ as the risk difference of $Q$ on $R$. For example, $\textsc{RD}_{YD\mid (1-Z)}= \pr(Y=1,D=1 \mid Z=0)-\pr(Y=1,D=1 \mid Z=1)$. \citet{angrist1996identification} show that the complier average causal effect \begin{eqnarray*} \CACE \equiv E(Y_1-Y_0\mid U=c) =\frac{\pr(Y=1 \mid Z=1)-\pr(Y=1 \mid Z=0)}{\pr(D=1 \mid Z=1)-\pr(D=1 \mid Z=0)} =\frac{\textsc{RD}_{Y\mid Z}}{\textsc{RD}_{D\mid Z}} \end{eqnarray*} can be identified by the ratio of the risk differences of $Z$ on $Y$ and $D$ if $\textsc{RD}_{D\mid Z}\neq 0.$ \section{Non-differential measurement errors}\label{sec::nondiffmeasure} Let $(Z',D',Y')$ denote the possibly mismeasured values of $(Z,D,Y)$. Without the true variables, we use the naive estimator based on the observed variables to estimate $\CACE$: \begin{eqnarray*} \CACE' \equiv \frac{\pr(Y'=1 \mid Z'=1)-\pr(Y'=1 \mid Z'=0)}{\pr(D'=1 \mid Z'=1)-\pr(D'=1 \mid Z'=0)}= \frac{\textsc{RD}_{Y'\mid Z'}}{\textsc{RD}_{D'\mid Z'}}. \end{eqnarray*} \begin{assumption} \label{asm:nondif} All measurement errors are non-differential: $\pr(D' \mid D, Z',Z,Y,Y')=\pr(D' \mid D)$, $\pr(Y' \mid Y, Z,Z',D,D')=\pr(Y' \mid Y)$, and $\pr(Z' \mid Y, Y',Z,D,D')=\pr(Z' \mid Z) .$ \end{assumption} Under Assumption \ref{asm:nondif}, the measurements of the variables do not depend on other variables conditional on the unobserved true variables. 
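As a concrete illustration, the identification formula above is the sample Wald ratio. A minimal sketch (the data and the helper name `risk_difference` are invented for illustration):

```python
import numpy as np

def risk_difference(r, q):
    """RD_{R|Q} = P(R=1 | Q=1) - P(R=1 | Q=0) for binary arrays."""
    r, q = np.asarray(r), np.asarray(q)
    return r[q == 1].mean() - r[q == 0].mean()

def wald_estimate(z, d, y):
    """The ratio RD_{Y|Z} / RD_{D|Z}: the sample analogue of CACE,
    i.e., the two-stage least squares estimate with a binary instrument."""
    return risk_difference(y, z) / risk_difference(d, z)

# Toy data: half of the treated arm complies, and half of the compliers
# respond to treatment, so the Wald ratio is 0.25 / 0.5 = 0.5.
z = [1, 1, 1, 1, 0, 0, 0, 0]
d = [1, 1, 0, 0, 0, 0, 0, 0]
y = [1, 0, 0, 0, 0, 0, 0, 0]
```

The ratio is undefined when $\textsc{RD}_{D\mid Z}=0$, mirroring the condition stated above.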
We use the sensitivities and specificities to characterize the non-differential measurement errors: \begin{align*} \textsc{SN}_D&=\pr(D'=1 \mid D=1),\quad & \textsc{SP}_D&=\pr(D'=0 \mid D=0),\quad & r_D &= \textsc{SN}_D+\textsc{SP}_D-1 \leq 1, \\ \textsc{SN}_Y&=\pr(Y'=1 \mid Y=1) , \quad &\textsc{SP}_Y&=\pr(Y'=0 \mid Y=0),\quad &r_Y &= \textsc{SN}_Y+\textsc{SP}_Y-1 \leq 1. \end{align*} Without measurement errors, $r_D = r_Y = 1.$ Assume $r_D >0$ and $r_Y >0$, which means that the observed variable is informative for the true variable, i.e., the observed variable is more likely to be 1 if the true variable is 1 rather than 0. We state a simple relationship between $\CACE$ and $\CACE'$. \begin{theorem} \label{thm:cace} Under Assumptions~\ref{asm:iv} and~\ref{asm:nondif}, $\CACE = \CACE' \times r_D/r_Y$. \end{theorem} Theorem \ref{thm:cace} shows that measurement errors of $Z$, $D$ and $Y$ have different consequences. The measurement error of $Z$ does not bias the estimate. The measurement error of $D$ biases the estimate away from zero. The measurement error of $Y$ biases the estimate toward zero. In contrast, measurement errors of the treatment and outcome both bias the estimate toward zero in the total effect estimation \citep{bross1954misclassification}. Moreover, the measurement errors of $D$ and $Y$ have mutually independent influences on the estimation of $\CACE$. Theorem \ref{thm:cace} also shows that $\CACE$ and $\CACE'$ have the same sign when $r_D>0$ and $r_Y > 0$. \section{Bounds on $\CACE$ with non-differential measurement errors} \label{sec::nondiff} When $D$ or $Y$ is non-differentially mismeasured, we can identify $\CACE$ if we know $r_D$ and $r_Y$. Without knowing them, we cannot identify $\CACE$. Fortunately, the observed data still provide some information about $\CACE$. We can derive its sharp bounds based on the joint distribution of the observed data. We first introduce a lemma. 
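The identity $\CACE = \CACE' \times r_D/r_Y$ can be verified at the population level. The sketch below builds a hypothetical population satisfying Assumption 1 (all numbers invented), passes the arm-wise probabilities through non-differential misclassification channels, and recovers the true effect from the naive one:

```python
# Population-level check of CACE = CACE' * r_D / r_Y under non-differential
# measurement errors of D and Y.  All numbers below are invented.

def mismeasure(p1, sn, sp):
    """P(V'=1) after a non-differential binary channel with sensitivity sn
    and specificity sp, given P(V=1) = p1."""
    return sn * p1 + (1 - sp) * (1 - p1)

# Strata probabilities (always-takers, never-takers, compliers) and
# outcome rates P(Y_z = 1 | U = u).
pa, pn, pc = 0.2, 0.3, 0.5
pY = {('a', 0): 0.6, ('a', 1): 0.6, ('n', 0): 0.1, ('n', 1): 0.1,
      ('c', 0): 0.3, ('c', 1): 0.8}
cace = pY[('c', 1)] - pY[('c', 0)]              # true CACE = 0.5

pD = {1: pa + pc, 0: pa}                        # P(D=1 | Z=z), monotonicity
pYz = {z: pa * pY[('a', z)] + pn * pY[('n', z)] + pc * pY[('c', z)]
       for z in (0, 1)}                         # P(Y=1 | Z=z)

sn_d, sp_d, sn_y, sp_y = 0.9, 0.95, 0.85, 0.9
r_d, r_y = sn_d + sp_d - 1, sn_y + sp_y - 1

rd_dp = mismeasure(pD[1], sn_d, sp_d) - mismeasure(pD[0], sn_d, sp_d)
rd_yp = mismeasure(pYz[1], sn_y, sp_y) - mismeasure(pYz[0], sn_y, sp_y)
cace_naive = rd_yp / rd_dp                      # the naive CACE'
```

Multiplying `cace_naive` by `r_d / r_y` returns exactly `cace`, illustrating that the two misclassification channels act through separate multiplicative factors.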
\begin{lemma} \label{lem:bound} Define $\textsc{SN}'_Z= \pr(Z=1\mid Z'=1)$ and $\textsc{SP}'_Z= \pr(Z=0\mid Z'=0)$. Under Assumption 1, given the values of $(\textsc{SN}'_Z,\textsc{SP}'_Z,\textsc{SN}_D,\textsc{SP}_D,\textsc{SN}_Y,\textsc{SP}_Y)$, there is a one-to-one mapping between the set $\{\pr(Z=z), \pr(U=u),\pr(Y_z=1\mid U=u) : z=0,1; u=a,n,c\}$ and the set $\{ \pr(Z'=z', D' = d', Y'=y'): z', d', y' = 0,1 \}$. \end{lemma} Lemma~\ref{lem:bound} allows for simultaneous measurement errors of more than one element of $(Y, Z, D)$. From Lemma~\ref{lem:bound}, given the sensitivities and specificities, we can recover the joint distribution of $(Y_z,U,Z)$ for $z=0,1$. Conversely, the conditions $\{ 0\leq \pr(Z=z) \leq 1, 0\leq \pr(U=u) \leq 1, 0\leq \pr(Y_z=1\mid U=u) \leq 1 : z=0,1;u=a,n,c\}$ induce sharp bounds on the sensitivities and specificities, which further induce sharp bounds on $\CACE$. This is the general strategy that we use to derive sharp bounds on $\CACE$. First, we discuss the measurement error of $Y$. \begin{theorem} \label{thm:bound:Y} Suppose that $\CACE '\geq 0$ and only $Y$ is mismeasured with $r_Y>0$. Under Assumptions~\ref{asm:iv} and~\ref{asm:nondif}, the sharp bounds are $\textsc{SN}_Y \geq M_Y$, $\textsc{SP}_Y \geq 1- N_Y$, and $\CACE ' \leq \CACE \leq \CACE '/(M_Y- N_Y)$, where $M_Y$ and $N_Y$ are the maximum and minimum values of the set \begin{eqnarray*} \left \{\pr(Y'=1\mid D=0, Z=1),\pr(Y'=1 \mid D=1, Z=0), \frac{\textsc{RD}_{Y'D\mid Z}}{\textsc{RD}_{D\mid Z}}, \frac{\textsc{RD}_{Y'(1-D)\mid (1-Z)}}{\textsc{RD}_{D\mid Z}}\right\}. \end{eqnarray*} \end{theorem} We can obtain the bounds under $\CACE ' <0$ by replacing $Y$ with $1-Y$ and $Y'$ with $1-Y'$ in Theorem \ref{thm:bound:Y}. Thus, we only consider $\CACE ' \geq 0$ in Theorem \ref{thm:bound:Y} and the theorems in later parts of the paper. 
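The quantities $M_Y$ and $N_Y$ can be read off directly from the observed joint distribution of $(Z, D, Y')$. A sketch transcribing the formulas (the population below is invented; its true complier effect is $0.5$, with $\textsc{SN}_Y=0.85$ and $\textsc{SP}_Y=0.9$):

```python
import numpy as np

def cace_bounds_y(p):
    """Sharp bounds on CACE when only Y is non-differentially mismeasured,
    assuming CACE' >= 0: a direct transcription of the stated formulas.
    p[z, d, y] is the observed joint distribution of (Z, D, Y')."""
    pz = p.sum(axis=(1, 2))                               # P(Z=z)
    cond = p / p.sum(axis=2, keepdims=True)               # P(Y'=y | Z=z, D=d)
    rd_d = p[1, 1].sum() / pz[1] - p[0, 1].sum() / pz[0]  # RD_{D|Z}
    rd_y = p[1, :, 1].sum() / pz[1] - p[0, :, 1].sum() / pz[0]  # RD_{Y'|Z}
    vals = [cond[1, 0, 1],                                # P(Y'=1 | D=0, Z=1)
            cond[0, 1, 1],                                # P(Y'=1 | D=1, Z=0)
            (p[1, 1, 1] / pz[1] - p[0, 1, 1] / pz[0]) / rd_d,   # RD_{Y'D|Z}/RD_{D|Z}
            (p[0, 0, 1] / pz[0] - p[1, 0, 1] / pz[1]) / rd_d]   # RD_{Y'(1-D)|(1-Z)}/RD_{D|Z}
    m_y, n_y = max(vals), min(vals)
    cace_naive = rd_y / rd_d
    return cace_naive, cace_naive / (m_y - n_y)           # (lower, upper)

# Hypothetical observed joint distribution p[z, d, y] of (Z, D, Y').
p = np.array([[[0.2925, 0.1075], [0.045, 0.055]],
              [[0.12375, 0.02625], [0.12, 0.23]]])
lower, upper = cace_bounds_y(p)
```

For this population the bounds are $[0.375, 0.714]$, which indeed contain the true value $0.5$.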
In Theorem \ref{thm:bound:Y}, the lower bounds on $\textsc{SN}_Y $ and $\textsc{SP}_Y$ must be smaller than or equal to $1$, i.e., $M_Y \leq 1$ and $1- N_Y \leq 1$. These two inequalities further imply the following corollary on the testable conditions of the instrumental variable model with the measurement error of $Y.$ \begin{corollary} \label{cor:testY} Suppose that only $Y$ is mismeasured with $r_Y>0$. Under Assumptions~\ref{asm:iv} and~\ref{asm:nondif}, \begin{eqnarray*} \pr(Y'=y,D=1 \mid Z=1) &\geq& \pr(Y'=y, D=1 \mid Z=0), \quad (y=0,1),\\ \pr(Y'=y,D=0 \mid Z=0) &\geq& \pr(Y'=y, D=0 \mid Z=1), \quad (y=0,1). \end{eqnarray*} \end{corollary} The conditions in Corollary \ref{cor:testY} are all testable with observed data $(Z,D,Y')$, and they are the same under $\CACE ' \geq 0$ and $\CACE '<0$. \citet{balke1997bounds} derive the same conditions as in Corollary \ref{cor:testY} without the measurement error of $Y$. \citet{wang2017falsification} propose statistical tests for these conditions. From Corollary \ref{cor:testY}, the non-differential measurement error of $Y$ does not weaken the testable conditions of the binary instrumental variable model. Second, we discuss the measurement error of $D$. \begin{theorem} \label{thm:bound:D} Suppose that $\CACE ' \geq 0$ and only $D$ is mismeasured with $r_D>0$. 
Under Assumptions~\ref{asm:iv} and~\ref{asm:nondif}, the sharp bounds are $ M_D\leq \textsc{SN}_D \leq U_D ,$ $ 1-N_D \leq \textsc{SP}_D \leq 1-V_D, $ and $ \CACE ' \times (M_D - N_D) \leq \CACE \leq \CACE ' \times (U_D - V_D), $ where \begin{eqnarray*} M_D&=& \max \left\{ \max_{z=0,1} \pr(D'=1\mid Z=z), \max_{y=0,1} \pr(D'=1 \mid Y=y,Z=1),\frac{\textsc{RD}_{(1-Y)D'\mid (1-Z)}}{\textsc{RD}_{Y\mid Z}}\right\} ,\\ N_D&=& \min \left\{\min_{z=0,1} \pr (D'=1\mid Z=z),\min_{y=0,1} \pr (D'=1 \mid Y=y,Z=0),\frac{\textsc{RD}_{YD'\mid Z}}{\textsc{RD}_{Y\mid Z}}\right\} ,\\ U_D &=& \min\left \{1,\frac{\textsc{RD}_{YD'\mid Z}}{\textsc{RD}_{Y\mid Z}}\right\},\quad V_D = \max\left \{0,\frac{\textsc{RD}_{(1-Y)D'\mid (1-Z)}}{\textsc{RD}_{Y\mid Z}}\right\} . \end{eqnarray*} \end{theorem} With a mismeasured $D$, \citet{ura2018heterogeneous} derives sharp bounds with and without Assumption \ref{asm:nondif}, respectively. The former bounds are equivalent to ours, but the latter bounds are wider. In Theorem \ref{thm:bound:D}, the lower bounds on $\textsc{SN}_D$ and $\textsc{SP}_D$ must be smaller than or equal to their upper bounds. This further implies the following corollary on the testable conditions of the binary instrumental variable model with the measurement error of $D.$ \begin{corollary} \label{cor:testD} Suppose that $\CACE ' \geq 0$ and only $D$ is mismeasured with $r_D>0$. Under Assumptions~\ref{asm:iv} and~\ref{asm:nondif}, \begin{eqnarray} \pr(Y=1,D'=1 \mid Z=1) &\geq& \pr(Y=1, D'=1 \mid Z=0), \label{eq::testableD1} \\ \pr(Y=0,D'=0 \mid Z=0) &\geq &\pr(Y=0, D'=0 \mid Z=1), \nonumber \\ \pr(D'=1 \mid Y=y,Z=1) &\leq & \textsc{RD}_{YD'\mid Z} / \textsc{RD}_{Y\mid Z}, \quad \hspace{1.23cm} (y=0,1), \nonumber \\ \pr(D'=1 \mid Y=y,Z=0) &\geq & \textsc{RD}_{(1-Y)D'\mid (1-Z)} / \textsc{RD}_{Y\mid Z} , \quad (y=0,1). \nonumber \end{eqnarray} \end{corollary} We can obtain the conditions under $\CACE ' < 0$ by replacing $Y$ with $1-Y$. 
In the Supplementary material, we show that the conditions in Corollary~\ref{cor:testD} are weaker than those in \citet{balke1997bounds}. Thus, the non-differential measurement error of $D$ weakens the testable conditions of the binary instrumental variable model. It is complicated to obtain closed-form bounds under simultaneous measurement errors of more than one element of $(Z, D, Y)$. In those cases, we can numerically calculate the sharp bounds on $\CACE$ with details in the Supplementary material. \section{Results under strong monotonicity} \label{sec::mono} Sometimes, units in the control group have no access to the treatment. This is called the one-sided noncompliance problem, formalized by the following assumption. \begin{assumption} \label{asm:str} For every individual $i$, $D_{0i}=0$. \end{assumption} Under strong monotonicity, we have only two strata with $U=c$ and $U=n$. Theorem \ref{thm:cace} still holds. Moreover, strong monotonicity sharpens the bounds in \S \ref{sec::nondiff}. First, we consider the measurement error of $Y.$ We have \begin{eqnarray*} \CACE ' =\left\{ \pr(Y'=1 \mid Z=1)-\pr(Y'=1 \mid Z=0) \right\} / \pr(D=1 \mid Z=1) , \quad \CACE =\CACE '/r_Y. \end{eqnarray*} \begin{theorem} \label{thm:bound:str:Y} Suppose that $\CACE ' \geq 0$ and only $Y$ is mismeasured with $r_Y>0$. Under Assumptions~\ref{asm:iv}--\ref{asm:str}, the sharp bounds are $\textsc{SP}_Y \geq 1- N_Y^{\textup{m}}$, $\SN_Y \geq M_Y^{\textup{m}}$, and $\CACE ' \leq \CACE \leq \CACE '/ (M_Y^{\textup{m}} - N_Y^{\textup{m}})$, where \begin{eqnarray*} N_Y^{\textup{m}} &=& \min \{\pr(Y'=1\mid D=0,Z=1), \pr(Y'=1 \mid D=1,Z=1)-\CACE '\},\\ M_Y^{\textup{m}} &=& \max \{\pr(Y'=1\mid D=0,Z=1), \pr(Y'=1\mid D=1,Z=1)\}. \end{eqnarray*} \end{theorem} Second, we consider the measurement error of $D.$ Subtle issues arise. When $D$ is mismeasured, $\pr(D'=0 \mid D=0, Z=0)=1$ is known, and $\pr(D'=1 \mid D=1, Z=0)$ is not well defined. 
Thus, Assumption \ref{asm:nondif} of non-differential measurement error is implausible. We need modifications. Define \begin{eqnarray*} &\textsc{SN}_D^1&= \pr(D'=1 \mid D=1, Z=1), \quad \textsc{SP}_D^1= \pr(D'=0 \mid D=0, Z=1) \end{eqnarray*} as the sensitivity and specificity conditional on $Z=1$. We have \begin{eqnarray*} \CACE ' = \textsc{RD}_{Y\mid Z} / \left\{ \pr(D'=1 \mid Z=1)-(1-\textsc{SP}_D^1) \right\},\quad \CACE = \CACE ' \times (\textsc{SN}_D^1+\textsc{SP}_D^1-1) . \end{eqnarray*} \begin{theorem} \label{thm:bound:str:D} Suppose that $\CACE ' \geq 0$, only $D$ is mismeasured, and \begin{equation} \pr(D'=1 \mid Y=1,Z=1) \geq \pr(D'=1 \mid Y=0,Z=1). \label{eq::conditionD} \end{equation} Under Assumptions~\ref{asm:iv} and~\ref{asm:str}, the sharp bounds are \begin{eqnarray*} &&\textsc{SP}_D^1 \geq 1- \pr(D'=1 \mid Y=0,Z=1), \quad \textsc{SN}_D^1 \geq \pr(D'=1 \mid Y=1,Z=1),\\ && \textsc{SN}_D^1 \leq \left\{ \pr(Y=1,D'=1 \mid Z=1)-(1-\textsc{SP}_D^1)\times \pr(Y=1\mid Z=0) \right\} / \textsc{RD}_{Y\mid Z} ,\\ && \pr(D'=1 \mid Y=1, Z=1)\times \textsc{RD}_{Y\mid Z} / \pr(D'=1 \mid Z=1) \leq \CACE \leq 1 . \end{eqnarray*} \end{theorem} Unlike Theorems~\ref{thm:bound:Y}--\ref{thm:bound:str:Y}, the upper bound on $ \textsc{SN}_D^1$ depends on $\textsc{SP}_D^1$ in Theorem~\ref{thm:bound:str:D}. The condition in \eqref{eq::conditionD} is not necessary for obtaining the bounds, but it helps to simplify the expression of the bounds. It holds in our applications in \S \ref{sec::illustration}. We give the bounds on $\CACE$ without \eqref{eq::conditionD} in the Supplementary material. The upper bound on $\CACE$ is not informative in Theorem~\ref{thm:bound:str:D}, but, fortunately, we are more interested in the lower bound in this case. It is complicated to obtain closed-form bounds under simultaneous measurement errors of more than one element of $(Z, D, Y)$. In those cases, we can numerically calculate the sharp bounds with more details in the Supplementary material. 
\section{Sensitivity analysis formulas under differential measurement errors} \label{sec::sensitivityanalysis} Non-differential measurement error is not plausible in some cases. \S \ref{sec::mono} shows that under strong monotonicity, the measurement error of $D$ cannot be non-differential because it depends on $Z$ in general. In this section, we consider differential measurement errors of $D$ and $Y$ without requiring strong monotonicity. We do not consider the differential measurement error of $Z$, because the measurement of $Z$ often precedes $(D, Y)$ and its measurement error is unlikely to depend on later variables. We first consider the differential measurement error of $Y$. \begin{theorem} \label{thm:diff:Y} Suppose that only $Y$ is mismeasured. Define \begin{align} \textsc{SN}_Y^1&= \pr(Y'=1 \mid Y=1, Z=1), \quad &\textsc{SN}_Y^0&= \pr(Y'=1 \mid Y=1, Z=0), \label{eq::misY1} \\ \textsc{SP}_Y^1&= \pr(Y'=0 \mid Y=0, Z=1), \quad &\textsc{SP}_Y^0&= \pr(Y'=0 \mid Y=0, Z=0). \label{eq::misY2} \end{align} Under Assumption~\ref{asm:iv}, \begin{eqnarray*} \CACE = \left\{ \frac{\pr(Y'=1 \mid Z=1)-(1-\textsc{SP}_Y^1)}{\textsc{SN}_Y^1+\textsc{SP}_Y^1-1}-\frac{\pr(Y'=1 \mid Z=0)-(1-\textsc{SP}_Y^0)}{\textsc{SN}_Y^0+\textsc{SP}_Y^0-1}\right\} \bigg/ \textsc{RD}_{D\mid Z}. \end{eqnarray*} \end{theorem} Theorem \ref{thm:diff:Y} allows the measurement error of $Y$ to depend on $D$, but the formula of $\CACE$ only needs the sensitivities and specificities in \eqref{eq::misY1} and \eqref{eq::misY2} conditional on $(Z, Y)$. It is possible that $\CACE'$ is positive but $\CACE$ is negative. For example, if $ \textsc{SN}_Y^1+\textsc{SP}_Y^1=\textsc{SN}_Y^0+\textsc{SP}_Y^0>1 $ and $ \textsc{SP}_Y^0-\textsc{SP}_Y^1>\textsc{RD}_{Y'\mid Z}, $ then $\CACE$ and $\CACE'$ have different signs. We then consider the differential measurement error of $D$. \begin{theorem} \label{thm:diff:D} Suppose that only $D$ is mismeasured. 
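The sensitivity-analysis formula above amounts to inverting the arm-specific misclassification channel before taking the Wald ratio. A minimal sketch (numbers in the check below are invented; with equal sensitivities and specificities in both arms the formula reduces to the non-differential correction $\CACE=\CACE'/r_Y$):

```python
def corrected_rate(p_obs, sn, sp):
    """Invert a binary misclassification channel: recover P(Y=1)
    from the observed P(Y'=1), given sensitivity sn and specificity sp."""
    return (p_obs - (1 - sp)) / (sn + sp - 1)

def cace_diff_y(p_yp_z1, p_yp_z0, rd_d, sn1, sp1, sn0, sp0):
    """CACE under differential measurement error of Y: correct
    P(Y'=1 | Z=z) arm by arm with (SN_Y^z, SP_Y^z), then divide
    by RD_{D|Z}."""
    return (corrected_rate(p_yp_z1, sn1, sp1)
            - corrected_rate(p_yp_z0, sn0, sp0)) / rd_d
```

Note that the arm-specific corrections need not cancel: unequal $(\textsc{SN}_Y^z,\textsc{SP}_Y^z)$ across arms can flip the sign of the estimate, as discussed after the theorem.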
Define \begin{align} \textsc{SN}_D^1&= \pr(D'=1 \mid D=1, Z=1), \quad &\textsc{SN}_D^0&= \pr(D'=1 \mid D=1, Z=0), \label{eq::misD1}\\ \textsc{SP}_D^1&= \pr(D'=0 \mid D=0, Z=1), \quad &\textsc{SP}_D^0&= \pr(D'=0 \mid D=0, Z=0). \label{eq::misD2} \end{align} Under Assumption~\ref{asm:iv}, \begin{eqnarray*} \CACE =\textsc{RD}_{Y\mid Z} \bigg /\left\{\frac{\pr(D'=1 \mid Z=1)-(1-\textsc{SP}_D^1)}{\textsc{SN}_D^1+\textsc{SP}_D^1-1}-\frac{\pr(D'=1 \mid Z=0)-(1-\textsc{SP}_D^0)}{\textsc{SN}_D^0+\textsc{SP}_D^0-1}\right\}. \end{eqnarray*} \end{theorem} Theorem \ref{thm:diff:D} allows the measurement error of $D$ to depend on $Y$, but the formula of $\CACE$ only needs the sensitivities and specificities \eqref{eq::misD1} and \eqref{eq::misD2} conditional on $Z$. Similar to the discussion after Theorem \ref{thm:diff:Y}, it is possible that $\CACE'$ and $\CACE$ have different signs. Based on Theorems~\ref{thm:diff:Y} and~\ref{thm:diff:D}, if we know or can consistently estimate the sensitivities and specificities in \eqref{eq::misY1}--\eqref{eq::misD2}, then we can consistently estimate $\CACE$; if we only know the ranges of the sensitivities and specificities, then we can obtain bounds on $\CACE$. For simultaneous differential measurement errors of $D$ and $Y$, the formula of $\CACE$ depends on too many sensitivity and specificity parameters. Thus we omit the discussion. \section{Illustrations}\label{sec::illustration} We give three examples and present the data in the Supplementary material. \begin{example}\label{eg::1} \citet{improve2014endovascular} assess the effectiveness of the emergency endovascular versus the open surgical repair strategies for patients with a clinical diagnosis of ruptured aortic aneurism. Patients are randomized to either the emergency endovascular or the open repair strategy. The primary outcome is the survival status after 30 days. Let $Z$ be the treatment assigned, with $Z=1$ for the endovascular strategy and $Z=0$ for the open repair. 
Let $D$ be the treatment received. Let $Y$ be the survival status, with $Y=1$ for dead, and $Y=0$ for alive. If none of the variables are mismeasured, then the estimate of $\CACE $ is $0.131$ with 95\% confidence interval $(-0.036, 0.298)$ including $0$. If only $Y$ is non-differentially mismeasured, then $0.382 \leq \textsc{SP}_Y \leq 1$, $0.759 \leq \textsc{SN}_Y \leq 1$, $0.141 \leq r_Y \leq 1$, and thus $0.131 \leq \CACE \leq 0.928$ from Theorem \ref{thm:bound:Y}. If only $D$ is non-differentially mismeasured, then $0.658 \leq \textsc{SN}_D \leq 1$, $0.908 \leq \textsc{SP}_D \leq 1$, $0.566 \leq r_D \leq 1$, and thus $0.074 \leq \CACE \leq 0.131$ from Theorem \ref{thm:bound:D}. \end{example} \begin{example}\label{eg::2} In \citet{hirano2000assessing}, physicians are randomly selected to receive a letter encouraging them to inoculate patients at risk for flu. The treatment is the actual flu shot, and the outcome is an indicator for flu-related hospital visits. However, some patients do not comply with their assignments. Let $Z$ be the indicator of encouragement to receive the flu shot, with $Z=1$ if the physician receives the encouragement letter, and $Z=0$ otherwise. Let $D$ be the treatment received. Let $Y$ be the outcome, with $Y=0$ for a flu-related hospital visit during the winter, and $Y=1$ otherwise. If none of the variables are mismeasured, then the estimate of $\CACE $ is $0.116$ with 95\% confidence interval $(-0.061, 0.293)$ including $0$. If only $Y$ is non-differentially mismeasured, then from Theorem \ref{thm:bound:Y}, $\textsc{SP}_Y \geq 1.004 > 1$, and thus the assumptions of the instrumental variable do not hold. If only $D$ is non-differentially mismeasured, then from Theorem \ref{thm:bound:D}, $\textsc{SN}_D \geq 8.676 > 1$, and thus the assumptions of the instrumental variable do not hold either. 
We reject the testable condition \eqref{eq::testableD1} required by both Corollaries~\ref{cor:testY} and \ref{cor:testD} with $p$-value smaller than $10^{-9}$. As a result, the non-differential measurement error of $D$ or $Y$ cannot explain the violation of the instrumental variable assumptions in this example. \end{example} \begin{example}\label{eg::3} \citet{sommer1991estimating} study the effect of vitamin A supplements on the infant mortality in Indonesia. The vitamin supplements are randomly assigned to villages, but some of the individuals in villages assigned to the treatment group do not receive them. Strong monotonicity holds, because the individuals assigned to the control group have no access to the supplements. Let $Y$ denote a binary outcome, with $Y=1$ if the infant survives to twelve months, and $Y=0$ otherwise. Let $Z$ denote the indicator of assignment to the supplements. Let $D$ denote the actual receipt of the supplements. If none of the variables are mismeasured, then the estimate of $\CACE $ is $0.003$ with 95\% confidence interval $(0.001, 0.005)$ excluding $0$. If only $Y$ is non-differentially mismeasured, then $\textsc{SP}_Y \geq 0.014$, $\textsc{SN}_Y \geq 0.999$, and thus $0.003 \leq \CACE \leq 0.252$ from Theorem \ref{thm:bound:str:Y}. The 95\% confidence interval is $(0.001,1)$. If only $D$ is non-differentially mismeasured, then $\textsc{SP}^1_D \geq 0.739$, $\textsc{SN}^1_D \geq 0.802$, and thus $0.003 \leq \CACE \leq 1$ from Theorem \ref{thm:bound:str:D}. The 95\% confidence interval is $(-1\times 10^{-5},1)$. In the Supplementary material, we give the details for constructing confidence intervals for $\CACE$ based on its sharp bounds. \end{example} In Examples \ref{eg::1} and \ref{eg::3}, the upper bounds on $\CACE$ are too large to be informative, but fortunately, the lower bounds are of more interest in these applications. 
\section{Discussion}\label{sec::discussion} \subsection{Further comments on the measurement errors of $Z$} If only $Z$ is mismeasured and the measurement error is non-differential, then $\textsc{RD}_{D\mid Z'}= r_Z' \times \textsc{RD}_{D\mid Z}$ where $r_Z' = \textsc{SN}'_Z+\textsc{SP}'_Z-1$ with $\textsc{SN}'_Z$ and $\textsc{SP}'_Z$ defined in Lemma \ref{lem:bound}. If $r_Z' $ and $\textsc{RD}_{D\mid Z}$ are both constants that do not shrink to zero as the sample size $n$ increases, then $\textsc{RD}_{D\mid Z'}$ does not shrink to zero either. In this case, measurement error of $Z$ does not cause the weak instrumental variable problem \citep{nelson1990distribution,staiger1997instrumental}. Theorem~\ref{thm:cace} shows that the non-differential measurement error of $Z$ does not affect the large-sample limit of the naive estimator. We further show in the Supplementary material that it does not affect the asymptotic variance of the naive estimator either. Nevertheless, in finite samples, the measurement error of $Z$ does result in a smaller estimate of $\textsc{RD}_{D\mid Z'}$. If we consider the asymptotic regime that $ r_Z' = o(n^{-\alpha})$ for some $\alpha >0$, then it is possible to have the weak instrumental variable problem. In this case, we need tools that are tailored to weak instrumental variables \citep{nelson1990distribution,staiger1997instrumental}. Practitioners sometimes dichotomize a continuous instrumental variable $Z$ into a binary one based on the median or other quantiles. The dichotomized variables based on other quantiles can be viewed as mismeasured versions of the one based on the median. However, these measurement errors are differential and thus our results in \S\ref{sec::nondiffmeasure} and \S\ref{sec::nondiff} are not applicable. \subsection{Further comments on the measurement errors of $D$} We discussed binary $D$. 
If we dichotomize a discrete $D \in \{0, 1, \ldots,J\}$ at $k$, i.e., $D'=1(D \geq k)$, then we can define two-stage least squares estimators based on $D$ and $D'$: $$ \tau_{\text{2sls}} = \frac{ E(Y\mid Z=1)-E(Y\mid Z=0) }{ E(D \mid Z=1)- E(D \mid Z=0) } ,\quad \tau_{\text{2sls}} ' = \frac{ E(Y\mid Z=1)-E(Y\mid Z=0) }{ E(D' \mid Z=1)- E(D' \mid Z=0) } . $$ \citet{angrist1995two} show that $\tau_{\text{2sls}}$ is a weighted average of some subgroup causal effects. Analogous to Theorem \ref{thm:cace}, we show in the Supplementary material that $ \tau_{\text{2sls}} = \tau_{\text{2sls}} ' \times w_k $, where $w_k = \pr(D_1 \geq k>D_0) / \sum_{j=1}^J \pr(D_1 \geq j>D_0) \in [0,1]$ if Assumptions \ref{asm:iv}(a) and (b) hold. Therefore, the dichotomization biases the estimate away from zero. \subsection{Further comments on the measurement errors of $Y$} For a continuous outcome, it is common to assume that the measurement error of $Y$ is additive and non-differential, i.e., $Y'=Y+U$, where $U$ is the error term with mean zero. If the binary $Z$ and $D$ are non-differentially mismeasured as in Assumption \ref{asm:nondif}, then $\CACE = \CACE ' \times r_D$. In this case, the measurement error of $Y$ does not bias the estimate for $\CACE$. \newpage
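The weight $w_k$ can be computed directly from the joint distribution of the potential treatments $(D_1, D_0)$. A toy sketch (the distribution below is invented): 

```python
def dichotomization_weight(p_joint, k, J):
    """w_k = P(D1 >= k > D0) / sum_{j=1..J} P(D1 >= j > D0) for a
    discrete treatment D in {0, ..., J}.  p_joint[(d1, d0)] is the
    joint distribution P(D1 = d1, D0 = d0); monotonicity D1 >= D0
    is taken for granted."""
    def mass(j):
        return sum(p for (d1, d0), p in p_joint.items() if d1 >= j > d0)
    return mass(k) / sum(mass(j) for j in range(1, J + 1))

# Invented joint distribution with J = 2: 30% move 0 -> 2, 20% move 0 -> 1,
# and 50% stay at 0 under both assignments.
p = {(2, 0): 0.3, (1, 0): 0.2, (0, 0): 0.5}
```

Since every $w_k \in [0,1]$, the relation $\tau_{\text{2sls}} = \tau_{\text{2sls}}' \times w_k$ confirms that dichotomization biases the estimate away from zero.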
\section{Introduction} The transition between the microscopic and macroscopic worlds is a fundamental issue in quantum theory both from the point of view of foundations of physics and of the application to quantum computation \cite{giulini,zurek}. Spontaneous symmetry breaking (SSB) indicates a situation where, given a symmetry of the Hamiltonian, there are eigenstates which are not invariant under the action of this symmetry, unless a term is added which explicitly breaks the symmetry. Usually, when the control parameter reaches a critical value, the lowest energy eigenstate keeping the Hamiltonian symmetry is no longer stable in the presence of infinitely small perturbations, and new stable solutions appear which are not symmetric. SSB leads naturally to a degenerate manifold of ground states. Symmetry breaking usually occurs in the thermodynamic limit, when superselection destroys quantum coherence. Important exceptions are the XXX Heisenberg model and the XY model in a transverse magnetic field at the particular value of the field where ground state factorization occurs \cite{kurmann}. In these cases, ground state degeneracy occurs for any size of the system, and it is therefore possible to explicitly study the transition from quantum to classical behavior. How entanglement is affected, in the thermodynamic limit, by the presence of a term which explicitly breaks the symmetry, has been discussed by Sylju\aa{}sen \cite{syl} and by Osterloh \textit{et al.} \cite{osterloh}. Here, we face the problem from another point of view, starting from small systems, and then increasing the size until the thermodynamic limit is reached. The XY model in a transverse field has been introduced in the early sixties and solved by Katsura \cite{katsura}, by means of the Jordan-Wigner transformation, employed earlier by Lieb, Schultz, and Mattis \cite{lieb}. 
Subsequently, the correlation functions were investigated in great detail by Barouch and McCoy \cite{barouch}, who found the existence of a second critical value of the transverse field separating qualitatively different behaviors of the correlation functions. Later on, Kurmann, Thomas, and M\"{u}ller \cite{kurmann} discovered ground state factorization for a large class of spin models. In the particular case of the XY model, the field at which factorization occurs is exactly the critical field of Barouch and McCoy. Recent interest has been devoted to the study of entanglement properties of many-body systems undergoing a quantum phase transition \cite {osborne,fazio,vidal,vedral}. As shown in Ref. \cite{amico}, the critical point turns out to separate two regions with qualitatively different bipartite entanglement. It has been shown in Ref. \cite{baroni} that, in the vicinity of the factorizing field, the range of concurrence diverges, and that such divergence corresponds to the appearance of a characteristic length scale in the system. Recently, the conditions for the existence of the factorizing field for models with long-range interaction have been determined \cite{giampaolo}, and the study of this effect has been extended to dimerized chains \cite{gianluca}. The two systems we wish to investigate belong to different classes of symmetry. While the XXX Heisenberg model has the SU(2) continuous symmetry, i.e. the Hamiltonian commutes with the total spin along any possible direction, the XY model is invariant under parity transformations and possesses the discrete $Z_{2}$ symmetry. It is commonly accepted that purely quantum effects are not observable on the macroscopic scale, except for a few phenomena such as superconductivity and superfluidity. On the other hand, quantifying entanglement (perhaps the most genuine manifestation of quantum properties) as a function of the system size represents a fundamental issue \cite{vednat}. 
Here, we wish to investigate in detail the dependence of two-spin entanglement on the total number of spins for these models. In particular, we shall derive the difference in the size effects due to the difference in the system symmetries. To be more specific, in the case of a discrete symmetry there is an exponential entanglement decrease, while with a continuous symmetry entanglement decreases only linearly with the system size. Since the XY model has been studied through the last four decades, and results are scattered over a vast literature, for convenience we shall give here a brief survey of the main results, with the primary aim of focusing on the existence of the factorizing field and its independence from the system size. The paper is organized as follows. In Sec. \ref{maro} the XY model in a transverse field is discussed. We give special emphasis to the finite-size solution with the aim of highlighting the emergence of the factorizing field as a size-independent degeneracy point. Furthermore, by means of the finite-size picture, we are able to explain in a simple way the appearance of spontaneous symmetry breaking in the thermodynamic limit. In Sec. \ref{XXX} we describe briefly the structure of the ground state for the isotropic Heisenberg (XXX) model. Even if, for any finite number of spins, the ground state manifold has finite dimension, an over-complete set of states can be introduced that allows one to study the microscopic-to-macroscopic transition. In Sec. \ref{2sp} we derive the value of the concurrence for pairs of spins and the order parameter fluctuation in a superposition state as a function of the system size, both for the XY model and the XXX model. Finally, in Sec. \ref{disc}, results are discussed. In particular, we will focus on the influence of the symmetry on the different behaviors. 
\section{XY Model\label{maro}} Let us consider a chain of $N$ spins \begin{equation} H=\sum_{l}\left[ J\frac{\left( 1+\gamma \right) }{2}\sigma _{l}^{x}\sigma _{l+1}^{x}+J\frac{\left( 1-\gamma \right) }{2}\sigma _{l}^{y}\sigma _{l+1}^{y}+h\sigma _{l}^{z}\right] , \end{equation} where the $\sigma ^{\epsilon }$ are the three Pauli matrices $\left( \epsilon =x,y,z\right) $, and periodic boundary conditions ($\sigma _{N+1}^{\epsilon }=\sigma _{1}^{\epsilon }$) are assumed. In the following we will assume $J=-1$ (ferromagnetic coupling). The above Hamiltonian is invariant under the $Z_{2}$ group of rotations by $\pi $ about the $z$ axis, since it commutes with the parity operator $P=\prod_{l}\sigma _{l}^{z}$. Due to this symmetry, the eigenstates are classified according to the parity eigenvalue. This system is known to undergo a quantum phase transition at the critical point $h_{c}=1$. Below this value, in the thermodynamic limit, spontaneous magnetization along the $x$ axis appears. Since the work of Ref. \cite{lieb}, the Jordan-Wigner transformation, defined through $\sigma _{l}^{z}=1-2c_{l}^{\dagger }c_{l}$, $\sigma _{l}^{+}=\prod_{j<l}\left( 1-2c_{j}^{\dagger }c_{j}\right) c_{l}$, $\sigma _{l}^{-}=\prod_{j<l}\left( 1-2c_{j}^{\dagger }c_{j}\right) c_{l}^{\dagger }$, is introduced to map spins onto spinless fermions. The transformed Hamiltonian is $H=H_{0}-PH_{1}$ with \begin{eqnarray} H_{0} &=&-\sum_{l=1}^{N-1}\left[ \left( c_{l}^{\dagger }c_{l+1}-c_{l}c_{l+1}^{\dagger }\right) +\gamma \left( c_{l}^{\dagger }c_{l+1}^{\dagger }-c_{l}c_{l+1}\right) -h\left( 1-2c_{l}^{\dagger }c_{l}\right) \right] , \\ H_{1} &=&-\left[ \left( c_{N}^{\dagger }c_{1}-c_{N}c_{1}^{\dagger }\right) +\gamma \left( c_{N}^{\dagger }c_{1}^{\dagger }-c_{N}c_{1}\right) \right] . \end{eqnarray} Since $\left[ H,P\right] =0$, all eigenstates of $H$ have definite parity, and we can proceed to a separate diagonalization of $H$ in the two subspaces labelled by $P=\pm 1$.
Then, the complete set of eigenvectors of $H$ will be given by the even eigenstates of $H^{+}=H_{0}-H_{1}$ and the odd eigenstates of $H^{-}=H_{0}+H_{1}$. Both for $H^{+}$ and $H^{-}$ the diagonalization is obtained by first carrying out the spatial Fourier transform \begin{equation} c_{k}=\frac{1}{\sqrt{N}}\sum_{l}e^{-i\frac{2\pi }{N}kl}c_{l}, \end{equation} where $k=0,1,\ldots ,N-1$ in $H^{-}$, and $k=1/2,3/2,\ldots ,N-1/2$ in $H^{+}$, and then making the Bogoliubov transformation \begin{equation} c_{k}=\cos \vartheta _{k}\eta _{k}+i\sin \vartheta _{k}\eta _{-k}^{\dagger }, \end{equation} with $\vartheta _{k}=-\vartheta _{-k}$. Here, $\eta _{-k}$ stands for $\eta _{N-k}$. The diagonalization condition implies for $\vartheta _{k}$ \begin{equation} \tan 2\vartheta _{k}=-\frac{\gamma \sin k}{h-\cos k}. \end{equation} Eventually, we end up with the quasi-particle Hamiltonians \begin{eqnarray} H^{+} &=&\sum_{k=1/2}^{N-1/2}\Lambda _{k}\left( \eta _{k}^{\dagger }\eta _{k}-\frac{1}{2}\right) , \\ H^{-} &=&\sum_{k=0}^{N-1}\Lambda _{k}\left( \eta _{k}^{\dagger }\eta _{k}-\frac{1}{2}\right) , \end{eqnarray} where the eigenvalues are given by \begin{equation} \Lambda_{k}=2\sqrt{\left(h-\cos \frac{2\pi}{N}k\right)^{2}+\gamma^{2}\sin^{2}\frac{2\pi}{N}k}. \end{equation} The ground states of $H^{+}$ and $H^{-}$ are the corresponding vacuum states, with eigenvalues $E_{0}^{+}=-\frac{1}{2}\sum_{k=1/2}^{N-1/2}\Lambda _{k}$ and $E_{0}^{-}=-\frac{1}{2}\sum_{k=0}^{N-1}\Lambda _{k}$. The vacuum in the generic $k$ mode is determined by $\eta _{k}\left| 0^{\pm }\right\rangle =0$. While for every $k\neq 0$ the Bogoliubov vacuum corresponds to an even state (the absence of quasi-particles implies zero or two particles), the mode $k=0$ plays a special role.
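Since $H^{+}=\sum_{k}\Lambda_{k}(\eta_{k}^{\dagger}\eta_{k}-\tfrac{1}{2})$, its vacuum energy is $-\tfrac{1}{2}\sum_{k}\Lambda_{k}$ over the half-integer modes, and this can be checked against the lowest even-parity eigenvalue of the spin Hamiltonian obtained by brute-force diagonalization (a numpy sketch; the chain length and couplings below are arbitrary test values):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(m, site, N):
    # embed a single-site operator at position `site` in an N-spin chain
    mats = [np.eye(2)] * N
    mats[site] = m
    return reduce(np.kron, mats)

def H_xy(N, gamma, h, J=-1.0):
    H = np.zeros((2**N, 2**N), dtype=complex)
    for l in range(N):
        r = (l + 1) % N          # periodic boundary conditions
        H += J*(1 + gamma)/2 * op(sx, l, N) @ op(sx, r, N)
        H += J*(1 - gamma)/2 * op(sy, l, N) @ op(sy, r, N)
        H += h * op(sz, l, N)
    return H

N, gamma, h = 6, 0.7, 1.3        # arbitrary test values
H = H_xy(N, gamma, h)
par = np.diag(reduce(np.kron, [sz]*N)).real   # P is diagonal in the z basis
even = par > 0
E_even = np.linalg.eigvalsh(H[np.ix_(even, even)])[0]   # lowest even-parity level
k = (np.arange(N) + 0.5) * 2*np.pi/N                    # half-integer modes of H^+
Lam = 2*np.sqrt((h - np.cos(k))**2 + gamma**2*np.sin(k)**2)
print(E_even, -0.5*Lam.sum())
```

The restriction of $H$ to the two parity blocks also makes explicit the decomposition into the even sector of $H^{+}$ and the odd sector of $H^{-}$ described above.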
In fact, the corresponding Bogoliubov transformation reads \begin{equation} \eta _{0}=\frac{1}{2}\left( 1+\frac{h-1}{\left| h-1\right| }\right) c_{0}-\frac{i}{2}\left( 1-\frac{h-1}{\left| h-1\right| }\right) c_{0}^{\dagger }, \end{equation} with the important consequence that the quasi-particle vacuum corresponds to a zero-particle state for $h<1$ and to a one-particle state for $h>1$. The presence or absence of the particle in the $k=0$ mode changes the parity of the state. Thus, for $h>1$, the vacuum of $H^{-}$, because of its symmetry, does not belong to the set of eigenstates of $H$, while for $h<1$ it becomes an eigenstate of physical interest. Above $h=1$, the odd state of lowest energy is obtained by adding one quasi-particle at the bottom of the energy band, with energy $\Lambda _{\min }=2\left( h-1\right) $. This energy gap prevents degeneracy even in the thermodynamic limit. \subsection{Quantum phase transition and ground state factorization} The change of symmetry of the vacuum of $H^{-}$ is the very cause of the phase transition in the thermodynamic limit. Indeed, on the macroscopic scale the sum over $k$ becomes an integral, yielding $E_{0}^{+}=E_{0}^{-}$. Then, below the critical point $h_{C}=1$ the odd and even lowest eigenstates are degenerate, and the Hamiltonian symmetry is spontaneously broken, while, for $h>h_{C}$, due to the existence of the energy gap $\Lambda _{\min }$, the ground state keeps its parity (even). For $h<h_{C}$, because of superselection rules, the system is necessarily found in symmetry-broken states. As pointed out in Ref. \cite{barouch}, below the critical point there are two different regions where the two-body correlation functions either decrease monotonically or oscillate as a function of the spin distance, depending on the Hamiltonian parameters. These regions are separated, in the $\{h,\gamma \}$ diagram, by the set of points satisfying $h_{F}^{2}+\gamma ^{2}=1$.
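On the circle $h_{F}^{2}+\gamma^{2}=1$ the lowest even and odd levels turn out to be exactly degenerate at any finite $N$, which can be verified directly by exact diagonalization (a numpy sketch; the values of $N$ and $\gamma$ are arbitrary test choices):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(m, site, N):
    # embed a single-site operator at position `site` in an N-spin chain
    mats = [np.eye(2)] * N
    mats[site] = m
    return reduce(np.kron, mats)

def H_xy(N, gamma, h, J=-1.0):
    H = np.zeros((2**N, 2**N), dtype=complex)
    for l in range(N):
        r = (l + 1) % N          # periodic boundary conditions
        H += J*(1 + gamma)/2 * op(sx, l, N) @ op(sx, r, N)
        H += J*(1 - gamma)/2 * op(sy, l, N) @ op(sy, r, N)
        H += h * op(sz, l, N)
    return H

N, gamma = 6, 0.6
hF = np.sqrt(1 - gamma**2)       # h_F^2 + gamma^2 = 1  ->  h_F = 0.8
H = H_xy(N, gamma, hF)
par = np.diag(reduce(np.kron, [sz]*N)).real   # parity, diagonal in the z basis
E_even = np.linalg.eigvalsh(H[np.ix_(par > 0, par > 0)])[0]
E_odd = np.linalg.eigvalsh(H[np.ix_(par < 0, par < 0)])[0]
print(E_even - E_odd)            # vanishes: even-odd degeneracy at h_F
```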
More recently, it has been shown that on this border line the ground state factorizes \cite{verrucchi}, i.e. it can be written as $\left| \Psi _{F}^{\pm }\right\rangle =\otimes _{l}\left| \Psi _{F,l}^{\pm }\right\rangle $, with $\left| \Psi _{F,l}^{\pm }\right\rangle =\left( \cos \alpha \left| \uparrow _{l}\right\rangle \pm \sin \alpha \left| \downarrow _{l}\right\rangle \right) $, where $\cos 2\alpha =\left[ \left( 1-\gamma \right) /\left( 1+\gamma \right) \right] ^{1/2}$. The existence of the factorizing field, originally derived by requiring only a size-independent degeneracy between the lowest odd and even eigenvalues \cite{kurmann}, can be studied within the general solution of the model. By analyzing the lowest odd and even eigenvalues of $H$ in the symmetry-broken region for finite $N$ as a function of the transverse field, we observe a series of $N/2$ level crossings at $h=h_{i}$ (see Fig. \ref{energie}). At each $h_{i}$ the ground state changes its symmetry. The existence of such points has been discussed in Ref. \cite{hoeger} and more recently in Ref. \cite{rossignoli}, and is responsible for the magnetization jumps reported in Ref. \cite{puga}. In the thermodynamic limit, this kind of structure implies two different symmetry breaking mechanisms. For $0<h<h_{F}$, as $N\rightarrow \infty $, the set $\left\{ h_{i}\right\} $ of degeneracy points becomes a denumerable infinity, while for $h_{F}<h<1$ there is the usual symmetry breaking due to the vanishing of the gap. An interesting problem would be to check whether this is the microscopic mechanism responsible for the qualitative change in the behavior of the correlation functions above and below $h_{F}$. \begin{figure} \includegraphics[width=.5\textwidth,height=40mm]{En8.eps} \caption{Difference between the lowest odd and even eigenvalues of $H$ as a function of the transverse field for an 8-spin chain.
Given the anisotropy amplitude $\gamma=0.6$, the factorizing field is $h_F=0.8$. As predicted, we observe $N/2$ level crossing points, the last of them being $h_F$.} \label{energie} \end{figure} While spontaneous symmetry breaking arises only for $N\rightarrow \infty $, it can be seen from the previous analysis (see also Ref. \cite{rossignoli}) that at the factorizing point $h_{F}$ the degeneracy appears for any $N$. It is simple to show that $E_{0}^{+}\left( h_{F}\right) =E_{0}^{-}\left( h_{F}\right) $ holds for any $N$, while the positions of all the other level crossing points $h_{i}$ change with $N$. Then, at the special field $h_{F}$, the Hamiltonian symmetry is broken independently of the system size, and any linear superposition of the two symmetric eigenstates $\left| \alpha ^{\pm }\right\rangle $ ($\left| \alpha ^{+}\right\rangle $ for the even eigenstate and $\left| \alpha ^{-}\right\rangle $ for the odd eigenstate) is a possible eigenstate. Obviously, each one of the factorized states can be expressed as a linear combination of the two symmetric eigenstates \begin{equation} \left| \Psi _{F}^{\pm }\right\rangle =u_{+}\left| \alpha ^{+}\right\rangle \pm u_{-}\left| \alpha ^{-}\right\rangle , \label{ftoa} \end{equation} with $u_{\pm }=\left[ \left( 1\pm \cos ^{N}2\alpha \right) /2\right] ^{1/2}$. Notice that, for finite-size systems, $\left| \Psi _{F}^{+}\right\rangle $ and $\left| \Psi _{F}^{-}\right\rangle $ are not orthogonal, while $\left\langle \Psi _{F}^{+}|\Psi _{F}^{-}\right\rangle =0$ in the thermodynamic limit. \section{XXX model\label{XXX}} The homogeneous (ferromagnetic) Heisenberg model is defined by the Hamiltonian \begin{equation} H_{XXX}=-J\sum_{l=0}^{N-1}\left( \sigma _{l}^{x}\sigma _{l+1}^{x}+\sigma _{l}^{y}\sigma _{l+1}^{y}+\sigma _{l}^{z}\sigma _{l+1}^{z}\right) , \end{equation} with the boundary condition $\sigma _{N}^{\epsilon }=\sigma _{0}^{\epsilon }$. The model has been solved using the Bethe ansatz \cite{bethe}.
As far as the ground state properties are concerned, a simple argument can be introduced to show that, for any number of spins, any factorized state $\left| \Phi \left( \theta ,\phi \right) \right\rangle =\otimes _{l}\left[ \cos \theta \left| \uparrow \right\rangle +\exp \left( i\phi \right) \sin \theta \left| \downarrow \right\rangle \right] $ minimizes the energy. Given the invariance of $H_{XXX}$ with respect to rotations of arbitrary amplitude $\beta $ around any direction $\hat{n}$, $\mathcal{R}\left( \beta ,\hat{n}\right) =\prod_{l}\exp \left[ i\beta \vec{\sigma}_{l}\cdot \hat{n}\right] $, i.e. $\left[ H_{XXX},\mathcal{R}\left( \beta ,\hat{n}\right) \right] =0$, we shall restrict our attention to the particular state $\left| \Phi \left( 0,0\right) \right\rangle =\left| \uparrow ,\uparrow ,\ldots ,\uparrow \right\rangle $. It can be immediately seen that $\left| \Phi \left( 0,0\right) \right\rangle $ belongs to the ground state subspace of each of the two-body terms of $H_{XXX}$, and therefore its energy represents the minimum achievable value. To make a link with the XY model, we could say that the factorization point for the Heisenberg model corresponds to $h=0$. In the absence of spontaneous symmetry breaking, i.e.
for finite systems, and in the absence of external fields, the ground state belongs to an $\left( N+1\right) $-dimensional manifold, and can be expanded in the over-complete set of factorized states \begin{equation} \left| \Phi \right\rangle =\frac{1}{\mathcal{N}^{\prime }}\int d\theta \int d\phi \, f\left( \theta ,\phi \right) \left| \Phi \left( \theta ,\phi \right) \right\rangle , \end{equation} where $f\left( \theta ,\phi \right) $ is a weight function. The inner product between states pointing in different directions reads \begin{equation} \left\langle \Phi \left( \theta ^{\prime },\phi ^{\prime }\right) |\Phi \left( \theta ,\phi \right) \right\rangle =\left[ \cos \theta \cos \theta ^{\prime }+e^{i\left( \phi -\phi ^{\prime }\right) }\sin \theta \sin \theta ^{\prime }\right] ^{N}. \label{inner} \end{equation} Hence, only in the thermodynamic limit do we obtain a set of orthogonal states. Given the continuous $SU(2)$ symmetry of the model, spontaneous symmetry breaking implies that the system will select one direction out of all the possible choices in the $\left( \theta ,\phi \right) $ space. As we are interested in studying the problem of the vanishing of peculiar quantum properties, we shall choose initial states with given symmetry properties, which, at finite size, do exhibit those properties. \section{Transition to the Schr\"{o}dinger cat regime\label{2sp}} According to the superposition principle, every linear combination of quantum states is allowed. On the other hand, it is well known that superpositions cannot be observed on the macroscopic scale because of superselection, the most convincing argument being the Schr\"{o}dinger cat paradox. Then, on this scale, all but a small set of states belonging to the total Hilbert space are actually forbidden. This process, which leads to a diagonal form of the density operator in a preferred basis, implies the vanishing of the most peculiar of quantum properties: state interference.
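The overlap formula of Eq. (\ref{inner}) is easily checked numerically (a numpy sketch; the chain length and angles below are arbitrary test values):

```python
import numpy as np
from functools import reduce

def product_state(N, theta, phi):
    # |Phi(theta, phi)> as the N-fold tensor power of one single-spin state
    site = np.array([np.cos(theta), np.exp(1j*phi)*np.sin(theta)])
    return reduce(np.kron, [site]*N)

N = 6                                   # test size (assumption)
t1, p1, t2, p2 = 0.3, 1.1, 0.8, -0.4    # arbitrary test angles
lhs = np.vdot(product_state(N, t2, p2), product_state(N, t1, p1))
rhs = (np.cos(t1)*np.cos(t2) + np.exp(1j*(p1 - p2))*np.sin(t1)*np.sin(t2))**N
print(lhs, rhs)
```

Since the single-site overlaps multiply, the $N$-th power structure makes the exponential loss of orthogonality between macroscopically distinct directions explicit.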
In order to study the vanishing of state interference we analyze two different quantities: the two-spin entanglement and the fluctuation properties of the order parameter $M_{x}=\left( \sum_{l}\sigma _{l}^{x}\right) /N$. Given a density matrix $\rho $, the fluctuation statistics is associated with the generating function \begin{equation} G_{\rho }\left( \lambda \right) =\mathrm{Tr}\left\{ \rho \, e^{\frac{i\lambda }{N}\sum_{l}\sigma _{l}^{x}}\right\} , \end{equation} which is the Fourier transform of the probability distribution function of $M_{x}$. When the system can be observed in $m$ states $\Psi _{1},\Psi _{2},\ldots ,\Psi _{m}$, with related generating functions $G_{\Psi _{n}}\left( \lambda \right)$, quantum superposition effects appear if \begin{equation} \Delta G=G_{\rho }\left( \lambda \right) -\frac{1}{m}\sum_{n=1}^{m}G_{\Psi _{n}}\left( \lambda \right) \neq 0. \end{equation} We expect that in the symmetry-broken regime $\lim_{N\rightarrow \infty }\Delta G=0$. Similar considerations, applied to entanglement properties, lead to the conclusion that, when $N\rightarrow \infty$, only factorized states can be observed. Even if superselection can be assumed as a principle, the size dependence of quantum interference effects will be related to the particular system observed. In the following, we find different decay behaviors for the XY and the XXX models, which are caused by the difference in the symmetry of the two systems. In both models we will start by considering symmetric states (which are expected not to survive in the thermodynamic limit) and we shall study the two-spin entanglement as a function of $N$. In fact, the existence of degenerate ground state manifolds for any $N$ allows one to calculate coherence properties as a function of the size of the system. For qubit systems, like spins, two-body entanglement can be measured through the concurrence \cite{wootters}. As shown in Ref.
\cite{verrucchi}, for states which are invariant under the action of the parity operator, the concurrence $\mathcal{C}_{ij}$ of two spins at sites $i$ and $j$ is related to the quantum correlation functions by simple relations, which will be used here. As pointed out in Ref. \cite{rossignoli}, for the XY model at the factorizing field the two-spin concurrence does not depend on the spin distance $|i-j|$. Similar arguments can be used also for the XXX chain. Since we are dealing with superpositions of ferromagnetic states, the entanglement will be of ferromagnetic kind as well. In this case, \begin{equation} \mathcal{C}_{ij}=\frac{1}{2}\left| p_{1}-p_{2}\right| -p_{III}, \end{equation} where $\left| p_{1}-p_{2}\right| $ is the average value of $\left| \uparrow \uparrow \right\rangle \left\langle \downarrow \downarrow \right| +\left| \downarrow \downarrow \right\rangle \left\langle \uparrow \uparrow \right| $, and $p_{III}$ is the average value of $\left| \uparrow \downarrow \right\rangle \left\langle \uparrow \downarrow \right| $. \subsection{XY model} Let us first consider the order parameter fluctuations for the symmetric states $\left| \alpha ^{\pm }\right\rangle $. These states could be obtained by starting with $h\neq h_F$. In this case the exact ground state would have definite parity. For instance, for $h> h_F$, the ground state is even.
By lowering the field until the value $h_F$ is reached, the system is driven into $\left| \alpha ^{+}\right\rangle $. The generating function is \begin{equation} G\left( \lambda ,\alpha ^{\pm }\right) =\frac{1}{4u_{\pm }^{2}}\left[ G\left( \lambda ,\Psi _{F}^{+}\right) +G\left( \lambda ,\Psi _{F}^{-}\right) \pm \tilde{G}\left( \lambda ,\Psi _{F}^{+},\Psi _{F}^{-}\right) \pm \tilde{G}\left( \lambda ,\Psi _{F}^{-},\Psi _{F}^{+}\right) \right], \end{equation} where \begin{equation} \tilde{G}\left( \lambda ,\Psi _{F}^{\pm },\Psi _{F}^{\mp }\right) =\left\langle \Psi _{F}^{\pm }\right| e^{\frac{i\lambda }{N}\sum_{l}\sigma _{l}^{x}}\left| \Psi _{F}^{\mp }\right\rangle. \end{equation} It is easy to show that \begin{equation} G\left( \lambda ,\Psi _{F}^{\pm }\right) =\left( \cos \frac{\lambda }{N}\pm i\sin \frac{\lambda }{N}\sin 2\alpha \right) ^{N}, \end{equation} \begin{equation} \tilde{G}\left( \lambda ,\Psi _{F}^{\pm },\Psi _{F}^{\mp }\right)=\left( \cos \frac{\lambda }{N}\cos 2\alpha \right) ^{N}. \end{equation} Then, interference effects (manifested by the off-diagonal elements) disappear exponentially with $N$. As a second characterization, we study the concurrence for the symmetric states $\left| \alpha ^{\pm }\right\rangle $. This can be easily derived using the expression of $\left| \alpha^{\pm }\right\rangle $ in terms of $\left| \Psi _{F}^{\pm }\right\rangle $. The result (see also Ref. \cite{rossignoli}) is given by \begin{equation} \mathcal{C}_{ij}\left( \alpha ^{\pm }\right) =\left( \cos 2\alpha \right) ^{N-2}\frac{\sin ^{2}2\alpha }{1\pm \left( \cos 2\alpha \right) ^{N}}, \end{equation} where the factor $\left( \cos 2\alpha \right) ^{N-2}$ derives from the non-orthogonality of $\left| \Psi _{F}^{+}\right\rangle $ and $\left| \Psi _{F}^{-}\right\rangle $ and determines the speed of classicalization. In the macroscopic limit, $\mathcal{C}_{ij}\left( \alpha ^{\pm }\right) $ vanishes as $\left[ \left( 1-\gamma \right) /\left( 1+\gamma \right) \right] ^{N/2}$.
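The expression for $\mathcal{C}_{ij}(\alpha^{+})$ can be cross-checked against the full Wootters construction \cite{wootters}, building the even superposition of the two factorized states explicitly and tracing out all but two spins (a numpy sketch; $N$ and $\gamma$ are arbitrary test values):

```python
import numpy as np
from functools import reduce

def concurrence(rho):
    # Wootters concurrence of a two-qubit density matrix
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lam = np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)
    lam = np.sort(np.sqrt(np.clip(lam.real, 0, None)))
    return max(0.0, lam[3] - lam[2] - lam[1] - lam[0])

N, gamma = 8, 0.6                       # arbitrary test values
c2a = np.sqrt((1 - gamma)/(1 + gamma))  # cos(2 alpha) at the factorizing field
alpha = 0.5*np.arccos(c2a)
up = np.array([np.cos(alpha), np.sin(alpha)])    # site factor of |Psi_F^+>
dn = np.array([np.cos(alpha), -np.sin(alpha)])   # site factor of |Psi_F^->
state = reduce(np.kron, [up]*N) + reduce(np.kron, [dn]*N)  # even combination
state /= np.linalg.norm(state)                   # |alpha^+>, normalized
M = state.reshape(4, -1)                         # spins (1,2) x the rest
rho2 = M @ M.conj().T                            # two-spin reduced density matrix
C = concurrence(rho2)
pred = c2a**(N - 2) * (1 - c2a**2) / (1 + c2a**N)
print(C, pred)
```

Here $\sin^{2}2\alpha$ is written as $1-\cos^{2}2\alpha$; the site independence of $\mathcal{C}_{ij}$ makes it irrelevant which two spins are kept.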
Then, for every finite value of the anisotropy $\gamma $, entanglement decays exponentially with $N$. A small but finite anisotropy will enhance entanglement; in fact, the $\gamma =0$ limit implies a non-analytic change of behavior. \subsection{XXX model} In analogy with the previous case, we introduce a state which is invariant under a given spin rotation. In particular, if we choose the state $\left| \Phi _{e}\right\rangle$ to be invariant under rotations about the $y$ axis, $\exp \left[ -i\theta \sum_{l}\sigma _{l}^{y}\right] \left| \Phi _{e}\right\rangle =\left| \Phi _{e}\right\rangle $, we have \begin{equation} \left| \Phi _{e}\right\rangle =\frac{1}{\mathcal{N}}\int_{0}^{2\pi }d\theta \left| \Phi _{\theta }\right\rangle, \label{equat} \end{equation} where $\left| \Phi _{\theta }\right\rangle =\otimes _{l}\left| \Phi _{l}\left( \theta ,0\right) \right\rangle $, and where $\left| \Phi _{l}\left( \theta ,0\right) \right\rangle =\cos \theta \left| \uparrow _{l}\right\rangle +\sin \theta \left| \downarrow _{l}\right\rangle $. Requiring the normalization of $\left| \Phi _{e}\right\rangle $ implies $\int d\theta ^{\prime }d\theta \left[ \cos \left( \theta -\theta ^{\prime }\right) \right] ^{N}=\mathcal{N}^{2}$. It is easy to verify that $\left| \Phi _{e}\right\rangle $ is also an eigenstate of the parity operator. In fact, the integration over $\theta $ cancels, in the superposition, all terms with an odd number of down spins. Each $\left| \Phi _{\theta }\right\rangle$ would be the actual ground state in the presence of an external field directed along the direction $\theta $. Let us analyze the order parameter. First, we calculate the fluctuations for a given element of the ground state degeneracy manifold, obtaining \begin{equation} G_{\Phi _{\theta }}\left( \lambda \right) =\left( \cos \frac{\lambda }{N}+i\sin \frac{\lambda }{N}\sin 2\theta \right) ^{N}.
\end{equation} For large $N$ we see that $G_{\Phi _{\theta }}\left( \lambda \right) \simeq \exp \left( i\lambda \sin 2\theta \right) $. This is the typical expression of the generating function of a non-fluctuating quantity. Its Fourier transform, which is the probability distribution function of the order parameter, is indeed, for any $\theta $, a Dirac delta distribution centered at $\sin 2\theta $. Furthermore, in the superposition state we have \begin{equation} G_{\Phi _{e}}\left( \lambda \right) =\frac{1}{\mathcal{N}^{2}}\int d\theta ^{\prime }d\theta \left[ g\left( \lambda ,\theta ,\theta ^{\prime }\right) \right] ^{N}, \end{equation} where \begin{equation} g\left( \lambda ,\theta ,\theta ^{\prime }\right) =\cos \frac{\lambda }{N}\cos \left( \theta -\theta ^{\prime }\right) +i\sin \frac{\lambda }{N}\sin \left( \theta +\theta ^{\prime }\right). \end{equation} As $N$ gets large, the vanishing of interference is observed. In the large $N$ limit, the steepest descent method gives \begin{equation} \Delta G_{\Phi _{e}}\left( \lambda \right) =\int d\theta \, G_{\Phi _{\theta }}\left( \lambda \right) \left[ \exp \left( \frac{\lambda ^{2}\cos 2\theta }{2N}\right) -1\right], \end{equation} implying \begin{equation} \Delta G_{\Phi _{e}}\left( \lambda \right) \sim 1/N \end{equation} in the asymptotic regime.
As far as the two-spin concurrence is concerned, it is straightforward to find \begin{equation} \left| p_{1}-p_{2}\right| =\frac{1}{\mathcal{N}^{2}}\int d\theta d\theta ^{\prime }\left[ \cos ^{2}\theta \sin ^{2}\theta ^{\prime }+\cos ^{2}\theta ^{\prime }\sin ^{2}\theta \right] \left[ \cos \left( \theta -\theta ^{\prime }\right) \right] ^{N-2}, \end{equation} and \begin{equation} p_{III}=\frac{1}{\mathcal{N}^{2}}\int d\theta d\theta ^{\prime }\cos \theta \cos \theta ^{\prime }\sin \theta \sin \theta ^{\prime }\left[ \cos \left( \theta -\theta ^{\prime }\right) \right] ^{N-2}, \end{equation} yielding \begin{equation} \mathcal{C}_{ij}\left( \Phi _{e}\right) =\frac{1}{2}\frac{\int d\theta d\theta ^{\prime }\left[ \tan \left( \theta -\theta ^{\prime }\right) \right] ^{2}\left[ \cos \left( \theta -\theta ^{\prime }\right) \right] ^{N}}{\int d\theta d\theta ^{\prime }\left[ \cos \left( \theta -\theta ^{\prime }\right) \right] ^{N}}. \end{equation} This result is very simple for $N$ even. In that case one gets \begin{equation} \mathcal{C}_{ij}^{N\,\mathrm{even}}\left( \Phi _{e}\right) =\frac{1}{2\left( N-1\right) }. \end{equation} This result can be understood by taking into account that, given Eq. (\ref{inner}), the inner product between $\left| \Phi \left( \theta ,0\right) \right\rangle $ and $\left| \Phi \left( \theta ^{\prime },0\right) \right\rangle $ vanishes exponentially with $N$. This allows one to evaluate the integrals, in the large $N$ regime, by means of the steepest descent method. It is clear that, when $N$ gets large, $\left[ \cos \left( \theta -\theta ^{\prime }\right) \right] ^{N}$ is different from zero only for $\left( \theta -\theta ^{\prime }\right) \simeq 0$.
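Before taking that limit, the exact even-$N$ value can be confirmed by direct quadrature: the ratio of integrals depends only on $u=\theta-\theta^{\prime}$, and for even $N$ one can write $\tan^{2}u\,\cos^{N}u=\sin^{2}u\,\cos^{N-2}u$ (a numpy sketch; the grid size is chosen large enough that uniform sums integrate these trigonometric polynomials exactly):

```python
import numpy as np

N, M = 8, 512                              # N even (test value); M-point uniform grid
u = (np.arange(M) + 0.5) * 2*np.pi / M     # uniform grid on [0, 2*pi)
num = np.sum(np.sin(u)**2 * np.cos(u)**(N - 2))   # tan^2(u) cos^N(u), rewritten
den = np.sum(np.cos(u)**N)
C = 0.5 * num / den                        # the grid spacing cancels in the ratio
print(C, 1/(2*(N - 1)))                    # both give 1/14 for N = 8
```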
Expanding all terms around this value, the concurrence is well approximated by the ratio between two Gaussian integrals \begin{equation} \mathcal{C}_{ij}\left( \Phi _{e}\right) \simeq \frac{1}{2}\frac{\int_{-\infty }^{\infty }x^{2}\exp \left( -\frac{Nx^{2}}{2}\right) dx}{\int_{-\infty }^{\infty }\exp \left( -\frac{Nx^{2}}{2}\right) dx}, \end{equation} which eventually gives $\mathcal{C}_{ij}\left( \Phi _{e}\right) \simeq 1/2N$ in the asymptotic regime. \section{Discussion\label{disc}} We tackled the problem of describing how quantum coherence effects vanish as the system size becomes macroscopic. Even if this phenomenon is expected to appear in generic systems, we have considered two different symmetry-broken models for which an exact analytic treatment is possible. In the first one (the XY model in a transverse field), because of the discrete symmetry, the ground state spans a two-dimensional manifold. As for the Heisenberg model, the dimension of the manifold grows with $N$, eventually reaching a dense structure. In Fig. \ref{figura} we plot the behavior of the concurrence $\mathcal{C}_{ij}$ for the two models. In the case of the XY chain, we also considered different values of the anisotropy. \begin{figure} \includegraphics[width=.5\textwidth,height=40mm]{fig2.eps} \caption{Two-spin concurrence as a function of the system size $N$ for the XXX model (dots) and for the XY model (continuous lines). In this latter case, we considered two different anisotropy parameters: the red line represents $\gamma=0.5$, and the blue line corresponds to $\gamma=0.8$.} \label{figura} \end{figure} A necessary step to determine local quantities, like two-spin entanglement or magnetization, is the introduction of the reduced density matrix.
Given the peculiar structure of the factorized states we have considered here, calculating the reduced density matrix requires the computation of inner products between states defined on $\left( N-m\right) $-spin subspaces (where $m$ is a finite number), aligned along different directions. Both for the XY and for the isotropic model these quantities vanish exponentially in the large $N$ limit, thereby destroying the quantum interference between different components. However, since the XXX model has a continuous symmetry, the ground state manifold is continuous as well, and all matrix elements are integrated over. The integration implies a reduction in the decoherence rate, which turns out to be linear in $1/N$. In this paper we considered two exactly solvable models, and studied how superselection tends to destroy their quantum properties. A typical tool used to study problems whose solution is not known is the mean-field approximation, which, in fact, consists in the introduction of ``product states'' of the same kind as those described in this paper. For example, in the BCS theory of superconductivity the solution introduced is factorized in the space of the modes $k$. Since in the finite-size case the symmetry is expected to be conserved, while the mean-field states are manifestly asymmetric, the linear superposition of degenerate states is a way to restore it. Once the thermodynamic limit is performed, all the considerations made in this paper apply. Then, we can conclude that our results apply not only to the models explicitly studied, but they could also be used, within the limits of validity of the mean-field theory, in all systems belonging to the symmetry classes discussed. \acknowledgments The authors wish to acknowledge S. Paganelli for useful discussions. GLG acknowledges the ``Juan de la Cierva'' fellowship of the Spanish Ministry of Science and Innovation.
\section{Introduction}\label{sec:Introduction} Toward the sixth generation of wireless networks (6G), a number of exciting applications will benefit from sensing services provided by future perceptive networks, where sensing capabilities are integrated into the communication network. Since the communication network infrastructure is already deployed with multiple interconnected nodes, a multi-static sensory mesh can be enabled and exploited to improve the performance of the network itself~\cite{9296833}. Therefore, the joint communications and sensing (JCAS) concept has emerged as an enabler for an efficient use of radio resources for both communications and sensing purposes, where the high frequency bands that are expected to be available in 6G can favor very accurate sensing based on radar-like technology~\cite{art:Wild_Nokia}. Relying on the coordination of the network and on distributed processing, sensing signals can be transmitted from one node, and the reflections off the environment can be received at multiple nodes in a coordinated manner~\cite{art:Wild_Nokia}. Thus, distributed multi-static sensing approaches can improve sensing accuracy while alleviating the need for full-duplex operation at the sensing nodes. In this context, the high-gain directional beams provided by beamforming in multiple-input multiple-output (MIMO) and massive MIMO systems, which are essential for the operation of communication systems at higher frequencies, will also be exploited to improve sensing in distributed implementations~\cite{8764485,art:multistatic_Merlano}. In multi-static MIMO radar settings, the synchronization among sensing nodes is crucial; this issue has thus motivated studies on the feasibility of synchronization. For instance, a synchronization loop using in-band full duplex (IBFD) was demonstrated for a system with two MIMO satellites sensing two ground targets in~\cite{art:multistatic_Merlano}.
Additionally, multicarrier signals such as orthogonal frequency-division multiplexing (OFDM) waveforms have proven to provide several advantages for use in JCAS systems, including independence from the transmitted user data, high dynamic range, the possibility to estimate the relative velocity, and efficient implementation based on fast Fourier transforms~\cite{5776640}. For instance, uplink OFDM 5G New Radio (NR) waveforms have been effectively used for indoor environment mapping in \cite{art:Baquero_mmWaveAlg}. Therein, a prototype full-duplex transceiver was used to perform range-angle chart estimation and dynamic tracking via an extended Kalman filter. Moreover, the capabilities of distributed sensing systems can be further extended by relying on the advantages of flexible nodes such as unmanned aerial vehicles (UAVs), which have already attracted significant attention for their applicability in numerous scenarios, even in harsh environments~\cite{8877114}. Therefore, UAVs have already been considered for sensing purposes~\cite{art:Wei_UAV_Safe,art:UAVs_Guerra,Chen_JSACUAVSystem}. For instance, in~\cite{art:Wei_UAV_Safe}, UAVs are explored to perform simultaneous jamming and sensing of UAVs acting as eavesdroppers by exploiting the jamming signals for sensing purposes. Therein, sensing information is used to perform optimal online resource allocation to maximise the number of securely served users, constrained by the requirements on the information leakage to the eavesdropper and the data rate to the legitimate users. Besides, in~\cite{art:UAVs_Guerra}, a UAV-based distributed radar is proposed to perform distributed sensing to locate and track malicious UAVs using frequency modulated continuous wave (FMCW) waveforms. It was shown that the mobility and distributed nature of the UAV-based radar benefit the tracking accuracy for mobile nodes when compared with a fixed radar.
However, it does not make complete use of its distributed nature, as each UAV performs local sensing accounting only for the sensing information of its neighbouring UAVs, and there is no consideration of communication tasks. In the context of JCAS, in~\cite{Chen_JSACUAVSystem}, a general framework for a full-duplex JCAS UAV network is proposed, where area-based metrics are developed considering the sensing and communication parameters of the system and the sensing requirements. This work uses full-duplex operation for local sensing at each UAV while considering reflections from other UAVs as interference. Different from previous works, and considering the complexity of full-duplex systems, this work focuses on half-duplex operation and proposes a framework for performing grid-based distributed sensing relying on the coordination of multiple UAVs to sense a ground target located in an area of interest. It is considered that the MIMO UAVs employ OFDM waveforms and that digital beamforming is implemented at the receiver side. A periodogram is used for the estimation of the radar cross-section (RCS) of each cell in the grid, leveraging the knowledge of the geometry of the system. The RCS estimation is performed by all of the non-transmitting UAVs simultaneously, while one UAV is illuminating a certain sub-area of the grid. This process is repeated until all UAVs have illuminated their respective sub-areas; then all UAVs report the measured RCS of each cell of the grid to a UAV acting as a fusion center (FC), which performs information-level fusion. This procedure allows a half-duplex operation in a distributed sensing setting.
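The periodogram-based processing can be illustrated with a toy single-scatterer OFDM frame, in the spirit of~\cite{5776640}: dividing the received symbols by the transmitted ones element-wise removes the data, and a 2D DFT yields a delay-Doppler map whose peak gives the target bin. All sizes, the QPSK modulation, and the noiseless single-target channel below are illustrative assumptions, not the parameters of the proposed system:

```python
import numpy as np

Nc, Ns = 64, 32                      # subcarriers x OFDM symbols (toy sizes)
rng = np.random.default_rng(0)
tx = np.exp(1j*np.pi/2*rng.integers(0, 4, (Nc, Ns)))      # random QPSK frame
delay_bin, doppler_bin, a = 10, 5, 0.8                    # single scatterer (assumed)
n = np.arange(Nc)[:, None]                                # subcarrier index
m = np.arange(Ns)[None, :]                                # symbol index
rx = a*tx*np.exp(-2j*np.pi*n*delay_bin/Nc)*np.exp(2j*np.pi*m*doppler_bin/Ns)
F = rx / tx                                               # element-wise data removal
per = np.abs(np.fft.fft(np.fft.ifft(F, axis=0), axis=1))**2  # delay-Doppler periodogram
d_hat, v_hat = np.unravel_index(per.argmax(), per.shape)
print(d_hat, v_hat)                                       # recovers (10, 5)
```

In the distributed protocol, each non-transmitting UAV would form such a map from the reflections it captures and map the peak power, via the known geometry, to an RCS estimate per grid cell.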
\section{System Model}\label{sec:SysModel} \begin{figure}[bt] \centering \includegraphics[width=0.73\linewidth]{sysmod4.pdf} \caption{System model.\vspace{-1em}} \label{fig:sysModel} \end{figure} Consider the system depicted in Fig.~\ref{fig:sysModel}, where a single point-like target of RCS $\sigma_{\mathrm{T}}$ is positioned on a square area $S$ of side length $\ell$ meters. $U$ UAVs are deployed (for simplicity and insightfulness) at a common altitude $h$ and are coordinated to perform distributed sensing to locate the ground target. Each UAV $u\in\mathcal{U}$, where $\mathcal{U}$ denotes the set of all UAVs with $|\mathcal{U}|=U$, is positioned at coordinates $\mathbf{r}_u = (x_u,y_u,h)$. Also, the RCS of a ground cell is denoted by $\sigma_{G}$. Similar to~\cite{Chen_JSACUAVSystem}, it is assumed that each UAV has two antenna arrays, namely a square uniform planar array (UPA) (mounted facing downward) for sensing and a uniform linear array (ULA) (mounted horizontally) to communicate with the FC for information fusion and coordination tasks. The square UPA consists of $n\times n$ isotropic antenna elements spaced $\lambda/2$ apart, where $\lambda=c_0/f_0$ is the wavelength of the signal, $f_0$ is the frequency of the signal, and $c_0$ is the speed of light. \if\mycmd1 To perform sensing, UAV $u \in \mathcal{U}$ estimates the RCS of a certain point on the ground, denoted as $p$ and located at the coordinates $\mathbf{r}_p=(x_p,y_p,0)$. For this purpose, $u$ utilizes a digital receive beamformer $\mathbf{w}_p\in\mathbb{C}^{n^2\times 1}$. The reflection from point $p$ arriving at UAV $u$ has an angle-of-arrival (AoA) of $\varphi_{p,u} = (\theta_{p,u},\phi_{p,u})$, where $\theta_{p,u}$ corresponds to the elevation angle and $\phi_{p,u}$ to the azimuth.
The corresponding beam-steering vector $\mathbf{g}(\varphi_{p,u})$ has elements $g_{ij}(\varphi_{p,u})$ for $i,j = 0,...,n-1$, where $i$ indexes the antenna elements along the $x$ axis of the UPA and $j$ those along the $y$ axis, defined as~\cite{book:BalanisAntennas} \begin{align}\nonumber g_{ij}(\varphi_{p,u}) = &e^{-j\pi i\sin(\theta_{p,u})\sin(\phi_{p,u})} \times \\ &e^{-j\pi j\sin(\theta_{p,u})\cos(\phi_{p,u})}. \end{align} The steering matrix $\mathbf{G}_u \in \mathbb{C}^{n^2\times H}$ contains the steering vectors corresponding to the $H$ reflections captured at UAV $u$ as \begin{equation} \mathbf{G}_u = [\mathbf{g}(\varphi_{1,u}),..., \mathbf{g}(\varphi_{H,u})]_{n^2\times H}, \end{equation} where $n^2$ is the total number of antennas at UAV $u$. After applying the receive beamformer $\mathbf{w}_{p}$ at reception, the beam pattern from the reflections captured at $u$ is given by \begin{align} \pmb{\chi} = \mathbf{G}_u^T\mathbf{w}_{p} = [\chi(\varphi_{1,u}),...,\chi(\varphi_{H,u})]^T, \end{align} where $\chi(\varphi_{p,u})$ is the gain of the reflection coming from $p$ and $\pmb{\chi}$ is the $H\times 1$ beam pattern vector over all AoAs obtained by applying beamformer $\mathbf{w}_{p}$ at UAV $u$. \fi \section{Distributed Sensing Protocol}\label{sec:Protocol} For the sensing process, it is considered that the total area $S$ is sectioned into a grid composed of $L\times L$ square cells with dimensions $ d \times d $, where $d = \ell/L$. Each cell is characterised by its middle point $p$ of position $\mathbf{r}_p =(x_p,y_p,0)$ such that $p\in\mathcal{P}$, where $\mathcal{P}$ is the set of all cells. For notational simplicity, we will refer to a cell by its middle point $p$. The point $p^*$ represents the target, which is located at position $\mathbf{r}_{p^*} =(x_{p^*},y_{p^*},0)$.
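As a rough illustration, the steering-matrix and beam-pattern computation above can be sketched in NumPy; the array size and AoAs below are illustrative values (not taken from the paper), and the matched beamformer is used only as a placeholder:

```python
import numpy as np

def steering_vector(theta, phi, n):
    """Steering vector g(varphi_{p,u}) of an n x n half-wavelength UPA.

    Element (i, j) carries the phase of the paper's expression, with i
    indexing the x axis and j the y axis, i, j = 0, ..., n-1."""
    i = np.arange(n).reshape(-1, 1)
    j = np.arange(n).reshape(1, -1)
    phase = np.pi * np.sin(theta) * (i * np.sin(phi) + j * np.cos(phi))
    return np.exp(-1j * phase).reshape(-1)          # length n^2

# Steering matrix G_u for H illustrative reflections, and the beam
# pattern chi = G_u^T w_p for a beamformer matched to the first AoA.
n = 8
aoas = [(0.4, 0.3), (0.7, 1.5), (1.0, 2.8)]         # (theta, phi) pairs, H = 3
G_u = np.stack([steering_vector(t, p, n) for t, p in aoas], axis=1)  # n^2 x H
w_p = steering_vector(*aoas[0], n).conj() / n**2    # unit gain towards aoas[0]
chi = G_u.T @ w_p                                   # beam pattern vector, H x 1
```

The matched choice of $\mathbf{w}_p$ here is purely illustrative; the paper designs the beamformer via the LS and Capon formulations of Section~\ref{sec:BF}.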
It is also considered that, at a certain time, a UAV $u\in\mathcal{U}$ illuminates straight down with its UPA operating as a phased array, so that the half power beam width (HPBW) projection forms a circle on the ground. In this sense, it is assumed that the cells that are completely inside the largest inscribed square of the HPBW projection are the ones intended to be sensed via the reflections produced by the illumination of UAV $u$, and they are characterised as the cell set $\mathcal{P}_{u}$, while the set of non-intended illuminated cells $\mathcal{P}_{u}'$ contains the cells that are not inside the largest inscribed square, which are treated as clutter, as illustrated in Fig.~\ref{fig:gridCells}. \begin{figure}[bt] \centering \includegraphics[width=0.95\linewidth]{gridFig_Cellsu.pdf} \caption{Illumination grid.} \label{fig:gridCells} \vspace{-1em} \end{figure} In total, the set of illuminated cells is given as $\mathcal{Q}_{u} = \mathcal{P}_{u}\cup\mathcal{P}_{u}'$. The distributed sensing framework is summarized as follows. \begin{itemize}[label={},leftmargin=*] \item \textbf{Step 1:} The UAVs coordinate and assume their positions to cover the whole area of interest $S$, such that every cell in $\mathcal{P}$ is contained in a single $\mathcal{P}_u$, $u\in\mathcal{U}$. \item \textbf{Step 2:} UAV $u\in\mathcal{U}$ illuminates the ground directly below, acting as a phased array and illuminating the elements of $\mathcal{Q}_u$ and, potentially, the target $p^*$. \item \textbf{Step 3:} Every other UAV $u'\in\mathcal{U}\setminus\{u\}$ processes the incoming reflections by choosing a cell $p\in\mathcal{P}_u$ and, for that cell, \begin{itemize} \item computes and applies a digital receive beamformer as described in Section~\ref{sec:BF}, and \item computes the periodogram corresponding to $p$ and estimates its RCS as described in Section~\ref{sec:Periodogram}. \end{itemize} \item \textbf{Step 4:} Repeat Step 3 for all cells $p\in\mathcal{P}_u$.
\item \textbf{Step 5:} Repeat Steps 2-4 for all UAVs $u\in\mathcal{U}$. After this, each UAV $u$ has an estimated RCS map of the grid, $\mathbf{\Hat{\Gamma}}_u$, which is a matrix of RCS estimates of all cells in $\mathcal{P} \setminus \mathcal{P}_u$. This is because UAV $u$ does not estimate the RCS of the cells in $\mathcal{P}_u$, thus avoiding the need for a full-duplex system at the UAVs. \item \textbf{Step 6:} All UAVs $u\in\mathcal{U}$ send their RCS estimation maps $\mathbf{\Hat{\Gamma}}_u$ to the FC for information-level fusion. \item \textbf{Step 7:} The FC fuses the estimates into the fused RCS map $\mathbf{\Hat{\Gamma}}$, and, by assuming a non-reflective ground such that the RCS of the ground is smaller than that of the target ($\sigma_{\mathrm{G}} < \sigma_{\mathrm{T}}$), the target is estimated to be located in the cell of highest estimated RCS, i.e. in $\argmax \mathbf{\Hat{\Gamma}}$, as described in Section~\ref{sec:Fuse}. \end{itemize} \section{Beamformer Design}\label{sec:BF} The receive beamformer is designed so that the main lobe of the resulting beam pattern is steered towards the intended cell $p$ in order to estimate its RCS. To this end, two different approaches are considered for the design of the receive beamformer, namely a heuristic least-squares (LS) formulation and the minimum-variance beam pattern based on the Capon method. These approaches are described in the following. \subsection{Least-Squares heuristic approach} For this approach, the beamformer is obtained by solving the following constrained LS optimisation problem \cite{art:Shi_ILS,art:Zhang_DBR} \begin{subequations} \begin{alignat}{3} \mathrm{\textbf{P1:}}\;\;\;\;&\mathrm{minimise}&\qquad&|| \mathbf{G}^T\mathbf{w}_{p} - \mathbf{v} ||_2^2\\ &\mathrm{subject~to}&\qquad&|| \mathbf{w}_{p} ||_2^2 = 1, \end{alignat} \end{subequations} where $\mathbf{v}$ is the desired response vector over the $H'$ AoAs in the beam-steering matrix $\mathbf{G}\in\mathbb{C}^{n^2\times H'}$.
In this approach, a heuristic is employed in which the AoAs in $\mathbf{G}$ are chosen such that they span the elevation and azimuth ranges evenly, centred on the intended AoA $\varphi_{p,u}$. The AoAs are taken as a mesh of $n$ elevation angles and $4n$ azimuth angles respectively given by \begin{alignat}{3} \theta_i =&\!\mod{\left( \theta_{p,u} + \frac{i\pi}{2(n-1)}, \frac{\pi}{2} \right)}, \; & i=0,...,n-1\\ \phi_j =&\!\mod{\left( \phi_{p,u} + \frac{j2\pi}{4n-1}, 2\pi \right)}, \; & j=0,...,4n-1, \end{alignat} such that $H' = 4n^2$. The solution of \textbf{P1} is well known to be $\mathbf{w}_{p} = (\mathbf{G}^T)^\dagger \mathbf{v}$ \cite{art:Zhang_DBR}, where $(\cdot)^\dagger$ denotes the pseudo-inverse; since $\mathbf{G}^T$ is a matrix with more rows than columns, the problem can be solved efficiently by applying Cholesky factorization. Therefore, the iterative LS algorithm proposed in \cite{art:Shi_ILS} can be employed to solve \textbf{P1}. \subsection{Capon method} The Capon method provides minimum-variance distortionless-response beamforming and can be formulated as a quadratic program (QP) convex optimisation problem \cite{art:Stoica_Capon} \begin{subequations} \begin{alignat}{3} \mathrm{\textbf{P2:}}\;\;\;\;&\mathrm{minimise}&\qquad&\mathbf{w}_{p}^H\mathbf{R}\mathbf{w}_{p}\\ &\mathrm{subject~to}&\qquad& \mathbf{w}_{p}^H\mathbf{g}(\varphi_{p,u'}) = 1, \end{alignat} \end{subequations} where $\mathbf{R}\in\mathbb{C}^{n^2\times n^2}$ is the covariance matrix of the received signal over the desired direction, which can be defined as $\mathbf{R} = \mathbf{g}(\varphi_{p,u'})\mathbf{g}(\varphi_{p,u'})^H + \alpha \mathbf{I}$~\cite{art:Shi_ILS}, where $\mathbf{I}\in\mathbb{R}^{n^2\times n^2}$ is the identity matrix and $\alpha$ is a small real number.
The solution for \textbf{P2} is obtained as in~\cite{art:Stoica_Capon}, and is given by \begin{equation} \mathbf{w_{p}} = \frac{\mathbf{R}^{-1}\mathbf{g}(\varphi_{p,u'})}{\mathbf{g}(\varphi_{p,u'})^H\mathbf{R}^{-1}\mathbf{g}(\varphi_{p,u'})}. \end{equation} \section{Periodogram}\label{sec:Periodogram} \if\mycmdPeriodogram0 \fi \if\mycmdPeriodogram1 To perform sensing, the illuminating UAVs transmit frames consisting of $N$ OFDM symbols, each composed of $M$ orthogonal subcarriers. The transmitted OFDM frame can be expressed as a matrix denoted by $\mathbf{F_{TX}}=[c^{TX}_{k,l}]\in\mathcal{A}^{N\times M}$, where $k=0,...,N-1$, $l=0,...,M-1$, and $\mathcal{A}$ is the modulated symbol alphabet. At the sensing UAVs, the received frame matrix is denoted by $\mathbf{F_{RX}}=[c^{RX}_{k,l}]$ and is composed of the received baseband symbols corresponding to all reflections from $\mathcal{Q}_{\mathrm{u}}$ at UAV $u'$. The elements of the received frame matrix have the form~\cite{art:Baquero_OFDM,art:OFDM_Samir} \begin{align}\label{eq:bPointTarg_Mult} \nonumber &c^{RX}_{k,l} = ~~b_{p}c^{TX}_{k,l}\chi(\varphi_{p,u'}) e^{j2\pi f_{D,p}T_o l}e^{-j2\pi \tau_{p} \Delta f k}e^{-j\zeta_{p}}\\ \nonumber &+\sum_{p'\in\mathcal{Q}_{\mathrm{u}}\setminus\{p\}} b_{p'}c^{TX}_{k,l}\chi(\varphi_{p',u'}) e^{j2\pi f_{D,p'}T_o l}e^{-j2\pi \tau_{p'} \Delta f k}e^{-j\zeta_{p'}}\\ &+ \delta_{u}b_{p^*}c^{TX}_{k,l}\chi(\varphi_{p^*,u'}) e^{j2\pi f_{D,p^*}T_o l}e^{-j2\pi \tau_{p^*} \Delta f k}e^{-j\zeta_{p^*}}+ z_{k,l}, \end{align} where $f_{D,p}$ is the Doppler shift experienced by the reflection from $p$ (assumed constant through the frame), $T_o$ is the OFDM symbol duration (including the cyclic prefix time $T_{CP}$), $\tau_p$ is the delay of the reflection from $p$, $\Delta f$ is the inter-carrier spacing, $\zeta_p$ is a random phase shift of the reflection from $p$, $z_{k,l}$ is additive white Gaussian noise (AWGN) of spectral density $N_0$, and $b_p$ is a term embedding the propagation of
the wave and the reflecting characteristics of the reflector in $p$. In this expression, the first term corresponds to the intended cell $p$; the second term corresponds to the interference from the other cells, $p'$, in $\mathcal{Q}_u$; and the third term corresponds to the target reflection, with $\delta_{u}=1$ if the target has been illuminated by UAV $u$, and $\delta_{u}=0$ otherwise. Considering a point-source model, $b_p$ is the amplitude attenuation of the signal, given by~\cite{art:OFDM_Samir} \begin{equation}\label{eq:b_val} b_p = \sqrt{\frac{P_T G_T \sigma_p \lambda^2}{(4\pi)^3 d_{p,1}^2d_{p,2}^2}}, \end{equation} where $\sigma_p\in\{\sigma_{\mathrm{G}},\sigma_{\mathrm{T}}\}$, $P_T$ is the transmit power, $G_T$ is the transmit antenna gain, $d_{p,1}$ is the distance from $u$ to $p$, and $d_{p,2}$ is the distance from $p$ to $u'$. The received complex symbols $c^{RX}_{k,l}$ contain the transmitted symbols $c^{TX}_{k,l}$ and are thus data-dependent. In order to process $\mathbf{F_{RX}}$, this data dependency is removed by performing the element-wise division $\mathbf{F}$$=$$\mathbf{F_{RX}}\oslash \mathbf{F_{TX}}$, obtaining the processed samples with elements $c_{k,l}$$ = $$c^{RX}_{k,l}/c^{TX}_{k,l}$. To estimate the delay and Doppler from $\mathbf{F}$, a common approach for OFDM signals is to use the periodogram, which provides the maximum likelihood (ML) estimator~\cite{art:Baquero_OFDM}.
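The periodogram-based processing can be sketched as follows. The frame below is a synthetic single-reflection example with known delay and Doppler bins (illustrative values only), and both stages are implemented as forward DFTs matching the negative-exponent sums of the periodogram expression; depending on the sign convention adopted for the delay axis, an IFFT may be used instead:

```python
import numpy as np

def periodogram(F, N_fft, M_fft):
    """Two-stage transform periodogram of the processed frame F (N x M).

    A length-M_fft DFT is taken across one axis and a length-N_fft DFT
    across the other, zero-padded when the transform lengths exceed the
    frame dimensions, and normalised by 1/(N*M)."""
    N, M = F.shape
    inner = np.fft.fft(F, n=M_fft, axis=1)      # transform over index l
    outer = np.fft.fft(inner, n=N_fft, axis=0)  # transform over index k
    return np.abs(outer) ** 2 / (N * M)

# Synthetic frame with a single reflection whose phase ramps place the
# periodogram peak at bins (n0, m0) = (3, 5).
N, M = 16, 64
n0, m0 = 3, 5
k = np.arange(N).reshape(-1, 1)
l = np.arange(M).reshape(1, -1)
F = np.exp(1j * 2 * np.pi * (n0 * k / N + m0 * l / M))
P = periodogram(F, N, M)
n_hat, m_hat = np.unravel_index(np.argmax(P), P.shape)   # -> (3, 5)
```

Plugging the peak value $P_{\mathbf{F}}(\hat{n},\hat{m})$ into the radar-equation inversion then yields the RCS estimate of the cell.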
The periodogram takes the fast Fourier transform (FFT) of $\mathbf{F}$ over OFDM symbols, followed by the inverse FFT (IFFT) over subcarriers at a given delay-Doppler pair $(n,m)$, as~\cite{art:Baquero_OFDM} \begin{align}\label{eq:periodogram} \nonumber P&_{\mathbf{F}}(n,m) = \\ &\frac{1}{NM} \left| \sum_{k=0}^{N'-1}\left( \sum_{l=0}^{M'-1} c_{k,l} e^{-j2\pi l\frac{m}{M'}} \right) e^{-j2\pi k\frac{n}{N'}} \right|^2, \end{align} where $M'\geq M$ and $N'\geq N$ are the lengths of the FFT and IFFT operations respectively, $n=0,...,N'-1$ and $m=0,...,M'-1$\footnote{If $M' > M$ or $N' > N$ is needed in order to have more $n$ or $m$ values, zero-padding is applied.}. It has been proven that the ML estimator of the delay and Doppler for a single target coincides with the maximum point of the periodogram, $(\hat{n}, \hat{m}) = \argmax_{n,m} P_{\mathbf{F}}(n,m)$~\cite{art:Baquero_OFDM}, which is attained when \begin{align}\label{eq:periodogramMax} \frac{f_D}{\Delta f} -\frac{\hat{m}}{M} = 0 \;\;\;\land\;\;\; \frac{\tau}{T_o} -\frac{\hat{n}}{N} =0. \end{align} Then, from \eqref{eq:bPointTarg_Mult}, \eqref{eq:b_val} and \eqref{eq:periodogram}, $\sigma_p$ can be estimated as \begin{equation}\label{eq:rcsestcalc} \Hat{\sigma}_p = \left(\frac{1}{NM}\right) \frac{P_{\mathbf{F}}(\hat{n}, \hat{m})(4\pi)^3d_{p,1}^2d_{p,2}^2}{P_TG_T\lambda^2}. \end{equation} Then, considering the geometry and protocol of the system, each UAV can set $\hat{m}$, $M$, $\hat{n}$ and $N$ so that \eqref{eq:periodogramMax} is met exactly for each cell $p$ to be sensed, and the corresponding RCS estimate is obtained by computing \eqref{eq:rcsestcalc}. \fi \section{Information-Level Fusion}\label{sec:Fuse} After all UAVs $u\in\mathcal{U}$ have sent their local RCS estimates for each cell on the grid, $\mathbf{\Hat{\Gamma}}_{u}$, to the FC, it performs information-level fusion of the local estimates to obtain a global estimate $\mathbf{\Hat{\Gamma}}$.
Then, the following hypothesis test is performed for all elements of the grid, $p\in\mathcal{P}$ \begin{subequations}\label{eq:hypothesis} \begin{alignat}{3} \mathcal{H}_0:\;\;\;\;&|| \mathbf{r}_{p^*} - \mathbf{r}_{p} ||_\infty > \frac{d}{2} \\ \mathcal{H}_1:\;\;\;\;&|| \mathbf{r}_{p^*} - \mathbf{r}_{p} ||_\infty \leq \frac{d}{2}, \end{alignat} \end{subequations} where $||\cdot||_\infty$ is the $L^\infty$ norm. Hypothesis $\mathcal{H}_1$ corresponds to the case where the target $p^*$ is located in the corresponding cell $p$, and it is declared at the cell $p$ that has the maximum estimate $\Hat{\sigma}=\max \mathbf{\Hat{\Gamma}}$. On the other hand, $\mathcal{H}_0$ corresponds to the case where the target is not located at $p$, and it is declared at every other cell $p$ whose estimate $\Hat{\sigma}$ satisfies $\Hat{\sigma} < \max \mathbf{\Hat{\Gamma}}$. The information-level fusion is carried out using two techniques, namely averaging the local estimates and pre-normalising them before averaging. \textbf{Average: } The FC averages the values of the cells over the local maps from all UAVs in $\mathcal{U}$ such that $\mathbf{\Hat{\Gamma}} = \frac{1}{U}\sum_{u\in\mathcal{U}} \mathbf{\Hat{\Gamma}}_{u}$. \textbf{Pre-normalised average: } An average of the pre-normalised local maps is obtained, in which each local map $\mathbf{\Hat{\Gamma}}_{u}$ is normalised to $[0,1]$ as \begin{equation} \Bar{\sigma} = \frac{\Hat{\sigma} - \min{(\mathbf{\Hat{\Gamma}}_{u})}}{\max{(\mathbf{\Hat{\Gamma}}_{u})} - \min{(\mathbf{\Hat{\Gamma}}_{u})}}\enskip,\enskip \forall \Hat{\sigma}\in\mathbf{\Hat{\Gamma}}_{u} \enskip,\enskip \forall u \in\mathcal{U}. \end{equation} The resulting normalised local maps $\Bar{\Gamma}_{u}$ are then averaged as in the previous approach.
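The two fusion rules and the maximum-RCS detection decision can be sketched as follows; the two $3\times 3$ local maps are illustrative numbers only:

```python
import numpy as np

def fuse_maps(local_maps, pre_normalise=False):
    """Information-level fusion of local RCS maps Gamma_u at the FC.

    pre_normalise=False: plain average over UAVs.
    pre_normalise=True:  each map is first scaled to [0, 1]
                         (pre-normalised average)."""
    maps = [np.asarray(g, dtype=float) for g in local_maps]
    if pre_normalise:
        maps = [(g - g.min()) / (g.max() - g.min()) for g in maps]
    return sum(maps) / len(maps)

# Two illustrative local maps with the strongest return in cell (1, 1):
g1 = np.array([[0.1, 0.2, 0.1],
               [0.1, 5.0, 0.3],
               [0.2, 0.1, 0.1]])
g2 = np.array([[0.2, 0.1, 0.2],
               [0.1, 4.0, 0.2],
               [0.1, 0.3, 0.2]])
fused = fuse_maps([g1, g2], pre_normalise=True)
# H1 is declared at the argmax cell of the fused map:
target_cell = np.unravel_index(np.argmax(fused), fused.shape)   # -> (1, 1)
```

In the protocol each local map would lack the entries of the UAV's own illuminated cells; the dense maps above are a simplification for illustration.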
\section{Numerical Results}\label{sec:Results} In this section, the performance of the proposed sensing protocol is evaluated in terms of the probability of detection, where a detection is considered to occur whenever $\mathcal{H}_1$ is achieved for the cell that contains the target. For this purpose, Monte Carlo simulations were performed, where the target is randomly located at each iteration, and the simulation parameters are those presented in Table~\ref{tab:commonPar}, unless stated otherwise. The value of $\sigma_\mathrm{T}$ is set to 10 dBsm\footnote{$\sigma$[dBsm] = 10$\log_{10}(\sigma$[$m^2$]$/1 m^2)$}, which is a reasonable value for large vehicles~\cite{art:RCSvals01}, and that of the ground to -30 dBsm, which is reasonable for grass or dirt~\cite{art:RCSvalsGND}. The OFDM parameters are taken from \cite{Chen_JSACUAVSystem}. The UAVs are uniformly distributed across $S$ so as to cover the whole grid, each illuminating $L/\sqrt{U}\times L/\sqrt{U}$ cells while avoiding intersections between the $\mathcal{P}_{u}$ sets, unless stated otherwise. \begin{table}[h!]\vspace{-1em}\centering \caption{Common simulation parameters}\label{tab:commonPar} \begin{tabular}{|c|c||c|c|} \hline \textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value} \\ \hline $P_T$ & 1 [W] & $M$ & 64 \\ \hline $G_T$ & 1 & $n$ & 8 \\ \hline $\ell$ & 100 [m] & $f_0$ & 24 [GHz] \\ \hline $U$ & 16 & $BW$ & 200 [MHz] \\ \hline $N_0$ & -174 [dBm/Hz] & $T_{CP}$ & 2.3 [$\mu$s] \\ \hline $\sigma_{\mathrm{G}}$ & -30 [dBsm] & $L$ & 20 \\ \hline $\sigma_{\mathrm{T}}$ & 10 [dBsm] & $f_D$ & 0 [Hz] \\ \hline $N$ & 16 & & \\ \hline \end{tabular} \vspace{-1em} \end{table} \begin{figure} \centering \includegraphics[width=0.73\linewidth]{result_d_ALL_N_1_PDF.pdf} \caption{Detection probability of the target for different cell lengths $d$ for different beamforming techniques, fusion techniques and $\sigma_{\mathrm{G}}$ values.
The number of UAVs and the number of cells illuminated per UAV are kept constant, so larger $d$ values imply a larger total area and a higher UAV altitude.\vspace{-1em}} \label{fig:resultNew_Pd_d_BF} \end{figure} \begin{figure} \centering \includegraphics[width=0.73\linewidth]{result_Lx_ALL_N_0_PDF.pdf} \caption{Detection probability of the target for different cell lengths $d$ for different beamforming and fusion techniques. The total area and UAV altitude are kept constant, so larger $d$ values imply fewer cells illuminated per UAV.\vspace{-2em}} \label{fig:resultNew_Pd_d_VAR} \end{figure} In Fig.~\ref{fig:resultNew_Pd_d_BF}, the detection probability is shown as a function of the cell side length $d$ for different $\sigma_{\mathrm{G}}$ values. The number of intended cells per UAV is kept constant, so the size of the cells determines the total size of the area $S$ and the altitude of the UAVs $h$, which increases as $d$ increases to accommodate the same number of larger cells in the HPBW. Note that as $d$ increases, $h$ increases and the $b_p$ value from \eqref{eq:b_val} moves closer to the noise level, so the probability of detection decreases. There exists a maximum around $d=2$ m, where the best probability of detection is achieved. As expected, as $\sigma_{\mathrm{G}}$ increases, the difference between the RCS of the ground and that of the target decreases, so the probability of detection also decreases. Comparing the beamforming techniques, both show a similar behaviour. However, when comparing fusion techniques, pre-normalising the local estimates performs better only for larger $d$ and $\sigma_{\mathrm{G}}$ values. Conversely, in Fig.~\ref{fig:resultNew_Pd_d_VAR} the total size of the area $S$ and the altitude of the UAVs $h$ are kept constant while varying $d$. This is accomplished by adjusting the number of cells in the grid, $L$. In this case, note that higher $d$ values lead to a better probability of detection, as there is a larger area per cell.
A local optimum can also be appreciated around $d=4$ m, which offers more precise detection. \begin{figure} \centering \includegraphics[width=0.73\linewidth]{result_Deltad2_ALL_N_0_PDF.pdf} \caption{Detection probability of the target at a $\Delta$-cell distance for different $\sigma_{\mathrm{G}}$ values and different cell sizes $d$.\vspace{-1em}} \label{fig:resultNew_Pd_Dd} \end{figure} Furthermore, in Fig.~\ref{fig:resultNew_Pd_Dd} the detection probability is plotted for different values of $\sigma_{\mathrm{G}}$, of a modified threshold $d(\frac{1}{2} + \Delta$) in the hypothesis test \eqref{eq:hypothesis}, and of $d$. The curves show the probability of detecting the target at $\Delta$ cells away from the cell with the maximum value in $\Bar{\Gamma}$. Note that values of $\sigma_{\mathrm{G}}\leq -10$ dBsm yield a probability of detection close to 100\% for $\Delta \geq 1$, i.e. within a distance of one cell. This suggests a probability of detection above 99.89\% with high accuracy within $5$ cm ($d=0.01$ m, $\Delta=2$, $\sigma_{\mathrm{G}}\leq -10$ dBsm), which is more accurate than other state-of-the-art works utilising MIMO OFDM bistatic radar, such as~\cite{art:Lyu_MIMOOFDMEurasip}, which achieves an accuracy of $3$ m using passive radar in a multi-user MIMO OFDM setting.
The results show that, for small $\sigma_{\mathrm{G}}$ values, most misdetections occur in an adjacent cell. \begin{figure} \centering \includegraphics[width=0.73\linewidth]{result_n_ALL_N_0_PDF.pdf} \caption{Detection probability of the target for different total numbers of antennas of the UAV UPA, $n^2$, for different beamforming techniques, fusion techniques and $\sigma_{\mathrm{G}}$ values.\vspace{-2em}} \label{fig:resultNew_Pd_n_BF} \end{figure} Fig.~\ref{fig:resultNew_Pd_n_BF} illustrates the detection probability as a function of the number of antennas in the UPAs of the UAVs for different $\sigma_{\mathrm{G}}$ values. Therein, the number of UAVs and the number of illuminated cells per UAV are kept constant, so narrower beams imply that the UAVs increase their altitudes to accommodate the same intended cells. It is worth noting that increasing the number of antennas results in a narrower main beam, and as the beam becomes narrower (higher $n$ values), an improvement in the probability of detection is observed due to the increased directionality and precision towards the intended sensed cells. However, when the beam becomes too narrow, a small beam misalignment has a bigger impact on the detection of the target, and the increase in UAV altitude causes a stronger path loss, bringing the received signal closer to the noise level, so the probability of detection decreases. For larger $\sigma_{\mathrm{G}}$ values, the probability of detection decreases even further, as expected. It is also noticed that both fusion techniques show similar detection probability results, as do both beamforming techniques. However, the Capon method shows slightly better performance for a high number of antennas and a small $\sigma_{\mathrm{G}}$ value.
Moreover, for smaller $\sigma_{\mathrm{G}}$ values, fusion by averaging slightly outperforms the pre-normalised averaging approach, while for higher $\sigma_{\mathrm{G}}$ values the opposite is true. \begin{figure} \centering \includegraphics[width=0.73\linewidth]{result_z_ALL_N_0_PDF.pdf} \caption{Detection probability of the target at a $\Delta$-cell distance for different $\sigma_{\mathrm{G}}$ values and different UAV altitudes $h$.\vspace{-2em}} \label{fig:result_Pd_zUAV} \end{figure} Fig.~\ref{fig:result_Pd_zUAV} illustrates the detection probability as a function of the common UAV altitude $h$ for varying $\sigma_{\mathrm{G}}$ and $\Delta$ values. The UAVs are positioned in a configuration similar to that of the previous figure; hence, fewer cells are covered by the main beam of the transmitting UAVs at lower altitudes, resulting in some cells not being illuminated by any UAV. The maximum altitude is taken to be the one at which all cells are illuminated exactly once (no overlapping). As $h$ increases, each $\mathcal{P}_{u}$ set goes from allocating $1\times 1$ cell, to $3\times 3$ cells, and finally to $5\times 5$ cells, such that all cells are illuminated once. This behaviour can be seen in the $\Delta=0$ curve, where a sudden increase in the probability of detection is observed at altitudes where more cells are allocated to $\mathcal{P}_{u}$; this tendency is also observed for higher $\sigma_{\mathrm{G}}$ values, with worse performance. For higher $\Delta$ values, the probability of detection is higher and increases smoothly with $h$, as a higher $\Delta$ implies that a detection can be considered successful on non-illuminated cells that are adjacent to illuminated cells. This is particularly observed for $\Delta=2$, where every cell in the grid is considered for detection.
\vspace{-1em} \section{Conclusions}\label{sec:Conclusions} \vspace{-0.2em} In this paper, a half-duplex distributed sensing framework for UAV-assisted networks was proposed, in which the area of interest is sectioned into a grid and the RCS of each cell is estimated by employing receive digital beamforming and a periodogram-based approach, and later sent to an FC for information-level fusion. Results show that the detection probability of the system increases for ground cells of smaller RCS values and that higher accuracy can be achieved within a one-cell distance. Increasing the number of antennas at the UAVs improves the detection probability of the target; however, the accompanying increase in the altitude of the UAVs can deteriorate it. Moreover, it was found that the detection probability is higher for larger cell sizes $d$ if the UAV altitude is kept constant, although a local maximum appears at a small $d$ value. Future works can consider the effect of the Doppler shift and the position control of the UAVs to increase the sensing performance of the framework. \section*{Acknowledgement} This research has been supported by the Academy of Finland, 6G Flagship program under Grant 346208 and project FAITH under Grant 334280. \bibliographystyle{unsrt}
\section{Positive subsets of a lattice\label{section positive}} \medskip The goal of this section is to study the set of positive subsets of $\Lambda$. We will endow this set with a topology in the next section. \def{\mathrm{tor}}{{\mathrm{tor}}} \bigskip \subsection{Definitions, preliminaries\label{sous positive}} A subset $X$ of $\Lambda$ is said to be {\it positive} if the following conditions are satisfied: \begin{itemize}\itemindent1cm \itemth{P1} $\Lambda=X \cup (-X)$. \itemth{P2} $X+X \subset X$. \itemth{P3} $X \cap (-X)$ is a subgroup of $\Lambda$. \end{itemize} We denote by $\PC\! os(\Lambda)$ the set of positive subsets of $\Lambda$. Let us give a few examples. First of all, note that $\Lambda \in \PC\! os(\Lambda)$. Let $\Gamma$ be a totally ordered group and let $\varphi : \Lambda \rightarrow \Gamma$ be a group morphism. Set $$\Pos(\varphi)=\{\lambda \in \Lambda~|~\varphi(\lambda) \mathop{\geqslant}\nolimits 0\}$$ $$\Pos^+(\varphi)=\{\lambda \in \Lambda~|~\varphi(\lambda) > 0\}.\leqno{\text{and}}$$ Then it is clear that \refstepcounter{theo}$$~\label{noyau pos} \Ker \varphi = \Pos(\varphi) \cap \Pos(-\varphi) = \Pos(\varphi) \cap -\Pos(\varphi) \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ and that \refstepcounter{theo}$$~\label{positif exemple} \text{\it $\Pos(\varphi)$ is a positive subset of $\Lambda$}. \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ \bigskip \begin{lem}\label{proprietes positives} Let $X$ be a positive subset of $\Lambda$. Then: \begin{itemize} \itemth{a} $-X \in \PC\! os(\Lambda)$. \itemth{b} $0 \in X$. \itemth{c} If $\lambda \in \Lambda$ and $r \in \ZM_{>0}$ are such that $r\lambda \in X$, then $\lambda \in X$. \itemth{d} $\Lambda/(X \cap (-X))$ is torsion-free. \end{itemize} \end{lem} \bigskip \begin{proof} (a) is immediate. (b) follows from property (P1) of positive subsets. (d) follows from (c). It remains to prove (c).
Let $\lambda \in \Lambda$ and $r \in \ZM_{>0}$ be such that $r\lambda \in X$. If $\lambda \not\in X$, then $-\lambda \in X$ by property (P1). Hence $\lambda=r\lambda + (r-1)(-\lambda) \in X$ by (P2), contradicting the assumption. Therefore $\lambda \in X$. \end{proof} \bigskip We now prove an easy partial converse to property \ref{positif exemple}. Let $X \in \PC\! os(\Lambda)$. Denote by $\can_X : \Lambda \rightarrow \Lambda/(X \cap(-X))$ the canonical morphism. If $\gamma$ and $\gamma'$ belong to $\Lambda/(X \cap(-X))$, we write $\gamma \mathop{\leqslant}\nolimits_X \gamma'$ if some representative of $\gamma' - \gamma$ belongs to $X$. It is easy to check that \refstepcounter{theo}$$~\label{relation independante} \text{\it $\gamma \mathop{\leqslant}\nolimits_X \gamma'$ if and only if every representative of $\gamma'-\gamma$ belongs to $X$.} \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ One then easily deduces from properties (P1), (P2) and (P3) of positive subsets that \refstepcounter{theo}$$~\label{ordre X} \text{\it $(\Lambda/(X \cap (-X)),\mathop{\leqslant}\nolimits_X)$ is a totally ordered abelian group} \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ and that \refstepcounter{theo}$$~\label{X pos} X=\Pos(\can_X). \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ \bigskip \subsection{Consequences of the Hahn-Banach theorem} If $X$ is a positive subset of $\Lambda$, set $X^+=X \setminus (-X)$. We then have \refstepcounter{theo}$$~\label{disjonction} \Lambda = X \hskip1mm\dot{\cup}\hskip1mm (-X^+) = X^+ \hskip1mm\dot{\cup}\hskip1mm (-X) = X^+ \hskip1mm\dot{\cup}\hskip1mm (X \cap (-X)) \hskip1mm\dot{\cup}\hskip1mm (-X^+), \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ where $\dot{\cup}$ denotes disjoint union.
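As a concrete illustration of these notions (an added example under the conventions above), one may take $\Lambda=\ZM^2$, $\Gamma=\RM$ and the group morphism $\varphi(a,b)=a+b\sqrt{2}$:

```latex
% Since sqrt(2) is irrational, phi(a,b) = a + b*sqrt(2) is injective
% on Z^2, hence Pos(phi) \cap -Pos(phi) = Ker(phi) = {0}, and
$$X=\Pos(\varphi)=\{(a,b) \in \ZM^2~|~a+b\sqrt{2} \mathop{\geqslant}\nolimits 0\},
\qquad
X^+=\Pos^+(\varphi)=X\setminus\{0\},$$
% so here the total order induced on Lambda/(X \cap (-X)) = Lambda
% is exactly the one pulled back from R through phi.
```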
Moreover, if $\varphi : \Lambda \rightarrow \Gamma$ is a morphism of abelian groups and $\Gamma$ is a totally ordered group, then \refstepcounter{theo}$$~\label{pos plus} \Pos^+(\varphi) = \Pos(\varphi)^+. \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ If $\varphi$ is a linear form on $V$, we will write, by abuse of notation, $\Pos(\varphi)$ and $\Pos^+(\varphi)$ for the subsets $\Pos(\varphi|_\Lambda)$ and $\Pos^+(\varphi|_\Lambda)$ of $\Lambda$. \bigskip \begin{lem}\label{equivalent separant} Let $X$ be a proper positive subset of $\Lambda$, let $\Gamma$ be an Archimedean totally ordered abelian group, and let $\varphi : \Lambda \rightarrow \Gamma$ be a group morphism. Then the following conditions are equivalent: \begin{itemize} \itemth{1} $X \subseteq \Pos(\varphi)$; \itemth{2} $X^+ \subseteq \Pos(\varphi)$; \itemth{3} $\Pos^+(\varphi) \subseteq X^+$; \itemth{4} $\Pos^+(\varphi) \subseteq X$. \end{itemize} \end{lem} \begin{proof} It is clear that (1) implies (2) and that (3) implies (4). \medskip Let us show that (2) implies (3). Assume that (2) holds. Let $\lambda \in \Lambda$ be such that $\varphi(\lambda) > 0$ and suppose that $\lambda \not\in X^+$. Then, by \ref{disjonction}, $\lambda \in -X$. Now, if $\mu \in \Lambda$, there exists $k \in \ZM_{> 0}$ such that $\varphi(\mu - k \lambda)=\varphi(\mu)-k\varphi(\lambda) < 0$ (since $\Gamma$ is Archimedean). Hence $\mu-k\lambda \not\in X^+$ by (2). Hence $\mu - k \lambda \in -X$ by \ref{disjonction}. Therefore $\mu=(\mu -k\lambda) + k\lambda \in (-X)$. Thus $\Lambda \subseteq -X$, contradicting the assumption. \medskip Let us show that (4) implies (1). Assume then that $\Pos^+(\varphi) \subseteq X$. Taking complements in $\Lambda$, we obtain $(-X^+)=(-X)^+ \subseteq \Pos(-\varphi)$, and hence, since (2) implies (3), we have $\Pos^+(-\varphi) \subseteq -X^+$.
En reprenant le compl\'ementaire dans $\Lambda$, on obtient $X \subseteq \Pos(\varphi)$. \end{proof} \bigskip Nous aurons aussi besoin du lemme suivant~: \bigskip \begin{lem}\label{solutions} Soient $\lambda_1$,\dots, $\lambda_n$ des \'el\'ements de $\Lambda$ et supposons trouv\'e un $n$-uplet $t_1$,\dots, $t_n$ de nombres r\'eels {\bfseries\itshape strictement} positifs tels que $t_1 \lambda_1 + \cdots + t_n \lambda_n=0$. Alors il existe des entiers naturels non nuls $r_1$,\dots, $r_n$ tels que $r_1 \lambda_1 + \cdots + r_n \lambda_n = 0$. \end{lem} \bigskip \begin{proof} Notons $\SC$ l'ensemble des $n$-uplets $(u_1,\dots,u_n)$ de nombres r\'eels qui satisfont $$\begin{cases} u_1+\cdots + u_n = 1, &\\ u_1 \lambda_1 + \cdots + u_n \lambda_n = 0.&\\ \end{cases}$$ \'Ecrit dans une base de $\Lambda$, ceci est un syst\`eme lin\'eaire d'\'equations \`a coefficients dans $\QM$. Le proc\'ed\'e d'\'elimination de Gauss montre que l'existence d'une solution {\it r\'eelle} implique l'existence d'une solution {\it rationnelle} $t^\circ = (t_1^\circ,\dots, t_n^\circ)$ et l'existence de vecteurs $v_1$,\dots, $v_r \in \QM^n$ tels que $$\SC=\{t^\circ + x_1 v_1 + \cdots +x_r v_r~|~(x_1,\dots,x_r) \in \RM^r\}.$$ En particulier, il existe $x_1$,\dots, $x_r \in \RM$ tels que $$(t_1,\dots,t_n) = t^\circ + x_1 v_1 + \cdots +x_r v_r.$$ Puisque $t_1$,\dots, $t_n$ sont strictement positifs, il existe $x_1'$,\dots, $x_r'$ dans $\QM$ tels que les coordonn\'ees de $t^\circ + x_1' v_1 + \cdots +x_r' v_r$ soient strictement positives. Posons alors $(u_1,\dots,u_n)=t^\circ + x_1' v_1 + \cdots +x_r' v_r$. On a donc $u_i \in \QM_{> 0}$ pour tout $i$ et $$u_1 \lambda_1 + \cdots + u_n \lambda_n = 0.$$ Quitte \`a multiplier par le produit des d\'enominateurs des $u_i$, on a trouv\'e $r_1$,\dots, $r_n \in \ZM_{> 0}$ tels que $$r_1 \lambda_1 + \cdots + r_n \lambda_n = 0,$$ comme attendu.
\end{proof} \bigskip \begin{theo}\label{hahn banach} Soit $X$ une partie positive de $\Lambda$ diff\'erente de $\Lambda$. Alors~: \begin{itemize} \itemth{a} Il existe une forme lin\'eaire $\varphi$ sur $V$ telle que $X \subseteq \Pos(\varphi)$. \itemth{b} Si $\varphi$ et $\varphi'$ sont deux formes lin\'eaires sur $V$ telles que $X \subseteq \Pos(\varphi) \cap \Pos(\varphi')$, alors il existe $\kappa \in \RM_{> 0}$ tel que $\varphi'=\kappa \varphi$. \end{itemize} \end{theo} \bigskip \begin{proof} Notons $\CC^+$ l'enveloppe convexe de $X^+$. Notons que $X^+$ (et donc $\CC^+$) est non vide car $X \neq \Lambda$. Nous allons commencer par montrer que $0 \not\in \CC^+$. Supposons donc que $0 \in \CC^+$. Il existe donc $\lambda_1$,\dots, $\lambda_n$ dans $X^+$ et $t_1$,\dots, $t_n$ dans $\RM_{> 0}$ tels que $$\begin{cases} t_1+\cdots + t_n = 1, &\\ t_1 \lambda_1 + \cdots + t_n \lambda_n = 0.&\\ \end{cases}$$ D'apr\`es le lemme \ref{solutions}, il existe $r_1$,\dots, $r_n \in \ZM_{> 0}$ tels que $r_1 \lambda_1 + \cdots + r_n \lambda_n = 0$, c'est-\`a-dire $r_1 \lambda_1 = - (r_2 \lambda_2 + \cdots + r_n \lambda_n)$. Donc $r_1\lambda_1 \in X \cap -X$ (voir la propri\'et\'e (P2)). Donc, d'apr\`es le lemme \ref{proprietes positives} (a) et (c), on a $\lambda_1 \in X \cap -X$, ce qui est impossible car $\lambda_1 \in X^+ = X \setminus (-X)$. Cela montre donc que $$0 \not\in \CC^+.$$ L'ensemble $\CC^+$ \'etant convexe, il d\'ecoule du th\'eor\`eme de Hahn-Banach qu'il existe une forme lin\'eaire non nulle $\varphi$ sur $V$ telle que $$\CC^+ \subseteq \{\lambda \in V~|~\varphi(\lambda) \mathop{\geqslant}\nolimits 0\}.$$ En particulier, $$X^+ \subseteq \CC^+ \cap \Lambda \subseteq \Pos(\varphi).\leqno{(*)}$$ D'apr\`es le lemme \ref{equivalent separant}, et puisque $\RM$ est archim\'edien, on a bien $X \subseteq \Pos(\varphi)$. Cela montre (a). \medskip Soit $\varphi'$ une autre forme lin\'eaire telle que $X \subseteq \Pos(\varphi')$. Posons $U=\{\lambda \in V~|~\varphi(\lambda) > 0$ et $\varphi'(\lambda) < 0\}$.
Alors $U$ est un ouvert de $V$. Fixons une $\ZM$-base $(e_1,\dots,e_d)$ de $\Lambda$. Si $U \neq \varnothing$, alors il existe $\lambda \in \QM \otimes_\ZM \Lambda$ et $\varepsilon \in \QM_{> 0}$ tels que $\lambda$, $\lambda + \varepsilon e_1$,\dots, $\lambda+\varepsilon e_d$ appartiennent \`a $U$. Quitte \`a multiplier par le produit des d\'enominateurs de $\varepsilon$ et des coordonn\'ees de $\lambda$ dans la base $(e_1,\dots,e_d)$, on peut supposer que $\lambda \in \Lambda$ et $\varepsilon \in \ZM_{> 0}$. Mais il est clair que $U \cap X^+=\varnothing$ et $U \cap (-X^+) = \varnothing$. Donc $U \cap \Lambda$ est contenu dans $X \cap (-X)$. Ce dernier \'etant un sous-groupe, on en d\'eduit que $\varepsilon e_i \in X \cap (-X)$ pour tout $i$. D'o\`u, d'apr\`es le lemme \ref{proprietes positives} (c), $e_i \in X \cap (-X)$ pour tout $i$. Donc $X=\Lambda$, ce qui est contraire \`a l'hypoth\`ese. On en d\'eduit que $U$ est vide, c'est-\`a-dire qu'il existe $\kappa \in \RM_{> 0}$ tel que $\varphi'=\kappa\varphi$. D'o\`u (b). \end{proof} \bigskip Si $\varphi$ est une forme lin\'eaire sur $V$, nous noterons $\bar{\varphi}$ sa classe dans $V^*/\RM_{>0}$. Nous noterons $p : V^* \longrightarrow V^*/\RM_{>0}$ la projection canonique. L'application $\Pos : V^* \rightarrow \PC\! os(\Lambda)$ se factorise \`a travers $p$ en une application $\Posbar : V^*/\RM_{>0} \rightarrow \PC\! os(\Lambda)$ rendant le diagramme $$\diagram V^* \ddrrto^{\displaystyle{\Pos}} \ddto_{\displaystyle{p}}&& \\ &&\\ V^*\!/\RM_{>0} \rrto^{\displaystyle{\Posbar}} && \PC\! os(\Lambda) \enddiagram$$ commutatif. D'autre part, si $X \in \PC\! os(\Lambda)$, nous noterons $\pi(X)$ l'unique \'el\'ement $\bar{\varphi} \in V^*/\RM_{>0}$ tel que $X \subseteq \Posbar(\bar{\varphi})$ (voir le th\'eor\`eme \ref{hahn banach}). On a donc d\'efini deux applications $$\diagram V^*/\RM_{>0}\rrto^{\displaystyle{\Posbar}} && \PC\!
os(\Lambda) \rrto^{\displaystyle{\pi}} && V^*\!/\RM_{>0} \enddiagram$$ et le th\'eor\`eme \ref{hahn banach} (b) montre que \refstepcounter{theo}$$~\label{pi pos} \pi \circ \Posbar = \Id_{V^*\!/\RM_{>0}}. \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ Donc $\pi$ est surjective et $\Posbar$ est injective. En revanche, ni $\pi$, ni $\Posbar$ ne sont des bijections (sauf si $\Lambda$ est de rang $1$). Nous allons d\'ecrire les fibres de $\pi$~: \bigskip \begin{prop}\label{fibres pi} Soit $\varphi$ une forme lin\'eaire non nulle sur $V$. Alors l'application $$\fonction{i_{\bar{\varphi}}}{\PC\! os(\Ker \varphi|_\Lambda)}{\pi^{-1}(\bar{\varphi})}{X}{X \cup \Pos^+(\varphi)}$$ est bien d\'efinie et bijective. Sa r\'eciproque est l'application $$\fonctio{\pi^{-1}(\bar{\varphi})}{\PC\! os(\Ker \varphi|_\Lambda)}{Y}{Y \cap \Ker \varphi|_\Lambda.}$$ \end{prop} \noindent{\sc Remarque - } Il est facile de voir que $\pi^{-1}(\bar{0})=\{\Lambda\}$. Nous pouvons aussi d\'efinir une application $i_{\bar{0}} : \PC\! os(\Lambda) \rightarrow \PC\! os(\Lambda)$ par la m\^eme formule que dans la proposition \ref{fibres pi}~: alors $i_{\bar{0}}$ est tout simplement l'application identit\'e mais on a dans ce cas-l\`a $\pi^{-1}(\bar{0})\neq i_{\bar{0}}(\PC\! os(\Lambda))$.~$\SS \square$ \bigskip \begin{proof} Montrons tout d'abord que l'application $i_{\bar{\varphi}}$ est bien d\'efinie. Soit $X \in \PC\! os(\Ker \varphi|_\Lambda)$. Posons $Y=X \cup \Pos^+(\varphi)$. Montrons que $Y$ est une partie positive de $\Lambda$. Avant cela, notons que $$Y \subseteq \Pos(\varphi).\leqno{(*)}$$ (1) Si $\lambda \in \Lambda$, deux cas se pr\'esentent. Si $\varphi(\lambda) \neq 0$, alors $\lambda \in \Pos^+(\varphi) \cup -\Pos^+(\varphi) \subseteq Y \cup (-Y)$.
Si $\varphi(\lambda)=0$, alors $\lambda \in \Ker \varphi|_\Lambda$, donc $\lambda \in X \cup (-X) \subseteq Y \cup (-Y)$ car $X$ est une partie positive de $\Ker \varphi|_\Lambda$. Donc $\Lambda=Y \cup (-Y)$. \smallskip (2) Soient $\lambda$, $\mu \in Y$. Montrons que $\lambda+\mu \in Y$. Si $\varphi(\lambda+\mu) > 0$, alors $\lambda+\mu \in \Pos^+(\varphi) \subseteq Y$. Si $\varphi(\lambda+\mu)=0$, alors il r\'esulte de $(*)$ que $\varphi(\lambda)=\varphi(\mu)=0$, donc $\lambda$, $\mu \in \Ker \varphi|_\Lambda$. En particulier, $\lambda$, $\mu \in X$ et donc $\lambda+\mu \in X+X \subseteq X \subseteq Y$. Donc $Y + Y \subseteq Y$. \smallskip (3) On a $Y \cap (-Y)=X \cap (-X)$, donc $Y \cap (-Y)$ est un sous-groupe de $\Lambda$. \medskip Les points (1), (2) et (3) ci-dessus montrent que $Y$ est une partie positive de $\Lambda$. L'inclusion $(*)$ montre que $\pi(Y)=\bar{\varphi}$, c'est-\`a-dire que $Y \in \pi^{-1}(\bar{\varphi})$. Donc l'application $i_{\bar{\varphi}}$ est bien d\'efinie. \medskip Elle est injective car, si $X \in \PC\! os(\Ker \varphi|_\Lambda)$, alors $X \cap \Pos^+(\varphi) = \varnothing$. Montrons maintenant qu'elle est surjective. Soit $Y \in \pi^{-1}(\bar{\varphi})$. Posons $X=Y \cap \Ker \varphi|_\Lambda$. Alors $X$ est une partie positive de $\Ker \varphi|_\Lambda$ d'apr\`es le corollaire \ref{pos inclusion}. Posons $Y'=X \cup \Pos^+(\varphi)$. Il nous reste \`a montrer que $Y=Y'$. Tout d'abord, $\Pos^+(\varphi) \subseteq Y$ car $\pi(Y)=\bar{\varphi}$ par hypoth\`ese et $X \subseteq Y$. Donc $Y' \subseteq Y$. R\'eciproquement, si $\lambda \in Y$, deux cas se pr\'esentent. Si $\varphi(\lambda) > 0$, alors $\lambda \in \Pos^+(\varphi) \subseteq Y'$. Si $\varphi(\lambda)=0$, alors $\lambda \in Y \cap \Ker \varphi|_\Lambda=X \subseteq Y'$. Dans tous les cas, $\lambda \in Y'$.
\end{proof} \bigskip \example{maximal} Si $\varphi$ est une forme lin\'eaire non nulle sur $V$, alors $\Pos(\varphi) = i_{\bar{\varphi}}(\Ker \varphi|_\Lambda)$.~$\SS \square$ \bigskip Nous pouvons maintenant classifier les parties positives de $\Lambda$ en termes de formes lin\'eaires. Notons $\FC(\Lambda)$ l'ensemble des suites finies $(\varphi_1,\dots,\varphi_r)$ telles que (en posant $\varphi_0=0$), pour tout $i \in \{1,2,\dots,r\}$, $\varphi_i$ soit une forme lin\'eaire non nulle sur $\RM \otimes_\ZM (\Lambda \cap \Ker \varphi_{i-1})$. Par convention, nous supposerons que la suite vide, not\'ee $\varnothing$, appartient \`a $\FC(\Lambda)$. Posons $d = \dim V$. Notons que, si $(\varphi_1,\dots,\varphi_r) \in \FC(\Lambda)$, alors $r \mathop{\leqslant}\nolimits d$. Nous d\'efinissons donc l'action suivante de $(\RM_{>0})^d$ sur $\FC(\Lambda)$~: si $(\kappa_1,\dots,\kappa_d) \in (\RM_{>0})^d$ et si $(\varphi_1,\dots,\varphi_r) \in \FC(\Lambda)$, on pose $$(\kappa_1,\dots,\kappa_d)\cdot (\varphi_1,\dots,\varphi_r) = (\kappa_1 \varphi_1,\dots,\kappa_r \varphi_r).$$ Munissons $\RM^r$ de l'ordre lexicographique~: c'est un groupe ab\'elien totalement ordonn\'e et $(\varphi_1,\dots,\varphi_r) : \Lambda \rightarrow \RM^r$ est un morphisme de groupes. Donc $\Pos(\varphi_1,\dots,\varphi_r)$ est bien d\'efini et appartient \`a $\PC\! os(\Lambda)$. En fait, toute partie positive de $\Lambda$ peut \^etre retrouv\'ee ainsi~: \bigskip \begin{prop}\label{surjection pos} L'application $$\fonctio{\FC(\Lambda)}{\PC\! os(\Lambda)}{{\boldsymbol{\varphi}}}{\Pos({\boldsymbol{\varphi}})}$$ est bien d\'efinie et induit une bijection $\FC(\Lambda)/(\RM_{> 0})^d \stackrel{\sim}{\longrightarrow} \PC\! os(\Lambda)$.
\end{prop} \noindent{\sc Remarque - } Dans cette proposition, on a pos\'e par convention $\Pos(\varnothing)=\Lambda$.~$\SS \square$ \bigskip \begin{proof} Cela r\'esulte imm\'ediatement d'un raisonnement par r\'ecurrence sur le rang de $\Lambda$ en utilisant le th\'eor\`eme \ref{hahn banach} et la proposition \ref{fibres pi}. \end{proof} \bigskip Si ${\boldsymbol{\varphi}} \in \FC(\Lambda)$, nous noterons $\bar{{\boldsymbol{\varphi}}}$ sa classe dans $\FC(\Lambda)/(\RM_{>0})^d$ et nous poserons $\Posbar(\bar{{\boldsymbol{\varphi}}})=\Pos({\boldsymbol{\varphi}})$. Comme corollaire de la proposition \ref{surjection pos}, nous obtenons une classification des ordres totaux sur $\Lambda$ (compatibles avec la structure de groupe), ce qui est un r\'esultat classique \cite{robbiano}. En fait, se donner un ordre total sur $\Lambda$ est \'equivalent \`a se donner une partie positive $X$ de $\Lambda$ telle que $X \cap (-X)=0$. Notons $\FC_0(\Lambda)$ l'ensemble des \'el\'ements $(\varphi_1,\dots,\varphi_r) \in \FC(\Lambda)$ tels que $\Lambda \cap \Ker \varphi_r = 0$. \bigskip \begin{coro}\label{ordres totaux} L'application d\'ecrite dans la proposition \ref{surjection pos} induit une bijection entre l'ensemble $\FC_0(\Lambda)/(\RM_{> 0})^d$ et l'ensemble des ordres totaux sur $\Lambda$ compatibles avec la structure de groupe. \end{coro} \bigskip \subsection{Fonctorialit\'e} Soit $\sigma : \Lambda' \rightarrow \Lambda$ un morphisme de groupes ab\'eliens. Le r\'esultat suivant est facile~: \bigskip \begin{lem}\label{sigma} Si $X$ est une partie positive de $\Lambda$, alors $\sigma^{-1}(X)$ est une partie positive de $\Lambda'$. \end{lem} \begin{proof} Soit $X \in \PC\! os(\Lambda)$. Posons $X' = \sigma^{-1}(X)$. Montrons que $X' \in \PC\! os(\Lambda')$. \medskip (1) On a $X' \cup (-X') = \sigma^{-1}(X \cup (-X))=\sigma^{-1}(\Lambda)=\Lambda'$.
\smallskip (2) Si $\lambda'$ et $\mu'$ sont deux \'el\'ements de $X'$, alors $\sigma(\lambda')$ et $\sigma(\mu')$ appartiennent \`a $X$. Donc $\sigma(\lambda')+\sigma(\mu') \in X$. En d'autres termes, $\lambda' + \mu' \in X'$. Donc $X'+X' \subseteq X'$. \smallskip (3) On a $X' \cap (-X') = \sigma^{-1}(X \cap (-X))$, donc l'ensemble $X' \cap (-X')$ est un sous-groupe de $\Lambda'$. \end{proof} \bigskip \begin{coro}\label{pos inclusion} Si $\Lambda'$ est un sous-groupe de $\Lambda$ et si $X$ est une partie positive de $\Lambda$, alors $X \cap \Lambda'$ est une partie positive de $\Lambda'$. \end{coro} \bigskip Si $\sigma : \Lambda' \rightarrow \Lambda$ est un morphisme de groupes ab\'eliens, nous noterons $\sigma^* : \PC\! os(\Lambda) \rightarrow \PC\! os(\Lambda')$, $X \mapsto \sigma^{-1}(X)$, l'application induite par le lemme \ref{sigma}. Si $\tau : \Lambda'' \rightarrow \Lambda'$ est un morphisme de groupes ab\'eliens, il est alors facile de v\'erifier que \refstepcounter{theo}$$~\label{composition} (\sigma \circ \tau)^* = \tau^* \circ \sigma^*. \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ D'autre part, si on note $V' = \RM \otimes_\ZM \Lambda'$, alors $\sigma$ induit une application $\RM$-lin\'eaire $\sigma_\RM : V' \longrightarrow V$ dont nous noterons $\lexp{t}{\sigma_\RM} : V^* \longrightarrow V^{\prime *}$ l'application duale et $\lexp{t}{\bar{\sigma}}_\RM : V^*/\RM_{>0} \longrightarrow V^{\prime *}/\RM_{> 0}$ l'application (continue) induite. Il est alors facile de v\'erifier que le diagramme \refstepcounter{theo}$$~\label{diag sigma} \diagram V^*/\RM_{>0} \rrto^{\displaystyle{\Posbar}} \ddto^{\displaystyle{\lexp{t}{\bar{\sigma}}_\RM}} && \PC\! os(\Lambda) \rrto^{\displaystyle{\pi}} \ddto^{\displaystyle{\sigma^*}} && V^*/\RM_{>0} \ddto^{\displaystyle{\lexp{t}{\bar{\sigma}}_\RM}} \\ &&&&\\ V^{\prime *}/\RM_{>0} \rrto^{\displaystyle{\Posbar'}} && \PC\!
os(\Lambda') \rrto^{\displaystyle{\pi'}} && V^{\prime *}/\RM_{>0} \enddiagram \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ est commutatif (o\`u $\Posbar'$ et $\pi'$ sont les analogues de $\Posbar$ et $\pi$ pour le r\'eseau $\Lambda'$). \bigskip \section{Topologie sur $\PC\! os(\Lambda)$} \medskip Dans cette section, nous allons d\'efinir sur $\PC\! os(\Lambda)$ une topologie et en \'etudier les propri\'et\'es. Nous allons notamment montrer que la plupart des applications introduites dans la section pr\'ec\'edente ($\Posbar$, $\pi$, $i_{\bar{\varphi}}$,...) sont continues. \bigskip \subsection{D\'efinition} Si $E$ est une partie de $\Lambda$, nous poserons $$\UC(E)=\{X \in \PC\! os(\Lambda)~|~X \cap E=\varnothing\}.$$ Si $\lambda_1$,\dots, $\lambda_n$ sont des \'el\'ements de $\Lambda$, nous noterons pour simplifier $\UC(\lambda_1,\dots,\lambda_n)$ l'ensemble $\UC(\{\lambda_1,\dots,\lambda_n\})$. Si cela est n\'ecessaire, nous noterons ces ensembles $\UC_\Lambda(E)$ ou $\UC_\Lambda(\lambda_1,\dots,\lambda_n)$. On a alors \refstepcounter{theo}$$~\label{u inter} \UC(E)=\bigcap_{\lambda \in E} \UC(\lambda). \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ Notons que \refstepcounter{theo}$$~\label{u vide} \UC(\varnothing)=\PC\! os(\Lambda)\qquad\text{et}\qquad \UC(\Lambda)=\varnothing \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ (en effet, toute partie positive contient $0$, car $X \cap (-X)$ est un sous-groupe de $\Lambda$). D'autre part, si $(E_i)_{i \in I}$ est une famille de parties de $\Lambda$, alors \refstepcounter{theo}$$~\label{u intersection} \bigcap_{i \in I} \UC(E_i) = \UC\bigl(\bigcup_{i \in I} E_i\bigr). \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ Une partie $\UC$ de $\PC\! os(\Lambda)$ sera dite {\it ouverte} si, pour tout $X \in \UC$, il existe une partie {\bf finie} $E$ de $\Lambda$ telle que $X \in \UC(E)$ et $\UC(E) \subset \UC$. L'\'egalit\'e \ref{u intersection} montre que cela d\'efinit bien une topologie sur $\PC\! os(\Lambda)$.
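Cette topologie se laisse tester sur un tout petit exemple. Voici une esquisse en Python, purement illustrative (la repr\'esentation des trois parties positives de $\ZM$ par des pr\'edicats est une hypoth\`ese de mod\'elisation), qui calcule quelques ouverts $\UC(E)$ pour $\Lambda=\ZM$~:

```python
# Pour Λ = Z, les trois parties positives sont Z, Z>=0 et Z<=0.
# On les représente par des prédicats d'appartenance.
parts = {
    "Z":  lambda n: True,
    "Z+": lambda n: n >= 0,
    "Z-": lambda n: n <= 0,
}

def U(E):
    """U(E) = {X partie positive | X ∩ E = ∅}, pour E partie finie de Z."""
    return {name for name, X in parts.items() if not any(X(n) for n in E)}

assert U(set()) == {"Z", "Z+", "Z-"}   # U(∅) = Pos(Λ) tout entier
assert U({-1}) == {"Z+"}               # {Z>=0} = U(-1) est un point ouvert
assert U({1}) == {"Z-"}                # {Z<=0} = U(1) est un point ouvert
assert U({0}) == set()                 # toute partie positive contient 0
```

On retrouve ainsi, par calcul direct, les trois points de $\PC\! os(\ZM)$ et le fait que $\{\ZM_{\mathop{\geqslant}\nolimits 0}\}$ et $\{\ZM_{\mathop{\leqslant}\nolimits 0}\}$ sont ouverts.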
\bigskip \begin{prop}\label{connexite} Si $\UC$ est un ouvert de $\PC\! os(\Lambda)$ contenant $\Lambda$, alors $\UC=\PC\! os(\Lambda)$. En particulier, $\PC\! os(\Lambda)$ est connexe. Si $\Lambda \neq 0$, alors il n'est pas s\'epar\'e. \end{prop} \bigskip \begin{proof} Par d\'efinition, il existe une partie finie $E$ de $\Lambda$ telle que $\Lambda \in \UC(E)$ et $\UC(E) \subseteq \UC$. Mais $\Lambda \in \UC(E)$ signifie que $\Lambda \cap E = \varnothing$, donc $E=\varnothing$ et $\UC=\PC\! os(\Lambda)$ d'apr\`es \ref{u vide}. D'o\`u le r\'esultat. Le fait que $\PC\! os(\Lambda)$ n'est pas s\'epar\'e (lorsque $\Lambda \neq 0$) en d\'ecoule~: le point $\Lambda$ de $\PC\! os(\Lambda)$ ne peut \^etre s\'epar\'e d'aucun autre. \end{proof} \bigskip \example{Z} L'espace topologique $\PC\! os(\ZM)$ n'a que trois points~: $\ZM$, $\ZM_{\mathop{\geqslant}\nolimits 0}$ et $\ZM_{\mathop{\leqslant}\nolimits 0}$. Parmi ces trois points, seul $\ZM$ est un point ferm\'e, tandis que $\ZM_{\mathop{\geqslant}\nolimits 0}$ et $\ZM_{\mathop{\leqslant}\nolimits 0}$ sont des points ouverts (en effet, $\{\ZM_{\mathop{\geqslant}\nolimits 0}\} = \UC(-1)$ et $\{\ZM_{\mathop{\leqslant}\nolimits 0}\} = \UC(1)$).~$\SS \square$ \bigskip Il est clair que la topologie sur $\PC\! os(\Lambda)$ d\'efinie ci-dessus est la topologie induite par une topologie sur l'ensemble des parties de $\Lambda$ (d\'efinie de fa\c{c}on analogue)~: cette derni\`ere est tr\`es grossi\`ere mais sa restriction \`a $\PC\! os(\Lambda)$ est plus int\'eressante. \medskip Nous aurons besoin de la propri\'et\'e suivante des ensembles $\UC(E)$~: \bigskip \begin{lem}\label{uce vide} Soit $E$ une partie {\bfseries\itshape finie} de $\Lambda$. Alors les assertions suivantes sont \'equivalentes~: \begin{itemize} \itemth{1} $\UC(E)=\varnothing$. \itemth{2} Il existe $n \mathop{\geqslant}\nolimits 1$, $\lambda_1$,\dots, $\lambda_n \in E$ et $r_1$,\dots, $r_n \in \ZM_{>0}$ tels que $\displaystyle{\sum_{i=1}^n r_i\lambda_i=0}$.
\itemth{3} Il n'existe pas de forme lin\'eaire $\varphi$ sur $V$ telle que $\varphi(E) \subset \RM_{>0}$. \end{itemize} \end{lem} \bigskip \begin{proof} S'il existe une forme lin\'eaire $\varphi$ sur $V$ telle que $\varphi(E) \subseteq \RM_{>0}$, alors $\Pos(-\varphi) \in \UC(E)$, et donc $\UC(E) \neq \varnothing$. Donc (1) $\Rightarrow$ (3). \medskip Supposons trouv\'es $\lambda_1$,\dots, $\lambda_n \in E$ et $r_1$,\dots, $r_n \in \ZM_{>0}$ tels que $r_1\lambda_1 + \cdots + r_n \lambda_n = 0$. Alors, si $X \in \UC(E)$, on a $-\lambda_2$,\dots, $-\lambda_n \in X$. Mais $r_1\lambda_1 = -r_2\lambda_2 - \cdots - r_n \lambda_n \in X$, donc $\lambda_1 \in X$, ce qui contredit l'hypoth\`ese. Donc (2) $\Rightarrow$ (1). \medskip Il nous reste \`a montrer que (3) $\Rightarrow$ (2). Supposons que (2) n'est pas vraie. Nous allons montrer qu'alors (3) n'est pas vraie en raisonnant par r\'ecurrence sur la dimension de $V$ (c'est-\`a-dire le rang de $\Lambda$). Posons $$\CC=\{t_1\lambda_1+\cdots + t_n \lambda_n~|~n \mathop{\geqslant}\nolimits 1, ~\lambda_1,\dots, \lambda_n \in E,~t_1,\dots,t_n \in \RM_{>0}\}.$$ Alors $\CC$ est une partie convexe de $V$ contenant $E$ et, d'apr\`es le lemme \ref{solutions} et le fait que (2) ne soit pas vraie, on a $0 \not\in \CC$. Par cons\'equent, il r\'esulte du th\'eor\`eme de Hahn-Banach qu'il existe une forme lin\'eaire non nulle $\varphi$ telle que $\varphi(\CC)\subseteq \RM_{\mathop{\geqslant}\nolimits 0}$. Posons $\Lambda'=(\Ker \varphi)\cap \Lambda$ et $E' = E \cap \Lambda'$. Alors l'assertion (2) pour $E'$ n'est pas vraie non plus donc, par hypoth\`ese de r\'ecurrence, il existe une forme lin\'eaire $\psi$ sur $V'=\RM \otimes_\ZM \Lambda' \subseteq V$ telle que $\psi(E') \subset \RM_{>0}$. Soit $\psit$ une extension de $\psi$ \`a $V$. Puisque $\varphi(E \setminus E') \subset \RM_{>0}$, il existe $\varepsilon > 0$ tel que $\varphi(\lambda) + \varepsilon \psit(\lambda) > 0$ pour tout $\lambda \in E\setminus E'$.
Mais on a aussi, si $\lambda \in E'$, $\varphi(\lambda)+\varepsilon\psit(\lambda) = \varepsilon \psit(\lambda) > 0$. Donc $(\varphi + \varepsilon \psit)(E) \subset \RM_{>0}$. \end{proof} \bigskip Si $\EC$ est une partie de $\PC\! os(\Lambda)$, nous noterons $\overline{\EC}$ son adh\'erence dans $\PC\! os(\Lambda)$. \bigskip \begin{coro}\label{adherence u} Soit $E$ une partie {\bfseries\itshape finie} de $\Lambda$ telle que $\UC(E) \neq \varnothing$. Alors $$\overline{\UC(E)} = \PC\! os(\Lambda) \setminus \Bigl(\bigcup_{\lambda \in E} \UC(-\lambda)\Bigr).$$ \end{coro} \bigskip \begin{proof} Posons $$\OC=\bigcup_{\lambda \in E} \UC(-\lambda)$$ $$\FC=\PC\! os(\Lambda) \setminus \OC. \leqno{\text{et}}$$ Alors $\FC$ est ferm\'e dans $\PC\! os(\Lambda)$ et contient $\UC(E)$. Donc $\overline{\UC(E)} \subseteq \FC$. R\'eciproquement, soit $X \in \PC\! os(\Lambda) \setminus \overline{\UC(E)}$. Nous devons montrer que $$X \in \OC.\leqno{(?)}$$ Puisque $\PC\! os(\Lambda) \setminus \overline{\UC(E)}$ est un ouvert, il existe une partie finie $F$ de $\Lambda$ telle que $X \in \UC(F)$ et $\UC(F) \subseteq \OC$. En particulier, $\UC(F) \cap \UC(E) = \varnothing$. En d'autres termes, d'apr\`es \ref{u intersection}, on a $\UC(E \cup F) = \varnothing$. Donc, d'apr\`es le lemme \ref{uce vide}, il existe $m \mathop{\geqslant}\nolimits 0$, $n \mathop{\geqslant}\nolimits 0$, $\lambda_1$,\dots, $\lambda_m \in E$, $\mu_1$,\dots, $\mu_n \in F$, $r_1$,\dots, $r_m$, $s_1$,\dots, $s_n \in \ZM_{>0}$ tels que $$r_1\lambda_1 + \cdots + r_m \lambda_m + s_1 \mu_1 + \cdots + s_n \mu_n=0$$ et $m+n \mathop{\geqslant}\nolimits 1$. En fait, comme $\UC(E)$ et $\UC(F)$ sont toutes deux non vides, il d\'ecoule du lemme \ref{uce vide} que $m$, $n \mathop{\geqslant}\nolimits 1$. Si $X \not\in \OC$, alors $-\lambda_i \in X$ pour tout $i$, ce qui implique que $s_1 \mu_1 + \cdots + s_n \mu_n \in X$. 
Cela ne peut se produire que si au moins l'un des $\mu_j$ appartient \`a $X$, mais c'est impossible car $F \cap X =\varnothing$. D'o\`u (?). \end{proof} \bigskip \example{viditude} Le corollaire \ref{adherence u} n'est pas forc\'ement vrai si $\UC(E)=\varnothing$. En effet, si $\lambda \in \Lambda \setminus\{0\}$, alors $\UC(\lambda,-\lambda)=\varnothing$ mais, du moins lorsque $\dim V \mathop{\geqslant}\nolimits 2$, on a $\UC(\lambda) \cup \UC(-\lambda) \neq \PC\! os(\Lambda)$.~$\SS \square$ \bigskip \subsection{Fonctorialit\'e} Si $\Lambda'$ est un autre r\'eseau, si $\sigma : \Lambda' \rightarrow \Lambda$ est un morphisme de groupes et si $E'$ est une partie de $\Lambda'$, alors \refstepcounter{theo}$$~\label{sigma continue} (\sigma^*)^{-1}\bigl(\UC_{\Lambda'}(E')\bigr)=\UC_\Lambda\bigl(\sigma(E')\bigr). \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ \begin{proof}[Preuve de \ref{sigma continue}] Soit $X$ une partie positive de $\Lambda$. Alors $X \in (\sigma^*)^{-1}\bigl(\UC_{\Lambda'}(E')\bigr)$ (resp. $X \in \UC_\Lambda\bigl(\sigma(E')\bigr)$) si et seulement si $\sigma^{-1}(X) \cap E' = \varnothing$ (resp. $X \cap \sigma(E') = \varnothing$). Il est alors facile de v\'erifier que ces deux derni\`eres conditions sont \'equivalentes. \end{proof} Cela implique le r\'esultat suivant~: \bigskip \begin{prop}\label{continu} L'application $\sigma^* : \PC\! os(\Lambda) \rightarrow \PC\! os(\Lambda')$ est continue. \end{prop} \bigskip \subsection{Continuit\'e} Dans la section \ref{section positive}, nous avons \'equip\'e l'espace topologique $\PC\! os(\Lambda)$ de deux applications $\Posbar : V^*/\RM_{> 0} \rightarrow \PC\! os(\Lambda)$ et $\pi : \PC\! os(\Lambda) \rightarrow V^*/\RM_{>0}$ telles que $\pi \circ \Posbar = \Id_{V^*/\RM_{>0}}$.
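Avant d'aborder la continuit\'e de ces applications, voici une esquisse en Python illustrant l'identit\'e $\pi \circ \Posbar = \Id$ (esquisse hypoth\'etique~: on travaille sur une bo\^ite finie de $\ZM^2$, ce qui ne remplace \'evidemment pas la preuve). Elle montre que $\Pos(\varphi)$ ne d\'epend de $\varphi$ qu'\`a un scalaire positif pr\`es, et que l'on retrouve le noyau de $\varphi$ \`a partir de $X=\Pos(\varphi)$~:

```python
from itertools import product

# Pos(φ) pour une forme linéaire φ = (a, b) sur Λ = Z^2, restreinte
# à une boîte finie (approximation d'illustration).
def pos(phi, box):
    return frozenset(v for v in box if phi[0]*v[0] + phi[1]*v[1] >= 0)

box = list(product(range(-5, 6), repeat=2))
phi1, phi2, phi3 = (1, 2), (2, 4), (1, -1)

assert pos(phi1, box) == pos(phi2, box)   # φ2 = 2·φ1 : même partie positive
assert pos(phi1, box) != pos(phi3, box)   # formes non proportionnelles

# Les vecteurs v avec v et −v dans X vérifient φ(v) = 0 : on retrouve
# ainsi le noyau de φ, donc la classe de φ dans V*/R>0 (injectivité de Posbar).
X = pos(phi1, box)
bord = [v for v in X if (-v[0], -v[1]) in X and v != (0, 0)]
assert all(phi1[0]*v[0] + phi1[1]*v[1] == 0 for v in bord)
```

C'est exactement le contenu du th\'eor\`eme \ref{hahn banach} (b)~: la partie positive $\Pos(\varphi)$ d\'etermine $\bar{\varphi}$.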
Nous montrerons dans la proposition \ref{pi pos continues} que ces applications sont continues (lorsque $V^*/\RM_{>0}$ est bien s\^ur muni de la topologie quotient) et nous en d\'eduirons quelques autres propri\'et\'es topologiques de ces applications. Avant cela, introduisons la notation suivante~: si $E$ est une partie finie de $\Lambda$, on pose $$\VC(E)=\{\bar{\varphi}\in V^*/\RM_{> 0}~|~\forall~\lambda \in E,~\varphi(\lambda) < 0\}.$$ Si cela est n\'ecessaire, nous noterons $\VC_\Lambda(E)$ l'ensemble $\VC(E)$. Alors $$p^{-1}(\VC(E))=\{\varphi \in V^*~|~\forall~\lambda \in E,~\varphi(\lambda) < 0\}.$$ Donc $p^{-1}(\VC(E))$ est ouvert, donc $\VC(E)$ est ouvert dans $V^*/\RM_{>0}$ par d\'efinition de la topologie quotient. \bigskip \begin{prop}\label{pi pos continues} Les applications $\Posbar$ et $\pi$ sont continues. De plus~: \begin{itemize} \itemth{a} $\Posbar$ induit un hom\'eomorphisme sur son image. \itemth{b} L'image de $\Posbar$ est dense dans $\PC\! os(\Lambda)$. \end{itemize} \end{prop} \begin{proof} Soit $E$ une partie finie de $\Lambda$. Alors \refstepcounter{theo}$$~\label{pos uc} \Posbar^{-1}(\UC(E))=\VC(E). \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$ Donc $\Posbar^{-1}(\UC(E))$ est un ouvert de $V^*/\RM_{>0}$. Donc $\Posbar$ est continue. Montrons maintenant que $\pi$ est continue. Nous proc\`ederons par \'etapes~: \medskip \begin{quotation} \begin{lem}\label{base ouverts} Les $\VC(E)$, o\`u $E$ parcourt l'ensemble des parties finies de $\Lambda$, forment une base d'ouverts de $V^*/\RM_{>0}$. \end{lem} \begin{proof}[Preuve du lemme \ref{base ouverts}] Soit $\UC$ un ouvert de $V^*/\RM_{>0}$ et soit $\varphi$ une forme lin\'eaire sur $V$ telle que $\bar{\varphi} \in \UC$. Nous devons montrer qu'il existe une partie finie $E$ de $\Lambda$ telle que $\bar{\varphi} \in \VC(E)$ et $\VC(E) \subset \UC$.
Si $\varphi=0$, alors $\UC=V^*/\RM_{>0}$ et le r\'esultat est clair. Nous supposerons donc que $\varphi \neq 0$. Il existe alors $\lambda_0 \in \Lambda$ tel que $\varphi(\lambda_0) > 0$. Quitte \`a remplacer $\varphi$ par un multiple positif, on peut supposer que $\varphi(\lambda_0)=1$. Notons $\HC_0$ l'hyperplan affine $\{\psi \in V^*~|~\psi(\lambda_0)=1\}$. Alors l'application naturelle $\HC_0 \rightarrow V^*/\RM_{>0}$ induit un hom\'eomorphisme $\nu : \HC_0 \stackrel{\sim}{\longrightarrow} \VC(-\lambda_0)$. De plus, $\varphi=\nu^{-1}(\bar{\varphi}) \in \HC_0$. Donc $\varphi \in \nu^{-1}(\UC \cap \VC(-\lambda_0))$. Il suffit donc de v\'erifier que les intersections finies de demi-espaces ouverts {\it rationnels} (i.e. de la forme $\{\psi \in \HC_0~|~\psi(\lambda) > n\}$ o\`u $n \in \ZM$ et $\lambda \in \Lambda \setminus \ZM \lambda_0$) forment une base de voisinages de l'espace affine $\HC_0$, ce qui est imm\'ediat. \end{proof} \end{quotation} \medskip Compte tenu du lemme \ref{base ouverts}, il suffit de montrer que, si $E$ est une partie finie de $\Lambda$, alors $\pi^{-1}(\VC(E))$ est un ouvert de $\PC\! os(\Lambda)$. De plus, $$\VC(E)=\bigcap_{\lambda \in E} \VC(\lambda).$$ Par cons\'equent, la continuit\'e de $\pi$ d\'ecoulera du lemme suivant~: \medskip \begin{quotation} \begin{lem}\label{image inverse} Si $\lambda \in \Lambda$, alors $\pi^{-1}(\VC(\lambda))$ est un ouvert de $\PC\! os(\Lambda)$. \end{lem} \begin{proof}[Preuve du lemme \ref{image inverse}] Soit $X \in \pi^{-1}(\VC(\lambda))$ et soit $\varphi=\pi(X)$. Par d\'efinition, $\varphi(\lambda) < 0$ et donc $\lambda \not\in X$. Soit $e_1$,\dots, $e_n$ une $\ZM$-base de $\Lambda$. Il existe un entier naturel non nul $N$ tel que $\varphi(\lambda \pm \displaystyle{\frac{1}{N}e_i}) < 0$ pour tout $i$. Quitte \`a remplacer $\lambda$ par $N\lambda$, on peut supposer que $\varphi(\lambda \pm e_i) < 0$ pour tout $i$.
On pose alors $$E=\{\lambda+e_1,\lambda-e_1,\dots,\lambda+e_n,\lambda-e_n\}.$$ Alors $X \in \UC(E)$ par construction. Il reste \`a montrer que $\UC(E) \subseteq \pi^{-1}(\VC(\lambda))$. Soit $Y \in \UC(E)$ et posons $\psi = \pi(Y)$. Supposons de plus que $\psi \not\in \VC(\lambda)$. On a alors $\psi(\lambda) \mathop{\geqslant}\nolimits 0$. D'autre part, $\psi(\lambda \pm e_i) \mathop{\leqslant}\nolimits 0$ pour tout $i$. Cela montre que $2\psi(\lambda)=\psi(\lambda+e_1)+\psi(\lambda-e_1) \mathop{\leqslant}\nolimits 0$, donc $\psi(\lambda)=0$, puis $\psi(\lambda+e_i)=\psi(\lambda-e_i)=0$, et donc $\psi(e_i)=0$ pour tout $i$. Donc $\psi$ est nulle et donc $Y=\Lambda$, ce qui contredit le fait que $Y \in \UC(E)$. Cela montre donc que $\psi \in \VC(\lambda)$, comme attendu. \end{proof} \end{quotation} \medskip Puisque $\pi$ et $\Posbar$ sont continues et v\'erifient $\pi \circ \Posbar = \Id_{V^*/\RM_{>0}}$, $\Posbar$ induit un hom\'eomorphisme sur son image. D'o\`u (a). \medskip L'assertion (b) d\'ecoule du lemme suivant (qui est une cons\'equence imm\'ediate de l'\'equivalence entre (1) et (3) dans le lemme \ref{uce vide}) et de \ref{pos uc}~: \begin{quotation} \begin{lem}\label{non vide} Soit $E$ une partie finie de $\Lambda$ telle que $\UC(E) \neq \varnothing$. Alors $\VC(E) \neq \varnothing$. \end{lem} \end{quotation} La preuve de la proposition \ref{pi pos continues} est termin\'ee. \end{proof} \bigskip Nous allons maintenant \'etudier les propri\'et\'es topologiques des applications $i_{\bar{\varphi}}$. \bigskip \begin{prop}\label{proprietes topologie} Soit $\varphi \in V^*$ et supposons $\varphi \neq 0$. Alors~: \begin{itemize} \itemth{a} $\displaystyle{\bigcap_{\stackrel{\UC {\mathrm{~ouvert~de~}}\PC\! os(\Lambda)}{\Pos(\varphi) \in \UC}} \UC = i_{\bar{\varphi}}\bigl(\PC\! os(\Ker \varphi|_\Lambda)\bigr)} = \pi^{-1}(\bar{\varphi})$.
\itemth{b} $i_{\bar{\varphi}}$ est continue et induit un hom\'eomorphisme sur son image. \end{itemize} \end{prop} \begin{proof} (a) Notons $I_\varphi$ l'image de $i_{\bar{\varphi}}$. On a alors, d'apr\`es le lemme \ref{equivalent separant}, $$I_\varphi=\{X \in \PC\! os(\Lambda)~|~\Pos^+(\varphi) \subseteq X\}.\leqno{(*)}$$ Si $\UC$ est un ouvert contenant $\Pos(\varphi)$, alors il existe une partie finie $E$ de $\Lambda \setminus \Pos(\varphi)$ telle que $\UC(E) \subseteq \UC$. Mais, si $X$ est dans l'image de $i_{\bar{\varphi}}$, alors $X \subseteq \Pos(\varphi)$, donc $X \cap E = \varnothing$. Et donc $X \in \UC$, ce qui montre que $$I_\varphi \subseteq \bigcap_{\stackrel{\UC {\mathrm{~ouvert~de~}}\PC\! os(\Lambda)}{\Pos(\varphi) \in \UC}} \UC.$$ Montrons l'inclusion r\'eciproque. Soit $X \in \PC\! os(\Lambda)$ tel que $X \not\in I_\varphi$. Posons $\psi=\pi(X)$. Alors $\bar{\psi} \neq \bar{\varphi}$ donc, d'apr\`es la preuve du th\'eor\`eme \ref{hahn banach}, il existe $\lambda \in \Lambda$ tel que $\varphi(\lambda) < 0$ et $\psi(\lambda) > 0$ (si $\psi=0$, c'est-\`a-dire si $X=\Lambda$, tout $\lambda$ tel que $\varphi(\lambda) < 0$ convient, car alors $\lambda \in X$ trivialement). On a donc, d'apr\`es le lemme \ref{equivalent separant}, $X \not\in \UC(\lambda)$. D'autre part, $\Pos(\varphi) \in \UC(\lambda)$. D'o\`u (a). \medskip Montrons (b). On note $\pi_{\bar{\varphi}} : I_\varphi \rightarrow \PC\! os(\Ker \varphi|_\Lambda)$, $X \mapsto X \cap \Ker \varphi|_\Lambda$. D'apr\`es la proposition \ref{fibres pi}, $\pi_{\bar{\varphi}}$ est la bijection r\'eciproque de $i_{\bar{\varphi}} : \PC\! os(\Ker \varphi|_\Lambda) \rightarrow I_\varphi$. Il nous faut donc montrer que $i_{\bar{\varphi}}$ et $\pi_{\bar{\varphi}}$ sont continues.
If $F$ is a finite subset of $\Ker \varphi|_\Lambda$, we denote by $\UC_{\bar{\varphi}}(F)$ the analogue of the set $\UC(F)$, defined inside $\PC\! os(\Ker \varphi|_\Lambda)$. Let $E$ be a finite subset of $\Lambda$. We want to show that $i_{\bar{\varphi}}^{-1}(\UC(E))$ is an open subset of $\PC\! os(\Ker \varphi|_\Lambda)$. If there exists $\lambda \in E$ such that $\varphi(\lambda) > 0$, then $\UC(E) \cap I_\varphi = \varnothing$ (see $(*)$). We may therefore assume that $\varphi(\lambda) \mathop{\leqslant}\nolimits 0$ for all $\lambda \in E$. It is then easy to check that
$$i_{\bar{\varphi}}^{-1}(\UC(E)) = \UC_{\bar{\varphi}}(E \cap \Ker \varphi|_\Lambda).$$
Hence $i_{\bar{\varphi}}$ is continuous. Now let $F$ be a finite subset of $\Ker \varphi|_\Lambda$. Then
$$\pi_{\bar{\varphi}}^{-1}(\UC_{\bar{\varphi}}(F)) =\UC(F) \cap I_\varphi.$$
Hence $\pi_{\bar{\varphi}}$ is continuous. \end{proof}

The following theorem summarizes most of the results obtained in this subsection.

\bigskip

\begin{theo}\label{theo:topologie} Assume that $\Lambda \neq 0$ and let $\varphi \in V^*$, $\varphi \neq 0$.

\begin{itemize} \itemth{a} $\PC\! os(\Lambda)$ is connected. It is not Hausdorff if $\Lambda \neq 0$.

\itemth{b} The maps $\pi : \PC\! os(\Lambda) \rightarrow V^*/\RM_{>0}$ and $\Posbar : V^*/\RM_{>0} \rightarrow \PC\! os(\Lambda)$ are continuous and satisfy $\pi \circ \Posbar = \Id_{V^*/\RM_{>0}}$.

\itemth{c} $\Posbar$ induces a homeomorphism onto its image; this image is dense in $\PC\! os(\Lambda)$.

\itemth{d} $\pi^{-1}({\bar{\varphi}})$ is the intersection of the neighborhoods of $\Pos(\varphi)$ in $\PC\! os(\Lambda)$.
\itemth{e} $i_{\bar{\varphi}}$ is a homeomorphism. \end{itemize} \end{theo}

\bigskip

\section{Hyperplane arrangements}

\medskip

The continuous map $\Pos : V^* \rightarrow \PC\! os(\Lambda)$ has dense image. We now study how the notion of a hyperplane arrangement (and the objects attached to it: facets, chambers, support...) carries over to the topological space $\PC\! os(\Lambda)$ through $\Pos$. This will allow us to state the conjectures on Kazhdan-Lusztig cells in the most general form possible \cite[Conjectures A and B]{bonnafe}.

\bigskip

\subsection{Rational subspaces} If $E$ is a subset of $\Lambda$, we set
$$\LC(E)=\{X \in \PC\! os(\Lambda)~|~E \subset X \cap (-X)\}.$$
When necessary, we shall write $\LC_\Lambda(E)$. A {\it rational subspace} of $\PC\! os(\Lambda)$ is a subset of $\PC\! os(\Lambda)$ of the form $\LC(E)$, where $E$ is a subset of $\Lambda$. If $\lambda \in \Lambda\setminus \{0\}$, we denote by $\HC_\lambda$ the rational subspace $\LC(\{\lambda\})$; such a rational subspace is called a {\it rational hyperplane}. Note that
\refstepcounter{theo}$$~\label{partage espace} \PC\! os(\Lambda)= \UC(\lambda) \hskip1mm\dot{\cup}\hskip1mm \HC_\lambda \hskip1mm\dot{\cup}\hskip1mm \UC(-\lambda). \leqno{\boldsymbol{(\arabic{section}.\arabic{theo})}}~$$
The following proposition somewhat justifies the terminology:

\begin{prop}\label{topo espace} Let $E$ be a subset of $\Lambda$. Denote by $\Lambda(E)$ the sublattice $\Lambda \cap \sum_{\lambda \in E} \QM \lambda$ of $\Lambda$ and let $\sigma_E : \Lambda \rightarrow \Lambda/\Lambda(E)$ be the canonical map. Then:

\begin{itemize} \itemth{a} $\LC(E)=\displaystyle{\bigcap_{\lambda \in E\setminus\{0\}} \HC_\lambda} = \{X \in \PC\! os(\Lambda)~|~\Lambda(E) \subseteq X\}$.

\itemth{b} $\LC(E)$ is closed in $\PC\! os(\Lambda)$.
\itemth{c} $\Pos^{-1}(\LC(E))=\{\varphi \in V^*~|~\forall~\lambda \in E,~\varphi(\lambda)=0\}=E^\perp$.

\itemth{d} $\Posbar(\pi(\LC(E))) \subseteq \LC(E)$.

\itemth{e} $\Posbar^{-1}(\LC(E))=\pi(\LC(E))$.

\itemth{f} $\LC(E)=\overline{\Pos(\Pos^{-1}(\LC(E)))}$.

\itemth{g} The map $\sigma_E^* : \PC\! os(\Lambda/\Lambda(E)) \rightarrow \PC\! os(\Lambda)$ has image $\LC(E)$ and induces a homeomorphism $\PC\! os(\Lambda/\Lambda(E)) \stackrel{\sim}{\longrightarrow} \LC(E)$. \end{itemize} \end{prop}

\begin{proof} The first equality in (a) is immediate; the second follows from Proposition \ref{proprietes positives} (c). (b) follows from (a) and \ref{partage espace}. (c) is just as clear.

\medskip

(d) If $X \in \Posbar(\pi(\LC(E)))$, then there exists $Y \in \LC(E)$ such that $X=\Posbar(\pi(Y))$. Write ${\bar{\varphi}} = \pi(Y)$, where $\varphi \in V^*$. Then $E \subseteq Y \cap (-Y)$ and $Y \subseteq \Pos(\varphi)$. Now $\varphi(\lambda)=0$ whenever $\lambda \in Y \cap (-Y)$, hence $X=\Pos(\varphi) \in \LC(E)$.

\medskip

(e) By (d), we have $\pi(\LC(E)) \subseteq \Posbar^{-1}(\LC(E))$. Conversely, if $\varphi$ is an element of $\Pos^{-1}(\LC(E))$, then ${\bar{\varphi}}=\pi(\Posbar({\bar{\varphi}})) \in \pi(\LC(E))$. This proves (e).

\medskip

(f) Set $\FC_E=\Pos(\Pos^{-1}(\LC(E)))$. We have $\FC_E \subseteq \LC(E)$, so (a) implies that $\overline{\FC}_E \subseteq \LC(E)$. Conversely, let $F$ be a finite subset of $\Lambda$ such that $\UC(F) \cap \FC_E = \varnothing$. We must show that $\UC(F) \cap \LC(E)=\varnothing$.
Now, the fact that $\UC(F) \cap \FC_E=\varnothing$ is equivalent to the following assertion (see (c)):
$$\forall~\varphi \in E^\perp,~\forall~\lambda \in F,~\varphi(\lambda) \mathop{\geqslant}\nolimits 0.$$
But if $\varphi \in E^\perp$, then $-\varphi \in E^\perp$, which implies that
$$\forall~\varphi \in E^\perp,~\forall~\lambda \in F,~\varphi(\lambda) = 0.$$
In other words, $F \subseteq (E^\perp)^\perp \cap \Lambda = \Lambda(E)$. But if $X \in \LC(E)$, then $\Lambda(E) \subseteq X$ by (a). Hence $X \not\in \UC(F)$, as desired.

\medskip

(g) The fact that the image of $\sigma_E^*$ is $\LC(E)$ follows from (a). Moreover, $\sigma_E^*$ is continuous by Proposition \ref{continu}. Set
$$\fonction{\gamma_E}{\LC(E)}{\PC\! os(\Lambda/\Lambda(E))}{X}{X/\Lambda(E).}$$
Then $\gamma_E$ is the inverse of $\sigma_E^*$. It only remains to show that $\gamma_E$ is continuous. Let $F$ be a finite subset of $\Lambda/\Lambda(E)$ and let ${\tilde{F}}$ be a set of representatives in $\Lambda$ of the elements of $F$. We have
\begin{eqnarray*}
\gamma_E^{-1}(\UC_{\Lambda/\Lambda(E)}(F))&=&\{X \in \LC(E)~|~\forall~\lambda \in F,~\lambda \not\in X/\Lambda(E)\}\\
&=&\{X \in \PC\! os(\Lambda)~|~\Lambda(E) \subseteq X\text{ and }\forall~\lambda \in F,~\lambda \not\in X/\Lambda(E)\}\\
&=&\{X \in \PC\! os(\Lambda)~|~\Lambda(E) \subseteq X\text{ and }\forall~\lambda \in {\tilde{F}},~\lambda \not\in X\}\\
&=&\{X \in \LC(E)~|~\forall~\lambda \in {\tilde{F}},~\lambda \not\in X\}\\
&=&\LC(E) \cap \UC_\Lambda({\tilde{F}}).
\end{eqnarray*}
Hence $\gamma_E^{-1}(\UC_{\Lambda/\Lambda(E)}(F))$ is an open subset of $\LC(E)$. This shows that $\gamma_E$ is continuous.
\end{proof}

\bigskip

\subsection{Half-spaces} Let $\HC$ be a rational hyperplane of $\PC\! os(\Lambda)$ and let $\lambda\in \Lambda\setminus\{0\}$ be such that $\HC=\HC_\lambda$. By \ref{partage espace}, the hyperplane $\HC$ determines a unique equivalence relation $\smile_{\!\HC}$ on $\PC\! os(\Lambda)$ whose equivalence classes are $\UC(\lambda)$, $\HC$ and $\UC(-\lambda)$; note that this relation does not depend on the choice of $\lambda$. Moreover:

\bigskip

\begin{prop}\label{composantes connexes} $\HC$ is closed in $\PC\! os(\Lambda)$, and $\UC(\lambda)$ and $\UC(-\lambda)$ are the connected components of $\PC\! os(\Lambda)\setminus \HC$. Moreover,
$$\overline{\UC(\lambda)}=\UC(\lambda) \cup \HC_\lambda.$$
\end{prop}

\bigskip

\begin{proof} The last assertion is a special case of Corollary \ref{adherence u}.

\medskip

To conclude, let us show that $\UC(\lambda)$ is connected. Let $\UC$ and $\VC$ be two open subsets of $\UC(\lambda)$ such that $\UC(\lambda)=\UC \coprod \VC$. Then
$$\Pos^{-1}(\UC(\lambda))=\Pos^{-1}(\UC) \coprod \Pos^{-1}(\VC).$$
But $\Pos^{-1}(\UC(\lambda))=\{\varphi \in V^*~|~\varphi(\lambda) < 0\}$, so $\Pos^{-1}(\UC(\lambda))$ is connected. Since $\Pos$ is continuous, this implies that $\Pos^{-1}(\UC)=\varnothing$ or $\Pos^{-1}(\VC)=\varnothing$. Lemma \ref{non vide} then implies that $\UC=\varnothing$ or $\VC=\varnothing$. \end{proof}

\bigskip

If $X \in \PC\! os(\Lambda)$, we denote by $\DC_\HC(X)$ the equivalence class of $X$ under the relation $\smile_{\! \HC}$. It follows from Proposition \ref{composantes connexes} that $\overline{\DC_\HC(X)}$ is a union of equivalence classes for $\smile_{\! \HC}$.
\bigskip

\subsection{Arrangements} From now on we shall work under the following hypothesis:

\bigskip

\begin{quotation} {\it We now fix, once and for all until the end of this section, a {\bfseries\itshape finite} set ${\mathfrak A}$ of rational hyperplanes of $\PC\! os(\Lambda)$.} \end{quotation}

\bigskip

We shall now redefine, in our space $\PC\! os(\Lambda)$, the notions of {\it facets}, {\it chambers} and {\it faces} associated with ${\mathfrak A}$, in analogy with what is done for hyperplane arrangements in a real vector space \cite[Chapter V, \S 1]{bourbaki}. The properties of the maps $\pi$ and $\Posbar$ established above make it easy to prove the analogous results. We define the relation $\smile_{\mathfrak A}$ on $\PC\! os(\Lambda)$ as follows: if $X$ and $Y$ are two elements of $\PC\! os(\Lambda)$, we write $X \smile_{\mathfrak A} Y$ if $X \smile_{\!\HC} Y$ for every $\HC \in {\mathfrak A}$. The equivalence classes of the relation $\smile_{\mathfrak A}$ are called {\it facets} (or {\it ${\mathfrak A}$-facets}). The facets that meet no hyperplane of ${\mathfrak A}$ are called {\it chambers} (or {\it ${\mathfrak A}$-chambers}).
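\bigskip

As a minimal illustration (using only \ref{partage espace} and the definitions above): if ${\mathfrak A}=\{\HC_\lambda\}$ consists of a single rational hyperplane, then $\smile_{\mathfrak A}$ coincides with $\smile_{\!\HC_\lambda}$, so the ${\mathfrak A}$-facets are exactly the three classes
$$\UC(\lambda), \qquad \HC_\lambda, \qquad \UC(-\lambda),$$
and the chambers are $\UC(\lambda)$ and $\UC(-\lambda)$, since $\HC_\lambda$ is the only facet meeting a hyperplane of ${\mathfrak A}$.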
If $\FC$ is a facet, we set
$$\LC_{{\mathfrak A}} ( \FC) = \bigcap_{\overset{\HC \in {\mathfrak A}}{\FC \subset \HC}} \HC,$$
with the usual convention that $\LC_{{\mathfrak A}} ( \FC) = \PC\! os (\Lambda)$ if $\FC$ is a chamber. We call it the {\it support} of $\FC$, and we define the {\it dimension} of $\FC$ as the integer
$$\dim \FC = \dim_{\RM} \Pos^{- 1} ( \LC_{{\mathfrak A}} ( \FC)) .$$
Similarly, we define the {\it codimension} of $\FC$ as the integer
$$\codim \FC=\dim_\RM V - \dim \FC.$$
With these definitions, a chamber is a facet of codimension $0$.

\bigskip

\begin{prop}\label{facettes} Let $\FC$ be a facet and let $X \in \FC$. Then:

\begin{itemize} \itemth{a} $\FC=\displaystyle{\bigcap_{\HC \in {\mathfrak A}}} \DC_\HC(X)$.

\itemth{b} $\overline{\FC}=\displaystyle{\bigcap_{\HC \in {\mathfrak A}}} \overline{\DC_\HC(X)}$.

\itemth{c} $\overline{\FC}$ is the union of $\FC$ and of facets of strictly smaller dimension.

\itemth{d} If $\FC'$ is a facet such that $\overline{\FC}=\overline{\FC}'$, then $\FC=\FC'$. \end{itemize} \end{prop}

\bigskip

\begin{proof} (a) is a consequence of the definitions. Let us prove (b).
Set
$${\mathfrak A}_1=\{\HC \in {\mathfrak A}~|~\FC \subseteq \HC\}
\qquad\text{and}\qquad
{\mathfrak A}_2={\mathfrak A} \setminus {\mathfrak A}_1.$$
For every $\HC \in {\mathfrak A}$, fix an element $\lambda(\HC) \in \Lambda$ such that $\HC=\HC_{\lambda(\HC)}$; if moreover $\HC \in {\mathfrak A}_2$, choose $\lambda(\HC)$ so that $\FC \subseteq \UC(\lambda(\HC))$. Set
$$E_i=\{\lambda(\HC)~|~\HC \in {\mathfrak A}_i\}$$
and write $\LC=\LC(E_1)$. Consequently,
$$\FC=\LC \cap \UC(E_2).$$
Since $\LC$ is closed, $\overline{\FC}$ is also the closure of $\FC$ in $\LC$. Using the homeomorphism $\sigma_{E_1}^* : \PC\! os(\Lambda/\Lambda(E_1)) \stackrel{\sim}{\longrightarrow} \LC$ of Proposition \ref{topo espace} (g), we are reduced to computing the closure of $\sigma_{E_1}^{* -1}(\FC)$ in $\PC\! os(\Lambda/\Lambda(E_1))$. But
$$\sigma_{E_1}^{* -1}(\FC)= \sigma_{E_1}^{* -1}(\UC_\Lambda(E_2))=\UC_{\Lambda/\Lambda(E_1)}(\sigma_{E_1}(E_2)).$$
Now, by Corollary \ref{adherence u},
$$\overline{\UC_{\Lambda/\Lambda(E_1)}(\sigma_{E_1}(E_2))} = \PC\! os(\Lambda/\Lambda(E_1)) \setminus \bigl(\bigcup_{\lambda \in E_2} \UC_{\Lambda/\Lambda(E_1)}(\sigma_{E_1}(-\lambda))\bigr).$$
Consequently,
$$\overline{\FC}=\LC \cap \sigma_{E_1}^*\Bigl(\PC\! os(\Lambda/\Lambda(E_1)) \setminus \bigl(\bigcup_{\lambda \in E_2} \UC_{\Lambda/\Lambda(E_1)}(\sigma_{E_1}(-\lambda))\bigr)\Bigr),$$
and therefore
$$\overline{\FC}=\LC \cap \Bigl(\bigcap_{\lambda \in E_2} \bigl(\PC\! os(\Lambda)\setminus \UC_\Lambda(-\lambda)\bigr)\Bigr) = \bigcap_{\lambda \in E_1 \cup E_2} \overline{\DC_{\HC_\lambda}(X)},$$
as required.
\medskip

Let us now prove (c). By (b), $\overline{\FC}$ is indeed a union of facets. If moreover $\FC'$ is a facet different from $\FC$ and contained in $\overline{\FC}$, assertion (b) shows that there exists $\HC \in {\mathfrak A}_2$ such that $\FC' \subseteq \HC$. Hence $\FC' \subseteq \LC_{\mathfrak A}(\FC) \cap \HC$, and $\dim \Pos^{-1}\bigl(\LC_{\mathfrak A}(\FC) \cap \HC\bigr) = \dim \Pos^{-1}\bigl(\LC_{\mathfrak A}(\FC)\bigr)-1$. This proves (c).

\medskip

Assertion (d) follows immediately from (c). \end{proof}

\bigskip

We define a relation $\hspace{0.1em}\mathop{\preccurlyeq}\nolimits\hspace{0.1em}$ (or $\preccurlyeq_{\mathfrak A}$ when it is necessary to be precise) between facets: we write $\FC \hspace{0.1em}\mathop{\preccurlyeq}\nolimits\hspace{0.1em} \FC'$ if $\overline{\FC} \subseteq \overline{\FC}'$ (that is, if $\FC \subseteq \overline{\FC}'$). Proposition \ref{facettes} (d) shows that:

\bigskip

\begin{coro}\label{ordre facettes} The relation $\hspace{0.1em}\mathop{\preccurlyeq}\nolimits\hspace{0.1em}$ between facets is an order relation. \end{coro}

\bigskip
\section{Introduction}\label{sec:intro}

The stability condition of a vacuum is one of the important constraints on viable models of particle physics. Even in the standard model, it gives a nontrivial constraint on the Higgs boson mass and the top quark mass \cite{Degrassi:2012ry}. Furthermore, physics beyond the standard model often introduces additional scalar fields, which can destabilize the standard model vacuum by providing a deeper minimum. In such situations, the standard model vacuum is a false vacuum, and its lifetime should be longer than the age of the Universe.

The lifetime of a false vacuum in quantum field theory can be calculated by using Coleman's semiclassical method \cite{Coleman:1977py}. In this method, the decay rate of a false vacuum per unit volume is evaluated as $\Gamma/V \sim A e^{-S}$, where $A$ is a prefactor and $S$ is the action of the nontrivial solution of the equation of motion that minimizes the action. Such a solution is called a bounce solution. To obtain the bounce solution, we have to solve the equations of motion of the scalar fields with an appropriate boundary condition. However, it is not always easy to obtain the explicit solution. In particular, we have to solve a large number of coupled equations of motion in models with many scalar fields, such as the landscape scenario \cite{Greene:2013ida, Dine:2015ioa}. It is therefore convenient to be able to constrain the minimal bounce action without solving the equations of motion explicitly. In this context, for example, a generic upper bound on the minimal bounce action is discussed in Refs.~\cite{Dasgupta:1996qu, Sarid:1998sn}. A lower bound is discussed in Ref.~\cite{Aravind:2014aza}, which focuses on quartic scalar potentials, and in Ref.~\cite{Masoumi:2017trx}, which reduces the problem to an effective single-scalar problem.
In this Letter, we derive a generic lower bound on the minimal bounce action, applicable to a broad class of scalar potentials with any number of scalar fields. Our bound follows from a quite simple argument based on the Lagrange multiplier method. The bound has a simple form, and it provides a sufficient condition for the stability of a false vacuum. Therefore, even if it is difficult to obtain the explicit form of the bounce solution, our bound is useful as a quick check on the stability of the false vacuum. In section \ref{sec:lower}, we derive the lower bound on the minimal bounce action. In section \ref{sec:comparisons}, we compare our lower bound with the actual value or with the upper bound for some representative examples.

\section{A lower bound on the bounce action}\label{sec:lower}

Here, we derive an absolute lower bound on the bounce action. We consider $m$ scalar fields with canonical kinetic terms in $N$-dimensional Euclidean space. The action is given by
\begin{align}
S[\vec{\phi}]&=T[\vec{\phi}]-U[\vec{\phi}],\\
T&= \int d^Nx~ \sum_{a=1}^m\sum_{i=0}^{N-1}\frac{1}{2}\left(\frac{\partial\phi_a}{\partial x_i}\right)^2,\\
U&=\int d^Nx~U(\vec\phi),
\end{align}
where $\vec{\phi}\equiv(\phi_1,\phi_2,\dots,\phi_m)$ and $U$ is the inverted potential, $U(\vec\phi)\equiv-V(\vec\phi)$, with $V(\vec\phi)$ the actual potential. Throughout this Letter, we set the false vacuum at $\vec{\phi}=0$ (and $V(\vec{0})=0$) without loss of generality.

Let us consider $N=4$ dimensional Euclidean space for a while. If $\vec\phi$ is a solution of the equations of motion, it is a stationary point of the action. Considering the rescaling of the Euclidean coordinates, $\vec\phi(x)\rightarrow \vec\phi(\xi x)$, we have the relation
\begin{align}
\left.\frac{\partial S[\vec\phi(\xi x)]}{\partial \xi}\right|_{\xi=1}= -2T+4U=0,
\end{align}
which leads to
\begin{align}
S=\frac{T}{2}.
\end{align}
Thus, the problem of finding the minimal-action solution reduces to that of finding the solution with minimal kinetic energy. Since the minimal-action bounce solution is known to be O$(N)$ symmetric for $N>2$, even in the multiscalar case~\cite{Coleman:1977th, lopes1996radial, byeon2009symmetry, Blum:2016ipp}, we consider an O$(4)$ symmetric bounce with radial coordinate $r$. With O$(4)$ symmetry, the kinetic energy $T$ is given by
\begin{align}
T\equiv \sum_{a=1}^m \int_0^{\infty} dr~\pi^2 r^3\dot{\phi}_a^2.
\end{align}
The equation of motion is
\begin{align}
\ddot{\phi}_a + \frac{3}{r}\dot{\phi}_a + \frac{\partial U}{\partial \phi_a} = 0 \quad(a=1,\cdots,m),
\label{eq:eom}
\end{align}
where a dot denotes a derivative with respect to $r$. To discuss the minimum of the kinetic energy $T$, we define a class of bounce solutions, characterized by two quantities: the field differences $\Delta \phi_a \equiv \phi_a(0) - \phi_a(\infty)$ and the potential difference $\Delta U \equiv U[\vec\phi(0)] - U[\vec\phi(\infty)]$. Our first goal is to derive a lower bound for this class of solutions. Both $\Delta\phi_a$ and $\Delta U$ are functionals of the $\dot\phi_a$'s. Multiplying Eq.~(\ref{eq:eom}) by $\dot{\phi}_a$, summing over $a$, and integrating from zero to infinity, we obtain
\begin{align}
\left[\sum_{a=1}^m \dot{\phi}_a^2/2 + U\right]^{r=0}_{r=\infty} = \Delta U = \sum_{a=1}^m \int_0^\infty dr \frac{3\dot{\phi}_a^2}{r},
\end{align}
where the $\dot{\phi}_a^2/2$ terms vanish at both ends. On the other hand,
\begin{align}
\Delta \phi_a = -\int_0^\infty dr \dot{\phi}_a
\end{align}
holds. To consider the minimization of $T$ with $\Delta \phi_a$ and $\Delta U$ fixed, we introduce Lagrange multipliers $\alpha_a$ and $\beta$, and define $\tilde T$ as
\begin{align}
\tilde T[\phi, \{\alpha_a\}, \beta] = & T[\phi] + \sum_{a=1}^m 2\alpha_a \left( \Delta \phi_a + \int_0^\infty dr \dot\phi_a \right) \nonumber\\
& \quad - \beta \left( \Delta U - \sum_{a=1}^m \int_0^\infty dr \frac{3\dot\phi_a^2}{r} \right).
\end{align}
The extremum condition $\delta\tilde T / \delta\dot\phi_a = 0$ gives
\begin{align}
\dot\phi_a = -\frac{\alpha_a r}{\pi^2 r^4 + 3\beta} \qquad (a=1,\cdots,m).
\label{eq:phisol}
\end{align}
In this solution, the Lagrange multipliers $\alpha_a$ and $\beta$ are determined from the constraints $\int_0^\infty dr \dot\phi_a = -\Delta\phi_a$ and $\sum_{a=1}^m \int_0^\infty dr (3/r)\dot\phi_a^2 = \Delta U$ as
\begin{align}
\alpha_a = \frac{24\Delta\phi_a|\Delta\phi|^2}{\Delta U},
\qquad
\beta = \frac{12|\Delta\phi|^4}{\Delta U^2},
\label{eq:absol}
\end{align}
where $|\Delta\phi| = \sqrt{\sum_{a=1}^m \Delta\phi_a^2 }$. At this point, the solution Eqs.~(\ref{eq:phisol}, \ref{eq:absol}) is merely an extremum, and it is not yet clear whether it is the global minimum. To check this, let us rewrite $\tilde T$ using Eq.~(\ref{eq:absol}):
\begin{align}
& \tilde T\left[\phi,~ \left\{\alpha_a=\frac{24\Delta\phi_a|\Delta\phi|^2}{\Delta U}\right\},~ \beta=\frac{12|\Delta\phi|^4}{\Delta U^2} \right] \nonumber\\
=& \frac{12|\Delta\phi|^4}{\Delta U} + \sum_{a=1}^m \int_0^\infty dr \left( \frac{\pi^2 r^4 + 3\beta}{r} \right) \left( \dot\phi_a + \frac{\alpha_a r}{\pi^2 r^4 + 3\beta} \right)^2.
\end{align}
This equation shows that the solution Eqs.~(\ref{eq:phisol}, \ref{eq:absol}) indeed gives the global minimum of $T$ for fixed $\Delta\phi_a$ and $\Delta U$. We can therefore write the following inequality on the bounce action $S$ (if it exists) in terms of $|\Delta\phi|$ and $\Delta U$:
\begin{align}
S \geq \frac{24}{\lambda_\phi(\Delta\phi_a)},\qquad
\lambda_\phi(\Delta\phi_a) \equiv \frac{4\Delta U}{|\Delta\phi|^4}.
\label{eq:bound1}
\end{align}
The above inequality is saturated if and only if $\dot\phi_a = -\alpha_a r / (\pi^2 r^4 + 3\beta)$ holds\footnote{
One may be interested in the potential which realizes the bounce solution $\dot\phi_a = -\alpha_a r / (\pi^2 r^4 + 3\beta)$.
The explicit form of the potential in the single scalar field case is $U = \Delta U[ \phi/\Delta \phi - (4/3\pi)\sin(\pi\phi/\Delta\phi) + (1/6\pi)\sin(2\pi\phi/\Delta\phi)]$. This potential saturates the bound Eq.~(\ref{eq:bound1}). However, we do not know an example which saturates the bound Eq.~(\ref{eq:bound2}); thus, the bound Eq.~(\ref{eq:bound2}) may be weaker than Eq.~(\ref{eq:bound1}).
}. To calculate this bound, we need $\Delta\phi_a$, \textit{i.e.}, $\phi_a(r=0)$. Although we do not know $\Delta\phi_a$ unless we explicitly solve the equation of motion, we can still bound the minimal action without solving it. Suppose there exists $\lambda (>0)$ such that
\begin{align}
-U(\phi_a) = V(\phi_a)\geq -\frac{\lambda}{4}|\phi|^4.
\label{eq:def_of_lambda}
\end{align}
Such a $\lambda$ exists whenever $V(\vec\phi)/|\phi|^4$ is bounded from below. Then, we can define $\lambda_{\rm cr}$ as
\begin{align}
\lambda_{\rm cr} \equiv {\rm max}\left[ \frac{-4V(\vec\phi)}{|\phi|^4}\right].
\label{eq:def_of_lambdacr}
\end{align}
This $\lambda_{\rm cr}$ is the minimum of the set of $\lambda$'s satisfying Eq.~(\ref{eq:def_of_lambda}). Then, the bounce action has an absolute lower bound,
\begin{align}
S \geq \frac{24}{\lambda_{\rm cr}},
\label{eq:bound2}
\end{align}
because $\lambda_\phi \leq \lambda_{\rm cr}$ holds for any value of $\Delta\phi_a$. As a reference, the Fubini instanton~\cite{Fubini:1976jm}, which is a bounce solution for the negative quartic potential $V = -(\lambda_4/4) \phi^4$, has $S = 8\pi^2/(3\lambda_4) \simeq 26.3/\lambda_4 >24/\lambda_{\rm cr}$, because $\lambda_{\rm cr} = \lambda_4$ holds in this case. To derive the bound Eq.~(\ref{eq:bound2}), we do not need the explicit form of the bounce solution. Although the bound Eq.~(\ref{eq:bound2}) may be weaker than Eq.~(\ref{eq:bound1}), we can derive Eq.~(\ref{eq:bound2}) \textit{only} from the information of the potential.
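As a numerical cross-check (a standalone sketch, not part of the original analysis: single field, with the illustrative choices $\Delta\phi=\Delta U=1$ and $\lambda_4=0.1$), one can verify that the profile Eq.~(\ref{eq:phisol}) with the multipliers Eq.~(\ref{eq:absol}) satisfies both constraints and saturates Eq.~(\ref{eq:bound1}), and that the Fubini action indeed lies above the bound Eq.~(\ref{eq:bound2}):

```python
import numpy as np
from scipy.integrate import quad

# Extremal profile, Eqs. (phisol)/(absol): single field, Delta phi = Delta U = 1.
dphi, dU = 1.0, 1.0
alpha = 24 * dphi**3 / dU
beta = 12 * dphi**4 / dU**2
phidot = lambda r: -alpha * r / (np.pi**2 * r**4 + 3 * beta)

I1, _ = quad(phidot, 0, np.inf)                                   # = -Delta phi
I2, _ = quad(lambda r: 3 * phidot(r)**2 / r, 0, np.inf)           # = Delta U
T, _ = quad(lambda r: np.pi**2 * r**3 * phidot(r)**2, 0, np.inf)  # kinetic term
S_min = T / 2                                                     # bounce action

assert abs(I1 + dphi) < 1e-6
assert abs(I2 - dU) < 1e-6
assert abs(S_min - 6 * dphi**4 / dU) < 1e-6   # saturates Eq. (bound1)

# Fubini instanton vs. the potential-only bound Eq. (bound2)
lam4 = 0.1                                    # illustrative coupling
S_fubini = 8 * np.pi**2 / (3 * lam4)
assert 24 / lam4 < S_fubini                   # 24 < 8 pi^2/3
```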
In the $N(>2)$-dimensional case, the same procedure gives the lower bound
\begin{align}
S \geq \frac{4\left[N(N-1)(N-2)\right]^{\frac{N-2}{2}}}{N\Gamma(N/2)} \sin(2\pi/N)^{\frac{N}{2}}\left(\frac{1}{\lambda_N}\right)^{\frac{N-2}{2}},
\end{align}
with $\lambda_N \equiv N\Delta{U}/(\Delta\phi)^{\frac{2N}{N-2}}$.

One may wonder under which conditions the lower bound Eq.~(\ref{eq:bound2}) becomes close to the actual value. As long as the true vacuum and the false vacuum are not degenerate, our method gives a good estimate of the lower bound on the decay rate. For a detailed discussion, see the Appendix.

So far, we have derived a lower bound on the bounce action. Let us now comment on an upper bound. As discussed above, by finding a point $\phi_{\rm cr}$ which maximizes $[ -4V(\phi) /|\phi|^4]$, we can obtain a lower bound on the action. By using this $\phi_{\rm cr}$, we can also easily obtain an upper bound, as discussed in Ref.~\cite{Dasgupta:1996qu}. First, we restrict the field space to the $\phi_{\rm cr}$ direction, i.e., the straight line passing through the false vacuum $\phi=0$ and $\phi_{\rm cr}$. We obtain a reduced single-field theory on this straight line, and we can easily estimate the bounce action of this reduced theory. The resulting bounce action is then an upper bound on the actual minimal bounce action. Thus, by finding a point $\phi_{\rm cr}$ which maximizes $[ -4V(\phi) /|\phi|^4]$, we obtain both a lower and an upper bound on the actual minimal bounce action at the same time.

Now, let us briefly discuss the applicability of our results. As long as the kinetic terms are canonical, once the potential of the scalar fields is determined, our method gives a lower bound on the classical bounce action in a simple way. In some models, quantum corrections or thermal loop corrections are essential to generate a barrier between the false and the true vacuum.
In such cases, the effective potential can be used in our method, and our method gives a good estimate of the lower bound on the bounce action as long as the perturbative calculation around the bounce is valid.

In general, to obtain the vacuum decay rate precisely, we need to estimate the prefactor by integrating out fluctuations around the actual bounce solution\footnote{
The gauge dependence and renormalization scale dependence are canceled by considering loop corrections \cite{Endo:2017gal, Endo:2017tsz}.}:
\begin{align}
\frac{\Gamma}{V}=A'\mu^{4} \exp[{-S_{\rm cl}}]=\mu^{4}\exp[{-S_{\rm cl}+\ln A'}],
\end{align}
where $\Gamma/V$ denotes the decay rate per unit four-dimensional volume, $\mu$ a typical energy scale of the bounce dynamics, $S_{\rm cl}$ the classical bounce action, and $A'$ a (normalized) prefactor. As long as the theory is perturbative, we may expect $\ln A' \sim\mathcal{O}(1)$, although there is some ambiguity in the definition of $\mu$. In the case of $S_{\rm cl}\gg {\cal O}(1)$, the vacuum decay rate is mainly determined by the classical bounce action, and our bound becomes useful for determining the order of the decay rate. Indeed, when we consider the cosmological history of the vacuum, the relevant range of the action is $S_{\rm cl}\sim\mathcal{O}(100)$. Similarly, the condition for a thermal transition in the expanding Universe is $H^4 \sim T^4 e^{-S_3/T}$, where $H$ is the Hubble expansion rate and $T$ is the temperature; the typical size of the dimensionless action $S_3/T$ is ${\cal O}(100)$. In these cases, our bound can provide a lower bound on the vacuum decay rate.

Finally, we derive a sufficient condition for vacuum stability in our present Universe. In order to have a stable Universe, vacuum decay should not happen within a Hubble volume in a Hubble time:
\begin{align}
\label{eq:cond:vac}
H_0^4 \gtrsim \Gamma/V,
\end{align}
where $H_0\sim 10^{-42}~$GeV is the Hubble constant today.
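A quick numerical orientation (a standalone sketch; the scale $\mu$ and the field value implicit in the logarithms are illustrative choices, not from the text): combining Eq.~(\ref{eq:cond:vac}) with $\Gamma/V \sim \mu^4 e^{-S_{\rm cl}}$ requires $S_{\rm cl} \gtrsim 4\ln(\mu/H_0)$, which is indeed $\mathcal{O}(100)$, and the same kind of logarithm produces the constant $193$ appearing in $\lambda_{H_0}$ below:

```python
import math

H0 = 1e-42                    # Hubble constant today in GeV
mu = 100.0                    # illustrative electroweak-scale choice in GeV

# H0^4 > mu^4 exp(-S_cl)  <=>  S_cl > 4 ln(mu/H0)
S_required = 4 * math.log(mu / H0)
assert 100 < S_required < 1000            # O(100), as stated in the text

# ln(|phi|^2/H0^2) = ln(|phi|^2 / 1 GeV^2) + 2 ln(1 GeV / H0)
const = 2 * math.log(1.0 / H0)
assert abs(const - 193.4) < 0.1           # the "+193" in lambda_{H0}
```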
On the other hand, by using Eq.~(\ref{eq:bound1}), the vacuum decay rate per volume toward the point ${\phi}_a$ (if such a decay exists) is bounded as
\begin{align}
\label{eq:cond:min}
\frac{\Gamma({\phi}_a)}{V} \lesssim |\phi|^4 \exp\left( - \frac{6|\phi|^4}{-V(\phi_a)} \right),
\end{align}
where we assume that the size of the prefactor is roughly given by $|\phi|^4$. By using Eqs.~(\ref{eq:cond:vac}) and (\ref{eq:cond:min}), we obtain a sufficient condition for vacuum stability in terms of the shape of the potential:
\begin{align}
V(\phi) + \frac{1}{4} \lambda_{H_0}(|\phi|) |\phi|^4>0,
\label{eq:potential_bound}
\end{align}
with\footnote{
Strictly speaking, this condition should be imposed in the region $|\phi|>H_0$.
}
\begin{align}
\lambda_{H_0}(|\phi|)&=\frac{3}{\ln(|\phi|^2/H_0^2)}\nonumber\\
&\simeq \frac{3}{\ln(|\phi|^2/1\text{ GeV}^2)+193}.
\end{align}
Any false vacuum whose potential satisfies Eq.~(\ref{eq:potential_bound}) has a lifetime longer than the age of the Universe.

\section{Comparisons with the actual value}\label{sec:comparisons}

In this section, we discuss several explicit examples in four-dimensional space. We will check the consistency of Eq.~(\ref{eq:bound2}) and, furthermore, see that the lower bound Eq.~(\ref{eq:bound2}) comes close to the actual value of the bounce action in many cases. In this sense, the lower bound Eq.~(\ref{eq:bound2}) is a quite useful tool for estimating the bounce action when an explicit calculation is difficult.

\subsection{Single scalar field}

\begin{figure}[h!]
\centering
\includegraphics[width=0.9\hsize]{shat.pdf}
\caption{
The minimal bounce action and the lower bound. The blue solid line shows the lower bound given in Eq.~(\ref{eq:bound2}). The green dotted line is taken from Fig.~1 in Ref.~\cite{Sarid:1998sn}.
} \label{fig:comparison with sarid}
~\\[5mm]
\includegraphics[width=0.9\hsize]{s_ratio.pdf}
\caption{
The ratio between the green dotted line and the blue solid line in Fig.~\ref{fig:comparison with sarid}.
}\label{fig:comparison with sarid (ratio)}
\end{figure}

The first example is a single scalar field theory with a polynomial potential:
\begin{align}
V(\phi) = \frac{1}{2}M^2 \phi^2 - \frac{1}{3} A \phi^3 + \frac{1}{4} \lambda_4 \phi^4.
\end{align}
This potential gives us good insight into the relationship between our bound Eq.~(\ref{eq:bound2}) and the minimal bounce action $S$. As discussed in Ref.~\cite{Sarid:1998sn}, we can parametrize the minimal bounce action as
\begin{align}
S = \frac{9M^2}{2A^2} \hat{S}(\kappa),\qquad \kappa \equiv \frac{9 \lambda_4 M^2}{8 A^2}.
\end{align}
Here $\hat S$ is a function which depends only on $\kappa$. According to the definition given in Eq.~(\ref{eq:def_of_lambdacr}), $\lambda_{\rm cr}$ is calculated as
\begin{align}
\label{eq:lamcrsingle}
\lambda_{\rm cr}=\frac{2A^2}{9M^2}-\lambda_4.
\end{align}
By using this $\lambda_{\rm cr}$, we obtain the bound on $\hat S(\kappa)$ as
\begin{align}
\hat{S}(\kappa) \geq \frac{24}{1-4\kappa}.
\label{eq:bound on shat}
\end{align}
Ref.~\cite{Sarid:1998sn} gives the numerical result for $\hat S(\kappa)$ obtained by calculating the bounce configuration, and we show a comparison between the result of Ref.~\cite{Sarid:1998sn} and the bound Eq.~(\ref{eq:bound on shat}) in Fig.~\ref{fig:comparison with sarid} and Fig.~\ref{fig:comparison with sarid (ratio)}. We can see that our bound becomes tight for large negative $\kappa$. In this regime, the bounce solution is well described by the Fubini instanton \cite{Fubini:1976jm}. On the other hand, our bound departs from the numerical value of the minimal bounce action when $\kappa$ is close to $1/4$; in this regime, the false and true vacua are almost degenerate, and the bounce solution is well described by the thin-wall approximation.
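Eq.~(\ref{eq:lamcrsingle}) can be cross-checked by maximizing $-4V(\phi)/\phi^4$ numerically; a minimal sketch (the parameter point is an arbitrary illustration):

```python
import numpy as np
from scipy.optimize import minimize_scalar

M, A, lam4 = 1.0, 1.0, 0.05   # arbitrary illustrative parameter point
V = lambda p: 0.5 * M**2 * p**2 - A * p**3 / 3 + lam4 * p**4 / 4

# lambda_cr = max over phi of -4 V(phi)/phi^4; the maximum sits at phi = 3 M^2/A
res = minimize_scalar(lambda p: 4 * V(p) / p**4, bracket=(1.0, 3.0, 6.0))
lam_cr_num = -res.fun
lam_cr_ana = 2 * A**2 / (9 * M**2) - lam4    # Eq. (lamcrsingle)
assert abs(lam_cr_num - lam_cr_ana) < 1e-6
```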
In this regime, a potential barrier separates the true vacuum from the false vacuum. \subsection{Multiple scalar fields} \begin{table} \centering \begin{tabular}{|r||c|c|c|c|} \hline & $m=1$ & $m=2$ & $m=4$ & $m=8$ \\\hline\hline $1 \leq R<1.2$ & 0.16 & 0.52 & 0.78 & 0.96 \\\hline $1.2\leq R<1.5$ & 0.22 & 0.26 & 0.13 & 0.02 \\\hline $1.5\leq R<2$ & 0.14 & 0.12 & 0.09 & 0.01 \\\hline $2 \leq R<5$ & 0.09 & 0.04 & 0.01 & 0 \\\hline $5 \leq R$ & 0.02 & 0.01 & 0 & 0 \\\hline Stable & 0.37 & 0.04 & 0 & 0 \\\hline \end{tabular} \caption{ The distribution of $R\equiv S_{\rm upper}/S_{\rm lower}$ for the multiscalar potential. }\label{tab:multiscalar} \end{table} The second example is a polynomial potential with multiple scalar fields $\phi_1,...,\phi_m$. We include terms up to the quartic interaction and parametrize the potential as \begin{align} V=\sum_{i}M^2\mu_i \phi_i^2+\sum _{i,j,k}M\gamma_{ijk}\phi_i\phi_j\phi_k +\sum_{ijkl}\lambda_{ijkl}\phi_i\phi_j\phi_k\phi_l, \end{align} where $M$ is a mass scale, which does not affect the value of the classical action, and $\mu_i,\gamma_{ijk},\lambda_{ijkl}$ are dimensionless couplings. Here we do not calculate the bounce configuration explicitly. Instead, we estimate lower and upper bounds on the bounce action. The upper bound is estimated by the straight-line method described in the later part of Sec.~\ref{sec:lower}. We define the ratio between the upper and lower bounds as $R\equiv S_{\rm upper}/S_{\rm lower}$. If $R$ is close to $1$, our lower bound is close to the actual value of the bounce action. We calculate the ratio $R$ by taking the $\mu$'s, $\gamma$'s, and $\lambda$'s as random variables, as in Ref.~\cite{Dine:2015ioa}. The ranges of the parameters are taken as \begin{align} 0<\mu_i<1,~~ -\frac{1}{m}<\gamma_{ijk}<\frac{1}{m},~~ -\frac{1}{m}<\lambda_{ijkl}<\frac{1}{m}. \end{align} We take the ranges of $\gamma$ and $\lambda$ so that the theory remains stable against loop corrections.
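The random scan described above can be sketched as follows. This is our own minimal illustration, not the authors' code: each tensor entry of $\gamma_{ijk}$ and $\lambda_{ijkl}$ is drawn independently and uniformly from the quoted ranges, which is a simplification (we do not symmetrize over indices).

```python
import random

def draw_couplings(m, seed=0):
    """Draw one random parameter point for the m-field quartic potential."""
    rng = random.Random(seed)
    mu = [rng.uniform(0.0, 1.0) for _ in range(m)]
    gam = [[[rng.uniform(-1.0 / m, 1.0 / m) for _ in range(m)]
            for _ in range(m)] for _ in range(m)]
    lam = [[[[rng.uniform(-1.0 / m, 1.0 / m) for _ in range(m)]
             for _ in range(m)] for _ in range(m)] for _ in range(m)]
    return mu, gam, lam

def V(phi, mu, gam, lam, M=1.0):
    """Quartic multiscalar potential evaluated at the field point phi."""
    m = len(phi)
    v2 = sum(M**2 * mu[i] * phi[i]**2 for i in range(m))
    v3 = sum(M * gam[i][j][k] * phi[i] * phi[j] * phi[k]
             for i in range(m) for j in range(m) for k in range(m))
    v4 = sum(lam[i][j][k][l] * phi[i] * phi[j] * phi[k] * phi[l]
             for i in range(m) for j in range(m) for k in range(m) for l in range(m))
    return v2 + v3 + v4

mu, gam, lam = draw_couplings(m=2, seed=42)
assert all(0.0 <= x <= 1.0 for x in mu)
assert V([0.0, 0.0], mu, gam, lam) == 0.0   # the origin is the false vacuum
```

In an actual scan one would repeat this for 1000 seeds and, for each point, evaluate the upper and lower bounds on the bounce action to build the distribution of $R$.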
We generate 1000 parameter points and show the distribution of $R$ in Tab.~\ref{tab:multiscalar}. This result shows that the lower bound Eq.~(\ref{eq:bound2}) approaches the actual value of the bounce action as the number of scalar fields grows. This feature can be understood as follows. As we have seen in the single scalar example, $\lambda_{\rm cr}$ depends on the quartic coupling and the square of the cubic coupling (see Eq.~(\ref{eq:lamcrsingle})). In the present case, the typical value of the quartic coupling is $1/m$ and that of the squared cubic coupling is $1/m^2$. With larger $m$, the quartic coupling becomes more and more relevant and the bounce action approaches our lower bound. \subsection{MSSM} \begin{figure} \centering\includegraphics[width=0.9\hsize]{mu700_200.pdf} \caption{ Vacuum stability constraints on the $m_{\tilde L}$-$\tan\beta$ plane. Here we take $\mu = 700~{\rm GeV}$ and $m_{\tilde\tau_R} = m_{\tilde L} + 200~{\rm GeV}$. The blue line is obtained by using the RHS of Eq.~(\ref{eq:bound2}). The red dashed line is obtained by using a fitting formula given in Ref.~\cite{Hisano:2010re}. }\label{fig:stau} \end{figure} The last example is the MSSM. Supersymmetric models introduce many scalar partners of the standard model fermions, and these can destabilize the standard model-like vacuum. For example, Ref.~\cite{Hisano:2010re} discussed vacuum stability in the direction of the third-generation sleptons with large $\tan\beta$.
The scalar potential for the up-type Higgs $H_u$, the left-handed stau $\tilde L$, and the right-handed stau $\tilde\tau_R$ is given as \begin{align} V = & (m_{H_u}^2 + \mu^2) |H_u|^2 + m_{\tilde L}^2 |\tilde L|^2 + m_{\tilde \tau_R}^2 |\tilde \tau_R|^2 \nonumber\\ & + \frac{g_2^2}{8}( |\tilde L|^2 + |H_u|^2 )^2 + \frac{g_Y^2}{8}( |\tilde L|^2 - 2|\tilde\tau_R|^2 - |H_u|^2 )^2 \nonumber\\ & + \frac{g_2^2 + g_Y^2}{8} \delta_H |H_u|^4 + y_\tau^2 |\tilde L\tilde \tau_R|^2 \nonumber\\ & - (y_\tau \mu H_u^* \tilde L \tilde\tau_R + h.c.). \end{align} Here we do not consider the down-type Higgs $H_d$ because its VEV is suppressed by $1/\tan\beta$. $\delta_H$ represents the radiative correction from the top quark and the stop, and its typical value is $\delta_H \simeq 1$. The cubic term $H_u^* \tilde L \tilde\tau_R$ in the last line destabilizes the standard model-like vacuum; its coupling constant is proportional to $\mu\tan\beta$. In Fig.~\ref{fig:stau}, we compare the lower bound on the bounce action given in Eq.~(\ref{eq:bound2}) with the result of Ref.~\cite{Hisano:2010re}. The lower bound on the bounce action $S$ equals 400 on the blue line, and the standard model-like vacuum is sufficiently stable in the region to the lower right of the blue line. By using the result of Ref.~\cite{Hisano:2010re}, we also show in Fig.~\ref{fig:stau} the red line on which $S=400$ is satisfied. We can see that our bound Eq.~(\ref{eq:bound2}) is consistent with the result of Ref.~\cite{Hisano:2010re}. To discuss the stability in the upper left region, Eq.~(\ref{eq:bound2}) is in general not sufficient. However, Fig.~\ref{fig:stau} shows that the sufficient stability condition given by the blue line differs by only 5\% from the upper bound on $\tan\beta$ given by the red line. This means that Eq.~(\ref{eq:bound2}) gives a good estimate of the upper bound on $\tan\beta$.
Indeed, Figs.~\ref{fig:comparison with sarid} and \ref{fig:comparison with sarid (ratio)} show that the lower bound on the bounce action gives a good estimate of the actual value unless the true and false vacua are nearly degenerate. Such degeneracy is special in the sense that it requires a tuning of the parameters or an approximate symmetry between the two vacua. Thus, we expect that our discussion is useful for more complicated models. \section{Conclusion}\label{sec:conclusion} In this Letter, we derived a generic lower bound, Eq.~(\ref{eq:bound2}), on the bounce action by means of a simple argument based on a Lagrange multiplier. Our bound can be applied to a broad class of scalar potentials with any number of scalar fields. The only information needed to derive this bound is $\lambda_{\rm cr}$, which is defined in Eq.~(\ref{eq:def_of_lambdacr}). In particular, our bound provides useful information for models with a large number of scalar fields, such as the landscape scenario, because we do not need the explicit form of the bounce solution. By using this result, in Eq.~(\ref{eq:potential_bound}) we derived a sufficient condition for the stability of the vacuum of the Universe for a general scalar potential. The bound Eq.~(\ref{eq:potential_bound}) can be used as a quick check of the stability of a false vacuum in a broad class of models. As we discussed in Section~\ref{sec:comparisons}, the lower bound Eq.~(\ref{eq:bound2}) gives a good estimate of the actual value in many cases. We also investigated the conditions under which the bounce action comes close to the lower bound. As long as the two vacua are not nearly degenerate, the minimal bounce action can be close to the lower bound. We have seen this feature in several representative examples. \section*{Acknowledgements} We thank Kfir Blum for fruitful discussions and careful reading of the Letter. We are also grateful to Sonia Paban for useful comments.
We also thank Junji Hisano for a clarification of Ref.~\cite{Hisano:2010re}. The work of MT is supported by the JSPS Research Fellowship for Young Scientists.
\section{Introduction} The Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo have detected six sources of GWs as of the writing of this article \citep{abbott16a,abbott16b,abbott17,abbott17a,abbott17b,abbott17c}. All of them but one (GW170817, the last reference) correspond to black hole binaries (BHBs). Typically, the masses are larger than the nominal value of $10\,M_{\odot}$ derived from stellar evolution, as predicted by \cite{Amaro-SeoaneChen2016}, i.e. ``hyperstellar-mass black holes'', as the authors coined them, and as previously discussed by e.g. \citealt{heger2003,mapelli08,zampieri09,mapelli10,belczy2010,fryer12,mapelli13,ziosi2014,speetal15}.\\ There are two different channels to form a BHB: either (i) in the field, in isolation, via the stellar evolution of a binary of two extended stars \citep[see e.g.][]{tutukov,bethe, belczynski2002,belckzynski2010,postnov14,loeb16,tutukov17,postnov17,giacobbo18}, or (ii) via dynamical interactions in a dense stellar system (see the review of \citealt{BenacquistaDowning2013} and e.g. also \citealt{sambaran10,downing2010,mapelli13,ziosi2014,rodriguez15,rodriguez16,mapelli16,askar17,antonini16,megan17,frak18}). In this article we address the evolution of a BHB in an open cluster with a suite of 1500 direct-summation $N-$body simulations. We model the evolution of a BHB with properties similar to those expected to be detected by LIGO/Virgo \citep{Amaro-SeoaneChen2016}, using different representations of small- and intermediate-mass isolated open star clusters. Our cluster models are considered a proxy of the Galactic population of open clusters, which are characterized by central densities of a few M$_\odot\,{\rm pc}^{-3}$, i.e. much lower than the typical densities of globular clusters, and contain a number of stars two to three orders of magnitude smaller than a globular cluster.
We investigate the evolution of stellar BHBs in low-mass and low-density open cluster models by means of high-precision direct-summation $N-$body simulations. In an open cluster the impulsive effect produced by the large fluctuations over the mean field, whose amplitude is of order $\sqrt{N}/N$, can significantly affect the BHB evolution. Assuming an initial number of stars $N_{\rm o}=1000$ for this type of open cluster and $N_{\rm g}=10^6$ for a typical globular cluster, we can calculate the ratio of the expected fluctuation amplitudes over the mean field as $f=\sqrt[]{N_{\rm g}/N_{\rm o}}=\sqrt[]{1000}\simeq 32$, thus implying a larger such effect in open clusters. This enhanced effect of stochastic fluctuations (physically given by the rare but close approaches among cluster stars) is reflected in the ratio of the two-body relaxation time scales which, given the cluster sizes $R_{\rm o}$ and $R_{\rm g}$, reads \citep{spi87} \begin{equation} \frac{t_{\rm rlx,\,o}}{t_{\rm rlx,\,g}}=\frac{1}{f}\frac{\log(0.11\,N_{\rm o})}{\log(0.11\,N_{\rm g})}\left(\frac{R_{\rm o}}{R_{\rm g}}\right)^{3/2}. \end{equation} Assuming $R_{\rm o}/R_{\rm g} = 1/5 $, the above equation yields $t_{\rm rlx,\,o}/t_{\rm rlx,\,g}\simeq 0.02$, meaning that the smaller system evolves 50 times faster. Of course, this enhanced effect of individual strong encounters is partly compensated by their lower rate. In this paper we address the evolution of a BHB which, as a result of the orbital decay induced by dynamical friction \citep[see e.g.][]{bt}, we assume to be located at the centre of an open cluster-like system. The masses of the black holes are set to $30\,M_{\odot}$ each, following the first LIGO/Virgo detection, the GW150914 source \citep{abbott16a}, and \cite{Amaro-SeoaneChen2016}. Despite the possible ejection due to the supernova natal kick, there is room for such remnants to be retained in an open cluster.
Indeed, compact remnants such as neutron stars and black holes formed in massive binaries are much more easily retained in clusters because the kick momentum is shared with a massive companion, which leads to a much lower velocity for the post-supernova binary \citep{podsi04,podslia2005}. In the case of neutron stars, \cite{podsi04} showed that for periods below 100 days the supernova explosion leads to very little or no natal kick at all (their Fig.~2, the dichotomous kick scenario). Open clusters have binaries mostly with periods of 100 days and below (see \citealt{Mathieu2008}, based on the data of \citealt{DuquennoyMayor1991}). These results can be extrapolated to black holes because they receive a kick similar to that of neutron stars (see \citealt{RepettoEt2012} and also the explanation of this phenomenon by \citealt{Janka2013}). In any case, black holes with masses greater than $10\,M_{\odot}$ at solar metallicity form via direct collapse and do not undergo a supernova explosion, and hence do not receive a natal kick \citep{PernaEtAl2018}. Also, while solar metallicity in principle could not lead to the formation of black holes more massive than $25\,M_{\odot}$ \citep{spema17}, we note that the resonant interaction of two binary systems can lead to a collisional merger which forms this kind of black hole \citep{goswami2004,fregeau2004} at the centre of a stellar system, where they naturally segregate due to dynamical friction. Moreover, we note that another possibility is that these stellar-mass black holes could have acquired their large masses through repeated relativistic mergers of lighter progenitors. However, the relativistic recoil velocity is around $200-450\,{\rm km/s}$ for progenitors with mass ratios $\sim [0.2,\,1]$, respectively, so that this possibility is unlikely \citep[see e.g.][their Fig.
1, lower panel]{Amaro-SeoaneChen2016}, because the merger product would escape the cluster, unless the initial distribution of spins is peaked around zero and the black holes have the same mass (as in the work of \cite{RodriguezEtAl2018} in the context of globular clusters). In this case, second-generation mergers are possible and, hence, one can form a more massive black hole via successive mergers of lighter progenitors. The relatively low number of stars of open clusters makes it possible to integrate over at least a few relaxation times in a relatively short computational time, so that, contrary to the cases of globular clusters or galactic nuclei, these systems can be fully integrated without the need to rescale the results. In this article we present a series of 1500 dedicated direct-summation $N-$body simulations of open clusters with BHBs. The paper is organized as follows: in Sect.~2 we describe the numerical methods used and our set of models; in Sect.~3 we present and discuss the results of the BHB dynamics; in Sect.~4 we discuss the implications of our BHBs as sources of gravitational waves; in Sect.~5 we present the results on tidal disruption events; in Sect.~6 we draw overall conclusions. \section{Method and Models} \label{met_mod} To study the BHB evolution inside its parent open cluster (henceforth OC) we used \texttt{NBODY7} \citep{aarsethnb7}, a direct $N$-body code that reliably integrates the motion of stars in stellar systems and implements a careful treatment of strong gravitational encounters, also taking into account stellar evolution. We performed several simulations varying both the main OC and BHB properties, taking advantage of the two high-performance workstations hosted at Sapienza, University of Rome, and the Kepler cluster, hosted at Heidelberg University. \begin{table*} \centering \caption{ Main parameters characterizing our models.
The first two columns refer to the cluster total number of stars, $N_{\rm cl}$, and its mass $M_{\rm cl}$. The second two-column group refers to the BHB parameters: semi-major axis, \textit{a}, and initial eccentricity, \textit{e}. The last column gives the model identification name. Each model comprises 150 different OC realizations. } \label{ictab} \begin{tabular}{@{}ccccccc@{}} \toprule \multicolumn{2}{c}{\textbf{Cluster}} & & \multicolumn{2}{c}{\textbf{BHB}} & \textbf{} & \textbf{$N$-body set} \\ \midrule $N_{\rm cl}$ & $M_{\rm cl}$ (M$_\odot$) & & \textit{a} (pc) & \textit{e} & & Model \\ \midrule \multirow{2}{*}{512} & \multirow{2}{*}{$ 3.2 \times 10^{2}$} & & \multirow{2}{*}{0.01} & 0.0 & & A00 \\ & & & & 0.5 & & A05 \\ \midrule \multirow{2}{*}{1024} & \multirow{2}{*}{$ 7.1 \times 10^{2}$} & & \multirow{2}{*}{0.01} & 0.0 & & B00 \\ & & & & 0.5 & & B05 \\ \midrule \multirow{2}{*}{2048} & \multirow{2}{*}{$ 1.4\times 10^{3}$} & & \multirow{2}{*}{0.01} & 0.0 & & C00 \\ & & & & 0.5 & & C05 \\ \midrule \multirow{2}{*}{4096} & \multirow{2}{*}{$ 2.7 \times 10^{3}$} & & \multirow{2}{*}{0.01} & 0.0 & & D00 \\ & & & & 0.5 & & D05 \\ \bottomrule \end{tabular} \end{table*} Table \ref{ictab} summarizes the main properties of our $N$-body simulation models. We created four simulation groups representing OC models with varying initial numbers of particles, namely $512\leq N \leq 4096$. Assuming a \citet{kroupa01} initial mass function ($0.01$ M$_\odot$ $\leq$ M $\leq$ $100$ M$_\odot$), our OC model masses range between $ 300$ M$_{\odot}$ and $3000$ M$_\odot$. All clusters are modeled according to a Plummer density profile \citep{Plum} at virial equilibrium, with a core radius $r_c = 1$ pc, and adopting solar metallicity (Z$_\odot$).
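For reference, radii following a Plummer profile can be drawn by inverting the cumulative mass fraction $M(<r)/M_{\rm tot} = r^3/(r^2+r_c^2)^{3/2}$. The sketch below is our own illustration (not part of \texttt{NBODY7}); it checks the sampled half-mass radius against the analytic value $r_h = r_c/\sqrt{2^{2/3}-1} \simeq 1.305\,r_c$.

```python
import random

def sample_plummer_radius(a, rng):
    """Draw one radius from a Plummer sphere of scale radius a (inverse CDF)."""
    u = rng.random()                         # u = M(<r)/M_tot, uniform in (0, 1)
    return a / (u ** (-2.0 / 3.0) - 1.0) ** 0.5

rng = random.Random(1)
a = 1.0                                      # scale radius in pc (r_c = 1 pc in the models)
radii = sorted(sample_plummer_radius(a, rng) for _ in range(200000))
r_half = radii[len(radii) // 2]              # median radius = half-mass radius

# Analytic half-mass radius of a Plummer model: r_h = a / sqrt(2^(2/3) - 1) ~ 1.305 a
assert abs(r_half - 1.305 * a) < 0.02 * a
```

Velocities would then be drawn from the corresponding virial-equilibrium distribution function, which we omit here for brevity.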
We perform all the simulations including the stellar evolution recipes implemented in the \texttt{NBODY7} $\,$ code, which come from the standard \texttt{SSE} and \texttt{BSE} tools \citep{hurley2000, hurley2002}, with updated stellar mass loss and remnant formation prescriptions from \citet{bel2010}. Further, for simplicity, we do not take into account primordial binaries, which we leave to future work. To give statistical significance to the results we made $150$ different realizations of every model, denoted with the names A00, A05, B00, B05, C00, C05, D00 and D05, where the letter refers to increasing $N$ and the digits to the initial BHB orbital eccentricity. Additionally, we ran a further sample of $421$ simulations, aiming at investigating the implications of some of our assumptions on the BHB evolution. These models are discussed in detail in Sect.~\ref{tde}. In all our simulations, we assumed that the BHB is initially placed at the centre of its host OC and is composed of two equal-mass BHs with individual mass M$_{\rm{BH}}$ = $30$ M$_\odot$. The initial BHB semi-major axis is $0.01$ pc, with two initial eccentricities, $e_{\rm BHB}=0$ and $e_{\rm BHB}=0.5$. The initial conditions drawn this way are obtained by updating the procedure followed in \cite{ASCD15He}. The choice of a BHB initially at rest at the centre of the cluster, with a not particularly small separation, is not a limitation: the dynamical friction time scale of a $30$ M$_{\odot}$ object is short enough to make it likely both that the orbital decay of the BHB occurs rapidly and that two individual massive BHs would pair into a BHB within a short time. The BHB orbital period is, for the given choices of masses and semi-major axis, P$_{\rm BHB}= 0.012$ Myr.
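The quoted orbital period, and the binding energy mentioned below, follow directly from Kepler's third law and the two-body binding energy; the following sketch (ours, for illustration) checks both numbers.

```python
import math

G_SI  = 6.674e-11         # m^3 kg^-1 s^-2
PC    = 3.0857e16         # m
M_SUN = 1.989e30          # kg
YR    = 3.156e7           # s

a  = 0.01 * PC            # initial BHB semi-major axis
m1 = m2 = 30.0 * M_SUN    # BH masses

# Kepler's third law: P = 2 pi sqrt(a^3 / (G (m1 + m2)))
P_myr = 2.0 * math.pi * math.sqrt(a**3 / (G_SI * (m1 + m2))) / YR / 1e6
assert abs(P_myr - 0.012) < 0.001

# Binding energy BE = G m1 m2 / (2 a), converted to erg (1 J = 1e7 erg)
BE_erg = G_SI * m1 * m2 / (2.0 * a) * 1e7
assert abs(BE_erg - 3.8e45) < 0.2e45
```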
Note that our BHBs are actually ``hard'' binaries \citep{heggie75,hills75,bt}, having binding energy BE $\sim 3.8 \times 10^{45}$ erg, which is larger than the average kinetic energy of the field stars in each type of cluster studied in this work. All models were evolved up to $3$ Gyr, which is about 3 times the simulated OC internal relaxation time. The scope of the present work is to investigate the dynamical evolution of the BHB, hence we focus mainly on tracking it. We also note that stellar-mass BHs naturally form binary systems in open clusters over a wide cluster mass range, and can also undergo triple-/subsystem-driven mergers, as recently shown through explicit direct $N$-body simulations by \citet{kimpson} and \citet{sambaran1}. \section{Dynamics of the black hole binary} \subsection{General evolution} \label{BHB_ev} The BHB is assumed to be located at the centre of the cluster. Due to interactions with other stars, the BHB can undergo one of the following three outcomes: (i) the BHB can shrink and hence become harder, meaning that its binding energy becomes larger than the average kinetic energy of the stars in the system \citep[see e.g.][]{bt}; (ii) the BHB can gain energy and therefore increase its semi-major axis; (iii) the BHB can be ionised in a (typically three-body) encounter. In Table~\ref{ub} we show the percentages of these three outcomes in our simulations. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{images/semi_sha.pdf} \includegraphics[width=0.5\textwidth]{images/semi_shb.pdf} \caption{ Semi-major axis evolution in four random, representative simulations. The upper panel shows binaries which are initially circular and the lower panel depicts eccentric ones. We normalise the time to the initial (i.e. at $T=0$) period of the binary. } \label{fig:param_sh} \end{figure} \begin{table} \centering \caption{ Percentage of BHBs which undergo one of the three processes described in the text.
They either shrink (column 2), increase their semi-major axis (column 3) or break up (column 4). } \label{ub} \begin{tabular}{@{}cccc@{}} \toprule \textbf{Model} & \textbf{Harder} & \textbf{Wider}& \textbf{Break up} \\ & \% & \% & \% \\\midrule A00 & 89.1 & 7.9 & 2.9 \\ A05 & 97.1 & 2.1 & 0.7 \\ B00 & 92.5 & 2.7 & 4.8 \\ B05 & 94.0 & 2.0 & 4.0 \\ C00 & 93.6 & 0 & 6.4 \\ C05 & 96.5 & 0 & 3.5 \\ D00 & 94.2 & 0 & 5.8 \\ D05 & 97.1 & 0 & 2.8 \\ \bottomrule \end{tabular} \end{table} We can see that typically about $90$\% of all binaries shrink their semi-major axis as they evolve, as one can expect from the so-called \textit{Heggie's law} \citep{heggie75}. We note that models in which the binary was initially eccentric lead to a higher percentage of the ``harder'' outcome. We display in Fig.~\ref{fig:param_sh} a few representative examples of these processes. The decrease is gradual for models A and B, while models C and D (the more massive ones) show a steeper decrease. There are however cases in which the binary gains energy from gravitational encounters and increases its semi-major axis, becoming ``wider'' (Table \ref{ub}, column 3). If the host cluster is massive enough, the semi-major axis always decreases (models C and D), contrary to lighter models, in which it can increase (models A and B). Because of the initial choice of the BHB semi-major axis, gravitational encounters with other stars rarely ionise it, although we observe a few events, typically below $7\,\%$ (circular binaries are easier to ionise). This ionisation happens between $\sim 5$ Myr and up to $\sim 100$ Myr and it is usually driven by the encounter with a massive star ($\gtrsim 10$ M$_\odot$). In such cases, the massive star generally pairs with one of the BHs, while the other BH is usually ejected from the stellar system. \subsubsection{Pericentre evolution} For a BHB to become an efficient source of GWs, the pericentre distance must be short enough.
In this section we analyse the evolution of the pericentres for our different models. In Table~\ref{peritab} we summarise the results of Figs.~\ref{fig:per_a}, \ref{fig:per_b}. In the table we show the average pericentre distance at three different times (1, 2 and 3 Gyr) in the evolution of the cluster, as well as the absolute minimum pericentre distance we find at each of these times. \begin{table} \centering \caption{ Evolution of the BHB pericentre distance for all the models. The columns from left to right denote, respectively: the model, the initial BHB pericentre (r$_{\rm p}^{i}$), the time at which we have calculated the average ($T$), the BHB pericentre distance averaged over all the simulations of the respective model ($\langle r_{\rm p}\rangle$), and the absolute minimum distance we record (r$_{\rm p}^{min}$). We note that the Schwarzschild radius of a $30\,M_{\odot}$ black hole is $1.43\times 10^{-12}~{\rm pc}$. } \label{peritab} \begin{tabular}{@{}ccccccl@{}} \hline \textbf{Model} & \textbf{r$_{\rm p}^{i}$} & \textbf{T} & \textbf{$\langle r_{\rm p}\rangle$} & \textbf{r$_{\rm p}^{min}$} & \\ & (pc) & (Gyr) & (pc) & (pc) & \\ \hline & & 1 & $2.3\times 10^{-3}$ & $5.0\times 10^{-6}$ & \\ A00 & $1.0\times 10^{-2}$ & 2 & $2.3\times 10^{-3}$ & $3.2\times 10^{-5}$ & \\ & & 3 & $2.1\times 10^{-3}$ & $4.9\times 10^{-6}$ & \\ \midrule & & 1 & $1.7\times 10^{-3}$ & $1.4\times 10^{-5}$ & \\ A05 & $5.0\times 10^{-3}$ & 2 & $1.9\times 10^{-3}$ & $1.0\times 10^{-5}$ & \\ & & 3 & $1.7\times 10^{-3}$ & $2.7\times 10^{-4}$ & \\ \midrule & & 1 & $5.4\times 10^{-4}$ & $3.4\times 10^{-6}$ & \\ B00 & $1.0\times 10^{-2}$ & 2 & $5.7\times 10^{-4}$ & $2.2\times 10^{-6}$ & \\ & & 3 & $5.1\times 10^{-4}$ & $4.1\times 10^{-6}$ & \\ \midrule & & 1 & $1.1\times 10^{-3}$ & $2.4\times 10^{-7}$ & \\ B05 & $5.0\times 10^{-3}$ & 2 & $8.9\times 10^{-4}$ & $2.4\times 10^{-7}$ & \\ & & 3 & $7.9\times 10^{-4}$ & $1.5\times 10^{-6}$ & \\ \midrule & & 1 & $2.8\times 10^{-4}$ & $1.7\times 10^{-6}$ & \\ C00
& $1.0\times 10^{-2}$ & 2 & $2.6\times 10^{-4}$ & $2.2\times 10^{-7}$ & \\ & & 3 & $2.5\times 10^{-4}$ & $5.3\times 10^{-7}$ & \\ \midrule & & 1 & $3.7\times 10^{-4}$ & $1.5\times 10^{-6}$ & \\ C05 & $5.0\times 10^{-3}$ & 2 & $3.1\times 10^{-4}$ & $2.5\times 10^{-7}$ & \\ & & 3 & $2.6\times 10^{-4}$ & $2.5\times 10^{-7}$ & \\ \midrule & & 1 & $1.3\times 10^{-4}$ & $2.7\times 10^{-6}$ & \\ D00 & $1.0\times 10^{-2}$ & 2 & $1.0\times 10^{-4}$ & $9.2\times 10^{-7}$ & \\ & & 3 & $9.1\times 10^{-5}$ & $8.8\times 10^{-7}$ & \\ \midrule & & 1 & $1.5\times 10^{-4}$ & $1.8\times 10^{-6}$ & \\ D05 & $5.0\times 10^{-3}$& 2 & $1.5\times 10^{-4}$ & $3.6\times 10^{-6}$ & \\ & & 3 & $1.3\times 10^{-4}$ & $9.8\times 10^{-6}$ & \\ \midrule \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{}& \end{tabular} \end{table} \begin{figure*} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/peri_A00.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/peri_B00.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/peri_C00.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/peri_D00.pdf} \end{minipage} \caption{ BHB pericentre distance distribution for all simulations of models A00, B00, C00 and D00. The histograms are calculated at three different points in the evolution of the systems, namely at $1$ Gyr (blue), $2$ Gyr (red) and $3$ Gyr (green). We show with a vertical, black dashed line the initial pericentre in the model. 
} \label{fig:per_a} \end{figure*} \begin{figure*} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/peri_A05.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/peri_B05.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/peri_C05.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/peri_D05.pdf} \end{minipage} \caption{As in Fig.~\ref{fig:per_a}, but for models A05, B05, C05 and D05.} \label{fig:per_b} \end{figure*} For BHBs which are initially circular, we can see in the table and in Fig.~\ref{fig:per_a} that in all models there is a significant shrinkage of the pericentre distance of at least one order of magnitude. Such shrinkage occurs after only 1 Gyr. For the most massive cluster, i.e. model D00, about $20\%$ of all binaries achieve a pericentre distance about two orders of magnitude smaller than the initial value. We note, moreover, that a few binaries shrink to extremely small values, reaching pericentre distances down to $10^{-7}$~pc. Eccentric binaries also shrink and achieve smaller pericentre values, as we can see in Fig.~\ref{fig:per_b}. In both the eccentric and the circular case, we note that in low-density clusters, i.e. models A and B, the BHB preserves a pericentre relatively close to the initial value, indicating that such stellar systems are less efficient in favouring the BHB shrinkage. In such models the pericentre data appear more spread out than in models C and D. A further difference is that, for example in model A00, even after 3 Gyr the BHBs have larger pericentres, indicating that the binary becomes wider, contrary to what is observed in model A05.
We note additionally that in the intermediate-mass model B05 the pericentre reaches very small values (of the order of $10^{-7}$~pc), which does not occur for an initially circular orbit. These results indicate that in such cases both the cluster stellar density and the initial orbital eccentricity play a relevant role in favouring the BHB shrinkage. \subsection{Retained and dynamically-ejected BHBs} \label{ret_esc} We observe that, as a consequence of interactions with other stars in the system, the BHB can also be ejected from the cluster. In the code that we are using, \texttt{NBODY7}, single or binary stars are considered escapers if their energy is positive and their distance from the OC centre is at least twice the initial half-mass radius \citep{aarseth_esc, aarseth_book}. Taking into account the evolutionary scenarios discussed in Sect.~\ref{BHB_ev}, we derive for each model the fraction of escaping and retained BHBs. Table \ref{esc_ret} summarizes the results of this analysis. In model A, all the BHBs that become harder, i.e. shrink their semi-major axis, are retained in the OC, both when the binary has an initially circular orbit (A00) and when it has an initially eccentric orbit (A05). In model B only a small fraction of BHBs ($0.7$ \%) is ejected from the cluster, while a large fraction is retained. In particular, in model B05 the fraction of ejected BHBs ($2.7$ \%) is higher than in model B00. In model C the percentage of ejected BHBs is larger than in the previous cases. In particular, when the binary has an initially eccentric orbit (model C05) the fraction of escaping BHBs is about $10.5$ \%. Finally, in model D the majority ($\geq 85$ \%) of BHBs is retained in the cluster, even if in this case, contrary to the previous situations, circular orbits have a higher fraction of ejected BHBs.
After an ionisation of the BHB, one of the black holes typically stays in the cluster and forms a new binary with a star. These dynamically-formed binaries are usually short-lived, lasting at most some tens of Myr. Moreover, we notice that the newly single BHs are more likely to be expelled from the stellar system than retained, because of multiple scatterings with massive stars ($\gtrsim 10$ M$_\odot$). The time during which such massive stars are present is comparable to the time at which the BHs are expelled from the systems, which is generally short ($\lesssim 100$ Myr). As shown in Table \ref{esc_ret}, only in three of the models studied (A00, C00 and D00) are single BHs retained after the binary break-up. Note that the larger fraction of ejected BHBs ``belongs'' to the more massive clusters (C and D) in spite of their larger escape velocity. The time at which the bound BHB is ejected from the cluster varies among the models, with a BHB mean ejection time between $0.4$ and $1.2$ Gyr. We notice that low-density clusters (models A and B) show a BHB ejection time shorter than that of the more massive clusters (models C and D). \begin{table} \centering \caption{ Percentage of BHBs retained (ret) by the cluster or ejected (esc). The first column indicates the models. Columns 2 and 3 indicate the percentage of retained or ejected BHBs that become harder. Columns 4 and 5 refer to wider BHBs.
Columns 6 and 7 give the percentage of retained and ejected single black holes after the binary break-up.} \label{esc_ret} \begin{tabular}{@{}ccccccc@{}} \toprule \textbf{Model} & \multicolumn{2}{c}{\textbf{hard}} &\multicolumn{2}{c}{\textbf{wider}} & \multicolumn{2}{c}{\textbf{break}} \\ \midrule & ret & esc& ret & esc & ret & esc \\ & \% & \% & \% & \% & \% & \% \\ A00 & 89.1 & 0.0 & 7.9 & 0.0 & 0.7 & 2.1 \\ A05 & 97.1 & 0.0 & 2.1 & 0.0 & 0.0 & 0.7 \\ B00 & 91.8 & 0.7 & 2.7 & 0.0 & 0.0 & 4.8 \\ B05 & 91.3 & 2.7 & 2.0 & 0.0 & 0.0 & 4.0 \\ C00 & 88.6 & 4.9 & 0.0 & 0.0 & 0.7 & 5.6 \\ C05 & 86.5 & 9.9 & 0.0 & 0.0 & 0.0 & 3.5 \\ D00 & 85.5 & 8.6 & 0.0 & 0.0 & 0.7 & 5.0 \\ D05 & 90.0 & 7.2 & 0.0 & 0.0 & 0.0 & 2.8 \\ \bottomrule \end{tabular} \end{table} The pie charts in Fig. \ref{fig:pie} illustrate the probabilities of the different channels for two of the models studied, C and D (both configurations 00 and 05). Harder binaries are denoted with the letter ``$h$'', wider ones with ``$w$'', and broken-up binaries with ``$b$''. Each of the three scenarios is then split into two cases: BHB retained by the cluster (indicated with ``$ret$'') and BHB or BHs ejected from the system (indicated with ``$esc$''). From the pie charts it is clear that in the majority of cases the BHBs shrink their semi-major axis, becoming harder and remaining bound to the parent cluster. Models C00 and D00 also show a very small fraction of broken-up binaries (hence newly single BHs) which are retained by the clusters. Such a result, on the contrary, is not observed in models C05 and D05. A considerable number of hardened BHBs escaping from the cluster is observed in model C05. Furthermore, the percentage of newly single BHs escaped from the cluster ($b_{\rm esc}$) is higher in models C00 and D00. Finally, it is worth noting the fraction of coalescence events (black slices) in each model.
\begin{figure*} \includegraphics[width=1\textwidth]{images/pie_tot_bn.pdf} \caption{ The pie charts indicate the various evolutionary scenarios of the BHB discussed in Sect. \ref{BHB_ev} and Sect. \ref{ret_esc} for models C and D. The colours refer to: the fraction of retained harder BHBs ($h_{\rm ret}$), the fraction of ejected harder BHBs ($h_{\rm esc}$), the fraction of escaped wider BHBs ($w_{\rm esc}$), the fraction of retained wider BHBs ($w_{\rm ret}$), the fraction of broken binaries retained ($b_{\rm ret}$), the fraction of broken binaries ejected ($b_{\rm esc}$) and the fraction of mergers ($merge$). The striped slices refer to BHBs that broke up. The width of each slice indicates the percentages given in Table \ref{esc_ret} and Table \ref{table_merge}. } \label{fig:pie} \end{figure*} \subsection{External Tidal Field} For a Milky Way-like galaxy, the dynamical evolution of open clusters may be significantly influenced by the external tidal field \citep{sambaran1}. To investigate such an effect, we assume our clusters are embedded in a tidal field like that of the solar neighbourhood. The Galactic potential is modelled using a bulge of mass $M_{\rm B} = 1.5 \cdot 10^{10}$ M$_\odot$ \citep{fdisk} and a disc of mass $M_{\rm D} = 5 \cdot 10^{10}$ M$_\odot$. The geometry of the disc is modelled following the formulae of \citet{fbulge}, with scale parameters $a=5.0$ kpc and $b=0.25$ kpc. A logarithmic halo is included such that the circular velocity is 220 km/s at 8.5 kpc from the Galactic centre. Adopting this configuration, we ran a further sub-set of simulations for each of the models A, B, C and D. The external tidal field generally contributes to stripping stars from the cluster, accelerating its dissolution into the field. In our models the complete dissolution of the clusters occurs between 1.5 Gyr and 3 Gyr.
We notice that the significant reduction of the BHB semi-major axis (up to 1-2 orders of magnitude) occurs over a time which ranges between $\sim 50$ and $\sim 7 \cdot 10^{2}$ Myr. In this time-range the clusters are still bound and the tidal forces have not yet contributed to diluting the clusters, which would prevent the binary from hardening. The gravitational interactions that significantly shrink the BHB semi-major axis act over a short time-range, during which the cluster still contains between 60\% and 80\% of its stars in a bound state. The complete disruption of the clusters occurs when the gravitational interactions no longer play a dynamical role in the evolution of the black hole binary. It is worth mentioning that such results are typical of open clusters that lie at 8.5 kpc from the Galactic centre; clusters closer to the central regions would dissolve on a shorter time scale. \section{Sources of gravitational waves} \subsection{Relativistic binaries} \label{subsec.relbin} The code that we used for this work (\texttt{NBODY7}) identifies those compact objects that will eventually merge due to the emission of gravitational radiation. Note that the \texttt{NBODY7} code flags a binary as `merging' when at least one of the conditions described in \cite{aarsethnb7} is satisfied\footnote{https://www.ast.cam.ac.uk/~sverre/web/pages/pubs.htm} (see also \citet[Sect. 2.3.1]{sambaran3}). However, the code does not integrate these binaries in time down to the actual coalescence, because this would require reducing the time-step to values so small as to make the integration stall. In Table~\ref{table_merge} we give the percentage of BHB mergers identified by \texttt{NBODY7} in the simulations. The more massive the cluster, the larger the number of relativistic mergers found. We note that the initial value of the binary eccentricity is not necessarily correlated with the number of coalescences.
The majority of mergers occur in a time range between $5$ Myr and $1.5$ Gyr. Only two merger events take longer, between $\approx 1.5$ Gyr and $\approx 2$ Gyr. In our models, the clusters have not yet been disrupted when the BHB coalescences occur, still containing more than 80\% of the initial number of stars. \begin{table} \centering \caption{Percentage of BHB mergers found for each model studied.} \label{table_merge} \begin{tabular}{@{}cc@{}} \toprule \textbf{Model} & \% Mergers \\ \midrule A00 & 0.0 \\ A05 & 0.7 \\ \midrule B00 & 0.7 \\ B05 & 0.7 \\ \midrule C00 & 2.1 \\ C05 & 4.3 \\ \midrule D00 & 7.1 \\ D05 & 5.7 \\ \bottomrule \end{tabular} \label{table.mergers} \end{table} In Figs.~\ref{fig:parammergea} and \ref{fig:parammergeb} we show the evolution of the BHB semi-major axis and pericentre distance for a few representative cases of Table~\ref{table.mergers}, which initially were circular or eccentric, respectively. It is remarkable that the pericentre distances drop by 7--8 orders of magnitude with respect to the initial value. The eccentricities fluctuate significantly, episodically reaching values very close to unity. \begin{figure} \centering \includegraphics[width=0.50\textwidth]{images/eccmerge.pdf} \caption{The evolution of the eccentricity for two cases in which the BHBs merge, for models A05 and D00.} \label{fig:eccmer} \end{figure} Because of the relativistic recoil kick \citep{CampanelliEtAl2007,BakerEtAl2006,GonzalezEtAl2007,fragk17,frlgk18}, the product of the merger of the BHB might achieve very large velocities, large enough to escape the host cluster in all cases, given the very small cluster escape velocities. Fig.~\ref{fig:bhbesctgw} shows the distribution of the BHB semi-major axis and eccentricity in the last output before the gravitational wave regime drives the merger. Because these binaries have undergone many dynamical interactions with other stars, the eccentricities are very high, ranging from $0.99996$ to above $0.99999$.
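The quoted drop in pericentre follows directly from the combination of hardening and eccentricity growth, since $r_p = a(1-e)$. A minimal Python sketch (the shrink factor and final eccentricity below are illustrative values in the ranges quoted in the text, not taken from a specific simulation):

```python
import math

def pericentre(a, e):
    """Pericentre distance r_p = a * (1 - e) of a Keplerian orbit."""
    return a * (1.0 - e)

# Illustrative values: an initially circular BHB with a = 0.01 pc,
# hardened by ~3 orders of magnitude and driven to e ~ 0.99999.
r_p_initial = pericentre(0.01, 0.0)           # 0.01 pc
r_p_final = pericentre(0.01 * 1e-3, 0.99999)  # ~1e-10 pc

orders_dropped = math.log10(r_p_initial / r_p_final)
print(orders_dropped)  # ~8 orders of magnitude
```

A shrink of 2-4 orders of magnitude in $a$ combined with $1-e \sim 10^{-5}$ thus naturally reproduces the 7-8 orders of magnitude drop in $r_p$.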
Taking into account the expression for the GW emission time, $\mathcal{T}_{\rm gw}$ \citep{peters64}, \begin{equation} \mathcal{T}_{\rm gw}\,({\rm yr}) = 5.8 \times 10^{6} \, \frac{(1+q)^{2}}{q} \left(\frac{a}{10^{2}\,{\rm pc}}\right)^{4} \left(\frac{m_{1}+m_{2}}{10^{8}\,{\rm M}_{\odot}}\right)^{-3} (1-e^{2})^{7/2}, \label{peters} \end{equation} where $q$ is the mass ratio between the two BHs of masses $m_{1}$ and $m_{2}$\footnote{Note that the r.h.s. of Eq. \ref{peters} is invariant under the choice $q=m_1/m_2$ or $q=m_2/m_1$.}, we found that about 50\% of the mergers are mediated by a three-body encounter with a perturber star. Such three-body interactions are thus a fundamental ingredient for BHB coalescence in low-density star clusters, as already pointed out by \citet{sambaran3}. An example of such a mechanism is discussed in the next section (\ref{mikkola}). \begin{figure} \centering \includegraphics[width=0.50\textwidth]{images/sea_merg_semi_a.pdf} \includegraphics[width=0.50\textwidth]{images/sea_merg_peri_a.pdf} \caption{Evolution of the BHB semi-major axis (upper panel) and pericentre distance (lower panel) for three illustrative cases which initially were circular. } \label{fig:parammergea} \end{figure} \begin{figure} \centering \includegraphics[width=0.50\textwidth]{images/sea_merg_semi_b.pdf} \includegraphics[width=0.50\textwidth]{images/sea_merg_peri_b.pdf} \caption{Same as Fig.~\ref{fig:parammergea} but for BHBs which initially had an eccentric orbit.} \label{fig:parammergeb} \end{figure} \begin{figure} \resizebox{\hsize}{!} {\includegraphics[scale=1,clip]{images/newaemerged.pdf}} \caption { Distribution of the semi-major axis ($a$) and eccentricity ($e$) for all BHBs which merge in our simulations. The various symbols refer to the models as defined in Table~\ref{ictab}.
} \label{fig:bhbesctgw} \end{figure} \subsection{A detailed example of a merger event} \label{mikkola} As we said above, \texttt{NBODY7} identifies binary merger events in a distinct way when a close interaction with a third object occurs. However, it does not fully integrate those specific cases, because following the detailed binary evolution would practically make the code stall. So, in order to check accurately the process of BHB coalescence upon perturbation, we followed the evolution of one of the allegedly merging BHBs by means of the few-body integrator \texttt{ARGdf} \citep{megan17}. Based on the \texttt{ARCHAIN} code \citep{mikkola99}, \texttt{ARGdf} includes a treatment of the dynamical friction effect in the algorithmic regularization scheme, which models strong gravitational encounters at high precision, also in a post-Newtonian scheme with terms up to order $2.5$ (\citealt{mikkola08}; the first implementation in a direct-summation code is in \citealt{KupiEtAl06}). We chose, at random, one of our simulations of the D00 model to set the initial conditions for the high-precision evolution of a ``pre-merger'' BHB, considering its interaction with the closest 50 neighbours, a number that we checked is sufficient to give accurate predictions in this regard. \begin{figure} \resizebox{\hsize}{!} {\includegraphics[scale=1,clip]{images/plot652.pdf}} \caption { Trajectories of the BHs in our resimulation (model D00). The cyan circle and dashed line represent the perturbing star and its trajectory; the black holes are shown as blue and red circles and solid lines. The grey circles and lines indicate the stars of the simulated sub-cluster sample. } \label{3bd} \end{figure} \begin{figure} \resizebox{\hsize}{!} {\includegraphics[scale=1,clip]{images/ecc_mikk.pdf}} \caption { Evolution of the BHB eccentricity of Fig.~\ref{mkk} as a consequence of the three-body encounter.
Each jump in the eccentricity corresponds to a close passage of the third star by the BHB, as described in the text. } \label{mkke} \end{figure} \begin{figure} \resizebox{\hsize}{!} {\includegraphics[scale=1,clip]{images/ifig6202.pdf}} \caption { Trajectories of the BHs in our resimulation (model D00). The cyan circle and dashed line represent the perturbing star and its trajectory; the black holes are shown as blue and red circles and solid lines.} \label{mkk} \end{figure} This integration is a clear example of the relevance of dynamical interactions with other stars. Fig.~\ref{3bd} is a snapshot of the BHB evolution and of the formation of a triple system with a perturber star\footnote{An animation of the triple orbit and the eccentricity evolution is available online under the name triple\_argdf.avi}. The BHB shrinks by interacting with such a perturber, of mass $3.4\,M_\odot$, which is on a retrograde orbit with respect to the inner binary, with an inclination of $105\degree$, indicating an eccentric Kozai-Lidov mechanism \citep{kozai,lidov,naoz}. We also note a flyby star of mass $0.5\,M_\odot$ which interacts with the triple system (BHB \& perturber). In Fig.~\ref{mkke} we display the step-like increase of the BHB eccentricity, marked by the repeated interactions with the outer star: each time the perturber orbits around the BHB we observe a step-like increase of the eccentricity. On the contrary, the flyby encounter is not efficient enough to produce a significant perturbation of the eccentricity evolution. Fig.~\ref{mkk} shows a zoom on the evolution of the last BHB orbits before the coalescence event. The plot in the rectangle is a zoom on the final part of the BHB trajectory (at its right side), spanning a length scale of $\sim 10^{-7}$ pc. Therefore, in this particular case the triple build-up is the main ingredient that drives the BHB coalescence. A similar result was derived by \citet{sambaran3} for similar low-density star clusters.
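For reference, the characteristic timescale of the quadrupole-order Kozai-Lidov oscillations in such a triple (a standard order-of-magnitude estimate, see e.g. \citet{naoz}; this quantity is not computed explicitly in our analysis) is, up to a factor of order unity,
\begin{equation}
t_{\rm KL} \sim \frac{P_{\rm out}^{2}}{P_{\rm in}}\,\frac{m_{1}+m_{2}+m_{3}}{m_{3}}\left(1-e_{\rm out}^{2}\right)^{3/2},
\end{equation}
where $P_{\rm in}$ and $P_{\rm out}$ are the orbital periods of the inner binary (the BHB) and of the outer perturber, $m_{3}$ is the perturber mass and $e_{\rm out}$ the eccentricity of the outer orbit. This sets the scale of the secular eccentricity driving, while the individual jumps in Fig.~\ref{mkke} occur on the shorter outer orbital period $P_{\rm out}$.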
\subsection{Gravitational Waves} In Fig.~\ref{fig.30_30_Peters} we show the amplitude versus frequency of the gravitational waves emitted in the case described in the above subsection. Using the last orbital parameters of the binary from the final integration made with \texttt{ARGdf}, we evolve the last phase of the binary by means of Eq.~\ref{peters}, deriving a coalescence time $T_{\rm mrg} \cong 7\,\textrm{yr}$. The amplitude is estimated following the Keplerian-orbit approach of \cite{PM63} and the orbital evolution as in the work of \cite{peters64}. We have set the luminosity distance to that of the first source detected by LIGO \citep{abbott16a}, which corresponds to a redshift of about $z=0.11$. As described in the work of \cite{ChenAmaro-Seoane2017}, only circular sources are audible by LISA, which is ``deaf'' to eccentric binaries of stellar-mass black holes, since they emit their maximum power at frequencies farther away from the LISA band. Hence, this particular source only enters the Advanced LIGO detection band. \begin{figure} \includegraphics[width=1\columnwidth]{images/30_30_Peters.pdf} \caption{ Characteristic amplitude $h_c$ of the first most important harmonics for the model of Fig.~\ref{mkk} at a luminosity distance of $D=500~{\rm Mpc}$. We also pinpoint with dots three moments in the evolution, which correspond, respectively, to 5 min, 8 s and 1 s before the merger of the two black holes. } \label{fig.30_30_Peters} \end{figure} \subsection{Black holes inspiraling outside of the cluster} In our simulations, some BHBs undergo a strong interaction with a star and are kicked out of the cluster, becoming escapers as defined in Section~\ref{ret_esc}. In this case, the BHBs remain almost frozen in their relative configuration, without any further dynamical evolution of their orbital parameters of the kind described in Section~\ref{subsec.relbin}: the escaping BHB evolves only due to the emission of gravitational radiation.
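To make the use of Eq.~\ref{peters} concrete, the timescale can be evaluated directly; a minimal Python sketch (the masses and separation below are illustrative, matching the initial BHB of our principal models rather than any particular simulation output):

```python
def t_gw_yr(a_pc, m1_msun, m2_msun, e):
    """Peters (1964) coalescence time in yr, with the normalisation of
    Eq. (peters): T ~ 5.8e6 (1+q)^2/q (a / 100 pc)^4
    ((m1+m2) / 1e8 Msun)^-3 (1-e^2)^(7/2)."""
    q = m1_msun / m2_msun  # the result is invariant under q -> 1/q
    m_tot = m1_msun + m2_msun
    return (5.8e6 * (1.0 + q) ** 2 / q
            * (a_pc / 1e2) ** 4
            * (m_tot / 1e8) ** -3
            * (1.0 - e ** 2) ** 3.5)

# Initial 30 + 30 Msun BHB with a = 0.01 pc on a circular orbit:
# the coalescence time is of order 10 Gyr, comparable to a Hubble time ...
t_circ = t_gw_yr(0.01, 30.0, 30.0, 0.0)
# ... while at the eccentricities reached before merger (e > 0.9999)
# it collapses by many orders of magnitude.
t_ecc = t_gw_yr(0.01, 30.0, 30.0, 0.99996)
```

With these numbers $t_{\rm circ} \approx 10^{10}$ yr, while $t_{\rm ecc}$ is smaller by $\sim 14$ orders of magnitude, illustrating why the dynamically induced eccentricity growth is decisive for the mergers we observe.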
For all these escaping BHBs (47 cases over the whole set of our simulations), we estimate the timescale for coalescence using the Keplerian-orbit approach of \cite{peters64} and find that it always exceeds the Hubble time. The inspiral phase of these binaries falls in the sensitivity window of LISA. However, they evolve very slowly in frequency, because the semi-major axis is still large and the time to coalescence scales as $\propto a^4$. Over an observation time of 5 years, the source would have virtually not moved in frequency, and hence the SNR accumulated over that time is negligible. \subsection{Merger Rate} To estimate approximately the merger rate $\mathcal{R}_{\rm mrg}$, we first derive the mean number density of open clusters, $n_{\rm OC}$, over a volume $\Omega$ corresponding to redshift $z\leq 1$ as \begin{equation} {n_{\rm OC}}= \frac{{N_{\rm OC-MW}} \, {N_{\rm MW-\Omega}}}{\Omega}. \end{equation} \noindent In this equation, ${N_{\rm OC-MW}}$ is the number of OCs in Milky Way (MW)-like galaxies and ${N_{\rm MW-\Omega}}$ is the number of MW-like galaxies within ${z=1}$. We estimate the number of OCs in our Galaxy on the basis of the open-cluster mass function discussed in \cite{piskunov08,spz2010} for the mass range of OCs considered in our work (from $300$ M$_\odot$ to approximately $3000$ M$_\odot$). We take $N_{\rm MW-\Omega}=10^8$ as the number of MW-like galaxies out to redshift $z\sim 1$, as discussed in \cite{tal17}. We stress here that the estimated merger rate is an upper limit, since it assumes that each open cluster hosts a massive BHB similar to those in the clusters studied in our models.
Hence, the black hole binary merger rate can be estimated as \begin{equation} \mathcal{R}_{\rm mrg} = \frac{1}{N_{\rm s}} \sum_{k=1}^{N_{\rm s}} \frac{n_{\rm OC}}{t_{\rm k}} \approx 2\,\textrm{Gpc}^{-3}\,\textrm{yr}^{-1}, \label{rate} \end{equation} \noindent where $N_{\rm s}$ is the total number of $N$-body simulations performed in this work, and $t_{\rm k}$ is the time of each coalescence event as found in our simulations. This estimate is however derived under the most favourable conditions and represents the most optimistic merger rate expected from low-mass open clusters. Note that the BHB merger rate inferred from the first LIGO observation (GW150914) is in the range $2$-$600$ Gpc$^{-3}$ yr$^{-1}$ \citep{abbrate16}. The most up-to-date estimate of the merger rate from LIGO-Virgo events (after including GW170104) is $12$-$213$ Gpc$^{-3}$ yr$^{-1}$ \citep{abbott17}. Our BHB merger rate is consistent with those found in \cite{sambaran1,sambaran2,sambaran3} for BHB mergers in Young Massive Star Clusters (YMSCs). \cite{antoninirasio2016} found a merger rate ranging from $0.05$ to $1$ Gpc$^{-3}$ yr$^{-1}$ for possible progenitors of GW150914 in globular clusters at $z<0.3$. \cite{rodriguez16b} and \cite{rodriguez16} derived that, in the local universe, BHBs formed in Globular Clusters (GCs) will merge at a rate of $5$ Gpc$^{-3}$ yr$^{-1}$. A very similar result was derived by \citet{askar17} who, for BHBs originating in globular clusters, derived a rate of $5.4$ Gpc$^{-3}$ yr$^{-1}$. When the history of star clusters across cosmic time is included, \citet{frak18} showed that the rate in the local Universe is $\sim 10$ Gpc$^{-3}$ yr$^{-1}$, i.e. nearly twice the rate predicted for isolated clusters. In Fig.~\ref{fig:mrate} we show the estimated merger rate as a function of the initial number of cluster stars ($N$). The merger rates derived from our models A, B, C and D are well fitted by a linear relation.
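The averaging in Eq.~\ref{rate} can be sketched as follows; the cluster density and coalescence times below are hypothetical placeholders, not the actual values from our simulation set, and only the structure of the estimate is the point:

```python
def merger_rate(n_oc_per_gpc3, merger_times_yr, n_sims):
    """Schematic evaluation of Eq. (rate):
    R = (1/N_s) * sum_k n_OC / t_k, in Gpc^-3 yr^-1.
    Simulations without a coalescence contribute no term to the sum."""
    return sum(n_oc_per_gpc3 / t for t in merger_times_yr) / n_sims

# Placeholder inputs (hypothetical, for illustration only):
n_oc = 1.0e7                 # mean OC number density over the z <= 1 volume, Gpc^-3
times = [5e6, 1e8, 1.5e9]    # coalescence times of the merging runs, yr
rate = merger_rate(n_oc, times, n_sims=1150)
```

Only the merging runs enter the sum, so the rate scales with the merger fraction; with the actual $n_{\rm OC}$, coalescence times and $N_{\rm s}$ of our models this procedure yields the $\approx 2$ Gpc$^{-3}$ yr$^{-1}$ quoted above.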
An extension of our merger rate estimate to globular cluster-like systems ($N > 10^{5}$) gives a result in agreement with that found in \cite{park17}, and previously by \cite{bae14} and \cite{rodriguez16,rodriguez16b}. \begin{figure} \centering \includegraphics[width=0.50\textwidth]{images/ratemerg.pdf} \caption{The most optimistic merger rate ($\mathcal{R}_{\rm mrg}$, in Gpc$^{-3}$ yr$^{-1}$) obtained for each model studied in this work, as a function of the total initial number of cluster stars $N$. The merger rates are well fitted by a linear relation, $\mathcal{R}_{\rm mrg}=aN+b$, with $a= 6.2\times 10^{-5}$ and $b= -0.04$.} \label{fig:mrate} \end{figure} Although BHB mergers originating in open cluster-like systems might be less numerous than those produced in massive star clusters, they would add a comparable amount to the BHB merger rate in the Universe because of their larger abundance \citep{sambaran2,sambaran3}. \section{Tidal disruption events and BHB ejection} \label{tde} All the numerical models we have considered so far have solar metallicity, $Z = 0.02$, and are based on the standard stellar evolution recipes \citep{hurley2000, hurley2002}. Moreover, they all consider an equal-mass BHB sitting in the host cluster centre with an initial total mass $M_{{\rm BHB}} = 60$ M$_\odot$. In order to explore the role played by the metallicity, the stellar evolution recipes adopted, and the BHB mass and mass ratio, we present in this section an additional sample consisting of 421 simulations (the supplementary models), gathered in 5 different groups. In all these 5 supplementary groups, the OC initial conditions are the same as in the D00 principal models. This implies $N_{\rm cl} = 4096$ and, for the BHB initial orbit, $a_{\rm BHB} = 0.01$ pc and $e_{\rm BHB} = 0$, unless specified otherwise. We label each group with the letter M and a number between 1 and 5. In model M1, we model the OC assuming an initial metallicity $Z=0.0004$, typical of an old stellar population.
The BHB initial conditions are the same as in model D00. Stellar evolution is treated taking advantage of a new implementation of the \texttt{SSE} and \texttt{BSE} tools, which includes metal-dependent stellar wind formulae and the improvements described in \cite{belczynski2010}. In the following, we identify the updated stellar evolution treatment used for model M1 as BSEB, while in all the other cases we label it BSE. Note that these updates allow the formation of BHs with natal masses above $30$ M$_\odot$, which is not possible in the standard \texttt{SSE} implementation \citep{hurley2000}. Moreover, it must be stressed that the updates affect only metallicities below the solar value. Model M2 is similar to model M1 in terms of initial metallicity and BHB initial conditions, but we used the standard \texttt{SSE} and \texttt{BSE} codes to model stellar evolution. Therefore, the underlying difference between the two is that in model M2 the masses of the compact remnants are systematically lower. This, in turn, implies that the number of perturbers that can have a potentially disruptive effect on the BHB evolution is reduced in model M2. In model M3 we adopt $Z = 0.02$, i.e. the solar value, and we focus on a BHB with component masses $M_1 = 13$ M$_\odot$ and $M_2 = 7$ M$_\odot$. This set has a twofold purpose. On the one hand, it allows us to investigate the evolution of a BHB with mass ratio lower than 1. On the other hand, since in this case the BHB total mass is comparable to the maximum mass of the compact remnants allowed by the stellar evolution recipes, gravitational encounters should be more important in the BHB evolution. To further investigate the role of the mass ratio, M4 models are similar to M3, but in this case the BHB mass ratio is smaller, namely $q=0.23$, i.e. the component masses are $M_1 = 30$ M$_\odot$ and $M_2 = 7$ M$_\odot$.
In all the principal and supplementary models discussed above, we assume that the BHB is initially at the centre of the OC. In order to investigate whether such a system can form dynamically, i.e. via strong encounters, in model M5 we set two BHs, with masses $M_1 = M_2 = 30$ M$_\odot$, initially unbound. In this case we set $Z=0.02$, in order to compare with the D00 principal models. The results of these runs are summarized in Table \ref{newsim}. \begin{table} \centering \caption{Supplementary models. The columns refer to: model name, BHB individual masses and mass ratio, metallicity, stellar evolution recipes used, and number of simulations performed. The cluster is always simulated with 4096 stars.} \begin{tabular}{ccccccc} \hline Model & $M_1$ & $M_2$ & $q$ & $Z$ & SE & $N_{\rm mod}$ \\ & M$_\odot$ & M$_\odot$ & & Z$_\odot$ & &\\ \hline M1 & 30 & 30 & 1 & $10^{-4}$ & BSEB & 109\\ M2 & 30 & 30 & 1 & $10^{-4}$ & BSE & 131\\ M3 & 13 & 7 & 0.54 & $1$ & BSE & 100\\ M4 & 30 & 7 & 0.23 & $1$ & BSE & 42 \\ M5 & 30 & 30 & 1 & $1$ & BSE & 89 \\ \hline \end{tabular} \label{newsim} \end{table} Since we are interested only in the evolution of the initial BHB, we stop the simulations when at least one of the BHB initial components is ejected from the parent OC. When metallicity-dependent stellar winds are taken into account (model M1), the reduced mass loss causes the formation of heavier compact remnants, with masses comparable to those of the BHB components. Since the number of BHs is $\sim 10^{-3}N_{\rm cl}$, according to a Kroupa IMF, in models M1 at least 4-5 heavy BHs can form, interact and possibly disrupt the initial BHB. This is confirmed by the simulation results: we find one of the BHB components kicked out in $P_{\rm esc} = 34.9\%$ of the cases investigated. After the component ejection, the remaining BH can form a new binary with one of the perturbers, possibly a new BHB.
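The expected number of heavy remnants quoted above can be checked directly against a Kroupa initial mass function. A minimal sketch (assuming, for illustration, a stellar mass range of $0.08$-$100$ M$_\odot$ and that stars above $\sim 20$ M$_\odot$ leave BH remnants; both thresholds are illustrative assumptions, not values used by the code):

```python
def kroupa_number_integral(m_lo, m_hi):
    """Integral of the Kroupa (2001) IMF by number, dN/dm ~ m^-1.3 for
    0.08 < m < 0.5 Msun and ~ m^-2.3 above 0.5 Msun (continuous at 0.5),
    between m_lo and m_hi (solar masses)."""
    def piece(a, b, alpha, k):
        # integral of k * m^-alpha dm between a and b
        return k * (b ** (1.0 - alpha) - a ** (1.0 - alpha)) / (1.0 - alpha)
    k2 = 0.5  # continuity at 0.5 Msun: 0.5^-1.3 = k2 * 0.5^-2.3
    total = 0.0
    if m_lo < 0.5:
        total += piece(m_lo, min(m_hi, 0.5), 1.3, 1.0)
    if m_hi > 0.5:
        total += piece(max(m_lo, 0.5), m_hi, 2.3, k2)
    return total

# Fraction of stars above 20 Msun in the 0.08-100 Msun range:
frac_bh = kroupa_number_integral(20.0, 100.0) / kroupa_number_integral(0.08, 100.0)
n_bh = frac_bh * 4096  # expected BH progenitors for N_cl = 4096
```

This gives a fraction of $\sim 1.7\times 10^{-3}$, i.e. $\sim 7$ BH progenitors for $N_{\rm cl}=4096$, consistent with the $\sim 10^{-3}N_{\rm cl}$ scaling and the 4-5 heavy BHs quoted above.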
The ``ejection probability'' in models M2 is only slightly lower than in M1, $P_{\rm esc} = 33.6\%$, implying that the heavier perturbers forming in models M1 only marginally affect the BHB evolution. This is likely due to two factors: (i) their number is relatively low (4-5); (ii) the mass segregation process in such a low-density, relatively light stellar system is slower than the time over which stellar encounters determine the BHB evolution. The latter point implies that the BHB evolution is mostly driven by the cumulative effect of multiple stellar encounters, rather than by a few interactions with a heavy perturber. In model M3, characterized by a lighter BHB and solar metallicity, the BHB total mass, $20$ M$_\odot$, falls at the high end of the BH mass spectrum. This implies a larger number of massive perturbers with respect to the standard case discussed in the previous sections, and provides insight into the fate of light BHBs in OCs. Due to the high efficiency of strong interactions, the BHB is unbound in $f_{\rm esc} = 32\%$ of the cases, and in no case does the BHB undergo coalescence. Model M5 is characterized by a similar ejection probability, which instead rises up to $40.5\%$ in model M4. This is likely due to the relatively low mass of the secondary component. Indeed, as shown through scattering experiments, BHB-BH interactions seem to naturally lead to a final state in which the resulting BHB has a larger mass ratio \citep[see for instance][]{ASKL18}. In a few cases, we found that the BHB disruption is mediated by a star, which binds to one of the two former BHB components. The newly formed BH-star pair is characterized by a high eccentricity ($e>0.9$) and a pericentre sufficiently small to rip the star apart and give rise to a tidal disruption event (TDE). In the current \texttt{Nbody6} implementation, only $10\%$ of the star's mass is accreted onto the BH, while this percentage could be as high as $50\%$.
The fraction of models in which a TDE takes place spans one order of magnitude, being $f_{\rm TDE} \cong 0.03-0.3$, with the maximum achieved in models M4 and the minimum in M1. Note that in model M5 we did not find any TDE (see Table \ref{newsim}); in this case, however, the two BHs initially move outside the OC inner regions. In our models, TDEs involve either main-sequence (MS) stars, stars in the core He-burning phase (HB), or stars in the early asymptotic giant branch (AGB) phase. In model M3 ($f_{\rm TDE} = 0.14$), TDEs involve MS ($29\%$), early AGB ($57\%$) and AGB ($14\%$) stars. In model M4, where the BHB has a low mass ratio ($q=7/30$), TDEs are boosted, since in this case it is easier to replace the lighter BH. Indeed, a component swap occurs in $28.5\%$ of the cases, with the new companion star being swallowed by the heavier BH. Our findings suggest that X-ray or UV emission from OCs can be a signature of the presence of BHs with masses as high as $20-30$ M$_\odot$. \begin{table} \centering \caption{Summary of the results from the supplementary models.
Columns refer to: model name, percentage of cases in which at least one of the BHB components is ejected, percentage of cases in which a star is swallowed by one of the two BHs, and percentage of cases in which the BHB merges.} \begin{tabular}{cccc} \hline Model & $P_{\rm esc}$ & $P_{\rm TDE}$ & $P_{\rm mer}$ \\ & $\%$ & $\%$ & $\%$ \\ \hline M1 & 34.9& 2.8 & 0.0\\ M2 & 33.6& 6.9 & 3.8\\ M3 & 32.0& 14.0& 0.0\\ M4 & 40.5& 28.5& 0.0\\ M5 & 32.6& 0.0 & 0.0\\ \hline \end{tabular} \label{newsim} \end{table} Using our results, we can estimate the TDE rate for Milky Way-like galaxies as \begin{equation} \Gamma_{\rm TDE} = \frac{f_{\rm TDE}N_{\rm OC}N_{\rm MW}}{\Omega T} = 0.3-3.07\times 10^{-6}\,{\rm yr}^{-1}, \end{equation} where $f_{\rm TDE}$ is the fraction of TDEs inferred from the simulations, while we adopt the values for $N_{\rm OC}$, $N_{\rm MW}$ and $\Omega$ discussed in the previous section; moreover, we assume $T = 3$ Gyr, i.e. the simulated time. Our estimates agree nicely with the similar TDE-rate calculation provided by \citet{perets16}, and result in values $\sim 1$ order of magnitude lower than those calculated for TDEs occurring around supermassive black holes \citep{fraglei18,stonemetzger,stone17,stone16b,megan1}. We apply the same analysis to our principal models and find a TDE rate for solar-metallicity OCs of $\Gamma_{\rm TDE} = 0.3-3.07\times 10^{-6}\,{\rm yr}^{-1}$ for MW-like galaxies in the local Universe. The BHB coalescence occurs in a few cases, $f_{\rm mer} \cong 0.004$, and only in models M2, where the metallicity-dependent mass loss is disabled. This suggests that there exist three different regimes, depending on the maximum perturber mass $M_p$.
If (1) $M_{\rm BHB} \gg M_p$, the BHB is much more massive than the field stars and stellar encounters poorly affect its evolution; if (2) $M_{\rm BHB} \gtrsim M_p$, a few perturbers have masses comparable to the BHB and can efficiently drive it toward coalescence, for instance by increasing the BHB eccentricity or considerably shrinking the orbit; in case (3), $M_{\rm BHB} \lesssim M_p$, there is at least one perturber with a mass similar to, or even larger than, the BHB. In this regime the BHB-perturber interactions cause either the BHB disruption, or the formation of a new BHB with the perturber replacing the lighter BHB component. Note that we cannot exclude that a BHB merges in other models, since we stop the computation when the original BHB gets disrupted. Hence, we can infer a lower limit to the merger rate for metal-poor OCs as follows: \begin{equation} \mathcal{R}_{\rm mrg} = \frac{f_{\rm mer}N_{\rm MW} N_{\rm OC}}{\Omega T} \simeq 0.26\,{\rm yr}^{-1}{\rm Gpc}^{-3}. \end{equation} These models highlight the importance of stellar evolution in our calculations, since stronger stellar winds lead to smaller remnants, reducing the number of objects massive enough to cause the BHB disruption. This leads to a higher probability for the BHB to shrink through repeated interactions with smaller objects. As described above, in model M5 the two BHs are initially unbound, and their initial positions and velocities are drawn coherently from the OC distribution function. In this situation, the fraction of cases in which at least one of the BHs is ejected from the cluster is similar to that of the other models ($f_{\rm esc}\sim 32.6\%$), but in none of the models do the two BHs bind together. This is due to the low efficiency of dynamical friction in the OC, which prevents the two BHs from decaying into the innermost potential well. TDEs are also suppressed, due to the low number of strong encounters between the BHs and cluster stars, a consequence of the low density of the surrounding environment.
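The dependence of the TDE and merger rates on the measured fractions can be sketched as follows; the cluster and galaxy numbers below are hypothetical placeholders (the actual $N_{\rm OC}$, $N_{\rm MW}$ and $\Omega$ are those discussed in the previous section), and the point is only the linear scaling with the simulated fraction:

```python
def event_rate(fraction, n_oc, n_mw, omega_gpc3, t_yr):
    """Schematic evaluation of Gamma = f * N_OC * N_MW / (Omega * T),
    the structure shared by our TDE-rate and merger-rate estimates."""
    return fraction * n_oc * n_mw / (omega_gpc3 * t_yr)

# Placeholder inputs (hypothetical, for illustration only):
base = dict(n_oc=100.0, n_mw=1.0e8, omega_gpc3=150.0, t_yr=3.0e9)
low = event_rate(0.03, **base)   # minimum f_TDE across our models
high = event_rate(0.30, **base)  # maximum f_TDE across our models
```

Since the rate is linear in the simulated fraction, the one-order-of-magnitude spread in $f_{\rm TDE}$ maps directly onto the one-order-of-magnitude spread quoted for $\Gamma_{\rm TDE}$.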
To conclude, our supplementary models confirm that the possibility for a BHB to coalesce in an OC depends strongly on the environment in which the BHB formed and on its total mass and mass ratio. In metal-poor OCs, (metal-dependent) stellar winds drive the formation of a sizeable number of massive perturbers that can efficiently disrupt the BHB, thus reducing the coalescence probability. Coalescence is strongly suppressed also in the case of low mass ratios ($q\sim 0.2$) or relatively light BHBs ($M_1+M_2 \sim 20$ M$_\odot$). One of the most interesting outcomes of the models presented in this section is the possibility of using the OC TDE rate as a proxy to infer the presence of a massive BH or BHB around the OC centre. \section{Conclusions} In this paper we address the evolution of an equal-mass, non-spinning, stellar BHB with total mass $60$ M$_{\odot}$ inhabiting the centre of a small/intermediate star cluster (open-cluster like, OC), using the direct $N$-body code \texttt{NBODY7} \citep{aarsethnb7}. In order to quantify the effect of repeated stellar encounters on the BHB evolution, we vary the OC total mass and the BHB orbital properties, providing a total of $\sim1150$ simulations, which we refer to as {\it principal} models. For the sake of comparison, we also investigate the role played by the BHB total mass, the stellar evolution recipes adopted and the OC metallicity; these can be considered as {\it supplementary} models. The total simulation sample hence consists of $\sim 1500$ different OC models, with masses in the range $300-3000$ M$_\odot$. In $\sim 95\%$ of all the principal simulations performed, the BHB hardens due to repeated scatterings with flyby stars, while its eccentricity increases significantly. This process takes place on a relatively short time-scale, $\sim 1$ Gyr. In $\sim 1.2\%$ of the principal simulations, instead, the perturbations induced by massive stars that occasionally approach the BHB make it wider.
In the remaining $\sim 4.8\%$ of cases, the interactions with OC stars are strong enough to break up the BHB. When the BHB hardens, its semi-major axis decreases by 2 to 4 orders of magnitude, thus decreasing the merger time-scale by up to 16 orders of magnitude in the best case. Hardened BHBs are retained within the parent OC with a probability of $95\%$, while those becoming wider are all retained. In the case of BHB break-up, the two BHs tend to form short-lived binary systems with other OC stars, and eventually at least one of the two BHs is ejected from the parent cluster. In $\sim 3\%$ of the models, the star-BHB interactions are sufficiently effective to drive the BHB to coalescence within a Hubble time. We find that a crucial ingredient for the BHB merger is the interaction with a perturbing star, which considerably shortens the merger time; such dynamical perturbers enhance the number of GW sources by as much as $50\%$. The merger takes place in a time ranging from $5$ Myr to $2.9$ Gyr. In a few cases, the merging binaries emit GWs shifting from the $10^{-3}$ Hz to the $10$ Hz frequency band. This suggests that merging BHBs in OCs can potentially be seen both by LISA, $\sim 200$ yr before the merger, and by LIGO, during the last phase preceding the merger. Extrapolating our results to the typical population of OCs in MW-like galaxies in the local Universe, we find that the most optimistic merger rate for BHB mergers in low-mass stellar systems is $\mathcal{R}_{\rm mrg}\sim 2$ yr$^{-1}$ Gpc$^{-3}$, a value compatible with the merger rate expected for galactic nuclei, but smaller than the merger rate inferred for globular and young massive clusters. According to our supplementary models, in low-metallicity environments the BHB hardening is suppressed, due to the presence of a large number of high-mass perturbers that can efficiently drive the BHB disruption.
In this regard, different stellar evolution recipes may significantly affect the results, since they regulate the maximum mass of compact remnants. Assuming a smaller BHB and a solar metallicity for the cluster stars leads to similar results, since, again, the fraction of perturbers sufficiently massive to drive the BHB disruption is much larger. In none of the cases in which the BHB components are initially unbound does a BHB form via dynamical processes. This is due to the low efficiency of dynamical friction in the low-density OC environment, which is unable to drive the orbital segregation and pairing of the BHs. Hence, binaries like the ones considered in this paper should be primordial. In a noticeable fraction of the supplementary models, we found that the BHB breaks up and one of the BHs forms a very eccentric binary with an OC star, typically a main sequence or an AGB star. These binaries are usually short-lived systems and result in a tidal disruption event, with part of the stellar debris being swallowed by the BH. Our supplementary models suggest that TDEs in OCs occur at a rate $\Gamma_{\rm TDE} = 3.08\times 10^{-6}$ yr$^{-1}$ per MW-like galaxy in the local Universe. \section*{Acknowledgements} SR acknowledges Sapienza, Universit\`a di Roma, which funded the research project ``Black holes and Star clusters over mass and spatial scale'' via the grant AR11715C7F89F177. SR is thankful to Sverre Aarseth of the Institute of Astronomy, Cambridge, for his helpful comments and suggestions during the development of this work. MAS acknowledges the Sonderforschungsbereich SFB 881 ``The Milky Way System'' (subproject Z2) of the German Research Foundation (DFG) for the financial support provided. PAS acknowledges support from the Ram{\'o}n y Cajal Programme of the Ministry of Economy, Industry and Competitiveness of Spain, as well as the COST Action GWverse CA16104.
GF is supported by the Foreign Postdoctoral Fellowship Program of the Israel Academy of Sciences and Humanities. GF also acknowledges support from an Arskin postdoctoral fellowship and Lady Davis Fellowship Trust at the Hebrew University of Jerusalem. \bibliographystyle{mnras}
\section{Introduction} We study output trajectory tracking for a class of nonlinear reaction diffusion equations such that a prescribed performance of the tracking error is achieved. To this end, we utilize the method of funnel control, which was developed in~\cite{IlchRyan02b}, see also the survey~\cite{IlchRyan08}. The funnel controller is a model-free output-error feedback of high-gain type. Therefore, it is inherently robust and of striking simplicity. The funnel controller has been successfully applied e.g.\ in temperature control of chemical reactor models~\cite{IlchTren04}, control of industrial servo-systems~\cite{Hack17} and underactuated multibody systems~\cite{BergOtto19}, speed control of wind turbine systems~\cite{Hack14,Hack15b,Hack17}, current control for synchronous machines~\cite{Hack15a,Hack17}, DC-link power flow control~\cite{SenfPaug14}, voltage and current control of electrical circuits~\cite{BergReis14a}, oxygenation control during artificial ventilation therapy~\cite{PompAlfo14}, control of peak inspiratory pressure~\cite{PompWeye15} and adaptive cruise control~\cite{BergRaue18}. A funnel controller for a large class of systems described by functional differential equations with arbitrary (well-defined) relative degree has been developed in~\cite{BergHoan18}. It is shown in~\cite{BergPuch19b} that this abstract class indeed allows for fairly general infinite-dimensional systems, where the internal dynamics are modeled by a partial differential equation (PDE). In particular, it was shown in~\cite{BergPuch19} that the linearized model of a moving water tank, where sloshing effects appear, belongs to the aforementioned system class. On the other hand, not even every linear, infinite-dimensional system has a well-defined relative degree, in which case the results of~\cite{BergHoan18,IlchRyan02b} cannot be applied.
Instead, the feasibility of funnel control has to be investigated directly for the (nonlinear) closed-loop system, see~\cite{ReisSeli15b} for a boundary controlled heat equation and~\cite{PuchReis19} for a general class of boundary control systems. The nonlinear reaction diffusion system that we consider in the present paper is known as the monodomain model and represents defibrillation processes of the human heart~\cite{Tung78}. The monodomain equations are a reasonable simplification of the well-accepted bidomain equations, which arise in cardiac electrophysiology~\cite{SundLine07}. In the monodomain model the dynamics are governed by a parabolic reaction diffusion equation which is coupled with a linear ordinary differential equation that models the ionic current. It is discussed in~\cite{KuniNagaWagn11} that, under certain initial conditions, reentry phenomena and spiral waves may occur. From a medical point of view, these situations can be interpreted as fibrillation processes of the heart that should be terminated by an external control, for instance by applying an external stimulus to the heart tissue, see~\cite{NagaKuniPlan13}. The present paper is organized as follows: In Section~\ref{sec:mono_main} we introduce the mathematical framework, which strongly relies on preliminaries on Neumann elliptic operators. The control objective is presented in Section~\ref{sec:mono_controbj}, where we also state the main result on the feasibility of the proposed controller design in Theorem~\ref{thm:mono_funnel}. The proof of this result is given in Section~\ref{sec:mono_proof_mt} and it uses several auxiliary results derived in Appendices~\ref{sec:mono_prep_proof} and~\ref{sec:mono_prep_proof2}. We illustrate our result by a simulation in Section~\ref{sec:numerics}. \textbf{Nomenclature}.
The set of bounded operators from $X$ to $Y$ is denoted by $\mathcal{L}(X,Y)$, $X'$ stands for the dual of a~Banach space $X$, and $B'$ is the dual of an operator $B$.\\ For a bounded and measurable set $\Omega\subset{\mathbb{R}}^d$, $p\in[1,\infty]$ and $k\in{\mathbb{N}}_0$, $W^{k,p}(\Omega;{\mathbb{R}}^n)$ denotes the Sobolev space of equivalence classes of $p$-integrable and $k$-times weakly differentiable functions $f:\Omega\to{\mathbb{R}}^n$, $W^{k,p}(\Omega;{\mathbb{R}}^n)\cong (W^{k,p}(\Omega))^n$, and the Lebesgue space of equivalence classes of $p$-integrable functions is $L^p(\Omega)=W^{0,p}(\Omega)$. For $r\in(0,1)$ we further set \[ W^{r,p}(\Omega) := \setdef{f\in L^p(\Omega)}{ \left( (x,y)\mapsto \frac{|f(x)-f(y)|}{|x-y|^{d/p+r}}\right) \in L^p(\Omega\times\Omega)}. \] For a domain $\Omega$ with smooth boundary, $W^{k,p}(\partial\Omega)$ denotes the Sobolev space at the boundary.\\ We identify functions with their restrictions, that is, for instance, if $f\in L^p(\Omega)$ and $\Omega_0\subset \Omega$, then the restriction $f|_{\Omega_0}\in L^p(\Omega_0)$ is again denoted by~$f$. For an interval $J\subset{\mathbb{R}}$, a Banach space $X$ and $p\in[1,\infty]$, we denote by $L^p(J;X)$ the vector space of equivalence classes of strongly measurable functions $f:J\to X$ such that $\|f(\cdot)\|_X\in L^p(J)$. Note that if $J=(a,b)$ for $a,b\in{\mathbb{R}}$, the spaces $L^p((a,b);X)$, $L^p([a,b];X)$, $L^p([a,b);X)$ and $L^p((a,b];X)$ coincide, since the points at the boundary have measure zero. We will simply write $L^p(a,b;X)$, also for the case $a=-\infty$ or $b=\infty$. We refer to \cite{Adam75} for further details on Sobolev and Lebesgue spaces.\\ In the following, let $J\subset{\mathbb{R}}$ be an interval, $X$ be a Banach space and $k\in{\mathbb{N}}_0$. Then $C^k(J;X)$ is defined as the space of $k$-times continuously differentiable functions $f:J\to X$.
The space of $k$-times continuously differentiable functions which, together with their first $k$ derivatives, are bounded is denoted by $BC^k(J;X)$, and it is a Banach space endowed with the usual supremum norm. The space of bounded and uniformly continuous functions will be denoted by $BUC(J;X)$. The Banach space of H\"older continuous functions $C^{0,r}(J;X)$ with $r\in(0,1)$ is given by \begin{align*} C^{0,r}(J;X)&:=\setdef{f\in BC(J;X)}{[f]_{r}:=\sup_{t,s\in J,s<t}\frac{\|f(t)-f(s)\|}{(t-s)^r}<\infty},\\ \|f\|_r&:=\|f\|_\infty+[f]_{r}, \end{align*} see \cite[Chap.~0]{Luna95}. We note that for all $0<r<q<1$ we have that \[ C^{0,q}(J;X) \subseteq C^{0,r}(J;X) \subseteq BUC(J;X). \] For $p\in[1,\infty]$, the symbol $W^{1,p}(J;X)$ stands for the Sobolev space of $X$-valued equivalence classes of weakly differentiable and $p$-integrable functions $f:J\to X$ with $p$-integrable weak derivative, i.e., $f,\dot{f}\in L^p(J;X)$. Thereby, integration (and thus weak differentiation) has to be understood in the Bochner sense, see~\cite[Sec.~5.9.2]{Evan10}. The spaces $L^p_{\rm loc}(J;X)$ and $W^{1,p}_{\rm loc}(J;X)$ consist of all $f$ whose restriction to any compact interval $K\subset J$ are in $L^p(K;X)$ or $W^{1,p}(K;X)$, respectively. \section{The FitzHugh-Nagumo model} \label{sec:mono_main} Throughout this paper we will frequently use the following assumption. For $d\in{\mathbb{N}}$ we denote the scalar product in $L^2(\Omega;{\mathbb{R}}^d)$ by $\scpr{\cdot}{\cdot}$ and the norm in $L^2(\Omega)$ by $\|\cdot\|$. \begin{Ass}\label{Ass1} Let $d\le 3$ and $\Omega\subset {\mathbb{R}}^d$ be a bounded domain with Lipschitz boundary $\partial\Omega$.
Further, let $D\in L^\infty(\Omega;{\mathbb{R}}^{d\times d})$ be symmetric-valued and satisfy the \emph{ellipticity condition} \begin{equation} \exists\,\delta>0:\ \ \text{for a.e.}\,\zeta\in\Omega\ \forall\, \xi\in{\mathbb{R}}^d:\ \xi^\top D(\zeta) \xi = \sum_{i,j=1}^d D_{ij}(\zeta)\xi_i\xi_j\geq\delta \|\xi\|_{{\mathbb{R}}^d}^2.\label{eq:ellcond} \end{equation} \end{Ass} To formulate the model of interest, we consider the sesquilinear form \begin{equation} \mathfrak{a}:W^{1,2}(\Omega)\times W^{1,2}(\Omega)\to{\mathbb{R}},\ (z_1,z_2)\mapsto\scpr{\nabla z_1}{D\nabla z_2}.\label{eq:sesq} \end{equation} We can associate a linear operator with $\mathfrak{a}$. \begin{Prop}\label{prop:Aop} Let Assumption~\ref{Ass1} hold. Then there exists exactly one operator $\mathcal{A}:\mathcal{D}(\mathcal{A})\subset L^2(\Omega)\to L^2(\Omega)$ with \[ \mathcal{D}(\mathcal{A})=\setdef{\!\!z_2\in W^{1,2}(\Omega)\!\!}{\exists\, y_2\in L^2(\Omega)\ \forall\, z_1\in W^{1,2}(\Omega):\,\mathfrak{a}(z_1,z_2)=-\scpr{z_1}{y_2}\!\!}, \] and \[ \forall\, z_1\in W^{1,2}(\Omega)\ \forall\, z_2\in \mathcal{D}(\mathcal{A}):\ \mathfrak{a}(z_1,z_2)=-\scpr{z_1}{\mathcal{A} z_2}. \] We call $\mathcal{A}$ the {\em Neumann elliptic operator on $\Omega$ associated to $D$}. The operator $\mathcal{A}$ is closed, self-adjoint, and $\mathcal{D}(\mathcal{A})$ is dense in $W^{1,2}(\Omega)$. \end{Prop} \begin{proof} Existence, uniqueness and closedness of $\mathcal{A}$ as well as the density of $\mathcal{D}(\mathcal{A})$ in $W^{1,2}(\Omega)$ follow from Kato's First Representation Theorem~\cite[Sec.~VI.2, Thm~2.1]{Kato80}, whereas self-adjointness is an immediate consequence of the property $\mathfrak{a}(z_1,z_2)={\mathfrak{a}(z_2,z_1)}$ for all $z_1,z_2\in W^{1,2}(\Omega)$. \end{proof} Note that the operator $\mathcal{A}$ in Proposition~\ref{prop:Aop} is well-defined, independent of any further smoothness requirements on $\partial\Omega$.
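As a sanity check, the defining relation $\mathfrak{a}(z_1,z_2)=-\scpr{z_1}{\mathcal{A} z_2}$ and the self-adjointness of $\mathcal{A}$ can be observed in a finite-difference discretization. The following sketch builds the 1-D Neumann operator $z\mapsto (Dz')'$ on $(0,1)$ with constant $D$; the grid size, the value of $D$ and the test functions are illustrative choices, not taken from the paper.

```python
import numpy as np

# 1-D Neumann elliptic operator A z = (D z')' on (0, 1), constant D,
# discretized as A = -(D/h^2) B^T B with the forward-difference matrix B.
# The end rows of A encode the zero-flux (Neumann) boundary conditions.
N, D = 200, 0.7
h = 1.0 / N
B = np.diff(np.eye(N + 1), axis=0)          # (B z)_i = z_{i+1} - z_i
A = -(D / h**2) * (B.T @ B)

x = np.linspace(0.0, 1.0, N + 1)
z1, z2 = np.cos(np.pi * x), x**2 * (3 - 2 * x)

# discrete form a(z1, z2) = <z1', D z2'> versus the pairing -<z1, A z2>
form = D * (B @ z1 / h) @ (B @ z2 / h) * h
pair = -(z1 @ (A @ z2)) * h
assert np.allclose(form, pair)              # a(z1, z2) = -<z1, A z2>
assert np.allclose(A, A.T)                  # self-adjointness
assert np.allclose(A @ np.ones(N + 1), 0)   # constants lie in the kernel
```

The identity holds exactly here because the discrete form is $\tfrac{D}{h}\sum_i \Delta z_1^i\,\Delta z_2^i$, which is the quadratic form of $-A$ up to the quadrature weight $h$.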
In particular, the classical Neumann boundary trace, i.e., the derivative of a function in the direction of the outward normal unit vector $\nu:\partial\Omega\to{\mathbb{R}}^d$, does not need to exist. However, if $\partial\Omega$ and the coefficient matrix $D$ are sufficiently smooth, then \[ \mathcal{A} z={\rm div} D\nabla z,\quad z\in\mathcal{D}(\mathcal{A})=\setdef{z\in W^{2,2}(\Omega) }{ (\nu^\top\cdot D\nabla z)|_{\partial\Omega}=0}, \] see \cite[Thm.~2.2.2.5]{Gris85}. This justifies calling $\mathcal{A}$ a Neumann elliptic operator. We collect some further properties of such operators in Appendix~\ref{sec:neum_lapl}. Now we are in a position to introduce the model for the interaction of the electric current in a cell, namely \begin{equation}\label{eq:FHN_model} \begin{aligned} \dot v(t)&=\mathcal{A} v(t)+p_3(v(t))-u(t)+I_{s,i}(t)+\mathcal{B} I_{s,e}(t),\quad&v(0)&=v_0,\\ \dot u(t)&=c_5v(t)-c_4u(t),&u(0)&=u_0,\\ y(t) &= \mathcal{B}'v(t), \end{aligned} \end{equation} where \[p_3(v):=-c_1v+c_2v^2-c_3v^3,\] with constants $c_i>0$ for $i=1,\dots,5$, initial values $v_0,u_0\in L^2(\Omega)$, the Neumann elliptic operator $\mathcal{A}:\mathcal{D}(\mathcal{A})\subseteq L^2(\Omega)\to L^2(\Omega)$ on $\Omega$ associated to $D\in L^\infty(\Omega;\mathbb R^{d\times d})$ and control operator $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega)')$, where $W^{1,2}(\Omega)'$ is the dual of $W^{1,2}(\Omega)$ with respect to the pivot space $L^{2}(\Omega)$; consequently, $\mathcal{B}'\in\mathcal{L}(W^{1,2}(\Omega),{\mathbb{R}}^m)$. System~\eqref{eq:FHN_model} is known as the FitzHugh-Nagumo model for the ionic current~\cite{Fitz61}, where \[I_{ion}(u,v)=p_3(v)-u.\] The functions $I_{s,i}\in L^2_{{\rm loc}}(0,T;L^2(\Omega))$, $I_{s,e}\in L^2_{{\rm loc}}(0,T;{\mathbb{R}}^m)$ are the intracellular and extracellular stimulation currents, respectively.
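To get a feel for the reaction terms in~\eqref{eq:FHN_model}, one can integrate the space-clamped dynamics, i.e.\ the system with the diffusion term $\mathcal{A}v$ and the stimulation currents dropped. The constants $c_1,\dots,c_5$, the step size and the initial data below are illustrative placeholders, not parameter values from the paper.

```python
# Space-clamped FitzHugh-Nagumo reaction dynamics:
#   v' = p3(v) - u,   u' = c5 v - c4 u,   p3(v) = -c1 v + c2 v^2 - c3 v^3
# integrated with explicit Euler; illustrative constants only.
c1, c2, c3, c4, c5 = 0.1, 0.5, 1.0, 0.02, 0.02

def p3(v):
    return -c1 * v + c2 * v**2 - c3 * v**3

def step(v, u, dt):
    return v + dt * (p3(v) - u), u + dt * (c5 * v - c4 * u)

v, u = 1.0, 0.0
for _ in range(20000):          # integrate up to t = 200
    v, u = step(v, u, dt=0.01)
# with these constants the rest state (0, 0) is the unique equilibrium
# and the trajectory decays towards it
```

For these constants one checks that $\tfrac{\rm d}{{\rm d}t}\big(\tfrac{c_5}{2}v^2+\tfrac12 u^2\big)=c_5\,v\,p_3(v)-c_4u^2<0$ away from the origin, mirroring the Lyapunov argument used in the proof below.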
In particular, $I_{s,e}$ is the control input of the system, whereas $y$ is the output.\\ Next we introduce the solution concept. \begin{Def}\label{def:solution} Let Assumption~\ref{Ass1} hold and $\mathcal{A}$ be a Neumann elliptic operator on $\Omega$ associated to $D$ (see Proposition~\ref{prop:Aop}), let $\mathcal{B}\in \mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega)')$, and $u_0,v_0\in L^2(\Omega)$ be given. Further, let $T\in(0,\infty]$ and $I_{s,i}\in L^2_{{\rm loc}}(0,T;L^2(\Omega))$, $I_{s,e}\in L^2_{{\rm loc}}(0,T;{\mathbb{R}}^m)$. A triple of functions $(u,v,y)$ is called \emph{solution} of~\eqref{eq:FHN_model} on $[0,T)$, if \begin{enumerate}[(i)] \item $v\in L^2(0,T;W^{1,2}(\Omega))\cap C([0,T);L^{2}(\Omega))$ with $v(0)=v_0$; \item $u\in C([0,T);L^{2}(\Omega))$ with $u(0)=u_0$; \item for all $\chi\in L^2(\Omega)$, $\theta\in W^{1,2}(\Omega)$, the scalar functions $t\mapsto\scpr{u(t)}{\chi}$, $t\mapsto\scpr{v(t)}{\theta}$ are weakly differentiable on $[0,T)$, and for almost all $t\in (0,T)$ we have \begin{equation}\label{eq:solution} \begin{aligned} {\textstyle\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}}\scpr{v(t)}{\theta}&=-\mathfrak{a}(v(t),\theta)+\scpr{p_3(v(t))-u(t)+I_{s,i}(t)}{\theta}+\scpr{I_{s,e}(t)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m},\\ {\textstyle\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}} \scpr{u(t)}{\chi}&=\scpr{c_5v(t)-c_4u(t)}{\chi},\\ y(t)&=\mathcal{B}'v(t), \end{aligned} \end{equation} where $\mathfrak{a}:W^{1,2}(\Omega)\times W^{1,2}(\Omega)\to{\mathbb{R}}$ is the sesquilinear form defined in \eqref{eq:sesq}. \end{enumerate} \end{Def} \begin{Rem}\label{rem:openloop} \hspace{1em} \begin{enumerate}[a)] \item\label{rem:openloop1} Weak differentiability of $t\mapsto\scpr{u(t)}{\chi}$, $t\mapsto\scpr{v(t)}{\theta}$ for all $\chi\in L^2(\Omega)$, $\theta\in W^{1,2}(\Omega)$ on $(0,T)$ further leads to $v\in W^{1,2}(0,T;W^{1,2}(\Omega)')$ and $u\in W^{1,2}(0,T;L^{2}(\Omega))$.
\item\label{rem:openloop2} The Sobolev Embedding Theorem \cite[Thm.~5.4]{Adam75} implies that the inclusion map $W^{1,2}(\Omega)\hookrightarrow L^6(\Omega)$ is bounded. This guarantees that $p_3(v)\in L^2(0,T;L^2(\Omega))$, whence the first equation in \eqref{eq:solution} is well-defined. \item\label{rem:openloop3} Let $w\in L^2(\Omega)$. An input operator of the form $\mathcal{B} u=u\cdot w$ corresponds to a distributed input, and we have $\mathcal{B}\in\mathcal{L}({\mathbb{R}},L^{2}(\Omega))$. In this case, the output is given by \[y(t)=\int_\Omega w(\xi)\cdot(v(t))(\xi)d\xi.\] A typical situation is that $w$ is an indicator function on a subset of $\Omega$; such choices have been considered in~\cite{KuniSouz18} for instance. \item\label{rem:openloop4} Let $w\in L^2(\partial\Omega)$. An input operator with \begin{equation}\mathcal{B}' z=\int_{\partial\Omega} w(\xi)\cdot z(\xi)d\sigma\label{eq:neumanncontr}\end{equation} corresponds to a~Neumann boundary control \[\nu(\xi)^\top\cdot (\nabla v(t))(\xi) =w(\xi)\cdot I_{s,e}(t),\quad \xi\in\partial\Omega.\] In this case, the output is given by a~weighted integral of the Dirichlet boundary values. More precisely, \[y(t)=\int_{\partial\Omega} w(\xi)\cdot(v(t))(\xi)d\sigma.\] Note that $\mathcal{B}'$ is the composition of the trace operator \[{\rm tr}: z\mapsto \left. z\right|_{\partial\Omega}\] and the inner product in $L^2(\partial\Omega)$ with respect to $w$. The trace operator satisfies ${\rm tr}\in \mathcal{L}(W^{1/2+\varepsilon,2}(\Omega),L^2(\partial\Omega))$ for all $\varepsilon>0$ by the Trace Theorem \cite[Thm.~1.39]{Yagi10}. In particular, ${\rm tr}\in \mathcal{L}(W^{1,2}(\Omega),L^2(\partial\Omega))$, which implies that $\mathcal{B}'\in \mathcal{L}(W^{1,2}(\Omega),{\mathbb{R}})$ and $\mathcal{B}\in \mathcal{L}({\mathbb{R}},W^{1,2}(\Omega)')$.
\end{enumerate} \end{Rem} \section{Control objective}\label{sec:mono_controbj} The objective is that the output~$y$ of the system~\eqref{eq:FHN_model} tracks a given reference signal $y_{\rm ref}\in W^{1,\infty}(0,\infty;{\mathbb{R}}^m)$ with a prescribed performance of the tracking error $e:= y- y_{\rm ref}$, that is, $e$ evolves within the performance funnel \[ \mathcal{F}_\varphi := \setdef{ (t,e)\in[0,\infty)\times{\mathbb{R}}^m}{ \varphi(t) \|e\|_{{\mathbb{R}}^m} < 1} \] defined by a function~$\varphi$ belonging to \[ \Phi_\gamma:=\setdef{\varphi\in W^{1,\infty}(0,\infty;{\mathbb{R}}) }{\varphi|_{[0,\gamma]}\equiv0,\ \forall\delta>0, \inf_{t>\gamma+\delta}\varphi(t)>0 }, \] for some $\gamma>0$. \begin{figure}[h] \begin{minipage}{0.45\textwidth} \begin{tikzpicture}[domain=0.001:4,scale=1.2] \fill[color=blue!20,domain=0.47:4] (0,0)-- plot (\x,{min(2.2,1/\x+2*exp(-3))})--(4,0)-- (0,0); \fill[color=blue!20] (0,0) -- (0,2.2) -- (0.47,2.2) -- (0.47,0) -- (0,0); \draw[->] (-0.2,0) -- (4.3,0) node[right] {$t$}; \draw[->] (0,-0.2) -- (0,2.5) node[above] {}; \draw[color=blue,domain=0.47:4] plot (\x,{min(2.2,1/\x+2*exp(-3))}) node[above] {$1/\varphi(t)$}; \draw[smooth,color=red,style=thick] (1,0) node[below] {$\|e(t)\|_{{\mathbb{R}}^m}$} plot coordinates{(0,0.8)(0.5,1.2)(1,0)}-- plot coordinates{(1,0)(1.25,0.6)(2,0.2)(3,0.3)(4,0.05)} ; \end{tikzpicture} \caption{Error evolution in a funnel $\mathcal F_{\varphi}$ with boundary~$1/\varphi(t)$.} \label{Fig:monodomain_funnel} \end{minipage} \quad \begin{minipage}{0.5\textwidth} The situation is illustrated in Fig.~\ref{Fig:monodomain_funnel}. The funnel boundary given by~$1/\varphi$ is unbounded in a small interval $[0,\gamma]$ to allow for an arbitrary initial tracking error. Since $\varphi$ is bounded, there exists $\lambda>0$ such that $1/\varphi(t) \ge \lambda$ for all $t>0$. Thus, we seek practical tracking with arbitrarily small accuracy $\lambda>0$, but asymptotic tracking is not required in general.
\end{minipage} \end{figure} The funnel boundary is not necessarily monotonically decreasing, although in most situations it is convenient to choose a monotone funnel. Sometimes, widening the funnel over some later time interval might be beneficial, for instance in the presence of periodic disturbances or strongly varying reference signals. For typical choices of funnel boundaries see e.g.~\cite[Sec.~3.2]{Ilch13}.\\ A controller which achieves the above described control objective is the funnel controller. In the present paper, it suffices to restrict ourselves to the simple version developed in~\cite{IlchRyan02b}, which is the feedback law \begin{equation}\label{eq:monodomain_funnel_controller} I_{s,e}(t)=-\frac{k_0}{1-\varphi(t)^2\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}}(\mathcal{B}'v(t)-y_{\rm ref}(t)), \end{equation} where $k_0>0$ is some constant used for scaling and agreement of physical units. Note that, by $\varphi|_{[0,\gamma]}\equiv0$, the controller satisfies \[\forall\, t\in[0,\gamma]:\ I_{s,e}(t)=-k_0(\mathcal{B}'v(t)-y_{\rm ref}(t)).\] We are interested in considering solutions of~\eqref{eq:FHN_feedback}, which leads to the following weak solution framework. \begin{Def}\label{def:solution_feedback} Suppose that the assumptions of Definition~\ref{def:solution} hold. Furthermore, let $k_0>0$, $y_{\rm ref}\in W^{1,\infty}(0,\infty;{\mathbb{R}}^m)$, $\gamma>0$ and $\varphi\in\Phi_\gamma$. A triple of functions $(u,v,y)$ is called \emph{solution} of system~\eqref{eq:FHN_model} with feedback~\eqref{eq:monodomain_funnel_controller} on $[0,T)$, if $(u,v,y)$ satisfies the conditions~(i)--(iii) from Definition~\ref{def:solution} with $I_{s,e}$ as in~\eqref{eq:monodomain_funnel_controller}.
\end{Def} \begin{Rem}\ \begin{enumerate}[a)] \item Inserting the feedback law~\eqref{eq:monodomain_funnel_controller} into the system~\eqref{eq:FHN_model}, we obtain the closed-loop system \begin{equation}\label{eq:FHN_feedback} \begin{aligned} \dot v(t)&=\mathcal{A} v(t)+p_3(v(t))-u(t)+I_{s,i}(t) -\frac{k_0 \mathcal{B} (\mathcal{B}' v(t)-y_{\rm ref}(t))}{1-\varphi(t)^2\|\mathcal{B}'v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}},\\ \dot u(t)&=c_5v(t)-c_4u(t). \end{aligned} \end{equation} Consequently, $(u,v,y)$ is a {solution} of~\eqref{eq:FHN_model},~\eqref{eq:monodomain_funnel_controller} (resp.~\eqref{eq:FHN_feedback}) if, and only if, \begin{enumerate}[(i)] \item $v\in L^2(0,T;W^{1,2}(\Omega))\cap C([0,T);L^{2}(\Omega))$ with $v(0)=v_0$; \item $u\in C([0,T);L^{2}(\Omega))$ with $u(0)=u_0$; \item for all $\chi\in L^2(\Omega)$, $\theta\in W^{1,2}(\Omega)$, the scalar functions $t\mapsto\scpr{u(t)}{\chi}$, $t\mapsto\scpr{v(t)}{\theta}$ are weakly differentiable on $[0,T)$, and it holds that, for almost all $t\in (0,T)$, \begin{equation}\label{eq:solution_cl} \begin{aligned} {\textstyle\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}}\scpr{v(t)}{\theta}&=-\mathfrak{a}(v(t),\theta)+\scpr{p_3(v(t))-u(t)+I_{s,i}(t)}{\theta}\\&\qquad-{\frac{k_0 \scpr{\mathcal{B}'v(t)-y_{\rm ref}(t)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m}}{1-\varphi(t)^2\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}}},\\[2mm] {\textstyle\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}}\scpr{u(t)}{\chi}&=\scpr{c_5v(t)-c_4u(t)}{\chi},\\ y(t)&=\mathcal{B}'v(t). \end{aligned} \end{equation} \end{enumerate} The system~\eqref{eq:FHN_feedback} is a nonlinear and non-autonomous PDE and any solution needs to satisfy that the tracking error evolves in the prescribed performance funnel~$\mathcal{F}_\varphi$. Therefore, existence and uniqueness of solutions is a nontrivial problem and even if a solution exists on a finite time interval $[0,T)$, it is not clear that it can be extended to a global solution.
\item For global solutions it is desirable that $I_{s,e}\in L^\infty(\delta,\infty;{\mathbb{R}}^m)$ for all $\delta>0$. Note that this is equivalent to \[\limsup_{t\to\infty}\varphi(t)^2\|\mathcal{B}'v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}<1.\] It is likewise desirable that $y$ and $I_{s,e}$ have a certain smoothness. \end{enumerate} \end{Rem} In the following we state the main result of the present paper. We will show that the closed-loop system~\eqref{eq:FHN_feedback} has a unique global solution so that all signals remain bounded. Furthermore, the tracking error stays uniformly away from the funnel boundary. We further show that we gain more regularity of the solution if $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{r,2}(\Omega)')$ for some $r\in [0,1)$ or even $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega))$. Recall that $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{r,2}(\Omega)')$ if, and only if, $\mathcal{B}'\in\mathcal{L}(W^{r,2}(\Omega),{\mathbb{R}}^m)$. Furthermore, for any $r\in(0,1)$ we have the inclusions \[ \mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega)) \subset \mathcal{L}({\mathbb{R}}^m,L^2(\Omega)) \subset \mathcal{L}({\mathbb{R}}^m,W^{r,2}(\Omega)') \subset \mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega)'). \] \begin{Thm}\label{thm:mono_funnel} Suppose that the assumptions of Definition~\ref{def:solution_feedback} hold. Furthermore, assume that $\ker\mathcal{B}=\{0\}$ and $I_{s,i}\in L^\infty(0,\infty;L^2(\Omega))$.
Then there exists a unique solution of~\eqref{eq:FHN_feedback} on $[0,\infty)$ and we have \begin{enumerate}[(i)] \item $u,\dot{u},v\in BC([0,\infty);L^2(\Omega))$; \item for all $\delta>0$ we have \begin{align*} v&\in BUC([\delta,\infty);W^{1,2}(\Omega))\cap C^{0,1/2}([\delta,\infty);L^{2}(\Omega)),\\ y,I_{s,e}&\in BUC([\delta,\infty);{\mathbb{R}}^m); \end{align*} \item $\exists\,\varepsilon_0>0\ \forall\,\delta>0\ \forall\, t\geq\delta:\ \varphi(t)^2\|\mathcal{B}'v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}\leq1-\varepsilon_0.$ \end{enumerate} Furthermore, \begin{enumerate}[a)] \item if additionally $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{r,2}(\Omega)')$ for some $r\in (0,1)$, then for all $\delta>0$ we have that \begin{align*} v\in C^{0,1-r/2}([\delta,\infty);L^{2}(\Omega)),\quad y,I_{s,e}\in C^{0,1-r}([\delta,\infty);{\mathbb{R}}^m). \end{align*} \item if additionally $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,L^2(\Omega))$, then for all $\delta>0$ and all $\lambda\in(0,1)$ we have \begin{align*} v\in C^{0,\lambda}([\delta,\infty);L^{2}(\Omega)),\quad y,I_{s,e}\in C^{0,\lambda}([\delta,\infty);{\mathbb{R}}^m). \end{align*} \item if additionally $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega))$, then for all $\delta>0$ we have $y,I_{s,e}\in W^{1,\infty}(\delta,\infty;{\mathbb{R}}^m)$. \end{enumerate} \end{Thm} \begin{Rem}\label{rem:main} \hspace{1em} \begin{enumerate}[a)] \item\label{rem:main1} The condition $\ker \mathcal{B}=\{0\}$ is equivalent to $\im \mathcal{B}'$ being dense in ${\mathbb{R}}^m$. The latter is equivalent to $\im \mathcal{B}'={\mathbb{R}}^m$ by the finite-dimensionality of ${\mathbb{R}}^m$.\\ Note that surjectivity of $\mathcal{B}'$ is mandatory for tracking control, since it is necessary that any reference signal $y_{\rm ref}\in W^{1,\infty}(0,\infty;{\mathbb{R}}^m)$ can actually be generated by the output $y(t)=\mathcal{B}' v$. This property is sometimes called \emph{right-invertibility}, see e.g.~\cite[Sec.~8.2]{TrenStoo01}. 
\item\label{rem:main2} If the input operator corresponds to Neumann boundary control, i.e., $\mathcal{B}$ is as in~\eqref{eq:neumanncontr} for some $w\in L^2(\partial\Omega)$, then $\mathcal{B}\in\mathcal{L}({\mathbb{R}},W^{r,2}(\Omega)')$ for some $r\in(1/2,1)$, cf.\ Remark~\ref{rem:openloop}\,\ref{rem:openloop4}), and the assertions of Theorem~\ref{thm:mono_funnel}\,a) hold. \item\label{rem:main3} If the input operator corresponds to distributed control, that is $\mathcal{B} u=u\cdot w$ for some $w\in L^2(\Omega)$, then $\mathcal{B}\in\mathcal{L}({\mathbb{R}},L^2(\Omega))$, cf.\ Remark~\ref{rem:openloop}\,\ref{rem:openloop3}), and the assertions of Theorem~\ref{thm:mono_funnel}\,b) hold. \end{enumerate} \end{Rem} \section{Proof of Theorem~\ref{thm:mono_funnel}} \label{sec:mono_proof_mt} The proof is inspired by the results of \cite{Jack90} on existence and uniqueness of solutions of the (uncontrolled) FitzHugh-Nagumo equations, which are based on a spectral approximation and subsequent convergence proofs using arguments from~\cite{Lion69}. We divide the proof into two major parts. First, we show that there exists a unique solution on the interval $[0,\gamma]$. After that we show that the solution also exists on $(\gamma,\infty)$, is continuous at $t=\gamma$ and has the desired properties.
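Before entering the proof, it may help to look at the controller~\eqref{eq:monodomain_funnel_controller} numerically in the scalar case $m=1$: the gain equals $k_0$ while $\varphi$ vanishes on $[0,\gamma]$, and it grows without bound as the error approaches the funnel boundary. The funnel function and the value of $k_0$ below are illustrative choices, not from the paper.

```python
import math

k0, gamma = 1.0, 1.0

def phi(t):
    # a simple admissible funnel function: zero on [0, gamma], then
    # increasing towards a plateau, so 1/phi shrinks to a constant radius
    return 0.0 if t <= gamma else 2.0 * (1.0 - math.exp(-(t - gamma)))

def funnel_control(t, e):
    # I_se(t) = -k0 / (1 - phi(t)^2 |e|^2) * e, defined while phi(t)|e| < 1
    w = 1.0 - (phi(t) * e) ** 2
    assert w > 0.0, "error has left the performance funnel"
    return -k0 * e / w

# on [0, gamma] the law reduces to proportional feedback -k0 * e
assert funnel_control(0.5, 0.3) == -k0 * 0.3
# later, the same error is penalized harder (larger gain magnitude)
assert funnel_control(5.0, 0.3) < funnel_control(0.5, 0.3)
```

The built-in assertion makes explicit that the feedback is only defined while the error stays inside the funnel, which is exactly why existence of solutions of the closed loop is nontrivial.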
\subsection{Solution on $[0,\gamma]$} \label{ssec:mono_proof_tleqgamma} Assuming that $t\in[0,\gamma]$, we have that $\varphi(t)\equiv0$ so that we need to show existence of a pair of functions $(v,u)$ with the properties as in Definition~\ref{def:solution}~(i)--(iii), where~\eqref{eq:solution} simplifies to \begin{equation}\label{eq:weak_uv_delta} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{v(t)}{\theta}&=-\mathfrak{a}(v(t),\theta)+\scpr{p_3(v(t))-u(t)+I_{s,i}(t)}{\theta}+\scpr{I_{s,e}(t)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m},\\ \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{u(t)}{\chi}&=\scpr{c_5 v(t)-c_4u(t)}{\chi},\\ I_{s,e}(t)&=-k_0(\mathcal{B}' v(t)-y_{\rm ref}(t)),\\ y(t)&= \mathcal{B}' v(t). \end{aligned} \end{equation} Recall that $\mathfrak{a}:W^{1,2}(\Omega)\times W^{1,2}(\Omega)\to{\mathbb{R}}$ is the sesquilinear form \eqref{eq:sesq}. \emph{Step 1: We show existence and uniqueness of a solution.}\\ \emph{Step 1a: We show existence of a local solution on $[0,\gamma]$.} To this end, let $(\theta_i)_{i\in{\mathbb{N}}_0}$ be the eigenfunctions of $-\mathcal{A}$ and $\alpha_i$ be the corresponding eigenvalues, with $\alpha_i\geq0$ for all $i\in{\mathbb{N}}_0$. Recall that $(\theta_i)_{i\in{\mathbb{N}}_0}$ form an orthonormal basis of $L^2(\Omega)$ by Proposition~\ref{prop:Aop_n}\,\ref{item:Aop5}). Hence, with $a_i := \scpr{v_0}{\theta_i}$ and $b_i := \scpr{u_0}{\theta_i}$ for $i\in{\mathbb{N}}_0$ and \[ v_0^n:= \sum_{i=0}^na_{i}\theta_i,\quad u_0^n:= \sum_{i=0}^nb_{i}\theta_i,\quad n\in{\mathbb{N}}, \] we have that $v^n_0\to v_0$ and $u^n_0\to u_0$ strongly in $L^2(\Omega)$.\\ Fix $n\in{\mathbb{N}}_0$ and let $\gamma_i:=\mathcal{B}'\theta_i$ for $i=0,\dots,n$. 
Consider, for $j=0,\ldots,n$, the differential equations \begin{equation}\label{eq:muj-nuj-0gamma} \begin{aligned} \dot{\mu}_j(t)&=-\alpha_j\mu_j(t)-\nu_j(t)-\scpr{k_0\left(\sum_{i=0}^n\gamma_i\mu_i(t)-y_{\rm ref}(t)\right)}{\gamma_j}_{{\mathbb{R}}^m}+\scpr{I_{s,i}(t)}{\theta_j} \\ &\quad +\scpr{p_3\left(\sum_{i=0}^n\mu_i(t)\theta_i\right)}{\theta_j},\\ \dot{\nu}_j(t)&=-c_4\nu_j(t)+c_5\mu_j(t),\qquad\qquad \text{with}\ \mu_j(0)=a_j,\ \nu_j(0)=b_j, \end{aligned} \end{equation} defined on ${\mathbb{D}}:=[0,\infty)\times{\mathbb{R}}^{2(n+1)}$. Since the functions on the right hand side of~\eqref{eq:muj-nuj-0gamma} are continuous, it follows from ODE theory, see e.g.~\cite[\S~10, Thm.~XX]{Walt98}, that there exists a weakly differentiable solution $(\mu^n,\nu^n)=(\mu_0,\dots,\mu_n,\nu_0,\dots,\nu_n):[0,T_n)\to{\mathbb{R}}^{2(n+1)}$ of~\eqref{eq:muj-nuj-0gamma} such that $T_n\in(0,\infty]$ is maximal. Furthermore, the closure of the graph of~$(\mu^n,\nu^n)$ is not a compact subset of ${\mathbb{D}}$.\\ Now, set $v_n(t):=\sum_{i=0}^n{\mu}_i(t)\theta_i$ and $u_n(t):=\sum_{i=0}^n{\nu}_i(t)\theta_i$. Invoking~\eqref{eq:muj-nuj-0gamma} and using the functions $\theta_j$ we have that for $j=0,\dots,n$ the functions $(v_n,u_n)$ satisfy \begin{equation}\label{eq:weak_delta} \begin{aligned} \scpr{\dot{v}_n(t)}{\theta_j}&=-\mathfrak{a}(v_n(t),\theta_j)- \scpr{u_n(t)}{\theta_j}+\scpr{p_3(v_n(t))}{\theta_j}+\scpr{I_{s,i}(t)}{\theta_j}\\ &\quad -\scpr{k_0(\mathcal{B}' v_n(t)-y_{\rm ref}(t))}{\mathcal{B}'\theta_j}_{{\mathbb{R}}^m},\\ \scpr{\dot{u}_n(t)}{\theta_j} &= -c_4\scpr{u_n(t)}{\theta_j}+c_5\scpr{v_n(t)}{\theta_j}. \end{aligned} \end{equation} \emph{Step 1b: We show boundedness of $(v_n,u_n)$.} Consider the Lyapunov function candidate \begin{equation}\label{eq:Lyapunov} V:L^2(\Omega)\times L^2(\Omega)\to{\mathbb{R}},\ (v,u)\mapsto\frac12(c_5\|v\|^2+\|u\|^2). 
\end{equation} Observe that, since $(\theta_i)_{i\in{\mathbb{N}}_0}$ are orthonormal, we have $\|v_n\|^2 = \sum_{j=0}^n \mu_j^2$ and $\|u_n\|^2 = \sum_{j=0}^n \nu_j^2$. Hence we find that, for all $t\in[0, T_n)$, \begin{align*} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(v_n(t),u_n(t)) &\stackrel{\eqref{eq:muj-nuj-0gamma}}{=} c_5\sum_{j=0}^n\mu_j(t)\dot{\mu}_j(t)+\sum_{j=0}^n\nu_j(t)\dot{\nu}_j(t)\\ &=-c_5\sum_{j=0}^{n}\alpha_j\mu_j(t)^2-c_4\sum_{j=0}^{n}\nu_j(t)^2\\ &\quad -c_5\scpr{k_0\left(\sum_{i=0}^n\gamma_i\mu_i(t)-y_{\rm ref}(t)\right)}{\sum_{j=0}^n\gamma_j\mu_j(t)}_{{\mathbb{R}}^m}\\ &\quad +c_5\scpr{p_3\left(v_n(t)\right)}{v_n(t)}+c_5\scpr{I_{s,i}(t)}{v_n(t)} \end{align*} hence, omitting the argument~$t$ for brevity in the following, \begin{equation}\label{eq:Lyapunov_delta} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(v_n,u_n)=&-c_5\mathfrak{a}(v_n,v_n)-c_4\|u_n\|^2+c_5\scpr{I_{s,i}}{v_n}\\ &-c_5k_0\|\overline{e}_n\|_{{\mathbb{R}}^m}^2+c_5k_0\scpr{\overline{e}_n}{y_{\rm ref}}_{{\mathbb{R}}^m}+c_5\scpr{p_3(v_n)}{v_n}, \end{aligned} \end{equation} where \[\overline{e}_n(t):=\sum_{i=0}^n\gamma_i\mu_i(t)-y_{\rm ref}(t)=\mathcal{B}' v_n(t)-y_{\rm ref}(t).\] Before proceeding, recall Young's inequality for products, i.e., for $a,b\ge 0$ and $p,q\ge 1$ such that $1/p + 1/q = 1$ we have that \[ a b \le \frac{a^p}{p} + \frac{b^q}{q}, \] which will be frequently used in the following. Note that \begin{align*} \scpr{p_3(v_n)}{v_n}&=-c_1\|v_n\|^2+c_2\scpr{v_n^2}{v_n}-c_3\|v_n\|^4_{L^4},\\ c_2|\scpr{v_n^2}{v_n}|&=|\scpr{\epsilon v_n^3}{\epsilon^{-1}c_2}|\leq \frac{3\epsilon^{4/3}}{4}\|v_n\|^4_{L^4}+\frac{c_2^4}{4\epsilon^4}|\Omega|, \end{align*} where the latter follows from Young's inequality with $p=\tfrac43$ and $q=4$. 
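This Young-inequality step can be checked numerically: with the choice of $\epsilon$ made next, it yields the pointwise bound $c_2 s^3 \le \tfrac{c_3}{2}s^4 + \tfrac{27 c_2^4}{32 c_3^3}$ for all $s\in{\mathbb{R}}$, which is sharp at $s^*=\tfrac{3c_2}{2c_3}$. A quick check with arbitrary positive constants:

```python
import numpy as np

# Pointwise bound behind the Lyapunov estimate:
#   c2*s^3 <= (c3/2)*s^4 + 27*c2^4/(32*c3^3)   for all real s,
# hence p3(s)*s <= 27*c2^4/(32*c3^3) - c1*s^2 - (c3/2)*s^4.
# The constants are arbitrary positive numbers, not from the paper.
c1, c2, c3 = 0.7, 1.3, 0.9
bound = 27 * c2**4 / (32 * c3**3)
s = np.linspace(-10.0, 10.0, 200001)
assert np.all(c2 * s**3 <= (c3 / 2) * s**4 + bound + 1e-9)

p3s_times_s = -c1 * s**2 + c2 * s**3 - c3 * s**4
assert np.all(p3s_times_s <= bound - c1 * s**2 - (c3 / 2) * s**4 + 1e-9)

# the bound is sharp: equality holds at s* = 3*c2/(2*c3)
s_star = 3 * c2 / (2 * c3)
assert np.isclose(c2 * s_star**3, (c3 / 2) * s_star**4 + bound)
```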
Choosing $\epsilon=\left(\tfrac23 c_3\right)^{\tfrac34}$ we obtain \[\scpr{p_3(v_n)}{v_n}\leq \frac{27 c_2^4}{32 c_3^3}|\Omega|-c_1\|v_n\|^2-\frac{c_3}{2}\|v_n\|_{L^4}^4.\] Moreover, \[\scpr{\overline{e}_n}{y_{\rm ref}}_{{\mathbb{R}}^m}\leq\frac{1}{2}\|\overline{e}_n\|_{{\mathbb{R}}^m}^2+\frac{1}{2}\|y_{\rm ref}\|_{{\mathbb{R}}^m}^2\] and \[\scpr{I_{s,i}}{v_n}\leq \frac{c_1}{2}\|v_n\|^2+\frac{1}{2c_1}\|I_{s,i}\|^2,\] so that \eqref{eq:Lyapunov_delta} can be estimated by \begin{align*} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(v_n,u_n) \leq&-c_5\mathfrak{a}(v_n,v_n)-\frac{c_1c_5}{2}\|v_n\|^2-\frac{c_5k_0}{2}\|\overline{e}_n\|_{{\mathbb{R}}^m}^2-\frac{c_3 c_5}{2}\|v_n\|_{L^4}^4\\ &+\frac{k_0c_5}{2}\|y_{\rm ref}\|_{{\mathbb{R}}^m}^2+\frac{c_5}{2c_1}\|I_{s,i}\|^2+\frac{27 c_2^4 c_5}{32 c_3^3}|\Omega|\\ \leq&\ -c_5\mathfrak{a}(v_n,v_n)-\frac{c_1c_5}{2}\|v_n\|^2-\frac{c_5k_0}{2}\|\overline{e}_n\|_{{\mathbb{R}}^m}^2-\frac{c_3 c_5}{2}\|v_n\|_{L^4}^4\\ &+\frac{k_0c_5}{2}\|y_{\rm ref}\|_{\infty}^2+\frac{c_5}{2c_1}\|I_{s,i}\|^2_{2,\infty}+\frac{27 c_2^4 c_5}{32 c_3^3}|\Omega|, \end{align*} where $\|I_{s,i}\|_{2,\infty} = \esssup_{t\ge 0} \left( \int_\Omega |I_{s,i}(\zeta,t)|^2 \ds{\zeta}\right)^{1/2}$. Setting \[C_\infty:=\frac{k_0c_5}{2}\|y_{\rm ref}\|_{\infty}^2+\frac{c_5}{2c_1}\|I_{s,i}\|^2_{2,\infty}+\frac{27 c_2^4 c_5}{32 c_3^3}|\Omega|,\] we obtain that, for all $t\in[0, T_n)$, \begin{align*} &V(v_n(t),u_n(t))+c_5\int_0^t\mathfrak{a}(v_n(s),v_n(s))\ds{s}+\frac{c_1 c_5}{2}\int_0^t\|v_n(s)\|^2\ds{s}\\ &+\frac{c_5k_0}{2}\int_0^t\|\overline{e}_n(s)\|_{{\mathbb{R}}^m}^2\ds{s} +\frac{c_3 c_5}{2}\int_0^t\|v_n(s)\|_{L^4}^4\ds{s}\leq V(v_0^n,u_0^n)+C_\infty t.
\end{align*} Since $(v_0^n,u_0^n)\to (v_0,u_0)$ strongly in $L^2(\Omega)$ and we have for all $p\in L^2(\Omega)$ that \[ \left\|\sum_{i=0}^n \scpr{p}{\theta_i} \theta_i\right\|^2=\sum_{i=0}^n \scpr{p}{\theta_i}^2\leq\sum_{i=0}^\infty \scpr{p}{\theta_i}^2=\left\|\sum_{i=0}^\infty \scpr{p}{\theta_i}\theta_i\right\|^2=\|p\|^2, \] it follows that, for all $t\in[0, T_n)$, \begin{equation}\label{eq:bound_1} \begin{aligned} &c_5\|v_n(t)\|^2+\|u_n(t)\|^2+ 2c_5\int_0^t\mathfrak{a}(v_n(s),v_n(s))\ds{s}+ {c_1 c_5}\int_0^t\|v_n(s)\|^2\ds{s} \\ &+ {c_5k_0}\int_0^t\|\overline{e}_n(s)\|_{{\mathbb{R}}^m}^2\ds{s} +{c_3 c_5}\int_0^t\|v_n(s)\|_{L^4}^4\ds{s}\leq 2C_\infty t+c_5\|v_0\|^2+\|u_0\|^2. \end{aligned} \end{equation} \emph{Step 1c: We show that $T_n = \infty$.} Assume that $T_n<\infty$; then it follows from~\eqref{eq:bound_1} together with~\eqref{eq:sesq} that $(v_n,u_n)$ is bounded, thus the solution $(\mu^n,\nu^n)$ of~\eqref{eq:muj-nuj-0gamma} is bounded on $[0,T_n)$. But this implies that the closure of the graph of $(\mu^n,\nu^n)$ is a compact subset of ${\mathbb{D}}$, a contradiction. Therefore, $T_n=\infty$ and in particular the solution is defined for all $t\in[0,\gamma]$.\\ \emph{Step 1d: We show convergence of $(v_n,u_n)$ to a solution of~\eqref{eq:weak_uv_delta} on $[0,\gamma]$.} First note that it follows from~\eqref{eq:bound_1} that \begin{equation}\label{eq:uv_bound_delta} \forall\, t\in[0,\gamma]:\ \|v_n(t)\|^2\leq C_v,\quad \|u_n(t)\|^2\leq C_u \end{equation} for some $C_v, C_u>0$.
From \eqref{eq:bound_1} and condition~\eqref{eq:ellcond} in Assumption~\ref{Ass1} it follows that there is a constant $C_\delta>0$ such that \[\int_0^\gamma\|\nabla v_n(t)\|^2\ds{t}\leq\delta^{-1}\int_0^\gamma\mathfrak{a}(v_n(t),v_n(t))\ds{t}\leq C_\delta.\] This together with~\eqref{eq:bound_1} and~\eqref{eq:uv_bound_delta} implies that there exist constants $C_1,C_2>0$ with \begin{equation}\label{eq:extra_bounds_delta} \|v_n\|^4_{L^4(0,\gamma;L^{4}(\Omega))}\leq C_1,\quad \|v_n\|_{L^2(0,\gamma;W^{1,2}(\Omega))}\leq C_2. \end{equation} Note that \eqref{eq:extra_bounds_delta} directly implies that \begin{equation}\label{eq:vn2} \begin{aligned}\|v_n^2\|^2_{L^2(0,\gamma;L^{2}(\Omega))}\leq&\, C_1,\\ \|v_n^3\|_{L^{4/3}(0,\gamma;L^{4/3}(\Omega))} =&\, \left( \|v_n^2\|^2_{L^2(0,\gamma;L^{2}(\Omega))}\right)^{3/4} \le C_1^{3/4}. \end{aligned}\end{equation} Multiplying the second equation in~\eqref{eq:weak_delta} by $\dot{\nu}_j$ and summing up over $j\in\{0,\ldots,n\}$ leads to \begin{align*} \|\dot{u}_n\|^2 &= -\frac{c_4}{2} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \|u_n\|^2 + c_5 \scpr{v_n}{\dot{u}_n}\\ &\le -\frac{c_4}{2} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|u_n\|^2 + \frac{c_5^2}{2}\|v_n\|^2+\frac{1}{2}\|\dot{u}_n\|^2, \end{align*} thus \[\|\dot{u}_n\|^2\leq-c_4 \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|u_n\|^2+c_5^2\|v_n\|^2.\] Upon integration over $[0,\gamma]$ and using \eqref{eq:uv_bound_delta} this yields that \[\int_0^\gamma\|\dot{u}_n(t)\|^2\ds{t}\leq c_4 C_u + c_5^2\int_0^\gamma\|v_n(t)\|^2\ds{t} \le \hat{C}_3\] for some $\hat{C}_3>0$, where the last inequality is a consequence of~\eqref{eq:bound_1}. This together with \eqref{eq:uv_bound_delta} implies that there is $C_3>0$ such that $\|u_n\|_{W^{1,2}(0,\gamma;L^2(\Omega))}\leq C_3$. Now, let $P_n$ be the orthogonal projection of $L^2(\Omega)$ onto the subspace generated by the set $\setdef{\theta_i}{i=0,\dots,n}$.
Consider the norm \[\|v\|_{W^{1,2}}=\left(\sum_{i=0}^\infty(1+\alpha_i)|\scpr{v}{\theta_i}|^2\right)^{1/2}\] on $W^{1,2}(\Omega)$ according to Proposition \ref{prop:Aop2_n} and Remark \ref{rem:X_alpha}. By duality we have that \[\|\hat v\|_{(W^{1,2})'}=\left(\sum_{i=0}^\infty(1+\alpha_i)^{-1}|\scpr{\hat v}{\theta_i}|^2\right)^{1/2}\] is a norm on $W^{1,2}(\Omega)'$, cf.~\cite[Prop.~3.4.8]{TucsWeis09}. Note that we can consider $P_n:W^{1,2}(\Omega)'\to W^{1,2}(\Omega)'$, which is a bounded linear operator with norm one, independent of $n$. Using this together with the fact that the injection from $L^2(\Omega)$ into $W^{1,2}(\Omega)'$ is continuous and $\mathcal{A}\in\mathcal{L}(W^{1,2}(\Omega),W^{1,2}(\Omega)')$, we can rewrite the weak formulation~\eqref{eq:weak_delta} as \begin{equation}\label{eq:approx_dual} \dot{v}_n=P_n\mathcal{A} v_n+P_np_3(v_n)-P_nu_n+P_nI_{s,i}-P_n\mathcal{B} k_0(\mathcal{B}' v_n-y_{\rm ref}). \end{equation} Since $v_n\in L^2(0,\gamma;W^{1,2}(\Omega))$ and hence, by the Sobolev Embedding Theorem, $v_n\in L^2(0,\gamma;L^p(\Omega))$ for all $2\leq p\leq6$, we find that $p_3(v_n)\in L^2(0,\gamma;L^2(\Omega))$.
We also have $\mathcal{A} v_n\in L^2(0,\gamma;W^{1,2}(\Omega)')$ and $\mathcal{B} k_0(\mathcal{B}' v_n-y_{\rm ref})\in L^2(0,\gamma;W^{1,2}(\Omega)')$ so that by using the previously derived estimates and~\eqref{eq:approx_dual}, there exists $C_4>0$ independent of $n$ and $t$ with \[\|\dot{v}_n\|_{L^2(0,\gamma;W^{1,2}(\Omega)')}\leq C_4.\] Now, by Lemma~\ref{lem:weak_convergence} we have that there exist subsequences of $(u_n)$, $(v_n)$ and $(\dot v_n)$, resp., again denoted in the same way, for which \begin{equation}\label{eq:convergence_subseq} \begin{aligned} u_n\to u&\in W^{1,2}(0,\gamma;L^2(\Omega))\mbox{ weakly},\\ u_n\to u&\in L^{\infty}(0,\gamma;L^{2}(\Omega))\mbox{ weak}^\star,\\ v_n\to v&\in L^2(0,\gamma;W^{1,2}(\Omega))\mbox{ weakly},\\ v_n\to v&\in L^\infty(0,\gamma;L^{2}(\Omega))\mbox{ weak}^\star,\\ v_n\to v&\in L^4(0,\gamma;L^{4}(\Omega))\mbox{ weakly},\\ \dot{v}_n\to \dot{v}&\in L^2(0,\gamma;W^{1,2}(\Omega)')\mbox{ weakly}. \end{aligned} \end{equation} Moreover, let $p_0=p_1=2$ and $X=W^{1,2}(\Omega)$, $Y=L^2(\Omega)$, $Z=W^{1,2}(\Omega)'$. Then, \cite[Chap.~1, Thm.~5.1]{Lion69} implies that \[W:=\setdef{u\in L^{p_0}(0,\gamma;X) }{ \dot{u}\in L^{p_1}(0,\gamma;Z) }\] with norm $\|u\|_{L^{p_0}(0,\gamma;X)}+\|\dot{u}\|_{L^{p_1}(0,\gamma;Z)}$ has a compact injection into $L^{p_0}(0,\gamma;Y)$, so that the weakly convergent sequence $v_n\to v\in W$ converges strongly in $L^2(0,\gamma;L^2(\Omega))$ by \cite[Lem.~1.6]{HinzPinn09}. Further, $(u(0),v(0))=(u_0,v_0)$, and from $u\in W^{1,2}(0,\gamma;L^2(\Omega))$, $v\in L^2(0,\gamma;W^{1,2}(\Omega))$ and $\dot{v}\in L^2(0,\gamma;W^{1,2}(\Omega)')$ it follows that $u,v\in C([0,\gamma];L^2(\Omega))$, see for instance \cite[Thm.~1.32]{HinzPinn09}. Moreover, note that $\mathcal{B}' v-y_{\rm ref}\in L^2(0,\gamma;{\mathbb{R}}^m)$.
Hence, $(u,v)$ is a solution of \eqref{eq:FHN_feedback} on $[0,\gamma]$ and \begin{equation}\label{eq:strong_delta} \dot{v}(t)=\mathcal{A} v(t)+p_3(v(t))-u(t)+I_{s,i}(t)-\mathcal{B} k_0(\mathcal{B}' v(t)-y_{\rm ref}(t)) \end{equation} is satisfied in $W^{1,2}(\Omega)'$. Moreover, by~\eqref{eq:vn2},~\cite[Chap.~1, Lem.~1.3]{Lion69} and $v_n\to v$ in $L^4(0,\gamma;L^{4}(\Omega))$ we have that $v_n^3\to v^3$ weakly in $L^{4/3}(0,\gamma;L^{4/3}(\Omega))$ and $v_n^2\to v^2$ weakly in $L^2(0,\gamma;L^{2}(\Omega))$.\\ \emph{Step 1e: We show uniqueness of the solution $(v,u)$.} To this end, we separate the linear part of~$p_3$ so that \[p_3(v)=-c_1v-c_3\hat{p}_3(v),\quad \hat{p}_3(v)\coloneqq v^2\left(v-c\right),\quad c\coloneqq c_2/c_3.\] Assume that $(v_1,u_1)$ and $(v_2,u_2)$ are two solutions of~\eqref{eq:FHN_feedback} on $[0,\gamma]$ with the same initial values, $v_1(0) = v_2(0) = v_0$ and $u_1(0) = u_2(0) = u_0$. Let $t_0\in(0,\gamma]$ be given. Let $Q_0\coloneqq(0,t_0)\times\Omega$. Define \[\Sigma(\zeta,t):= |v_1(\zeta,t)|+|v_2(\zeta,t)|,\] and let \[Q^\Lambda:= \{(t,\zeta)\in Q_0\ |\ \Sigma(\zeta,t)\leq\Lambda\},\quad \Lambda>0.\] Note that, by convexity of the map $x\mapsto x^p$ on $[0,\infty)$ for $p>1$, we have that \[ \forall\, a,b\ge 0:\ \big(\tfrac12 a+ \tfrac12 b\big)^p \le \tfrac12 a^p + \tfrac12 b^p. \] In particular, $\Sigma^4\le 8\big(|v_1|^4+|v_2|^4\big)$, so, since $v_1,v_2\in L^4(0,\gamma;L^{4}(\Omega))$, we find that $\Sigma\in L^4(0,\gamma;L^{4}(\Omega))$. Hence, by the monotone convergence theorem, for all $\epsilon>0$ we may choose $\Lambda$ large enough such that \[\int_{Q_0\setminus Q^\Lambda}|\Sigma(\zeta,t)|^4\ds{(\zeta,t)}<\epsilon.\] Note that without loss of generality we may assume that $\Lambda>c$. Let $V:= v_2-v_1$ and $U:= u_2-u_1$; then, by~\eqref{eq:FHN_feedback}, \begin{align*} \dot V &= (\mathcal{A}-c_1I) V -c_3(\hat{p}_3(v_2) - \hat{p}_3(v_1)) - U - k_0 \mathcal{B}\mathcal{B}' V,\\ \dot U &= c_5 V - c_4 U.
\end{align*} By~\cite[Thm.~1.32]{HinzPinn09}, we have for all $t\in(0,\gamma)$ that \[\tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|V(t)\|^2=\scpr{\dot{V}(t)}{V(t)},\quad \tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|U(t)\|^2=\scpr{\dot{U}(t)}{U(t)},\] thus we may compute that \begin{align*} \tfrac{c_5}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|V\|^2+\tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|U\|^2&= \scpr{(\mathcal{A}-c_1I) V - U - k_0 \mathcal{B}\mathcal{B}' V}{c_5 V} - c_4 \|U\|^2 + c_5 \scpr{U}{V}\\ &\quad-c_5 c_3\scpr{ \hat{p}_3(v_2) - \hat{p}_3(v_1)}{V}\\ &= c_5 \scpr{(\mathcal{A}-c_1I) V}{V} - c_5 k_0 \scpr{\mathcal{B}' V}{\mathcal{B}' V} - c_4 \|U\|^2 \\ &\quad - c_5c_3 \scpr{\hat{p}_3(v_2)-\hat{p}_3(v_1)}{V}\\ &\le-c_5c_3 \scpr{\hat{p}_3(v_2)-\hat{p}_3(v_1)}{V}. \end{align*} Integration over $[0,t_0]$ and using $(U(0),V(0))=(0,0)$ leads to \begin{align*} \tfrac{c_5}{2}\|V(t_0)\|^2+\tfrac{1}{2}\|U(t_0)\|^2 &\leq-c_5c_3\int_0^{t_0}\int_\Omega (\hat{p}_3(v_2(\zeta,t))-\hat{p}_3(v_1(\zeta,t)))V(\zeta,t)\ds{\zeta}\ds{t}\\ &=-c_5c_3\int_{Q^\Lambda} (\hat{p}_3(v_2(\zeta,t))-\hat{p}_3(v_1(\zeta,t)))V(\zeta,t)\ds{\zeta}\ds{t}\\ &\quad-c_5c_3\int_{Q_0\setminus Q^\Lambda} (\hat{p}_3(v_2(\zeta,t))-\hat{p}_3(v_1(\zeta,t)))V(\zeta,t)\ds{\zeta}\ds{t}. \end{align*} Note that on $Q^\Lambda$ we have $-\Lambda\leq v_1\leq\Lambda$ and $-\Lambda\leq v_2\leq\Lambda$. Let $a,b\in[-\Lambda,\Lambda]$; then the mean value theorem implies \begin{align*} (\hat{p}_3(b)-\hat{p}_3(a))(b-a)=\hat{p}_3'(\xi)(b-a)^2 \end{align*} for some $\xi\in(-\Lambda,\Lambda)$.
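Completing the square makes the lower bound for this derivative explicit: for all $\xi\in{\mathbb{R}}$, \[\hat{p}_3'(\xi)=3\xi^2-2c\xi=3\left(\xi-\frac{c}{3}\right)^2-\frac{c^2}{3}\geq-\frac{c^2}{3}.\]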
Since $\hat{p}_3'(\xi)=3\xi^2-2c\xi$ attains its minimum value $-\frac{c^2}{3}$ at $\xi^\ast=\frac{c}{3}$, we have that \[(\hat{p}_3(b)-\hat{p}_3(a))(b-a)=\hat{p}_3'(\xi)(b-a)^2\geq-\frac{c^2}{3}(b-a)^2.\] Using this in the above inequality leads to \begin{align*} \tfrac{c_5}{2}\|V(t_0)\|^2+\tfrac{1}{2}\|U(t_0)\|^2&\leq c_5c_3\frac{c^2}{3}\int_{Q^\Lambda} V(\zeta,t)^2\ds{\zeta}\ds{t}\\ &\quad-c_5c_3\int_{Q_0\setminus Q^\Lambda} (\hat{p}_3(v_2(\zeta,t))-\hat{p}_3(v_1(\zeta,t)))V(\zeta,t)\ds{\zeta}\ds{t}\\ &\leq c_5c_3\frac{c^2}{3}\int_{Q_0} V(\zeta,t)^2\ds{\zeta}\ds{t}\\ &\quad+c_5c_3\int_{Q_0\setminus Q^\Lambda} |\hat{p}_3(v_2(\zeta,t))-\hat{p}_3(v_1(\zeta,t))||V(\zeta,t)|\ds{\zeta}\ds{t}\\ &\le c_5c_3\frac{c^2}{3}\int_0^{t_0}\|V(t)\|^2\ds{t}+2c_5c_3\int_{Q_0\setminus Q^\Lambda}|\Sigma(\zeta,t)|^4\ds{(\zeta,t)}\\ &\leq c_3\frac{c^2}{3}\int_0^{t_0}\big(c_5\|V(t)\|^2+\|U(t)\|^2\big)\ds{t}+2c_5c_3\epsilon. \end{align*} Since $\epsilon>0$ was arbitrary, we may infer that \[\tfrac{c_5}{2}\|V(t_0)\|^2+\tfrac{1}{2}\|U(t_0)\|^2\leq \frac{2c_3c^2}{3}\int_0^{t_0}\big(\tfrac{c_5}{2}\|V(t)\|^2+\tfrac12\|U(t)\|^2\big)\ds{t}.\] Hence, by Gronwall's lemma and $U(0)=V(0)=0$, it follows that $U(t_0)=0$ and $V(t_0)=0$. Since $t_0$ was arbitrary, this shows that $v_1 = v_2$ and $u_1 = u_2$ on $[0,\gamma]$. \emph{Step 2: We show that for all $\epsilon\in(0,\gamma)$ and all $t\in[\epsilon,\gamma]$ we have $v(t)\in W^{1,2}(\Omega)$.}\\ Fix $\epsilon\in(0,\gamma)$. First we show that $v\in BUC([\epsilon,\gamma];W^{1,2}(\Omega))$.
Multiplying the first equation in~\eqref{eq:weak_delta} by $\dot{\mu}_j$ and summing up over $j\in\{0,\dots,n\}$ we obtain \begin{align*} \|\dot{v}_n\|^2&=-\tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \mathfrak{a}(v_n,v_n)-\scpr{u_n}{\dot{v}_n}+\scpr{p_3(v_n)}{\dot{v}_n}+\scpr{I_{s,i}}{\dot{v}_n}\\ &\quad -k_0\scpr{\mathcal{B}' v_n-y_{\rm ref}}{\mathcal{B}'\dot{v}_n}_{{\mathbb{R}}^m}\\ &=-\tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \mathfrak{a}(v_n,v_n)-\scpr{u_n}{\dot{v}_n}+\scpr{p_3(v_n)}{\dot{v}_n}+\scpr{I_{s,i}}{\dot{v}_n}\\ &\quad -k_0\scpr{\mathcal{B}' v_n-y_{\rm ref}}{\mathcal{B}'\dot{v}_n-\dot{y}_{\rm ref}}_{{\mathbb{R}}^m} - k_0\scpr{\mathcal{B}' v_n-y_{\rm ref}}{\dot{y}_{\rm ref}}_{{\mathbb{R}}^m} \end{align*} Furthermore, we may derive that \begin{align*} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} v_n^4 &= 4 v_n^3 \dot v_n = -\frac{4}{c_3}\big( p_3(v_n) - c_2 v_n^2 + c_1 v_n\big) \dot v_n,\quad \text{thus}\\ p_3(v_n) \dot v_n &= -\frac{c_3}{4} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} v_n^4 + c_2 v_n^2 \dot v_n - c_1 v_n \dot v_n, \end{align*} and this implies, for any $\delta>0$, \begin{align*} \scpr{p_3(v_n)}{\dot v_n} &\le -\frac{c_3}{4} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \|v_n\|_{L^4}^4 + c_2 \scpr{v_n^2}{\dot v_n} - c_1 \scpr{v_n}{\dot v_n}\\ &\le -\frac{c_3}{4} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \|v_n\|_{L^4}^4 + \frac{c_2}{2} \left( \delta \|v_n\|_{L^4}^4 + \frac{1}{\delta} \|\dot v_n\|^2\right) + \frac{c_1}{2} \left(\delta \|v_n\|^2 + \frac{1}{\delta} \|\dot v_n\|^2\right)\\ &\stackrel{\eqref{eq:uv_bound_delta}}{\le} -\frac{c_3}{4} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \|v_n\|_{L^4}^4 + \frac{c_2}{2} \left( \delta \|v_n\|_{L^4}^4 + \frac{1}{\delta} \|\dot v_n\|^2 \right) + \frac{c_1}{2} \left(\delta C_v + \frac{1}{\delta} \|\dot v_n\|^2\right). 
\end{align*} Moreover, we find that, recalling $\overline{e}_n = \mathcal{B}' v_n-y_{\rm ref}$ \begin{align*} \scpr{u_n}{\dot{v}_n} &\le \frac{\delta}{2} \|u_n\|^2 + \frac{1}{2\delta} \|\dot v_n\|^2 \stackrel{\eqref{eq:uv_bound_delta}}{\le} \frac{\delta C_u}{2} + \frac{1}{2\delta} \|\dot v_n\|^2,\\ \scpr{I_{s,i}}{\dot{v}_n} &\le \frac{\delta}{2} \|I_{s,i}\|_{2,\infty}^2 + \frac{1}{2\delta} \|\dot v_n\|^2,\\ \scpr{\overline{e}_n}{\dot{y}_{\rm ref}}_{{\mathbb{R}}^m} &\le \frac{1}{2} \|\overline{e}_n\|^2_{{\mathbb{R}}^m} + \frac12 \|\dot{y}_{\rm ref}\|_\infty^2. \end{align*} Therefore, choosing $\delta$ large enough, we obtain that there exist constants $Q_1,Q_2>0$ independent of $n$ such that \begin{align*} \|\dot{v}_n\|^2\le&-\frac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \mathfrak{a}(v_n,v_n)-\frac{c_3}{4}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|v_n\|_{L^4}^4-\frac{k_0}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|\overline{e}_n\|_{{\mathbb{R}}^m}^2+\frac{1}{2}\|\dot{v}_n\|^2\\ &+Q_1\|v_n\|^4_{L^4}+Q_2+\frac{k_0}{2}\|\overline{e}_n\|_{{\mathbb{R}}^m}^2, \end{align*} thus, \begin{equation}\label{eq:est-norm-dotvn} \begin{aligned} \|\dot{v}_n\|^2&+\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\left(\mathfrak{a}(v_n,v_n)+\frac{c_3}{2}\|v_n\|_{L^4}^4+k_0\|\overline{e}_n\|_{{\mathbb{R}}^m}^2\right)\\ \le &\ 2Q_1\|v_n\|^4_{L^4}+2Q_2+k_0\|\overline{e}_n\|_{{\mathbb{R}}^m}^2. \end{aligned} \end{equation} As a consequence, we find that for all $t\in[0,\gamma]$ we have \begin{align*} t\|\dot{v}_n(t)\|^2&+\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\left(t\mathfrak{a}(v_n(t),v_n(t))+\frac{c_3t}{2}\|v_n(t)\|_{L^4}^4+k_0t\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2\right)\\ \stackrel{\eqref{eq:est-norm-dotvn}}{\le}&\ \left(2Q_1 t +\frac{c_3}{2}\right)\|v_n(t)\|^4_{L^4}+\mathfrak{a}(v_n(t),v_n(t)) +2Q_2 t+k_0(t+1)\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2. 
\end{align*} Since $t\|\dot{v}_n(t)\|^2\geq0$ and $t\le \gamma$ for all $t\in[0,\gamma]$, it follows that \begin{align*} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}&\left(t\mathfrak{a}(v_n(t),v_n(t))+\frac{c_3t}{2}\|v_n(t)\|_{L^4}^4+k_0t\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2\right)\\ \le&\ \left(2Q_1\gamma+\frac{c_3}{2}\right)\|v_n(t)\|^4_{L^4}+\mathfrak{a}(v_n(t),v_n(t)) +2Q_2\gamma+k_0(\gamma+1)\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2. \end{align*} Integrating this inequality and using \eqref{eq:bound_1}, we find that there exist $P_1,P_2>0$ independent of $n$ such that for $t\in[0,\gamma]$ we have \begin{align*} t\mathfrak{a}(v_n(t),v_n(t))+\frac{c_3t}{2}\|v_n(t)\|_{L^4}^4+k_0t\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2 \le P_1+P_2t. \end{align*} Thus, there exist constants $C_5,C_6>0$ independent of $n$ such that \[\forall\, t\in[0,\gamma]:\ t\mathfrak{a}(v_n(t),v_n(t))\leq C_5\ \wedge\ t\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2\leq C_6.\] Hence, for all $\epsilon\in(0,\gamma)$, it follows from the above estimates together with~\eqref{eq:bound_1} that $v_n\in L^\infty(\epsilon,\gamma;W^{1,2}(\Omega))$ and $\overline{e}_n\in L^\infty(\epsilon,\gamma;{\mathbb{R}}^m)$, so that in addition to~\eqref{eq:convergence_subseq}, from Lemma~\ref{lem:weak_convergence} we further have that there exists a subsequence such that \[v_n\to v\in L^\infty(\epsilon,\gamma;W^{1,2}(\Omega))\mbox{ weak}^\star\] and $\mathcal{B}'v\in L^\infty(\epsilon,\gamma;{\mathbb{R}}^m)$ for all $\epsilon\in(0,\gamma)$, hence $I_{s,e}\in L^2(0,\gamma;{\mathbb{R}}^m)\cap L^\infty(\epsilon,\gamma;{\mathbb{R}}^m)$. By the Sobolev embedding $W^{1,2}(\Omega)\hookrightarrow L^p(\Omega)$ for $2\leq p\leq 6$, we have that $p_3(v)\in L^\infty(\epsilon,\gamma;L^2(\Omega))$.
Moreover, since \eqref{eq:strong_delta} holds, we can rewrite it as \[\dot{v}(t)=(\mathcal{A}-c_1 I) v(t)+I_r(t)+\mathcal{B} I_{s,e}(t),\] where $I_r:=c_2v^2-c_3v^3-u+I_{s,i}\in L^2(0,\gamma;L^2(\Omega))\cap L^\infty(\epsilon,\gamma;L^2(\Omega))$ and Proposition~\ref{prop:hoelder} (recall that $W^{1,2}(\Omega)' = X_{-1/2}$ and hence $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,X_{-1/2})$) with $c=c_1$ implies that $v\in BUC([\epsilon,\gamma];W^{1,2}(\Omega))$. Hence, for all $\epsilon\in(0,\gamma)$, $v(t)\in W^{1,2}(\Omega)$ for $t\in[\epsilon,\gamma]$, so that in particular $v(\gamma)\in W^{1,2}(\Omega)$. \subsection{Solution on $(\gamma,\infty)$} \label{ssec:mono_proof_tgeqgamma} The crucial step in this part of the proof is to show that the error remains uniformly bounded away from the funnel boundary and that $v\in L^\infty(\gamma,\infty;W^{1,2}(\Omega))$. The proof is divided into several steps. \emph{Step 1: We show existence of an approximate solution by means of a time-varying state-space transformation.}\\ Again, let $(\theta_i)_{i\in{\mathbb{N}}_0}$ be the eigenfunctions of $-\mathcal{A}$ and let $\alpha_i$ be the corresponding eigenvalues, with $\alpha_i\geq0$ for all $i\in{\mathbb{N}}_0$. Recall that $(\theta_i)_{i\in{\mathbb{N}}_0}$ form an orthonormal basis of $L^2(\Omega)$ by Proposition~\ref{prop:Aop_n}\,\ref{item:Aop5}). Let $(u_\gamma,v_\gamma):=(u(\gamma),v(\gamma))$, $a_i := \scpr{v_\gamma}{\theta_i}$ and $b_i := \scpr{u_\gamma}{\theta_i}$ for $i\in{\mathbb{N}}_0$ and \[ v_\gamma^n:= \sum_{i=0}^na_{i}\theta_i,\quad u_\gamma^n:= \sum_{i=0}^nb_{i}\theta_i,\quad n\in{\mathbb{N}}. \] Then we have that $v^n_\gamma\to v_\gamma$ strongly in $W^{1,2}(\Omega)$ and $u^n_\gamma\to u_\gamma$ strongly in $L^2(\Omega)$. As stated in Remark~\ref{rem:main}\,\ref{rem:main1}) we have that $\ker\mathcal{B}=\{0\}$ implies $\mathcal{B}' \mathcal{D}(\mathcal{A})={\mathbb{R}}^m$.
As a~consequence, there exist $q_1,\ldots,q_m\in\mathcal{D}(\mathcal{A})$ such that $\mathcal{B}'q_k=e_k$ for $k=1,\dots,m$. By Proposition~\ref{prop:Aop_n}\,\ref{item:Aop3}), we further have $q_k\in C^{0,\nu}(\Omega)$ for some $\nu\in(0,1)$.\\ Note that $U\coloneqq\bigcup_{n\in{\mathbb{N}}} U_n$, where $U_n = \mathrm{span}\{\theta_i\}_{i=0}^n$, satisfies $\overline{U}=W^{1,2}(\Omega)$ with the respective norm. Moreover, $\overline{\mathcal{B}' U}={\mathbb{R}}^m$. Since ${\mathbb{R}}^m$ is complete and finite dimensional and $\mathcal{B}'$ is linear and continuous it follows that $\mathcal{B}' U={\mathbb{R}}^m$. By the surjectivity of $\mathcal{B}'$ we have that for all $k\in\{1,\dots,m\}$ there exist $n_k\in{\mathbb{N}}$ and $q_k\in U_{n_k}$ such that $\mathcal{B}' q_k=e_k$. Thus, there exists $n_0\in{\mathbb{N}}$ with $q_k\in U_{n_0}$ for all $k\in\{1,\dots,m\}$, hence each $q_k$ is a (finite) linear combination of the eigenfunctions $\theta_i$.\\ Define $q\in W^{1,2}(\Omega;{\mathbb{R}}^m)\cap C^{0,\nu} (\Omega;{\mathbb{R}}^m)$ by $q(\zeta)=\big(q_1(\zeta),\ldots,q_m(\zeta)\big)^\top$ and $q\cdot y_{\rm ref}$ by \[ (q\cdot y_{\rm ref}) (t,\zeta) := \sum_{k=1}^mq_k(\zeta)y_{{\rm ref},k}(t),\quad \zeta\in \Omega,\, t\ge 0. \] We may define $q\cdot\dot{y}_{\rm ref}$ analogously. Note that we have $(q\cdot y_{\rm ref})\in BC([0,\infty)\times\Omega)$, because \[ |(q\cdot y_{\rm ref}) (t,\zeta)| \le \sum_{k=1}^m \|q_k\|_\infty\, \|y_{{\rm ref},k}\|_\infty \] for all $\zeta\in \Omega$ and $t\ge 0$, where we write $\|\cdot\|_\infty$ for the supremum norm. We define $q_{k,j}:= \scpr{q_k}{\theta_j}$ for $k=1,\ldots,m$, $j\in{\mathbb{N}}_0$ and $q_k^n:=\sum_{j=0}^nq_{k,j}\theta_j$ for $n\in{\mathbb{N}}_0$. Similarly, $q^n:=(q_1^n,\dots,q_m^n)^\top$ for $n\in{\mathbb{N}}$, so that $q^n\to q$ strongly in $W^{1,2}(\Omega)$.
In fact, since $q_k\in U_{n_0}$ for all $k=1,\ldots,m$, it follows that $q^n = q$ for all $n\ge n_0$.\\ Since $\mathcal{B}':W^{r,2}(\Omega)\to{\mathbb{R}}^m$ is continuous for some $r\in[0,1]$, there exists $\Gamma_r>0$ such that for all $\theta\in W^{r,2}(\Omega)$ we have \[\|\mathcal{B}'\theta\|_{{\mathbb{R}}^m}\leq\Gamma_r\|\theta\|_{W^{r,2}}.\] For $n\in{\mathbb{N}}_0$, let \[\kappa_n\coloneqq \big((n+1) \Gamma_r (1+\|v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma)\|^2_{W^{r,2}})\big)^{-1}.\] Note that for $v_\gamma\in W^{1,2}(\Omega)$ it holds that $\kappa_n>0$ for all $n\in{\mathbb{N}}_0$, that $(\kappa_n)_{n\in{\mathbb{N}}_0}$ is monotonically decreasing and bounded by $\Gamma_r^{-1}$, and that $\kappa_n\to0$ as $n\to\infty$; moreover, by construction, \[\forall\,n\in{\mathbb{N}}_0:\ \kappa_n\|\mathcal{B}' (v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma))\|_{{\mathbb{R}}^m}<1.\] Consider a modification of $\varphi$ induced by $\kappa_n$, namely \[\varphi_n\coloneqq\varphi+\kappa_n,\quad n\in{\mathbb{N}}_0.\] It is clear that for each $n\in{\mathbb{N}}_0$ we have $\varphi_n\in W^{1,\infty}([\gamma,\infty);{\mathbb{R}})$, the estimates $\|\varphi_n\|_\infty\leq\|\varphi\|_\infty+\Gamma_r^{-1}$ and $\|\dot{\varphi}_n\|_\infty=\|\dot{\varphi}\|_\infty$ are independent of $n$, and $\varphi_n\to\varphi\in\Phi_\gamma$ uniformly. Moreover, $\inf_{t>\gamma}\varphi_n(t)>0$.\\ Now, fix $n\in{\mathbb{N}}_0$.
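Before proceeding, we verify the bound claimed above: abbreviating $\theta:= v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma)$, the definition of $\kappa_n$ yields \[\kappa_n\|\mathcal{B}'\theta\|_{{\mathbb{R}}^m}\leq\frac{\Gamma_r\|\theta\|_{W^{r,2}}}{(n+1)\Gamma_r\big(1+\|\theta\|_{W^{r,2}}^2\big)}\leq\frac{1}{2(n+1)}<1,\] where the second inequality uses $s\leq\tfrac12(1+s^2)$ for $s\geq0$.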
For $t\ge \gamma$, define \begin{align*} \phi(e)&:= \frac{k_0}{1-\|e\|_{{\mathbb{R}}^m}^2}e,\quad e\in{\mathbb{R}}^m,\ \|e\|_{{\mathbb{R}}^m}<1,\\ \omega_0(t)&:= \dot{\varphi}_n(t)\varphi_n(t)^{-1},\\ F(t,z)&:= \varphi_n(t)f_{-1}(t)+\varphi_n(t)f_0(t)+f_1(t)z+\varphi_n(t)^{-1}f_2(t)z^2\\ &\quad \ -c_3\varphi_n(t)^{-2}z^3,\quad z\in{\mathbb{R}},\\ f_{-1}(t)&:= I_{s,i}(t)+\sum_{k=1}^my_{{\rm ref},k}(t)\mathcal{A} q_k,\\ f_0(t)&:= -q\cdot(\dot{y}_{\rm ref}(t)+c_1y_{\rm ref}(t))+c_2(q\cdot y_{\rm ref}(t))^2-c_3(q\cdot y_{\rm ref}(t))^3,\\ f_1(t)&:= (q\cdot y_{\rm ref}(t))(2c_2-3c_3(q\cdot y_{\rm ref}(t))),\\ f_2(t)&:= c_2-3c_3(q\cdot y_{\rm ref}(t)),\\ g(t)&:= c_5(q\cdot y_{\rm ref}(t)). \end{align*} We have that $f_{-1}\in L^\infty(\gamma,\infty;L^2(\Omega))$, since \begin{align*} \|f_{-1}\|_{2,\infty} &:= \esssup_{t\ge \gamma} \left(\int_\Omega f_{-1}(\zeta,t)^2 \ds{\zeta} \right)^{1/2}\\ &\le \|I_{s,i}\|_{2,\infty} + \sum_{k=1}^m \|y_{{\rm ref},k}\|_\infty\, \|\mathcal{A} q_k\|_{L^2} < \infty. \end{align*} Furthermore, we have that $f_0\in L^\infty((\gamma,\infty)\times\Omega)$, because \begin{align*} |f_0(\zeta,t)|&\leq\ (\|\dot{y}_{\rm ref}\|_\infty+c_1\|y_{\rm ref}\|_\infty)\sum_{k=1}^m\|q_k\|_\infty+c_2\|y_{\rm ref}\|_\infty^2\left(\sum_{k=1}^m\|q_k\|_\infty\right)^2\\ &\quad +c_3\|y_{\rm ref}\|^3_\infty\left(\sum_{k=1}^m\|q_k\|_\infty\right)^3\text{ for a.a.\ $(\zeta,t)\in \Omega\times [\gamma,\infty)$}, \end{align*} whence \[\|f_0\|_{\infty,\infty}:= \esssup_{t\geq\gamma,\zeta\in\Omega}|f_0(\zeta,t)|<\infty.\] Similarly $\|f_1\|_{\infty,\infty}<\infty$, $\|f_2\|_{\infty,\infty}<\infty$ and $\|g\|_{\infty,\infty}<\infty$.\\ Consider the system of $2(n+1)$ ODEs \begin{equation}\label{eq:appr_ODE} \begin{aligned} \dot{\mu}_j(t)&=-\alpha_j\mu_j(t)-(c_1-\omega_0(t))\mu_j(t)-\nu_j(t)- \scpr{\phi\left(\sum_{i=0}^n \mathcal{B}'\theta_i \mu_i(t)\right)}{\mathcal{B}'\theta_j}_{{\mathbb{R}}^m} \\ &\quad +\scpr{F\left(t,\sum_{i=0}^n\mu_i(t)\theta_i\right)}{\theta_j},\\
\dot{\nu}_j(t)&=-(c_4-\omega_0(t))\nu_j(t)+c_5\mu_j(t)+\varphi_n(t)\scpr{g(t)}{\theta_j} \end{aligned} \end{equation} defined on \[ {\mathbb{D}}:= \setdef{(t,\mu_0,\dots,\mu_n,\nu_0,\dots,\nu_n)\in[\gamma,\infty)\times{\mathbb{R}}^{2(n+1)} }{ \left\|\sum_{i=0}^n\gamma_i\mu_i\right\|_{{\mathbb{R}}^m}<1 }, \] with initial value \[\mu_j(\gamma)=\kappa_n\left(a_j-\sum_{k=1}^mq_{k,j}y_{{\rm ref},k}(\gamma)\right),\quad \nu_j(\gamma)=\kappa_n b_j,\quad j=0,\dots,n.\] Since the functions on the right hand side of~\eqref{eq:appr_ODE} are continuous, the set~${\mathbb{D}}$ is relatively open in $[\gamma,\infty)\times{\mathbb{R}}^{2(n+1)}$, and the initial condition satisfies $(\gamma,\mu_0(\gamma),\dots,\mu_n(\gamma),\nu_0(\gamma),\dots,\nu_n(\gamma))\in{\mathbb{D}}$ by construction, it follows from ODE theory, see e.g.~\cite[\S~10, Thm.~XX]{Walt98}, that there exists a weakly differentiable solution \[(\mu^n,\nu^n)=(\mu_0,\dots,\mu_n,\nu_0,\dots,\nu_n):[\gamma,T_n)\to{\mathbb{R}}^{2(n+1)}\] such that $T_n\in(\gamma,\infty]$ is maximal. Furthermore, the closure of the graph of~$(\mu^n,\nu^n)$ is not a compact subset of ${\mathbb{D}}$.\\ With that, we may define \[z_n(t):=\sum_{i=0}^n\mu_i(t)\theta_i,\quad w_n(t):=\sum_{i=0}^n\nu_i(t)\theta_i,\quad e_n(t):= \sum_{i=0}^n\mathcal{B}'\theta_i\mu_i(t),\quad t\in[\gamma,T_n)\] and note that \[z_\gamma^n:=z_n(\gamma)=\kappa_n(v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma)),\quad w_\gamma^n:=w_n(\gamma)=\kappa_n u_\gamma^n.\] From the orthonormality of the $\theta_i$ we have that \begin{equation}\label{eq:weak} \begin{aligned} \scpr{\dot{z}_n(t)}{\theta_j}&=-\mathfrak{a}(z_n(t),\theta_j)-(c_1-\omega_0(t))\scpr{z_n(t)}{\theta_j} - \scpr{w_n(t)}{\theta_j}\\ &\quad -\scpr{\phi\left(\mathcal{B}' z_n(t)\right)}{\mathcal{B}' \theta_j}_{{\mathbb{R}}^m} +\scpr{F\left(t,z_n(t)\right)}{\theta_j},\\ \scpr{\dot{w}_n(t)}{\theta_j} &= -(c_4-\omega_0(t))\scpr{w_n(t)}{\theta_j}+c_5\scpr{z_n(t)}{\theta_j}+\varphi_n(t)\scpr{g(t)}{\theta_j}.
\end{aligned} \end{equation} Define now \begin{equation}\label{eq:transformation} \begin{aligned} v_n(t)&\coloneqq \varphi_n(t)^{-1}z_n(t)+q^n\cdot y_{\rm ref}(t),\\ u_n(t)&\coloneqq \varphi_n(t)^{-1}w_n(t),\\ \tilde{\mu}_i(t) &\coloneqq \varphi_n(t)^{-1}\mu_i(t) +\sum_{k=1}^m q_{k,i}y_{{\rm ref},k}(t),\\ \tilde{\nu}_i(t) &\coloneqq \varphi_n(t)^{-1}\nu_i(t), \end{aligned} \end{equation} then $v_n(t)=\sum_{i=0}^n\tilde{\mu}_i(t)\theta_i$ and $u_n(t)=\sum_{i=0}^n\tilde{\nu}_i(t)\theta_i$. With this transformation we obtain that $(v_n, u_n)$ satisfies, for all $\theta\in W^{1,2}(\Omega)$, $\chi\in L^2(\Omega)$ and all $t\in[\gamma,T_n)$, \begin{equation*} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{v_n(t)}{\theta}=&-\mathfrak{a}(v_n(t),\theta)+\scpr{p_3(v_n(t)+(q-q^n)\cdot y_{\rm ref}(t))-u_n(t)}{\theta}\\ &+\scpr{I_{s,i}(t)-(q-q^n)\cdot\dot{y}_{\rm ref}(t)+\sum_{k=1}^my_{{\rm ref},k}(t)\mathcal{A}(q_k-q_k^n) }{\theta}\\ &+\scpr{I_{s,e}^n(t)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m},\\ \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{u_n(t)}{\chi}=&\scpr{c_5(v_n(t)+(q-q^n)\cdot y_{\rm ref}(t))-c_4u_n(t)}{\chi},\\ I_{s,e}^n(t)=&-\frac{k_0}{1-\varphi_n(t)^2\|\mathcal{B}' (v_n(t)-q^n\cdot y_{\rm ref}(t))\|^2_{{\mathbb{R}}^m}}(\mathcal{B}'( v_n(t)-q^n\cdot y_{\rm ref}(t))), \end{aligned} \end{equation*} with $(v_n(\gamma),u_n(\gamma))=(v_\gamma^n,u_\gamma^n)$.
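The time-varying transformation~\eqref{eq:transformation} also explains the role of the terms involving $\omega_0$ in~\eqref{eq:appr_ODE}: since $v_n=\varphi_n^{-1}z_n+q^n\cdot y_{\rm ref}$, the product rule gives \[\dot{v}_n=\varphi_n^{-1}\dot{z}_n-\frac{\dot{\varphi}_n}{\varphi_n^{2}}z_n+q^n\cdot\dot{y}_{\rm ref}=\varphi_n^{-1}\left(\dot{z}_n-\omega_0z_n\right)+q^n\cdot\dot{y}_{\rm ref},\] so that the shifts $c_1-\omega_0$ and $c_4-\omega_0$ precisely absorb the time dependence of the factor $\varphi_n^{-1}$.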
Since there exists some $n_0\in{\mathbb{N}}$ with $q^{n}=q$ for all $n\geq n_0$, we have for all $n\geq n_0$, $\theta\in W^{1,2}(\Omega)$ and $\chi\in L^2(\Omega)$ that \begin{equation}\label{eq:weak_uv} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{v_n(t)}{\theta}=&-\mathfrak{a}(v_n(t),\theta)+\scpr{p_3(v_n(t))-u_n(t)}{\theta}\\ &+\scpr{I_{s,i}(t)}{\theta}+\scpr{I_{s,e}^n(t)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m},\\ \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{u_n(t)}{\chi}=&\scpr{c_5v_n(t)-c_4u_n(t)}{\chi},\\ I_{s,e}^n(t)=&-\frac{k_0}{1-\varphi_n(t)^2\|\mathcal{B}' v_n(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}}(\mathcal{B}' v_n(t)-y_{\rm ref}(t)), \end{aligned} \end{equation} \emph{Step 2: We show boundedness of $(z_n,w_n)$ in terms of~$\varphi_n$.}\\ Consider again the Lyapunov function \eqref{eq:Lyapunov} and observe that $\|z_n(t)\|^2 = \sum_{j=0}^n \mu_j(t)^2$ and $\|w_n(t)\|^2 = \sum_{j=0}^n \nu_j(t)^2$. We find that, for all $t\in[\gamma, T_n)$, \begin{align*} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(z_n(t),w_n(t)) &= c_5\sum_{j=0}^n\mu_j(t)\dot{\mu}_j(t)+\sum_{j=0}^n\nu_j(t)\dot{\nu}_j(t)\\ &=-c_5\sum_{j=0}^{n}\alpha_j\mu_j(t)^2- c_5 (c_1-\omega_0(t))\sum_{j=0}^{n}\mu_j(t)^2\\ &\quad-(c_4-\omega_0(t))\sum_{j=0}^{n}\nu_j(t)^2 -c_5\scpr{\phi(e_n(t))}{e_n(t)}_{{\mathbb{R}}^m} \\ &\quad +\varphi_n (t) \scpr{g(t)}{\sum_{i=0}^n\nu_i(t)\theta_i}\\ &\quad+c_5\scpr{F\left(t,\sum_{i=0}^n\mu_i(t)\theta_i\right)}{\sum_{i=0}^n\mu_i(t)\theta_i}, \end{align*} hence, omitting the argument~$t$ for brevity in the following, \begin{equation}\label{eq:Lyapunov_boundary_1} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(z_n,w_n)=&-c_5\mathfrak{a}(z_n,z_n)-c_5(c_1-\omega_0)\|z_n\|^2-(c_4-\omega_0)\|w_n\|^2\\ &-c_5\frac{k_0\|e_n\|_{{\mathbb{R}}^m}^2}{1-\|e_n\|_{{\mathbb{R}}^m}^2}+c_5\scpr{F(t,z_n)}{z_n}+\varphi_n \scpr{g}{w_n}. 
\end{aligned} \end{equation} Next we use some Young and Hölder inequalities to estimate the term \begin{align*} \scpr{F(t,z_n)}{z_n}&=\underbrace{\varphi_n(t)\scpr{f_{-1}(t)}{z_n}}_{I_{-1}} +\underbrace{\varphi_n(t)\scpr{f_0(t)}{z_n}}_{I_0}+\underbrace{\scpr{f_1(t)z_n}{z_n}}_{I_1}\\ &\quad+\underbrace{\varphi_n(t)^{-1}\scpr{f_2(t)z_n^2}{z_n}}_{I_2}-c_3\varphi_n(t)^{-2}\underbrace{\scpr{z_n^3}{z_n}}_{=\|z_n\|_{L^4}^4}. \end{align*} For the first term we derive, using Young's inequality for products with $p=4/3$ and $q=4$, that \begin{align*} I_{-1}&\leq \scpr{\frac{2^{1/2}\varphi_n^{3/2} |I_{s,i}|}{c_3^{1/4}}}{\frac{c_3^{1/4}|z_n|}{2^{1/2}\varphi_n^{1/2}}}+ \sum_{k=1}^m\scpr{\frac{(4m)^{1/4}\varphi_n^{3/2}\|y_{\rm ref}\|_\infty |Aq_k|}{c_3^{1/4}}}{\frac{c_3^{1/4}|z_n|}{(4m)^{1/4}\varphi_n^{1/2}}}\\ &\leq\frac{2^{2/3} 3\varphi_n^2\|I_{s,i}\|_{2,\infty}^{4/3}|\Omega|^{1/3}}{4c_3^{1/3}}+ \sum_{k=1}^m\frac{3(4m)^{1/3}\varphi_n^2\|y_{\rm ref}\|_\infty^{4/3}\|Aq_k\|^{4/3}|\Omega|^{1/3}}{4c_3^{1/3}}+ \frac{c_3\|z_n\|^4_{L^4}}{8\varphi_n^2} \end{align*} and with the same choice we obtain for the second term \[I_0\leq\scpr{\frac{2^{1/4}\varphi_n^{3/2}\|f_0\|_{\infty,\infty}}{c_3^{1/4}}}{\frac{c_3^{1/4}|z_n|}{2^{1/4}\varphi_n^{1/2}}}\leq \frac{2^{1/3}3\varphi_n^2\|f_0\|_{\infty,\infty}^{4/3}|\Omega|}{4c_3^{1/3}}+\frac{c_3\|z_n\|^4_{L^4}}{8\varphi_n^2}.\] Using $p=q=2$ we find that the third term satisfies \[ I_1\leq\scpr{\frac{2\varphi_n\|f_1\|_{\infty,\infty}}{\sqrt{c_3}}}{\frac{\sqrt{c_3}|z_n|^2}{2\varphi_n}}\leq\frac{2\varphi_n^2\|f_1\|_{\infty,\infty}^2|\Omega|}{c_3}+ \frac{c_3\|z_n\|^4_{L^4}}{8\varphi_n^2}, \] and finally, with $p=4$ and $q=4/3$, \begin{align*} I_2&\leq\scpr{\varphi_n^{-1}\|f_2\|_{\infty,\infty}}{|z_n|^3}= \scpr{\frac{3^{3/2}\varphi_n^{1/2}\|f_2\|_{\infty,\infty}}{c_3^{3/4}}}{\left|\frac{c_3^{1/4}z_n}{\varphi_n^{1/2}\sqrt{3}}\right|^3}\\ &\leq\frac{9^3\varphi_n^2\|f_2\|^4_{\infty,\infty}|\Omega|}{4c_3^3}+\frac{c_3}{12\varphi_n^2}\|z_n\|^4_{L^4}. 
\end{align*} Summarizing, we have shown that \[\scpr{F(t,z_n)}{z_n}\leq K_0\varphi_n^2-\frac{13c_3}{24\varphi_n^2}\|z_n\|^4_{L^4}\leq K_0\varphi_n^2-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4},\] where \begin{align*} K_0\coloneqq & \ \frac{2^{2/3} 3\|I_{s,i}\|_{2,\infty}^{4/3}|\Omega|^{1/3}}{4c_3^{1/3}}+ \sum_{k=1}^m\frac{3(4m)^{1/3}\|y_{\rm ref}\|_\infty^{4/3}\|Aq_k\|^{4/3}|\Omega|^{1/3}}{4c_3^{1/3}}\\ &+ \frac{2^{1/3}3\|f_0\|_{\infty,\infty}^{4/3}|\Omega|}{4c_3^{1/3}}+\frac{2\|f_1\|_{\infty,\infty}^2|\Omega|}{c_3}+\frac{9^3\|f_2\|^4_{\infty,\infty}|\Omega|}{4c_3^3}. \end{align*} Finally, using Young's inequality with $p=q=2$, we estimate the last term in~\eqref{eq:Lyapunov_boundary_1} as follows \[\varphi_n \scpr{g}{w_n}\leq\frac{\varphi_n^2\|g\|_{\infty,\infty}^2|\Omega|}{2c_4}+\frac{c_4}{2}\|w_n\|^2.\] We have thus obtained the estimate \begin{equation}\label{eq:Lyapunov_boundary_2} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(z_n,w_n)\leq&-(\sigma-2\omega_0) V(z_n,w_n)\\ &-c_5\mathfrak{a}(z_n,z_n)-c_5\frac{k_0\|e_n\|_{{\mathbb{R}}^m}^2}{1-\|e_n\|_{{\mathbb{R}}^m}^2}-\frac{c_3c_5}{2\varphi_n^{2}}\|z_n\|_{L^4}^4+\varphi_n^2K_1, \end{aligned} \end{equation} where \begin{align*} \sigma\coloneqq 2\min\{c_1,c_4\},\quad K_1\coloneqq c_5K_0+\frac{\|g\|_{\infty,\infty}^2|\Omega|}{2c_4}. \end{align*} In particular, we have the conservative estimate \[ \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(z_n,w_n) \le -(\sigma-2\omega_0) V(z_n,w_n) +\varphi_n^2K_1 \] on $[\gamma,T_n)$, which implies that \[ V(z_n(t),w_n(t)) \le \mathrm{e}^{-K(t,\gamma)} V(z_n(\gamma),w_n(\gamma)) + \int_\gamma^t \mathrm{e}^{-K(t,s)} \varphi_n(s)^2 K_1 \ds{s}, \] where \[ K(t,s) = \int_s^t \sigma -2\omega_0(\tau) \ds{\tau} = \sigma(t-s) - 2\ln \varphi_n(t) + 2 \ln \varphi_n(s),\quad \gamma\le s\le t < T_n. 
\] Therefore, invoking $\varphi_n(\gamma)=\kappa_n$, for all $t\in[\gamma, T_n)$ we have \begin{align*} &c_5\|z_n(t)\|^2+\|w_n(t)\|^2 = 2V(z_n(t),w_n(t))\\ &\leq 2\mathrm{e}^{-\sigma (t-\gamma)}\frac{\varphi_n(t)^2}{\kappa_n^2}V(z_n(\gamma),w_n(\gamma))+\frac{2K_1}{\sigma}\varphi_n(t)^2\\ &= \varphi_n(t)^2\left((c_5\|v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma)\|^2+\|u_\gamma^n\|^2)\mathrm{e}^{-\sigma (t-\gamma)}+2K_1\sigma^{-1}\right)\\ &\le \varphi_n(t)^2\left(c_5\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2+\|u_\gamma\|^2+2K_1\sigma^{-1}\right). \end{align*} Thus there exist $M,N>0$ which are independent of $n$ and $t$ such that \begin{equation}\label{eq:L2_bound} \forall\, t\in[\gamma,T_n):\ \|z_n(t)\|^2\leq M\varphi_n(t)^2\ \ \text{and}\ \ \|w_n(t)\|^2\leq N\varphi_n(t)^2, \end{equation} and, as a consequence, \begin{equation}\label{eq:L2_bound_uv} \forall\, t\in[\gamma,T_n):\ \|v_n(t)-q^n\cdot y_{\rm ref}(t)\|^2\leq M\ \ \text{and}\ \ \|u_n(t)\|^2\leq N. \end{equation} \emph{Step 3: We show $T_n=\infty$ and that $\|e_n\|_{{\mathbb{R}}^m}$ is uniformly bounded away from~1 on~$[\gamma,\infty)$.}\\ \emph{Step 3a: We derive some estimates for $\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2$ and for an integral involving $\|z_n\|^4_{L^4}$.} In a similar way to the derivation of~\eqref{eq:Lyapunov_boundary_2} we obtain the estimate \begin{equation}\label{eq:energy_boundary_z} \begin{aligned} \tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2\leq&-\mathfrak{a}(z_n,z_n)-(c_1-\omega_0)\|z_n\|^2+\|z_n\|\|w_n\|\\ &-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+K_0\varphi_n^2.
\end{aligned} \end{equation} Using \eqref{eq:L2_bound} and $-c_1\|z_n\|^2\le 0$ leads to \begin{align*} \tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2\leq&-\mathfrak{a}(z_n,z_n)-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}\\ &+\|\dot{\varphi}\|_\infty M\varphi_n+(K_0+\sqrt{MN})\varphi_n^2. \end{align*} Hence, \begin{equation}\label{eq:energy_boundary_2} \begin{aligned} \tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2\leq&-\mathfrak{a}(z_n,z_n)-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+K_1\varphi_n+K_2\varphi_n^2 \end{aligned} \end{equation} on $[\gamma,T_n)$, where $K_1\coloneqq M\|\dot{\varphi}\|_\infty$ and $K_2\coloneqq K_0+\sqrt{MN}$. Observe that \[ \frac{c_3}{2}\varphi_n^{-3}\|z_n\|^4_{L^4}\leq-\frac{\varphi_n^{-1}}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+K_3, \] where $K_3\coloneqq K_1+K_2\|\varphi\|_\infty$. 
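For the reader's convenience, we record the elementary inequality underlying the estimates of $I_{-1},\dots,I_2$ above and the weighted splittings used below; this is the standard Young inequality for products, stated here only as a reminder:

```latex
\[
  \forall\, a,b\ge 0,\ p,q\in(1,\infty),\ \tfrac1p+\tfrac1q=1:\quad
  ab \;\le\; \frac{a^p}{p}+\frac{b^q}{q},
\]
% and, by rescaling with p = q = 2, the weighted quadratic variant
\[
  ab \;\le\; \varepsilon a^2 + \frac{1}{4\varepsilon}\,b^2,
  \qquad \varepsilon>0,
\]
% which with \varepsilon = 7/2 (resp. \varepsilon = 3/2) produces the
% factors 1/14 (resp. 1/6) appearing in the estimates of Step 3b and
% of the bound on \|\dot{w}_n\|^2 below.
```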
Therefore, \begin{align*} &\frac{c_3}{2}\int_\gamma^t\mathrm{e}^s\varphi_n(s)^{-3}\|z_n(s)\|^4_{L^4}\ds{s}\\ &\le K_3(\mathrm{e}^t-\mathrm{e}^\gamma)-\frac{1}{2}\int_\gamma^t\mathrm{e}^s\varphi_n(s)^{-1}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n(s)\|^2\ds{s}\\ &= K_3(\mathrm{e}^t-\mathrm{e}^\gamma)-\frac{1}{2}\left(\mathrm{e}^t\varphi_n(t)^{-1}\|z_n(t)\|^2-\frac{\|z_\gamma^n\|^2}{\kappa_n}\mathrm{e}^\gamma\right)\\ &\quad +\frac{1}{2}\int_\gamma^t\mathrm{e}^s\varphi_n(s)^{-2}(\varphi_n(s)-\dot{\varphi}_n(s))\|z_n(s)\|^2\ds{s}\\ &\le \frac{\mathrm{e}^t}{2}(2K_3+(\|\varphi\|_\infty+\Gamma_r^{-1}+\|\dot{\varphi}\|_\infty)M)+\kappa_n\mathrm{e}^\gamma(\|v_\gamma\|^2+\|q\cdot y_{\rm ref}(\gamma)\|^2), \end{align*} and hence there exist $D_0,D_1>0$ independent of $n$ and $t$ such that \begin{equation} \label{eq:expz4_boundary} \forall\, t\in[\gamma,T_n):\ \int_\gamma^t\mathrm{e}^s\varphi_n(s)^{-3}\|z_n(s)\|^4_{L^4}\ds{s}\leq D_1\mathrm{e}^t+\kappa_n D_0. \end{equation} \emph{Step 3b: We derive an estimate for $\|\dot z_n\|^2$.} Multiplying the first equation in~\eqref{eq:weak} by $\dot{\mu}_j$ and summing up over $j\in\{0,\ldots,n\}$ we obtain \begin{align*} \|\dot{z}_n\|^2=&-\frac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\mathfrak{a}(z_n,z_n)-\frac{c_1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+\frac{k_0}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})\\ &+ \scpr{\omega_0z_n+F\left(t,z_n\right)-w_n}{\dot{z}_n}.
\end{align*} We can estimate the last term above by \begin{align*} \scpr{\omega_0z_n}{\dot{z}_n}\leq&\ \frac{7}{2}\|\dot{\varphi}\|_\infty^2\varphi_n^{-2}\|z_n\|^2+\frac{1}{14}\|\dot{z}_n\|^2 \stackrel{\eqref{eq:L2_bound}}{\leq} \frac{7}{2}\|\dot{\varphi}\|_\infty^2M+\frac{1}{14}\|\dot{z}_n\|^2,\\ \scpr{-w_n}{\dot{z}_n}\leq&\ \frac{7}{2}\|w_n\|^2+\frac{1}{14}\|\dot{z}_n\|^2,\\ \scpr{F\left(t,z_n\right)}{\dot{z}_n}\leq&\ \frac{7}{2}\varphi_n^2\left(m\sum_{k=1}^m \|y_{{\rm ref},k}\|_\infty^2 \|\mathcal{A} q_k\|^2+\|I_{s,i}\|^2_{2,\infty}+\|f_0\|_{\infty,\infty}^2|\Omega|\right)\\ &+\frac{7}{2}\|f_1\|^2_{\infty,\infty}\|z_n\|^2+\frac{7}{2}\varphi_n^{-2}\|f_2\|_{\infty,\infty}^2\|z_n\|^4_{L^4}\\ &+\frac{5}{14}\|\dot{z}_n\|^2-\frac{c_3}{4\varphi_n^2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^4_{L^4}. \end{align*} Inserting these inequalities, subtracting $\tfrac12 \|\dot{z}_n\|^2$ and then multiplying by~$2$ gives \begin{equation*}\label{eq:energy_boundary_1} \begin{aligned} \|\dot{z}_n\|^2\leq&-\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\mathfrak{a}(z_n,z_n)-c_1\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+k_0\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})-\frac{c_3}{2\varphi_n^2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^4_{L^4}\\ &+7\varphi_n^2\left(m\sum_{k=1}^m\|y_{{\rm ref},k}\|_\infty^2 \|\mathcal{A} q_k\|^2+\|I_{s,i}\|^2_{2,\infty}\!+\|f_0\|_{\infty,\infty}^2|\Omega|+\|f_1\|^2_{\infty,\infty}M\!+\!N\!\right)\\ &+7\|\dot{\varphi}\|_\infty^2M+7\varphi_n^{-2}\|f_2\|_{\infty,\infty}^2\|z_n\|^4_{L^4}.
\end{aligned} \end{equation*} Now we add and subtract $\frac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2$, thus we obtain \begin{align*} \|\dot{z}_n\|^2\leq&-\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\mathfrak{a}(z_n,z_n)-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+k_0\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\ln(1-\|e_n\|^2_{{\mathbb{R}}^m}) -\frac{c_3}{2\varphi_n^2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^4_{L^4}\\ &+7(\|\varphi\|_\infty+\Gamma_r^{-1})^2\left(m\sum_{k=1}^m\|y_{{\rm ref},k}\|_\infty^2\|\mathcal{A} q_k\|^2+\|I_{s,i}\|^2_{2,\infty}+\|f_0\|_{\infty,\infty}^2|\Omega|\right.\\ &\left.\phantom{\sum_{i=0}^n}\hspace*{-6mm}+\|f_1\|^2_{\infty,\infty}M\right)+7(N(\|\varphi\|_\infty+\Gamma_r^{-1})^2 +\|\dot{\varphi}\|_\infty^2M)+7\varphi_n^{-2}\|f_2\|_{\infty,\infty}^2\|z_n\|^4_{L^4}\\ &+\frac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2. \end{align*} By the product rule we have \[-\frac{c_3}{2\varphi_n^2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^4_{L^4}=-\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\left(\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}\right)- c_3\varphi_n^{-3}\dot{\varphi_n}\|z_n\|^4_{L^4},\] thus we find that \begin{equation}\label{eq:energy_boundary_3} \begin{aligned} \|\dot{z}_n\|^2&+\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\mathfrak{a}(z_n,z_n)-k_0\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})+\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\left(\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}\right)\\ \leq&-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_1+E_2\varphi_n^{-3}\|z_n\|^4_{L^4}+\frac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2, \end{aligned} \end{equation} where \begin{align*} E_1&\coloneqq \ 7(\|\varphi\|_\infty+\Gamma_r^{-1})^2\left(m\sum_{k=1}^m\|y_{{\rm 
ref},k}\|_\infty^2\|Aq_k\|^2+ \|I_{s,i}\|^2_{2,\infty}+\|f_0\|_{\infty,\infty}^2|\Omega|\right.\\ &\quad\ \ \left.\phantom{\sum_{i=0}^n}\hspace*{-6mm}+\|f_1\|^2_{\infty,\infty}M\right) +7\big(N(\|\varphi\|_\infty+\Gamma_r^{-1})^2+\|\dot{\varphi}\|_\infty^2M\big),\\ E_2&\coloneqq 7\|f_2\|_{\infty,\infty}^2(\|\varphi\|_\infty+\Gamma_r^{-1})+c_3\|\dot{\varphi}\|_\infty \end{align*} are independent of $n$ and $t$.\\ \emph{Step 3c: We show uniform boundedness of~$e_n$.} Using~\eqref{eq:energy_boundary_2} in~\eqref{eq:energy_boundary_3} we obtain \begin{align*} \|\dot{z}_n\|^2+\dot\rho_n\leq&-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_1+E_2\varphi_n^{-3}\|z_n\|^4_{L^4}\\ &-\mathfrak{a}(z_n,z_n)-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+K_1\varphi_n+K_2\varphi_n^2\\ =&-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_2\varphi_n^{-3}\|z_n\|^4_{L^4}\\ &-\mathfrak{a}(z_n,z_n)-\frac{k_0}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+\Lambda, \end{align*} where \begin{align*} \rho_n&\coloneqq \mathfrak{a}(z_n,z_n)-k_0\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})+\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4},\\ \Lambda&\coloneqq E_1+K_1(\|\varphi\|_\infty+\Gamma_r^{-1})+K_2(\|\varphi\|_\infty+\Gamma_r^{-1})^2+k_0, \end{align*} and we have used the equality $$\frac{\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}=-1+\frac{1}{1-\|e_n\|^2_{{\mathbb{R}}^m}}.$$ Adding and subtracting $k_0\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})$ leads to \begin{align} \|\dot{z}_n\|^2+\dot\rho_n\leq&-\rho_n-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_2\varphi_n^{-3}\|z_n\|^4_{L^4}\notag\\ &-k_0\left(\frac{1}{1-\|e_n\|^2_{{\mathbb{R}}^m}}+\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})\right)+\Lambda\notag\\ \leq&-\rho_n-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont 
d}t}\|z_n\|^2+E_2\varphi_n^{-3}\|z_n\|^4_{L^4}+\Lambda, \label{eq:L2zdot_boundary} \end{align} where for the last inequality we have used that \[\forall\, p\in(-1,1):\ \frac{1}{1-p^2}\geq\ln\left(\frac{1}{1-p^2}\right) = -\ln(1-p^2).\] We may now use the integrating factor $\mathrm{e}^t$ to obtain \[ \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \left(\mathrm{e}^t\rho_n\right) = \mathrm{e}^t(\rho_n + \dot \rho_n) \leq -\mathrm{e}^t\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_2\mathrm{e}^t\varphi_n^{-3}\|z_n\|^4_{L^4}+\Lambda \mathrm{e}^t \underset{\le 0}{\underbrace{- \mathrm{e}^t \|\dot z_n\|^2}}. \] Integrating and using \eqref{eq:expz4_boundary} yields that for all $t\in[\gamma,T_n)$ we have \begin{align*} \mathrm{e}^t\rho_n(t)-\rho_n(\gamma)\mathrm{e}^\gamma\leq&\ (E_2D_1+\Lambda)\mathrm{e}^t+\kappa_n E_2D_0-\int_\gamma^t\mathrm{e}^s\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n(s)\|^2\ds{s}\\ \leq&\ (E_2D_1+\Lambda)\mathrm{e}^t+\kappa_n E_2D_0+\left(c_1+\frac{1}{2}\right)\|z_\gamma^n\|^2\mathrm{e}^\gamma\\ &+\left(c_1+\frac{1}{2}\right)\int_\gamma^t\mathrm{e}^s\|z_n(s)\|^2\ds{s}\\ \stackrel{\eqref{eq:L2_bound}}{\leq}&\ (E_2D_1+\Lambda)\mathrm{e}^t+\kappa_n E_2D_0+\left(c_1+\frac12\right)\kappa_n^2\mathrm{e}^\gamma\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2\\ &+\left(c_1+\frac{1}{2}\right)(\|\varphi\|_\infty+\Gamma_r^{-1})^2M\mathrm{e}^t.
\end{align*} Thus, there exist $\Xi_1,\Xi_2,\Xi_3>0$ independent of $n$ and $t$, such that \[\rho_n(t)\leq\rho_n(\gamma)\mathrm{e}^{-(t-\gamma)}+\Xi_1+\kappa_n(\Xi_2+\kappa_n\Xi_3)\mathrm{e}^{-(t-\gamma)}.\] Invoking the definition of $\rho_n$ and that $\mathrm{e}^{-(t-\gamma)}\leq1$ for $t\geq\gamma$ we find that \begin{equation}\label{eq:rho} \forall\, t\in[\gamma,T_n):\ \rho_n(t)\leq \rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3, \end{equation} where \begin{align*} \rho_{n}^0\coloneqq &\ \kappa_n^2\mathfrak{a}(v_\gamma^n\!-\!q^n\cdot y_{\rm ref}(\gamma),v_\gamma^n\!-\!q^n\cdot y_{\rm ref}(\gamma))\!-\!k_0\ln(1\!-\!\kappa_n^2\|\mathcal{B}' (v_\gamma^n\!-\!q^n\cdot y_{\rm ref}(\gamma))\|^2_{{\mathbb{R}}^m})\\ &+\frac{c_3}{2}\kappa_n^2\|v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma)\|_{L^4}^4 = \rho_n(\gamma). \end{align*} Note that, by the construction of $\kappa_n$ and the Sobolev Embedding Theorem, the sequence $(\rho_n^0)_{n\in{\mathbb{N}}}$ is bounded with $\rho_n^0\to0$ as $n\to\infty$; in particular, $\rho_n^0$ is bounded independently of $n$.\\ Again using the definition of $\rho_n$ and~\eqref{eq:rho} we find that \begin{align*} k_0 \ln\left(\frac{1}{1-\|e_n\|^2_{{\mathbb{R}}^m}}\right) = \rho_n - \mathfrak{a}(z_n,z_n)- \frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4} \le \rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3, \end{align*} and hence \[\frac{1}{1-\|e_n\|^2_{{\mathbb{R}}^m}}\leq\exp\left(\frac{1}{k_0}\left(\rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3\right)\right)=:\varepsilon(n)^{-1}.\] We may thus conclude that \begin{equation}\label{eq:err_bounded_away} \forall\, t\in[\gamma,T_n):\ \|e_n(t)\|^2_{{\mathbb{R}}^m}\leq1-\varepsilon(n), \end{equation} or, equivalently, \begin{equation}\label{eq:err_bdd} \forall\, t\in[\gamma,T_n):\ \varphi_n(t)^2\|\mathcal{B}'( v_n(t)-q^n\cdot y_{\rm ref}(t))\|^2_{{\mathbb{R}}^m}\leq1-\varepsilon(n).
\end{equation} Moreover, from~\eqref{eq:rho}, the definition of $\rho_n$, $k_0\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})\leq0$ and Assumption~\ref{Ass1} we have that \begin{align*} \delta\|\nabla z_n\|^2+\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4} \leq\rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3. \end{align*} Reversing the change of variables leads to \begin{equation}\label{eq:potential} \begin{aligned} \forall\, t\in[\gamma,T_n):\ \delta\varphi_n(t)^2\|\nabla(v_n(t)-q^n\cdot y_{\rm ref}(t))\|^2&+\frac{c_3}{2}\varphi_n(t)^2\|v_n(t)-q^n\cdot y_{\rm ref}(t)\|_{L^4}^4\\ &\leq\rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3, \end{aligned} \end{equation} which implies that for all $t\in[\gamma,T_n)$ we have $v_n(t)\in W^{1,2}(\Omega)$.\\ \emph{Step 3d: We show that~$T_n=\infty$.} Assuming $T_n<\infty$, it follows from~\eqref{eq:err_bounded_away} that the graph of the solution~$(\mu^n,\nu^n)$ from Step~2 would be a compact subset of~${\mathbb{D}}$, a contradiction. Therefore, we have $T_n=\infty$. \emph{Step 4: We show convergence of the approximate solution, uniqueness and regularity of the solution in $[\gamma,\infty)\times \Omega$.}\\ \emph{Step~4a: We prove some inequalities for later use.} From~\eqref{eq:rho} we have that, on $[\gamma,\infty)$, \[\frac{c_3}{2}\varphi_n^{-2}\|z_n\|_{L^4}^4\leq\rho_{n}^0+\Xi_1+\kappa_n\Xi_2 + \kappa_n^2\Xi_3.\] Using a similar procedure as for the derivation of~\eqref{eq:expz4_boundary} we may obtain the estimate \begin{equation}\label{eq:noexpz4_boundary} \forall\, t\ge \gamma:\ \int_\gamma^t\varphi_n(s)^{-3}\|z_n(s)\|^4_{L^4}\ds{s}\leq\kappa_n d_0+d_1t \end{equation} for $d_0,d_1>0$ independent of $n$ and $t$. Further, we can integrate \eqref{eq:L2zdot_boundary} on the interval $[\gamma,t]$ to obtain, invoking $\rho_n(t)\ge 0$ and~\eqref{eq:noexpz4_boundary}, \[ \int_\gamma^t\|\dot{z}_n(s)\|^2\ds{s}\leq\rho_{n}^0+ \left(c_1+\frac12\right)\kappa_n^2\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2+E_2(\kappa_n d_0+d_1t)+\Lambda t \] for all $t\ge \gamma$.
Hence, there exist $S_0,S_1,S_2>0$ independent of $n$ and $t$ such that \begin{equation}\label{eq:intzdot} \forall\, t\ge \gamma:\ \int_\gamma^t\|\dot{z}_n(s)\|^2\ds{s}\leq\rho_{n}^0+S_0\kappa_n+S_1\kappa_n^2+S_2t. \end{equation} This implies existence of $S_3,S_4>0$ such that \begin{equation}\label{eq:dot_var_v} \forall\, t\ge \gamma:\ \int_\gamma^t\left\|\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\varphi_nv_n)\right\|^2\ds{s}\leq\rho_{n}^0+S_0\kappa_n+S_1\kappa_n^2+S_3t+S_4. \end{equation} In order to improve~\eqref{eq:noexpz4_boundary}, we observe that from~\eqref{eq:energy_boundary_z} it follows that \begin{align*} \tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2\leq&-\mathfrak{a}(z_n,z_n)-(c_1-\omega_0)\|z_n\|^2+\|z_n\|\|w_n\|\\ &-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+K_0\varphi_n^2\\ \leq&\ \omega_0\|z_n\|^2-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+K_2\varphi_n^2 -\mathfrak{a}(z_n,z_n)-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}, \end{align*} which gives \[\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\left(\varphi_n^{-2}\|z_n\|^2\right)\leq 2K_2-c_3\varphi_n^{-4}\|z_n\|^4_{L^4}-2\varphi_n^{-2}\mathfrak{a}(z_n,z_n)-\frac{2k_0\varphi_n^{-2}\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}.\] This implies that for all $t\ge \gamma$ we have \begin{equation}\label{eq:vphi4z4} \begin{aligned} \int_\gamma^t c_3\varphi_n(s)^{-4}\|z_n(s)\|^4_{L^4} &+2\varphi_n(s)^{-2}\mathfrak{a}(z_n(s),z_n(s))+\frac{2k_0\varphi_n(s)^{-2}\|e_n(s)\|^2_{{\mathbb{R}}^m}}{1-\|e_n(s)\|^2_{{\mathbb{R}}^m}}\ds{s}\\ &\leq2K_2t+\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2, \end{aligned} \end{equation} which is bounded independently of $n$.
This shows that for all $t\ge \gamma$ we have \begin{equation}\label{eq:L4} \begin{aligned} c_3\int_\gamma^t\|v_n(s)-q^n\cdot y_{\rm ref}(s)\|^4_{L^4}\ds{s}+\int_\gamma^t2\mathfrak{a}(v_n(s)-q^n\cdot y_{\rm ref}(s),v_n(s)-q^n\cdot y_{\rm ref}(s))\ds{s}\\ +\int_\gamma^t\frac{2k_0\|\mathcal{B}' (v_n(s)-q^n\cdot y_{\rm ref}(s))\|^2_{{\mathbb{R}}^m}}{1-\varphi_n(s)^{2}\|\mathcal{B}' (v_n(s)-q^n\cdot y_{\rm ref}(s))\|^2_{{\mathbb{R}}^m}}\ds{s}\leq2K_2t+\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2. \end{aligned} \end{equation} In order to prove that $\|\dot{w}_n\|^2$ is bounded independently of $n$ and $t$, a last calculation is required. Multiply the second equation in~\eqref{eq:weak} by $\dot{\nu}_j$ and sum over $j$ to obtain \[\|\dot{w}_n\|^2=-(c_4-\omega_0)\scpr{w_n}{\dot{w}_n}+c_5\scpr{z_n}{\dot{w}_n}+\varphi_n \scpr{g}{\dot{w}_n}.\] Using $(\omega_0 - c_4) w_n = (\dot\varphi_n - c_4\varphi_n) \varphi_n^{-1} w_n$ and the inequalities \begin{align*} -(c_4-\omega_0)\scpr{w_n}{\dot{w}_n}&\leq\frac{3}{2} \| \dot\varphi - c_4\varphi\|_\infty^2 \varphi_n^{-2}\|w_n\|^2+\frac{\|\dot{w}_n\|^2}{6}\\ &\leq\frac{3}{2}(\| \dot{\varphi}\|_\infty + c_4(\|\varphi\|_\infty+\Gamma_r^{-1}))^2 N+\frac{\|\dot{w}_n\|^2}{6},\\ c_5\scpr{z_n}{\dot{w}_n}&\leq\frac{3c_5^2}{2}\|z_n\|^2+\frac{1}{6}\|\dot{w}_n\|^2\\ &\leq\frac{3c_5^2M}{2}(\|\varphi\|_\infty+\Gamma_r^{-1})^2+\frac{1}{6}\|\dot{w}_n\|^2,\\ \varphi_n \scpr{g}{\dot{w}_n}& \leq\frac{3}{2}(\|\varphi\|_\infty+\Gamma_r^{-1})^2\|g\|^2_{\infty,\infty}|\Omega|+\frac{1}{6}\|\dot{w}_n\|^2, \end{align*} it follows that for all $t\ge\gamma$ we have \begin{equation}\label{eq:dotw} \begin{aligned} \|\dot{w}_n(t)\|^2\leq&\ 3(\| \dot{\varphi}\|_\infty + c_4(\|\varphi\|_\infty+\Gamma_r^{-1}))^2 N\\&+3c_5^2M(\|\varphi\|_\infty+\Gamma_r^{-1})^2+3(\|\varphi\|_\infty+\Gamma_r^{-1})^2\|g\|^2_{\infty,\infty}|\Omega|,\end{aligned} \end{equation} which is bounded independently of $n$ and $t$.
Multiplying the second equation in~\eqref{eq:weak} by $\varphi_n^{-1}$ and $\theta_i$ and summing up over $i\in\{0,\dots,n\}$ leads to \[\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\varphi_n^{-1}w_n)=-\varphi_n^{-2}\dot{\varphi}_nw_n+\varphi_n^{-1}\dot{w}_n=-c_4\varphi_n^{-1}w_n+c_5\varphi_n^{-1}z_n+g_n,\] where \[g_n\coloneqq \sum_{i=0}^n\scpr{g}{\theta_i}\theta_i.\] Taking the norm of the latter gives \begin{align*} \left\|\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\varphi_n^{-1}w_n)\right\|&\leq c_4\varphi_n^{-1}\|w_n\|+c_5\varphi_n^{-1}\|z_n\|+\|g_n\|\\ &\leq c_4\sqrt{N}+c_5\sqrt{M}+\|g\|_{\infty,\infty}|\Omega|^{1/2}, \end{align*} thus \begin{equation}\label{eq:dot_u} \forall\, t\ge \gamma:\ \|\dot{u}_n(t)\|\leq c_4\sqrt{N}+c_5\sqrt{M}+\|g\|_{\infty,\infty}|\Omega|^{1/2}. \end{equation} \emph{Step 4b: We show that $(v_n,u_n)$ converges weakly.} Let $T>\gamma$ be given. Using a similar argument as in Section~\ref{ssec:mono_proof_tleqgamma}, we have that $v_n\in L^2(\gamma,T;W^{1,2}(\Omega))$ and $\dot{v}_n\in L^2(\gamma,T;W^{1,2}(\Omega)')$, since~\eqref{eq:L4} together with~\eqref{eq:err_bdd} implies that $I_{s,e}^n\in L^2(\gamma,T;{\mathbb{R}}^m)$ and $v_n\in L^2(\gamma,T;W^{1,2}(\Omega))$.\\ Furthermore, analogously to Section~\ref{ssec:mono_proof_tleqgamma}, there exist subsequences such that \begin{align*} u_n\to u&\in W^{1,2}(\gamma,T;L^{2}(\Omega))\mbox{ weakly},\\ v_n\to v&\in L^2(\gamma,T;W^{1,2}(\Omega))\mbox{ weakly},\\ \dot{v}_n\to\dot{v}&\in L^2(\gamma,T;(W^{1,2}(\Omega))')\mbox{ weakly}, \end{align*} so that $u,v\in C([\gamma,T];L^2(\Omega))$. Also $v_n^2\to v^2$ weakly in $L^2((\gamma,T)\times\Omega)$ and $v_n^3\to v^3$ weakly in $L^{4/3}((\gamma,T)\times\Omega)$.\\ We may infer further properties of $u$ and $v$. By \eqref{eq:L2_bound_uv}, \eqref{eq:potential}, \eqref{eq:dot_var_v} \& \eqref{eq:dot_u} we have that $u_n$ and $\dot{u}_n$ lie in a bounded subset of $L^\infty(\gamma,\infty;L^2(\Omega))$ and that $v_n$ lies in a bounded subset of $L^\infty(\gamma,\infty;L^2(\Omega))$.
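The estimate of $\|g_n\|$ above rests on Bessel's inequality: $g_n$ is the $L^2$-orthogonal projection of $g(t,\cdot)$ onto $\mathrm{span}\{\theta_0,\dots,\theta_n\}$, so $\|g_n\|\le\|g(t,\cdot)\|$, and the truncation error decreases as $n$ grows. A minimal numerical sketch (using, purely for illustration, the cosine eigenbasis of the Neumann Laplacian on $(0,1)$ and a hypothetical smooth function $g$; this is not the eigenbasis or data of the paper):

```python
import numpy as np

def inner(f1, f2, xs):
    """Trapezoidal approximation of the L^2(0,1) inner product."""
    prod = f1 * f2
    return float(np.sum(0.5 * (prod[:-1] + prod[1:]) * np.diff(xs)))

def theta(i, xs):
    """Orthonormal Neumann-Laplacian eigenfunctions on (0,1):
    theta_0 = 1, theta_i = sqrt(2) cos(i pi x) for i >= 1."""
    return np.ones_like(xs) if i == 0 else np.sqrt(2.0) * np.cos(i * np.pi * xs)

def project(gx, n, xs):
    """Galerkin truncation g_n = sum_{i=0}^{n} <g, theta_i> theta_i."""
    return sum(inner(gx, theta(i, xs), xs) * theta(i, xs) for i in range(n + 1))

xs = np.linspace(0.0, 1.0, 4001)
gx = np.exp(xs) * (1.0 - xs) ** 2          # hypothetical smooth g
norm = lambda f: np.sqrt(inner(f, f, xs))

errs = [norm(gx - project(gx, n, xs)) for n in (2, 8, 32)]
assert errs[0] > errs[1] > errs[2]                     # truncation error decreases
assert norm(project(gx, 32, xs)) <= norm(gx) + 1e-9    # Bessel: ||g_n|| <= ||g||
```

The same projection structure underlies $z_n$, $w_n$ and $g_n$ throughout the Faedo-Galerkin argument, which is why the bounds above are uniform in $n$.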
Moreover, $\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\varphi_nv_n)\in L^2_{\rm loc}(\gamma,\infty;L^2(\Omega))$. Then, using Lemma~\ref{lem:weak_convergence}, we find a subsequence such that \begin{align*} u_n\to u&\in L^\infty(\gamma,T;L^2(\Omega))\mbox{ weak}^\star,\\ \dot{u}_n\to \dot{u}&\in L^\infty(\gamma,T;L^2(\Omega))\mbox{ weak}^\star,\\ v_n\to v&\in L^\infty(\gamma,T;L^{2}(\Omega))\mbox{ weak}^\star,\\ \varphi_nv_n\to \varphi v&\in L^\infty(\gamma,T;W^{1,2}(\Omega))\mbox{ weak}^\star,\\ \dot{v}_n\to \dot{v}&\in L^2(\gamma,T;W^{1,2}(\Omega)')\mbox{ weakly},\\ \varphi_n\dot{v}_n\to \varphi\dot{v}&\in L^2(\gamma,T;L^2(\Omega))\mbox{ weakly}, \end{align*} since $\varphi_n\to\varphi$ in $BC([\gamma,T];{\mathbb{R}})$. Moreover, by $\inf_{t>\gamma+\delta}\varphi(t)>0$, we also have that $v\in L^\infty(\gamma+\delta,T;W^{1,2}(\Omega))$ and $\dot{v}\in L^2(\gamma+\delta,T;L^2(\Omega))$ for all~$\delta>0$.\\ Further, $\kappa_n,\rho_n^0\to0$ and \[\varepsilon(n)\underset{n\to\infty}{\to}\varepsilon_0\coloneqq \exp\left(-k_0^{-1}\Xi_1\right).\] Thus, by \eqref{eq:L2_bound_uv}, \eqref{eq:err_bdd}, \eqref{eq:potential} \& \eqref{eq:L4} we have $v\in L^4((\gamma,T)\times\Omega)$ and for almost all $t\in[\gamma,T)$ the following estimates hold: \begin{equation}\label{eq:potential_limit} \begin{aligned} &\|v(t)-q\cdot y_{\rm ref}(t)\|\leq \sqrt{M},\\ &\|u(t)\|\leq \sqrt{N},\\ &\varphi(t)^2\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}\leq1-\varepsilon_0,\\ &\delta\varphi(t)^2\|\nabla(v(t)-q\cdot y_{\rm ref}(t))\|^2+\frac{c_3}{2}\varphi(t)^2\|v(t)-q\cdot y_{\rm ref}(t)\|_{L^4}^4\leq\Xi_1,\\ &c_3\int_\gamma^t\|v(s)-q\cdot y_{\rm ref}(s)\|^4_{L^4}\ds{s}\leq2K_2t+\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2.
\end{aligned} \end{equation} Moreover, as in Section~\ref{ssec:mono_proof_tleqgamma}, $v_n\to v$ strongly in $L^2(\gamma,T;L^2(\Omega))$ and $u,v\in C([\gamma,T);L^2(\Omega))$ with $(u(\gamma),v(\gamma))=(u_\gamma,v_\gamma)$.\\ Hence, for $\chi\in L^2(\Omega)$ and $\theta\in W^{1,2}(\Omega)$ we have that $(u_n,v_n)$ satisfy the integrated version of \eqref{eq:weak_uv}, thus we obtain that for $t\in(\gamma,T)$ \begin{align*} \scpr{v(t)}{\theta}=&\ \scpr{v_\gamma}{\theta}+\int_\gamma^t-\mathfrak{a}(v(s),\theta)+\scpr{p_3(v(s))-u(s)+I_{s,i}(s)}{\theta}\ds{s}\\ &+\int_\gamma^t\scpr{I_{s,e}(s)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m}\ds{s},\\ \scpr{u(t)}{\chi}=&\ \scpr{u_\gamma}{\chi}+\int_\gamma^t\scpr{c_5v(s)-c_4u(s)}{\chi}\ds{s},\\ I_{s,e}(t)=&\ -\frac{k_0}{1-\varphi(t)^2\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}}(\mathcal{B}' v(t)-y_{\rm ref}(t)) \end{align*} by bounded convergence \cite[Thm.~II.4.1]{Dies77}. Hence, $(u,v)$ is a solution of~\eqref{eq:FHN_feedback} in $(\gamma,T)$. Moreover,~\eqref{eq:strong_delta} also holds in $W^{1,2}(\Omega)'$ for $t\geq\gamma$, that is, \begin{equation}\label{eq:Xminus} \dot{v}(t)=\mathcal{A} v(t)+p_3(v(t))+\mathcal{B} I_{s,e}(t)-u(t)+I_{s,i}(t). \end{equation} \emph{Step 5: We show uniqueness of the solution on $[0,\infty)$.}\\ Using the same arguments as in Step~1e of Section~\ref{ssec:mono_proof_tleqgamma} together with $v,u\in L^4((\gamma,T)\times\Omega)$, it can be shown that the solution $(v,u)$ of~\eqref{eq:FHN_feedback} is unique on $(\gamma,T)$ for any $T>\gamma$. Combining this with uniqueness on $[0,\gamma]$ we obtain a unique solution on $[0,\infty)$.
\emph{Step 6: We show the regularity properties of the solution.}\\ To this end, note that for all $\delta>0$ we have that \[v\in L^2_{\rm loc}(\gamma,\infty;W^{1,2}(\Omega))\cap L^\infty(\gamma+\delta,\infty;W^{1,2}(\Omega)),\] so that $I_r\coloneqq I_{s,i}+c_2v^2-c_3v^3-u\in L^2_{\rm loc}(\gamma,\infty;L^{2}(\Omega))\cap L^\infty(\gamma+\delta,\infty;L^2(\Omega))$, and the application of Proposition \ref{prop:hoelder} yields that $v\in BC([\gamma,\infty);L^2(\Omega))\cap BUC((\gamma,\infty);W^{1,2}(\Omega))$. By the uniform continuity of $v$ and the completeness of $W^{1,2}(\Omega)$, $v$ has a limit at $t=\gamma$, see for instance \cite[Thm.~II.13.D]{Simm63}. Thus, $v\in L^\infty(\gamma,\infty;W^{1,2}(\Omega))$. From Section \ref{ssec:mono_proof_tleqgamma} and the latter we have that $v\in L^2_{\rm loc}(0,\infty;W^{1,2}(\Omega))\cap L^\infty(\delta,\infty;W^{1,2}(\Omega))$ for all $\delta>0$, so we have \begin{align*} I_{s,e}&\in L^2_{\rm loc}(0,\infty;{\mathbb{R}}^m)\cap L^\infty(\delta,\infty;{\mathbb{R}}^m),\\ v&\in L^2_{\rm loc}(0,\infty;W^{1,2}(\Omega))\cap L^\infty(\delta,\infty;W^{1,2}(\Omega))\\ &\quad \ \ \cap BC([0,\infty);L^2(\Omega)) \cap BUC([\delta,\infty);W^{1,2}(\Omega)), \end{align*} so that $I_r\coloneqq I_{s,i}+c_2v^2-c_3v^3-u\in L^2_{\rm loc}(0,\infty;L^{2}(\Omega))\cap L^\infty(\delta,\infty;L^2(\Omega))$.\\ Recall that by assumption we have $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{r,2}(\Omega)')$ for some $r\in [0,1]$. Applying Proposition \ref{prop:hoelder} we have that for all $\delta>0$ the unique solution of~\eqref{eq:Xminus} satisfies \begin{equation}\label{eq:sol_reg} \begin{aligned} \text{if $r=0$:} &\quad \forall\,\lambda\in(0,1):\ v\in C^{0,\lambda}([\delta,\infty);L^2(\Omega)); \\ \text{if $r\in(0,1)$:} &\quad v\in C^{0,1-r/2}([\delta,\infty);L^2(\Omega));\\ \text{if $r=1$:} &\quad v\in C^{0,1/2}([\delta,\infty);L^2(\Omega)). 
\end{aligned} \end{equation} Since $u,v\in BC([0,\infty);L^2(\Omega))$ and $\dot{u}=c_5v-c_4u$, we also have $\dot{u}\in BC([0,\infty);L^2(\Omega))$. Now, from~\eqref{eq:sol_reg} and $\mathcal{B}' \in\mathcal{L}(W^{r,2}(\Omega),{\mathbb{R}}^m)$ for $r\in[0,1]$ we obtain that \begin{itemize} \item for $r=0$ and $\lambda\in(0,1)$:\ $y= \mathcal{B}' v\in C^{0,\lambda}([\delta,\infty);{\mathbb{R}}^m)$; \item for $r\in(0,1)$:\ $y= \mathcal{B}' v\in C^{0,1-r}([\delta,\infty);{\mathbb{R}}^m)$; \item for $r=1$:\ $y= \mathcal{B}' v\in BUC([\delta,\infty);{\mathbb{R}}^m)$. \end{itemize} Further, from \eqref{eq:potential_limit} we have \[\forall\,t\geq\delta:\ \varphi(t)^{2}\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}\leq1-\varepsilon_0,\] hence $I_{s,e}\in L^\infty(\delta,\infty;{\mathbb{R}}^m)$ and $I_{s,e}$ has the same regularity properties as $y$, since $\varphi\in\Phi_\gamma$ and $y_{\rm ref}\in W^{1,\infty}(0,\infty;{\mathbb{R}}^m)$. Therefore, we have proved statements (i)--(iii) in Theorem~\ref{thm:mono_funnel} as well as~a) and~b). It remains to show~c), for which we additionally require that $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega))$. Then there exist $b_1,\ldots,b_m\in W^{1,2}(\Omega)$ such that $(\mathcal{B}' x)_i=\scpr{x}{b_i}$ for all $i=1,\dots,m$ and $x\in L^2(\Omega)$.
Using the $b_i$ in the weak formulation for $i=1,\dots,m$, we have \[\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{v(t)}{b_i}=-\mathfrak{a}(v(t),b_i)+\scpr{p_3(v(t))-u(t)+I_{s,i}(t)}{b_i}+\scpr{I_{s,e}(t)}{\mathcal{B}' b_i}_{{\mathbb{R}}^m}.\] Since $(\mathcal{B}' v(t))_i=\scpr{v(t)}{b_i}$, this leads to \[\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\mathcal{B}' v(t))_i=-\mathfrak{a}(v(t),b_i)+\scpr{p_3(v(t))-u(t)+I_{s,i}(t)}{b_i}+\scpr{I_{s,e}(t)}{\mathcal{B}' b_i}_{{\mathbb{R}}^m}.\] Taking the absolute value and using the Cauchy-Schwarz inequality yields \begin{align*} \left|\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\mathcal{B}' v(t))_i\right|\leq&\ \|D\|_{L^\infty}\|v(t)\|_{W^{1,2}}\|b_i\|_{W^{1,2}}+\|p_3(v(t))-u(t)+I_{s,i}(t)\|_{L^2}\|b_i\|_{L^2}\\ &+\|I_{s,e}(t)\|_{{\mathbb{R}}^m}\|\mathcal{B}' b_i\|_{{\mathbb{R}}^m}, \end{align*} and therefore \begin{align*} \forall\, i=1,\ldots,m\ \forall\,\delta>0:\ \left\|\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\mathcal{B}' v)_i\right\|_{L^\infty(\delta,\infty;{\mathbb{R}})}<\infty, \end{align*} by which $y = \mathcal{B}'v \in W^{1,\infty}(\delta,\infty;{\mathbb{R}}^m)$ as well as $I_{s,e} \in W^{1,\infty}(\delta,\infty;{\mathbb{R}}^m)$. This completes the proof of the theorem. \ensuremath{\hfill\ensuremath{\square}} \newlength\fheight \newlength\fwidth \setlength\fheight{0.3\linewidth} \setlength\fwidth{0.9\linewidth} \section{A numerical example} \label{sec:numerics} In this section, we illustrate the practical applicability of the funnel controller by means of a numerical example. The setup chosen here is a standard test example for the termination of reentry waves and has been considered similarly e.g.\ in~\cite{BreiKuni17,KuniNagaWagn11}. All simulations are generated on an AMD Ryzen 7 1800X @ 3.68 GHz x 16, 64 GB RAM, MATLAB\textsuperscript{\textregistered} \;Version 9.2.0.538062 (R2017a).
The solutions of the ODE systems are obtained by the MATLAB\textsuperscript{\textregistered}\;routine \texttt{ode23}. The parameters for the FitzHugh-Nagumo model \eqref{eq:FHN_model} used here are as follows: \begin{align*} \Omega&=(0,1)^2,\ \ D=\begin{bmatrix} 0.015 & 0 \\ 0 & 0.015 \end{bmatrix}, \ \ \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix} \approx \begin{pmatrix} 1.614\\ 0.1403 \\ 0.012\\ 0.00015\\ 0.015 \end{pmatrix}. \end{align*} The spatially discrete system of ODEs corresponds to a finite element discretization with piecewise linear finite elements on a uniform $64\times 64$ mesh. For the control action, we assume that $\mathcal{B}\in \mathcal{L}(\mathbb R^4,W^{1,2}(\Omega)')$, where the Neumann control operator is defined by \begin{align*} \mathcal{B}'z &= \begin{pmatrix} \int_{\Gamma_1} z(\xi)\, \mathrm{d}\sigma,\int_{\Gamma_2} z(\xi)\, \mathrm{d}\sigma,\int_{\Gamma_3} z(\xi)\, \mathrm{d}\sigma ,\int_{\Gamma_4} z(\xi) \,\mathrm{d}\sigma\end{pmatrix}^\top, \\ \Gamma_1 &= \{1\}\times [0,1], \ \ \Gamma_2= [0,1]\times \{1\}, \ \ \Gamma_3 = \{0\}\times [0,1], \ \ \Gamma_4=[0,1]\times \{0\}. \end{align*} The purpose of the numerical example is to model a typical defibrillation process as a tracking problem as discussed above. In this context, system \eqref{eq:FHN_model} is initialized with $(v(0),u(0))=(v_0^*,u_0^*)$ and $I_{s,i}=0=I_{s,e}$, where $(v_0^*,u_0^*)$ is an arbitrary snapshot of a reentry wave. The resulting reentry phenomena are shown in Fig.~\ref{fig:reentry_waves} and resemble a dysfunctional heart rhythm which impedes the intracellular stimulation current~$I_{s,i}$. The objective is to design a stimulation current $I_{s,e}$ such that the dynamics return to a natural heart rhythm modeled by a reference trajectory $y_{\text{ref}}$.
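The mechanism of the funnel control law can be illustrated on a toy problem: the gain $k_0/(1-\varphi(t)^2\|e(t)\|^2)$ grows without bound as the weighted error approaches the funnel boundary, which is what keeps the error inside. A minimal sketch on an illustrative unstable scalar plant $\dot y=y+I$ with constant funnel weight (an assumption for this sketch; this is not the discretized FitzHugh-Nagumo system):

```python
def funnel_feedback(e, phi, k0=0.75):
    """Funnel control law I = -k0/(1 - phi^2 e^2) * e (scalar output case)."""
    denom = 1.0 - (phi * e) ** 2
    assert denom > 0.0, "weighted error left the funnel"
    return -k0 / denom * e

# Toy closed loop: unstable scalar plant  dot{y} = y + I,  y_ref = 0,
# with a constant funnel weight phi (illustrative only).
dt, phi, y = 1e-3, 2.0, 0.4   # start inside the funnel: |phi * y| = 0.8 < 1
for _ in range(20000):        # integrate 20 time units with explicit Euler
    y += dt * (y + funnel_feedback(y, phi))

# The gain blows up near the funnel boundary, so the error never leaves it;
# the error settles where plant drift and feedback balance (here at y = 0.25,
# from y = 0.75*y/(1 - 4*y^2)).
assert abs(phi * y) < 1.0
```

Note that the controller does not drive the error to zero; it only guarantees $\varphi|e|<1$, exactly as in statement (iii) of the theorem.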
The trajectory $y_{\text{ref}} = \mathcal{B}' v_{\text{ref}}$ corresponds to a solution $(v_{\text{ref}},u_{\text{ref}})$ of~\eqref{eq:FHN_model} with $(v_{\text{ref}}(0),u_{\text{ref}}(0))=(0,0)$, $I_{s,e}=0$ and \begin{align*} I_{s,i}(t) = 101\cdot w(\xi) (\chi_{[49,51]}(t) + \chi_{[299,301]}(t)), \end{align*} where the excitation domain of the intracellular stimulation current~$I_{s,i}$ is described by \begin{align*} w(\xi) = \begin{cases} 1 , & \text{if } (\xi_1-\frac{1}{2})^2+(\xi_2-\frac{1}{2})^2 \le 0.0225, \\ 0 , & \text{otherwise}. \end{cases} \end{align*} The smoothness of the signal is guaranteed by convolving the original signal with a triangular function. The function $\varphi$ characterizing the performance funnel (see Fig.~\ref{fig:funnel_error}) is chosen as \begin{align*} \varphi(t)= \begin{cases} 0, & t \in [0,0.05] ,\\ \mathrm{tanh}(\frac{t}{100}), & t > 0.05 . \end{cases} \end{align*} \begin{figure}[tb] \begin{subfigure}{.5\linewidth} \begin{center} \includegraphics[scale=0.4]{reentry_1.pdf} \end{center} \end{subfigure} \begin{subfigure}{.5\linewidth} \begin{center} \includegraphics[scale=0.4]{reentry_2.pdf} \end{center} \end{subfigure} \caption{Snapshots of reentry waves for $t=100$ (left) and $t=200$ (right).} \label{fig:reentry_waves} \end{figure} \begin{figure}[tb] \begin{center} \input{funnel_error.tikz} \end{center} \caption{Error dynamics and funnel boundary.} \label{fig:funnel_error} \end{figure} Fig.~\ref{fig:y_funnel} shows the results of the closed-loop system for $(v(0),u(0))=(v_0^*,u_0^*)$ and the control law \begin{align*} I_{s,e}(t)=-\frac{0.75}{1-\varphi(t)^2\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}}(\mathcal{B}'v(t)-y_{\rm ref}(t)), \end{align*} which is visualized in Fig.~\ref{fig:u_funnel}. Let us note that the sudden changes in the feedback law are due to the jump discontinuities of the intracellular stimulation current $I_{s,i}$ used for simulating a regular heart beat.
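The funnel feedback law is simple to evaluate pointwise. The Python sketch below takes $\varphi$ and the gain constant $0.75$ from the formulas above; the function names and the scalar safeguard are our own illustrative assumptions, not part of the paper's code.

```python
import numpy as np

def phi(t):
    """Funnel function from the paper: no constraint on [0, 0.05],
    then tanh(t/100); 1/phi(t) acts as the funnel boundary for ||e(t)||."""
    return 0.0 if t <= 0.05 else np.tanh(t / 100.0)

def funnel_feedback(t, y, y_ref, k0=0.75):
    """Funnel control law I_{s,e}(t) = -k0 / (1 - phi(t)^2 ||e||^2) * e,
    where e = B'v - y_ref. The gain grows without bound as ||e|| approaches
    the funnel boundary, which is what keeps the error inside the funnel."""
    e = np.asarray(y, dtype=float) - np.asarray(y_ref, dtype=float)
    denom = 1.0 - phi(t) ** 2 * float(np.dot(e, e))
    assert denom > 0.0, "error left the performance funnel"
    return -k0 / denom * e

# On [0, 0.05] the constraint is inactive and the feedback reduces to -k0 * e.
print(funnel_feedback(0.04, [1.0, 0.0, 0.0, 0.0], np.zeros(4)))
```

This also explains the observation below that the performance constraints are inactive on $[0,0.05]$: there the denominator equals one and the gain is simply $0.75$.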
\setlength\fheight{0.3\linewidth} \setlength\fwidth{0.4\linewidth} \begin{figure}[tb] \begin{subfigure}{.5\linewidth} \begin{center} \input{y_funnel_1.tikz} \end{center} \end{subfigure}\quad \begin{subfigure}{.5\linewidth} \begin{center} \input{y_funnel_2.tikz} \end{center} \end{subfigure} \begin{subfigure}{.5\linewidth} \begin{center} \input{y_funnel_3.tikz} \end{center} \end{subfigure}\quad \begin{subfigure}{.5\linewidth} \begin{center} \input{y_funnel_4.tikz} \end{center} \end{subfigure} \caption{Reference signals and outputs of the funnel controlled system.} \label{fig:y_funnel} \end{figure} We see from Fig.~\ref{fig:y_funnel} that the controlled system tracks the desired reference signal with the prescribed performance. Also note that the performance constraints are not active on the interval $[0,0.05]$. Fig.~\ref{fig:u_funnel} further shows that the tracking is achieved with a comparatively small control effort. \begin{figure}[tb] \begin{subfigure}{.5\linewidth} \begin{center} \input{u_funnel_1.tikz} \end{center} \end{subfigure}\quad \begin{subfigure}{.5\linewidth} \begin{center} \input{u_funnel_2.tikz} \end{center} \end{subfigure} \begin{subfigure}{.5\linewidth} \begin{center} \input{u_funnel_3.tikz} \end{center} \end{subfigure}\quad \begin{subfigure}{.5\linewidth} \begin{center} \input{u_funnel_4.tikz} \end{center} \end{subfigure} \caption{Funnel control laws.} \label{fig:u_funnel} \end{figure} \begin{appendices} \section{Neumann elliptic operators}\label{sec:neum_lapl} We collect some further facts on Neumann elliptic operators as introduced in Proposition~\ref{prop:Aop}.
\begin{Prop}\label{prop:Aop_n} If Assumption~\ref{Ass1} holds, then the Neumann elliptic operator $\mathcal{A}$ on $\Omega$ associated to $D$ has the following properties: \begin{enumerate}[a)] \item\label{item:Aop3} there exists $\nu\in(0,1)$ such that $\mathcal{D}(\mathcal{A})\subset C^{0,\nu}(\Omega)$; \item\label{item:Aop4} $\mathcal{A}$ has compact resolvent; \item\label{item:Aop5} there exists a real-valued and monotonically increasing sequence $(\alpha_j)_{j\in{\mathbb{N}}_0}$ such that \begin{enumerate}[(i)] \item $\alpha_0=0$, $\alpha_1>0$ and $\lim_{j\to\infty}\alpha_j=\infty$, and \item the spectrum of $\mathcal{A}$ reads $\sigma(\mathcal{A})=\setdef{-\alpha_j}{j\in{\mathbb{N}}_0}$ \end{enumerate} and an orthonormal basis $(\theta_j)_{j\in{\mathbb{N}}_0}$ of $L^2(\Omega)$, such that \begin{equation}\forall\,x\in\mathcal{D}(\mathcal{A}):\ \mathcal{A} x=-\sum_{j=0}^\infty\alpha_j\scpr{x}{\theta_j}\theta_j,\label{eq:spectr}\end{equation} and the domain of $\mathcal{A}$ reads \begin{equation}\mathcal{D}(\mathcal{A})=\setdef{\sum_{j=0}^\infty \lambda_j \theta_j}{(\lambda_j)_{j\in{\mathbb{N}}_0}\text{ with }\sum_{j=1}^\infty \alpha_j^2 |\lambda_j|^2<\infty}.\label{eq:spectrda}\end{equation} \end{enumerate} \end{Prop} \begin{proof} Statement \ref{item:Aop3}) follows from \cite[Prop.~3.6]{Nitt11}.\\ To prove \ref{item:Aop4}), we first use that the ellipticity condition \eqref{eq:ellcond} implies \begin{equation}\delta\|z\|+\|\mathcal{A} z\|\geq \|z\|_{W^{1,2}}.\label{eq:Acoer}\end{equation} Since $\partial\Omega$ is Lipschitz, $\Omega$ has the cone property \cite[p.~66]{Adam75}, and we can apply the Rellich-Kondrachov Theorem~\cite[Thm.~6.3]{Adam75}, which states that $W^{1,2}(\Omega)$ is compactly embedded in $L^2(\Omega)$. Combining this with \eqref{eq:Acoer}, we obtain that $\mathcal{A}$ has compact resolvent.\\ We show~\ref{item:Aop5}). 
Since $\mathcal{A}$ has compact resolvent and is self-adjoint by Proposition~\ref{prop:Aop}, we obtain from \cite[Props.~3.2.9 \&~3.2.12]{TucsWeis09} that there exists a~real-valued sequence $(\alpha_j)_{j\in{\mathbb{N}}_0}$ with $\lim_{j\to\infty}|\alpha_j|=\infty$ and \eqref{eq:spectr}, and the domain of $\mathcal{A}$ has the representation \eqref{eq:spectrda}. Further taking into account that \[\forall\, z\in \mathcal{D}(\mathcal{A}):\ \scpr{z}{\mathcal{A} z}=-\mathfrak{a}(z,z)\leq0,\] we obtain that $\alpha_j\geq0$ for all $j\in{\mathbb{N}}_0$. Consequently, it is no loss of generality to assume that $(\alpha_j)_{j\in{\mathbb{N}}_0}$ is monotonically increasing. It remains to prove that $\alpha_0=0$ and $\alpha_1>0$: On the one hand, we have that the constant function $1_\Omega\in L^2(\Omega)$ satisfies $\mathcal{A} 1_\Omega=0$, since \[\forall\, z\in W^{1,2}(\Omega):\ \scpr{z}{\mathcal{A} 1_\Omega}=-\mathfrak{a}(z,1_\Omega)=-\scpr{\nabla z}{D\nabla 1_\Omega}=0.\] On the other hand, if $z\in\ker \mathcal{A}$, then \[0=\scpr{z}{\mathcal{A} z}=-\mathfrak{a}(z,z)=-\scpr{\nabla z}{D\nabla z},\] and the pointwise positive definiteness of $D$ implies $\nabla z=0$, whence $z$ is a constant function. This gives $\dim \ker \mathcal{A}=1$, by which $\alpha_0=0$ and $\alpha_1>0$. \end{proof} \section{Interpolation spaces} \label{sec:mono_prep_proof} We collect some results on interpolation spaces, which are necessary for the proof of Theorem~\ref{thm:mono_funnel}. For a general treatment of interpolation theory, we refer to \cite{Luna18}. \begin{Def} Let $X,Y$ be Hilbert spaces and let $\alpha\in[0,1]$.
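The spectral facts just proved ($\alpha_0=0$ with constant eigenfunction, $\alpha_1>0$) can be observed numerically. The following Python sketch is our own 1D illustration (piecewise linear finite elements with mass lumping on $(0,1)$, not the paper's 2D discretization): the discrete Neumann Laplacian has eigenvalues approximating $\{0,\pi^2,(2\pi)^2,\dots\}$.

```python
import numpy as np

def neumann_eigs_1d(n):
    """1D Neumann Laplacian on (0,1): P1 finite elements with mass lumping.
    Returns the nonnegative discrete eigenvalues alpha_j, which approximate
    0, pi^2, (2*pi)^2, ...; alpha_0 = 0 belongs to the constant function."""
    h = 1.0 / n
    K = (2.0 * np.eye(n + 1) - np.eye(n + 1, k=1) - np.eye(n + 1, k=-1)) / h
    K[0, 0] = K[-1, -1] = 1.0 / h                # natural (Neumann) boundary rows
    m = np.full(n + 1, h); m[0] = m[-1] = h / 2  # lumped mass diagonal
    s = 1.0 / np.sqrt(m)
    # symmetrized generalized eigenproblem: eig(M^{-1/2} K M^{-1/2})
    return np.sort(np.linalg.eigvalsh(s[:, None] * K * s[None, :]))

alpha = neumann_eigs_1d(200)
print(alpha[:3])   # approximately [0, pi^2, 4*pi^2]
```

The zero eigenvalue is exact up to rounding (the stiffness matrix annihilates constants), mirroring $\dim\ker\mathcal{A}=1$ in the proposition.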
Consider the function \[K:(0,\infty)\times (X+Y)\to{\mathbb{R}},\ (t,x)\mapsto \inf_{\substack{a\in X,\ b\in Y,\\ x=a+b}}\, \|a\|_X+t\|b\|_Y.\] The {\em interpolation space} $(X,Y)_{\alpha}$ is defined by \[(X,Y)_{\alpha}:=\setdef{x\in X+Y}{\Big(t\mapsto t^{-\alpha} K(t,x)\Big)\in L^2\big((0,\infty),\tfrac{\mathrm{d}t}{t}\big)},\] and it is a Hilbert space with the norm \[\|x\|_{(X,Y)_\alpha}=\big\|t\mapsto t^{-\alpha} K(t,x)\big\|_{L^2((0,\infty),\frac{\mathrm{d}t}{t})}.\] \end{Def} Note that interpolation can be performed in a more general fashion for Banach spaces $X$, $Y$. More precisely, we may utilize the $L^p$-norm of the map $t\mapsto t^{-\alpha} K(t,x)$ for some $p\in[1,\infty)$ instead of the $L^2$-norm in the above definition. However, for $p\neq2$ this does not lead to Hilbert spaces $(X,Y)_\alpha$, even when~$X$ and~$Y$ are Hilbert spaces. For a~self-adjoint operator $A:\mathcal{D}(A)\subset X\to X$, $X$ a Hilbert space and $n\in{\mathbb{N}}$, we may define the space $X_n:=\mathcal{D}(A^n)$ by $X_0=X$ and $X_{n+1}:=\setdef{x\in X_n}{Ax\in X_n}$. This is a Hilbert space with norm $\|z\|_{X_{n+1}}=\|(-\lambda I+A)z\|_{X_n}$, where $\lambda\in{\mathbb{C}}$ is in the resolvent set of $A$. Likewise, we introduce $X_{-n}$ as the completion of $X$ with respect to the norm $\|z\|_{X_{-n}}=\|(-\lambda I+A)^{-n}z\|$. Note that $X_{-n}$ is the dual of $X_n$ with respect to the pivot space $X$, cf.~\cite[Sec.~2.10]{TucsWeis09}. Using interpolation theory, we may further introduce the spaces~$X_\alpha$ for any $\alpha\in{\mathbb{R}}$ as follows. \begin{Def}\label{Def:int-space-A} Let $\alpha\in{\mathbb{R}}$, $X$ a Hilbert space and $A:\mathcal{D}(A)\subset X\to X$ be self-adjoint. Further, let $n\in{\mathbb{Z}}$ be such that $\alpha\in[n,n+1)$.
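For intuition, the K-functional has a closed form in the scalar toy case $X=({\mathbb{R}},|\cdot|)$, $Y=({\mathbb{R}},\lambda|\cdot|)$: the infimum is attained by an all-or-nothing split, $K(t,x)=\min(1,t\lambda)|x|$, and the interpolation norm scales like $\lambda^\alpha$, the expected intermediate behaviour. The Python sketch below is our own illustration and assumes the standard K-method convention that the $L^2$-norm is taken with respect to $\mathrm{d}t/t$.

```python
import numpy as np
from scipy.integrate import quad

lam, alpha = 100.0, 0.5   # Y-norm weight and interpolation parameter (illustrative)

def K(t, x):
    """K-functional for X = (R, |.|), Y = (R, lam*|.|): the infimum over
    x = a + b of |a| + t*lam*|b| is attained by an all-or-nothing split."""
    return min(1.0, t * lam) * abs(x)

def interp_norm(x):
    """||x||_{(X,Y)_alpha} = || t -> t^{-alpha} K(t,x) ||_{L^2((0,inf), dt/t)},
    computed by quadrature split at the kink t = 1/lam."""
    f = lambda t: (t ** (-alpha) * K(t, x)) ** 2 / t
    v1, _ = quad(f, 0.0, 1.0 / lam)
    v2, _ = quad(f, 1.0 / lam, np.inf)
    return np.sqrt(v1 + v2)

# Closed form: lam^alpha * sqrt(1/(2-2*alpha) + 1/(2*alpha)) * |x|.
closed = lam ** alpha * np.sqrt(1.0 / (2 - 2 * alpha) + 1.0 / (2 * alpha))
print(interp_norm(1.0), closed)   # the two values agree
```

The $\lambda^\alpha$ scaling is the scalar shadow of the relation $X_{r/2}=W^{r,2}(\Omega)$ established below: interpolating between an unweighted and a weighted norm produces the fractionally weighted norm.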
The space $X_\alpha$ is defined as the interpolation space \[X_\alpha=(X_{n},X_{n+1})_{\alpha-n}.\] \end{Def} The Reiteration Theorem, see~\cite[Cor.~1.24]{Luna18}, together with~\cite[Prop.~3.8]{Luna18} yields that for all $\alpha\in[0,1]$ and $\alpha_1,\alpha_2\in{\mathbb{R}}$ with $\alpha_1\leq\alpha_2$ we have that \begin{equation}\label{eq:Xalpha-reit} (X_{\alpha_1},X_{\alpha_2})_{\alpha}=X_{\alpha_1+\alpha (\alpha_2-\alpha_1)}. \end{equation} Next we characterize interpolation spaces associated with the Neumann elliptic operator $\mathcal{A}$. \begin{Prop}\label{prop:Aop2_n} Let Assumption~\ref{Ass1} hold and $\mathcal{A}$ be the Neumann elliptic operator on $\Omega$ associated to $D$. Further let $X_\alpha$, $\alpha \in{\mathbb{R}}$, be the corresponding interpolation spaces with, in particular, $X=X_0=L^2(\Omega)$. Then \[X_{r/2}=W^{r,2}(\Omega)\;\text{ for all $r\in[0,1]$}.\] \end{Prop} \begin{proof} The equation $X_{1/2}=W^{1,2}(\Omega)$ is an immediate consequence of Kato's Second Representation Theorem~\cite[Sec.~VI.2, Thm.~2.23]{Kato80}. For general $r\in[0,1]$ equation~\eqref{eq:Xalpha-reit} implies \[X_{r/2}=(X_{0},X_{1/2})_{r}.\] Now using that $X_0=L^2(\Omega)$ by definition and, as already stated, $X_{1/2}=W^{1,2}(\Omega)$, it follows from~\cite[Thm.~1.35]{Yagi10} that \[(L^2(\Omega),W^{1,2}(\Omega))_{r}=W^{r,2}(\Omega),\] and thus $X_{r/2}=W^{r,2}(\Omega)$. \end{proof} \begin{Rem} \label{rem:X_alpha} In terms of the spectral decomposition \eqref{eq:spectr}, the space $X_\alpha$ has the representation \begin{equation}X_\alpha=\setdef{\sum_{j=0}^\infty \lambda_j \theta_j}{(\lambda_j)_{j\in{\mathbb{N}}_0}\text{ with }\sum_{j=1}^\infty \alpha_j^{2\alpha} |\lambda_j|^2<\infty}.\label{eq:Xrspec}\end{equation} This follows from a~combination of~\cite[Thm.~4.33]{Luna18} with~\cite[Thm.~4.36]{Luna18}. 
\end{Rem} \section{Abstract Cauchy problems and regularity} \label{sec:mono_prep_proof2} We consider mild solutions of certain abstract Cauchy problems and the concept of admissible control operators. This notion is well-known in infinite-dimensional linear systems theory with unbounded control and observation operators and we refer to~\cite{TucsWeis09} for further details. Let $X$ be a real Hilbert space and recall that a (strongly continuous) semigroup $(\T_t)_{t\geq0}$ on $X$ is an $\mathcal{L}(X,X)$-valued map such that ${\mathbb{T}}_0=I_{X}$, where $I_{X}$ denotes the identity operator, ${\mathbb{T}}_{t+s}={\mathbb{T}}_t {\mathbb{T}}_s$ for all $s,t\geq0$, and $t\mapsto {\mathbb{T}}_t x$ is continuous for every $x\in X$. Semigroups are characterized by their generator~$A$, which is a (not necessarily bounded) operator on~$X$. If $A:\mathcal{D}(A)\subset X\to X$ is self-adjoint with $\scpr{x}{Ax}\leq0$ for all $x\in\mathcal{D}(A)$, then it generates a~contractive, analytic semigroup $(\T_t)_{t\geq0}$ on $X$, cf.~\cite[Thm.~4.2]{ArenElst12}. Furthermore, if additionally there exists $\omega_0>0$ such that $\scpr{x}{Ax}\leq-\omega_0 \|x\|^2$ for all $x\in\mathcal{D}(A)$, then the semigroup $(\T_t)_{t\geq0}$ generated by $A$ satisfies $\|{\mathbb{T}}_t\|\leq \mathrm{e}^{-\omega_0 t}$ for all $t\geq0$; the negative of the supremum of all such $\omega_0$ is the \emph{growth bound} of $(\T_t)_{t\geq0}$. We can further conclude from~\cite[Thm.~6.13\,(b)]{Pazy83} that, for all $\alpha\in{\mathbb{R}}$, $(\T_t)_{t\geq0}$ restricts (resp.\ extends) to an analytic semigroup $(({\mathbb{T}}|_{\alpha})_t)_{t\ge0}$ on $X_\alpha$ with the same growth bound as $(\T_t)_{t\geq0}$. Furthermore, we have $\im {\mathbb{T}}_t\subset X_r$ for all $t>0$ and $r\in{\mathbb{R}}$, see~\cite[Thm.~6.13(a)]{Pazy83}. In the following we present an estimate for the corresponding operator norm.
\begin{Lem}\label{lem:A_alpha} Assume that $A:\mathcal{D}(A)\subset X\to X$, $X$ a Hilbert space, is self-adjoint and there exists $\omega_0>0$ with $\scpr{x}{Ax}\leq-\omega_0 \|x\|^2$ for all $x\in\mathcal{D}(A)$. Then there exist $M,\omega>0$ such that the semigroup $(\T_t)_{t\geq0}$ generated by $A$ satisfies \[ \forall\, \alpha\in[0,2]\ \forall\, t>0:\ \|{\mathbb{T}}_t\|_{\mathcal{L}(X,X_\alpha)}\leq M(1+t^{-\alpha})\mathrm{e}^{-\omega t}.\] Thus, for each $\alpha\in[0,2]$ there exists $K>0$ such that \[\sup_{t\in[0,\infty)}t^\alpha\|{\mathbb{T}}_t\|_{\mathcal{L}(X,X_\alpha)}<K.\] \end{Lem} \begin{proof} Since $A$ with the above properties generates an exponentially stable analytic semigroup $(\T_t)_{t\geq0}$, the cases $\alpha\in[0,1]$ and $\alpha=2$ follow from~\cite[Cor.~3.10.8~\&~Lem.~3.10.9]{Staf05}. The result for $\alpha\in[1,2]$ is a consequence of~\cite[Lem~3.9.8]{Staf05} and interpolation between $X_1$ and $X_2$, cf.\ Appendix~\ref{sec:mono_prep_proof}. \end{proof} Next we consider the abstract Cauchy problem with source term. \begin{Def}\label{Def:Cauchy} Let $X$ be a Hilbert space, $A:\mathcal{D}(A)\subset X\to X$ be self-adjoint with $\scpr{x}{Ax}\leq0$ for all $x\in\mathcal{D}(A)$, $T\in(0,\infty]$, and $\alpha\in[0,1]$. Let $(\T_t)_{t\geq0}$ be the semigroup on $X$ generated by $A$, and let $B\in\mathcal{L}({\mathbb{R}}^m,X_{-\alpha})$. For $x_0\in X$, $p\in[1,\infty]$, $f\in L^p_{\loc}(0,T;X)$ and $u\in L^p_{\loc}(0,T;{\mathbb{R}}^m)$, we call~$x:[0,T)\to X$ a \emph{mild solution} of \begin{equation}\label{eq:abstract_cauchy} \begin{aligned} \dot{x}(t)=Ax(t)+f(t)+Bu(t),\quad x(0)=x_0 \end{aligned} \end{equation} on $[0,T)$, if it satisfies \begin{equation}\label{eq:mild_solution} \forall\, t\in[0,T):\ x(t)={\mathbb{T}}_tx_0+\int_0^t{\mathbb{T}}_{t-s}f(s)\ds{s}+\int_0^t({\mathbb{T}}|_{-\alpha})_{t-s}Bu(s)\ds{s}. 
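In the diagonal model suggested by the spectral representation \eqref{eq:spectr}, the estimate of the preceding lemma reduces to the elementary scalar inequality $\sup_{s>0}s^\alpha \mathrm{e}^{-s}=(\alpha/\mathrm{e})^\alpha$, uniformly in the spectrum. The Python check below uses an invented spectrum for illustration.

```python
import numpy as np

# Diagonal model: A theta_j = -beta_j theta_j with beta_j > 0, and the
# X_alpha-norm weights mode j by beta_j^alpha (spectral characterization of
# the interpolation spaces). Then
#   ||T_t||_{L(X, X_alpha)} = max_j beta_j^alpha * exp(-beta_j * t),
# so  t^alpha * ||T_t|| = max_j (beta_j * t)^alpha * exp(-beta_j * t)
#                      <= sup_{s>0} s^alpha e^{-s} = (alpha/e)^alpha,
# which is the kind of uniform bound sup_t t^alpha ||T_t|| < K asserted above.
beta = np.pi ** 2 * np.arange(1, 2000, dtype=float) ** 2   # illustrative spectrum
alpha = 0.5
ts = np.logspace(-6, 2, 400)
vals = np.array([t ** alpha * np.max(beta ** alpha * np.exp(-beta * t)) for t in ts])
bound = (alpha / np.e) ** alpha
print(vals.max() <= bound + 1e-12)   # → True
```

The bound is independent of the eigenvalues, which is why the constant $K$ in the lemma depends only on $\alpha$ and the semigroup constants.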
\end{equation} We further call $x:[0,T)\to X$ a \emph{strong solution} of \eqref{eq:abstract_cauchy} on $[0,T)$, if $x$ in~\eqref{eq:mild_solution} satisfies $x\in C([0,T);X)\cap W^{1,p}_{\rm loc}(0,T;X_{-1})$. \end{Def} Definition~\ref{Def:Cauchy} requires that the integral $\int_0^t({\mathbb{T}}|_{-\alpha})_{t-s}Bu(s)\ds{s}$ is in $X$, whilst the integrand is not necessarily in $X$. This motivates the definition of admissibility, which is now introduced for self-adjoint $A$. Note that admissibility can also be defined for arbitrary generators of semigroups, see~\cite{TucsWeis09}. \begin{Def}\label{Def:Adm} Let $X$ be a Hilbert space, $A:\mathcal{D}(A)\subset X\to X$ be self-adjoint with $\scpr{x}{Ax}\leq0$ for all $x\in\mathcal{D}(A)$, $T\in(0,\infty]$, $\alpha\in[0,1]$ and $p\in[1,\infty]$. Let $(\T_t)_{t\geq0}$ be the semigroup on $X$ generated by $A$, and let $B\in\mathcal{L}({\mathbb{R}}^m,X_{-\alpha})$. Then $B$ is called an {\em $L^p$-admissible (control operator) for $(\T_t)_{t\geq0}$}, if for some (and hence any) $t> 0$ we have $$\forall\, u\in L^p(0,t;{\mathbb{R}}^m):\ \Phi_{t}u :=\int_0^t({\mathbb{T}}|_{-\alpha})_{t-s}Bu(s)\ds{s} \in X.$$ By a closed graph theorem argument this implies that $\Phi_t\in \mathcal{L}(L^p(0,t;{\mathbb{R}}^m),X)$ for all $t> 0$. We call $B$ an {\em infinite-time $L^p$-admissible (control operator) for $(\T_t)_{t\geq0}$}, if \[ \sup_{t>0} \|\Phi_t\| < \infty. \] \end{Def} In the following we show that for $p\ge 2$ and $\alpha\le 1/2$ any~$B$ is admissible and the mild solution of the abstract Cauchy problem is indeed a strong solution. \begin{Lem}\label{lem:abstract_solution} Let $X$ be a Hilbert space, $A:\mathcal{D}(A)\subset X\to X$ be self-adjoint with $\scpr{x}{Ax}\leq0$ for all $x\in\mathcal{D}(A)$, $B\in\mathcal{L}({\mathbb{R}}^m,X_{-\alpha})$ for some $\alpha\in[0,1/2]$, and $(\T_t)_{t\geq0}$ be the analytic semigroup generated by $A$. 
Then for all $p\in[2,\infty]$ we have that~$B$ is $L^p$-admissible for $(\T_t)_{t\geq0}$. Furthermore, for all $x_0\in X$, $T\in(0,\infty]$, $f\in L^p_{\loc}(0,T;X)$ and $u\in L^p_{\loc}(0,T;{\mathbb{R}}^m)$, the function~$x$ in~\eqref{eq:mild_solution} is a strong solution of~\eqref{eq:abstract_cauchy} on $[0,T)$. \end{Lem} \begin{proof} For the case $p=2$, there exists a unique strong solution in $X_{-1}$ (that is, we replace $X$ by $X_{-1}$ and $X_{-1}$ by $X_{-2}$ in the definition) given by~\eqref{eq:mild_solution} and at most one strong solution in $X$, see for instance~\cite[Thm.~3.8.2~(i)~\&~(ii)]{Staf05}, so we only need to check that all the elements are in the correct spaces. Since $A$ is self-adjoint, the semigroup generated by $A$ is self-adjoint as well. Further, by combining~\cite[Prop.~5.1.3]{TucsWeis09} with~\cite[Thm.~4.4.3]{TucsWeis09}, we find that~$B$ is an $L^2$-admissible control operator for~$(\T_t)_{t\geq0}$. Moreover, by~\cite[Prop.~4.2.5]{TucsWeis09} we have that \[\left(t\mapsto{\mathbb{T}}_t x_0+\int_0^t({\mathbb{T}}|_{-\alpha})_{t-s}Bu(s)\ds{s}\right)\in C([0,T);X)\cap W^{1,2}_{\rm loc}(0,T;X_{-1})\] and from \cite[Thm.~3.8.2~(iv)]{Staf05}, \[\left(t\mapsto\int_0^t{\mathbb{T}}_{t-s}f(s)\ds{s}\right)\in C([0,T);X)\cap W^{1,2}_{\rm loc}(0,T;X_{-1}),\] whence $x\in C([0,T);X)\cap W^{1,2}_{\rm loc}(0,T;X_{-1})$, which proves that~$x$ is a strong solution of~\eqref{eq:abstract_cauchy} on $[0,T)$. Since $B$ is $L^2$-admissible, it follows from the nesting property of $L^p$ on finite intervals that~$B$ is an $L^p$-admissible control operator for~$(\T_t)_{t\geq0}$ for all $p\in[2,\infty]$. Furthermore, for $p>2$, set $\tilde{f}\coloneqq f+Bu$ and apply~\cite[Thm.~3.10.10]{Staf05} with $\tilde{f}\in L^\infty_{\rm loc}(0,T;X_{-\alpha})$ to conclude that~$x$ is a strong solution. 
\end{proof} Next we show the regularity properties of the solution of~\eqref{eq:abstract_cauchy}, if $A = \mathcal{A}$ and $B = \mathcal{B}$ are as in the model~\eqref{eq:FHN_model}. Note that this result also holds when considering some $t_0\ge 0$, $T\in(t_0,\infty]$, and the initial condition $x(t_0)=x_0$ (instead of $x(0)=x_0$) by some straightforward modifications, cf.~\cite[Sec.~3.8]{Staf05}. \begin{Prop}\label{prop:hoelder} Let Assumption~\ref{Ass1} hold, $\mathcal{A}$ be the Neumann elliptic operator on $\Omega$ associated to $D$, $T\in(0,\infty]$ and $c>0$. Further let $X=X_0=L^2(\Omega)$ and $X_r$, $r\in{\mathbb{R}}$, be the interpolation spaces corresponding to~$\mathcal{A}$ according to Definition~\ref{Def:int-space-A}. Define $\mathcal{A}_0\coloneqq \mathcal{A}-cI$ with $\mathcal{D}(\mathcal{A}_0)=\mathcal{D}(\mathcal{A})$ and consider $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,X_{-\alpha})$ for $\alpha\in[0,1/2]$, $u\in L_{\rm loc}^2(0,T;{\mathbb{R}}^m)\cap L^\infty(\delta,T;{\mathbb{R}}^m)$ and $f\in L_{\rm loc}^2(0,T;X)\cap L^\infty(\delta,T;X)$ for all $\delta>0$. Then for all $x_0\in X$ and all $\delta>0$ the mild solution of~\eqref{eq:abstract_cauchy} (with $A=\mathcal{A}_0$ and $B=\mathcal{B}$) on~$[0,T)$, given by~$x$ as in~\eqref{eq:mild_solution}, satisfies \begin{enumerate}[(i)] \item if $\alpha=0$, then \[\forall\, \lambda\in(0,1):\ x\in BC([0,T);X)\cap C^{0,\lambda}([\delta,T);X);\] \item if $\alpha\in(0,1/2)$, then \[x\in BC([0,T);X)\cap C^{0,1-\alpha}([\delta,T);X)\cap C^{0,1-2\alpha}([\delta,T);X_{\alpha});\] \item if $\alpha=1/2$, then \[x\in BC([0,T);X)\cap C^{0,1/2}([\delta,T);X)\cap BUC([\delta,T);X_{1/2}).\] \end{enumerate} \end{Prop} \begin{proof} First observe that by Proposition~\ref{prop:Aop} the assumptions of Lemma~\ref{lem:abstract_solution} are satisfied with $p=2$, hence~$x$ as in~\eqref{eq:mild_solution} is a strong solution of~\eqref{eq:abstract_cauchy} on $[0,T)$ in the sense of Definition~\ref{Def:Cauchy}.
In the following we restrict ourselves to the case $T=\infty$, and the assertions for $T<\infty$ follow from these arguments by considering the restrictions to $[0,T)$. Define, for $t\ge 0$, the functions \begin{align} x_h(t)\coloneqq {\mathbb{T}}_tx_0,\quad x_f(t)\coloneqq \int_0^t{\mathbb{T}}_{t-s}f(s)\ds{s},\quad x_u(t)\coloneqq \int_0^t({\mathbb{T}}|_{-\alpha})_{t-s}\mathcal{B} u(s)\ds{s},\label{eq:xhxfxu} \end{align} so that $x=x_h+x_f+x_u$. \emph{Step 1}: We show that $x\in BC([0,\infty);X)$. The definition of $\mathcal{A}$ in Proposition~\ref{prop:Aop} implies that for all $z\in \mathcal{D}(\mathcal{A})$ we have $\langle z,\mathcal{A} z\rangle\leq -c\|z\|^2$. The self-adjointness of $\mathcal{A}$ moreover implies that $\mathcal{A}_0$ is self-adjoint, whence~\cite[Thm.~4.2]{ArenElst12} gives that $\mathcal{A}_0$ generates an analytic, contractive semigroup $(\T_t)_{t\geq0}$ on $X$, which satisfies \begin{equation}\label{eq:est-sg-exp} \forall\, t\ge 0\ \forall\, x\in X:\ \|{\mathbb{T}}_t x\|\leq \mathrm{e}^{-ct}\|x\|. \end{equation} Since, by Lemma~\ref{lem:abstract_solution}, $x$ is a strong solution, we have $x\in C([0,\infty);X)\cap W^{1,2}_{\rm loc}(0,\infty;X_{-1})$. Further observe that $\mathcal{B}$ is $L^\infty$-admissible by Lemma~\ref{lem:abstract_solution}. Then it follows from~\eqref{eq:est-sg-exp} and~\cite[Lem.~2.9\,(i)]{JacoNabi18} that~$\mathcal{B}$ is infinite-time $L^\infty$-admissible, which implies that for $x_u$ as in~\eqref{eq:xhxfxu} we have \[ \|x_u\|_\infty \le \left( \sup_{t>0} \|\Phi_t\|\right) \|u\|_\infty < \infty, \] thus $x_u\in BC([0,\infty);X)$. A direct calculation using~\eqref{eq:est-sg-exp} further shows that $x_h,x_f\in BC([0,\infty);X)$, whence $x\in BC([0,\infty);X)$. \emph{Step 2}: We show~(i). 
Let $\delta>0$ and set $\tilde{f}:=f+Bu\in L^2_{\rm loc}(0,\infty;X)\cap L^\infty(\delta,\infty;X)$. Then we may infer from~\cite[Props.~4.2.3~\&~4.4.1\,(i)]{Luna95} that \[\forall\,\lambda\in(0,1):\ x\in C^{0,\lambda}([\delta,\infty);X).\] From this together with Step~1 we may infer~(i). \emph{Step 3}: We show~(ii). Let $\delta>0$; then it follows from~\cite[Props.~4.2.3~\&~4.4.1\,(i)]{Luna95} together with $x_0\in X$ and $f\in L^\infty(\delta,\infty;X)$, that \[\begin{aligned} x_h+x_f&\in C^{0,1-\alpha}([\delta,\infty);X_{\alpha})\cap C^{1}([\delta,\infty);X)\\ &\subset C^{0,1-2\alpha}([\delta,\infty);X_{\alpha})\cap C^{0,1-\alpha}([\delta,\infty);X).\end{aligned}\] Since we have shown in Step~1 that $x\in BC([0,\infty);X)$, it remains to show that $x_u\in C^{0,1-2\alpha}([\delta,\infty);X_{\alpha})\cap C^{0,1-\alpha}([\delta,\infty);X)$.\\ To this end, consider the space $Y:=X_{-\alpha}$. Then $(\T_t)_{t\geq0}$ extends to a~semigroup~$\big(({\mathbb{T}}|_{-\alpha})_{t}\big)_{t\ge 0}$ on $Y$ with generator $\mathcal{A}_{0,\alpha}:\mathcal{D}(\mathcal{A}_{0,\alpha})=X_{-\alpha+1}\subset X_{-\alpha}=Y$, cf.~\cite[p.~50]{Luna95}. Now, for $r\in{\mathbb{R}}$, consider the interpolation spaces $Y_r$ as in Definition~\ref{Def:int-space-A} by means of the operator $\mathcal{A}_{0,\alpha}$. Then it is straightforward to show that $Y_n = \mathcal{D}(\mathcal{A}_{0,\alpha}^n) = X_{n-\alpha}$ for all $n\in{\mathbb{N}}$ using the representation~\eqref{eq:Xrspec}. Similarly, we may show that $Y_n = X_{n-\alpha}$ for all $n\in{\mathbb{Z}}$.
Then the Reiteration Theorem, see~\cite[Cor.~1.24]{Luna18} and also \eqref{eq:Xalpha-reit}, gives \[\forall\, r\in{\mathbb{R}}\,:\quad Y_r=X_{r-\alpha}.\] Since $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,Y)$, \cite[Props.~4.2.3~\&~4.4.1\,(i)]{Luna95} now imply \[\begin{aligned} x_u &\in C^{0,1-2\alpha}([\delta,\infty);Y_{2\alpha})\cap C^{0,1-\alpha}([\delta,\infty);Y_{\alpha})\\ &= C^{0,1-2\alpha}([\delta,\infty);X_{\alpha})\cap C^{0,1-\alpha}([\delta,\infty);X), \end{aligned}\] which completes the proof of (ii). \emph{Step 4}: We show~(iii). The proof of $x\in C^{0,1/2}([\delta,\infty);X)$ is analogous to that of $x\in C^{0,1-\alpha}([\delta,\infty);X)$ in Step~3. Boundedness and continuity of $x$ on $[0,\infty)$ were proved in Step~1. Hence, it remains to show that $x$ is uniformly continuous: Again consider the additive decomposition of $x$ into $x_h$, $x_f$ and $x_u$ as in~\eqref{eq:xhxfxu}. Similarly to Step~3 it can be shown that $x_h,x_f\in C^{0,1/2}([\delta,\infty);X_{1/2})$, whence $x_h,x_f\in BUC([\delta,\infty);X_{1/2})$. It remains to show that $x_u\in BUC([\delta,\infty);X_{1/2})$. Note that Lemma \ref{lem:abstract_solution} gives that $x(\delta)\in X$. Then $x_u$ solves $\dot z(t) = \mathcal{A}_0 z(t) + \mathcal{B} u(t)$ with $z(\delta) = x_u(\delta)$ and hence, for all $t\geq\delta$ we have \begin{equation}\label{eq:x-T-delta} \begin{aligned} x_u(t)=&\,{\mathbb{T}}_{t-\delta}x_u(\delta)+\underbrace{\int_\delta^t({\mathbb{T}}|_{-\alpha})_{t-s}\mathcal{B} u(s)\ds{s}}_{=:x_u^\delta(t)}. \end{aligned} \end{equation} Since $x_u(\delta)\in X$ by Lemma~\ref{lem:abstract_solution}, it remains to show that $x_u^\delta\in BUC([\delta,\infty);X_{1/2})$. We obtain from Proposition~\ref{prop:Aop_n}\,\ref{item:Aop5}) that $\mathcal{A}_0$ has an eigendecomposition of type~\eqref{eq:spectr} with eigenvalues $(-\beta_j)_{j\in{\mathbb{N}}_0}$, $\beta_j\coloneqq \alpha_j+c$, and eigenfunctions $(\theta_j)_{j\in{\mathbb{N}}_0}$.
Moreover, there exist $b_i\in X_{-1/2}$ for $i=1,\dots,m$ such that $\mathcal{B} \xi = \sum_{i=1}^m b_i \cdot \xi_i$ for all $\xi\in{\mathbb{R}}^m$. Therefore, \begin{align*} x^\delta_u(t)&=\int_\delta^t\sum_{j=0}^\infty \mathrm{e}^{-\beta_j(t-\tau)}\theta_j \sum_{i=1}^m\scpr{b_i \cdot u_i(\tau)}{\theta_j}\ds{\tau}\\ &=\int_\delta^t\sum_{j=0}^\infty \mathrm{e}^{-\beta_j(t-\tau)}\theta_j \sum_{i=1}^m u_i(\tau) \scpr{b_i}{\theta_j}\ds{\tau}, \end{align*} where the last equality holds since $u_i(\tau)\in{\mathbb{R}}$ and can be treated as a constant in~$X$. By considering each of the summands over $i=1,\dots,m$ separately, we can assume without loss of generality that $m=1$ and set $b\coloneqq b_1$, so that \[x_u^\delta(t)=\int_\delta^t\sum_{j=0}^\infty \mathrm{e}^{-\beta_j(t-\tau)} u(\tau) \scpr{b}{\theta_j} \theta_j \ds{\tau}.\] Define $b^j \coloneqq \scpr{b}{\theta_j}$ for $j\in{\mathbb{N}}_0$. Since $b\in X_{-1/2}$, we have \begin{equation}S\coloneqq \sum_{j=0}^\infty\frac{(b^j)^2}{\beta_j}<\infty.\label{eq:Sdef}\end{equation} Recall that the spaces $X_\alpha$, $\alpha\in{\mathbb{R}}$, are defined by using $\lambda\in{\mathbb{C}}$ belonging to the resolvent set of $\mathcal{A}$, and they are independent of the choice of $\lambda$. Since $c>0$ in the statement of the proposition is in the resolvent set of $\mathcal{A}$, the spaces $X_\alpha$ coincide for $\mathcal{A}$ and $\mathcal{A}_0=\mathcal{A}-cI$.\\ Using the diagonal representation from Remark~\ref{rem:X_alpha} and \cite[Prop.~3.4.8]{TucsWeis09}, we may infer that $x_u^\delta(t)\in X_{1/2}$ for a.e.
$t\geq\delta$, namely, \begin{align*} \|x_u^\delta(t)\|_{X_{1/2}}^2&\leq\sum_{j=0}^\infty\beta_j(b^j)^2\|u\|^2_{L^\infty(\delta,\infty)}\left(\int_\delta^t\mathrm{e}^{-\beta_j(t-s)}\ds{s}\right)^2\\ &=\|u\|^2_{L^\infty(\delta,\infty)}\sum_{j=0}^\infty\frac{(b^j)^2}{\beta_j}\left(1-\mathrm{e}^{-\beta_j(t-\delta)}\right)^2\\ &\leq\|u\|^2_{L^\infty(\delta,\infty)}\sum_{j=0}^\infty\frac{(b^j)^2}{\beta_j}<\infty. \end{align*} Hence, \begin{equation}\label{eq:xudelta_X12} \|x_u^\delta(t)\|_{X_{1/2}}\leq \|u\|_{L^\infty(\delta,\infty)}\sqrt{S}. \end{equation} Now let $t>s>\delta$ and $\sigma>0$ such that $t-s<\sigma$. By dominated convergence \cite[Thm.~II.2.3]{Dies77}, summation and integration can be interchanged, so that \begin{align*} \|x_u^\delta&(t)-x_u^\delta(s)\|_{X_{1/2}}^2\\ \leq&\,\|u\|_{L^\infty(\delta,\infty)}^2\sum_{j=0}^\infty \beta_j (b^j)^2 \left(\int_\delta^s\left(\mathrm{e}^{-\beta_j(s-\tau)}-\mathrm{e}^{-\beta_j(t-\tau)}\right)\ds{\tau}+\int_s^t\mathrm{e}^{-\beta_j(t-\tau)}\ds{\tau}\right)^2\\ \leq&\,4\|u\|_{L^\infty(\delta,\infty)}^2\sum_{j=0}^\infty \frac{(b^j)^2}{\beta_j} \left(1-\mathrm{e}^{-\beta_j (t-s)}\right)^2\\ \le&\, 4\|u\|_{L^\infty(\delta,\infty)}^2\sum_{j=0}^\infty \frac{(b^j)^2}{\beta_j} \left(1-\mathrm{e}^{-\beta_j \sigma}\right)^2. \end{align*} We can conclude from \eqref{eq:Sdef} that the series defining the function $F:(0,\infty)\to(0,S)$, \[F(\sigma)\coloneqq \sum_{j=0}^\infty\frac{(b^j)^2}{\beta_j}(1-\mathrm{e}^{-\beta_j\sigma})^2,\] converges uniformly, so that $F$ is strictly increasing, continuous and surjective; in particular, $F(\sigma)\to0$ as $\sigma\to0$. The above estimate thus shows that $x_u^\delta$ is uniformly continuous on~$[\delta,\infty)$, and~\eqref{eq:xudelta_X12} provides boundedness, i.e., $x_u^\delta\in BUC([\delta,\infty);X_{1/2})$. \end{proof} Finally we present a consequence of the Banach-Alaoglu Theorem, see e.g.~\cite[Thm.~3.15]{Rudi91}. \begin{Lem}\label{lem:weak_convergence} Let $T>0$ and $Z$ be a reflexive and separable Banach space.
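The role of $F$ as a uniform modulus of continuity can be checked numerically. In the Python sketch below the sequences $\beta_j$ and $b^j$ are invented for illustration (any choice with $\sum_j (b^j)^2/\beta_j<\infty$ works): $F$ increases from $0$ to $S$, so small $|t-s|$ forces small increments of $x_u^\delta$ in $X_{1/2}$, uniformly in $t$.

```python
import numpy as np

# Toy check of the modulus-of-continuity series from the proof:
#   F(sigma) = sum_j (b^j)^2 / beta_j * (1 - exp(-beta_j * sigma))^2,
# with ||x_u^d(t) - x_u^d(s)||_{X_{1/2}}^2 <= 4 ||u||_inf^2 * F(|t - s|),
# and F increasing from 0 to S = sum_j (b^j)^2 / beta_j.
j = np.arange(1, 5000, dtype=float)
beta = np.pi ** 2 * j ** 2 + 1.0   # hypothetical beta_j = alpha_j + c
b = 1.0 / j                        # b in X_{-1/2}: sum (b^j)^2 / beta_j < inf
S = np.sum(b ** 2 / beta)

def F(sigma):
    return np.sum(b ** 2 / beta * (1.0 - np.exp(-beta * sigma)) ** 2)

sigmas = np.logspace(-8, 0, 200)
vals = np.array([F(s) for s in sigmas])
print(np.all(np.diff(vals) > 0), vals[-1] < S, vals[0] < 1e-9)
```

Numerically $F$ is strictly increasing, stays below $S$, and vanishes as $\sigma\to0$, which is exactly what the uniform-continuity argument needs.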
Then \begin{enumerate}[(i)] \item every bounded sequence $(w_n)_{n\in{\mathbb{N}}}$ in $L^\infty(0,T;Z)$ has a weak$^\star$ convergent subsequence in $L^\infty(0,T;Z)$;\label{it:weak_star} \item every bounded sequence $(w_n)_{n\in{\mathbb{N}}}$ in $L^p(0,T;Z)$ with $p\in(1,\infty)$ has a weakly convergent subsequence in $L^p(0,T;Z)$.\label{it:weak} \end{enumerate} \end{Lem} \begin{proof} Let $p\in[1,\infty)$. Then $W:=L^p(0,T;Z')$ is a separable Banach space, see~\cite[Sec.~IV.1]{Dies77}. Since $Z$ is reflexive, by \cite[Cor.~III.4]{Dies77} it has the Radon-Nikodým property. Then it follows from~\cite[Thm.~IV.1]{Dies77} that $W'=L^q(0,T;Z)$ is the dual of $W$, where $q\in(1,\infty]$ such that $p^{-1}+q^{-1}=1$. Assertion~\eqref{it:weak_star} now follows from~\cite[Thm.~3.17]{Rudi91} with $p=1$ and $q=\infty$. On the other hand, statement~\eqref{it:weak} follows from~\cite[Thm.~V.2.1]{Yosi80} by further using that~$W$ is reflexive for $p\in(1,\infty)$. \end{proof} \end{appendices} \section*{Acknowledgments} The authors would like to thank Felix L.\ Schwenninger (U Twente) and Mark R.\ Opmeer (U Bath) for helpful comments on maximal regularity. \bibliographystyle{elsarticle-harv}
\newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \def\cs#1{\footnote{{\bf Stefan:~}#1}} \def\dt#1{{\buildrel {\hbox{\LARGE .}} \over {#1}}} \newcommand{\bm}[1]{\mbox{\boldmath$#1$}} \def\double #1{#1{\hbox{\kern-2pt $#1$}}} \begin{document} \begin{titlepage} \begin{flushright} hep-th/0601177\\ January, 2006\\ \end{flushright} \vspace{5mm} \begin{center} {\Large \bf On compactified harmonic/projective superspace,\\ 5D superconformal theories, and all that} \end{center} \begin{center} {\large Sergei M. Kuzenko\footnote{{[email protected]}} } \\ \vspace{5mm} \footnotesize{ {\it School of Physics M013, The University of Western Australia\\ 35 Stirling Highway, Crawley W.A. 6009, Australia}} ~\\ \vspace{2mm} \end{center} \vspace{5mm} \begin{abstract} \baselineskip=14pt \noindent Within the supertwistor approach, we analyse the superconformal structure of 4D $\cN=2$ compactified harmonic/projective superspace.
In the case of 5D superconformal symmetry, we derive the superconformal Killing vectors and related building blocks which emerge in the transformation laws of primary superfields. Various off-shell superconformal multiplets are presented both in 5D harmonic and projective superspaces, including the so-called tropical (vector) multiplet and polar (hyper)multiplet. Families of superconformal actions are described both in the 5D harmonic and projective superspace settings. We also present examples of 5D superconformal theories with gauged central charge. \end{abstract} \vfill \end{titlepage} \newpage \setcounter{page}{1} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \sect{Introduction} According to Nahm's classification \cite{Nahm}, superconformal algebras exist in space-time dimensions ${\rm D} \leq 6$. Among the dimensions included, the case of ${\rm D}=5$ is truly exceptional, for it allows the existence of the unique superconformal algebra $F(4)$ \cite{Kac}. This is in drastic contrast to the other dimensions which are known to be compatible with series of superconformal algebras (say, 4D $\cN$-extended or 6D $(\cN,0) $ superconformal symmetry). Even on formal grounds, the exceptional feature of the five-dimensional case is interesting enough for studying in some depth the properties of 5D superconformal theories. On the other hand, such rigid superconformal theories are important prerequisites in the construction, within the superconformal tensor calculus \cite{Ohashi,Bergshoeff}, of 5D supergravity-matter dynamical systems which are of primary importance, for example, in the context of bulk-plus-brane scenarios. The main motivation for the present work was the desire to develop a systematic setting to build 5D superconformal theories, and clearly this is hardly possible without employing superspace techniques. 
The superconformal algebra in five dimensions includes the 5D simple (or $\cN=1$) supersymmetry algebra\footnote{On historic grounds, 5D simple supersymmetry is often labeled $\cN=2$, see e.g. \cite{Bergshoeff}.} as its super Poincar\'e subalgebra. As is well known, for supersymmetric theories in various dimensions with eight supercharges (including the three important cases: (i) 4D $\cN=2$, (ii) 5D $\cN=1$ and (iii) 6D $\cN= (1,0)$) a powerful formalism to generate off-shell formulations is the {\it harmonic superspace} approach, originally developed for 4D $\cN=2$ supersymmetric Yang-Mills theories and supergravity \cite{GIKOS,GIOS}. There also exists a somewhat different, but related, formalism -- the so-called {\it projective superspace} approach \cite{projective0,Siegel,projective,BS} -- first introduced soon after harmonic superspace had appeared. Developed originally for describing the general self-couplings of 4D $\cN=2$ tensor multiplets, this approach has been extended to include some other interesting multiplets. Both the harmonic and projective approaches make use of the same superspace, ${\mathbb R}^{D|8} \times S^2$, which first emerged, for $D=4$, in a work of Rosly \cite{Rosly} (see also \cite{RS}) who built on earlier ideas due to Witten \cite{Witten}. In harmonic superspace, one deals with so-called Grassmann analytic superfields required to be smooth tensor fields on $S^2$. In projective superspace, one also deals with Grassmann analytic superfields, required, however, to be holomorphic on an open subset of $S^2$ (typically, the latter is chosen to be $ {\mathbb C}^* = {\mathbb C} \setminus \{ 0 \}$ in the Riemann sphere realisation $S^2 ={\mathbb C} \cup \{ \infty \}$). In many respects, the harmonic and projective superspaces are equivalent and complementary to each other \cite{Kuzenko}, although harmonic superspace is obviously more fundamental.
Keeping in mind potential applications to brane-world physics, the projective superspace setting seems to be more useful, since the 5D projective supermultiplets \cite{KL} are easy to reduce to 4D $\cN=1$ superfields. To our knowledge, no comprehensive discussion of the superconformal group and superconformal multiplets in projective superspace has been given, apart from the analysis of $SU(2)$ invariance in \cite{projective0} and the semi-component consideration of tensor multiplets in \cite{dWRV}. By contrast, a realisation of the superconformal symmetry in 4D $\cN=2$ harmonic superspace\footnote{See \cite{ISZ} for an extension to six dimensions.} has been known for almost twenty years \cite{GIOS-conf,GIOS}. But some nuances of this realisation still appear to be quite mysterious (at least to newcomers) and call for a different interpretation. Specifically, one deals with superfields depending on harmonic variables $u^\pm_i$ subject to the two constraints \begin{equation} u^{+i}\,u^-_i =1~, \qquad \overline{u^{+i}} =u^-_i~, \qquad \quad i=\underline{1}, \underline{2}~ \label{1+2const} \end{equation} when describing general 4D $\cN=2$ super Poincar\'e invariant theories in harmonic superspace \cite{GIKOS,GIOS}. In the case of superconformal theories, on the other hand, only the first constraint in (\ref{1+2const}) has to be imposed \cite{GIOS-conf,GIOS}. Since any superconformal theory is, at the same time, a super Poincar\'e invariant one, some consistency issues seem to arise, such as that of the functional spaces used to describe the theory. It is quite remarkable that these issues simply do not occur if one pursues the twistor approach to harmonic superspace \cite{Rosly2,LN,HH} (inspired by earlier constructions due to Manin \cite{Manin}).
In such a setting, the constraints (\ref{1+2const}) can be shown to appear only as possible `gauge conditions' and therefore they have no intrinsic significance, with the only structural condition being $u^{+i}\,u^-_i \neq 0$. In our opinion, the supertwistor construction sketched in \cite{Rosly2,LN} and further analysed in \cite{HH} is quite illuminating, for it allows a unified treatment of the harmonic and projective superspace formalisms. That is why it is reviewed and developed further in the present paper. Unlike \cite{Rosly2,HH} and standard texts on Penrose's twistor theory \cite{twistors} (see e.g. \cite{WW}), we avoid considering compactified complexified Minkowski space and its super-extensions, for these concepts are not relevant from the point of view of the superconformal model-building we are interested in. Our 4D consideration is directly based on the use of (conformally) compactified Minkowski space $S^1 \times S^3$ and its super-extensions. Compactified Minkowski space is quite interesting in its own right (see, e.g. \cite{GS}), and its universal covering space (i) possesses a unique causal structure compatible with conformal symmetry \cite{S}, and (ii) coincides with the boundary of five-dimensional Anti-de Sitter space, which is crucial in the context of the AdS/CFT duality \cite{AGMOO}. In the case of 5D superconformal symmetry, one can also pursue a supertwistor approach. However, since we are aiming at future applications to brane-world physics, a more pragmatic course is chosen here, based on introducing the relevant superconformal Killing vectors and elaborating the associated building blocks. The concept of superconformal Killing vectors \cite{Sohnius,Lang,BPT,Shizuya,BK,HH,West} has proved to be extremely useful for various studies of superconformal theories in four and six dimensions, see e.g. \cite{Osborn,Park,KT}. This paper is organized as follows.
In section 2 we review the construction \cite{U} of compactified Minkowski space $\overline{\cM}{}^4 = S^1 \times S^3$ as the set of null two-planes in the twistor space. In section 3 we discuss $\cN$-extended compactified Minkowski superspace $\overline{\cM}{}^{4|4\cN}$ and introduce the corresponding superconformal Killing vectors. In section 4 we develop different aspects of 4D $\cN=2$ compactified harmonic/projective superspace. Section 5 is devoted to the 5D superconformal formalism. Here we introduce the 5D superconformal Killing vectors and related building blocks, and also present several off-shell superconformal multiplets, both in the harmonic and projective superspaces. Section 6 introduces the zoo of 5D superconformal theories. Three technical appendices are also included at the end of the paper. In appendix A, a non-standard realisation for $S^2$ is given. Appendix B is devoted to the projective superspace action according to \cite{Siegel}. Some aspects of the reduction \cite{KL} from 5D projective supermultiplets to 4D $\cN=1,2$ superfields are collected in Appendix C. \sect{Compactified Minkowski space} \label{section:two} We start by recalling a remarkable realisation\footnote{This realisation has been known in the physics literature since the early 1960's \cite{U,S,Tod,GS}, and it can be related (see, e.g. \cite{S}) to the Weyl-Dirac construction \cite{W,D} of compactified Minkowski space $ S^1 \times S^3 / {\mathbb Z}_2$ as the set of straight lines through the origin of the cone in ${\mathbb R}^{4,2}$. In the mathematics literature, its roots go back to Cartan's classification of the irreducible homogeneous bounded symmetric domains \cite{Cartan,Hua}.} of compactified Minkowski space $\overline{\cM}{}^4 = S^1 \times S^3$ as the set of null two-dimensional subspaces in the twistor space\footnote{In the literature, the term `twistor space' is often used for ${\mathbb C}P^3$.
In this paper we stick to the original Penrose terminology \cite{twistors}.} which is a copy of ${\mathbb C}^4$. The twistor space is defined to be equipped with the scalar product \begin{eqnarray} \langle T, S \rangle = T^\dagger \,\Omega \, S~, \qquad \Omega =\left( \begin{array}{cc} {\bf 1}_2 & 0\\ 0 & -{\bf 1}_2 \end{array} \right) ~, \end{eqnarray} for any twistors $T,S \in {\mathbb C}^4$. By construction, this scalar product is invariant under the action of the group $SU(2,2)$, to be identified with the conformal group. The elements of $SU(2,2)$ will be represented by block matrices \begin{eqnarray} g=\left( \begin{array}{cc} A & B\\ C & D \end{array} \right) \in SL(4,{\mathbb C}) ~, \qquad g^\dagger \,\Omega \,g = \Omega~, \label{SU(2,2)} \end{eqnarray} where $A,B,C$ and $D$ are $2\times 2$ matrices. We will denote by $\overline{\cM}{}^4 $ the space of null two-planes through the origin in ${\mathbb C}^4$. Any such two-plane is generated by two linearly independent twistors $T^\mu$, with $\mu=1,2$, such that \begin{equation} \langle T^\mu, T^\nu \rangle = 0~, \qquad \mu, \nu =1,2~. \label{nullplane1} \end{equation} Obviously, the basis chosen, $\{T^\mu\}$, is defined only modulo the equivalence relation \begin{equation} \{ T^\mu \}~ \sim ~ \{ \tilde{T}^\mu \} ~, \qquad \tilde{T}^\mu = T^\nu\,R_\nu{}^\mu~, \qquad R \in GL(2,{\mathbb C}) ~. \label{nullplane2} \end{equation} Equivalently, the space $\overline{\cM}{}^4 $ consists of $4\times 2$ matrices of rank two, \begin{eqnarray} ( T^\mu )=\left( \begin{array}{c} F\\ G \end{array} \right) ~, \qquad F^\dagger \,F =G^\dagger \,G~, \label{two-plane} \end{eqnarray} where $F$ and $G$ are $2\times 2$ matrices defined modulo the equivalence relation \begin{eqnarray} \left( \begin{array}{c} F\\ G \end{array} \right) ~ \sim ~ \left( \begin{array}{c} F\, R\\ G\,R \end{array} \right) ~, \qquad R \in GL(2,{\mathbb C}) ~.
\end{eqnarray} In order for the two twistors $T^\mu $ in (\ref{two-plane}) to generate a two-plane, the $2\times 2$ matrices $F$ and $G$ must be non-singular, \begin{equation} \det F \neq 0~, \qquad \det G \neq 0~. \label{non-singular} \end{equation} Indeed, let us suppose the opposite. Then, the non-negative Hermitian matrix $F^\dagger F$ has a zero eigenvalue. Applying an equivalence transformation of the form \begin{eqnarray} \left( \begin{array}{c} F\\ G \end{array} \right) ~ \to ~ \left( \begin{array}{c} F\, \cV\\ G\,\cV \end{array} \right) ~, \qquad \cV \in U(2) ~, \nonumber \end{eqnarray} and therefore \begin{eqnarray} F^\dagger \,F ~\to~ \cV^{-1} \Big(F^\dagger \,F \Big) \,\cV~, \qquad G^\dagger \,G ~\to~ \cV^{-1} \Big(G^\dagger \,G \Big)\, \cV~, \nonumber \end{eqnarray} we can arrive at the following situation \begin{eqnarray} F^\dagger \,F =G^\dagger \,G = \left( \begin{array}{cc} 0 & 0\\ 0 & \lambda^2 \end{array} \right)~, \qquad \lambda \in {\mathbb R} ~. \nonumber \end{eqnarray} In terms of the twistors $T^\mu$, the conditions obtained imply that $T^1 =0$ and $T^2 \neq 0$. But this contradicts the assumption that the two vectors $T^\mu $ generate a two-plane. Because of (\ref{non-singular}), we have \begin{eqnarray} \left( \begin{array}{l} F\\ G \end{array} \right) ~ \sim ~ \left( \begin{array}{c} h \\ {\bf 1} \end{array} \right) ~, \qquad h =F\,G^{-1} \in U(2) ~. \end{eqnarray} It is seen that the space $\overline{\cM}{}^4 $ can be identified with the group manifold $U(2) = S^1 \times S^3$. The conformal group acts by linear transformations on the twistor space: associated with the group element (\ref{SU(2,2)}) is the transformation $T \to g\, T$, for any twistor $T \in {\mathbb C}^4$. This group representation induces an action of $SU(2,2)$ on $\overline{\cM}{}^4 $. It is defined as follows: \begin{equation} h ~\to ~g\cdot h = (A\,h +B ) \,(C\,h +D)^{-1} \in U(2) ~. 
\end{equation} One can readily see that $\overline{\cM}{}^4 $ is a homogeneous space of the group $SU(2,2)$, and therefore it can be represented as $\overline{\cM}{}^4 =SU(2,2) /H_{h_0}$, where $H_{h_0} $ is the isotropy group at a fixed unitary matrix $h_0 \in \overline{\cM}{}^4 $. With the choice \begin{equation} h_0 = - {\bf 1}~, \end{equation} a coset representative $s(h)\in SU(2,2)$ that maps $h_0$ to $h \in \overline{\cM}{}^4 $ can be chosen as follows (see, e.g. \cite{PS}): \begin{eqnarray} s(h)= (\det h)^{-1/4} \left( \begin{array}{cr} -h ~ & 0 \\ 0 ~& {\bf 1} \end{array} \right)~, \qquad s(h) \cdot h_0 =h\in U(2)~. \end{eqnarray} The isotropy group corresponding to $h_0$ consists of those $SU(2,2)$ group elements (\ref{SU(2,2)}) which obey the requirement \begin{equation} A+C = B+D~. \label{stability} \end{equation} This subgroup proves to be isomorphic to a group generated by the Lorentz transformations, dilatations and special conformal transformations. To visualise this, it is useful to implement a special similarity transformation for both the group $SU(2,2)$ and the twistor space. We introduce a special $4\times 4$ matrix $\Sigma$, \begin{eqnarray} \Sigma= \frac{1}{ \sqrt{2} } \left( \begin{array}{cr} {\bf 1}_2 ~ & - {\bf 1}_2\\ {\bf 1}_2 ~& {\bf 1}_2 \end{array} \right)~, \qquad \Sigma^\dagger \,\Sigma= {\bf 1}_4~, \end{eqnarray} and associate with it the following similarity transformation: \begin{eqnarray} g ~& \to & ~ {\bm g} = \Sigma \, g\, \Sigma^{-1} ~, \quad g \in SU(2,2)~; \qquad T ~ \to ~ {\bm T} = \Sigma \, T~, \quad T \in {\mathbb C}^4~. 
\end{eqnarray} The elements of $SU(2,2)$ are now represented by block matrices \begin{eqnarray} {\bm g}=\left( \begin{array}{cc} {\bm A} & {\bm B}\\ {\bm C} & {\bm D} \end{array} \right) \in SL(4,{\mathbb C}) ~, \qquad {\bm g}^\dagger \,{\bm \Omega} \, {\bm g} = {\bm \Omega}~, \label{SU(2,2)-2} \end{eqnarray} where \begin{eqnarray} {\bm \Omega} = \Sigma \, \Omega\, \Sigma^{-1} = \left( \begin{array}{cc} 0& {\bf 1}_2 \\ {\bf 1}_2 &0 \end{array} \right) ~. \end{eqnarray} The $2\times 2$ matrices in (\ref{SU(2,2)-2}) are related to those in (\ref{SU(2,2)}) as follows: \begin{eqnarray} {\bm A} &=& \frac12 ( A+D-B-C)~, \nonumber \\ {\bm B} &=& \frac12 ( A+B- C-D)~, \nonumber \\ {\bm C} &=& \frac12 ( A+C-B-D)~, \nonumber \\ {\bm D} &=& \frac12 ( A+B+C+D)~. \end{eqnarray} Now, by comparing these expressions with (\ref{stability}) it is seen that the stability group $\Sigma H_{h_0} \Sigma^{-1}$ consists of upper block-triangular matrices, \begin{equation} {\bm C} =0~. \label{stability2} \end{equation} When applied to $\overline{\cM}{}^4 $, the effect of the similarity transformation\footnote{We follow the two-component spinor notation of Wess and Bagger \cite{WB}.} is \begin{eqnarray} \left( \begin{array}{c} h \\ {\bf 1} \end{array} \right) ~\to ~ \Sigma\, \left( \begin{array}{c} h \\ {\bf 1} \end{array} \right) = \frac{1 }{ \sqrt{2} } \left( \begin{array}{c} h -{\bf 1} \\ h+{\bf 1} \end{array} \right) ~\sim ~ \left( \begin{array}{c} {\bf 1} \\ -{\rm i}\, \tilde{x} \end{array} \right) ~, \qquad \tilde{x} =x^m \,(\tilde{\sigma}_m)^{\dt \alpha \alpha}~, \label{two-plane-mod} \end{eqnarray} where \begin{eqnarray} -{\rm i}\, \tilde{x} = \frac{ h+{\bf 1} }{ h-{\bf 1} }~, \qquad \tilde{x}^\dagger = \tilde{x} ~. \label{inverseCayley} \end{eqnarray} The inverse expression for $h$ in terms of $\tilde{x}$ is given by the so-called Cayley transform: \begin{equation} -h = \frac{ {\bf 1} - {\rm i}\, \tilde{x} } { {\bf 1} + {\rm i}\, \tilde{x} } ~. 
\label{Cayley} \end{equation} It is seen that \begin{equation} h_0 =-{\bf 1} \quad \longleftrightarrow \quad \tilde{x}_0 =0~. \end{equation} Unlike the original twistor representation, eqs. (\ref{two-plane}) and (\ref{non-singular}), the $2\times 2$ matrices $h\pm{\bf 1}$ in (\ref{two-plane-mod}) may be singular at some points. This means that the variables $x^m $ (\ref{inverseCayley}) are well-defined local coordinates in the open subset of $\overline{\cM}{}^4$ which is specified by $\det \,( h-{\bf 1} ) \neq 0$ and, as will become clear soon, can be identified with the ordinary Minkowski space. As follows from (\ref{two-plane-mod}), in terms of the variables $x^m$ the conformal group acts by fractional linear transformations \begin{equation} -{\rm i}\, \tilde{x} ~\to ~-{\rm i}\, \tilde{x}' = \Big({\bm C} - {\rm i}\, {\bm D}\,\tilde{x} \Big) \Big({\bm A} - {\rm i}\, {\bm B}\,\tilde{x} \Big)^{-1}~. \end{equation} These transformations can be brought to a more familiar form if one takes into account the explicit structure of the elements of $SU(2,2)$: \begin{eqnarray} {\bm g} = {\rm e}^{\bm L}~, \quad {\bm L} = \left( \begin{array}{cc} \omega_\alpha{}^\beta - \frac12 \,\tau \delta_\alpha{}^\beta \quad & -{\rm i} \,b_{\alpha \dt \beta} \\ -{\rm i} \,a^{\dt \alpha \beta} \quad & -{\bar \omega}^{\dt \alpha}{}_{\dt \beta} + \frac12 \, \tau \delta^{\dt \alpha}{}_{\dt \beta} \\ \end{array} \right)~, \quad {\bm L}^\dagger =- {\bm \Omega} \, {\bm L} \, {\bm \Omega}~. \label{confmat} \end{eqnarray} Here the matrix elements correspond to a Lorentz transformation $(\omega_\alpha{}^\beta,~{\bar \omega}^{\dt \alpha}{}_{\dt \beta})$, translation $a^{\dt \alpha \beta}$, special conformal transformation $ b_{\alpha \dt \beta}$ and dilatation $\tau$. In accordance with (\ref{stability2}), the isotropy group at $x_0=0$ is spanned by the Lorentz transformations, special conformal boosts and scale transformations. 
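The two computational claims at the heart of this section -- the Cayley transform (\ref{inverseCayley})--(\ref{Cayley}) between hermitian $\tilde{x}$ and unitary $h$, and the fractional linear action of $SU(2,2)$ on $\tilde{x}$ -- are easy to verify numerically. The following sketch (Python with NumPy, which we assume available; the factorised group element, assembled in the `bold' basis from a translation, a Lorentz transformation, a dilatation and a special conformal boost, is our own choice) checks that $h$ is unitary for hermitian $\tilde{x}$, that the Cayley transform inverts correctly, and that the image $\tilde{x}'$ is again hermitian.

```python
import numpy as np

rng = np.random.default_rng(0)
I2, Z2 = np.eye(2), np.zeros((2, 2))
Omega = np.block([[Z2, I2], [I2, Z2]])            # the 'bold' metric

def herm():
    """Random 2x2 hermitian matrix."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return (m + m.conj().T) / 2

# Cayley transform and its inverse
x = herm()
h = -(I2 - 1j * x) @ np.linalg.inv(I2 + 1j * x)   # -h = (1 - i x)/(1 + i x)
x_back = 1j * (h + I2) @ np.linalg.inv(h - I2)    # -i x = (h + 1)/(h - 1)
assert np.allclose(h.conj().T @ h, I2)            # h lies in U(2)
assert np.allclose(x_back, x)

# An SU(2,2) element in the 'bold' basis; each factor preserves Omega.
a, b, tau = herm(), herm(), 0.3
Lor = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Lor = Lor / np.sqrt(np.linalg.det(Lor) + 0j)      # Lorentz: det Lor = 1
g = (np.block([[I2, Z2], [-1j * a, I2]])          # translation a
     @ np.block([[Lor, Z2], [Z2, np.linalg.inv(Lor.conj().T)]])
     @ np.diag(np.r_[np.exp(-tau / 2) * np.ones(2),
                     np.exp(tau / 2) * np.ones(2)]).astype(complex)
     @ np.block([[I2, -1j * b], [Z2, I2]]))       # special conformal b
assert np.allclose(g.conj().T @ Omega @ g, Omega)

# Fractional linear action: -i x' = (C - i D x)(A - i B x)^{-1}
A, B, C, D = g[:2, :2], g[:2, 2:], g[2:, :2], g[2:, 2:]
xp = 1j * (C - 1j * D @ x) @ np.linalg.inv(A - 1j * B @ x)
assert np.allclose(xp, xp.conj().T)               # image is again hermitian
```

Setting the Lorentz factor to ${\bf 1}$ and $\tau = b = 0$ reduces the last step to $\tilde{x}' = \tilde{x} + \tilde{a}$, the expected translation.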
\sect{Compactified Minkowski superspace} \label{section:three} The construction reviewed in the previous section can be immediately generalised to the case of $\cN$-extended conformal supersymmetry \cite{Manin}, by making use of the supertwistor space ${\mathbb C}^{4|\cN}$ introduced by Ferber \cite{Ferber}, with $\cN=1,2,3$ (the case $\cN=4$ is known to be somewhat special, and will not be discussed here). The supertwistor space is equipped with the scalar product \begin{eqnarray} \langle T, S \rangle = T^\dagger \,\Omega \, S~, \qquad \Omega =\left( \begin{array}{ccc} {\bf 1}_2 & {}&0 \\ {} & -{\bf 1}_2 & {}\\ 0 & {} & -{\bf 1}_{\cN} \end{array} \right) ~, \end{eqnarray} for any supertwistors $T,S \in {\mathbb C}^{4|\cN}$. The $\cN$-extended superconformal group acting on the supertwistor space is $SU(2,2|\cN)$. It consists of supermatrices of the form \begin{eqnarray} g \in SL(4|\cN ) ~, \qquad g^\dagger \,\Omega \,g = \Omega~. \label{SU(2,2|N)} \end{eqnarray} In complete analogy with the bosonic construction, compactified Minkowski superspace $\overline{\cM}{}^{4|4\cN}$ is defined to be the space of null two-planes through the origin in ${\mathbb C}^{4|\cN}$. Any such two-plane is generated by two supertwistors $T^\mu$ such that (i) their bodies are linearly independent; (ii) they obey the equations (\ref{nullplane1}) and (\ref{nullplane2}). Equivalently, the space $\overline{\cM}^{4|4\cN} $ consists of rank-two supermatrices of the form \begin{eqnarray} ( T^\mu )=\left( \begin{array}{c} F\\ G \\ \Upsilon \end{array} \right) ~, \qquad F^\dagger \,F =G^\dagger \,G +\Upsilon^\dagger \,\Upsilon~, \label{super-two-plane} \end{eqnarray} defined modulo the equivalence relation \begin{eqnarray} \left( \begin{array}{c} F\\ G \\ \Upsilon \end{array} \right) ~ \sim ~ \left( \begin{array}{c} F\, R\\ G\,R \\ \Upsilon\,R \end{array} \right) ~, \qquad R \in GL(2,{\mathbb C}) ~.
\end{eqnarray} Here $F$ and $G$ are $2\times 2$ bosonic matrices, and $\Upsilon$ is an $\cN \times 2$ fermionic matrix. As in the bosonic case, we have \begin{eqnarray} \left( \begin{array}{c} F\\ G \\ \Upsilon \end{array} \right) ~ \sim ~ \left( \begin{array}{c} h \\ {\bf 1} \\ \Theta \end{array} \right) ~, \qquad h^\dagger h = {\bf 1} + \Theta^\dagger \, \Theta ~. \end{eqnarray} Introduce the supermatrix \begin{eqnarray} \Sigma= \frac{1 }{ \sqrt{2} } \left( \begin{array}{crc} {\bf 1}_2 ~ & - {\bf 1}_2 & 0\\ {\bf 1}_2 ~& {\bf 1}_2 & 0 \\ 0 & 0 ~& \sqrt{2} \,{\bf 1}_{\cN} \end{array} \right)~, \qquad \Sigma^\dagger \,\Sigma= {\bf 1}_{4+\cN}~, \end{eqnarray} and associate with it the following similarity transformation: \begin{eqnarray} g ~& \to & ~ {\bm g} = \Sigma \, g\, \Sigma^{-1} ~, \quad g \in SU(2,2|\cN)~; \qquad T ~ \to ~ {\bm T} = \Sigma \, T~, \quad T \in {\mathbb C}^{4|\cN}~. \label{sim2} \end{eqnarray} The supertwistor metric becomes \begin{eqnarray} {\bm \Omega} = \Sigma \, \Omega\, \Sigma^{-1} = \left( \begin{array}{ccc} 0& {\bf 1}_2 &0\\ {\bf 1}_2 &0 &0\\ 0 & 0& -{\bf 1}_{\cN} \end{array} \right) ~. \end{eqnarray} When implemented on the superspace $\overline{\cM}{}^{4|4\cN} $, the similarity transformation results in \begin{eqnarray} \left( \begin{array}{c} h \\ {\bf 1} \\ \Theta \end{array} \right) ~\to ~ \Sigma\, \left( \begin{array}{c} h \\ {\bf 1} \\ \Theta \end{array} \right) = \frac{1 }{ \sqrt{2} } \left( \begin{array}{c} h -{\bf 1} \\ h+{\bf 1} \\ \sqrt{2}\, \Theta \end{array} \right) ~\sim ~ \left( \begin{array}{c} {\bf 1} \\ -{\rm i}\, \tilde{x}_+ \\ 2 \,\theta \end{array} \right) = \left( \begin{array}{r} \delta_\alpha{}^\beta \\ -{\rm i}\, \tilde{x}_+^{\dt \alpha \beta} \\ 2 \,\theta_i{}^\beta \end{array} \right) ~, \label{super-two-plane-mod} \end{eqnarray} where \begin{eqnarray} -{\rm i}\, \tilde{x}_+ = \frac{ h+{\bf 1} }{ h-{\bf 1} }~, \qquad \sqrt{2} \, \theta = \Theta \,( h-{\bf 1} )^{-1}~.
\end{eqnarray} The bosonic $\tilde{x}_+$ and fermionic $\theta$ variables obey the reality condition \begin{equation} \tilde{x}_+ -\tilde{x}_- =4{\rm i}\, \theta^\dagger \,\theta~, \qquad \tilde{x}_- = (\tilde{x}_+)^\dagger~. \label{chiral} \end{equation} It is solved by \begin{equation} x_\pm^{\dt \alpha \beta} = x^{\dt \alpha \beta} \pm 2{\rm i} \, {\bar \theta}^{\dt \alpha i} \theta^\beta_i ~,\qquad {\bar \theta}^{\dt \alpha i} = \overline{ \theta^\alpha_i}~, \qquad \tilde{x}^\dagger = \tilde{x}~, \end{equation} with $z^A = (x^a ,\theta^\alpha_i , {\bar \theta}_{\dt \alpha}^i)$ the coordinates of $\cN$-extended flat global superspace ${\mathbb R}^{4|4\cN}$. We therefore see that the supertwistors in (\ref{super-two-plane-mod}) are parametrized by the variables $x^a_+$ and $\theta^\alpha_i$ which are the coordinates in the chiral subspace. Since the superconformal group acts by linear transformations on ${\mathbb C}^{4| 2\cN}$, we can immediately conclude that it acts by holomorphic transformations on the chiral subspace. To describe the action of $SU(2,2|\cN)$ on the chiral subspace, let us consider a generic group element: \begin{equation} {\bm g} ={\rm e}^{\bm L}~, \quad {\bm L} = \left( \begin{array}{ccc} \omega_\alpha{}^\beta - \sigma \delta_\alpha{}^\beta \quad & -{\rm i} \,b_{\alpha \dt \beta} \quad & 2\eta_\alpha{}^j \\ -{\rm i} \,a^{\dt \alpha \beta} \quad & -{\bar \omega}^{\dt \alpha}{}_{\dt \beta} + {\bar \sigma} \delta^{\dt \alpha}{}_{\dt \beta} \quad & 2{\bar \epsilon}^{\dt \alpha j} \\ 2\epsilon_i{}^\beta \quad & 2{\bar \eta}_{i \dt \beta} \quad & \frac{2}{\cN}({\bar \sigma} - \sigma)\, \delta_i{}^j + \Lambda_i{}^j \end{array} \right)~, \label{su(2,2|n)} \end{equation} where \begin{equation} \sigma = \frac12 \left( \tau + {\rm i}\, \frac{\cN}{\cN -4} \varphi \right)~, \qquad \Lambda^\dag = -\Lambda~, \qquad {\rm tr}\; \Lambda = 0~. 
\end{equation} Here the matrix elements, which are not present in (\ref{confmat}), correspond to a $Q$--supersymmetry $(\epsilon_i^\alpha,~ {\bar \epsilon}^{\dt \alpha i})$, $S$--supersymmetry $(\eta_\alpha^i,~{\bar \eta}_{i \dt \alpha})$, combined scale and chiral transformation $\sigma$, and chiral $SU(\cN)$ transformation $\Lambda_i{}^j$. Now, one can check that the coordinates of the chiral subspace transform as follows: \begin{eqnarray} \delta \tilde{x}_+ &=& \tilde{a} +(\sigma +{\bar \sigma})\, \tilde{x}_+ -{\bar \omega}\, \tilde{x}_+ -\tilde{x}_+ \,\omega +\tilde{x}_+ \,b \,\tilde{x}_+ +4{\rm i}\, {\bar \epsilon} \, \theta - 4 \tilde{x}_+ \, \eta \, \theta ~, \nonumber \\ \delta \theta &=& \epsilon + \frac{1}{\cN} \Big( (\cN-2) \sigma + 2 {\bar \sigma}\Big)\, \theta - \theta\, \omega + \Lambda \, \theta +\theta \, b \, \tilde{x}_+ -{\rm i}\,{\bar \eta}\, \tilde{x}_+ - 4\,\theta \,\eta \, \theta~. \label{chiraltra} \end{eqnarray} Expressions (\ref{chiraltra}) can be rewritten in a more compact form, \begin{equation} \delta x^a_+ = \xi^a_+ (x_+, \theta) ~, \qquad \delta \theta^\alpha_i = \xi^\alpha_i (x_+, \theta) ~, \end{equation} where \begin{equation} \xi^a_+ = \xi^a + \frac{\rm i}{8} \,\xi_i \,\sigma^a \, {\bar \theta}^i~, \qquad \overline{\xi^a} =\xi^a~. \end{equation} Here the parameters $\xi^a$ and $\xi^\alpha_i$ are components of the superconformal Killing vector \begin{equation} \xi = {\overline \xi} = \xi^a (z) \,\pa_a + \xi^\alpha_i (z)\,D^i_\alpha + {\bar \xi}_{\dt \alpha}^i (z)\, {\bar D}^{\dt \alpha}_i~, \end{equation} which generates the infinitesimal transformation in the full superspace, $z^A \to z^A + \xi \,z^A$, and is defined to satisfy \begin{equation} [\xi \;,\; {\bar D}_i^\ad] \; \propto \; {\bar D}_j^\bd ~, \end{equation} and therefore \begin{equation} {\bar D}_i^{\dt \alpha } \xi^{\dt \beta \beta} = 4{\rm i} \, \ve^{\dt \alpha{}\dt \beta} \,\xi^\beta_i~. 
\label{4Dmaster} \end{equation} All information about the superconformal algebra is encoded in the superconformal Killing vectors. ${}$From eq. (\ref{4Dmaster}) it follows that \begin{equation} [\xi \;,\; D^i_\alpha ] = - (D^i_\alpha \xi^\beta_j) D^j_\beta = {\tilde\omega}_\alpha{}^\beta D^i_\beta - \frac{1}{\cN} \Big( (\cN-2) \tilde{\sigma} + 2 \overline{ \tilde{\sigma}} \Big) D^i_\alpha - \tilde{\Lambda}_j{}^i \; D^j_\alpha \;. \label{4Dmaster2} \end{equation} Here the parameters of `local' Lorentz $\tilde{\omega}$ and scale--chiral $\tilde{\sigma}$ transformations are \begin{equation} \tilde{\omega}_{\alpha \beta}(z) = -\frac{1}{\cN}\;D^i_{(\alpha} \xi_{\beta)i}\;, \qquad \tilde{\sigma} (z) = \frac{1}{\cN (\cN - 4)} \left( \frac12 (\cN-2) D^i_\alpha \xi^\alpha_i - {\bar D}^{\dt \alpha}_i {\bar \xi}_{\dt \alpha}^{ i} \right) \label{lor,weyl} \end{equation} and turn out to be chiral \begin{equation} {\bar D}_{\dt \alpha i} \tilde{\omega}_{\alpha \beta}~=~ 0\;, \qquad {\bar D}_{\dt \alpha {} i} \tilde{\sigma} ~=~0\;. \end{equation} The parameters $\tilde{\Lambda}_j{}^i$ \begin{equation} \tilde{\Lambda}_j{}^i (z) = -\frac{\rm i}{32}\left( [D^i_\alpha\;,{\bar D}_{\dt \alpha j}] - \frac{1}{\cN} \delta_j{}^i [D^k_\alpha\;,{\bar D}_{\dt \alpha k}] \right)\xi^{\dt \alpha \alpha}~, \qquad \tilde{\Lambda}^\dag = - \tilde{\Lambda}~, \qquad {\rm tr}\; \tilde{\Lambda} = 0 \label{lambda} \end{equation} correspond to `local' $SU(\cN )$ transformations. One can readily check the identity \begin{equation} D^k_\alpha \tilde{\Lambda}_j{}^i = -2 \left( \delta^k_j D^i_\alpha -\frac{1}{\cN} \delta^i_j D^k_\alpha \right) \tilde{\sigma}~. \label{an1} \end{equation} \sect{Compactified harmonic/projective superspace} \label{section:four} ${}$For Ferber's supertwistors used in the previous section, a more appropriate name seems to be {\it even supertwistors}. Being elements of ${\mathbb C}^{4|\cN}$, these objects have four bosonic components and $\cN$ fermionic components. 
One can also consider {\it odd supertwistors} \cite{LN}. By definition, these are vector-columns of height $4+\cN$ such that their top four entries are fermionic, and the remaining $\cN$ components are bosonic. In other words, the odd supertwistors are elements of ${\mathbb C}^{\cN |4}$. It is natural to treat the even and odd supertwistors as the even and odd elements, respectively, of a supervector space\footnote{See, e.g. \cite{DeWitt,BK} for reviews on supervector spaces.} of dimension $(4|\cN )$ on which the superconformal group $SU(2,2|\cN) $ acts. Both even and odd supertwistors should be used \cite{Rosly2,LN} in order to define harmonic-like superspaces in extended supersymmetry. Throughout this section, our consideration is restricted to the case $\cN=2$. Then, $\tilde{\Lambda}^{ij} = \ve^{ik} \,\tilde{\Lambda}_k{}^{j}$ is symmetric, $\tilde{\Lambda}^{ij}= \tilde{\Lambda}^{ji}$, and eq. (\ref{an1}) implies \begin{equation} D^{(i}_\alpha \tilde{\Lambda}^{jk)} = {\bar D}^{(i}_{\dt \alpha} \tilde{\Lambda}^{jk)}= 0~. \label{an2} \end{equation} \subsection{Projective realisation} ${}$ Following \cite{LN}, we accompany the two even null supertwistors $T^\mu$, which occur in the construction of the compactified $\cN=2 $ superspace $\overline{\cM}{}^{4|8} $, by an odd supertwistor $\Xi$ with non-vanishing {\it body} (in particular, the body of $ \langle \Xi, \Xi \rangle$ is non-zero). These supertwistors are required to obey \begin{equation} \langle T^\mu, T^\nu \rangle = \langle T^\mu, \Xi \rangle = 0~, \qquad \mu, \nu =1,2 ~, \label{nullplane3} \end{equation} and are defined modulo the equivalence relation \begin{eqnarray} (\Xi, T^\mu)~\sim ~ (\Xi, T^\nu) \, \left( \begin{array}{cc} c~ &0 \\ \rho_\nu~ & R_\nu{}^\mu \end{array} \right) ~,\qquad \left( \begin{array}{cc} c~ &0 \\ \rho~ & R \end{array} \right) \in GL(1|2)~, \end{eqnarray} with $\rho_\nu$ anticommuting complex parameters. The superspace obtained can be seen to be $\overline{\cM}{}^{4|8} \times S^2$.
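The $S^2$ factor just mentioned can be made completely explicit: a non-zero element of ${\mathbb C}^2$ defined modulo complex rescalings is a point of ${\mathbb C}P^1$, which lands on the unit two-sphere via the Riemann-sphere coordinate $w = v_1/v_2$ followed by inverse stereographic projection. A minimal numerical sketch (Python with NumPy assumed; the function name and the choice of the chart $v_2 \neq 0$ are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def to_sphere(v):
    """Map v in C^2 \\ {0} to the unit two-sphere via w = v[0]/v[1]
    (the chart v[1] != 0) and inverse stereographic projection."""
    w = v[0] / v[1]
    d = 1.0 + abs(w) ** 2
    return np.array([2 * w.real, 2 * w.imag, abs(w) ** 2 - 1]) / d

v = rng.normal(size=2) + 1j * rng.normal(size=2)
c = 1.7 - 0.4j                       # a nonzero complex rescaling, v ~ c v
p, q = to_sphere(v), to_sphere(c * v)
assert np.allclose(p, q)             # rescaled v gives the same point
assert np.isclose(p @ p, 1.0)        # and the point lies on S^2
```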
Indeed, using the above freedom in the definition of $T^\mu$ and $\Xi$, we can choose them to be of the form \begin{eqnarray} T^\mu \sim \left( \begin{array}{c} h \\ {\bf 1} \\ \Theta \end{array} \right) ~, \qquad \Xi \sim \left( \begin{array}{c} 0 \\ - \Theta^\dagger \,v \\ v \end{array} \right) ~, \qquad h^\dagger h = {\bf 1} + \Theta^\dagger \, \Theta ~, \quad v \neq 0~. \end{eqnarray} Here the non-zero two-vector $v \in {\mathbb C}^2$ is still defined modulo re-scalings $v \to c\, v $, with $c \in {\mathbb C}^*$. A natural name for the supermanifold obtained is {\it projective superspace}. \subsection{Harmonic realisation} Now, we would like to present a somewhat different, but equivalent, realisation for $\overline{\cM}{}^{4|8} \times S^2$ inspired by the exotic realisation for the two-sphere described in Appendix A. We will consider a space of quadruples $\{T^\mu, \Xi^+, \Xi^- \}$ consisting of two even supertwistors $T^\mu$ and two odd supertwistors $\Xi^\pm$ such that (i) the bodies of $T^\mu$ are linearly independent four-vectors; (ii) the bodies of $\Xi^\pm$ are linearly independent two-vectors. These supertwistors are further required to obey the relations \begin{equation} \langle T^\mu, T^\nu \rangle = \langle T^\mu, \Xi^+ \rangle = \langle T^\mu, \Xi^- \rangle = 0~, \qquad \mu, \nu =1,2 ~, \label{nullplane4} \end{equation} and are defined modulo the equivalence relation \begin{eqnarray} (\Xi^-,\Xi^+, T^\mu)\sim (\Xi^-,\Xi^+, T^\nu) \, \left( \begin{array}{ccc} a~& 0~& 0 \\ b~& c~ &0 \\ \rho^-_\nu~ & \rho^+_\nu ~&R_\nu{}^\mu \end{array} \right) ~,\quad \left( \begin{array}{lll} a~& 0~& 0 \\ b~& c~ &0 \\ \rho^- ~ & \rho^+ ~&R \end{array} \right) \in GL(2|2)~, \end{eqnarray} with $\rho^\pm_\nu$ anticommuting complex parameters.
Using the `gauge freedom' in the definition of $T^\mu$ and $\Xi^\pm$, these supertwistors can be chosen to have the form \begin{eqnarray} T^\mu \sim \left( \begin{array}{c} h \\ {\bf 1} \\ \Theta \end{array} \right) ~, \quad \Xi^\pm \sim \left( \begin{array}{c} 0 \\ - \Theta^\dagger \,v^\pm \\ v^\pm \end{array} \right) ~, \quad h^\dagger h = {\bf 1} + \Theta^\dagger \, \Theta ~, \quad \det \, (v^- \,v^+) \neq 0~. \end{eqnarray} Here the `complex harmonics' $v^\pm$ are still defined modulo arbitrary transformations of the form (\ref{equivalence2}). Given a $2\times 2$ matrix ${\bm v}= (v^-\, v^+ ) \in GL(2,{\mathbb C})$, there always exists a lower triangular matrix $R$ such that ${\bm v} R \in SU(2)$. The latter implies that $v^-$ is uniquely determined in terms of $v^+$, and therefore the supermanifold under consideration is indeed $\overline{\cM}{}^{4|8} \times S^2$. In accordance with the construction given, a natural name for this supermanifold is {\it harmonic superspace}. \subsection{Embedding of $\bm{ {\mathbb R}^{4|8} \times S^2}$: Harmonic realisation} We can now analyse the structure of superconformal transformations on the flat global superspace $ {\mathbb R}^{4|8} \times S^2$ embedded in $\overline{\cM}{}^{4|8} \times S^2$. Upon implementing the similarity transformation, eq. (\ref{sim2}), we have \begin{eqnarray} ({\bm T}^\mu ) \sim \left( \begin{array}{c} {\bf 1} \\ -{\rm i}\, \tilde{x}_+ \\ 2 \,\theta \end{array} \right) = \left( \begin{array}{r} \delta_\alpha{}^\beta \\ -{\rm i}\, \tilde{x}_+^{\dt \alpha \beta} \\ 2 \,\theta_i{}^\beta \end{array} \right) ~, \qquad {\bm \Xi}^\pm \sim \left( \begin{array}{c} 0 \\ 2{\bar \theta}^\pm \\ u^\pm \end{array} \right) = \left( \begin{array}{c} 0 \\ 2{\bar \theta}^{\pm \dt \alpha } \\ u^\pm_i \end{array} \right) ~. \label{par} \end{eqnarray} with \begin{eqnarray} \det \Big(u_i{}^- \, u_i{}^+ \Big) = u^{+i} \,u^-_i \neq 0~, \qquad u^{+i} = \ve^{ij} \,u^+_j~. 
\nonumber \end{eqnarray} Here the bosonic $x^m_+$ and fermionic $\theta^\alpha_i$ variables are related to each other by the reality condition (\ref{chiral}). The orthogonality conditions $\langle {\bm T}^\mu, {\bm \Xi}^\pm \rangle = 0$ imply \begin{equation} {\bar \theta}^{+ \dt \alpha } = {\bar \theta}^{\dt \alpha i} \,u^+_i~, \qquad {\bar \theta}^{- \dt \alpha } = {\bar \theta}^{\dt \alpha i} \,u^-_i~. \end{equation} The complex harmonic variables $u^\pm_i$ in (\ref{par}) are still defined modulo arbitrary transformations of the form \begin{eqnarray} \Big(u_i{}^- \, u_i{}^+ \Big) ~\to ~ \Big(u_i{}^- \, u_i{}^+ \Big) \,R~, \qquad R= \left( \begin{array}{cc} a & 0\\ b & c \end{array} \right) \in GL(2,{\mathbb C})~. \label{equivalence22} \end{eqnarray} The `gauge' freedom (\ref{equivalence22}) can be reduced by imposing the `gauge' condition \begin{equation} u^{+i} \,u^-_i =1~. \label{unimod} \end{equation} It can be further reduced by choosing the harmonics to obey the reality condition \begin{equation} u^{+i} =\overline{u^-_i} ~. \label{real} \end{equation} Neither of the requirements (\ref{unimod}) and (\ref{real}) has fundamental significance; they merely represent possible gauge conditions. It is worth pointing out that the reality condition (\ref{real}) implies $ \langle {\bm \Xi}^- , {\bm \Xi}^+ \rangle = 0$. If both equations (\ref{unimod}) and (\ref{real}) hold, then we have in addition $ \langle {\bm \Xi}^+ , {\bm \Xi}^+ \rangle = \langle {\bm \Xi}^- , {\bm \Xi}^- \rangle = -1$. In what follows, the harmonics will be assumed to obey eq. (\ref{unimod}) only. As explained in the appendix, the gauge freedom (\ref{equivalence22}) allows one to represent any infinitesimal transformation of the harmonics as follows: \begin{eqnarray} \delta u^-_i =0~, \qquad \delta u^+_i = \rho^{++}(u)\, u^-_i~, \nonumber \end{eqnarray} for some parameter $\rho^{++}$ which is determined by the transformation under consideration.
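The representation quoted above can be understood as follows (a sketch, assuming the normalisation (\ref{unimod})). Since $u^{+i} u^-_i =1$, the two-vectors $u^+_i$ and $u^-_i$ form a basis, and an arbitrary variation of the harmonics decomposes as
$$
\delta u^+_i = \alpha\, u^+_i + \rho^{++}\, u^-_i~, \qquad
\delta u^-_i = \beta\, u^-_i + \rho^{--}\, u^+_i~.
$$
On the other hand, an infinitesimal transformation of the form (\ref{equivalence22}) acts as $\delta u^-_i = \delta a \, u^-_i + \delta b \, u^+_i$ and $\delta u^+_i = \delta c\, u^+_i$. Choosing $\delta a = -\beta$, $\delta b = -\rho^{--}$ and $\delta c =-\alpha$ therefore removes all terms except $\delta u^+_i = \rho^{++}\, u^-_i$, which is the form used above.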
In the case of an infinitesimal superconformal transformation (\ref{su(2,2|n)}), one derives \begin{eqnarray} \delta u^-_i =0~, \qquad \delta u^+_i = - \tilde{\Lambda}^{++}\, u^-_i~, \qquad \tilde{\Lambda}^{++} = \tilde{\Lambda}^{ij} \,u^+_i u^+_j~, \label{deltau+} \end{eqnarray} with the parameter $ \tilde{\Lambda}^{ij} $ given by (\ref{lambda}). Due to (\ref{an2}), we have (using the notation $D^\pm_\alpha =D^i_\alpha u^\pm_i$ and $ {\bar D}^\pm_{\dt \alpha} ={\bar D}^i_{\dt \alpha} u^\pm_i$) \begin{equation} D^+_\alpha \tilde{\Lambda}^{++} ={\bar D}^+_{\dt \alpha} \tilde{\Lambda}^{++} =0~, \qquad D^{++} \tilde{\Lambda}^{++} =0~. \label{L-anal} \end{equation} Here and below, we make use of the harmonic derivatives \cite{GIKOS} \begin{eqnarray} D^{++}=u^{+i}\frac{\partial}{\partial u^{- i}} ~,\qquad D^{--}=u^{- i}\frac{\partial}{\partial u^{+ i}} ~. \label{5} \end{eqnarray} It is not difficult to express $\tilde{\Lambda}^{++} $ in terms of the parameters in (\ref{su(2,2|n)}) and superspace coordinates: \begin{equation} \tilde{\Lambda}^{++} =\Lambda^{ij} \,u^+_i u^+_j +4 \, {\rm i}\,\theta^+ \,b \,{\bar \theta}^+ - ( \theta^+ \eta^+ -{\bar \theta}^+ {\bar \eta}^+ ) ~. \end{equation} The transformation (\ref{deltau+}) coincides with the one originally given in \cite{GIOS-conf}. ${}$For the superconformal variations of $\theta^{+}_{ \alpha} $ and ${\bar \theta}^+_{\dt \alpha}$, one finds \begin{eqnarray} \delta \theta^{+}_{ \alpha} &=& \delta \theta^i_{ \alpha } \, u^+_i + \theta^i_{ \alpha }\, \delta u^+_i = \xi^i_{ \alpha } \, u^+_i - \tilde{\Lambda}^{++} \, \theta^i_{ \alpha } \, u^-_i ~, \end{eqnarray} and similarly for $\delta {\bar \theta}^{+}_{\dt \alpha}$. From eqs. (\ref{4Dmaster2}) and (\ref{L-anal}) one then deduces \begin{equation} D^+_\beta \, \delta \theta^{+}_{ \alpha } = {\bar D}^+_{\dt \beta} \, \delta \theta^{+}_{ \alpha}=0~, \end{equation} and similarly for $\delta {\bar \theta}^{+}_{\dt \alpha}$.
The superconformal variations $ \delta \theta^{+}_{ \alpha } $ and $\delta {\bar \theta}^{+}_{ \dt \alpha}$ can be seen to coincide with those originally given in \cite{GIOS-conf}. One can also check that the superconformal variation of the analytic bosonic coordinates \begin{equation} y^a = x^a - 2{\rm i}\, \theta^{(i}\sigma^a {\bar \theta}^{j)}u^+_i u^-_j~, \qquad D^+_\beta \, y^a = {\bar D}^+_{\dt \beta} \, y^a=0~, \end{equation} is analytic. This actually follows from the transformation \begin{equation} \delta D^+_\alpha \equiv [ \xi - \tilde{\Lambda}^{++} D^{--} , D^+_{ \alpha} ] = \tilde{\omega}_{ \alpha}{}^{ \beta}\, D_{ \beta}^+ - ( \tilde{\sigma} + \tilde{\Lambda}^{ij} \,u^+_i u^-_j ) \, D^+_{ \alpha}~, \end{equation} and similarly for $\delta {\bar D}^+_{\dt \alpha} $. We conclude that the analytic subspace parametrized by the variables $$\zeta=( y^a,\theta^{+\alpha},{\bar\theta}^+_{\dt \alpha}, \, u^+_i,u^-_j )~, \qquad D^+_\beta \, \zeta = {\bar D}^+_{\dt \beta} \, \zeta=0~, $$ is invariant under the superconformal group. The superconformal variations of these coordinates coincide with those given in \cite{GIOS-conf}. No consistency clash occurs between the $SU(2)$-type constraints (\ref{1+2const}) and the superconformal transformation law (\ref{deltau+}), because the construction does not require imposing either of the constraints (\ref{1+2const}). Using eq. (\ref{an1}) one can show that the following descendant of the superconformal Killing vector \begin{equation} \Sigma = \tilde{\Lambda}^{ij} \,u^+_i u^-_j + \tilde{\sigma} +\overline{ \tilde{\sigma} } \end{equation} possesses the properties \begin{equation} D^+_\beta \, \Sigma = {\bar D}^+_{\dt \beta} \, \Sigma=0~, \qquad D^{++} \Sigma =\tilde{\Lambda}^{++}~. \end{equation} It turns out that the objects $\xi$, $\tilde{\Lambda}^{++}$ and $\Sigma$ determine the superconformal transformations of primary analytic superfields \cite{GIOS}.
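The property $D^{++} \Sigma = \tilde{\Lambda}^{++}$ can be verified in one line from the definition (\ref{5}) of the harmonic derivatives: since $D^{++} u^+_i =0$ and $D^{++} u^-_i = u^+_i$, while $\tilde{\sigma}$ carries no harmonic dependence,
$$
D^{++} \Sigma = \tilde{\Lambda}^{ij} \, u^+_i \, \big( D^{++} u^-_j \big)
= \tilde{\Lambda}^{ij} \, u^+_i u^+_j = \tilde{\Lambda}^{++}~.
$$
The same observation immediately gives $D^{++} \tilde{\Lambda}^{++} =0$ in (\ref{L-anal}), as $\tilde{\Lambda}^{++}$ contains no $u^-_i$.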
\subsection{Embedding of $\bm{ {\mathbb R}^{4|8} \times S^2}$: Projective realisation} Now, let us try to exploit the realisation of $S^2$ as the Riemann sphere ${\mathbb C}P^1$. The superspace can be covered by two open sets -- the north chart and the south chart -- that are specified by the conditions: (i) $u^{+ \underline{1}} \neq 0$; and (ii) $u^{+ \underline{2}} \neq 0$. In the north chart, the gauge freedom (\ref{equivalence22}) can be completely fixed by choosing \begin{eqnarray} u^{+i} \sim (1, w) \equiv w^i ~, \quad && \quad u^+_i \sim (-w,1) = w_i~, \quad \qquad \nonumber \\ u^{-i} \sim (0,-1) ~, \quad && \quad u^-_i \sim (1,0)~. \label{projectivegaugeN} \end{eqnarray} Here $w$ is the complex coordinate parametrizing the north chart. Then the transformation law (\ref{deltau+}) turns into \begin{equation} \delta w = \tilde{\Lambda}^{++}(w)~, \qquad \tilde{\Lambda}^{++} (w)= \tilde{\Lambda}^{ij} \,w_i w_j~. \label{deltaw+} \end{equation} It is seen that the superconformal group acts by holomorphic transformations. The south chart is defined by \begin{eqnarray} u^{+i} \sim (y, 1) \equiv y^i~, \quad && \quad u^+_i \sim (-1,y) =y_i ~, \nonumber \\ \quad \qquad u^{-i} \sim (1,0) ~, \quad && \quad u^-_i \sim (0,1)~, \end{eqnarray} with $y$ the local complex coordinate. The transformation law (\ref{deltau+}) becomes \begin{equation} \delta y = -\tilde{\Lambda}^{++}(y)~, \qquad \tilde{\Lambda}^{++} (y)= \tilde{\Lambda}^{ij} \,y_i y_j~. \label{deltay+} \end{equation} In the overlap of the north and south charts, the corresponding complex coordinates are related to each other in the standard way: \begin{equation} y= \frac{1}{w}~. \end{equation} \sect{5D superconformal formalism} \label{section:five} As we have seen, modulo some global topological issues, all information about the superconformal structures in a superspace is encoded in the corresponding superconformal Killing vectors.
In developing the 5D superconformal formalism below, we will not pursue global aspects, and simply base our consideration upon elaborating the superconformal Killing vectors and related concepts. Our 5D notation and conventions follow \cite{KL}. \subsection{5D superconformal Killing vectors} In 5D simple superspace ${\mathbb R}^{5|8}$ parametrized by coordinates $ z^{\hat A} = (x^{\hat a}, \theta^{\hat \alpha}_i )$, we introduce an infinitesimal coordinate transformation \begin{equation} z^{\hat A} ~\to ~ z'^{\hat A} = z^{\hat A} + \xi \, z^{\hat A} \end{equation} generated by a real vector field \begin{equation} \xi ={\bar \xi} = \xi^{\hat a} (z) \, \pa_{\hat a} + \xi^{\hat \alpha}_i (z) \, D_{\hat \alpha}^i ~, \end{equation} with $D_{\hat A} = ( \pa_{\hat a}, D_{\hat \alpha}^i ) $ the flat covariant derivatives. The transformation is said to be superconformal if $[\xi , D_{\hat \alpha}^i] \propto D_{\hat \beta}^j $, or more precisely \begin{equation} [\xi , D_{\hat \alpha}^i] = -( D_{\hat \alpha}^i \, \xi^{\hat \beta}_j )\, D_{\hat \beta}^j~. \label{master1}\end{equation} The latter equation is equivalent to \begin{equation} D_{\hat \alpha}^i \xi^{\hat b} = 2{\rm i} \,(\Gamma^{\hat b})_{\hat \alpha}{}^{\hat \beta}\, \xi^i_{\hat \beta} = - 2{\rm i} \,(\Gamma^{\hat b})_{\hat \alpha \hat \beta}\, \xi^{\hat \beta i} ~. \label{master2} \end{equation} From this it follows that \begin{equation} \ve^{ij} \,(\Gamma_{\hat a})_{\hat \alpha \hat \beta}\, \pa^{\hat a} \xi^{\hat b} = (\Gamma^{\hat b})_{\hat \alpha \hat \gamma}\, D_{\hat \beta}^j \, \xi^{\hat \gamma i} + (\Gamma^{\hat b})_{\hat \beta \hat \gamma}\, D_{\hat \alpha}^i \, \xi^{\hat \gamma j}~.
\label{master3} \end{equation} This equation implies that $\xi^{\hat a}= \xi^{\hat a}(x,\theta) $ is an ordinary conformal Killing vector, \begin{equation} \pa^{\hat a} \xi^{\hat b}+\pa^{\hat b} \xi^{\hat a} =\frac{2}{5}\, \eta^{\hat a \hat b} \, \pa_{\hat c} \, \xi^{\hat c}~, \label{master4} \end{equation} depending parametrically on the Grassmann superspace coordinates, \begin{eqnarray} \xi^{\hat a}(x,\theta) &=& b^{\hat a} (\theta) + 2\sigma (\theta) \, x^{\hat a} + \omega^{\hat a}{}_{\hat b} (\theta) \,x^{\hat b} +k^{\hat a} (\theta)\, x^2 -2 x^{\hat a} x_{\hat b}\, k^{\hat b}(\theta) ~, \end{eqnarray} with $\omega^{\hat a \hat b} =- \omega^{\hat b \hat a}$. ${}$From (\ref{master2}) one can derive a closed equation on the vector components $\xi_{\hat \beta \hat \gamma} = (\Gamma^{\hat b})_{\hat \beta \hat \gamma} \xi_{\hat b}$: \begin{equation} D^i_{( \hat \alpha}\, \xi_{\hat \beta ) \hat \gamma} =-\frac{1}{5} \,D^{ \hat \delta i} \, \xi_{\hat \delta ( \hat \alpha} \, \ve_{\hat \beta ) \hat \gamma}~. \end{equation} One can also deduce closed equations on the spinor components $ \xi^{\hat \alpha}_i $: \begin{eqnarray} D_{\hat \alpha}^{(i} \, \xi_{\hat \beta}^{ j) } &=&\frac{1}{ 4} \, \ve_{\hat \alpha \hat \beta} \,D^{\hat \gamma (i }\, \xi_{\hat \gamma}^{ j)}~, \label{master5} \\ (\Gamma^{\hat b})_{\hat \alpha \hat \beta} \,D^{\hat \alpha i} \xi^{\hat \beta }_i &=&0~. \label{master6} \end{eqnarray} At this stage it is useful to let harmonics $u^\pm_i$, such that $u^{+i}u^-_i\neq 0$, enter the scene for the first time. With the definitions $D^\pm_{\hat \alpha} = D^i_{\hat \alpha} \, u^\pm_i$ and $\xi^\pm_{\hat \alpha} = \xi^i_{\hat \alpha} \, u^\pm_i$, eq. (\ref{master5}) is equivalent to \begin{equation} D_{\hat \alpha}^{+} \xi_{\hat \beta}^{ + } =\frac{1}{4} \, \ve_{\hat \alpha \hat \beta} \,D^{+ \hat \gamma } \xi_{\hat \gamma}^{ +} \quad \Longrightarrow \quad D_{\hat \alpha}^{+} D_{\hat \beta}^{+} \xi_{\hat \gamma}^{ + } =0~.
\end{equation} The above results lead to \begin{equation} [\xi , D_{\hat \alpha}^i] = \tilde{\omega}_{\hat \alpha}{}^{\hat \beta}\, D_{\hat \beta}^i -\tilde{\sigma} \, D_{\hat \alpha}^i - \tilde{\Lambda}_j{}^i D_{\hat \alpha}^j~, \label{param} \end{equation} where \begin{eqnarray} \tilde{\omega}^{\hat \alpha \hat \beta} =-\frac12 \,D^{k (\hat \alpha} \xi^{ \hat \beta )}_k~, \quad \tilde{\sigma} = \frac{1}{8} D_{\hat \gamma}^k \xi^{\hat \gamma }_k~, \quad \tilde{\Lambda}^{ij} = \frac{1}{ 4} D_{\hat \gamma }^{( i} \xi^{j) \hat \gamma }~. \end{eqnarray} The parameters on the right of (\ref{param}) are related to each other as follows \begin{eqnarray} D_{\hat \alpha}^i \tilde{\omega}_{\hat \beta \hat \gamma} &=& 2\Big( \ve_{\hat \alpha \hat \beta} \, D_{\hat \gamma}^i \tilde{\sigma} + \ve_{\hat \alpha \hat \gamma} \, D_{\hat \beta}^i \tilde{\sigma} \Big)~, \nonumber \\ D_{\hat \alpha}^i \tilde{\Lambda}^{jk} &=& 3\Big( \epsilon^{ik} \,D_{\hat \alpha}^j \tilde{\sigma} + \epsilon^{ij} \,D_{\hat \alpha}^k \tilde{\sigma} \Big) ~. \label{relations} \end{eqnarray} The superconformal transformation of the superspace integration measure involves \begin{equation} \pa_{\hat a} \,\xi^{\hat a} - D^i_{\hat \alpha} \,\xi^{\hat \alpha}_i =2\tilde{\sigma}~. \label{trmeasure1} \end{equation} \subsection{Primary superfields} Here we give a few examples of 5D primary superfields, without Lorentz indices. Consider a completely symmetric iso-tensor superfield $H^{i_1\dots i_n}= H^{(i_1\dots i_n)}$ with the superconformal transformation law \begin{equation} \delta H^{i_1\dots i_n}= -\xi \,H^{i_1\dots i_n} -p \,\tilde{\sigma}\, H^{i_1\dots i_n} -\tilde{\Lambda}_k{}^{(i_1} H^{i_2\dots i_n )k} ~, \label{lin1} \end{equation} with $p$ a constant parameter being equal to half the conformal weight of $H^{i_1\dots i_n}$. 
It turns out that this parameter is equal to $3n$ if $H^{i_1\dots i_n}$ is constrained by \begin{equation} D_{\hat \alpha}{}^{(j} H^{i_1\dots i_n)} =0 \quad \longrightarrow \quad p=3n~. \label{lin2} \end{equation} The vector multiplet strength transforms as \begin{equation} \delta W = - \xi\,W -2\tilde{\sigma} \,W~. \label{vmfstransfo} \end{equation} The conformal weight of $W$ is uniquely fixed by the Bianchi identity \begin{equation} D^{(i}_{\hat \alpha} D_{\hat \beta }^{j)} W = \frac{1 }{ 4} \ve_{\hat \alpha \hat \beta} \, D^{\hat \gamma (i} D_{\hat \gamma }^{j)} W~ \label{Bianchi1} \end{equation} obeyed by $W$. \subsection{Analytic building blocks} In what follows we make use of the harmonics $u^\pm_i$ subject to eq. (\ref{unimod}). As in the 4D $\cN=2$ case, eq. (\ref{unimod}) has no intrinsic significance, with the only essential condition being $(u^+u^-) \equiv u^{+i}u^-_i\neq 0$. Eq. (\ref{unimod}) is nevertheless handy, for it allows one to get rid of numerous annoying factors of $(u^+u^-)$. Introduce \begin{equation} \Sigma = \tilde{\Lambda}^{ij} \,u^+_i u^-_j +3\tilde{\sigma}~,\qquad \tilde{\Lambda}^{++} = D^{++} \Sigma =\tilde{\Lambda}^{ij} \,u^+_i u^+_j~. \end{equation} It follows from (\ref{relations}) and the identity $[ D^{++}, D^+_{\hat \alpha} ]=0$, that $\Sigma$ and $\tilde{\Lambda}^{++} $ are analytic superfields, \begin{equation} D^+_{\hat \alpha} \Sigma =0~, \qquad D^+_{\hat \alpha} \tilde{\Lambda}^{++} =0~. \end{equation} Representing $\xi = \xi^{\hat a} \pa_{\hat a} -\xi^{+\hat \alpha} D^-_{\hat \alpha} + \xi^{-\hat \alpha} D^+_{\hat \alpha}$, one can now check that \begin{equation} [ \xi - \tilde{\Lambda}^{++} D^{--} \, , \, D^+_{\hat \alpha} ] = \tilde{\omega}_{\hat \alpha}{}^{\hat \beta}\, D_{\hat \beta}^+ - (\Sigma - 2\tilde{\sigma} ) \, D^+_{\hat \alpha}~. \end{equation} This relation implies that the operator $ \xi - \tilde{\Lambda}^{++} D^{--} $ maps every analytic superfield into an analytic one. 
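Explicitly, if $D^+_{\hat \alpha} \Phi =0$, the commutation relation just given yields
$$
D^+_{\hat \alpha} \Big[ \big( \xi - \tilde{\Lambda}^{++} D^{--} \big) \Phi \Big]
= \big( \xi - \tilde{\Lambda}^{++} D^{--} \big) D^+_{\hat \alpha} \Phi
- \tilde{\omega}_{\hat \alpha}{}^{\hat \beta} \, D^+_{\hat \beta} \Phi
+ \big( \Sigma - 2 \tilde{\sigma} \big) \, D^+_{\hat \alpha} \Phi = 0~,
$$
every term vanishing by the analyticity of $\Phi$.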
It is worth pointing out that the superconformal transformation of the analytic subspace measure involves \begin{equation} \pa_{\hat a} \xi^{\hat a} +D^-_{\hat \alpha} \xi^{+\hat \alpha} -D^{--}\tilde{\Lambda}^{++} =2\Sigma~. \end{equation} \subsection{Harmonic superconformal multiplets} We present here several superconformal multiplets that are globally defined over the harmonic superspace. Such a multiplet is described by a smooth Grassmann analytic superfield $\Phi^{(n)}_\kappa (z,u^+,u^-)$, \begin{equation} D^+_{\hat \alpha} \Phi^{(n)}_\kappa =0~, \end{equation} which is endowed with the following superconformal transformation law \begin{equation} \delta \Phi^{(n)}_\kappa = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, \Phi^{(n)}_\kappa -\kappa \,\Sigma \, \Phi^{(n)}_\kappa ~. \label{harmult1} \end{equation} The parameter $\kappa$ is related to the conformal weight of $ \Phi^{(n)}_\kappa$. We will call $ \Phi^{(n)}_\kappa$ an analytic density of weight $\kappa$. When $ n$ is even, one can define real superfields, $\breve{\Phi}^{(n)}_\kappa=\Phi^{(n)}_\kappa$, with respect to the analyticity-preserving conjugation \cite{GIKOS,GIOS} (also known as `smile-conjugation'). Let $V^{++}$ be a real analytic gauge potential describing a $U(1)$ vector multiplet. Its superconformal transformation is \begin{equation} \delta V^{++} = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, V^{++}~. \label{v++tr} \end{equation} Associated with the gauge potential is the field strength \begin{eqnarray} W = \frac{\rm i}{8} \int {\rm d}u \, ({\hat D}^-)^2 \, V^{++}~, \qquad ({\hat D}^\pm)^2=D^{\pm \hat \alpha} D^\pm_{\hat \alpha} \label{W2} \end{eqnarray} which is known to be invariant under the gauge transformation $\delta V^{++} = D^{++} \lambda $, where the gauge parameter $\lambda$ is a real analytic superfield.
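The gauge invariance of (\ref{W2}) can be sketched as follows, using $[D^{++}, D^-_{\hat \alpha}] = D^+_{\hat \alpha}$ together with the analyticity of $\lambda$ (and assuming, as in the flat algebra of \cite{KL}, that the contraction $\{ D^{+ \hat \alpha}, D^-_{\hat \alpha} \}$ drops out):
$$
({\hat D}^-)^2 \, D^{++} \lambda
= D^{++} \, ({\hat D}^-)^2 \lambda - \{ D^{+ \hat \alpha}, D^-_{\hat \alpha} \} \, \lambda
= D^{++} \, ({\hat D}^-)^2 \lambda ~,
$$
so that $\delta W \propto \int {\rm d}u \, D^{++} \big( ({\hat D}^-)^2 \lambda \big) =0$, the harmonic integral of a total $D^{++}$-derivative vanishing.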
The superconformal transformation of $W$, \begin{eqnarray} \delta W = -\frac{\rm i}{8} \int {\rm d}u \, ({\hat D}^-)^2 \Big( \xi + (D^{--} \tilde{\Lambda}^{++}) \Big) \, V^{++}~, \end{eqnarray} can be shown to coincide with (\ref{vmfstransfo}). There are many ways to describe a hypermultiplet. In particular, one can use an analytic superfield $q^+ (z,u)$ and its smile-conjugate $\breve{q}^+ (z,u)$ \cite{GIKOS,GIOS}. They transform as follows: \begin{equation} \delta q^+ = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, q^+ - \,\Sigma \, q^+ ~, \qquad \delta \breve{q}^+ = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, \breve{q}^+ - \,\Sigma \, \breve{q}^+ ~. \label{q+-trlaw} \end{equation} One has $\kappa =n$ in (\ref{harmult1}), if the superfield is annihilated by $D^{++}$, \begin{eqnarray} && D^+_{\hat \alpha} H^{(n)} = D^{++} H^{(n)} =0~ \quad \longrightarrow \quad H^{(n)}(z,u) = H^{i_1\dots i_n} (z) \,u^+_{i_1} \dots u^+_{i_n} ~, \nonumber \\ && \qquad \delta H^{(n)} = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, H^{(n)} -n \,\Sigma \, H^{(n)}~. \label{O(n)-harm} \end{eqnarray} Here the harmonic-independent superfield $H^{i_1\dots i_n} $ transforms according to (\ref{lin1}) with $p=3n$. \subsection{Projective superconformal multiplets} In the projective superspace approach, one deals only with superfields ${\bm \phi}^{(n)} (z,u^+)$ obeying the constraints \begin{eqnarray} && D^+_{\hat \alpha} {\bm \phi}^{(n)} = D^{++} {\bm \phi}^{(n)} =0~, \qquad n\geq 0~. \label{holom2} \end{eqnarray} Here the first constraint means that ${\bm \phi}^{(n)} $ is Grassmann analytic, while the second constraint demands independence of $u^-$. Unlike the harmonic superspace approach, however, ${\bm \phi}^{(n)} (z,u^+)$ is not required to be well-defined over the two-sphere, that is, ${\bm \phi}^{(n)}$ may have singularities (say, poles) at some points of $S^2$. 
The presence of singularities turns out to be harmless, since the projective-superspace action involves a contour integral in $S^2$, see below. We assume that ${\bm \phi}^{(n)} (z,u^+)$ is non-singular outside the north and south poles of $ S^2$. In the north chart, we can represent \begin{equation} D^+_{\hat \alpha} = - u^{+\underline{1}}\, \nabla_{\hat \alpha} (w)~, \qquad \nabla_{\hat \alpha} (w) = -D^i_{\hat \alpha} \, w_i~, \qquad w_i = (-w, 1)~. \label{nabla0} \end{equation} Then the equations (\ref{holom2}) are equivalent to \begin{equation} \phi (z, w) = \sum_{n=-\infty}^{+\infty} \phi_n (z) \,w^n~, \qquad \nabla_{\hat \alpha} (w) \, \phi(z,w)=0~, \label{holom0} \end{equation} with the holomorphic superfield $\phi (z, w) \propto {\bm \phi}^{(n)} (z,u^+)$. These relations define a {\it projective multiplet}, following the four-dimensional terminology \cite{projective}. Associated with $\phi (z,w) $ is its smile-conjugate \begin{eqnarray} \breve{\phi} (z, w) = \sum_{n=-\infty}^{+\infty} (-1)^n \, {\bar \phi}_{-n} (z) \,w^n~, \qquad \nabla_{\hat \alpha} (w) \, \breve{\phi}(z,w)=0~, \label{holom3} \end{eqnarray} which is also a projective multiplet. If $\breve{\phi} (z, w) = {\phi} (z, w) $, the projective superfield is called real. Below we present several superconformal multiplets as defined in the north chart. The corresponding transformation laws involve the two analytic building blocks: $$ \tilde{\Lambda}^{++} (w)= \tilde{\Lambda}^{ij} \,w_i w_j = \tilde{\Lambda}^{\underline{1} \underline{1} }\, w^2 -2 \tilde{\Lambda}^{\underline{1} \underline{2}}\, w + \tilde{\Lambda}^{\underline{2} \underline{2}} ~,\quad \Sigma (w) = \tilde{\Lambda}^{\underline{1} i} \,w_i +3 \tilde{\sigma} = - \tilde{\Lambda}^{\underline{1} \underline{1}} \,w + \tilde{\Lambda}^{\underline{1} \underline{2}} +3 \tilde{\sigma}~.
$$ Similar structures occur in the south chart, that is $$ \tilde{\Lambda}^{++} (y)= \tilde{\Lambda}^{ij} \,y_i y_j = \tilde{\Lambda}^{\underline{1} \underline{1} } -2 \tilde{\Lambda}^{\underline{1} \underline{2}}\, y + \tilde{\Lambda}^{\underline{2} \underline{2}} \,y^2~,\quad \Sigma (y) = \tilde{\Lambda}^{\underline{2} i} \,y_i +3 \tilde{\sigma} = - \tilde{\Lambda}^{\underline{1} \underline{2}} + \tilde{\Lambda}^{\underline{2} \underline{2}}\, y +3 \tilde{\sigma}~. $$ In the overlap of the two charts, we have \begin{eqnarray} \tilde{\Lambda}^{++} (y)&=& \frac{1}{w^2} \,\tilde{\Lambda}^{++} (w) \quad \longrightarrow \quad \tilde{\Lambda}^{++} (y)\,\pa_y =- \tilde{\Lambda}^{++} (w)\,\pa_w \nonumber \\ \Sigma(y) &=& \Sigma (w) +\frac{1}{w} \,\tilde{\Lambda}^{++} (w)~. \end{eqnarray} To realise a massless vector multiplet, one uses the so-called tropical multiplet described by \begin{equation} V (z, w) = \sum_{n=-\infty}^{+\infty} V_n (z) \,w^n~, \qquad \bar{V}_n = (-1)^n \,V_{-n}~. \label{tropical} \end{equation} Its superconformal transformation is \begin{equation} \delta V= - \Big( \xi + \tilde{\Lambda}^{++} (w)\,\pa_w \Big) \, V~. \label{tropicaltransf} \end{equation} The field strength of the vector multiplet\footnote{A more general form for the field strength (\ref{strength3}) is given in Appendix B.} is \begin{equation} W(z) =- \frac{1}{ 16\pi {\rm i}} \oint {\rm d} w \, (\hat{D}^-)^2 \, V(z,w) =\frac{1}{ 4 \pi {\rm i}} \oint \frac{{\rm d} w}{w} \, \cP (w) \, V(z,w) ~, \label{strength3} \end{equation} where \begin{eqnarray} \cP(w) =\frac{1}{ 4w} \, (\bar D_{\underline{1}})^2 + \pa_5 - \frac{w}{ 4} \, (D^{\underline{1}})^2~. \label{Diamond} \end{eqnarray} The superconformal transformation of $W$ can be shown to coincide with (\ref{vmfstransfo}).
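Both overlap relations can be verified directly from the explicit chart expressions above. For instance, with $y = 1/w$,
$$
\Sigma (y) - \Sigma (w) =
\tilde{\Lambda}^{\underline{1} \underline{1}}\, w
- 2 \tilde{\Lambda}^{\underline{1} \underline{2}}
+ \tilde{\Lambda}^{\underline{2} \underline{2}}\, y
= \frac{1}{w} \Big( \tilde{\Lambda}^{\underline{1} \underline{1}}\, w^2
- 2 \tilde{\Lambda}^{\underline{1} \underline{2}}\, w
+ \tilde{\Lambda}^{\underline{2} \underline{2}} \Big)
= \frac{1}{w} \, \tilde{\Lambda}^{++} (w)~,
$$
while $\pa_y = -w^2 \, \pa_w$ combined with $\tilde{\Lambda}^{++} (y)= w^{-2}\, \tilde{\Lambda}^{++} (w)$ reproduces $\tilde{\Lambda}^{++} (y)\,\pa_y = - \tilde{\Lambda}^{++} (w)\,\pa_w$.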
The field strength (\ref{strength3}) is invariant under the gauge transformation \begin{equation} \delta V(z,w) = {\rm i}\Big( \breve{\lambda} (z,w)-\lambda (z,w) \Big)~, \label{lambda4} \end{equation} with $\lambda(z,w)$ an arbitrary arctic multiplet, see below. To describe a massless off-shell hypermultiplet, one can use the so-called arctic multiplet $\Upsilon (z, w)$: \begin{equation} {\bm q}^+ (z, u) = u^{+\underline{1}}\, \Upsilon (z, w) \sim \Upsilon (z, w)~, \quad \qquad \Upsilon (z, w) = \sum_{n=0}^{\infty} \Upsilon_n (z) w^n~. \label{qsingular} \end{equation} The smile-conjugation of $ {\bm q}^+$ leads to the so-called antarctic multiplet $\breve{\Upsilon} (z, w) $: \begin{equation} \breve{{\bm q}}^+ (z, u) = u^{+\underline{2}} \,\breve{\Upsilon} (z, w) \sim w\, \breve{\Upsilon} (z, w) ~, \qquad \quad \breve{\Upsilon} (z, w) = \sum_{n=0}^{\infty} (-1)^n {\bar \Upsilon}_n (z) \frac{1}{w^n}\;. \label{smileqsingular} \end{equation} Their superconformal transformations are \begin{eqnarray} \delta \Upsilon = - \Big( \xi &+& \tilde{\Lambda}^{++} (w)\,\pa_w \Big) \Upsilon - \Sigma (w) \, \Upsilon ~, \nonumber \\ \delta \breve{\Upsilon} = - \frac{1}{w}\Big( \xi &+& \tilde{\Lambda}^{++} (w) \,\pa_w \Big) (w\,\breve{\Upsilon} ) -\Sigma (w) \,\breve{\Upsilon} ~. \label{polarsuperconf} \end{eqnarray} In the south chart, these transformations take the form \begin{eqnarray} \delta \Upsilon = - \frac{1}{y} \Big( \xi &-& \tilde{\Lambda}^{++} (y)\,\pa_y \Big) (y\,\Upsilon ) - \Sigma (y) \, \Upsilon ~, \nonumber \\ \delta \breve{\Upsilon} = - \Big( \xi &-& \tilde{\Lambda}^{++} (y) \,\pa_y \Big) \breve{\Upsilon} -\Sigma (y) \,\breve{\Upsilon} ~. \end{eqnarray} Together, $\Upsilon(z,w)$ and $\breve{\Upsilon}(z,w)$ constitute the so-called polar multiplet.
Since the product of two arctic superfields is again arctic, from (\ref{polarsuperconf}) we obtain more general transformation laws \begin{eqnarray} \delta \Upsilon_\kappa = - \Big( \xi &+& \tilde{\Lambda}^{++} (w)\,\pa_w \Big) \Upsilon_\kappa - \kappa\,\Sigma (w) \, \Upsilon_\kappa ~, \nonumber \\ \delta \breve{\Upsilon}_\kappa = - \frac{1}{w^\kappa}\Big( \xi &+& \tilde{\Lambda}^{++} (w) \,\pa_w \Big) (w^\kappa\,\breve{\Upsilon}_\kappa ) -\kappa\,\Sigma (w) \,\breve{\Upsilon}_\kappa ~, \label{polarsuperconf-kappa} \end{eqnarray} for some parameter $\kappa$. The case $\kappa=1$ corresponds to free hypermultiplet dynamics, see below. Since the product $U_\kappa = \breve{\Upsilon}_\kappa \, \Upsilon_\kappa $ is a tropical multiplet, we obtain more general transformation laws than the one defined by eq. (\ref{tropicaltransf}): \begin{eqnarray} \delta U_\kappa = - \frac{1}{w^\kappa}\Big( \xi &+& \tilde{\Lambda}^{++} (w) \,\pa_w \Big) (w^\kappa\,U_\kappa ) -2\kappa\,\Sigma (w) \,U_\kappa ~. \label{tropicaltransf-kappa} \end{eqnarray} ${}$Finally, let us consider the projective-superspace reformulation of the multiplets (\ref{O(n)-harm}) with an even superscript, \begin{eqnarray} H^{(2n)} (z,u) &=& \big({\rm i}\, u^{+\underline{1}} u^{+\underline{2}}\big)^n H^{[2n]}(z,w) \sim \big({\rm i}\, w\big)^n H^{[2n]}(z,w)~, \\ H^{[2n]}(z,w) &=& \sum_{k=-n}^{n} H_k (z) w^k~, \qquad {\bar H}_k = (-1)^k H_{-k} ~. \nonumber \label{O(n)-proj} \end{eqnarray} The projective superfield $H^{[2n]}(z,w) $ is often called a real $O(2n)$ multiplet \cite{projective}. Its superconformal transformation in the north chart is \begin{eqnarray} \delta H^{[2n]} &=& - \frac{1}{w^n}\Big( \xi + \tilde{\Lambda}^{++} (w) \,\pa_w \Big) (w^n\, H^{[2n]} ) -2n \,\Sigma (w)\, H^{[2n]} ~. \label{o2n} \end{eqnarray} In a similar way one can introduce complex $O(2n+1)$ multiplets. In what follows, we will use the same name `$O(n)$ multiplet' for both harmonic multiplets (\ref{O(n)-harm}) and the projective ones just introduced.
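For orientation, it is instructive to write out the simplest case $n=1$ explicitly (a sketch, assuming the standard pseudo-reality condition $\overline{H^{ij}} = H_{ij} = \ve_{ik} \ve_{jl} H^{kl}$). In the north-chart gauge $u^+_i \sim w_i = (-w,1)$, the harmonic $O(2)$ multiplet becomes
$$
H^{++} = H^{ij} \, u^+_i u^+_j \sim
H^{\underline{1} \underline{1}}\, w^2 - 2 H^{\underline{1} \underline{2}}\, w
+ H^{\underline{2} \underline{2}} = {\rm i}\, w \, H^{[2]}(z,w)~,
$$
so that
$$
H^{[2]}(z,w) = \frac{H_{-1}}{w} + H_0 + H_1 \, w~, \qquad
H_1 = -{\rm i}\, H^{\underline{1} \underline{1}}~, \quad
H_0 = 2\, {\rm i}\, H^{\underline{1} \underline{2}}~, \quad
H_{-1} = -{\rm i}\, H^{\underline{2} \underline{2}}~.
$$
Pseudo-reality of $H^{ij}$ then reproduces ${\bar H}_k = (-1)^k H_{-k}$; for instance, ${\bar H}_1 = {\rm i}\, \overline{H^{\underline{1} \underline{1}}} = {\rm i}\, H^{\underline{2} \underline{2}} = - H_{-1}$.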
Among the projective superconformal multiplets considered, it is only the $O(n)$ multiplets which can be lifted to well-defined representations of the superconformal group on a compactified 5D harmonic superspace. The other multiplets realise the superconformal algebra only. \sect{5D superconformal theories} \label{section:six} With the tools developed, we are prepared to construct 5D superconformal theories. Superfield formulations for 5D $\cN=1$ rigid supersymmetric theories were elaborated earlier in the harmonic \cite{Z,KL} and projective \cite{KL} superspace settings.\footnote{In the case of 6D $\cN=(1,0)$ rigid supersymmetric theories, superfield formulations have been developed in the conventional \cite{6Dstand}, harmonic \cite{6Dhar} and projective \cite{6Dproj} superspace settings.} \subsection{Models in harmonic superspace} Let $\cL^{(+4)}$ be an analytic density of weight $+2$. Its superconformal transformation is a total derivative, \begin{eqnarray} \delta \cL^{(+4)} &=& - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, \cL^{(+4)} -2 \,\Sigma \, \cL^{(+4)} \nonumber \\ &=&-\pa_{\hat a} \Big( \xi^{\hat a} \, \cL^{(+4)}\Big) -D^-_{\hat \alpha} \Big( \xi^{+ \hat \alpha} \, \cL^{(+4)}\Big) + D^{--} \Big( \tilde{\Lambda}^{++} \, \cL^{(+4)}\Big)~. \end{eqnarray} Therefore, such a superfield generates a superconformal invariant of the form \begin{equation} \int {\rm d} \zeta^{(-4)} \, \cL^{(+4)} ~, \end{equation} where \begin{equation} \int {\rm d} \zeta^{(-4)} := \int{\rm d} u \int {\rm d}^5 x \, (\hat{D}^-)^4 ~, \qquad (\hat{D}^\pm)^4 = -\frac{1}{ 32} (\hat{D}^\pm)^2 \, (\hat{D}^\pm)^2~. \end{equation} This is the harmonic superspace action \cite{GIOS} as applied to the five-dimensional case. Let $V^{++}$ be the gauge potential of an Abelian vector multiplet.
Given a real $O(2)$ multiplet $\cL^{++}$, \begin{equation} D^+_{\hat \alpha} \cL^{++} = D^{++} \cL^{++} =0~,\qquad \delta \cL^{++} = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, \cL^{++} -2 \,\Sigma \, \cL^{++}~, \label{tensor} \end{equation} we can generate the following superconformal invariant \begin{equation} \int {\rm d} \zeta^{(-4)} \, V^{++}\,\cL^{++} ~. \end{equation} Because of the constraint $D^{++} \cL^{++} =0$, the integral is invariant under the vector multiplet gauge transformation $\delta V^{++} =- D^{++} \lambda$, with $\lambda $ a real analytic gauge parameter. The field strength of the vector multiplet, $W$, is a primary superfield with the transformation (\ref{vmfstransfo}). Using $W$, one can construct the following analytic superfield \cite{KL} \begin{equation} -{\rm i} \, G^{++} = D^{+ \hat \alpha} W \, D^+_{\hat \alpha} W +\frac12 \,W \, ({\hat D}^+)^2 W ~, \qquad D^+_{\hat \alpha} G^{++}=D^{++}G^{++} =0 ~. \label{G++} \end{equation} This superfield transforms as a harmonic superfield of weight 2, \begin{equation} \delta G^{++} = - \Big( \xi - \tilde{\Lambda}^{++} D^{--} \Big) \, G^{++} -2 \,\Sigma \, G^{++} ~. \label{G++transf} \end{equation} In other words, $G^{++}$ is a real $O(2)$ multiplet. As a result, the supersymmetric Chern-Simons action\footnote{A different form for this action was given in \cite{Z}.} \cite{KL} \begin{equation} S_{\rm CS} [V^{++}]= \frac{1}{12 } \int {\rm d} \zeta^{(-4)} \, V^{++} \, G^{++} ~ \label{CS2} \end{equation} is superconformally invariant. Super Chern-Simons theory (\ref{CS2}) is quite remarkable as compared with the superconformal models of a single vector multiplet in four and six dimensions. In the 4D $\cN=2$ case, the analogue of $G^{++}$ in (\ref{CS2}) is known to be $D^{+\alpha} D^+_\alpha W= {\bar D}^+_{\dt \alpha} {\bar D}^{+\dt \alpha} {\bar W}$, with $W$ the chiral field strength, and therefore the model is free.
In the 6D $\cN=(1,0)$ case, the analogue of $G^{++}$ in (\ref{CS2}) is $(D^+)^4 D^-_{\hat \alpha} W^{-\hat \alpha}$, see \cite{ISZ} for more details, and therefore the model is not only free but also involves higher derivatives. It is only in five dimensions that the requirement of superconformal invariance leads to a nontrivial dynamical system. The model (\ref{CS2}) admits interesting generalisations. In particular, given several Abelian vector multiplets $V^{++}_I$, where $I=1,\dots, n$, the composite superfield (\ref{G++}) is generalised as follows: \begin{eqnarray} G^{++} ~\to~ G^{++}_{IJ} =G^{++}_{(IJ)} &=&{\rm i}\, \Big\{ D^{+ \hat \alpha} W_{I} \, D^+_{\hat \alpha} W_{J} +\frac12 \,W_{(I} \, ({\hat D}^+)^2 W_{J)} \Big\}~, \nonumber \\ D^+_{\hat \alpha} G^{++}_{IJ}&=&D^{++}G^{++}_{IJ} =0 ~. \end{eqnarray} The gauge-invariant and superconformal action (\ref{CS2}) turns into \begin{equation} \tilde{S}_{\rm CS} = \frac{1}{12 } \int {\rm d} \zeta^{(-4)} \, V^{++}_I \, c_{I ,JK}\, G^{++}_{JK} ~, \qquad c_{I ,JK} =c_{I, KJ}~, \label{CS3} \end{equation} for some constant parameters $c_{I ,JK} $. One can also generalise the super Chern-Simons theory (\ref{CS2}) to the non-Abelian case. In harmonic superspace, some superconformal transformation laws are effectively independent (if properly understood) of the dimension of space-time. As a result, some 4D $\cN=2$ superconformal theories can be trivially extended to five dimensions. In particular, the model for a massless $U(1)$ charged hypermultiplet \cite{GIKOS} \begin{equation} \label{q-hyper} S_{\rm hyper}= - \int {\rm d} \zeta^{(-4)}\, \breve{q}{}^+ \Big( D^{++} +{\rm i} \, e\, V^{++} \Big) \,q^+ \end{equation} can be seen to be superconformal. This follows from eqs. (\ref{v++tr}) and (\ref{q+-trlaw}), in conjunction with the observation that the transformation laws of $q^+$ and $D^{++} q^+$ are identical.
The dynamical system $S_{\rm CS} + S_{\rm hyper}$ can be chosen to describe the supergravity compensator sector (vector multiplet plus hypermultiplet) when describing 5D simple supergravity within the superconformal tensor calculus \cite{Ohashi,Bergshoeff}. Then, the hypermultiplet charge $e$ is equivalent to the presence of a non-vanishing cosmological constant, similar to the 4D $\cN=2$ case \cite{GIOS}. Our next example is a naive 5D generalisation of the 4D $\cN=2$ improved tensor multiplet \cite{deWPV,LR,projective0} which was described in the harmonic superspace approach in \cite{GIO1,GIOS}. Let us consider the action \begin{eqnarray} S_{\rm tensor} [H^{++}] = \int {\rm d} \zeta^{(-4)} \,\cL^{(+4)} (H^{++}, u) ~, \label{tensoraction1} \end{eqnarray} where \begin{eqnarray} \cL^{(+4)} (H^{++}, u) = \mu^3 \, \Big( \frac{ \cH^{++} }{1 + \sqrt{ 1+ \cH^{++} \,c^{--} }} \Big)^2~, \qquad \cH^{++} = H^{++} - c^{++} ~, \end{eqnarray} with $\mu$ a constant parameter of unit mass dimension, and $c^{++}$ a space-time independent holomorphic vector field on $S^2$, \begin{equation} c^{\pm \pm }(u) = c^{ij} \,u^\pm_i u^\pm_j ~, \qquad c^{ij} c_{ij} =2~, \qquad c^{ij} = {\rm const}~. \end{equation} Here $H^{++}(z,u)$ is a real $O(2)$ multiplet possessing the superconformal transformation law (\ref{O(n)-harm}) with $n=2$. The superconformal invariance of (\ref{tensoraction1}) can be proved in complete analogy with the detailed consideration given in \cite{GIO1,GIOS}. Now, let us couple the vector multiplet to the real $O(2)$ multiplet by putting forward the action \begin{eqnarray} S_{\rm vector-tensor}[V^{++},H^{++}]= S_{\rm CS} [V^{++}] + \kappa \int {\rm d} \zeta^{(-4)} \, V^{++} \, H^{++} + S_{\rm tensor} [H^{++}]~, \end{eqnarray} with $\kappa$ a coupling constant. This action is both gauge-invariant and superconformal. It is a five-dimensional generalisation of the 4D $\cN=2$ model for a massive tensor multiplet introduced in \cite{Kuz-ten}. 
The dynamical system $S_{\rm vector-tensor}$ can be chosen to describe the supergravity compensator sector (vector multiplet plus tensor multiplet) when describing 5D simple supergravity within the superconformal tensor calculus \cite{Ohashi,Bergshoeff}. Then, the coupling constant $\kappa$ is equivalent to a cosmological constant, similar to the 4D $\cN=2$ case \cite{BS}. Finally, consider the vector multiplet model \begin{equation} S_{\rm CS} [V^{++}] + S_{\rm tensor} [G^{++} / \mu^3]~, \end{equation} with $G^{++}$ the composite superfield (\ref{G++}). The second term here turns out to be a unique superconformal extension of the $F^4$-term, where $F$ is the field strength of the component gauge field. In this respect, it is instructive to recall its 4D $\cN=2$ analogue \cite{deWGR} \begin{equation} \int {\rm d}^4 x \,{\rm d}^8 \theta\, \ln W \ln {\bar W} ~. \end{equation} The latter can be shown \cite{BKT} to be a unique $\cN=2$ superconformal invariant in the family of actions of the form $\int {\rm d}^4x \,{\rm d}^8 \theta \,H(W, {\bar W})$ introduced for the first time in \cite{Hen}. In five space-time dimensions, if one looks for a superconformal invariant of the form $\int {\rm d}^5x \,{\rm d}^8 \theta \,H(W)$, the general solution is $H(W) \propto W$, as follows from (\ref{trmeasure1}) and (\ref{vmfstransfo}), and this choice corresponds to a total derivative. \subsection{Models in projective superspace} Let $\cL (z,w) $ be an analytic superfield transforming according to (\ref{tropicaltransf-kappa}) with $\kappa=1$. This transformation law can be rewritten as \begin{eqnarray} w\, \delta \cL &=& - \Big( \xi + \tilde{\Lambda}^{++} \,\pa_w \Big) (w \, \cL ) -2 w\, \Sigma \, \cL \nonumber \\ &=& -\pa_{\hat a} \Big( \xi^{\hat a} \, w\, \cL \Big) -D^-_{\hat \alpha} \Big( \xi^{+ \hat \alpha} \, w\, \cL \Big) -\pa_w \Big( \tilde{\Lambda}^{++} \, w\,\cL \Big)~. 
\label{o2} \end{eqnarray} Such a superfield turns out to generate a superconformal invariant of the form \begin{eqnarray} I = \oint \frac{{\rm d} w}{2\pi {\rm i}} \, \int {\rm d}^5 x \, (\hat{D}^-)^4 \, w\,\cL (z,w)~, \label{projac1} \end{eqnarray} where $\oint {\rm d} w $ is a (model-dependent) contour integral in ${\mathbb C}P^1$. Indeed, it follows from (\ref{o2}) that this functional does not change under the superconformal transformations. Eq. (\ref{projac1}) generalises the projective superspace action \cite{projective0,Siegel} to the five-dimensional case. A more general form for this action, which does not imply the projective gauge conditions (\ref{projectivegaugeN}) and is based on the construction in \cite{Siegel}, is given in Appendix B. It is possible to bring the action (\ref{projac1}) to a somewhat simpler form if one exploits the fact that $\cL$ is Grassmann analytic. Using the considerations outlined in Appendix C gives \begin{eqnarray} \int {\rm d}^5 x \, (\hat{D}^-)^4 \, \cL =\frac{1}{w^2} \int {\rm d}^5 x \, D^4 \cL \Big|~, \qquad D^4 = \frac{1}{16} (D^{\underline{1}})^2 ({\bar D}_{\underline{1}})^2 \Big|~. \end{eqnarray} Here $D^4$ is the Grassmann part of the integration measure of 4D $\cN=1$ superspace, $\int {\rm d}^4 \theta = D^4$. Then, functional (\ref{projac1}) turns into \begin{eqnarray} I= \oint \frac{ {\rm d} w}{2\pi {\rm i} w} \, \int {\rm d}^5 x \, D^4 \cL = \oint \frac{ {\rm d} w}{2\pi {\rm i}w} \int {\rm d}^5 x \,{\rm d}^4 \theta \, \cL ~. \label{projac2} \end{eqnarray} Our first example is the tropical multiplet formulation for the super Chern-Simons theory \cite{KL} \begin{equation} S_{\rm CS} = - \frac{1}{12 } \oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, V\,G ~, \label{CS-proj} \end{equation} with the contour around the origin. 
Here $G(w) $ is the composite $O(2) $ multiplet (\ref{G++}) constructed out of the tropical gauge potential $V(w)$, \begin{equation} G^{++}= ({\rm i} \,u^{+\underline{1}}u^{+\underline{2}}) \, G(w) \sim {\rm i} \,w\,G(w)~, \qquad G(w) = -\frac{1}{ w} \, \Psi+K+ w\, \bar \Psi~. \label{sYMRed} \end{equation} The explicit expressions for the superfields $\Psi$ and $K$ can be found in \cite{KL}. The above consideration and the transformation laws (\ref{tropicaltransf}) and (\ref{G++transf}) imply that the action (\ref{CS-proj}) is superconformal. Next, let us generalise to five dimensions the charged $\Upsilon$-hypermultiplet model of \cite{projective}: \begin{equation} S_{\rm hyper}= \oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, \breve{\Upsilon} \,{\rm e}^{ q \, V }\, \Upsilon ~, \end{equation} with $q$ the hypermultiplet charge, and the integration contour around the origin. This action is superconformal, in accordance with the transformation laws (\ref{tropicaltransf}) and (\ref{polarsuperconf}). It is also invariant under the gauge transformations \begin{equation} \delta \Upsilon = {\rm i} \, q \, \Upsilon ~, \qquad \delta V = {\rm i} ( \breve{\lambda}-\lambda )~, \end{equation} with $\lambda$ an arctic superfield. 
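The contour integrals around the origin, such as in (\ref{projac2}) and (\ref{CS-proj}), simply project onto the $w^0$ Laurent coefficient of the integrand. The bookkeeping can be sketched in a few lines of plain Python (the dictionaries and numeric coefficients below are illustrative stand-ins of our own devising; the physical coefficients are superfields, but the contour algebra is identical):

```python
def laurent_mul(a, b):
    """Multiply two Laurent polynomials in w, encoded as {power: coefficient}."""
    out = {}
    for p, ca in a.items():
        for q, cb in b.items():
            out[p + q] = out.get(p + q, 0) + ca * cb
    return out

def contour_zero_mode(f):
    """Evaluate oint dw/(2*pi*i*w) f(w): the contour around the origin
    picks out the coefficient of w**0 in the Laurent expansion of f."""
    return f.get(0, 0)

# Toy stand-ins: V(w) = 2/w + 3 + 5w and G(w) = 7/w + 11 + 13w
V = {-1: 2, 0: 3, 1: 5}
G = {-1: 7, 0: 11, 1: 13}
print(contour_zero_mode(laurent_mul(V, G)))  # 2*13 + 3*11 + 5*7 = 94
```

Only products of coefficients with opposite powers of $w$ survive, which is why a Lagrangian of the form $V G$ with $G(w)$ as in (\ref{sYMRed}) couples $V_1$ to $\Psi$, $V_0$ to $K$, and $V_{-1}$ to $\bar\Psi$.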
Now, let us couple the vector multiplet to a real $O(2)$ multiplet $H(w)$ \begin{equation} H^{++}= ({\rm i} \,u^{+\underline{1}}u^{+\underline{2}}) \, H(w) \sim {\rm i} \,w\,H(w)~, \qquad H(w) = -\frac{1}{ w} \, \Phi+L + w\, \bar \Phi~. \label{O(2)-components} \end{equation} We introduce the vector-tensor system \begin{eqnarray} S &=& - \oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, V \Big\{ \frac{1}{12 }\, G +\kappa \, H \Big\} + \mu^3 \oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, H \, \ln H ~, \label{vt-proj} \end{eqnarray} where the first term on the right involves a contour around the origin, while the second comes with a contour turning clockwise and anticlockwise around the roots of the quadratic equation $w\, H(w)=0$. The second term in (\ref{vt-proj}) is a minimal 5D extension of the 4D $\cN=2$ improved tensor multiplet \cite{projective0}. It should be pointed out that the component superfields in (\ref{O(2)-components}) obey the constraints \cite{KL} \begin{equation} {\bar D}^{\dt \alpha} \, \Phi =0~, \qquad -\frac{1}{ 4} {\bar D}^2 \, L = \pa_5\, \Phi~. \end{equation} It should also be remarked that the real linear superfield $L$ can always be dualised into a chiral scalar and its conjugate \cite{KL}, which generates a special chiral superpotential. Given several $O(2) $ multiplets $H^I(w)$, where $I=1,\dots,n$, superconformal dynamics is generated by the action \begin{equation} S=\oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, \cF( H^I ) ~, \qquad I=1,\dots ,n~ \end{equation} where $\cF (H) $ is a weakly homogeneous function of first degree in the variables $H$, \begin{equation} \oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, \Big\{ H^I \, \frac{\pa \cF(H ) }{\pa H^I} -\cF (H ) \Big\} =0~. 
\end{equation} This is completely analogous to the four-dimensional case \cite{projective0,BS,dWRV} where the component structure of such sigma-models has been studied in detail \cite{deWKV}. A great many superconformal models can be obtained if one considers $\Upsilon$-hypermultiplet actions of the form \begin{eqnarray} S = \oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, K \big( \Upsilon^I , \breve{\Upsilon}^{ \bar J} \big)~, \qquad I,{\bar J} =1,\dots ,n~ \label{nact} \end{eqnarray} with the contour around the origin. Let us first assume that the superconformal transformations of all $\Upsilon$'s and $\breve{\Upsilon}$'s have the form (\ref{polarsuperconf}). Then, in accordance with general principles, the action is superconformal if $K ( \Upsilon , \breve{\Upsilon} ) $ is a weakly homogeneous function of first degree in the variables $\Upsilon$, \begin{equation} \oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, \Big\{ \Upsilon^I \, \frac{\pa K(\Upsilon, \breve{\Upsilon} ) }{\pa \Upsilon^I} -K(\Upsilon, \breve{\Upsilon} ) \Big\} =0~. \label{polar-homog} \end{equation} This homogeneity condition is compatible with the K\"ahler invariance \begin{equation} K(\Upsilon, \breve{\Upsilon}) \quad \longrightarrow \quad K(\Upsilon, \breve{\Upsilon}) ~+~ \Lambda(\Upsilon) \,+\, {\bar \Lambda} (\breve{\Upsilon} ) \end{equation} which the model (\ref{nact}) possesses \cite{Kuzenko,GK,KL}. Unlike the $O(n)$ multiplets, the superconformal transformations of $\Upsilon$ and $\breve{\Upsilon}$ are not fixed uniquely by the constraints, as directly follows from (\ref{polarsuperconf-kappa}). Therefore, one can consider superconformal sigma-models of the form (\ref{nact}) in which the dynamical variables $\Upsilon$'s consist of several subsets with different values for the weight $\kappa$ in (\ref{polarsuperconf-kappa}), and then $K(\Upsilon, \breve{\Upsilon} )$ should obey weaker conditions than eq. (\ref{polar-homog}). 
Such a situation occurs, for instance, if one starts with a gauged linear sigma-model and then integrates out the gauge multiplet, in the spirit of \cite{LR,dWRV}. As an example, consider \begin{equation} S= \oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, \Big\{ \breve{\Upsilon}^\alpha \,\eta_{\alpha \beta} \, \Upsilon^\beta \,{\rm e}^{ V } + \breve{\Upsilon}^\mu \,\eta_{\mu \nu} \, \Upsilon^\nu \,{\rm e}^{ - V } \Big\} ~, \end{equation} where $\eta_{\alpha \beta} $ and $\eta_{\mu \nu}$ are constant diagonal metrics, $\alpha=1, \dots , m$ and $\mu =1, \dots , n$. Integrating out the tropical multiplet gives the gauge-invariant action \begin{equation} S= 2 \oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, \sqrt{ \breve{\Upsilon}^\alpha \,\eta_{\alpha \beta} \, \Upsilon^\beta \, \breve{\Upsilon}^\mu \,\eta_{\mu \nu} \, \Upsilon^\nu }~. \end{equation} The gauge freedom can be completely fixed by setting, say, one of the superfields $\Upsilon^\nu$ to be unity. Then, the action becomes \begin{equation} S= 2 \oint \frac{{\rm d}w}{2\pi {\rm i}w} \int {\rm d}^5 x \, {\rm d}^4 \theta \, \sqrt{ \breve{\Upsilon}^\alpha \,\eta_{\alpha \beta} \, \Upsilon^\beta \,( \eta_{nn} + \breve{\Upsilon}^{\underline \mu} \, \eta_{\underline{\mu} \underline{\nu}} \, \Upsilon^{\underline \nu}) }~, \end{equation} where $\underline{\mu}, \underline{\nu}=1,\dots,n-1.$ This action is still superconformal, but now $ \Upsilon^\beta $ and $\Upsilon^{\underline \nu}$ transform according to (\ref{polarsuperconf-kappa}) with $\kappa=2$ and $\kappa=0$, respectively. Sigma-models (\ref{nact}) have an interesting geometric interpretation if $K(\Phi, \bar \Phi )$ is the K\"ahler potential of a K\"ahler manifold $\cM$ \cite{Kuzenko,GK,KL}. 
Among the component superfields of $\Upsilon (z,w) = \sum_{n=0}^{\infty} \Upsilon_n (z) \,w^n$, the leading components $\Phi = \Upsilon_0 | $ and $\Gamma = \Upsilon_1 |$, considered as 4D $\cN=1$ superfields, are constrained: \begin{equation} {\bar D}^{\dt \alpha} \, \Phi =0~, \qquad -\frac{1}{ 4} {\bar D}^2 \, \Gamma = \pa_5\, \Phi~. \label{pm-constraints} \end{equation} The superfields $\Phi$ and $\Gamma$ can be regarded as a complex coordinate of the K\"ahler manifold and a tangent vector at the point $\Phi$ of the same manifold, and therefore they parametrize the tangent bundle $T\cM$ of the K\"ahler manifold. The other components, $\Upsilon_2, \Upsilon_3, \dots$, are complex unconstrained superfields. These superfields are auxiliary since they appear in the action without derivatives. The auxiliary superfields $\Upsilon_2, \Upsilon_3, \dots$, and their conjugates, can be eliminated with the aid of the corresponding algebraic equations of motion \begin{equation} \oint {{\rm d} w} \,w^{n-1} \, \frac{\pa K(\Upsilon, \breve{\Upsilon} ) }{\pa \Upsilon^I} = 0~, \qquad n \geq 2 ~. \label{int} \end{equation} Their elimination can be carried out using the ansatz \begin{eqnarray} \Upsilon^I_n = \sum_{p=0}^{\infty} U^I{}_{J_1 \dots J_{n+p} \, \bar{L}_1 \dots \bar{L}_p} (\Phi, {\bar \Phi})\, \Gamma^{J_1} \dots \Gamma^{J_{n+p}} \, {\bar \Gamma}^{ {\bar L}_1 } \dots {\bar \Gamma}^{ {\bar L}_p }~, \qquad n\geq 2~. \end{eqnarray} It can be shown that the coefficient functions $U$ are uniquely determined by equations (\ref{int}) in perturbation theory. 
Upon elimination of the auxiliary superfields, the action (\ref{nact}) takes the form \begin{eqnarray} S [\Phi, \bar \Phi, \Gamma, \bar \Gamma] &=& \int {\rm d}^5 x \, {\rm d}^4 \theta \, \Big\{\, K \big( \Phi, \bar{\Phi} \big) - g_{I \bar{J}} \big( \Phi, \bar{\Phi} \big) \Gamma^I {\bar \Gamma}^{\bar{J}} \nonumber\\ &&\qquad + \sum_{p=2}^{\infty} \cR_{I_1 \cdots I_p {\bar J}_1 \cdots {\bar J}_p } \big( \Phi, \bar{\Phi} \big) \Gamma^{I_1} \dots \Gamma^{I_p} {\bar \Gamma}^{ {\bar J}_1 } \dots {\bar \Gamma}^{ {\bar J}_p }~\Big\}~, \end{eqnarray} where the tensors $\cR_{I_1 \cdots I_p {\bar J}_1 \cdots {\bar J}_p }$ are functions of the Riemann curvature $R_{I {\bar J} K {\bar L}} \big( \Phi, \bar{\Phi} \big) $ and its covariant derivatives. Each term in the action contains equal powers of $\Gamma$ and $\bar \Gamma$, since the original model (\ref{nact}) is invariant under rigid $U(1)$ transformations \begin{equation} \Upsilon(w) ~~ \mapsto ~~ \Upsilon({\rm e}^{{\rm i} \alpha} w) \quad \Longleftrightarrow \quad \Upsilon_n(z) ~~ \mapsto ~~ {\rm e}^{{\rm i} n \alpha} \Upsilon_n(z) ~. \label{rfiber} \end{equation} The complex linear superfields $\Gamma^I$ can be dualised into chiral superfields\footnote{This is accompanied by the appearance of a special chiral superpotential \cite{KL}.} $\Psi_I$ which can be interpreted as a one-form at the point $\Phi \in \cM$ \cite{GK,KL}. Upon elimination of $\Gamma$ and $\bar \Gamma$, the action turns into $S[\Phi, \bar \Phi, \Psi, \bar \Psi]$. Its target space is an open neighborhood of the zero section of the cotangent bundle $T^*\cM$ of the K\"ahler manifold $\cM$. Since supersymmetry requires this target space to be hyper-K\"ahler, our consideration is in accord with recent mathematical results \cite{cotangent} about the existence of hyper-K\"ahler structures on cotangent bundles of K\"ahler manifolds. 
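A minimal consistency check, under the smile-conjugation convention $\breve{\Upsilon}(w) = \sum_{n\geq 0} (-1)^n \, {\bar \Upsilon}_n \, w^{-n}$ (our normalisation; conventions in the literature may differ by phases): for a single hypermultiplet with the flat potential $K = \breve{\Upsilon}\,\Upsilon$, which satisfies (\ref{polar-homog}), the contour integral evaluates to

```latex
\begin{equation*}
\oint \frac{{\rm d}w}{2\pi {\rm i}\,w} \int {\rm d}^5 x \, {\rm d}^4 \theta \,\,
\breve{\Upsilon}\,\Upsilon
= \int {\rm d}^5 x \, {\rm d}^4 \theta \,
\sum_{n=0}^{\infty} (-1)^n \, {\bar \Upsilon}_n \Upsilon_n ~.
\end{equation*}
```

The auxiliary equations (\ref{int}) then simply set $\Upsilon_n = 0$ for $n \geq 2$, and the action collapses to $\int {\rm d}^5 x \, {\rm d}^4 \theta \, \big( \bar\Phi \Phi - \bar\Gamma \Gamma \big)$, which is the general expansion above with flat metric $g_{I \bar J}$ and all curvature tensors $\cR$ vanishing.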
\subsection{Models with intrinsic central charge} We have so far considered only superconformal multiplets without central charge. As is known, there is no clash between superconformal symmetry and the presence of a central charge provided the latter is gauged. Here we sketch a 5D superspace setting for supersymmetric theories with gauged central charge, which is a natural generalisation of the 4D $\cN=2$ formulation \cite{DIKST}. To start with, one introduces an Abelian vector multiplet, which is destined to gauge the central charge $\Delta$, by defining gauge-covariant derivatives \begin{equation} \cD_{\hat A} = ( \cD_{\hat a}, \cD_{\hat \alpha}^i ) = D_{\hat A} + \cV_{\hat A} (z)\, \Delta ~, \qquad [\Delta , \cD_{\hat A} ]=0~. \end{equation} Here the gauge connection $ \cV_{\hat A} $ is inert under the central charge transformations, $[\Delta \,,\cV_{\hat A} ] =0$. The gauge-covariant derivatives are required to obey the algebra \begin{eqnarray} \{\cD^i_{\hat \alpha} \, , \cD^j_{\hat \beta} \} &= &-2{\rm i} \, \ve^{ij}\, \Big( \cD_{\hat \alpha \hat \beta} + \ve_{\hat \alpha \hat \beta} \, \cW \,\Delta \Big)~, \qquad \big[ \cD^i_{\hat \alpha} \, , \Delta \big] =0~, \nonumber \\ \big[ \cD^i_{\hat \gamma}\,, \cD_{\hat \alpha \hat \beta} \big] &=& {\rm i}\, \ve_{\hat \alpha \hat \beta} \, \cD^i_{\hat \gamma} \cW\,\Delta +2{\rm i}\,\Big( \ve_{\hat \gamma \hat \alpha} \,\cD^i_{\hat \beta} - \ve_{\hat \gamma \hat \beta} \,\cD^i_{\hat \alpha} \Big)\cW \,\Delta ~, \label{SYM-algebra} \end{eqnarray} where the real field strength $\cW(z)$ obeys the Bianchi identity (\ref{Bianchi1}). The field strength should possess a non-vanishing expectation value, $\langle \cW \rangle \neq 0$, corresponding to the case of rigid central charge. 
By applying a harmonic-dependent gauge transformation, one can choose a frame in which \begin{equation} \cD^+_{\hat \alpha} ~\to ~D^+_{\hat \alpha} ~, \quad D^{++} ~\to ~ D^{++} +\cV^{++}\,\Delta~, \quad D^{--} ~\to ~ D^{--} +\cV^{--}\,\Delta~, \end{equation} with $\cV^{++} $ a real analytic prepotential, see \cite{DIKST} for more details. This frame is called the $\lambda$-frame, and the original representation is known as the $\tau$-frame \cite{GIKOS}. To generate a supersymmetric action, it is sufficient to construct a real superfield $\cL^{(ij)}(z)$ with the properties \begin{equation} \cD^{(i}_{\hat \alpha} \cL^{jk)} =0~, \end{equation} which for $\cL^{++}(z,u) = \cL^{ij} (z) \, u^+_i u^+_j$ take the form \begin{equation} \cD^+_{\hat \alpha} \cL^{++} = 0~, \qquad D^{++}\cL^{++} =0~. \end{equation} In the $\lambda$-frame, the latter properties become \begin{equation} D^+_{\hat \alpha} \tilde{\cL}^{++} = 0~, \qquad (D^{++} + \cV^{++} \,\Delta) \tilde{\cL}^{++} =0~. \end{equation} Associated with $ \tilde{\cL}^{++}$ is the supersymmetric action \begin{equation} \int {\rm d} \zeta^{(-4)} \, \cV^{++}\, \tilde{ \cL}^{++} \end{equation} which is invariant under the central charge gauge transformations $\delta \cV^{++} =- D^{++} \lambda $ and $\delta \tilde{\cL}^{++} = \lambda \,\Delta \, \tilde{\cL}^{++} $, with an arbitrary analytic parameter $\lambda$. Let us give a few examples of off-shell supermultiplets with intrinsic central charge. The simplest is the 5D extension of the Fayet-Sohnius hypermultiplet. It is described by an iso-spinor superfield ${\bm q}_i (z)$ and its conjugate ${\bar {\bm q}}^i (z)$ subject to the constraint \begin{equation} \cD^{(i}_{\hat \alpha} \, {\bm q}^{j)} =0~. \label{FSh} \end{equation} This multiplet becomes on-shell if $\Delta = {\rm const}$. 
With the notation ${\bm q}^+(z,u) ={\bm q}^{i} (z)\, u^+_i$, the hypermultiplet dynamics is dictated by the Lagrangian \begin{equation} L^{++}_{\rm FS} = \frac12 \, \breve{{\bm q}}^+ \stackrel{\longleftrightarrow}{ \Delta} {\bm q}^+ -{\rm i}\, m\, \breve{{\bm q}}^+ {\bm q}^+~, \label{FS-Lagrangian} \end{equation} with $m$ the hypermultiplet mass/charge. This Lagrangian generates a superconformal theory. Our second example is an off-shell gauge two-form multiplet called in \cite{Ohashi} the massless tensor multiplet. It is Poincar\'e dual to the 5D vector multiplet. Similarly to the 4D $\cN=2$ vector-tensor multiplet \cite{DIKST}, it is described by a constrained real superfield $L(z) $ coupled to the central charge vector multiplet. By analogy with the four-dimensional case \cite{DIKST}, admissible constraints must obey some nontrivial consistency conditions. In particular, the harmonic-independence of $L$ (in the $\tau$-frame) implies \begin{eqnarray} 0=(\hat{\cD}^+)^2 (\hat{\cD}^+)^2D^{--} L &=& D^{--} (\hat{\cD}^+)^2(\hat{\cD}^+)^2 L -4 \,\cD^{-\hat \alpha} \cD^+_{\hat \alpha} (\hat{\cD}^+)^2L +8{\rm i}\, \cD^{\hat \alpha \hat \beta} \cD^+_{\hat \alpha} \cD^+_{\hat \beta} L \nonumber \\ & - &8{\rm i}\, \Delta \,\Big\{ L \,(\hat{\cD}^+)^2 \cW +\cW \,(\hat{\cD}^+)^2 L +4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L\Big\}~. \label{consistency} \end{eqnarray} Let us assume that $L$ obeys the constraint \begin{equation} \cD^+_{\hat \alpha} \cD^+_{\hat \beta } L = \frac{1}{4} \ve_{\hat \alpha \hat \beta} \, ({\hat \cD}^+)^2 L \quad \Rightarrow \quad \cD^+_{\hat \alpha} \cD_{\hat \beta }^+ \cD_{\hat \gamma }^+ L = 0 \label{Bianchi2} \end{equation} which in the case $\Delta =0$ coincides with the Bianchi identity for an Abelian vector multiplet. Then, eq. (\ref{consistency}) gives \begin{equation} \Delta \, \Big\{ L \,(\hat{\cD}^+)^2 \cW +\cW \,(\hat{\cD}^+)^2 L +4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L\Big\} =0~. 
\label{consistency2} \end{equation} The consistency condition is satisfied if $L$ is constrained as \begin{equation} (\hat{\cD}^+)^2 L =- \frac{1}{\cW}\,\Big\{ L \,(\hat{\cD}^+)^2 \cW +4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L\Big\}~. \label{two-form-constraint1} \end{equation} The corresponding Lagrangian is \begin{equation} \cL^{++} = -\frac{\rm i}{4} \Big( \cD^{+ \hat \alpha} L\,\cD^+_{\hat \alpha} L +\frac12 \,L\,(\hat{\cD}^+)^2L\Big)~. \label{two-form-lagrang} \end{equation} The theory generated by this Lagrangian is superconformal. Another solution to (\ref{consistency2}) describes a Chern-Simons coupling of the two-form multiplet to an external Yang-Mills supermultiplet: \begin{eqnarray} (\hat{\cD}^+)^2 L &=&- \frac{1}{\cW}\,\Big\{ L \,(\hat{\cD}^+)^2 \cW +4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L\Big\} + \frac{\rho}{\cW}\,{\mathbb G}^{++}~, \label{two-form-constraint2} \end{eqnarray} where \begin{eqnarray} -{\rm i} \, {\mathbb G}^{++} &=& {\rm tr}\, \Big( \cD^{+ \hat \alpha} {\mathbb W} \, \cD^+_{\hat \alpha} {\mathbb W} + \frac{1 }{ 4} \{ {\mathbb W} \,, ({\hat \cD}^+)^2 {\mathbb W} \} \Big)~. \end{eqnarray} Here $\rho$ is a coupling constant, and $\mathbb W$ is the gauge-covariant field strength of the Yang-Mills supermultiplet, see \cite{KL} for more details. As the corresponding supersymmetric Lagrangian one can again choose $\cL^{++}$ given by eq. (\ref{two-form-lagrang}). A plain dimensional reduction $5{\rm D} \to 4{\rm D}$ can be shown to reduce the constraints (\ref{Bianchi2}) and (\ref{two-form-constraint2}) to those describing the so-called linear vector-tensor multiplet\footnote{Ref. \cite{DIKST} contains an extensive list of publications on the linear and nonlinear vector-tensor multiplets and their couplings.} with Chern-Simons couplings \cite{DIKST}. 
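That the constraint (\ref{two-form-constraint1}) solves the consistency condition (\ref{consistency2}) can be seen in one line: substituting it into the braces gives

```latex
\begin{equation*}
L \,(\hat{\cD}^+)^2 \cW + \cW \,(\hat{\cD}^+)^2 L
+ 4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L
= L \,(\hat{\cD}^+)^2 \cW
- \Big\{ L \,(\hat{\cD}^+)^2 \cW
+ 4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L \Big\}
+ 4\,\cD^{+\hat \alpha} \cW \, \cD^+_{\hat \alpha} L = 0~,
\end{equation*}
```

so the braces vanish identically. With the modification (\ref{two-form-constraint2}), the braces instead reduce to $\rho \, {\mathbb G}^{++}$, which is consistent provided the Yang-Mills supermultiplet is inert under the central charge.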
\vskip.5cm When this paper was ready for submission to the hep-th archive, there appeared an interesting work \cite{BX} in which 4D and 5D supersymmetric nonlinear sigma models with eight supercharges were formulated in $\cN=1$ superspace. \noindent {\bf Acknowledgements:}\\ It is a pleasure to thank Ian McArthur for reading the manuscript. The author is grateful to the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Golm and the Institute for Theoretical Physics at the University of Heidelberg for hospitality during the course of the work. This work is supported by the Australian Research Council and by a UWA research grant. \begin{appendix} \sect{Non-standard realisation for $\bm{ S^2}$ } Let us consider a quantum-mechanical spin-$1/2$ Hilbert space, i.e. the complex space ${\mathbb C}^2$ endowed with the standard positive definite scalar product $\langle ~|~\rangle$ defined by \begin{eqnarray} \langle u|v\rangle = u^\dagger \,v ={\bar u}^i \,v_i~, \qquad |u \rangle = (u_i) =\left( \begin{array}{c} u_1 \\ u_2 \end{array} \right)~, \qquad \langle u | = ({\bar u}^i) ~, \quad {\bar u}^i =\overline{u_i}~. \end{eqnarray} The two-sphere $S^2$ can be identified with the space of rays in ${\mathbb C}^2$. A ray is represented by a normalized state, \begin{equation} |u^- \rangle = (u^-_i) ~, \qquad \langle u^- | u^- \rangle=1~, \qquad \langle u^- | = (u^{+i}) ~, \quad u^{+i} =\overline{u^-_i}~, \end{equation} defined modulo the equivalence relation \begin{equation} u^-_i ~ \sim ~ {\rm e}^{ -{\rm i} \varphi } \, u^-_i~, \qquad | {\rm e}^{-{\rm i} \varphi } |=1~. \label{equivalence} \end{equation} Associated with $|u^- \rangle $ is another normalized state $|u^+ \rangle $, \begin{eqnarray} |u^+ \rangle = (u^+_i) ~, \qquad u^+_i = \ve_{ij}\,u^{+j}~, \qquad \langle u^+ | u^+ \rangle=1~, \end{eqnarray} which is orthogonal to $|u^- \rangle $, \begin{equation} \langle u^+ | u^- \rangle=0~. 
\end{equation} The states $|u^- \rangle $ and $|u^+ \rangle $ generate the unimodular unitary matrix \begin{eqnarray} {\bm u}=\Big( |u^- \rangle \, ,\, |u^+ \rangle \Big) =({u_i}^-\,,\,{u_i}^+) \in SU(2)~. \end{eqnarray} In terms of this matrix, the equivalence relation (\ref{equivalence}) becomes \begin{eqnarray} {\bm u} ~\sim ~ {\bm u}\, \left( \begin{array}{cc} {\rm e}^{ -{\rm i} \varphi } & 0\\ 0& {\rm e}^{ {\rm i} \varphi } \end{array} \right)~. \end{eqnarray} This gives the well-known realisation $S^2 = SU(2) /U(1)$. The above unitary realisation for $S^2$ is ideal if one is interested in the action of $SU(2)$, or its subgroups, on the two-sphere. But it is hardly convenient if one considers, for instance, the action of $SL(2,{\mathbb C})$ on $S^2$. There exists, however, a universal realisation. Instead of dealing with the orthonormal basis $( |u^- \rangle , |u^+ \rangle )$ introduced, one can work with an arbitrary basis for ${\mathbb C }^2$: \begin{eqnarray} {\bm v}=\Big( |v^- \rangle \, ,\, |v^+ \rangle \Big) =({v_i}^-\,,\,{v_i}^+) \in GL(2,{\mathbb C})~,\qquad \det {\bm v}=v^{+i}\,v^-_i~. \end{eqnarray} The two-sphere is then obtained by factorisation with respect to the equivalence relation \begin{eqnarray} {\bm v} ~\sim ~ {\bm v}\,R~, \qquad R= \left( \begin{array}{cc} a & 0\\ b & c \end{array} \right) \in GL(2,{\mathbb C})~. \label{equivalence2} \end{eqnarray} Given an arbitrary matrix ${\bm v} \in GL(2,{\mathbb C})$, there always exists a lower triangular matrix $R$ such that ${\bm v} R \in SU(2)$, and then we are back to the unitary realisation. One can also consider an intermediate realisation for $S^2$ given in terms of unimodular matrices of the form \begin{eqnarray} {\bm w}=\Big( |w^- \rangle \, ,\, |w^+ \rangle \Big) =({w_i}^-\,,\,{w_i}^+) \in SL(2,{\mathbb C}) \quad \longleftrightarrow \quad w^{+i} w^-_i =1~, \end{eqnarray} and the matrix $R$ in (\ref{equivalence2}) should be restricted to be unimodular. 
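The claim that any ${\bm v} \in GL(2,{\mathbb C})$ admits a lower triangular $R$ with ${\bm v} R \in SU(2)$ is constructive: it amounts to Gram--Schmidt orthonormalisation, keeping the direction of the second column, followed by an overall phase that fixes the determinant. A short numerical sketch in plain Python (function names are ours, not part of the text):

```python
import cmath

def dot(x, y):
    """Hermitian inner product <x|y> = conj(x_1) y_1 + conj(x_2) y_2 on C^2."""
    return x[0].conjugate() * y[0] + x[1].conjugate() * y[1]

def su2_gauge_fix(v1, v2):
    """Given the columns v1, v2 of an invertible 2x2 complex matrix v, return
    (a, b, c) such that u1 = a*v1 + b*v2 and u2 = c*v2 are the columns of an
    SU(2) matrix, i.e. u = v R with R = [[a, 0], [b, c]] lower triangular."""
    detv = v1[0] * v2[1] - v1[1] * v2[0]
    if detv == 0:
        raise ValueError("v is not in GL(2, C)")
    # Gram-Schmidt: the component of v1 orthogonal to v2
    coef = dot(v2, v1) / dot(v2, v2)
    w1 = (v1[0] - coef * v2[0], v1[1] - coef * v2[1])
    n1 = dot(w1, w1).real ** 0.5
    n2 = dot(v2, v2).real ** 0.5
    # since w1 is orthogonal to v2, |det v| = |w1| |v2|, so this phase
    # choice gives det u = exp(-i arg det v) * det v / |det v| = 1
    phase = cmath.exp(-1j * cmath.phase(detv))
    return phase / n1, -phase * coef / n1, 1.0 / n2

v1, v2 = (1 + 2j, 3 + 0j), (0.5j, 1 - 1j)
a, b, c = su2_gauge_fix(v1, v2)
u1 = (a * v1[0] + b * v2[0], a * v1[1] + b * v2[1])
u2 = (c * v2[0], c * v2[1])  # u = (u1, u2) is unitary with det u = 1
```

Note that the second column is only rescaled, as dictated by the lower triangular form of $R$ in (\ref{equivalence2}).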
The harmonics $w^\pm$ are complex in the sense that $w^-_i$ and $w^{+i}$ are not related by complex conjugation. Let us consider a left group transformation acting on $S^2$ \begin{eqnarray} {\bm w} ~\to ~g\, {\bm w}= ({v_i}^-\,,\,{v_i}^+) \equiv {\bm v}~. \end{eqnarray} If $g$ is a ``small'' transformation, i.e. if it belongs to a sufficiently small neighbourhood of the identity, then there exists a matrix $R$ of the type (\ref{equivalence2}) such that \begin{equation} g\, {\bm w} \,R = ({w_i}^-\,,\,{\hat{w}_i}^+) \in SL(2,{\mathbb C}) ~. \end{equation} Since $$ w^{+i} w^-_i =1~,\qquad \hat{w}^{+i} {w}^-_i =1~, $$ for the transformed harmonic we thus obtain \begin{equation} \hat{w}^+_i = w^+_i + \rho^{++}(w) \,w^-_i ~. \end{equation} All information about the group transformation $g$ is now encoded in $ \rho^{++} $. \sect{Projective superspace action} ${}$Following \cite{Siegel}, consider \begin{eqnarray} I= \frac{1}{2\pi {\rm i}} \oint \frac{ u^+_i\,{\rm d} u^{+i}}{(u^+ u^-)^4} \, \int {\rm d}^5 x \, (\hat{D}^-)^4 \,\cL^{++} (z,u^+)~, \label{projac3} \end{eqnarray} where the Lagrangian enjoys the properties \begin{equation} D^+_{\hat \alpha} \cL^{++} (z,u^+)=0~, \qquad \cL^{++} (z,c\,u^+) = c^2\, \cL^{++} (z,u^+)~, \qquad c \in{\mathbb C}^*~. \end{equation} The functional (\ref{projac3}) is invariant under arbitrary projective transformations (\ref{equivalence22}). Choosing the projective gauge (\ref{projectivegaugeN}) gives the supersymmetric action (\ref{projac1}). It is worth pointing out that the vector multiplet field strength (\ref{strength3}) can be rewritten in the projective-invariant form \begin{equation} W(z) =- \frac{1}{ 16\pi {\rm i}} \oint \frac{ u^+_i\,{\rm d} u^{+i}}{(u^+ u^-)^2} \, (\hat{D}^-)^2 \, V(z,u^+)~, \label{strengt4} \end{equation} where the gauge potential enjoys the properties \begin{equation} D^+_{\hat \alpha} V (z,u^+)=0~, \qquad V (z,c\,u^+) = V (z,u^+)~,\qquad c \in{\mathbb C}^*~. 
\end{equation} \sect{From 5D projective supermultiplets to 4D $\bm{ \cN=1, \,2}$ superfields} The conventional 5D simple superspace ${\mathbb R}^{5|8}$ is parametrized by coordinates $ z^{\hat A} = (x^{\hat a}, \theta^{\hat \alpha}_i )$, with $i = \underline{1} , \underline{2}$. Any hypersurface $x^5 ={\rm const}$ in ${\mathbb R}^{5|8}$ can be identified with the 4D, $\cN=2$ superspace ${\mathbb R}^{4|8}$ parametrized by $ z^{A} = (x^a, \theta^\alpha_i , {\bar \theta}_{\dt \alpha}^i)$, where $(\theta^\alpha_i )^* = {\bar \theta}^{\dt \alpha i}$. The Grassmann coordinates of ${\mathbb R}^{5|8}$ and ${\mathbb R}^{4|8}$ are related to each other as follows: \begin{eqnarray} \theta^{\hat \alpha}_i = ( \theta^\alpha_i , - {\bar \theta}_{\dt \alpha i})~, \qquad \theta_{\hat \alpha}^i = \left( \begin{array}{c} \theta_\alpha^i \\ {\bar \theta}^{\dt \alpha i} \end{array} \right)~. \end{eqnarray} Interpreting $x^5$ as a central charge variable, one can view ${\mathbb R}^{5|8}$ as a 4D, $\cN=2$ central charge superspace. One can relate the 5D spinor covariant derivatives (see \cite{KL} for more details) \begin{eqnarray} D^i_{\hat \alpha} = \left( \begin{array}{c} D_\alpha^i \\ {\bar D}^{\dt \alpha i} \end{array} \right) = \frac{\pa}{\pa \theta^{\hat \alpha}_i} - {\rm i} \, (\Gamma^{\hat b} ){}_{\hat \alpha \hat \beta} \, \theta^{\hat \beta i} \, \pa_{\hat b} ~, \qquad D^{\hat \alpha}_i = (D^\alpha_i \,, \, -{\bar D}_{\dt \alpha i}) \label{con} \end{eqnarray} to the 4D, $\cN=2$ covariant derivatives $D_A = (\pa_a , D^i_\alpha , {\bar D}^{\dt \alpha}_i )$ where \begin{eqnarray} D^i_\alpha &=& \frac{\pa}{\pa \theta^{\alpha}_i} + {\rm i} \,(\sigma^b )_{\alpha \bd} \, {\bar \theta}^{\dt \beta i}\, \pa_b + \theta^i_\alpha \, \pa_5 ~, \quad {\bar D}_{\dt \alpha i} = - \frac{\pa}{\pa {\bar \theta}^{\dt \alpha i}} - {\rm i} \, \theta^\beta _i (\sigma^b )_{\beta \dt \alpha} \,\pa_b -{\bar \theta}_{\dt \alpha i} \, \pa_5 ~. 
\label{4D-N2covder1} \end{eqnarray} These operators obey the anti-commutation relations \begin{eqnarray} \{D^i_{\alpha} \, , \, D^j_{ \beta} \} = 2 \, \ve^{ij}\, \ve_{\alpha \beta} \, \pa_5 ~, \quad && \quad \{{\bar D}_{\dt \alpha i} \, , \, {\bar D}_{\dt \beta j} \} = 2 \, \ve_{ij}\, \ve_{\dt \alpha \dt \beta} \, \pa_5 ~, \nonumber \\ \{D^i_{\alpha} \, , \, \bar D_{ \dt \beta j} \} &=& -2{\rm i} \, \delta^i_j\, (\sigma^c )_{\alpha \dt \beta} \,\pa_c ~, \label{4D-N2covder2} \end{eqnarray} which correspond to the 4D, $\cN=2$ supersymmetry algebra with the central charge $\pa_5$. Consider a 5D projective superfield (\ref{holom0}). Representing the differential operators $\nabla_{\hat \alpha} (w)$, eq. (\ref{nabla0}), as \begin{eqnarray} \nabla_{\hat \alpha} (w) = \left( \begin{array}{c} \nabla_\alpha (w) \\ {\bar \nabla}^{\dt \alpha} (w) \end{array} \right)~, \quad \nabla_\alpha (w) \equiv w D^{\underline{1}}_\alpha - D^{\underline{2}}_\alpha ~, \quad {\bar \nabla}^{\dt \alpha} (w) \equiv {\bar D}^{\dt \alpha}_{ \underline{1}} + w {\bar D}^{\dt \alpha}_{ \underline{2}}~, \label{nabla} \end{eqnarray} the constraints (\ref{holom3}) can be rewritten in the component form \begin{equation} D^{\underline{2}}_\alpha \phi_n = D^{\underline{1}}_\alpha \phi_{n-1} ~,\qquad {\bar D}^{\dt \alpha}_{\underline{2}} \phi_n = - {\bar D}^{\dt \alpha}_{ \underline{1}} \phi_{n+1}~. \label{pc2} \end{equation} The relations (\ref{pc2}) imply that the dependence of the component superfields $\phi_n$ on $\theta^\alpha_{\underline{2}}$ and ${\bar \theta}^{\underline{2}}_{\dt \alpha}$ is uniquely determined in terms of their dependence on $\theta^\alpha_{\underline{1}}$ and ${\bar \theta}^{\underline{1}}_{\dt \alpha}$. 
In other words, the projective superfields depend effectively on half the Grassmann variables, which can be chosen to be the spinor coordinates of 4D $\cN=1$ superspace \begin{equation} \theta^\alpha = \theta^\alpha_{\underline{1}} ~, \qquad {\bar \theta}_{\dt \alpha}= {\bar \theta}_{\dt \alpha}^{\underline{1}}~. \label{theta1} \end{equation} Then, one deals with reduced superfields $\phi | $, $ D^{\underline{2}}_\alpha \phi|$, $ {\bar D}_{\underline{2}}^{\dt \alpha} \phi|, \dots$ (not all of which are independent in general) and 4D $\cN=1$ spinor covariant derivatives $D_\alpha$ and ${\bar D}^{\dt \alpha}$ defined in the obvious way: \begin{equation} \phi| = \phi (x, \theta^\alpha_i, {\bar \theta}^i_{\dt \alpha}) \Big|_{ \theta_{\underline{2}} = {\bar \theta}^{\underline{2}}=0 }~, \qquad D_\alpha = D^{\underline{1}}_\alpha \Big|_{\theta_{\underline{2}} ={\bar \theta}^{\underline{2}}=0} ~, \qquad {\bar D}^{\dt \alpha} = {\bar D}_{\underline{1}}^{\dt \alpha} \Big|_{\theta_{\underline{2}} ={\bar \theta}^{\underline{2}}=0}~. \label{N=1proj} \end{equation} \end{appendix}
\section{Introduction} \label{sec:intro} Rank data arise naturally in many fields, such as web searching \citep{renda2003web}, design of recommendation systems \citep{baltrunas2010group} and genomics \citep{BADER20111099}. Many probabilistic models have been proposed for analyzing this type of data, among which the Thurstone model \citep{Thurstone1927}, the Mallows model \citep{mallows1957non} and the Plackett-Luce model \citep{luce1959,Plackett1975} are the most well-known representatives. The Thurstone model assumes that each entity possesses a hidden score and all the scores come from a joint probability distribution. The Mallows model is a location model defined on the permutation space of ordered entities, in which the probability mass of a permuted order is an exponential function of its distance from the true order. The Plackett-Luce model assumes that the preference of entity $E_i$ is associated with a weight $w_i$, and describes a recursive procedure for generating a random ranking list: entities are picked one by one with probability proportional to their weights in a sequential fashion without replacement, and ranked according to the order in which they are selected. Rank aggregation aims to derive a ``better'' aggregated ranking list $\hat\tau$ from multiple ranking lists $\tau_1, \tau_2,\cdots, \tau_m$. It is a classic problem and has been studied in a variety of contexts for decades. Early applications of rank aggregation can be traced back to 18th-century France, where the idea was proposed to solve the problem of political elections \citep{de1781memoire}.
In the past 30 years, efficient rank aggregation algorithms have played important roles in many fields, such as web searching \citep{renda2003web}, information retrieval \citep{fagin2003efficient}, design of recommendation systems \citep{baltrunas2010group}, social choice studies \citep{porello2012ontology,soufiani2014statistical}, genomics \citep{BADER20111099} and bioinformatics \citep{2010Integration,chen2016drhp}. Some popular approaches for rank aggregation are based on summary statistics: these methods simply calculate a summary statistic, such as the mean, median or geometric mean, of each entity $E_i$'s rankings across the different ranking lists, and obtain the aggregated ranking list by sorting these summary statistics. Optimization-based methods obtain the aggregated ranking by minimizing a user-defined objective function, i.e., $\hat{\tau} = \arg\min\limits_{\tau} \dfrac{1}{m} \sum\limits_{i=1}^m d\left(\tau, \tau_i\right)$, where the distance measure $d(\cdot,\cdot)$ can be either \textit{Spearman's footrule distance} \citep{diaconis1977spearman} or the \textit{Kendall tau distance} \citep{diaconis1988group}. More detailed studies on these optimization-based methods can be found in \citet{young1978consistent,young1988condorcet,dwork2001rank}. In the early 2000s, a novel class of Markov chain-based methods was proposed \citep{dwork2001rank,2010Integration,Lin2010Space,Deconde2011Combining}; these methods first use the observed ranking lists to construct a probabilistic transition matrix among the entities and then rank the entities by the magnitudes of their equilibrium probabilities under the resulting Markov chain. The boosting-based method \textit{RankBoost} \citep{freund2003efficient} employs a \textit{feedback function} $\Phi(i,j)$ to construct the final ranking, where $\Phi(i,j)>0$ (or $\leq 0$) indicates that entity $E_i$ is (or is not) preferred to entity $E_j$.
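As a concrete illustration of the optimization-based formulation above, the following minimal sketch aggregates a few short ranking lists (stored as position vectors) by exhaustively minimizing the total Kendall tau distance over all candidate permutations. The helper names (`kendall_tau`, `brute_force_aggregate`) are ours for illustration, and the brute-force search is only feasible for very small $n$.

```python
from itertools import combinations, permutations

def kendall_tau(tau_a, tau_b):
    """Number of entity pairs ranked in opposite order by two position
    vectors; index 0 is a dummy so that tau[i] is the rank of entity E_i."""
    n = len(tau_a) - 1
    return sum(1 for i, j in combinations(range(1, n + 1), 2)
               if (tau_a[i] - tau_a[j]) * (tau_b[i] - tau_b[j]) < 0)

def brute_force_aggregate(lists):
    """argmin over all permutations of the total Kendall tau distance --
    exhaustive search, feasible only for tiny n."""
    n = len(lists[0]) - 1
    best, best_cost = None, float("inf")
    for perm in permutations(range(1, n + 1)):
        tau = [0] + list(perm)  # tau[i] = rank of entity E_i
        cost = sum(kendall_tau(tau, t) for t in lists)
        if cost < best_cost:
            best, best_cost = tau, cost
    return best

# Three rankers over four entities; the first two nearly agree.
lists = [[0, 1, 2, 3, 4], [0, 1, 2, 4, 3], [0, 2, 1, 3, 4]]
assert brute_force_aggregate(lists) == [0, 1, 2, 3, 4]
```

For realistic $n$ the exact minimization under the Kendall tau distance is computationally intractable, which is one motivation for the heuristic and Markov chain-based constructions mentioned above.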
Some statistical methods utilize the aforementioned probabilistic models (such as the Thurstone model) and derive the maximum likelihood estimate (MLE) of the final ranking. More recently, researchers have begun to pay attention to rank aggregation methods for pairwise comparison data \citep{rajkumar2014statistical,chen2015spectral,fanjianqing2017Spectral}. We note that all aforementioned methods assume that the rankers of interest are equally reliable. In practice, however, it is very common that some rankers are more reliable than others, whereas some are nearly non-informative and may be regarded as ``spam rankers''. Such differences in rankers' qualities, if ignored in the analysis, may significantly corrupt the rank aggregation and lead to seriously misleading results. To the best of our knowledge, the earliest effort to address this critical issue can be traced to \citet{aslam2001models}, which derived an aggregated ranking list by calculating a weighted summation of the observed ranking lists, known as the \textit{Borda Fuse}. \citet{2010Integration} extended the objective function of \citet{dwork2001rank} to a weighted version. Independently, \citet{liu2007supervised} proposed a supervised rank aggregation that determines the weights of the rankers by training on external data. Although assigning weights to rankers is an intuitive and simple way to handle quality differences, how to determine these weights in a principled way remained unsolved in the aforementioned works. Recently, \citet{deng2014bayesian} proposed BARD, a Bayesian approach that deals with quality differences among independent rankers without the need for external information. BARD introduces a partition model, which assumes that all involved entities can be partitioned into two groups: the relevant ones and the background ones.
A rationale of the approach is that, in many applications, distinguishing relevant entities from background ones takes priority over constructing a full ranking of all entities. Under this setting, BARD decomposes the information in a ranking list into three components: (i) the relative rankings of all background entities, which are assumed to be uniform; (ii) the relative ranking of each relevant entity among all background ones, which follows a truncated power-law; and (iii) the relative rankings of all relevant entities, which are again uniform. The parameter of the truncated power-law distribution, which is ranker-specific, naturally serves as a quality measure for each ranker, as a higher-quality ranker corresponds to a more concentrated truncated power-law distribution. \citet{fan2019} proposed a stage-wise data generation process based on an extended Mallows model (EMM) introduced by \citet{Fligner1986Distance}. EMM assumes that each entity comes from a two-component mixture: a uniform distribution for non-informative entities and a modified Mallows model for informative ones, combined through a ranker-specific proportion parameter. \citet{li2020bayesian} followed the Thurstone model framework to handle both available covariates for the entities and different qualities of the rankers. In their model, each entity is associated with a Gaussian-distributed latent score and a ranking list is determined by the ranking of these scores. The quality of each ranker is captured by the standard deviation parameter of the Gaussian model, with a larger standard deviation indicating a poorer-quality ranker. Although these recent papers have proposed different ways of learning the quality variation among rankers, they all suffer from some limitations. The BARD method \citep{deng2014bayesian} simplifies the problem by assuming that all relevant entities are exchangeable.
In many applications, however, the observed ranking lists often carry strong ordering information for relevant entities, and simply labeling these entities as ``relevant'' without considering their relative rankings loses too much information and oversimplifies the problem. The extended Mallows model of \citet{fan2019} does not explicitly measure quality differences. Although they mentioned that some of their model parameters can indicate the rankers' qualities, it is not clear how to properly combine multiple indicators into an easily interpretable quality measure. The learning framework of \citet{li2020bayesian} based on Gaussian latent variables appears to be more suitable for incorporating covariates than for handling heterogeneous rankers. In this paper, we propose a \emph{partition-Mallows model} (PAMA), which combines the partition modeling framework of \citet{deng2014bayesian} with the Mallows model to accommodate the detailed ordering information among the relevant entities. The new framework can not only quantify the quality difference of rankers and distinguish relevant entities from background entities like BARD, but also provide an explicit ranking estimate among the relevant entities in rank aggregation. In contrast to the strategy of imposing the Mallows model on the full ranking lists, which tends to be sensitive to noise in low-ranked entities, the combination of the partition and Mallows models allows us to focus on highly ranked entities, which typically contain the high-quality signals in the data, and is thus more robust. Both simulation studies and real data applications show that the proposed approach is superior to existing methods, e.g., BARD and EMM, for a large class of rank aggregation problems. The rest of this paper is organized as follows. A brief review of BARD and the Mallows model is presented in Section \ref{sec:overview} as preliminaries.
The proposed PAMA model is described in Section \ref{sec:model} along with some key theoretical properties. Statistical inference for the PAMA model, including Bayesian inference and the pursuit of the MLE, is detailed in Section \ref{sec:statinfer}. The performance of PAMA is evaluated and compared to existing methods via simulations in Section \ref{sec:simulation}. Two real data applications are presented in Section \ref{sec:realdata} to demonstrate the strength of the PAMA model in practice. Finally, we conclude the article with a short discussion in Section \ref{sec:discussion}. \section{Notations and Preliminaries} \label{sec:overview} Let $U = \left\{E_1, E_2, \cdots, E_n \right\}$ be the set of entities to be ranked. We use ``$E_i \preceq E_j$'' to denote that entity $E_i$ is preferred to entity $E_j$ in a ranking list $\tau$, and denote the position of entity $E_i$ in $\tau$ by $\tau(i)$. Note that more preferred entities always have lower rankings. Our goal is to aggregate $m$ observed ranking lists, $\tau_1,\ldots, \tau_m$, presumably constructed independently by $m$ rankers, into one consensus ranking list that is supposed to be ``better'' than each individual one. \subsection{BARD and Its Partition Model} The partition model in BARD \citep{deng2014bayesian} assumes that $U$ can be partitioned into two non-overlapping subsets: $U=U_R\cup U_B$, with $U_R$ representing the set of relevant entities and $U_B$ the set of background ones. Let $I=\{I_i\}_{i\in U}$ be the vector of group indicators, where $I_i=\mathbb{I}(E_i \in U_R)$ and $\mathbb{I}(\cdot)$ is the indicator function. This formulation makes sense in many applications where people are only concerned about a fixed number of top-ranked entities.
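To fix ideas about the notation, a ranking list $\tau$ can be stored either as an ordered list of entities (most to least preferred) or as the position vector $\tau(i)$ used throughout the paper. The conversion helpers below are hypothetical, purely illustrative names:

```python
def order_to_positions(order):
    """order[r-1] = entity index at rank r; return tau with tau[i] = rank
    of entity E_i (index 0 is a dummy so indices align with entity labels)."""
    tau = [0] * (len(order) + 1)
    for rank, entity in enumerate(order, start=1):
        tau[entity] = rank
    return tau

def positions_to_order(tau):
    """Invert: list entities from most to least preferred."""
    return sorted(range(1, len(tau)), key=lambda i: tau[i])

order = [4, 1, 3, 2, 5]               # E_4 most preferred, then E_1, ...
tau = order_to_positions(order)
assert tau[4] == 1 and tau[1] == 2    # E_4 "preceq" E_1: lower rank wins
assert positions_to_order(tau) == order
```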
Under this formulation, the information in a ranking list $\tau_k$ can be equivalently represented by a triplet $(\tau_k^0, \tau_k^{1\mid 0}, \tau_k^1)$, where $\tau_k^0$ denotes relative rankings of all background entities, $\tau_k^{1\mid 0}$ denotes relative rankings of relevant entities among the background entities and $\tau_k^1$ denotes relative rankings of all relevant entities. \citet{deng2014bayesian} suggested a three-component model for $\tau_k$ by taking advantage of its equivalent decomposition: \begin{eqnarray}\label{Eq:RankDecomposition} P(\tau_k \mid I)=P(\tau_k^0,\tau_k^{1\mid 0},\tau_k^1 \mid I)=P(\tau_k^0 \mid I)\times P(\tau_k^{1\mid 0} \mid I)\times P(\tau_k^1 \mid \tau_k^{1\mid 0},I), \end{eqnarray} where both $P(\tau_k^0 \mid I)$ (relative ranking of the background entities) and $P(\tau_k^1 \mid \tau_k^{1\mid 0},I) $ (relative ranking of the relevant entities conditional on their set of positions relative to background entities) are uniform, and the relative ranking of a relevant entity $E_i$ among background ones follows a power-law distribution with parameter $\gamma_k>0$, i.e., $$P(\tau_k^{1 \mid 0}(i)=t\mid I ) = q(t\mid \gamma_k, n_0) \propto t^{-\gamma_k}\cdot\mathbb{I}(1\leq t\leq n_0+1),$$ leading to the following explicit forms for the three terms in equation~(\ref{Eq:RankDecomposition}): \begin{eqnarray} \label{eqn:bardtau0} P(\tau_k^0 \mid I)&=&\frac{1}{n_0!},\\ \label{eqn:bardgamma} P(\tau_k^{1\mid 0} \mid I)&=&\prod_{i \in U_R} q(\tau_k^{1 \mid 0}(i)\mid \gamma_k, I)=\frac{1}{(B_{\tau_k,I})^{\gamma_k}\times(C_{\gamma_k,n_1})^{n_1}},\\ \label{eqn:bardtau1} P(\tau_k^1 \mid \tau_k^{1\mid 0},I)&=&\frac{1}{A_{\tau_k,I}}\times\mathbb{I}\big(\tau_k^1 \in \mathcal{A}_{U_R}(\tau_k^{1\mid0})\big), \end{eqnarray} where $n_1=\sum_{i=1}^n I_i$ and $n_0=n-n_1$ are the counts of relevant and background entities respectively, $B_{\tau_k,I}=\prod_{i \in U_R} \tau_k^{1\mid 0}(i)$, $C_{\gamma_k,n_1}=\sum_{t=1}^{n_0+1} t^{-\gamma_k}$ is the 
normalizing constant of the power-law distribution, $\mathcal{A}_{U_R}(\tau_k^{1\mid0})$ is the set of $\tau_k^1$'s that are compatible with $\tau_k^{1\mid 0}$, and $A_{\tau_k,I}=\#\{\mathcal{A}_{U_R}(\tau_k^{1\mid0})\}=\prod_{t=1}^{n_0+1}(n_{\tau_k,t}^{1\mid 0}!)$ with $n_{\tau_k,t}^{1\mid 0}= \sum_{i \in U_R} \mathbb{I}(\tau_k^{1 \mid 0}(i) = t)$. Intuitively, this model assumes that each ranker first randomly places all background entities to generate $\tau_k^0$, then ``inserts" each relevant entity independently into the list of background entities according to a truncated power-law distribution to generate $\tau_k^{1\mid 0}$, and finally draws $\tau_k^1$ uniformly from $\mathcal{A}_{U_R}(\tau_k^{1\mid0})$. In other words, $\tau_k^0$ serves as a baseline for modeling $\tau_k^{1\mid0}$ and $\tau_k^{1}$. It is easy to see from the model that a more reliable ranker should possess a larger $\gamma_k$. With the assumption of independent rankers, we have the full-data likelihood: \begin{eqnarray} P(\tau_1,\cdots,\tau_m \mid I,\boldsymbol{\gamma})&=&\prod_{k=1}^mP(\tau_k\mid I,\gamma_k)\nonumber\\ &=&[(n_0)!]^{-m}\times\prod_{k=1}^m\frac{\mathbb{I}\big(\tau_k^1 \in \mathcal{A}_{U_R}(\tau_k^{1\mid0})\big)}{A_{\tau_k,I}\times(B_{\tau_k,I})^{\gamma_k}\times\big(C_{\gamma_k,n_1}\big)^{n_1}}, \end{eqnarray} where $\boldsymbol{\gamma}=(\gamma_1,\cdots,\gamma_m)$. A detailed Bayesian inference procedure for $(I,\boldsymbol{\gamma})$ via Markov chain Monte Carlo can be found in \citet{deng2014bayesian}. 
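The generative story behind this likelihood (a uniform background order, power-law insertion of each relevant entity, and uniform tie-breaking among relevant entities inserted into the same slot) can be sketched as follows. The function names are ours and the snippet is a simplified illustration of the sampling mechanism, not the authors' implementation:

```python
import random

def sample_insert_slot(gamma, n0, rng):
    """Draw t in {1, ..., n0+1} with P(t) proportional to t^(-gamma)."""
    slots = list(range(1, n0 + 2))
    weights = [t ** (-gamma) for t in slots]
    return rng.choices(slots, weights=weights)[0]

def sample_bard_list(relevant, background, gamma, rng):
    """One draw from the generative story of the partition model:
    (1) order the background entities uniformly at random;
    (2) insert every relevant entity at a power-law distributed slot,
        slot t = 1 meaning ahead of all background entities;
    (3) order relevant entities sharing a slot uniformly at random."""
    n0 = len(background)
    bg = list(background)
    rng.shuffle(bg)                                        # step (1)
    slots = {t: [] for t in range(1, n0 + 2)}
    for e in relevant:                                     # step (2)
        slots[sample_insert_slot(gamma, n0, rng)].append(e)
    ranking = []
    for t in range(1, n0 + 2):                             # step (3)
        rng.shuffle(slots[t])
        ranking.extend(slots[t])
        if t <= n0:
            ranking.append(bg[t - 1])
    return ranking

rng = random.Random(0)
tau = sample_bard_list(["R1", "R2", "R3"], ["B1", "B2", "B3", "B4"], 2.0, rng)
assert sorted(tau) == ["B1", "B2", "B3", "B4", "R1", "R2", "R3"]
```

With a large $\gamma$, nearly all of the power-law mass sits on slot $t=1$, so relevant entities land ahead of all background ones, matching the intuition that a large $\gamma_k$ marks a reliable ranker.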
\subsection{The Mallows Model} \label{sec:mallows} \cite{mallows1957non} proposed the following probability model for a ranking list $\tau$ of $n$ entities: \begin{equation}\label{Eq:MallowsModel} \pi(\tau \mid \tau_0, \phi) = \dfrac{1}{Z_n(\phi)}\cdot\exp\{-\phi\cdot d(\tau,\tau_0)\}, \end{equation} where $\tau_0$ denotes the true ranking list, $\phi>0$ characterizes the reliability of $\tau$, $d(\cdot,\cdot)$ is a distance metric between two ranking lists, and \begin{equation}\label{eq:Mallow-norm} Z_n(\phi)=\sum_{\tau'}\exp\{-\phi\cdot d(\tau', \tau_0)\}=\frac{\prod_{t=2}^n(1-e^{-t\phi})}{(1-e^{-\phi})^{n-1}} \end{equation} is the normalizing constant, whose analytic form was derived in \citet{diaconis1988group}. Clearly, a larger $\phi$ means that $\tau$ is more stable and concentrates in a tighter neighborhood of $\tau_0$. A common choice of $d(\cdot, \cdot)$ is the Kendall tau distance. The Mallows model under the Kendall tau distance can also be described equivalently by a multistage model, which selects and positions entities one by one in a sequential fashion, with $\phi$ serving as a common parameter that governs the probabilistic behavior of each entity in the stochastic process \citep{mallows1957non}. Later on, \citet{Fligner1986Distance} extended the Mallows model by allowing $\phi$ to vary across stages, i.e., introducing a position-specific parameter $\phi_i$ for each position $i$, which leads to a very flexible, in many cases too flexible, framework for modeling rank data. To stabilize the generalized Mallows model of \citet{Fligner1986Distance}, \citet{fan2019} proposed to put a structural constraint on the $\phi_i$'s of the form $\phi_i=\phi\cdot(1-\alpha^i)$ with $0<\phi <1$ and $0\leq \alpha \leq 1$. As a probabilistic model for rank data, the Mallows model enjoys great interpretability, model compactness, and inference and computation efficiency.
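The closed form of $Z_n(\phi)$ in \eqref{eq:Mallow-norm} can be checked against brute-force enumeration over all $n!$ rankings for small $n$; the sketch below is a verification aid only, with helper names of our own choosing:

```python
from itertools import combinations, permutations
from math import exp, prod

def n_discordant(p, q):
    """Kendall tau distance between two rank tuples of equal length."""
    idx = range(len(p))
    return sum(1 for i, j in combinations(idx, 2)
               if (p[i] - p[j]) * (q[i] - q[j]) < 0)

def z_brute(n, phi):
    """Sum of exp(-phi * d(tau, tau0)) over all n! rankings, tau0 = identity."""
    tau0 = tuple(range(1, n + 1))
    return sum(exp(-phi * n_discordant(p, tau0)) for p in permutations(tau0))

def z_closed(n, phi):
    """Closed form: prod_{t=2}^{n} (1 - e^{-t*phi}) / (1 - e^{-phi})^{n-1}."""
    return (prod(1 - exp(-t * phi) for t in range(2, n + 1))
            / (1 - exp(-phi)) ** (n - 1))

for n in (2, 3, 4, 5):
    assert abs(z_brute(n, 0.7) - z_closed(n, 0.7)) < 1e-9
```

For $n=2$ both expressions reduce to $1+e^{-\phi}$, the two rankings being at distance $0$ and $1$ from $\tau_0$.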
For a comprehensive review of the Mallows model and its extensions, see \citet{Irurozki2014PerMallows} and \citet{fan2019}. \section{The Partition-Mallows Model} \label{sec:model} The partition model employed by BARD \citep{deng2014bayesian} tends to oversimplify the problem in scenarios where we care about the detailed rankings of relevant entities. To further enhance the partition model of BARD so that it can reflect such detailed rankings, we describe a new partition-Mallows model in this section. \subsection{The Reverse Partition Model}\label{subsec:RevPar} To combine the partition model with the Mallows model, a naive strategy is to simply replace the uniform model for the relevant entities, i.e., $P(\tau_k^1 \mid \tau_k^{1|0},I)$ in (\ref{Eq:RankDecomposition}), by the Mallows model, which leads to the following update of Equation \eqref{eqn:bardtau1}: $$P(\tau_k^1 \mid \tau_k^{1\mid 0},I) = \frac{\pi(\tau_k^1)} {Z_{\tau_k,I}} \times \mathbb{I} \big(\tau_k^1 \in \mathcal{A}_{U_R}(\tau_k^{1\mid0}) \big),$$ where $\pi(\tau_k^1)$ is the Mallows density of $\tau_k^1$ and $Z_{\tau_k,I}=\sum_{\tau\in \mathcal{A}_{U_R}(\tau_k^{1\mid0})} \pi(\tau)$ is the normalizing constant of the Mallows model with a constraint due to the compatibility of $\tau_k^1$ with respect to $\mathcal{A}_{U_R}(\tau_k^{1\mid0})$. The calculation of $Z_{\tau_k,I}$, however, involves a summation over the whole space $\mathcal{A}_{U_R}(\tau_k^{1\mid0})$, whose size $A_{\tau_k,I}=\#\{\mathcal{A}_{U_R}(\tau_k^{1\mid0})\}=\prod_{t=1}^{n_0+1}(n_{\tau_k,t}^{1\mid 0}!)$ is prohibitively large in most practical cases, rendering such a naive combination of the Mallows model and the partition model impractical.
To avoid the challenging computation caused by the constraints due to $\mathcal{A}_{U_R}(\tau_k^{1\mid0})$, we rewrite the partition model by switching the roles of $\tau_k^0$ and $\tau_k^1$ in the model: instead of decomposing $\tau_k$ as $(\tau_k^0,\tau_k^{1\mid 0},\tau_k^1)$ conditioning on the group indicators $I$, we decompose $\tau_k$ into an alternative triplet $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$, where $\tau_k^{0\mid 1}$ denotes the {\it relative reverse rankings} of background entities among the relevant ones. Formally, we note that $\tau_k^{0\mid 1}(i) \triangleq n_1+2-\tau_{k|\{i\} \cup U_R}(i)$ for any $i\in U_B$, where $\tau_{k|\{i\} \cup U_R}(i)$ denotes the relative ranking of the background entity $E_i$ among the relevant ones. In this {\it reverse partition model}, we first order the relevant entities according to a certain distribution and then use them as a reference system to ``insert'' the background entities. Figure~\ref{Tab:decomposition2} illustrates the equivalence between $\tau_k$ and its two alternative representations, $(\tau_k^0,\tau_k^{1\mid 0},\tau_k^1)$ and $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$. Given the group indicator vector $I$, the reverse partition model based on $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$ gives rise to the following distributional form for $\tau_k$: \begin{eqnarray}\label{Eq:RankDecompositionInverse} P(\tau_k \mid I)=P(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0 \mid I)=P(\tau_k^1 \mid I)\times P(\tau_k^{0\mid 1} \mid I)\times P(\tau_k^0 \mid \tau_k^{0\mid 1},I), \end{eqnarray} which is analogous to (\ref{Eq:RankDecomposition}) for the original partition model in BARD. Compared to (\ref{Eq:RankDecomposition}), however, the new form (\ref{Eq:RankDecompositionInverse}) enables us to specify an unconstrained marginal distribution for $\tau_k^1$.
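The decomposition of $\tau_k$ into the triplet $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$ can be computed mechanically; the sketch below, with a hypothetical helper `decompose`, reproduces the numbers of the worked example in Figure~\ref{Tab:decomposition2}:

```python
def decompose(tau, I):
    """Split a ranking (dict entity -> position) into the triplet
    (tau1, tau01, tau0) of the reverse partition model, given 0/1
    relevance indicators. A hypothetical helper for illustration."""
    rel = [e for e in tau if I[e] == 1]
    bg = [e for e in tau if I[e] == 0]
    n1 = len(rel)

    def rank_within(e, group):
        return 1 + sum(tau[x] < tau[e] for x in group)

    tau1 = {e: rank_within(e, rel) for e in rel}   # ranks among relevant
    tau0 = {e: rank_within(e, bg) for e in bg}     # ranks among background
    # reverse relative rank of a background entity among the relevant ones:
    tau01 = {e: n1 + 2 - rank_within(e, rel + [e]) for e in bg}
    return tau1, tau01, tau0

# Worked example: 10 entities, E1..E5 relevant, positions as in the figure.
tau = dict(zip((f"E{i}" for i in range(1, 11)), [2, 6, 4, 1, 7, 5, 3, 8, 9, 10]))
I = {f"E{i}": int(i <= 5) for i in range(1, 11)}
tau1, tau01, tau0 = decompose(tau, I)
assert [tau1[f"E{i}"] for i in range(1, 6)] == [2, 4, 3, 1, 5]
assert [tau01[f"E{i}"] for i in range(6, 11)] == [3, 4, 1, 1, 1]
assert [tau0[f"E{i}"] for i in range(6, 11)] == [2, 1, 3, 4, 5]
```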
Moreover, due to the symmetry between $\tau_k^{1\mid 0}$ and $\tau_k^{0\mid 1}$, it is highly likely that the power-law distribution, which was shown in \cite{deng2014bayesian} to approximate the distribution of $\tau_k^{1\mid 0}(i)$ well for each $E_i\in U_R$, can also model $\tau_k^{0\mid 1}(i)$ for each $E_i\in U_B$ reasonably well. Detailed numerical validations are shown in the Supplementary Material. If we assume that all relevant entities are exchangeable, all background entities are exchangeable, and the relative reverse ranking of a background entity among the relevant entities follows a power-law distribution, we have \begin{eqnarray} \label{eqn:InvBARD_tau1} P(\tau_k^1 \mid I)&=&\frac{1}{n_1!},\\ \label{eqn:InvBARD_tau01} P(\tau_k^{0\mid 1} \mid I,\gamma_k)&=&\prod_{i \in U_B} P(\tau_k^{0 \mid 1}(i)\mid I,\gamma_k)=\frac{1}{(B^*_{\tau_k,I})^{\gamma_k}\times (C^*_{\gamma_k,n_1})^{n_0}},\\ \label{eqn:InvBARD_tau0} P(\tau_k^0 \mid \tau_k^{0\mid 1},I)&=&\frac{1}{A^*_{\tau_k,I}}\times\mathbb{I}\big(\tau_k^0 \in \mathcal{A}_{U_B}(\tau_k^{0\mid1})\big), \end{eqnarray} where $n_1$ and $n_0$ are the numbers of relevant and background entities, respectively, $B^*_{\tau_k,I}=\prod_{i \in U_B} \tau_k^{0 \mid 1}(i)$ is the unnormalized part of the power-law, $C^*_{\gamma_k,n_1}=\sum_{t=1}^{n_1+1} t^{-\gamma_k}$ is the normalizing constant, $\mathcal{A}_{U_B}(\tau_k^{0\mid1})$ is the set of all $\tau_k^0$ that are compatible with a given $\tau_k^{0\mid 1}$, and $A^*_{\tau_k,I}=\#\{\mathcal{A}_{U_B}(\tau_k^{0\mid1})\}=\prod_{t=1}^{n_1+1}(n_{\tau_k,t}^{0\mid 1}!)$ with $n_{\tau_k,t}^{0\mid 1}= \sum_{i \in U_B} \mathbb{I}(\tau_k^{0\mid 1}(i) = t)$. Apparently, the likelihood of this reverse-partition model shares the same structure as that of the original partition model in BARD, and thus can be inferred in a similar way.
\subsection{The Partition-Mallows Model} \label{subsec:BMM} The reverse partition model introduced in Section~\ref{subsec:RevPar} allows us to freely model $\tau_k^1$ beyond a uniform distribution, which is infeasible for the original partition model in BARD. Here we employ the Mallows model for $\tau_k^1$ due to its interpretability, compactness and computability. To achieve this, we replace the group indicator vector $I$ in the partition model by a more general indicator vector $\mathcal{I}=\{\mathcal{I}_i\}_{i=1}^n$, which takes values in $\Omega_\mathcal{I}$, the space of all permutations of $\{1,\cdots,n_1,\underbrace{0,\ldots,0}_{n_0}\}$, with $\mathcal{I}_i=0$ if $E_i\in U_B$, and $\mathcal{I}_i=k>0$ if $E_i\in U_R$ and is ranked at position $k$ among all relevant entities in $U_R$. Figure~\ref{Tab:decomposition2} provides an illustrative example of assigning an enhanced indicator vector $\mathcal{I}$ to a universe of 10 entities with $n_1=5$. Based on the status of $\mathcal{I}$, we can define the subvectors $\mathcal{I}^+$ and $\mathcal{I}^0$, where $\mathcal{I}^+$ stands for the subvector of $\mathcal{I}$ containing all positive elements of $\mathcal{I}$, and $\mathcal{I}^0$ for the remaining zero elements. Figure~\ref{Tab:decomposition2} demonstrates the constructions of $\mathcal{I}$, $\mathcal{I}^+$ and $\mathcal{I}^0$, and the equivalence between $\tau_k$, $(\tau_k^0,\tau_k^{1\mid 0},\tau_k^1)$, and $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$ given $\mathcal{I}$. Note that, different from the partition model in BARD, in which the number of relevant entities $n_1$ is allowed to vary around its expected value, the number of relevant entities in the new model is assumed to be fixed and known for conceptual and computational convenience. In other words, we have $|U_R|=n_1$ in the new setting.
\begin{figure}[h] \centering \begin{tabular}{ p{0.5cm}<{\centering}|p{0.5cm}<{\centering}|p{0.5cm}<{\centering}p{0.5cm}<{\centering}p{0.5cm}<{\centering}|p{0.5cm}<{\centering}p{0.5cm}<{\centering}p{0.5cm}<{\centering} p{0.5cm}<{\centering}p{0.5cm}<{\centering}|p{0.5cm}<{\centering}|p{0.5cm}<{\centering} p{0.5cm}<{\centering}p{0.5cm}<{\centering}|p{0.5cm}<{\centering}|p{0.5cm}<{\centering}} \cline{1-3} \cline{5-6} \cline{8-8}\cline{10-12} \cline{14-16} $\mathcal{I}^+$ &$\mathcal{I}^0$ &$I$& &$\mathcal{I}$& $U$& &$\tau_k$ & &$\tau_k^1$&$\tau_k^{0\mid 1}$&$\tau_k^0$ & & $\tau_k^0$&$\tau_k^{1\mid 0}$&$\tau_k^1$ \\ \cline{1-3} \cline{5-6} \cline{8-8}\cline{10-12} \cline{14-16} 1 & - & 1 & & 1& $E_1$ & & 2 & & 2& - & - &&-&1&2\\ 2 & - & 1 & & 2& $E_2$ & & 6 & & 4& - & - &&-&3&4\\ 3 & - & 1 & & 3& $E_3$ & & 4 & & 3& - & -&&-&2&3\\ 4 & - & 1 & & 4& $E_4$ & & 1 & & 1& - & -&&-&1&1\\ 5 & - & 1 &$\Longleftarrow$& 5& $E_5$ & & 7 & $\Longleftrightarrow$& 5&-& -&$\Longleftrightarrow$&-&3&5\\ - & 0 & 0 & & 0& $E_6$ & & 5 & & -& 3 & 2&&2&-&-\\ - & 0 & 0 & & 0& $E_7$ & & 3 & & -& 4 & 1&&1&-&-\\ - & 0 & 0 & & 0& $E_8$ & & 8 & & -& 1 & 3&&3&-&-\\ - & 0 & 0 & & 0& $E_9$ & & 9 & & -& 1 & 4&&4&-&-\\ - & 0 & 0 & & 0& $E_{10}$ & & 10& & -& 1 & 5&&5&-&-\\ \cline{1-3} \cline{5-6} \cline{8-8}\cline{10-12} \cline{14-16} \end{tabular} \caption{An illustrative example of construction of $\mathcal{I}^{+}$, $\mathcal{I}^0$ and $I$ based on the enhanced indicator vector $\mathcal{I}$ of $n_1=5$ to a universe of 10 entities, and the decomposition of a ranking list $\tau_k$ into triplet $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$ and $(\tau_k^0,\tau_k^{1\mid 0},\tau_k^1)$ respectively given $\mathcal{I}$. 
} \label{Tab:decomposition2} \end{figure} As an analogy of Equations (\ref{Eq:RankDecomposition}) and (\ref{Eq:RankDecompositionInverse}), we have the following decomposition of $\tau_k$ given the enhanced indicator vector $\mathcal{I}$: \begin{eqnarray}\label{Eq:RankDecomposition_BARDM} P(\tau_k \mid \mathcal{I})=P(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0 \mid \mathcal{I})=P(\tau_k^1 \mid\mathcal{I})\times P(\tau_k^{0\mid 1} \mid\mathcal{I})\times P(\tau_k^0 \mid \tau_k^{0\mid 1},\mathcal{I}). \end{eqnarray} Assume that $\tau_k^1\mid\mathcal{I}$ follows the Mallows model (with parameter $\phi_k$) centered at $\mathcal{I}^+$: \begin{eqnarray}\label{Eq:tau1_BARDM} P(\tau_k^1 \mid \mathcal{I}, \phi_k)=P(\tau_k^1 \mid\mathcal{I}^+,\phi_k)=\frac{\exp\{-\phi_k\cdot d_{\tau}(\tau_k^1,\mathcal{I}^+)\}} {Z_{n_1}(\phi_k)}, \end{eqnarray} where $d_{\tau}(\cdot,\cdot)$ denotes the Kendall tau distance and $Z_{n_1}(\phi_k)$ is defined as in \eqref{eq:Mallow-norm}. Clearly, a larger $\phi_k$ indicates that ranker $\tau_k$ is of a higher quality, as the distribution is more concentrated at the ``true ranking'' defined by $\mathcal{I}^+$. Since the relative rankings of background entities are of no interest to us, we still assume that they are randomly ranked. Together with the power-law assumption for $\tau_k^{0\mid1}(i)$, we have \begin{eqnarray} \label{Eq:tau01_BARDM} P(\tau_k^{0\mid 1} \mid\mathcal{I})&=&P(\tau_k^{0\mid 1} \mid I,\gamma_k) =\frac{1}{(B^*_{\tau_k,I})^{\gamma_k}\times (C^*_{\gamma_k,n_1})^{n-n_1}},\\ \label{Eq:tau0_BARDM} P(\tau_k^0 \mid \tau_k^{0\mid 1},\mathcal{I})&=&P(\tau_k^0 \mid \tau_k^{0\mid 1},I)=\frac{1}{A^*_{\tau_k,I}}\times\mathbb{I}\big(\tau_k^0 \in \mathcal{A}_{U_B}(\tau_k^{0\mid1})\big), \end{eqnarray} where the notations $A^*_{\tau_k,I}$, $B^*_{\tau_k,I}$ and $C^*_{\gamma_k,n_1}$ are the same as in the reverse-partition model. We call the resulting model the \emph{Partition-Mallows model}, abbreviated as PAMA.
Different from the partition and reverse partition models, which quantify the quality of ranker $\tau_k$ with only one parameter $\gamma_k$ in the power-law distribution, the PAMA model contains two quality parameters $\phi_k$ and $\gamma_k$, with the former indicating the ranker's ability to rank relevant entities and the latter reflecting the ranker's ability to differentiate relevant entities from background ones. Intuitively, $\phi_k$ and $\gamma_k$ reflect the quality of ranker $\tau_k$ in two different aspects. However, considering that a good ranker is typically strong in both dimensions, it is quite natural to further simplify the model by assuming \begin{equation}\label{Eq:phik2phi} \phi_k=\phi\cdot\gamma_k, \end{equation} with $\phi>0$ being a common factor for all rankers. This assumption, while reducing the number of free parameters by almost half, captures the natural positive correlation between $\phi_k$ and $\gamma_k$ and serves as a first-order (i.e., linear) approximation to the functional relationship between $\phi_k$ and $\gamma_k$. A wide range of numerical studies based on simulated data suggests that the linear approximation in \eqref{Eq:phik2phi} works reasonably well for many typical rank aggregation scenarios. In contrast, the more flexible model with both $\phi_k$ and $\gamma_k$ as free parameters (referred to as PAMA$^*$) suffers from unstable performance from time to time. Detailed evidence supporting assumption \eqref{Eq:phik2phi} can be found in the Supplementary Material. Plugging \eqref{Eq:phik2phi} into \eqref{Eq:tau1_BARDM}, we have a simplified model for $\tau_k^1$ given $\mathcal{I}$ as follows: \begin{eqnarray}\label{Eq:tau1_BARDM_final} P(\tau_k^1 \mid\mathcal{I}, \phi,\gamma_k)=P(\tau_k^1 \mid\mathcal{I}^+,\phi,\gamma_k)=\frac{\exp\{-\phi\cdot\gamma_k\cdot d_{\tau}(\tau_k^1,\mathcal{I}^+)\}}{Z_{n_1}(\phi\cdot\gamma_k)}.
\end{eqnarray} Combining \eqref{Eq:tau01_BARDM}, \eqref{Eq:tau0_BARDM} and \eqref{Eq:tau1_BARDM_final}, we get the full likelihood of $\tau_k$: \begin{eqnarray}\label{Eq:tau_BARDM_final} P(\tau_k \mid\mathcal{I}, \phi,\gamma_k)&=&P(\tau_k^1\mid\mathcal{I},\phi,\gamma_k)\times P(\tau_k^{0|1}\mid\mathcal{I},\gamma_k)\times P(\tau_k^{0}\mid \tau_k^{0|1},\mathcal{I})\nonumber\\ &=&\frac{\mathbb{I}\big(\tau_k^0 \in \mathcal{A}_{U_B}(\tau_k^{0\mid1})\big)}{A^*_{\tau_k,I}\times(B^*_{\tau_k,I})^{\gamma_k}\times(C^*_{\gamma_k,n_1})^{n-n_1}\times (D^*_{\tau_k,\mathcal{I}})^{\phi\cdot\gamma_k}\times E^*_{\phi,\gamma_k}}, \end{eqnarray} where $D^*_{\tau_k,\mathcal{I}}=\exp\{d_{\tau}(\tau_k^1,\mathcal{I}^+)\}$, $E^*_{\phi,\gamma_k}=Z_{n_1}(\phi\cdot\gamma_k)=\frac{\prod_{t=2}^{n_1}(1-e^{-t\phi\gamma_k})}{(1-e^{-\phi\gamma_k})^{n_1-1}}$, and $A^*_{\tau_k,I}$, $B^*_{\tau_k,I}$ and $C^*_{\gamma_k,n_1}$ keep the same meaning as in the reverse partition model. Finally, for the set of observed ranking lists $\boldsymbol{\tau}=(\tau_1,\cdots,\tau_m)$ from $m$ independent rankers, we have the joint likelihood: \begin{eqnarray} \label{eqn:like} P(\boldsymbol{\tau} \mid\mathcal{I},\phi,\boldsymbol{\gamma})&=&\prod_{k=1}^m P(\tau_k\mid\mathcal{I},\phi,\gamma_k). \end{eqnarray} \subsection{Model Identifiability and Estimation Consistency}\label{sec:consistency} Let $\Omega_n$ be the space of all permutations of $\{1,\cdots,n\}$ in which $\tau_k$ takes values, and let ${\boldsymbol{\theta}}=(\mathcal{I},\phi,\boldsymbol{\gamma})$ be the vector of model parameters.
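Before turning to identifiability, note that the per-list likelihood terms $A^*$, $B^*$, $C^*$, $D^*=e^{d_\tau}$ and $E^*$ in \eqref{Eq:tau_BARDM_final} are all directly computable. The sketch below evaluates them for the worked example of Figure~\ref{Tab:decomposition2}, under illustrative values $\phi=0.5$ and $\gamma_k=1$; the helper names are ours, not the authors' code, and the compatibility indicator is assumed to equal 1:

```python
from itertools import combinations
from math import exp, factorial, log, prod

def kendall_tau(r1, r2):
    """Discordant pairs between two rankings given as dicts entity -> rank."""
    keys = list(r1)
    return sum(1 for a, b in combinations(keys, 2)
               if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0)

def pama_loglik_terms(tau1, tau01, center, phi, gamma):
    """Return A*, B*, C*, d (so D* = e^d) and E* for one ranking list."""
    n1 = len(tau1)
    counts = {}
    for t in tau01.values():                      # the n_{tau_k,t}^{0|1} counts
        counts[t] = counts.get(t, 0) + 1
    A = prod(factorial(c) for c in counts.values())
    B = prod(tau01.values())
    C = sum(t ** (-gamma) for t in range(1, n1 + 2))
    d = kendall_tau(tau1, center)                 # D* = exp(d)
    E = (prod(1 - exp(-t * phi * gamma) for t in range(2, n1 + 1))
         / (1 - exp(-phi * gamma)) ** (n1 - 1))   # Z_{n1}(phi * gamma)
    return A, B, C, d, E

# Worked example of the figure: relevant ranks and reverse background ranks.
tau1 = {"E1": 2, "E2": 4, "E3": 3, "E4": 1, "E5": 5}
tau01 = {"E6": 3, "E7": 4, "E8": 1, "E9": 1, "E10": 1}
center = {"E1": 1, "E2": 2, "E3": 3, "E4": 4, "E5": 5}   # I^+ as a ranking
A, B, C, d, E = pama_loglik_terms(tau1, tau01, center, phi=0.5, gamma=1.0)
assert (A, B, d) == (6, 12, 4)
n0 = len(tau01)
loglik = -(log(A) + 1.0 * log(B) + n0 * log(C) + 0.5 * 1.0 * d + log(E))
```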
The PAMA model in \eqref{eqn:like}, i.e., $P(\boldsymbol{\tau} \mid {\boldsymbol{\theta}})$, defines a family of probability distributions on $\Omega_{n}^m$ indexed by parameter ${\boldsymbol{\theta}}$ taking values in the space $\boldsymbol{\Theta}=\Omega_{\mathcal{I}} \times \Omega_{\phi} \times \Omega_{\boldsymbol{\gamma}} $, where $\Omega_{\mathcal{I}}$ is the space of all permutations of $\{1,\cdots,n_1,{\bf 0}_{n_0}\}$, $\Omega_{\phi}=(0,+\infty)$ and $\Omega_{\boldsymbol{\gamma}}=[0,+\infty)^m$. We show here that the PAMA model defined in \eqref{eqn:like} is identifiable and the model parameters can be estimated consistently under mild conditions. \begin{Thm} \label{thm:identi} The PAMA model is identifiable, i.e., \begin{equation}\label{eq:IdentifiablityCondition} \forall\ {\boldsymbol{\theta}}_1,{\boldsymbol{\theta}}_2\in\boldsymbol{\Theta},\ \mbox{if}\ P(\boldsymbol{\tau}\mid{\boldsymbol{\theta}}_1)= P(\boldsymbol{\tau}\mid{\boldsymbol{\theta}}_2)\ \mbox{for all}\ \boldsymbol{\tau}\in\Omega_n^m,\ \mbox{then}\ {\boldsymbol{\theta}}_1 = {\boldsymbol{\theta}}_2. \end{equation} \end{Thm} \begin{proof} See Supplementary Material. \end{proof} To show that parameters in the PAMA model can be estimated consistently, we will first construct a consistent estimator for the indicator vector $\mathcal{I}$ as $m\rightarrow \infty$ but with the number of ranked entities $n$ fixed, and show later that $\phi$ can also be consistently estimated once $\mathcal{I}$ is given. To this end, we define $\bar\tau(i)=m^{-1}\sum_{k=1}^{m}\tau_k(i)$ to be the average rank of entity $E_i$ across all $m$ rankers, and assume that the ranker-specific quality parameters $\gamma_1,\cdots,\gamma_m$ are i.i.d. samples from a non-atomic probability measure $F(\gamma)$ defined on $[0,\infty)$ with a finite first moment (referred to as condition $\boldsymbol{C}_\gamma$ hereinafter).
Then, by the strong law of large numbers we have \begin{equation}\label{eq:MeanRank} \bar\tau(i)=\frac{1}{m}\sum_{k=1}^{m}\tau_k(i)\rightarrow \mathbb{E}\big[\tau(i)\big] \ a.s.\ \ \mbox{as} \ m\rightarrow \infty, \end{equation} since $\{\tau_k(i)\}_{k=1}^m$ are i.i.d. random variables with expectation $$\mathbb{E}\big[\tau(i)\big]=\mathbb{E}\Big[\mathbb{E}\big[\tau(i)\mid\gamma\big]\Big]=\int\mathbb{E}\big[\tau(i)\mid\gamma\big]dF(\gamma),$$ where $\mathbb{E}\big[\tau(i)\mid\gamma\big]$ is the conditional mean of $\tau(i)$ given the model parameters $(\mathcal{I},\phi,\gamma)$, i.e., $$\mathbb{E}\big[\tau(i)\mid\gamma\big]=\sum_{t=1}^n t\cdot P\big(\tau(i)=t\mid\mathcal{I},\phi,\gamma\big).$$ Clearly, $\mathbb{E}\big[\tau(i)\big]$ is a function of $\phi$ given $\mathcal{I}$ and $F(\gamma)$. We define $e_i(\phi)\triangleq \mathbb{E}\big[\tau(i)\big]$ to emphasize that this expectation is a continuous function of $\phi$. Without loss of generality, we suppose that $U_R=\{1,\cdots,n_1\}$ and $U_B=\{n_1+1,\cdots,n\}$, i.e., $\mathcal{I}=(1,\cdots,n_1,0,\cdots,0)$, hereinafter. Then, the partition structure and the Mallows model embedded in the PAMA model lead to the following facts: \begin{equation}\label{eq:MeanRankRelation} e_1(\phi) < \cdots < e_{n_1}(\phi)\ \mbox{and}\ e_{n_1+1}(\phi)=\cdots=e_{n}(\phi)=e_0,\ \forall\ \phi\in\Omega_\phi. \end{equation} Note that $e_i(\phi)$ degenerates to a constant with respect to $\phi$ (i.e., $e_0$) for all $i>n_1$ because parameter $\phi$ influences only the relative rankings of relevant entities in the Mallows model. The value of $e_0$ is completely determined by $F(\gamma)$. For the BARD model, it is easy to see that $e_1=\cdots=e_{n_1} \leq e_{n_1+1}=\cdots=e_n.$ \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{expectations.pdf} \caption{Average ranks of all the entities with fixed $\mathcal{I} = (1,\cdots,n_1,0,\cdots,0)$, $n=30$, $n_1=15$, $m=100000$ and $F(\gamma)= U(0,2)$.
Figures (a), (b) and (c) are the corresponding results for $\phi =0, 0.2\ \mbox{and } 0.4$ respectively.} \label{fig:AverageRankOfEntitiesInPAMA} \end{figure} Figure \ref{fig:AverageRankOfEntitiesInPAMA} shows some empirical estimates of the $e_i(\phi)$'s based on $m=100,000$ independent samples drawn from PAMA models with $n=30$, $n_1=15$, and $F(\gamma)= U(0,2)$, but three different $\phi$ values: (a) $\phi=0$, which corresponds to the BARD model; (b) $\phi=0.2$; and (c) $\phi=0.4$. One surprising observation is that in case (c), some relevant entities may have a larger $e_i(\phi)$ (i.e., a worse average rank) than the background entities. Lemma \ref{lem:AverageMean} guarantees that for almost all $\phi\in\Omega_\phi$, $e_0$ is different from $e_i(\phi)$ for $i=1,\cdots,n_1$. The proof of Lemma \ref{lem:AverageMean} can be found in Supplementary Material. \begin{lemma}\label{lem:AverageMean} For the PAMA model with condition $\boldsymbol{C}_\gamma$, $\exists\ \tilde\Omega_\phi\subset\Omega_\phi$, s.t. $(\Omega_\phi-\tilde\Omega_\phi)$ contains only finitely many elements and \begin{equation}\label{eq:e0neqei} e_i(\phi)\neq e_0\ \mbox{for}\ i=1,\cdots,n_1,\ \forall\ \phi\in\tilde\Omega_\phi. \end{equation} \end{lemma} The facts demonstrated in \eqref{eq:MeanRankRelation} and \eqref{eq:e0neqei} suggest the following three-step strategy to estimate $\mathcal{I}$: (a) find the subset $S_0$ of $n_0=(n-n_1)$ entities from $U$ so that the within-subset variation of the $\bar\tau(i)$'s is the smallest, i.e., \begin{equation} S_0=\argmin_{S\subseteq U, \ |S|=n_0} \sum_{i\in S} \big(\bar\tau(i)-\bar{\tau}_S\big)^2 , \ \ \mbox{with} \ \bar{\tau}_S=n_0^{-1} \sum_{i\in S} \bar\tau(i), \end{equation} and let $\tilde{U}_B=S_0$ be an estimate of $U_B$; (b) rank the entities in $U\setminus S_0$ by $\bar\tau(i)$ increasingly and use the obtained ranking $\tilde\mathcal{I}^+$ as an estimate of $\mathcal{I}^+$; (c) combine the above two steps to obtain the estimate $\tilde\mathcal{I}$ of $\mathcal{I}$.
This can be achieved by defining $\tilde U_R=U \setminus \tilde U_B$ and $\tilde\mathcal{I}^+=rank(\{\bar{\tau}(i) :i\in\tilde U_R\}),$ and obtaining $\tilde\mathcal{I}=(\tilde\mathcal{I}_1,\cdots,\tilde\mathcal{I}_n)$, with $\tilde\mathcal{I}_i=\tilde\mathcal{I}^+_i\cdot\mathbb{I}(i\in\tilde U_R).$ Note that $\tilde U_B$ is based on the mean ranks, $\{\bar\tau(i)\}_{i\in U}$, and is thus a moment estimator. Although this three-step estimation strategy is neither statistically efficient nor computationally feasible (step (a) is NP-hard), it nevertheless serves as a prototype for developing the consistency theory. Theorem \ref{thm:consisI} guarantees that $\tilde\mathcal{I}$ is a consistent estimator of $\mathcal{I}$ under mild conditions. \begin{Thm} \label{thm:consisI} For the PAMA model with condition $\boldsymbol{C}_\gamma$ and for almost all $\phi\in\Omega_\phi$, the moment estimator $\tilde\mathcal{I}$ converges to $\mathcal{I}$ with probability 1 as $m$ goes to infinity. \end{Thm} \begin{proof} Combining fact \eqref{eq:e0neqei} in Lemma \ref{lem:AverageMean} with fact \eqref{eq:MeanRank}, we have for all $\phi\in\tilde\Omega_\phi$ that $$e_1(\phi)<\cdots<e_{n_1}(\phi)\ \mbox{and}\ e_i(\phi)\neq e_0\ \mbox{for}\ i=1,\cdots,n_1.$$ Moreover, as fact \eqref{eq:MeanRank} tells us that for all $\epsilon,\delta>0$, $\exists\ M>0$ s.t. for all $m>M$, $$P\big(|\bar\tau(i)-e_i(\phi)|<\delta\big)\geq 1-\epsilon,\ i=1,\cdots,n,$$ the conclusion of the theorem follows. \end{proof} Theorem \ref{thm:consisI} tells us that estimating $\mathcal{I}$ is straightforward if the number of independent rankers $m$ goes to infinity: a simple moment method ignoring the quality difference of rankers can provide a consistent estimate of $\mathcal{I}$.
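A minimal numerical sketch of this three-step moment estimator (function and variable names are illustrative; since the mean ranks are scalars, the size-$n_0$ subset with minimal within-subset variation is contiguous in sorted order, so step (a) reduces to a sliding-window search):

```python
import numpy as np

def moment_estimate(tau, n0):
    """Moment estimator of the indicator vector I from an (m, n) array of
    full ranking lists; tau[k, i] is the rank that ranker k assigns entity i."""
    m, n = tau.shape
    mean_rank = tau.mean(axis=0)                 # bar-tau(i)
    order = np.argsort(mean_rank)
    # Step (a): for scalar values, the size-n0 subset minimizing the
    # within-subset sum of squares is contiguous in sorted order,
    # so a sliding window over the sorted mean ranks suffices.
    best_s, best_v = 0, np.inf
    for s in range(n - n0 + 1):
        w = mean_rank[order[s:s + n0]]
        v = np.sum((w - w.mean()) ** 2)
        if v < best_v:
            best_s, best_v = s, v
    background = set(order[best_s:best_s + n0].tolist())   # tilde-U_B
    # Steps (b)-(c): rank the remaining entities by mean rank, zero the rest.
    I = np.zeros(n, dtype=int)
    rank = 1
    for i in order:                              # already sorted by mean rank
        if i not in background:
            I[i] = rank
            rank += 1
    return I
```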
In a practical problem where only a finite number of rankers is involved, however, more efficient statistical inference of the PAMA model based on Bayesian or frequentist principles becomes attractive, as effectively utilizing the quality information of different rankers is critical. With $n_0$ and $n_1$ fixed, parameter $\gamma_k$, which governs the power-law distribution for the rank list $\tau_k$, cannot be estimated consistently. Thus, its distribution $F(\gamma)$ cannot be determined nonparametrically even when the number of rank lists $m$ goes to infinity. We impose a parametric form $F_\psi(\gamma)$ with $\psi$ as the hyper-parameter and refer to the resulting hierarchically structured model as PAMA-H, which has the following marginal likelihood of $(\phi,\psi)$ given $\mathcal{I}$: $$L(\phi,\psi\mid\mathcal{I})=\int P(\boldsymbol{\tau}\mid\mathcal{I},\phi,\boldsymbol{\gamma})dF_\psi(\boldsymbol{\gamma}) =\prod_{k=1}^m\int P(\tau_k\mid\mathcal{I},\phi,\gamma_k)dF_\psi(\gamma_k)=\prod_{k=1}^mL_k(\phi,\psi\mid\mathcal{I}).$$ We show in Theorem \ref{thm:consisPhiPsi} that the MLE based on the above marginal likelihood is consistent. \begin{Thm}\label{thm:consisPhiPsi} Under the PAMA-H model, assume that $(\phi,\psi)$ belongs to the parameter space $\Omega_\phi\times\Omega_\psi$, and the true parameter $(\phi_0,\psi_0)$ is an interior point of $\Omega_\phi\times\Omega_\psi$. Let $(\hat{\phi}_\mathcal{I},\hat\psi_\mathcal{I})$ be the maximizer of $L(\phi,\psi\mid\mathcal{I})$. If $F_\psi(\gamma)$ has a density function $f_\psi(\gamma)$ that is differentiable and concave with respect to $\psi$, then $\lim_{m\rightarrow\infty}(\hat{\phi}_\mathcal{I},\hat\psi_\mathcal{I})=(\phi_0,\psi_0)$ almost surely. \end{Thm} \begin{proof} See Supplementary Material.
\end{proof} \section{Inference with the Partition-Mallows Model} \label{sec:statinfer} \subsection{Maximum Likelihood Estimation} \label{subsec:MLE} Under the PAMA model, the MLE of ${\boldsymbol{\theta}}= (\mathcal{I}, \phi,\boldsymbol{\gamma})$ is $\hat{{\boldsymbol{\theta}}}= \arg\max_{{\boldsymbol{\theta}}} l({\boldsymbol{\theta}})$, where \begin{equation} \label{eqn:loglike} l({\boldsymbol{\theta}}) = \log P(\tau_1,\tau_2,\cdots,\tau_m\mid{\boldsymbol{\theta}}) \end{equation} is the logarithm of the likelihood function (\ref{eqn:like}). Here, we adopt the \textit{Gauss-Seidel} iterative method in \cite{YANG2018281}, also known as \textit{backfitting} or \textit{cyclic coordinate ascent}, to implement the optimization. Starting from an initial point ${\boldsymbol{\theta}}^{(0)}$, the Gauss-Seidel method iteratively updates one coordinate of ${\boldsymbol{\theta}}$ at each step with the other coordinates held fixed at their current values. A Newton-like method is adopted to update $\phi$ and $\gamma_k$. Since $\mathcal{I}$ is a discrete vector, we search for favorable values of $\mathcal{I}$ by swapping two neighboring entities and checking whether the conditional log-likelihood $g(\mathcal{I}\mid\boldsymbol{\gamma}^{(s+1)}, \phi^{(s+1)})$ increases. More details of the algorithm are provided in Supplementary Material. With the MLE $\hat{\boldsymbol{\theta}}=(\hat\mathcal{I},\hat\phi,\hat\boldsymbol{\gamma})$, we define $U_R(\hat\mathcal{I})=\{i\in U:\ \hat\mathcal{I}_i>0\}$ and $U_B(\hat\mathcal{I})=\{i\in U:\ \hat\mathcal{I}_i=0\}$, and generate the final aggregated ranking list $\hat\tau$ by the following rules: (a) set the top-$n_1$ list of $\hat\tau$ as $\hat\tau_{n_1}=sort(i\in U_R(\hat\mathcal{I})\ by\ \hat\mathcal{I}_i\uparrow)$; (b) let all entities in $U_B(\hat\mathcal{I})$ tie for the remaining positions. Hereinafter, we refer to this MLE-based rank aggregation procedure under the PAMA model as PAMA$_F$.
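The Gauss-Seidel loop can be sketched generically as follows (a hedged illustration, not the authors' implementation: \texttt{loglik} is any user-supplied log-likelihood, continuous coordinates receive a numeric Newton-type step, and the discrete coordinate $\mathcal{I}$ is improved by accepting adjacent swaps that raise the objective):

```python
import numpy as np

def gauss_seidel_mle(loglik, I0, phi0, gamma0, n_iter=50, step=1e-4):
    """Cyclic coordinate ascent: update phi, each gamma_k, then I in turn."""
    I, phi, gamma = I0.copy(), float(phi0), np.asarray(gamma0, dtype=float).copy()

    def newton_1d(f, x):
        # numeric Newton step; move only when the local curvature is negative
        g = (f(x + step) - f(x - step)) / (2.0 * step)
        h = (f(x + step) - 2.0 * f(x) + f(x - step)) / step ** 2
        return x - g / h if h < 0 else x

    for _ in range(n_iter):
        phi = max(newton_1d(lambda v: loglik(I, v, gamma), phi), 1e-6)
        for k in range(len(gamma)):
            def f_k(v, k=k):
                g2 = gamma.copy()
                g2[k] = v
                return loglik(I, phi, g2)
            gamma[k] = max(newton_1d(f_k, gamma[k]), 0.0)
        # discrete coordinate: accept adjacent swaps in I that improve loglik
        for i in range(len(I) - 1):
            I2 = I.copy()
            I2[i], I2[i + 1] = I2[i + 1], I2[i]
            if loglik(I2, phi, gamma) > loglik(I, phi, gamma):
                I = I2
    return I, phi, gamma
```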
For the PAMA-H model, a similar procedure can be applied to find the MLE of ${\boldsymbol{\theta}}=(\mathcal{I},\phi,\psi)$, with $\boldsymbol{\gamma}=(\gamma_1,\cdots,\gamma_m)$ treated as missing data. With the MLE $\hat{\boldsymbol{\theta}}=(\hat\mathcal{I},\hat\phi,\hat\psi)$, we can generate the final aggregated ranking list $\hat\tau$ based on $\hat\mathcal{I}$ in the same way as in PAMA, and evaluate the quality of ranker $\tau_k$ via the mean or mode of the conditional distribution below: $$f(\gamma_k\mid\tau_k;\hat\mathcal{I},\hat\phi,\hat\psi)\propto f(\gamma_k\mid\hat\psi)\cdot P(\tau_k\mid\hat\mathcal{I},\hat\phi,\gamma_k).$$ In this paper, we refer to the above MLE-based rank aggregation procedure under the PAMA-H model as PAMA$_{HF}$. The procedure is detailed in Supplementary Material. \subsection{Bayesian Inference} \label{subsec:BC} Since the three model parameters $\mathcal{I}$, $\phi$ and $\boldsymbol{\gamma}$ encode ``orthogonal" information of the PAMA model, it is natural to expect that $\mathcal{I}$, $\phi$ and $\boldsymbol{\gamma}$ are mutually independent {\it a priori}. We thus specify their joint prior distribution as $$\pi(\mathcal{I},\phi,\boldsymbol{\gamma})=\pi(\mathcal{I})\cdot\pi(\phi)\cdot\prod_{k=1}^m\pi(\gamma_k).$$ Without much loss, we may restrict the range of $\phi$ and the $\gamma_k$'s to a closed interval $[0,b]$ with a large enough $b$. In contrast, $\mathcal{I}$ is discrete and takes values in the space $\Omega_\mathcal{I}$ of all permutations of $\{1,\ldots,n_1, \underbrace{0,\ldots,0}_{n_0}\}$. It is convenient to specify $\pi(\mathcal{I})$, $\pi(\phi)$ and $\pi(\gamma_k)$ as uniform, i.e., $$\pi(\mathcal{I})\sim U(\Omega_\mathcal{I}),\ \pi(\phi)\sim U[0,b],\ \pi(\gamma_k)\sim U[0,b].$$ Based on our experience in a wide range of simulation studies and real-data applications, we find that setting $b=10$ works reasonably well.
In Section~\ref{sec:consistency} we also considered a parametric form for $\pi(\gamma_k)$; this option is discussed further below. The posterior distribution can be expressed as \begin{eqnarray} &&f(\mathcal{I},\phi,\boldsymbol{\gamma}|\tau_1,\tau_2,\cdots,\tau_m)\nonumber\\ &\propto& \pi(\mathcal{I},\phi,\boldsymbol{\gamma})\cdot P(\tau_1,\tau_2,\cdots,\tau_m|\mathcal{I},\phi,\boldsymbol{\gamma})\nonumber\\ &=&\mathbb{I}\big(\phi\in[0,10]\big)\times\prod_{k=1}^m \Big\{ \frac{\mathbb{I}\big(\tau_k^0 \in \mathcal{A}_{U_R}(\tau_k^{0\mid1})\big)\times\mathbb{I}\big(\gamma_k\in[0,10]\big)}{A^*_{\tau_k,\mathcal{I}}\times(B^*_{\tau_k,\mathcal{I}})^{\gamma_k}\times(C^*_{\gamma_k,n_1})^{n-n_1}\times (D^*_{\tau_k,\mathcal{I}})^{\phi\cdot\gamma_k}\times E^*_{\phi,\gamma_k}} \Big\},\label{eqn:posterior} \end{eqnarray} with the following conditional distributions: \begin{eqnarray} \label{fc:I} f(\mathcal{I}\mid\phi,\boldsymbol{\gamma}) &\propto& \prod_{k=1}^m\frac{\mathbb{I}\big(\tau_k^0 \in \mathcal{A}_{U_R}(\tau_k^{0\mid1})\big)}{A^*_{\tau_k,\mathcal{I}}\times(B^*_{\tau_k,\mathcal{I}})^{\gamma_k}\times(D^*_{\tau_k,\mathcal{I}})^{\phi\cdot\gamma_k}},\\ \label{fc:phi} f(\phi\mid\mathcal{I},\boldsymbol{\gamma}) &\propto& {\mathbb I}\big(\phi \in [0,10]\big)\times \prod_{k=1}^m \frac{1}{(D^*_{\tau_k,\mathcal{I}})^{\phi\cdot\gamma_k}\times E^*_{\phi,\gamma_k}},\\ \label{fc:gamma} f(\gamma_k\mid\mathcal{I},\phi,\boldsymbol{\gamma}_{[-k]})&\propto&\frac{\mathbb{I}\big(\gamma_k \in [0,10]\big)}{(B^*_{\tau_k,\mathcal{I}})^{\gamma_k}\times(C^*_{\gamma_k,n_1})^{n-n_1}\times (D^*_{\tau_k,\mathcal{I}})^{\phi\cdot\gamma_k}\times E^*_{\phi,\gamma_k}}, \end{eqnarray} based on which posterior samples of $(\mathcal{I},\phi,\boldsymbol{\gamma})$ can be obtained by Gibbs sampling, where $\boldsymbol{\gamma}_{[-k]}=(\gamma_1,\cdots,\gamma_{k-1},\gamma_{k+1},\cdots,\gamma_m)$.
Considering that the conditional distributions in (\ref{fc:I})-(\ref{fc:gamma}) are nonstandard, we adopt the Metropolis-Hastings algorithm \citep{Hastings1970} to enable the conditional sampling. To be specific, we choose the proposal distributions for $\phi$ and $\gamma_k$ as \begin{eqnarray*} q(\phi\mid\phi^{(t)};\mathcal{I},\boldsymbol{\gamma})&\sim&\mathcal{N}(\phi^{(t)},\sigma_{\phi}^2)\\ q(\gamma_k\mid\gamma_k^{(t)};\mathcal{I},\phi,\boldsymbol{\gamma}_{[-k]})&\sim& \mathcal{N}(\gamma_k^{(t)},\sigma_{\gamma_k}^2), \end{eqnarray*} where $\sigma_{\phi}^2$ and $\sigma_{\gamma_k}^2$ can be tuned to optimize the mixing rate of the sampler. Since $\mathcal{I}$ is a discrete vector, we propose new values of $\mathcal{I}$ by swapping two randomly selected adjacent entities. Note that the entity ranked $n_1$ could be swapped with any background entity. Due to the homogeneity of background entities, there is no need to swap two background entities. Therefore, the number of potential proposals in each step is $\mathcal{O}(n n_1)$. More details about MCMC sampling techniques can be found in \cite{liu2008monte}. Suppose that $M$ posterior samples $\{(\mathcal{I}^{(t)},\phi^{(t)},\boldsymbol{\gamma}^{(t)})\}_{t=1}^M$ are obtained. We calculate the posterior means of the parameters as below: \begin{eqnarray*} \bar{\mathcal{I}}_i&=&\frac{1}{M} \sum_{t=1}^M\Big[\mathcal{I}_i^{(t)}\cdot I^{(t)}_i+\frac{n_1+1+n}{2}\cdot(1-I^{(t)}_i)\Big],\ i=1,\cdots,n,\\ \bar{\phi} &=& \frac{1}{M} \sum_{t=1}^M \phi^{(t)},\\ \bar{\gamma}_k &=& \frac{1}{M} \sum_{t=1}^M \gamma_k^{(t)},\ k=1,\cdots,m. \end{eqnarray*} We quantify the quality of ranker $\tau_k$ with $\bar\gamma_k$, and generate the final aggregated ranking list $\hat\tau$ based on the $\bar\mathcal{I}_i$'s as follows: $$\hat\tau=sort(i\in U\ by\ \bar\mathcal{I}_i \uparrow).$$ Hereinafter, we refer to this MCMC-based Bayesian rank aggregation procedure under the Partition-Mallows model as PAMA$_B$.
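Each continuous-parameter update above is a standard random-walk Metropolis move; a generic sketch, with a toy truncated-Gaussian target standing in for the actual conditional densities:

```python
import numpy as np

def mh_step(x, log_target, sigma, rng):
    """One random-walk Metropolis update: propose from N(x, sigma^2) and
    accept with probability min(1, target(x') / target(x)). The same kernel
    is applied in turn to phi and to each gamma_k."""
    prop = rng.normal(x, sigma)
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        return prop
    return x
```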
The Bayesian inference procedure PAMA$_{HB}$ for the PAMA-H model differs from PAMA$_B$ only by replacing the prior distribution $\prod_{k=1}^m \pi(\gamma_k)$, which is uniform in $[0,b]^m$, with a hierarchically structured prior $\pi(\psi) \prod_{k=1}^m f_\psi (\gamma_k)$. The conditional distributions needed for Gibbs sampling are almost the same as \eqref{fc:I}-\eqref{fc:gamma}, except for an additional one: \begin{eqnarray} \label{PAMA-HB:psi} f(\psi\mid\mathcal{I},\phi,\boldsymbol{\gamma})&\propto&\pi(\psi)\cdot\prod_{k=1}^mf_\psi(\gamma_k). \end{eqnarray} We may specify $f_\psi(\gamma)$ to be an exponential distribution and let $\pi(\psi)$ be a proper conjugate prior to make \eqref{PAMA-HB:psi} easy to sample from. More details for PAMA$_{HB}$ with $f_\psi(\gamma)$ specified as an exponential distribution are provided in Supplementary Material. Our simulation studies suggest that the practical performances of PAMA$_B$ and PAMA$_{HB}$ are very similar when $n_0$ and $n_1$ are reasonably large (see Supplementary Material for details). In contrast, as we will show in Section~\ref{sec:simulation}, the MLE-based estimates (e.g., PAMA$_F$) typically produce less accurate results with a shorter computational time compared to PAMA$_B$. \subsection{Extension to Partial Ranking Lists} The proposed Partition-Mallows model can be extended to more general scenarios where partial ranking lists, instead of full ranking lists, are involved in the aggregation. Given the entity set $U$ and a ranking list $\tau_S$ of entities in $S\subseteq U$, we say $\tau_S$ is a \emph{full ranking list} if $S=U$, and a \emph{partial ranking list} if $S\subset U$. Suppose $\tau_S$ is a partial ranking list and $\tau_U$ is a full ranking list of $U$. If the projection of $\tau_U$ on $S$ equals $\tau_S$, we say $\tau_U$ is compatible with $\tau_S$, denoted by $\tau_U\sim\tau_S$. Let $\mathcal{A}(\tau_S)=\{\tau_U:\tau_U\sim\tau_S\}$ be the set of all full lists that are compatible with $\tau_S$.
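With lists represented as entities ordered from best to worst, the projection and the compatibility relation $\tau_U\sim\tau_S$ can be sketched as:

```python
def project(tau_U, S):
    """Project a full ranking list (entities ordered best-to-worst) onto the
    subset S: keep only the entities of S, preserving their relative order."""
    return [e for e in tau_U if e in S]

def compatible(tau_U, tau_S):
    """tau_U ~ tau_S iff the projection of tau_U onto the entities of tau_S
    reproduces tau_S exactly."""
    return project(tau_U, set(tau_S)) == list(tau_S)
```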
Suppose a partial list $\tau_k$ is involved in the rank aggregation problem. The probability of $\tau_k$ can be evaluated as \begin{eqnarray} \label{eqn:pllike} P(\tau_k\mid\mathcal{I},\phi,\gamma_k)=\sum_{\tau_k^*\sim\tau_k} P(\tau_k^*\mid\mathcal{I},\phi,\gamma_k), \end{eqnarray} where $P(\tau_k^*\mid\mathcal{I},\phi,\gamma_k)$ is the probability of a compatible full list under the PAMA model. Clearly, the probability in (\ref{eqn:pllike}) does not have a closed-form representation due to the complicated compatibility constraints between $\tau_k$ and $\tau_k^*$, and it is very challenging to do statistical inference directly based on this quantity. Fortunately, as rank aggregation with partial lists can be treated as a missing data problem, we can resolve the problem via standard methods for missing data inference. Bayesian inference can be accomplished by the classic data augmentation strategy~\citep{tanner1987} in a similar way as described in \cite{deng2014bayesian}, which iterates between imputing the missing data conditional on the observed data given the current parameter values, and updating parameter values by sampling from the posterior distribution based on the imputed full data. To be specific, we iteratively draw from the following two conditional distributions: \begin{eqnarray*} P(\tau_1^\ast,\cdots,\tau_m^\ast\mid\tau_1,\cdots,\tau_m;\mathcal{I},\phi,\boldsymbol{\gamma})=\prod_{k=1}^m P(\tau_k^\ast\mid\tau_k;\mathcal{I},\phi,\gamma_k), \\ f(\mathcal{I},\phi,\boldsymbol{\gamma}\mid\tau_{1}^\ast,\cdots,\tau_m^\ast)\propto \pi(\mathcal{I})\times\pi(\boldsymbol{\gamma})\times\pi(\phi)\times\prod_{k=1}^m P(\tau_k^\ast\mid\mathcal{I},\gamma_{k},\phi). \end{eqnarray*} To find the MLE of ${\boldsymbol{\theta}}$ for this more challenging scenario, we can use the Monte Carlo EM algorithm (MCEM, \cite{tanner1990}). Let $\tau_k^{(1)},\cdots, \tau_k^{(M)}$ be $M$ independent samples drawn from distribution $P(\tau_k^\ast\mid\tau_k,\mathcal{I},\phi,\gamma_k)$.
The E-step involves the calculation of the $Q$-function below: \begin{eqnarray*} Q(\mathcal{I},\boldsymbol{\gamma},\phi \mid \mathcal{I}^{(s)},\boldsymbol{\gamma}^{(s)},\phi^{(s)}) &=& E \left\{\sum_{k=1}^m \log P(\tau_k^{*} \mid \mathcal{I},\boldsymbol{\gamma},\phi) \mid \tau_k, \mathcal{I}^{(s)},\gamma_k^{(s)},\phi^{(s)} \right\}\nonumber\\ &\approx& \dfrac{1}{M}\sum_{k=1}^m \sum_{t=1}^M \log P(\tau_k^{(t)}\mid \mathcal{I},\gamma_k,\phi). \end{eqnarray*} In the M-step, we use the \emph{Gauss-Seidel} method to maximize the above $Q$-function in a similar way as detailed in Supplementary Material. No matter which method is used, a key step is to draw samples from \[P(\tau_k^\ast\mid\tau_k;\mathcal{I},\phi,\gamma_k)\propto P(\tau_k^\ast\mid\mathcal{I},\gamma_k,\phi)\cdot \mathbb{I}\big(\tau_k^\ast\in\mathcal{A}(\tau_k)\big).\] To achieve this goal, we start with the $\tau_k^*$ obtained from the previous step of the data augmentation or MCEM algorithm, and conduct several iterations of the following Metropolis step with $P(\tau_k^\ast\mid\tau_k;\mathcal{I},\phi,\gamma_k)$ as its target distribution: (a) construct the proposal $\tau_k'$ by randomly selecting two elements in the current full list $\tau_k^*$ and swapping them; (b) accept or reject the proposal according to the Metropolis rule, that is, accept $\tau_k'$ with probability $\min\big(1,\frac{P(\tau_k'\mid\mathcal{I},\gamma_k,\phi)}{P(\tau_k^{*}\mid\mathcal{I},\gamma_k,\phi)}\big)$. Note that the proposed list $\tau_k'$ is automatically rejected if it is incompatible with the observed partial list $\tau_k$. \subsection{Incorporating Covariates in the Analysis} In some applications, covariate information for each ranked entity is available to assist rank aggregation. One of the earliest attempts to incorporate such information in analyzing rank data is perhaps the \emph{hidden score model} due to \cite{Thurstone1927}, which has become a standard approach and has many extensions.
Briefly, these models assume that there is an unobserved score for each entity that is related to the entity-specific covariates $X_i=(X_{i1},\cdots,X_{ip})^T$ under a regression framework, and that the observed rankings are determined by these scores plus noise, i.e., $$\tau_k=sort(S_{ik}\downarrow,\ E_i\in U),\ \mbox{where}\ S_{ik}=X_i^T\boldsymbol{\beta}+\varepsilon_{ik}.$$ Here, $\boldsymbol{\beta}$ is the common regression coefficient and $\varepsilon_{ik}\sim N(0,\sigma^2_k)$ is the noise term. Recent progress along this line is reviewed in \cite{Yu2000Bayesian,Bhowmik2017,li2020bayesian}. Here, we propose to incorporate covariates into the analysis in a different way. Assuming that covariate $X_i$ provides information on the group assignment instead of the detailed ranking of entity $E_i$, we connect $X_i$ and $\mathcal{I}_i$, the enhanced indicator of $E_i$, by a logistic regression model: \begin{equation}\label{eq:BARDM_Logistics} P(\mathcal{I}_i\mid X_i)=P(I_i\mid X_i,\boldsymbol{\psi})=\dfrac{\exp\{X_i^T\boldsymbol{\psi}\cdot I_i\}}{1+\exp\{X_i^T\boldsymbol{\psi}\}},~~i=1,\cdots,n, \end{equation} where $\boldsymbol{\psi}=(\psi_1,\ldots,\psi_p)^T$ denotes the regression parameters. Letting $\boldsymbol{X}=(X_1,\cdots,X_n)$ be the covariate matrix, we can extend the Partition-Mallows model as \begin{equation}\label{eq:BARDM_Covariate} P(\tau_1,\cdots,\tau_m, \mathcal{I} \mid \boldsymbol{X})=P(\mathcal{I} \mid \boldsymbol{X},\boldsymbol{\psi})\times P(\tau_1,\cdots,\tau_m\mid\mathcal{I},\phi,\boldsymbol{\gamma}), \end{equation} where the first term $$P(\mathcal{I} \mid \boldsymbol{X},\boldsymbol{\psi})=\prod_{i=1}^n P(I_i \mid X_i,\boldsymbol{\psi})$$ comes from the logistic regression model (\ref{eq:BARDM_Logistics}), and the second term comes from the original Partition-Mallows model. In the extended model, our goal is to infer $(\mathcal{I},\phi,\boldsymbol{\gamma},\boldsymbol{\psi})$ based on $(\tau_1,\cdots,\tau_m;\boldsymbol{X})$.
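The logistic group-assignment term in \eqref{eq:BARDM_Logistics} for the binary indicator $I_i$ is the familiar sigmoid; a minimal sketch, with illustrative names:

```python
import numpy as np

def group_prob(X, psi):
    """P(I_i = 1 | X_i, psi) = exp{X_i' psi} / (1 + exp{X_i' psi}),
    evaluated row-wise for the covariate matrix X; equivalently the
    sigmoid 1 / (1 + exp(-X_i' psi))."""
    z = X @ np.asarray(psi)
    return 1.0 / (1.0 + np.exp(-z))
```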
We can achieve both Bayesian inference and MLE for the extended model in a similar way as described for the Partition-Mallows model. More details are provided in the Supplementary Material. An alternative way to incorporate covariates is to replace the logistic regression model by a naive Bayes model, which models the conditional distribution of $\boldsymbol{X}\mid\mathcal{I}$ instead of $\mathcal{I}\mid\boldsymbol{X}$, as follows: \begin{equation} f(\tau_1,\cdots,\tau_m,\boldsymbol{X}\mid\mathcal{I})=P(\tau_1,\cdots,\tau_m\mid\mathcal{I},\phi,\boldsymbol{\gamma})\times f(\boldsymbol{X}\mid\mathcal{I}), \end{equation} where \begin{eqnarray*} f(\boldsymbol{X}\mid\mathcal{I})&=&\prod_{i=1}^nf(X_i\mid \mathcal{I}_i)=\prod_{i=1}^nf(X_i\mid I_i)=\prod_{i=1}^n\prod_{j=1}^pf(X_{ij}\mid I_i)\\ &=&\prod_{i=1}^n\prod_{j=1}^p\Big\{\big[f_{j}(X_{ij}\mid\psi_{j0})\big]^{1-I_i}\cdot\big[f_{j}(X_{ij}\mid\psi_{j1})\big]^{I_i}\Big\}, \end{eqnarray*} and $f_j$ is a pre-specified parametric distribution for the $j$th covariate, with parameter $\psi_{j0}$ for entities in the background group and $\psi_{j1}$ for entities in the relevant group. Since the performances of the two approaches are very similar, in the rest of this paper we use the logistic regression strategy to handle covariates due to its convenient form. \section{Simulation Study} \label{sec:simulation} \subsection{Simulation Settings}\label{sec:SimuSetting} We simulated data from two models: (a) the proposed Partition-Mallows model, referred to as $\mathcal{S}_{PM}$, and (b) the Thurstone hidden score model, referred to as $\mathcal{S}_{HS}$. In the $\mathcal{S}_{PM}$ scenario, we specified the true indicator vector as $\mathcal{I}=(1,\cdots,n_1,0,\cdots,0)$, indicating that the first $n_1$ entities $E_1,\cdots, E_{n_1}$ belong to $U_R$ and the rest belong to the background group $U_B$, and set $$\gamma_k=\left\{ \begin{array}{ll} 0.1, & \mbox{if } k\leq\frac{m}{2}; \\ a+(k-\frac{m}{2})\times\delta_R, & \mbox{if } k>\frac{m}{2}.
\\ \end{array} \right. $$ Clearly, $a>0$ and $\delta_R>0$ control the quality difference and signal strength of the $m$ base rankers in the $\mathcal{S}_{PM}$ scenario. We set $\phi=0.6$ (defined in \eqref{Eq:phik2phi}) and $\delta_R=\frac{2}{m}$, and considered two options for $a$: 2.5 and 1.5. For easy reference, we denote the strong-signal case with $a=2.5$ and the weak-signal case with $a=1.5$ by $\mathcal{S}_{PM_1}$ and $\mathcal{S}_{PM_2}$, respectively. In the $\mathcal{S}_{HS}$ scenario, we used the Thurstone model to generate the rank lists as $\tau_k = sort(i\in U\ by\ S_{ik} \downarrow),\ \mbox{where}\ S_{ik}\sim N(\mu_{ik},1)$ and $$\mu_{ik}=\left\{ \begin{array}{ll} 0, & \mbox{if } k\leq\frac{m}{2}\ \mbox{or}\ i>n_1; \\ a^*+\frac{b^*-a^*}{m}\times k + (n_1-i)\times\delta_E^*, & \mbox{otherwise}. \\ \end{array} \right. $$ In this model, $a^*, b^*$ and $\delta_E^*$ (all positive numbers) control the quality difference and signal strength of the $m$ base rankers. We also specified two sub-cases: $\mathcal{S}_{HS_1}$, the stronger-signal case with $(a^*,b^*,\delta_E^*)=(0.5,2.5,0.2)$; and $\mathcal{S}_{HS_2}$, the weaker-signal case with $(a^*,b^*,\delta_E^*)=(-0.5,1.5,0.2)$. Table \ref{Tab:HS1mu} shows the configuration matrix of $\mu_{ik}$ under $\mathcal{S}_{HS_1}$ when $m=10$, $n=100$ and $n_1=10$. In both scenarios, the first half of the rankers are completely non-informative, while the other half provide increasingly strong signals.
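A sketch of the $\mathcal{S}_{HS}$ generator, following the displayed formula for $\mu_{ik}$ (function name and arguments are illustrative):

```python
import numpy as np

def simulate_hs(m, n, n1, a, b, delta, rng):
    """Draw m ranking lists under the Thurstone scenario: S_ik ~ N(mu_ik, 1)
    and tau_k sorts entities by score, descending; mu_ik = 0 for the first
    half of the rankers and for all background entities, otherwise
    mu_ik = a + (b - a)/m * k + (n1 - i) * delta (1-based k and i)."""
    mu = np.zeros((n, m))
    for k in range(m):
        if k + 1 > m / 2:                     # informative rankers only
            for i in range(n1):               # relevant entities E_1..E_n1
                mu[i, k] = a + (b - a) / m * (k + 1) + (n1 - (i + 1)) * delta
    tau = np.empty((m, n), dtype=int)
    for k in range(m):
        scores = rng.normal(mu[:, k], 1.0)
        order = np.argsort(-scores)           # best score receives rank 1
        tau[k, order] = np.arange(1, n + 1)
    return tau
```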
\begin{table}[h] \small \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline & $\mu_1$&$\mu_2$&$\mu_3$&$\mu_4$&$\mu_5$&$\mu_6$&$\mu_7$&$\mu_8$&$\mu_9$&$\mu_{10}$\\ \hline $E_1$&0&0&0&0&0&3.7&3.9&4.1&4.3&4.5 \\ $E_2$&0&0&0&0&0&3.5&3.7&3.9&4.1&4.3 \\ $E_3$&0&0&0&0&0&3.3&3.5&3.7&3.9&4.1 \\ $E_4$&0&0&0&0&0&3.1&3.3&3.5&3.7&3.9 \\ $E_5$&0&0&0&0&0&2.9&3.1&3.3&3.5&3.7 \\ $E_6$&0&0&0&0&0&2.7&2.9&3.1&3.3&3.5 \\ $E_7$&0&0&0&0&0&2.5&2.7&2.9&3.1&3.3 \\ $E_8$&0&0&0&0&0&2.3&2.5&2.7&2.9&3.1 \\ $E_9$&0&0&0&0&0&2.1&2.3&2.5&2.7&2.9 \\ $E_{10}$&0&0&0&0&0&1.9&2.1&2.3&2.5&2.7 \\ $E_{11}$&0&0&0&0&0&0&0&0&0&0 \\ $\vdots$&$\vdots$&$\vdots$&$\vdots$&$\vdots$&$\vdots$&$\vdots$&$\vdots$&$\vdots$&$\vdots$&$\vdots$ \\ $E_{100}$&0&0&0&0&0&0&0&0&0&0 \\ \hline \end{tabular} \caption{The configuration matrix of the $\mu_{ik}$'s under $\mathcal{S}_{HS_1}$ with $m$=10, $n$=100 and $n_1$=10.} \label{Tab:HS1mu} \end{table} For each of the four simulation scenarios (i.e., $\mathcal{S}_{PM_1}$, $\mathcal{S}_{PM_2}$, $\mathcal{S}_{HS_1}$ and $\mathcal{S}_{HS_2}$), we fixed the true number of relevant entities at $n_1=10$, but allowed the number of rankers $m$ and the total number of entities $n$ to vary, resulting in a total of 16 simulation settings ($\{scenarios: \mathcal{S}_{PM_1},\mathcal{S}_{PM_2},\mathcal{S}_{HS_1}, \mathcal{S}_{HS_2}\}\times\{m: 10, 20\}\times\{n: 100, 300\}\times\{n_1: 10\}$). Under each setting, we simulated 500 independent data sets to evaluate and compare the performances of different rank aggregation methods. \subsection{Methods in Comparison and Performance Measures} In addition to the proposed PAMA$_B$ and PAMA$_F$, we considered state-of-the-art methods in several classes, including the Markov chain-based methods MC$_1$, MC$_2$, MC$_3$ in \cite{Lin2010Space} and CEMC in \cite{2010Integration}, the partition-based method BARD in \cite{deng2014bayesian}, and the Mallows model-based methods MM and EMM in \cite{fan2019}.
Classic naive methods based on summary statistics were not included because previous studies have shown that they perform suboptimally, especially when the base rankers are heterogeneous in quality. The Markov-chain-based methods, MM, and EMM were implemented in the \textit{TopKLists}, \textit{PerMallows} and \textit{ExtMallows} R packages (https://www.r-project.org/), respectively. The code of BARD was provided by its authors. Let $\tau$ be the underlying true ranking list of all entities, $\tau_R=\{\tau(i):\ E_i\in U_R\}$ be the true relative ranking of relevant entities, $\hat\tau$ be the aggregated ranking obtained from a rank aggregation approach, $\hat\tau_R=\{\hat\tau(i):\ E_i\in U_R\}$ be the relative ranking of relevant entities after aggregation, and $\hat\tau_{n_1}$ be the top-$n_1$ list of $\hat\tau$. After obtaining the aggregated ranking $\hat\tau$ from a rank aggregation approach, we evaluated its performance by two measures, namely the \emph{recovery distance} $\kappa_{R}$ and the \textit{coverage} $\rho_R$, defined as below: \begin{eqnarray*} \kappa_{R}&\triangleq& d_{\tau}(\hat{\tau}_R,\tau_R) + n_{\hat{\tau}} \times \frac{n+n_1+1}{2},\\ \rho_R&\triangleq&\frac{n_1 -n_{\hat{\tau}} }{n_1}, \end{eqnarray*} where $d_{\tau}(\hat{\tau}_R,\tau_R)$ denotes the Kendall tau distance between $\hat\tau_R$ and $\tau_R$, and $n_{\hat{\tau}}$ denotes the number of relevant entities that are classified as background entities in $\hat{\tau}$. The recovery distance $\kappa_R$ considers the detailed rankings of all relevant entities plus mis-classification distances, while the coverage $\rho_R$ concerns only the identification of relevant entities, ignoring their detailed rankings. In the setting of PAMA, $\frac{n+n_1+1}{2}$ is the average rank of a background entity. The recovery distance increases if some relevant entities are mis-classified as background entities.
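The two measures can be sketched as follows (assuming rankings stored as dicts mapping entity to rank, and reading ``classified as background'' as receiving an aggregated rank outside the top $n_1$):

```python
def kendall_tau_distance(r1, r2):
    """Number of entity pairs ordered differently by two rankings
    (each given as a dict entity -> rank over the same entities)."""
    items = list(r1)
    d = 0
    for a in range(len(items)):
        for b in range(a + 1, len(items)):
            i, j = items[a], items[b]
            if (r1[i] - r1[j]) * (r2[i] - r2[j]) < 0:
                d += 1
    return d

def recovery_and_coverage(hat_tau, true_tau, U_R, n):
    """kappa_R: Kendall distance on the relevant entities plus a penalty of
    (n + n1 + 1)/2 per misclassified relevant entity;
    rho_R: fraction of relevant entities kept in the top-n1 list."""
    n1 = len(U_R)
    n_miss = sum(1 for i in U_R if hat_tau[i] > n1)
    d = kendall_tau_distance({i: hat_tau[i] for i in U_R},
                             {i: true_tau[i] for i in U_R})
    kappa = d + n_miss * (n + n1 + 1) / 2
    rho = (n1 - n_miss) / n1
    return kappa, rho
```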
Clearly, we expect a smaller $\kappa_R$ and a larger $\rho_R$ from a stronger aggregation approach. \subsection{Simulation Results} Table~\ref{Tab:recovery} summarizes the performances of the nine competing methods in the 16 different simulation settings, demonstrating that the proposed PAMA$_B$ and PAMA$_F$ outperform all the other methods by a significant margin in most settings and that PAMA$_B$ uniformly dominates PAMA$_F$. Figure~\ref{Fig:gamma} shows the quality parameter $\boldsymbol{\gamma}$ learned from the Partition-Mallows model in various simulation scenarios with $m=10$ and $n=100$, confirming that the proposed methods can effectively capture the quality differences among the rankers. The results of $\boldsymbol{\gamma}$ for other combinations of $(m,n)$ can be found in the Supplementary Material and are consistent with Figure \ref{Fig:gamma}. \begin{table}[htp] \scriptsize \centering \begin{tabular}{c|cc|ccc|cc|cccc} \hline \multicolumn{3}{c|}{Configuration}&\multicolumn{3}{c|}{Partition-type Models}&\multicolumn{2}{c|}{{Mallows Models}} &\multicolumn{4}{c}{MC-based Models} \\ \cline{1-3} \cline{4-6} \cline{7-8} \cline{9-12} $\mathcal{S}$& $n$ &$m$ &PAMA$_F$&PAMA$_B$ &BARD & EMM & MM& MC$_1$ & MC$_2$ & MC$_3$ & CEMC\\ \cline{1-3} \cline{4-6} \cline{7-8} \cline{9-12} \multirow{8}{*}{$\mathcal{S}_{PM_1}$}&\multirow{2}{*}{100}&\multirow{2}{*}{10}& 24.5 & {\bf 15.2} & 57.1 & 51.7 & 103.2 & 338.4 & 163.1 & 198.6 & 197.8 \\ &&&[0.95] & {\bf[0.97]} & [0.91] & [0.89] & [0.81] & [0.36] & [0.69] & [0.63] & [0.62] \\ \cline{4-12} &\multirow{2}{*}{100}&\multirow{2}{*}{20}&2.6 & {\bf 0.3} & 42.1 & 22.8 & 44.2 & 466.6 & 88.9 & 121.2 & 114.7 \\ &&&[0.99] & {\bf[1.00]} & [0.95] & [0.95] & [0.93] & [0.11] & [0.82] & [0.78] & [0.77] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{10}& 17.4 & {\bf 4.0} & 180.0 & 683.3 & 519.2 & 1268.3 & 997.7 & 1075.8 & 1085.7 \\ &&&[0.99] & {\bf[1.00]} & [0.89] & [0.66] & [0.55] & [0.17] & [0.34] & [0.29] &[0.28]
\\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{20} & 7.1& {\bf 3.2} & 122.3 & 124.4 & 157.1 & 1445.9 & 613.5 & 723.0 & 727.2\\ &&&{\bf [1.00]} & {\bf[1.00]} & [0.93]& [0.92] & [0.90] & [0.05] & [0.60] & [0.53] & [0.52] \\ \cline{1-12} \multirow{8}{*}{$\mathcal{S}_{PM_2}$}&\multirow{2}{*}{100}&\multirow{2}{*}{10}&90.0 & {\bf 66.6} & 115.2 & 108.3 & 152.9 & 404.3 & 285.5 & 307.2 & 313.8\\ &&&[0.82] & {\bf [0.86]} & [0.77] & [0.77]& [0.70] & [0.24] & [0.47] & [0.43] & [0.41]\\\cline{4-12} &\multirow{2}{*}{100}&\multirow{2}{*}{20}& 26.9 & {\bf 2.4} & 81.5 & 59.8 & 91.5 & 468.1 & 217.3 & 245.2 & 249.5\\ &&& [0.94] & {\bf[1.00]} & [0.85] & [0.87] &[0.82] & [0.11] & [0.60] & [0.55] &[0.53] \\\cline{4-12} & \multirow{2}{*}{300}&\multirow{2}{*}{10}& 81.1 & {\bf 26.8} & 468.4 & 609.8 & 472.1 & 1388.4 & 1294.7 & 1321.5 & 1328.4\\ &&&[0.95] & {\bf[0.98]} & [0.69] & [0.68] & [0.60] & [0.09] & [0.15] & [0.13] & [0.13] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{20}&77.2 &{\bf 3.4} & 313.6 & 267.5 & 337.0 & 1469.0 & 1205.9 & 1251.8 & 1258.9\\ &&&[0.95] & {\bf [1.00]} & [0.79] & [0.82] & [0.78] & [0.04] & [0.21] & [0.18] & [0.18]\\\hline \multirow{8}{*}{$\mathcal{S}_{HS_1}$}&\multirow{2}{*}{100}&\multirow{2}{*}{10}&24.9 & {\bf 20.6} & 22.9 & 54.9 & 115.9 & 334.7 & 150.9 & 180.3 & 186.0 \\ &&&[0.97] & [0.98] & {\bf[0.99]} & [0.91] & [0.80] & [0.37] & [0.71] & [0.66] & [0.64] \\ \cline{4-12} &\multirow{2}{*}{100}&\multirow{2}{*}{20}& 18.7 & 15.6 & 22.8 & {\bf 8.7} & 33.4 & 498.8 & 46.7 & 64.1 & 60.8 \\ &&&[0.98] & [0.98] & {\bf[1.00]} & {\bf[1.00]} &[0.97] & [0.05] & [0.92] & [0.89] & [0.89] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{10}&172.0 & 159.8 & {\bf 37.9} & 205.5 & 490.6 & 1098.6 & 627.0 & 752.9 & 769.4 \\ &&&[0.89] & [0.90] & {\bf[0.99]} & [0.87] & [0.68] & [0.28] & [0.59] & [0.50] & [0.49] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{20}&7.4 & {\bf 7.0} & 22.7 & 11.4 & 114.1 & 1402.6 & 237.8 & 319.7 & 322.3\\ &&&{\bf [1.00]} & {\bf[1.00]} 
& {\bf[1.00]} & {\bf[1.00]} & [0.94] &[0.08] & [0.84] & [0.79] & [0.79] \\ \hline \multirow{8}{*}{$\mathcal{S}_{HS_2}$}&\multirow{2}{*}{100}&\multirow{2}{*}{10}&92.6 & 74.0 & {\bf 68.7} & 123.7 & 162.3 & 382.4 & 228.2 & 250.2 & 256.6\\ &&&[0.83] & [0.86] & {\bf[0.88]} & [0.77] & [0.70] & [0.27] & [0.56] & [0.52] & [0.50] \\ \cline{4-12} &\multirow{2}{*}{100}&\multirow{2}{*}{20}&24.4 & { 20.0} & 22.2 & {\bf 12.4} & 38.3 & 500.3 & 87.5 & 103.5 & 102.9 \\ &&&[0.96] & [0.97] & {\bf[1.00]} & [0.99] &[0.95] & [0.04] & [0.83] & [0.80] & [0.80] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{10}& 319.1 & 463.8 & {\bf 245.6} & 516.9 & 683.5 & 1267.9 & 998.0 & 1076.0 & 1085.5 \\ &&&[0.79] & [0.69] & {\bf [0.84]} & [0.66] & [0.55] & [0.17] & [0.34] & [0.29] & [0.28] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{20}& 8.7 & {\bf 8.0} & 23.2 & 30.3& 155.5 & 1430.7 & 437.6 & 516.2 & 523.3\\ &&&{\bf[1.00]} & {\bf[1.00]} & {\bf[1.00]} & [0.99] & [0.91] & [0.06] & [0.71] & [0.66]& [0.65] \\ \hline \end{tabular} \caption{Average recovery distances [coverages] of different methods based on 500 independent replicates under different simulation scenarios.} \label{Tab:recovery} \end{table} \begin{figure}[htp] \centering \includegraphics[width=0.98\linewidth]{gamman100m10.pdf} \caption{(a) The boxplots of $\{\bar\gamma_k\}$ estimated by PAMA$_B$ with $m=10$ and $n=100$. (b) The boxplots of $\{\hat\gamma_k\}$ estimated by PAMA$_F$ with $m=10$ and $n=100$. Each column denotes a scenario setting. The results were obtained from 500 independent replicates.} \label{Fig:gamma} \end{figure} Figure~\ref{Fig:n100m10} (a) shows the boxplots of recovery distances and the coverages of the nine competing methods in the four simulation scenarios with $m=10$, $n=100$, and $n_1=10$. The five methods from the left outperform the other four methods by a significant gap, and the PAMA-based methods generally perform the best. 
Figure~\ref{Fig:n100m10} (b) confirms that the methods based on the Partition-Mallows model enjoy the same capability as BARD in detecting quality differences between informative and non-informative rankers. However, while both BARD and PAMA can further discern quality differences among informative rankers, EMM fails this more subtle task. Similar figures for other combinations of $(m,n)$ are provided in the Supplementary Material and show results consistent with Figure~\ref{Fig:n100m10}. \begin{figure}[htp] \centering \includegraphics[width=\linewidth]{n100m10EMM.pdf} \caption{Boxplots of the rank aggregation results of 500 replications obtained from different methods under various scenarios with $m=10$, $n=100$, and $n_1=10$. (a) Recovery distances in log scale and coverage obtained from nine algorithms. (b) Quality parameters obtained by Partition-type models and EMM.} \label{Fig:n100m10} \end{figure} \subsection{Robustness to the Specification of $n_1$} We need to specify $n_1$, the number of relevant entities, when applying PAMA$_B$ or PAMA$_F$. In many practical problems, however, there may not be strong prior information on $n_1$, and there may not even be a clear distinction between relevant and background entities. To examine the robustness of the algorithm with respect to the specification of $n_1$, we designed a simulation setting $\mathcal{S}_{HS_3}$ to mimic this no-clear-cut scenario and investigated how the performance of PAMA is affected by the specification of $n_1$ under this setting. Formally, $\mathcal{S}_{HS_3}$ assumes that $\tau_k=\text{sort}(i\in U\ \text{by}\ S_{ik}\downarrow)$, where $S_{ik}\sim N(\mu_{ik},1)$, following the same data-generating framework as $\mathcal{S}_{HS}$ defined in Section \ref{sec:SimuSetting}, with $\mu_{ik}$ replaced by $$\mu_{ik}=\left\{ \begin{array}{ll} 0, & \mbox{if } k\leq\frac{m}{2}, \\ \frac{2\times a^* \times k/m}{1+e^{-b^*\times (70-i)}}, & \mbox{otherwise}, \end{array} \right.
$$ where $a^*=50$ and $b^*=0.1$. Different from $\mathcal{S}_{HS_1}$ and $\mathcal{S}_{HS_2}$, where $\mu_{ik}$ jumps from 0 to a positive number as $i$ moves from the background entities to the relevant ones, in the $\mathcal{S}_{HS_3}$ scenario $\mu_{ik}$ varies smoothly as a function of $i$ for each informative ranker $k$. In such cases, the concept of ``relevant'' entities is not well-defined. We simulated 500 independent data sets from $\mathcal{S}_{HS_3}$ with $n=100$ and $m=10$. For each data set, we tried different specifications of $n_1$ ranging from 10 to 50 and compared PAMA to several competing methods based on their performance in recovering the top-$A$ list $[E_1\preceq E_2\preceq\cdots \preceq E_{A}]$, which is still well-defined by the simulation design. The results summarized in Table~\ref{Tab:misspecifiedd} show that no matter which $n_1$ is specified, the partition-type models consistently outperform all the competing methods in terms of a lower recovery distance from the true top-$n_1$ list of items, i.e., $[E_1\preceq E_2\preceq\cdots \preceq E_{n_1}]$. Figure \ref{fig:consistentd} illustrates in detail the average aggregated rankings of the top-10 entities by PAMA as $n_1$ increases, suggesting that PAMA is able to figure out the correct rankings of the top entities effectively. These results give us confidence that PAMA is robust to misspecification of $n_1$.
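For concreteness, the $\mathcal{S}_{HS_3}$ generator above can be sketched as follows (a hedged illustration with function names of our own; it follows the display above, where rankers with $k\leq m/2$ are non-informative and entities are sorted by decreasing noisy score $S_{ik}\sim N(\mu_{ik},1)$):

```python
import math
import random

def mu_hs3(i, k, m=10, a=50.0, b=0.1):
    """Mean score of entity i for ranker k under S_HS3: non-informative
    rankers (k <= m/2) see pure noise; informative rankers see a logistic
    signal in i that changes smoothly, with no sharp relevant/background cut."""
    if k <= m / 2:
        return 0.0
    return (2 * a * k / m) / (1 + math.exp(-b * (70 - i)))

def simulate_ranking(k, n=100, m=10, rng=random):
    """tau_k: entities 1..n sorted by decreasing score S_ik ~ N(mu_ik, 1)."""
    scores = {i: rng.gauss(mu_hs3(i, k, m), 1.0) for i in range(1, n + 1)}
    return sorted(scores, key=scores.get, reverse=True)
```

Note that for an informative ranker the mean score decays gradually in $i$ (it equals $a^* \times 2k/m \,/\, 2$ at $i=70$), which is exactly what makes the boundary between relevant and background entities fuzzy in this scenario.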
\begin{table} \scriptsize \centering \begin{tabular}{ccc|ccc|cc|cccc} \hline \multicolumn{3}{c|}{Configuration}&\multicolumn{3}{c|}{Partition-type Models}&\multicolumn{2}{c|}{{Mallows Models}} &\multicolumn{4}{c}{MC-based Models} \\ \cline{1-3} \cline{4-6} \cline{7-8} \cline{9-12} $n$ &$m$ &$n_1$& PAMA$_F$&PAMA$_B$ &BARD & EMM & MM& MC$_1$ & MC$_2$ & MC$_3$ & CEMC\\ \cline{1-3} \cline{4-6} \cline{7-8} \cline{9-12} \multirow{2}{*}{100}&\multirow{2}{*}{10} &\multirow{2}{*}{10}&44.8 & {\bf 34.6} & 42.6 & 61.5 & 227.7 & 423.8 & 45.6 & 199.1 & 241.3 \\ &&&[0.90] & [0.93] & {\bf [0.96]} & [0.88] & [0.58] & [0.20] & [0.92] & [0.63] & [0.54] \\ \cline{4-12} \multirow{2}{*}{100}&\multirow{2}{*}{10}&\multirow{2}{*}{20}&39.2 & {\bf 33.9} & 94.2& 107.0 & 308.6 & 764.6 & 52.3 & 268.9 & 372.9 \\ &&& [0.95] & [0.96] & {\bf [0.99]} & [0.90] & [0.75] & [0.33] & [0.96] & [0.78] & [0.67] \\\cline{4-12} \multirow{2}{*}{100}&\multirow{2}{*}{10}&\multirow{2}{*}{30}& {\bf 27.5} & 29.2 & 207.4 & 126.2 & 360.6 & 1040.2 & 67.8 & 325.4 & 445.0 \\ &&&{\bf [0.98]} & {\bf [0.98]} & {\bf [0.98]} & [0.93] & [0.83] & [0.44] & [0.96] & [0.84] & [0.77] \\\cline{4-12} \multirow{2}{*}{100}&\multirow{2}{*}{10}& \multirow{2}{*}{40}& {\bf 16.0} & 17.4 & 363.9 & 131.6 & 408.1 & 1274.1 & 83.1 & 382.5 & 486.9 \\ &&&{\bf [0.99]} & {\bf [0.99]} & [0.98] & [0.95] & [0.87] & [0.54] & [0.97] & [0.87] & [0.83] \\ \cline{4-12} \multirow{2}{*}{100}&\multirow{2}{*}{10}& \multirow{2}{*}{50}& {\bf 8.8} & 9.3 & 565.3 & 134.6 & 452.2 & 1484.2 & 109.2 & 446.4 & 524.9 \\ &&&{\bf [1.00]} & {\bf [1.00]} & [0.99] & [0.97] & [0.90] & [0.62] & [0.96] & [0.89] & [0.88] \\ \cline{1-12} \end{tabular} \caption{Average recovery distances [coverages] of different methods based on 500 independent replicates under scenario $\mathcal{S}_{HS_{3}}$. 
} \label{Tab:misspecifiedd} \end{table} \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth, height=0.4\textheight]{consistentd1.pdf} \caption{Average aggregated rankings of the top-10 entities by PAMA as $n_1$ increases from 10 to 50 for simulated data sets generated from $\mathcal{S}_{HS_3}$.} \label{fig:consistentd} \end{figure} Notably, although PAMA and BARD achieve comparable coverage as shown in Table \ref{Tab:misspecifiedd}, PAMA dominates BARD uniformly in terms of a much smaller recovery distance in all cases, suggesting that PAMA is capable of figuring out the detailed ranking of relevant entities that is missed by BARD. In fact, since BARD relies only on $\rho_i \triangleq P(I_i = 1 \mid \tau_1, \cdots , \tau_m)$ to rank entities, in cases where the signal distinguishing the relevant and background entities is strong, many $\rho_i$'s are very close to 1, resulting in a nearly ``random'' ranking among the top relevant entities. Theoretically, if all relevant entities are recognized correctly but ranked randomly, the corresponding recovery distance would increase with $n_1$ on the order of $\mathcal{O}({n_1}^2)$, which matches well with the increasing trend of the recovery distance of BARD shown in Table \ref{Tab:misspecifiedd}. We also tested the model's performance when there is a true $n_1$ but it is mis-specified in our algorithm. We set $n_1$ to $8$, $10$ and $18$, respectively, for setting $\mathcal{S}_{HS_1}$ with $n=100$ and $m=10$, where the true $n_1=10$ (the first ten entities). Figure \ref{fig:misspecifiedd} shows boxplots of $\mathcal{I}$ for each mis-specified case. For visualization purposes, we only show the boxplots of $E_1$ to $E_{20}$; the other entities exhibit a pattern similar to that of $E_{20}$. The figure shows the robust behavior of PAMA$_B$ as we vary the specification of $n_1$. It also shows that the results are slightly better if we specify an $n_1$ that is moderately larger than its true value.
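The $\mathcal{O}(n_1^2)$ argument above (a correctly identified but randomly ordered relevant set incurs a Kendall distance that grows quadratically in $n_1$) can be checked with a quick Monte Carlo sketch of our own; the expected distance of a uniformly random order from the truth is $n_1(n_1-1)/4$:

```python
import random

def kendall_distance(perm, n):
    """Number of discordant pairs between `perm` and the identity order 1..n."""
    pos = {e: i for i, e in enumerate(perm)}
    return sum(pos[a] > pos[b]
               for a in range(1, n + 1) for b in range(a + 1, n + 1))

random.seed(0)
avg_dist = {}
for n1 in (10, 20, 40):
    total = 0
    for _ in range(2000):
        p = list(range(1, n1 + 1))
        random.shuffle(p)            # relevant entities found, order random
        total += kendall_distance(p, n1)
    avg_dist[n1] = total / 2000      # close to n1*(n1-1)/4, i.e. O(n1^2)
```

Doubling $n_1$ roughly quadruples the average distance, matching the trend of BARD's recovery distance in Table \ref{Tab:misspecifiedd}.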
The consistent results of other mis-specified cases, such as $5$, $12$ and $15$, can be found in the Supplementary Material. \begin{figure} \centering \includegraphics[width=0.7 \linewidth, height=0.6 \linewidth]{misspecd.pdf} \caption{Boxplots of the estimated $\mathcal{I}$ from 500 replications under the setting of $\mathcal{S}_{HS_1}$ with $n_1$ set to 8, 10 and 18, respectively. The true $n_1$ is $10$. The vertical lines separate relevant entities (left) from background ones. The Y-axis shows the logarithm of the entities' ranks. The rank of a background entity is replaced by the average background rank $\frac{100+10+1}{2}$. The triangle denotes the mean rank of each entity.} \label{fig:misspecifiedd} \end{figure} \section{Real Data Applications} \label{sec:realdata} \subsection{Aggregating Rankings of NBA Teams} We applied PAMA$_B$ to the NBA-team data analyzed in \cite{deng2014bayesian}, and compared it to competing methods in the literature. The NBA-team data set contains 34 ``predictive'' power rankings of the 30 NBA teams in the 2011-2012 season. The 34 ``predictive'' rankings were obtained from 6 professional rankers (sports websites) and 28 amateur rankers (college students), and the data quality varies significantly across rankers. More details of the dataset can be found in Table 1 of \cite{deng2014bayesian}. \begin{figure}[htp] \centering \includegraphics[width=\linewidth]{NBAsymmetric.pdf} \caption{Results from PAMA$_B$ for the NBA-team dataset. (a) Boxplots of posterior samples of $\gamma_k$. (b) Barplots of $\bar{\mathcal{I}}_i$, where the vertical line divides the NBA teams into the Western and Eastern Conferences. (c) Trace plot of the log-likelihood function.} \label{Fig:NBA} \end{figure} Figure \ref{Fig:NBA} displays the results obtained by PAMA$_B$ (the partial-list version with $n_1$ specified as 16). Figure \ref{Fig:NBA} (a) shows the posterior distributions, as boxplots, of the quality parameter of each involved ranker.
Figure \ref{Fig:NBA} (b) shows the posterior distribution of the aggregated power ranking of each NBA team. A posterior sample that reports ``0'' for the rank of an entity indicates that the entity is a background one; for visualization purposes, we replace ``0'' by the average background rank, $\frac{n+n_1+1}{2}=\frac{30+16+1}{2}=23.5$. The 16 teams that eventually made the playoffs are shown in blue, and the champion of that season (i.e., the Heat) is shown in red. Figure \ref{Fig:NBA} (c) shows the trace plot of the log-likelihood of the PAMA model along the MCMC iterations. Comparing the results to Figure 8 of \cite{deng2014bayesian}, we observe the following facts: (1) PAMA$_B$ can successfully discover the quality differences among rankers, as BARD does; (2) PAMA$_B$ can not only pick up the relevant entities effectively like BARD, but also rank the discovered relevant entities reasonably well, which cannot be achieved by BARD; (3) PAMA$_B$ converges quickly in this application. We also applied other methods, including MM, EMM and the Markov-chain-based methods, to this data set. We found that none of these methods could discern the quality differences among rankers as successfully as PAMA and BARD. Moreover, using the team ranking at the end of the regular season as the surrogate true power ranking of these NBA teams in the season, we found that PAMA also outperformed BARD and EMM by reporting an aggregated ranking list that is the closest to the truth. Table \ref{Tab:NBArankresults} provides the detailed aggregated ranking lists inferred by BARD, EMM and PAMA, respectively, as well as their coverage of and Kendall $\tau$ distance from the surrogate truth.
Note that the Kendall $\tau$ distance is calculated for the Eastern teams and the Western teams separately, because the NBA Playoffs proceed in the Eastern Conference and the Western Conference in parallel until the NBA Finals, in which the two conference champions compete for the championship title; this makes it difficult to validate the rankings between Eastern and Western teams. \begin{table}[ht] \footnotesize \centering \begin{tabular}{|c|c:c|c:c|c:c|c:c|} \hline Ranking& \multicolumn{2}{c|}{Surrogate truth}&\multicolumn{2}{c|}{BARD}&\multicolumn{2}{c|}{EMM}&\multicolumn{2}{c|}{PAMA}\\ \hline & Eastern & Western & Eastern & Western & Eastern & Western & Eastern & Western \\ \hline 1&Bulls& Spurs&\emph{Heat}&\emph{Thunder}&Heat&Thunder&Heat&Thunder\\ 2&Heat&Thunder&\emph{Bulls}&\emph{Mavericks}&Bulls&Mavericks&Bulls&Mavericks\\ 3&Pacers&Lakers&\emph{Celtics}&\emph{Clippers}&Knicks&Clippers&Celtics&Lakers\\ 4&Celtics&Grizzlies&\emph{Knicks}&\emph{Lakers}&Celtics&Lakers&Knicks&Clippers\\ 5&Hawks&Clippers&\emph{Magic}&\emph{Spurs}&Magic&Spurs&Magic&Spurs\\ 6&Magic&Nuggets&\emph{Pacers}&\emph{Grizzlies}&Pacers&Grizzlies&Hawks&Nuggets\\ 7&Knicks&Mavericks&76ers&\emph{Nuggets}&76ers&Nuggets&Pacers&Grizzlies\\ 8&76ers&Jazz&Hawks&Jazz$^*$&Hawks$^*$&Jazz$^*$&76ers&Jazz$^*$\\ \hline Kendall $\tau$ &-&-&14.5&10.5&9&10&8&10\\ \hline Coverage & \multicolumn{2}{c|}{-}& \multicolumn{2}{c|}{$\frac{15}{16}$}&\multicolumn{2}{c|}{$\frac{14}{16}$}& \multicolumn{2}{c|}{$\frac{15}{16}$} \\ \hline \end{tabular} \caption{Aggregated power rankings of the NBA teams inferred by BARD, EMM, and PAMA, respectively, and the corresponding coverage of and Kendall $\tau$ distance from the surrogate true ranking based on the performances of these teams in the regular season.
The teams in italics have equal posterior probabilities of being in the relevant group, and the teams marked with an asterisk are those that were misclassified to the background group.} \label{Tab:NBArankresults} \end{table} \subsection{Aggregating Rankings of NFL Quarterback Players with the Presence of Covariates} Our next application targets the NFL-player data reported by \cite{li2020bayesian}. The NFL-player data contains 13 predictive power rankings of 24 NFL quarterback players. The rankings were produced by 13 experts based on the performance of the 24 NFL players during the first 12 weeks of the 2014 season. The dataset also contains covariates summarizing the performance of each player during this period, including the \emph{number of games played} (G), \emph{pass completion percentage} (Pct), the \emph{number of passing attempts per game} (Att), \emph{average yards per carry} (Avg), \emph{total receiving yards} (Yds), \emph{average passing yards per attempt} (RAvg), the \emph{touchdown percentage} (TD), the \emph{intercept percentage} (Int), \emph{running attempts per game} (RAtt), \emph{running yards per attempt} (RYds) and the \emph{running first down percentage} (R1st). Details of the dataset can be found in Table 2 of \cite{li2020bayesian}. \begin{figure}[htp] \centering \includegraphics[width=\linewidth]{NFLcov.pdf} \caption{Key results from PAMA$_B$ for the NFL-player dataset. (a) Boxplots of posterior samples of $\gamma_k$. (b) Barplots of $\bar{\mathcal{I}}_i$. (c) Trace plot of the log-likelihood. (d) Barplots of posterior probabilities for each coefficient to be positive.} \label{Fig:NFLcov} \end{figure} Here, we set $n_1=12$ in order to find which players are above average.
We analyzed the NFL-player data with PAMA$_B$ (the covariate-assisted version), and the results are summarized in Figure \ref{Fig:NFLcov}: (a) the posterior boxplots of the quality parameters of all the rankers; (b) the barplot of $\bar{\mathcal{I}}_i$ for all the NFL players in descending order; (c) the trace plot of the log-likelihood of the model; and (d) the barplot of the probabilities $P({\psi}_j >0)$, with the covariates arranged from left to right in decreasing order of this probability. From Figure \ref{Fig:NFLcov} (a), we observe that rankers 1, 3, 4 and 5 are generally less reliable than the other rankers. In the study of the same dataset in \cite{li2020bayesian}, the authors assumed that the 13 rankers fall into three quality levels, and reported that seven rankers (i.e., 2, 6, 7, 8, 9, 10 and 13) are of a higher quality than the other six (see Figure 7 of \cite{li2020bayesian}). Interestingly, according to Figure \ref{Fig:NFLcov} (a), the PAMA algorithm suggested exactly the same set of high-quality rankers. Moreover, ranker 2 has the lowest quality among the seven high-quality rankers in both studies. From Figure \ref{Fig:NFLcov} (b), a consensus ranking list can be obtained. Our result is consistent with that of Figure 6 in \cite{li2020bayesian}. Figure \ref{Fig:NFLcov} (d) shows that six covariates are likely to have positive effects. \begin{table}[htb] \def0.8{0.8} \centering \begin{tabular}{|c|c|c|c|c|} \hline Ranking&Gold standard & BARD & EMM& PAMA\\ \hline 1 & Aaron R. & \emph{Andrew L.} & Andrew L.& Andrew L.\\ 2 & Andrew L. & \emph{Aaron R.}& Aaron R.& Aaron R.\\ 3 & Ben R. & \emph{Tom B.} & Tom B.& Tom B.\\ 4 &Drew B. & \emph{Drew B.} &Ben R. & Drew B.\\ 5 &Russell W. & \emph{Ben R.} & Drew B.& Ben R.\\ 6 &Matt R. & \emph{Ryan T.} & Ryan T. & Ryan T.\\ 7 &Ryan T. &Russell W. & Russell W. &Russell W.\\ 8 &Tom B.& Philip R.* & Philip R.*& Philip R.\\ 9 &Eli M. &Eli M.* & Eli M.* &Eli M.*\\ 10 &Philip R.
&Matt R.* &Matt R.* &Matt R.*\\ \hline R-distance& - &35.5&32&25\\ \hline Coverage& - &0.7 &0.7& 0.8\\ \hline \end{tabular} \caption{Top players in the aggregated rankings inferred by BARD, EMM and PAMA. The players in italics have equal posterior probabilities of being in the relevant group, and the players marked with an asterisk are those that were mis-classified to the background group.} \label{tab:rankingsofNFL} \end{table} Using the Fantasy points of the players (\url{https://fantasy.nfl.com/research/players}) obtained at the end of the 2014 NFL season as the surrogate truth, we calculated the recovery distance and coverage of the aggregated rankings produced by the different approaches in order to evaluate their performances. Since the Fantasy points of two top NFL players, Peyton Manning and Tony Romo, are missing for unknown reasons, we excluded them from the analysis and only report results for the top 10 positions instead of the top 12. Table~\ref{tab:rankingsofNFL} summarizes the results, demonstrating that PAMA outperformed the other two methods. \section{Conclusion and Discussion} \label{sec:discussion} The proposed Partition-Mallows model embeds the classic Mallows model into the partition modeling framework developed earlier by \cite{deng2014bayesian}, which is analogous to the well-known ``spike-and-slab'' mixture distribution often employed in Bayesian variable selection. Such a nontrivial ``mixture'' combines the strengths of both the Mallows model and BARD's partition framework, leading to a stronger rank aggregation method that can not only learn the quality variation of rankers and distinguish relevant entities from background ones effectively, but also provide an accurate ranking estimate of the discovered relevant entities.
Compared to other frameworks in the literature for rank aggregation with heterogeneous rankers, the Partition-Mallows model achieves more accurate results with better interpretability, at the price of a moderate increase in computational burden. We also show that the Partition-Mallows framework can easily handle partial lists and incorporate covariates into the analysis. Throughout this work, we assume that the number of relevant entities $n_1$ is known. This is reasonable in many practical problems where a specific $n_1$ can be readily determined according to research demands. Empirically, we found that the ranking results are insensitive to the choice of $n_1$ over a wide range of values. If needed, we may also determine $n_1$ according to a model selection criterion, such as AIC or BIC. In the PAMA model, $\pi(\tau_k^0 \mid \tau_k^{0\mid 1})$ is assumed to be a uniform distribution. If the detailed ranking of the background entities is of interest, we can modify the conditional distribution $\pi(\tau_k^0 \mid \tau_k^{0\mid 1})$ to be the Mallows model or another suitable model. A quality parameter can still be incorporated to control the interaction between relevant entities and background entities. The resulting likelihood function becomes more complicated, but is still tractable. In practice, the assumption of independent rankers may be violated. In the literature, a few approaches have been proposed to detect and handle dependent rankers. For example, \cite{deng2014bayesian} proposed a hypothesis-testing-based framework to detect pairs of over-correlated rankers and a hierarchical model to accommodate clusters of dependent rankers; \cite{JohnsonS2019} adopted an extended Dirichlet process and a similar hierarchical model to achieve simultaneous ranker clustering and rank aggregation inference. Similar ideas can be incorporated into the PAMA model as well to deal with non-independent rankers.
\section*{Acknowledgement} We thank Miss Yuchen Wu for helpful discussions at the early stage of this work, and the two reviewers for their insightful comments and suggestions that helped us improve the paper greatly. This research is supported in part by the National Natural Science Foundation of China (Grants 11771242 \& 11931001), the Beijing Academy of Artificial Intelligence (Grant BAAI2019ZD0103), and the National Science Foundation of USA (Grants DMS-1903139 and DMS-1712714). The author Wanchuang Zhu is partially supported by the Australian Research Council (Data Analytics for Resources and Environments, Grant IC190100031). {\section*{Supplementary Materials}}
\section{Introduction} Reaching consensus is an important construct in the distributed coordination of multi-agent systems \cite{mesbahi2010graph,olfati2007consensus,zhang2018fully,degroot1974reaching}. Although the consensus problem has been extensively investigated in the literature, it has often been assumed that the network has scalar-weighted edges; extensions of scalar weights to matrix-valued weights have become relevant in order to characterize interdependencies among the multi-dimensional states of neighboring agents. Recently, a broader category of networks, referred to as matrix-weighted networks, has been introduced to address such interdependencies~\cite{trinh2018matrix,sun2018dimensional}. In fact, matrix-weighted networks arise in scenarios such as graph effective resistance examined in the context of distributed control and estimation \cite{barooah2006graph,tuna2017observability}, logical inter-dependencies amongst topics in opinion evolution \cite{friedkin2016network,ye2018continuous}, bearing-based formation control \cite{zhao2015translational}, the dynamics of an array of coupled LC oscillators \cite{tuna2019synchronization}, as well as consensus and synchronization on matrix-weighted networks \cite{trinh2018matrix,tuna2016synchronization,pan2018bipartite}. For matrix-weighted networks, network connectivity alone does not translate into achieving consensus. To this end, the properties of the weight matrices play an important role in characterizing consensus. For instance, positive definiteness and positive semi-definiteness of weight matrices have been employed to provide consensus conditions in \cite{trinh2018matrix}; negative definiteness and negative semi-definiteness of weight matrices are further introduced in \cite{pan2018bipartite,su2019bipartite}. In the meantime, the notion of network connectivity can be further extended for matrix-valued networks.
For instance, one can identify edges with positive/negative definite matrix weights as ``strong'' connections, whereas an edge weighted by a positive/negative semi-definite matrix can be considered a ``weak'' connection \cite{trinh2017theory}. To the best of our knowledge, conditions under which consensus can be achieved for time-varying matrix-weighted networks have not been developed in the literature; this is in contrast with the conditions that have been examined for scalar-weighted networks \cite{cao2008reaching,olfati2004consensus,ren2005consensus,Jadbabaie2003,cao2011necessary,moreau2005stability,meng2018uniform,meng2014modulus,proskurnikov2014consensus}. In this paper, we provide necessary and/or sufficient conditions for achieving consensus on matrix-weighted time-varying networks. Under mild assumptions on the switching pattern of such networks, necessary and/or sufficient conditions under which average consensus is achieved are provided in terms of the null space of the matrix-valued Laplacian of the associated integral networks. In particular, for periodic matrix-weighted time-varying networks with period $T>0$, a necessary and sufficient condition for average consensus is obtained; we further show, from a graph-theoretic perspective, that if the integral network over the time span $[0,T)$ has a positive spanning tree, then average consensus is achieved. Simulation results are provided to demonstrate the theoretical analysis. The remainder of this paper is organized as follows. Preliminaries are introduced in \S\ref{sec:Preliminaries}. The problem formulation is provided in \S\ref{sec:Problem-Formulation}, followed by the consensus conditions in \S\ref{sec:Consensus-on-General} and \S\ref{sec:Consensus-on-Periodic}, respectively. A simulation example is presented in \S\ref{sec:Simulation-Results}, followed by concluding remarks in \S\ref{sec:Conclusion}.
\section{Preliminaries \label{sec:Preliminaries}} Let $\mathbb{R}$, $\mathbb{N}$ and $\mathbb{Z}_{+}$ be the set of real numbers, natural numbers and positive integers, respectively. For $n\in\mathbb{Z}_{+}$, denote $\underline{n}=\left\{ 1,2,\ldots,n\right\}$. A symmetric matrix $M\in\mathbb{R}^{n\times n}$ is positive definite, denoted by $M\succ0$, if $\boldsymbol{z}^{\top}M\boldsymbol{z}>0$ for all $\boldsymbol{z}\in\mathbb{R}^{n}$ with $\boldsymbol{z}\neq\boldsymbol{0}$, and is positive semi-definite, denoted by $M\succeq0$, if $\boldsymbol{z}^{\top}M\boldsymbol{z}\ge0$ for all $\boldsymbol{z}\in\mathbb{R}^{n}$. The null space of a matrix $M\in\mathbb{R}^{n\times n}$ is denoted by $\text{{\bf null}}(M)=\left\{ \boldsymbol{z}\in\mathbb{R}^{n}\,|\,M\boldsymbol{z}=\boldsymbol{0}\right\}$. \begin{lem} \label{lem:Rayleigh Theorem}\cite{horn2012matrix} Let $M\in\mathbb{R}^{n\times n}$ be symmetric with eigenvalues $\lambda_{1}\leq\cdots\leq\lambda_{n}$. Let $\boldsymbol{x}_{i_{1}},\cdots,\boldsymbol{x}_{i_{k}}$ be mutually orthonormal vectors such that $M\boldsymbol{x}_{i_{p}}=\lambda_{i_{p}}\boldsymbol{x}_{i_{p}}$, where $i_{p}\in\mathbb{Z}_{+}$, $p\in\underline{k}$ and $1\leq i_{1}<\cdots<i_{k}\leq n$. Then \[ \lambda_{i_{1}}=\underset{\{\boldsymbol{x}\neq{\bf 0},\boldsymbol{x}\in S_k\}}{\text{{\bf min}}}\frac{\boldsymbol{x}^{\top}M\boldsymbol{x}}{\boldsymbol{x}^{\top}\boldsymbol{x}}, \] and \[ \lambda_{i_{k}}=\underset{\{\boldsymbol{x}\neq{\bf 0},\boldsymbol{x}\in S_k\}}{\text{{\bf max}}}\frac{\boldsymbol{x}^{\top}M\boldsymbol{x}}{\boldsymbol{x}^{\top}\boldsymbol{x}}, \] where $S_k=\text{{\bf span}}\{\boldsymbol{x}_{i_{1}},\cdots,\boldsymbol{x}_{i_{k}}\}$.
\end{lem} \section{Problem Formulation \label{sec:Problem-Formulation}} Consider a multi-agent system consisting of $n>1$ ($n\in\mathbb{Z}_{+}$) agents whose interaction network is characterized by a matrix-weighted time-varying graph $\mathcal{G}(t)=(\mathcal{V},\mathcal{E}(t),A(t))$, where $t$ refers to the time index. The node and edge sets of $\mathcal{G}(t)$ are denoted by $\mathcal{V}=\left\{ 1,2,\ldots,n\right\} $ and $\mathcal{E}(t)\subseteq\mathcal{V}\times\mathcal{V}$, respectively. The weight on the edge $(i,j)\in\mathcal{E}(t)$ is encoded by the symmetric matrix $A_{ij}(t)\in\mathbb{R}^{d\times d}$ such that $A_{ij}(t)\succeq0$ or $A_{ij}(t)\succ0$, and $A_{ij}(t)=0_{d\times d}$ for $(i,j)\not\in\mathcal{E}(t)$. Thereby, the matrix-weighted adjacency matrix $A(t)=[A_{ij}(t)]\in\mathbb{R}^{dn\times dn}$ is a block matrix whose block in the $i$-th block row and the $j$-th block column is $A_{ij}(t)$. It is assumed that $A_{ij}(t)=A_{ji}(t)$ for all $i\neq j\in\mathcal{V}$ and $A_{ii}(t)=0_{d\times d}$ for all $i\in\mathcal{V}$. Denote the state of an agent $i\in\mathcal{V}$ as $\boldsymbol{x}_{i}(t)=[x_{i1}(t),x_{i2}(t),\ldots,x_{id}(t)]^{\top}\in\mathbb{R}^{d}$, evolving according to the protocol, \begin{equation} \dot{\boldsymbol{x}}_{i}(t)=-\sum_{j\in\mathcal{N}_{i}(t)}A_{ij}(t)(\boldsymbol{x}_{i}(t)-\boldsymbol{x}_{j}(t)),\thinspace i\in\mathcal{V},\label{equ:matrix-consensus-protocol} \end{equation} where $\mathcal{N}_{i}(t)=\left\{ j\in\mathcal{V}\,|\,(i,j)\in\mathcal{E}(t)\right\} $ denotes the neighbor set of agent $i\in\mathcal{V}$ at time $t$. Note that protocol \eqref{equ:matrix-consensus-protocol} degenerates into the scalar-weighted case when $A_{ij}(t)=a_{ij}(t)I$, where $a_{ij}(t)\in\mathbb{R}$ and $I$ denotes the $d\times d$ identity matrix. 
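As a concrete illustration, the following minimal sketch (Python/NumPy; the two-agent network and the weight matrix $A_{12}$ are hypothetical choices, not taken from the simulation section of this paper) integrates protocol \eqref{equ:matrix-consensus-protocol} by forward Euler:

```python
import numpy as np

# Minimal sketch of the matrix-weighted protocol for n = 2 agents, d = 2,
# integrated by forward Euler.  A12 is a hypothetical symmetric positive
# definite weight matrix.
A12 = np.array([[2.0, 0.5],
                [0.5, 1.0]])
x1 = np.array([1.0, -1.0])
x2 = np.array([0.0,  2.0])

dt = 0.01
for _ in range(5000):
    d12 = A12 @ (x1 - x2)                  # the matrix weight acts on the state difference
    x1, x2 = x1 - dt * d12, x2 + dt * d12  # symmetric update preserves the state average

print(x1, x2)   # both approach the average of the initial states, [0.5, 0.5]
```

Because $A_{12}\succ0$ here, the difference $\boldsymbol{x}_{1}-\boldsymbol{x}_{2}$ decays in every direction; a merely positive semi-definite weight would leave the directions in $\text{{\bf null}}(A_{12})$ uncorrected.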
Let $C(t)=\text{{\bf diag}}\left\{ C_{1}(t),C_{2}(t),\cdots,C_{n}(t)\right\} \in\mathbb{R}^{dn\times dn}$ be the matrix-valued degree matrix of $\mathcal{G}(t)$, where $C_{i}(t)=\sum_{j\in\mathcal{N}_{i}(t)}A_{ij}(t)\in\mathbb{R}^{d\times d}$. The matrix-valued Laplacian is subsequently defined as $L(t)=C(t)-A(t)$. The dynamics of the overall multi-agent system now admits the form, \begin{equation} \dot{\boldsymbol{x}}(t)=-L(t)\boldsymbol{x}(t),\label{equ:matrix-consensus-overall} \end{equation} where $\boldsymbol{x}(t)=[\boldsymbol{x}_{1}^{\top}(t),\boldsymbol{x}_{2}^{\top}(t),\ldots,\boldsymbol{x}_{n}^{\top}(t)]^{\top}\in\mathbb{R}^{dn}$. \begin{defn} Let $\boldsymbol{x}_{f}=\boldsymbol{1}_{n}\otimes(\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{x}_{i}(0))$. Then the multi-agent system \eqref{equ:matrix-consensus-overall} admits an average consensus solution if $\lim{}_{t\rightarrow\infty}\boldsymbol{x}(t)=\boldsymbol{x}_{f}$, that is, $\lim{}_{t\rightarrow\infty}\boldsymbol{x}_{i}(t)=\frac{1}{n}\sum_{j=1}^{n}\boldsymbol{x}_{j}(0)$ for all $i\in\mathcal{V}$. \end{defn} This work aims to investigate the necessary and/or sufficient conditions under which the multi-agent system \eqref{equ:matrix-consensus-overall} admits an average consensus solution. It is well-known that network connectivity plays a central role in determining consensus for scalar-weighted networks \cite{olfati2004consensus}. However, as we shall show subsequently, the definiteness of the weight matrices is, in addition to connectivity, a crucial factor in examining consensus for matrix-weighted networks. First, we shall recall a few facts on network connectivity. In graph theory, network connectivity captures how a pair of nodes in the network can be ``connected'' by traversing a sequence of consecutive edges called a path. 
A path of $\mathcal{G}(t)$ is a sequence of edges of the form $(i_{1},i_{2}),(i_{2},i_{3}),\ldots,(i_{p-1},i_{p})$, where the nodes $i_{1},i_{2},\ldots,i_{p}\in\mathcal{V}$ are distinct; in this case we say that node $i_{p}$ is reachable from $i_{1}$. The graph $\mathcal{G}(t)$ is connected if any two distinct nodes in $\mathcal{G}(t)$ are reachable from each other. A tree is a connected graph with $n\ge2$ nodes and $n-1$ edges, where $n\in\mathbb{Z}_{+}$. For matrix-weighted graphs, we adopt the following terminology. An edge $(i,j)\in\mathcal{E}(t)$ is positive definite or positive semi-definite if the associated weight matrix $A_{ij}(t)$ is positive definite or positive semi-definite, respectively. A positive path in $\mathcal{G}(t)$ is a path such that every edge on this path is positive definite. A tree in $\mathcal{G}(t)$ is a positive tree if every edge contained in this tree is positive definite. A positive spanning tree of $\mathcal{G}(t)$ is a positive tree containing all nodes in $\mathcal{G}(t)$. \section{Consensus on General Matrix-weighted Time-varying Networks \label{sec:Consensus-on-General}} In order to analyze multi-agent systems of the form~\eqref{equ:matrix-consensus-overall}, we adopt the following assumption on the matrix-weighted time-varying network~\cite{olfati2004consensus,ren2005consensus,cao2011necessary,meng2018uniform}. \textbf{Assumption 1. }There exists a sequence $\{t_{k}|k\in\mathbb{N}\}$ such that $\lim_{k\rightarrow\infty}t_{k}=\infty$ and $\triangle t_{k}=t_{k+1}-t_{k}\in[\alpha,\beta]$ for all $k\in\mathbb{N}$, where $\beta>\alpha>0$, $t_{0}=0$, and $\mathcal{G}(t)$ is time-invariant for $t\in[t_{k},t_{k+1})$ for all $k\in\mathbb{N}$. When $L(t)=L$ for all $t\in[0,\infty)$, \eqref{equ:matrix-consensus-overall} reduces to the consensus protocol on a time-invariant network. 
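The positive-spanning-tree notion above can be tested mechanically: such a tree exists precisely when the subgraph formed by the positive definite edges connects all nodes. A sketch (hypothetical weight matrices; connectivity of the positive definite subgraph checked by union-find):

```python
import numpy as np

def is_positive_definite(M, tol=1e-9):
    # a symmetric M is positive definite iff its smallest eigenvalue is > 0
    return np.min(np.linalg.eigvalsh(M)) > tol

def has_positive_spanning_tree(n, edges):
    """edges: dict mapping a pair (i, j) to its symmetric weight matrix A_ij.
    A positive spanning tree exists iff the positive definite edges alone
    connect all n nodes (union-find connectivity check)."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    for (i, j), A in edges.items():
        if is_positive_definite(A):
            parent[find(i)] = find(j)
    return len({find(v) for v in range(n)}) == 1

# hypothetical 3-node example: one semi-definite edge breaks the positive tree
pd  = np.array([[2.0, 0.0], [0.0, 1.0]])   # positive definite
psd = np.array([[1.0, 1.0], [1.0, 1.0]])   # only positive semi-definite
print(has_positive_spanning_tree(3, {(0, 1): pd, (1, 2): psd}))   # False
print(has_positive_spanning_tree(3, {(0, 1): pd, (1, 2): pd}))    # True
```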
The following observation characterizes the structure of the null space of the matrix-valued Laplacian $L$ of a time-invariant network which, in turn, determines the steady state of the network~\eqref{equ:matrix-consensus-overall}. \begin{lem} \label{lem:1}\cite{trinh2018matrix} Let $\mathcal{G}=(\mathcal{V},\mathcal{E},A)$ be a matrix-weighted time-invariant network with matrix-valued Laplacian $L$. Then $L\succeq0$ and $\text{{\bf null}}(L)=\text{{\bf span}}\left\{ \mathcal{R},\mathcal{H}\right\}$, where $\mathcal{R}=\text{{\bf range}}\{\boldsymbol{1}\otimes I_{d}\}$ and \begin{align*} \mathcal{H}=\{&[\boldsymbol{v}_{1}^{\top},\boldsymbol{v}_{2}^{\top},\cdots,\boldsymbol{v}_{n}^{\top}]^{\top}\in\mathbb{R}^{dn}\mid\\ & (\boldsymbol{v}_{i}-\boldsymbol{v}_{j})\in\text{{\bf null}}(A_{ij}),\,(i,j)\in\mathcal{E}\}. \end{align*} \end{lem} Note that the null space of a matrix-valued Laplacian is determined not only by the network connectivity, but also by the properties of the weight matrices; this is distinct from scalar-weighted networks. For matrix-weighted time-invariant networks, a condition under which the multi-agent system \eqref{equ:matrix-consensus-overall} achieves consensus is provided in the following lemma. \begin{lem} \label{lem:2}\cite{trinh2018matrix} Let $\mathcal{G}=(\mathcal{V},\mathcal{E},A)$ be a matrix-weighted time-invariant network with matrix-valued Laplacian $L$. Then the multi-agent system \eqref{equ:matrix-consensus-overall} admits an average consensus if and only if $\text{{\bf null}}(L)=\mathcal{R}$. \end{lem} \begin{defn} Define the consensus subspace of the multi-agent system \eqref{equ:matrix-consensus-overall} as $\mathcal{R}=\text{{\bf range}}\{\boldsymbol{1}\otimes I_{d}\}$. \end{defn} \begin{lem} \label{lem:3}\cite{trinh2018matrix} Let $\mathcal{G}=(\mathcal{V},\mathcal{E},A)$ be a matrix-weighted time-invariant network. 
If $\mathcal{G}$ has a positive spanning tree $\mathcal{T}$, then the network \eqref{equ:matrix-consensus-overall} admits an average consensus. \end{lem} In order to characterize the relevant properties of the time-varying network $\mathcal{G}(t)$ over a given time span, we introduce the notion of a matrix-weighted integral network; this notion proves crucial in characterizing algebraic and graph-theoretic conditions for reaching consensus on matrix-weighted time-varying networks. \begin{defn} \label{def:integral graph}Let $\mathcal{G}(t)=(\mathcal{V},\mathcal{E}(t),A(t))$ be a matrix-weighted time-varying network. Then the matrix-weighted integral network of $\mathcal{G}(t)$ over time span $[\tau_{1},\tau_{2})\subseteq[0,\infty)$ is defined as $\widetilde{\mathcal{G}}_{[\tau_{1},\tau_{2})}=(\mathcal{V},\widetilde{\mathcal{E}},\widetilde{A})$, where \[ \widetilde{A}=\frac{1}{\tau_{2}-\tau_{1}}\intop_{\tau_{1}}^{\tau_{2}}A(t)dt \] and \[ \widetilde{\mathcal{E}}=\left\{ (i,j)\in\mathcal{V}\times\mathcal{V}\mid\intop_{\tau_{1}}^{\tau_{2}}A_{ij}(t)dt\succ0\; \text{or}\; \intop_{\tau_{1}}^{\tau_{2}}A_{ij}(t)dt\succeq0\right\} . \] \end{defn} Following Definition \ref{def:integral graph}, denote by $\widetilde{C}$ the matrix-weighted degree matrix of $\widetilde{\mathcal{G}}_{[\tau_{1},\tau_{2})}$, that is, $\widetilde{C}=\frac{1}{\tau_{2}-\tau_{1}}\intop_{\tau_{1}}^{\tau_{2}}C(t)dt$, and denote the matrix-valued Laplacian of $\widetilde{\mathcal{G}}_{[\tau_{1},\tau_{2})}$ by $\widetilde{L}_{[\tau_{1},\tau_{2})}$. Thus, \begin{align*} \widetilde{L}_{[\tau_{1},\tau_{2})} & =\widetilde{C}-\widetilde{A}=\frac{1}{\tau_{2}-\tau_{1}}\intop_{\tau_{1}}^{\tau_{2}}L(t)dt. \end{align*} Under Assumption 1, we denote the (time-invariant) network $\mathcal{G}(t)$ on the dwell interval $t\in[t_{k},t_{k+1})$ by $\mathcal{G}_{[t_{k},t_{k+1})}(t)=\mathcal{G}^{k}$ and denote the associated matrix-valued Laplacian by $L^{k}$, where $k\in\mathbb{N}$. 
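Since $L(t)$ is piecewise constant under Assumption 1, $\widetilde{L}_{[t_{k'},t_{k''})}$ reduces to a dwell-time-weighted average of the matrices $L^{k}$. The sketch below (scalar weights, i.e., $d=1$; the graphs and dwell times are hypothetical) illustrates that the integral network can have null space $\mathcal{R}$ even though no individual $L^{k}$ does:

```python
import numpy as np

# For d = 1 the matrix-valued Laplacian is the ordinary weighted Laplacian.
def laplacian(n, edges):
    L = np.zeros((n, n))
    for (i, j), w in edges.items():
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

L0 = laplacian(3, {(0, 1): 1.0})   # first dwell interval: only edge (1,2) active
L1 = laplacian(3, {(1, 2): 2.0})   # second dwell interval: only edge (2,3) active
dwell = [0.5, 1.5]                 # hypothetical dwell times

# Laplacian of the integral network = dwell-time-weighted average of L^k
L_tilde = sum(dt * Lk for dt, Lk in zip(dwell, [L0, L1])) / sum(dwell)

# multiplicity of the zero eigenvalue: 2 for each disconnected L^k,
# but 1 for the connected integral network (null space = span{1})
for L in (L0, L1, L_tilde):
    print(int(np.sum(np.linalg.eigvalsh(L) < 1e-9)))
```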
The following lemma reveals the connection between the null spaces of the matrix-valued Laplacians of a sequence of matrix-weighted networks and that of the corresponding integral network. \begin{lem} \label{thm:null space equality} Let $\mathcal{G}(t)$ be a matrix-weighted time-varying network satisfying \textbf{Assumption 1. }Then $\text{{\bf null}}(\widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})})=\mathcal{R}$ if and only if \[ \underset{i\in\underline{k^{\prime\prime}-k^{\prime}}}{\bigcap}\text{{\bf null}}(L^{k^{\prime}+i-1})=\mathcal{R}, \] where $k^{\prime}<k^{\prime\prime}\in\mathbb{N}$. \end{lem} \begin{IEEEproof} (Necessity) From the definition of the matrix-valued Laplacian, one has $\mathcal{R}\subseteq\underset{i\in\underline{k^{\prime\prime}-k^{\prime}}}{\bigcap}\text{{\bf null}}(L^{k^{\prime}+i-1})$. Assume that $\underset{i\in\underline{k^{\prime\prime}-k^{\prime}}}{\bigcap}\text{{\bf null}}(L^{k^{\prime}+i-1})\neq\mathcal{R}$; then there exists an $\boldsymbol{\eta}\notin\mathcal{R}$ such that $L^{k^{\prime}+i-1}\boldsymbol{\eta}=\boldsymbol{0}$ for all $i\in\underline{k^{\prime\prime}-k^{\prime}}$, which would imply, \begin{align*} \widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})}\boldsymbol{\eta} & =\left(\frac{1}{t_{k^{\prime\prime}}-t_{k^{\prime}}}\intop_{t_{k^{\prime}}}^{t_{k^{\prime\prime}}}L(t)dt\right)\boldsymbol{\eta}\\ & =\frac{1}{t_{k^{\prime\prime}}-t_{k^{\prime}}}\sum_{i=1}^{k^{\prime\prime}-k^{\prime}}L^{k^{\prime}+i-1}(t_{k^{\prime}+i}-t_{k^{\prime}+i-1})\boldsymbol{\eta}\\ & =\boldsymbol{0}, \end{align*} contradicting the fact that $\text{{\bf null}}(\widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})})=\mathcal{R}$. Therefore, $\underset{i\in\underline{k^{\prime\prime}-k^{\prime}}}{\bigcap}\text{{\bf null}}(L^{k^{\prime}+i-1})=\mathcal{R}$. 
(Sufficiency) Assume that $\text{{\bf null}}(\widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})})\neq\mathcal{R}$; then there exists $\boldsymbol{\eta}\notin\mathcal{R}$ such that $\widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})}\boldsymbol{\eta}=\boldsymbol{0}$. Hence, $\boldsymbol{\eta}^{\top}\widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})}\boldsymbol{\eta}=0$, implying that, \begin{align*} & \frac{1}{t_{k^{\prime\prime}}-t_{k^{\prime}}}\boldsymbol{\eta}^{\top}\left(\intop_{t_{k^{\prime}}}^{t_{k^{\prime\prime}}}L(t)dt\right)\boldsymbol{\eta}\\ & =\frac{1}{t_{k^{\prime\prime}}-t_{k^{\prime}}}\sum_{i=1}^{k^{\prime\prime}-k^{\prime}}\boldsymbol{\eta}^{\top}L^{k^{\prime}+i-1}(t_{k^{\prime}+i}-t_{k^{\prime}+i-1})\boldsymbol{\eta}\\ & =0. \end{align*} Since each $L^{k^{\prime}+i-1}$ is positive semi-definite, every summand above is nonnegative, and hence $\boldsymbol{\eta}^{\top}L^{k^{\prime}+i-1}\boldsymbol{\eta}=0$ for all $i\in\underline{k^{\prime\prime}-k^{\prime}}$, which would imply that $L^{k^{\prime}+i-1}\boldsymbol{\eta}=\boldsymbol{0}$; this, on the other hand, contradicts the premise $\underset{i\in\underline{k^{\prime\prime}-k^{\prime}}}{\bigcap}\text{{\bf null}}(L^{k^{\prime}+i-1})=\mathcal{R}$. Thus $\text{{\bf null}}(\widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})})=\mathcal{R}$. \end{IEEEproof} In order to link the state evolution of the multi-agent system \eqref{equ:matrix-consensus-overall} to the null space of the integral of matrix-weighted time-varying networks, we employ the state transition matrix. Denote $\varPhi(k^{\prime},k^{\prime\prime})=e^{-L^{k^{\prime\prime}-1}\triangle t_{k^{\prime\prime}-1}}\cdots e^{-L^{k^{\prime}}\triangle t_{k^{\prime}}}$. Then $\boldsymbol{x}(t_{k^{\prime\prime}})=\varPhi(k^{\prime},k^{\prime\prime})\boldsymbol{x}(t_{k^{\prime}})$, where $k^{\prime}<k^{\prime\prime}\in\mathbb{N}$. Note that the matrix-valued Laplacian $L$ has at least $d$ zero eigenvalues. Let $\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{dn}$ be the eigenvalues of $L$. 
Then we have $0=\lambda_{1}=\cdots=\lambda_{d}\leq\lambda_{d+1}\le\cdots\leq\lambda_{dn}$. Denote by $\beta_{1}\geq\beta_{2}\geq\cdots\geq\beta_{dn}$ the eigenvalues of $e^{-Lt}$; then $\beta_{i}(e^{-Lt})=e^{-\lambda_{i}(L)t}$, i.e., $1=\beta_{1}=\cdots=\beta_{d}\geq\beta_{d+1}\geq\cdots\geq\beta_{dn}$. Moreover, the eigenvector corresponding to the eigenvalue $\beta_{i}(e^{-Lt})$ coincides with that corresponding to $\lambda_{i}(L)$. Consider the symmetric matrix $\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime})$, which has at least $d$ eigenvalues equal to $1$. Let $\mu_{j}$, $j\in\underline{dn}$, be the eigenvalues of $\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime})$, ordered such that $\mu_{1}=\cdots=\mu_{d}=1$ and $\mu_{d+1}\geq\mu_{d+2}\geq\cdots\geq\mu_{dn}$. The following lemma provides the relationship between the null space of the matrix-valued Laplacian of $\widetilde{\mathcal{G}}_{[t_{k^{\prime}},t_{k^{\prime\prime}})}$ and the eigenvalue $\mu_{d+1}$ of $\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime})$. This relationship will prove useful in the proof of our main theorem. \begin{lem} Let $\mathcal{G}(t)$ be a matrix-weighted time-varying network satisfying \textbf{Assumption 1. }Then $\text{{\bf null}}(\widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})})=\mathcal{R}$ if and only if \[ \mu_{d+1}(\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime}))<1, \] where $k^{\prime}<k^{\prime\prime}\in\mathbb{N}$. \end{lem} \begin{IEEEproof} (Sufficiency) Assume that $\text{{\bf null}}(\widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})})\neq\mathcal{R}$; then according to Lemma \ref{thm:null space equality}, there exists an $\boldsymbol{\eta}\notin\mathcal{R}$ such that $L^{k^{\prime}+i-1}\boldsymbol{\eta}=\boldsymbol{0}$ for all $i\in\underline{k^{\prime\prime}-k^{\prime}}$. 
Thus one can obtain $e^{-L^{k^{\prime}+i-1}t}\boldsymbol{\eta}=\boldsymbol{\eta}$ for all $i\in\underline{k^{\prime\prime}-k^{\prime}}$ and $\varPhi(k^{\prime},k^{\prime\prime})\boldsymbol{\eta}=\boldsymbol{\eta}$. According to Lemma \ref{lem:Rayleigh Theorem}, one has \begin{align*} & \mu_{d+1}(\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime}))\\ \geq & \frac{\boldsymbol{\eta}^{\top}\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime})\boldsymbol{\eta}}{\boldsymbol{\eta}^{\top}\boldsymbol{\eta}}\\ = & 1, \end{align*} contradicting, \[ \mu_{d+1}(\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime}))<1. \] Therefore $\text{{\bf null}}(\widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})})=\mathcal{R}$ holds. (Necessity) Assume that $\mu_{d+1}(\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime}))\geq1$. Again, according to Lemma \ref{lem:Rayleigh Theorem}, there exists an $\boldsymbol{\eta}\notin\mathcal{R}$ with $\boldsymbol{\eta}\neq\boldsymbol{0}$ such that \begin{align*} \mu_{d+1}(\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime})) & = \frac{\boldsymbol{\eta}^{\top}\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime})\boldsymbol{\eta}}{\boldsymbol{\eta}^{\top}\boldsymbol{\eta}}\\ &\geq 1. \end{align*} Thus, \[ \parallel\boldsymbol{\eta}\parallel\leq\parallel\varPhi(k^{\prime},k^{\prime\prime})\boldsymbol{\eta}\parallel. \] Let $\boldsymbol{\eta}_{k^{\prime}}=\boldsymbol{\eta}$ and $\boldsymbol{\eta}_{k^{\prime}+i}=e^{-L^{k^{\prime}+i-1}\triangle t_{k^{\prime}+i-1}}\boldsymbol{\eta}_{k^{\prime}+i-1}$ for $i\in\underline{k^{\prime\prime}-k^{\prime}}$. 
Due to the fact that $\beta_{j}(e^{-L^{k^{\prime}+i-1}\triangle t_{k^{\prime}+i-1}})\leq1$ for $j\in\underline{dn}$, one has \[ \parallel e^{-L^{k^{\prime}+i-1}\triangle t_{k^{\prime}+i-1}}\boldsymbol{\eta}\parallel\leq\parallel\boldsymbol{\eta}\parallel, \] which implies that, \begin{align*} \parallel\boldsymbol{\eta}\parallel & \leq\parallel\varPhi(k^{\prime},k^{\prime\prime})\boldsymbol{\eta}\parallel\\ & =\parallel\boldsymbol{\eta}_{k^{\prime\prime}}\parallel\leq\parallel\boldsymbol{\eta}_{k^{\prime\prime}-1}\parallel\leq\ldots\leq\parallel\boldsymbol{\eta}_{k^{\prime}}\parallel\\ & =\parallel\boldsymbol{\eta}\parallel. \end{align*} Hence, $\parallel e^{-L^{k^{\prime}+i-1}\triangle t_{k^{\prime}+i-1}}\boldsymbol{\eta}_{k^{\prime}+i-1}\parallel=\parallel\boldsymbol{\eta}_{k^{\prime}+i-1}\parallel$ for $i\in\underline{k^{\prime\prime}-k^{\prime}}$. Then, one can further derive $L^{k^{\prime}+i-1}\boldsymbol{\eta}_{k^{\prime}+i-1}=\boldsymbol{0}$; thus $\boldsymbol{\eta}_{k^{\prime}+i-1}\in\text{{\bf null}}(L^{k^{\prime}+i-1})$. Note that since, \begin{align*} \parallel\boldsymbol{\eta}_{k^{\prime}+i}-\boldsymbol{\eta}_{k^{\prime}+i-1}\parallel & =\parallel e^{-L^{k^{\prime}+i-1}\triangle t_{k^{\prime}+i-1}}\boldsymbol{\eta}_{k^{\prime}+i-1}-\boldsymbol{\eta}_{k^{\prime}+i-1}\parallel\\ & =\parallel\sum_{m=1}^{\infty}\frac{1}{m!}(-L^{k^{\prime}+i-1}\triangle t_{k^{\prime}+i-1})^{m}\boldsymbol{\eta}_{k^{\prime}+i-1}\parallel\\ & =0, \end{align*} one can further obtain $\boldsymbol{\eta}_{k^{\prime}+i-1}=\boldsymbol{\eta}_{k^{\prime}+i}$ for all $i\in\underline{k^{\prime\prime}-k^{\prime}}$, which implies that $\boldsymbol{\eta}\in\underset{i\in\underline{k^{\prime\prime}-k^{\prime}}}{\cap}\text{{\bf null}}(L^{k^{\prime}+i-1})$ and thus, by Lemma \ref{thm:null space equality}, $\text{{\bf null}}(\widetilde{L}_{[t_{k^{\prime}},t_{k^{\prime\prime}})})\neq\mathcal{R}$. This is a contradiction. 
As such, $\mu_{d+1}(\varPhi(k^{\prime},k^{\prime\prime})^{\top}\varPhi(k^{\prime},k^{\prime\prime}))<1$. \end{IEEEproof} \begin{thm} \label{thm:consensus theorem-necessary}Let $\mathcal{G}(t)$ be a matrix-weighted time-varying network satisfying \textbf{Assumption 1.} If the multi-agent network \eqref{equ:matrix-consensus-overall} admits an average consensus, then there exists a subsequence of $\{t_{k}|k\in\mathbb{N}\}$, denoted by $\{t_{k_{l}}|l\in\mathbb{N}\}$, such that the null space of the matrix-valued Laplacian of $\widetilde{\mathcal{G}}_{[t_{k_{l}},t_{k_{l+1}})}$ is $\mathcal{R}$, namely, $\text{{\bf null}}(\widetilde{L}_{[t_{k_{l}},t_{k_{l+1}})})=\mathcal{R}$ for all $l\in\mathbb{N}$, where $\triangle t_{k_{l}}=t_{k_{l+1}}-t_{k_{l}}<\infty$ and $t_{k_{0}}=t_{0}$. \end{thm} \begin{IEEEproof} Assume that there does not exist a subsequence $\{t_{k_{l}}|l\in\mathbb{N}\}$ such that $\text{{\bf null}}(\widetilde{L}_{[t_{k_{l}},t_{k_{l+1}})})=\mathcal{R}$ for all $l\in\mathbb{N}$; this implies that there exists $k^{*}\in\mathbb{N}$ such that $\text{{\bf null}}(\widetilde{L}_{[t_{k^{*}},\infty)})\neq\mathcal{R}$. Then $\underset{k\geq k^{*},k\in\mathbb{N}}{\bigcap}\text{{\bf null}}(L^{k})\neq\mathcal{R}$. Choose $\boldsymbol{\eta}\notin\mathcal{R}$ with $\boldsymbol{\eta}\in\underset{k\geq k^{*},k\in\mathbb{N}}{\bigcap}\text{{\bf null}}(L^{k})$. Then $L^{k}\boldsymbol{\eta}=\boldsymbol{0}$ for all $k\geq k^{*},k\in\mathbb{N}$. One can choose a suitable $\boldsymbol{x}(0)$ such that $\boldsymbol{x}(t_{k^{*}})=\boldsymbol{\eta}$; then $\underset{t\rightarrow\infty}{\text{{\bf lim}}}\boldsymbol{x}(t)=\boldsymbol{\eta}$, contradicting the fact that the multi-agent network \eqref{equ:matrix-consensus-overall} admits an average consensus. Thus, there exists a subsequence $\{t_{k_{l}}|l\in\mathbb{N}\}$ such that $\text{{\bf null}}(\widetilde{L}_{[t_{k_{l}},t_{k_{l+1}})})=\mathcal{R}$ for all $l\in\mathbb{N}$. 
\end{IEEEproof} \begin{rem} Although the existence of a subsequence of $\{t_{k}|k\in\mathbb{N}\}$, denoted by $\{t_{k_{l}}|l\in\mathbb{N}\}$, such that $\text{{\bf null}}(\widetilde{L}_{[t_{k_{l}},t_{k_{l+1}})})=\mathcal{R}$ for all $l\in\mathbb{N}$ is a necessary condition for average consensus, it is not sufficient. To see this, consider, for instance, the multi-agent system $\dot{\boldsymbol{x}}(t)=-\frac{1}{t^{2}}L\boldsymbol{x}(t)$ for $t\geq1$, where $L$ is the matrix-valued Laplacian of a time-invariant matrix-weighted network for which $\text{{\bf null}}(L)=\mathcal{R}$. Now consider the underlying matrix-weighted time-varying network corresponding to the Laplacian matrix $\frac{1}{t^{2}}L$. Then for any subsequence $\{t_{k_{l}}|l\in\mathbb{N}\}$ of $\{t_{k}|k\in\mathbb{N}\}$, one always has $\text{{\bf null}}(\widetilde{L}_{[t_{k_{l}},t_{k_{l+1}})})=\mathcal{R}$ for all $l\in\mathbb{N}$. However, the solution to the above system is $\boldsymbol{x}(t)=e^{\frac{L}{t}}e^{-L}\boldsymbol{x}(1)$, and $\text{{\bf lim}}_{t\rightarrow\infty}\boldsymbol{x}(t)=e^{-L}\boldsymbol{x}(1)$. Therefore, average consensus cannot be achieved in this example. Thus, we need additional conditions in order to guarantee average consensus for \eqref{equ:matrix-consensus-overall}. These observations motivate the following result. \end{rem} \begin{thm} \label{thm:consensus theorem}Let $\mathcal{G}(t)$ be a matrix-weighted time-varying network satisfying \textbf{Assumption 1}; furthermore, suppose there exists a subsequence of $\{t_{k}|k\in\mathbb{N}\}$, denoted by $\{t_{k_{l}}|l\in\mathbb{N}\}$, such that $\text{{\bf null}}(\widetilde{L}_{[t_{k_{l}},t_{k_{l+1}})})=\mathcal{R}$ for all $l\in\mathbb{N}$, where $\triangle t_{k_{l}}=t_{k_{l+1}}-t_{k_{l}}<\infty$ and $t_{k_{0}}=t_{0}$. 
If there exists a scalar $0<q<1$ such that $\mu_{d+1}(\varPhi(t_{k_{l}},t_{k_{l+1}})^{\top}\varPhi(t_{k_{l}},t_{k_{l+1}}))\leq q$ for all $l\in\mathbb{N}$, then the multi-agent network \eqref{equ:matrix-consensus-overall} admits an average consensus. \end{thm} \begin{IEEEproof} Let $\boldsymbol{\omega}(t)=\boldsymbol{x}(t)-\boldsymbol{x}_{f}$. Then $\dot{\boldsymbol{\omega}}(t)=-L(t)\boldsymbol{\omega}(t)$. Choose $\boldsymbol{\omega}(0)\notin\mathcal{R}$ and observe that, \begin{align*} & \mu_{d+1}(\varPhi(t_{k_{0}},t_{k_{1}})^{\top}\varPhi(t_{k_{0}},t_{k_{1}}))\\ \geq & \frac{\boldsymbol{\omega}(0)^{\top}(\varPhi(t_{k_{0}},t_{k_{1}})^{\top}\varPhi(t_{k_{0}},t_{k_{1}}))\boldsymbol{\omega}(0)}{\boldsymbol{\omega}(0)^{\top}\boldsymbol{\omega}(0)}\\ = & \frac{\boldsymbol{\omega}(t_{k_{1}})^{\top}\boldsymbol{\omega}(t_{k_{1}})}{\boldsymbol{\omega}(0)^{\top}\boldsymbol{\omega}(0)}, \end{align*} implying that, \[ \parallel\boldsymbol{\omega}(t_{k_{1}})\parallel\leq\mu_{d+1}(\varPhi(t_{k_{0}},t_{k_{1}})^{\top}\varPhi(t_{k_{0}},t_{k_{1}}))^{\frac{1}{2}}\parallel\boldsymbol{\omega}(0)\parallel. 
\] Therefore, \begin{align*} \parallel\boldsymbol{\omega}(t_{k_{l+1}})\parallel & \leq\mu_{d+1}(\varPhi(t_{k_{l}},t_{k_{l+1}})^{\top}\varPhi(t_{k_{l}},t_{k_{l+1}}))^{\frac{1}{2}}\parallel\boldsymbol{\omega}(t_{k_{l}})\parallel\\ & \leq\prod_{j=0}^{l}\mu_{d+1}(\varPhi(t_{k_{j}},t_{k_{j+1}})^{\top}\varPhi(t_{k_{j}},t_{k_{j+1}}))^{\frac{1}{2}}\parallel\boldsymbol{\omega}(0)\parallel\\ & \leq q^{\frac{1}{2}(l+1)}\parallel\boldsymbol{\omega}(0)\parallel. \end{align*} Let \[ V(t)=\boldsymbol{\omega}(t)^{\top}\boldsymbol{\omega}(t)=\parallel\boldsymbol{\omega}(t)\parallel^{2}; \] then \[ \dot{V}(t)=2\boldsymbol{\omega}(t)^{\top}(-L(t))\boldsymbol{\omega}(t)\leq0. \] Thus \[ \parallel\boldsymbol{\omega}(t)\parallel\leq\parallel\boldsymbol{\omega}(t_{k_{l+1}})\parallel\leq q^{\frac{1}{2}(l+1)}\parallel\boldsymbol{\omega}(0)\parallel, \] for all $t\in[t_{k_{l+1}},\infty)$. Note that $0<q<1$, and hence, \[ {\displaystyle \lim_{t\rightarrow\infty}}\parallel\boldsymbol{\omega}(t)\parallel=0. \] As such, the multi-agent network \eqref{equ:matrix-consensus-overall} achieves average consensus. \end{IEEEproof} \section{Consensus on Periodic Matrix-weighted Time-varying Networks \label{sec:Consensus-on-Periodic}} In the subsequent discussion, we consider a special class of time-varying networks, where $\mathcal{G}(t)$ is periodic. 
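The contraction quantity $\mu_{d+1}(\varPhi^{\top}\varPhi)$ appearing above is computable: $\varPhi$ is a finite product of matrix exponentials of the $-L^{k}\triangle t_{k}$, and $\mu_{d+1}$ is an ordinary symmetric eigenvalue. A sketch for $d=1$, $n=3$ with hypothetical weights and unit dwell times (neither interval's graph is connected, yet their composition contracts off the consensus subspace):

```python
import numpy as np

# Each L^k is symmetric, so e^{-L dt} can be computed by eigendecomposition
# (avoiding any dependence on scipy.linalg.expm).
def sym_expm(M):
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(w)) @ V.T

L0 = np.array([[ 1., -1.,  0.],
               [-1.,  1.,  0.],
               [ 0.,  0.,  0.]])    # first interval: only edge between agents 1,2
L1 = np.array([[ 0.,  0.,  0.],
               [ 0.,  2., -2.],
               [ 0., -2.,  2.]])    # second interval: only edge between agents 2,3

Phi = sym_expm(-L1) @ sym_expm(-L0)              # state transition over both intervals
mu = np.sort(np.linalg.eigvalsh(Phi.T @ Phi))[::-1]   # eigenvalues, descending

print(mu[0])        # ~ 1: the consensus direction is invariant
print(mu[1] < 1.0)  # strict contraction off the consensus subspace
```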
The periodic network $\mathcal{G}(t)$ is formally characterized by the following assumption. \textbf{Assumption 2. } There exists a $T>0$ such that $\mathcal{G}(t+T)=\mathcal{G}(t)$ for any $t\geq0$. Moreover, there exists a time sequence $\left\{ t_{k}|k\in\mathbb{N}\right\}$ satisfying $\triangle t_{k}=t_{k+1}-t_{k}>\alpha$ for all $k\in\mathbb{N}$, where $\alpha>0$, and each time span $[lT,(l+1)T)$ is partitioned into $m>2$ ($m\in\mathbb{N}$) subintervals such that \[ lT=t_{lm}<t_{lm+1}<\cdots<t_{(l+1)m}=(l+1)T,\thinspace l\in\mathbb{N}, \] and $\mathcal{G}(t)$ is time-invariant for $t\in[t_{k},t_{k+1})$, where $k\in\mathbb{N}$. Under Assumption 2, we now proceed to provide the algebraic and graph-theoretic conditions under which the multi-agent system \eqref{equ:matrix-consensus-overall} admits average consensus. \begin{thm} \label{thm:consensus theorem-periodic}Let $\mathcal{G}(t)$ be a periodic matrix-weighted time-varying network satisfying \textbf{Assumption 2}. Then the multi-agent network \eqref{equ:matrix-consensus-overall} admits average consensus if and only if, \[ \text{{\bf null}}(\widetilde{L}_{[0,T)})=\mathcal{R}. \] \end{thm} \begin{IEEEproof} (Necessity) Assume that $\text{{\bf null}}(\widetilde{L}_{[0,T)})\neq\mathcal{R}$; then, by Lemma \ref{thm:null space equality}, there exists an $\boldsymbol{\eta}\notin\mathcal{R}$ such that $L^{i-1}\boldsymbol{\eta}=\boldsymbol{0}$ for all $i\in\underline{m}$. Let $\boldsymbol{x}(0)=\boldsymbol{\eta}$. By periodicity, we then obtain $\boldsymbol{x}(t)=\boldsymbol{\eta}$ for all $t>0$, contradicting the fact that the multi-agent network \eqref{equ:matrix-consensus-overall} admits average consensus. (Sufficiency) Let $\boldsymbol{\omega}(t)=\boldsymbol{x}(t)-\boldsymbol{x}_{f}$; then we have $\dot{\boldsymbol{\omega}}(t)=-L(t)\boldsymbol{\omega}(t)$. Denote \[ \varPhi(0,T)=e^{-L^{m-1}\triangle t_{m-1}}\cdots e^{-L^{0}\triangle t_{0}}, \] and choose $\boldsymbol{\omega}(0)\notin\mathcal{R}$. 
Then, \begin{align*} \mu_{d+1}(\varPhi(0,T)^{\top}\varPhi(0,T)) & \geq\frac{\boldsymbol{\omega}(0)^{\top}(\varPhi(0,T)^{\top}\varPhi(0,T))\boldsymbol{\omega}(0)}{\boldsymbol{\omega}(0)^{\top}\boldsymbol{\omega}(0)}\\ & =\frac{\boldsymbol{\omega}(T)^{\top}\boldsymbol{\omega}(T)}{\boldsymbol{\omega}(0)^{\top}\boldsymbol{\omega}(0)}, \end{align*} implying that, \[ \boldsymbol{\omega}(T)^{\top}\boldsymbol{\omega}(T)\leq\mu_{d+1}(\varPhi(0,T)^{\top}\varPhi(0,T))\boldsymbol{\omega}(0)^{\top}\boldsymbol{\omega}(0). \] Therefore, \[ \parallel\boldsymbol{\omega}(T)\parallel\leq\mu_{d+1}(\varPhi(0,T)^{\top}\varPhi(0,T))^{\frac{1}{2}}\parallel\boldsymbol{\omega}(0)\parallel, \] and hence, \[ \parallel\boldsymbol{\omega}(kT)\parallel\leq\mu_{d+1}(\varPhi(0,T)^{\top}\varPhi(0,T))^{\frac{k}{2}}\parallel\boldsymbol{\omega}(0)\parallel. \] Since $V(t)=\parallel\boldsymbol{\omega}(t)\parallel^{2}$ is non-increasing along \eqref{equ:matrix-consensus-overall}, one has \[ \parallel\boldsymbol{\omega}(t)\parallel\leq\parallel\boldsymbol{\omega}(kT)\parallel\leq\mu_{d+1}(\varPhi(0,T)^{\top}\varPhi(0,T))^{\frac{k}{2}}\parallel\boldsymbol{\omega}(0)\parallel, \] for $t\in[kT,(k+1)T)$. Moreover, since $\text{{\bf null}}(\widetilde{L}_{[0,T)})=\mathcal{R}$, the preceding lemma relating this null space to $\mu_{d+1}$ yields $\mu_{d+1}(\varPhi(0,T)^{\top}\varPhi(0,T))<1$; then $\text{{\bf lim}}_{t\rightarrow\infty}\parallel\boldsymbol{\omega}(t)\parallel=0$. Therefore, the multi-agent network \eqref{equ:matrix-consensus-overall} admits average consensus. \end{IEEEproof} Theorem \ref{thm:consensus theorem-periodic} provides an algebraic condition for reaching consensus on periodic matrix-weighted time-varying networks using the structure of the null space of the matrix-valued Laplacian of the corresponding integral network. An analogous graph-theoretic condition is as follows. \begin{thm} \label{thm:graph condition-integral graph}Let $\mathcal{G}(t)$ be a periodic matrix-weighted time-varying network satisfying \textbf{Assumption 2}. If the integral network of $\mathcal{G}(t)$ over time span $[0,T)$ has a positive spanning tree, then the multi-agent network \eqref{equ:matrix-consensus-overall} admits average consensus. 
\end{thm} \begin{IEEEproof} Let $\widetilde{\mathcal{G}}_{[0,T)}$ be the integral network of $\mathcal{G}(t)$ over time span $[0,T)$. If $\widetilde{\mathcal{G}}_{[0,T)}$ has a positive spanning tree, then, by Lemma \ref{lem:2} and Lemma \ref{lem:3}, one has $\text{{\bf null}}(\widetilde{L}_{[0,T)})=\mathcal{R}$, where $\widetilde{L}_{[0,T)}$ is the matrix-valued Laplacian of $\widetilde{\mathcal{G}}_{[0,T)}$. Theorem \ref{thm:consensus theorem-periodic} now implies that the multi-agent network \eqref{equ:matrix-consensus-overall} admits average consensus. \end{IEEEproof} \section{Simulation Results \label{sec:Simulation-Results}} Consider a sequence of matrix-weighted networks on the same four agents, with topologies $\mathcal{G}_{1}$, $\mathcal{G}_{2}$ and $\mathcal{G}_{3}$ as shown in Figure \ref{fig:example-network}. Note that $n=4$ and $d=2$ in this example. \begin{figure}[H] \begin{centering} \begin{tikzpicture}[scale=0.6] \node (n1) at (1 ,0.3) [circle,circular drop shadow,fill=black!20,draw] {1}; \node (n2) at (0,-1.5) [circle,circular drop shadow,fill=black!20,draw] {2}; \node (n3) at (2,-1.5) [circle,circular drop shadow,fill=black!20,draw] {3}; \node (n4) at (3 ,0.3) [circle,circular drop shadow,fill=black!20,draw] {4}; \node (G2) at (1.2,-2.5) {\large{$\mathcal{G}_1$}}; \draw[-, ultra thick, color=black!70] [-] (n1) -- (n2); \draw [-, ultra thick, dashed, color=black!70] (n2) -- (n3); \end{tikzpicture}\,\,\,\,\,\,\,\,\,\,\,\,\begin{tikzpicture}[scale=0.6] \node (n1) at (1 ,0.3) [circle,circular drop shadow,fill=black!20,draw] {1}; \node (n2) at (0,-1.5) [circle,circular drop shadow,fill=black!20,draw] {2}; \node (n3) at (2,-1.5) [circle,circular drop shadow,fill=black!20,draw] {3}; \node (n4) at (3 ,0.3) [circle,circular drop shadow,fill=black!20,draw] {4}; \node (G2) at (1.2,-2.5) {\large{$\mathcal{G}_2$}}; \draw[-, ultra thick, color=black!70] (n2) -- (n4); \draw[-, ultra thick, dashed, color=black!70] (n4) -- 
(n3); \end{tikzpicture}\,\,\,\,\,\,\,\,\,\,\,\,\begin{tikzpicture}[scale=0.6] \node (n1) at (1 ,0.3) [circle,circular drop shadow,fill=black!20,draw] {1}; \node (n2) at (0,-1.5) [circle,circular drop shadow,fill=black!20,draw] {2}; \node (n3) at (2,-1.5) [circle,circular drop shadow,fill=black!20,draw] {3}; \node (n4) at (3 ,0.3) [circle,circular drop shadow,fill=black!20,draw] {4}; \node (G2) at (1.2,-2.5) {\large{$\mathcal{G}_3$}}; \draw[-, ultra thick, color=black!70] (n2) -- (n3); \end{tikzpicture} \par\end{centering} \caption{Three matrix-weighted networks $\mathcal{G}_{1}$, $\mathcal{G}_{2}$, and $\mathcal{G}_{3}$. Edges weighted by positive definite matrices are drawn with solid lines; edges weighted by positive semi-definite matrices are drawn with dashed lines.} \label{fig:example-network} \end{figure} The matrix-valued edge weights for each network are, \[ A_{12}(\mathcal{G}_{1})=\left[\begin{array}{cc} 1 & 1\\ 1 & 2 \end{array}\right],\thinspace\thinspace A_{23}(\mathcal{G}_{1})=\left[\begin{array}{cc} 1 & 1\\ 1 & 1 \end{array}\right], \] \[ A_{24}(\mathcal{G}_{2})=\left[\begin{array}{cc} 1 & 0\\ 0 & 2 \end{array}\right],\thinspace\thinspace A_{34}(\mathcal{G}_{2})=\left[\begin{array}{cc} 1 & 0\\ 0 & 0 \end{array}\right], \] and \[ A_{23}(\mathcal{G}_{3})=\left[\begin{array}{cc} 1 & -1\\ -1 & 2 \end{array}\right], \] respectively. 
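As a sanity check, the matrix-valued Laplacians of $\mathcal{G}_{1}$, $\mathcal{G}_{2}$ and $\mathcal{G}_{3}$ can be assembled programmatically from these edge weights; a sketch (Python/NumPy, with agent indices shifted to $0$-based):

```python
import numpy as np

# Assemble the 8x8 matrix-valued Laplacians L(G1), L(G2), L(G3) of the
# example (n = 4, d = 2) as L = C - A, block by block.
def block_laplacian(edges, n=4, d=2):
    L = np.zeros((n * d, n * d))
    for (i, j), A in edges.items():
        A = np.asarray(A, dtype=float)
        L[i*d:(i+1)*d, i*d:(i+1)*d] += A     # degree block of node i
        L[j*d:(j+1)*d, j*d:(j+1)*d] += A     # degree block of node j
        L[i*d:(i+1)*d, j*d:(j+1)*d] -= A     # off-diagonal adjacency blocks
        L[j*d:(j+1)*d, i*d:(i+1)*d] -= A
    return L

LG1 = block_laplacian({(0, 1): [[1, 1], [1, 2]], (1, 2): [[1, 1], [1, 1]]})
LG2 = block_laplacian({(1, 3): [[1, 0], [0, 2]], (2, 3): [[1, 0], [0, 0]]})
LG3 = block_laplacian({(1, 2): [[1, -1], [-1, 2]]})

# each Laplacian is positive semi-definite and annihilates R = range(1 (x) I_2)
R_basis = np.kron(np.ones((4, 1)), np.eye(2))
for L in (LG1, LG2, LG3):
    print(np.allclose(L @ R_basis, 0), np.min(np.linalg.eigvalsh(L)) > -1e-9)
```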
The matrix-valued Laplacian matrices corresponding to the above three networks are \[ L(\mathcal{G}_{1})=\left[\begin{array}{cccccccc} 1 & 1 & -1 & -1 & 0 & 0 & 0 & 0\\ 1 & 2 & -1 & -2 & 0 & 0 & 0 & 0\\ -1 & -1 & 2 & 2 & -1 & -1 & 0 & 0\\ -1 & -2 & 2 & 3 & -1 & -1 & 0 & 0\\ 0 & 0 & -1 & -1 & 1 & 1 & 0 & 0\\ 0 & 0 & -1 & -1 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right], \] \[ L(\mathcal{G}_{2})=\left[\begin{array}{cccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 2 & 0 & 0 & 0 & -2\\ 0 & 0 & 0 & 0 & 1 & 0 & -1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & -1 & 0 & 2 & 0\\ 0 & 0 & 0 & -2 & 0 & 0 & 0 & 2 \end{array}\right], \] and \[ L(\mathcal{G}_{3})=\left[\begin{array}{cccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & -1 & -1 & 1 & 0 & 0\\ 0 & 0 & -1 & 2 & 1 & -2 & 0 & 0\\ 0 & 0 & -1 & 1 & 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -2 & -1 & 2 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right], \] respectively. \begin{figure}[tbh] \begin{centering} \begin{tikzpicture}[scale=1] \node (G1) at (0,0) [] {\large{$\mathcal{G}_1$}}; \node (G2) at (2.5,0) [] {\large{$\mathcal{G}_2$}}; \node (G3) at (5,0) [] {\large{$\mathcal{G}_3$}}; \path[] (G1) [->, thick] edge node[above] {$ 2\Delta t$} (G2); \path[] (G2) [->, thick] edge node[above] {$3\Delta t$} (G3); \path[] (G1) [<-, thick, bend left=50] edge node[above] {$\Delta t$} (G3); \end{tikzpicture} \par\end{centering} \caption{Switching sequence amongst networks $\mathcal{G}_{1}$, $\mathcal{G}_{2}$, and $\mathcal{G}_{3}$.} \label{fig:switching-network} \end{figure} Consider a time sequence $\{t_{k}\thinspace|\thinspace k\in\mathbb{N}\}$ such that $t_{k}=k\Delta t$, where $\Delta t>0$.
The evolution is initiated from network $\mathcal{G}_{1}$ (i.e., ${\normalcolor \mathcal{G}(0)=\mathcal{G}_{1}}$) with $\boldsymbol{x}_{1}(0)=[0.6787,\thinspace0.7577]^{\top}$, $\boldsymbol{x}_{2}(0)=[0.7431,\thinspace0.3922]^{\top}$, $\boldsymbol{x}_{3}(0)=[0.6555,\thinspace0.1712]^{\top}$ and $\boldsymbol{x}_{4}(0)=[0.7060,\thinspace0.0318]^{\top}$. The switching among networks $\mathcal{G}_{1},\mathcal{G}_{2}$ and $\mathcal{G}_{3}$ occurs at instants in $\{t_{k}\thinspace|\thinspace k\in\mathbb{N}\}$, according to \[ {\normalcolor \mathcal{G}(t)}=\begin{cases} \begin{array}{c} \mathcal{G}_{1},\\ \mathcal{G}_{2},\\ \mathcal{G}_{3}, \end{array} & \begin{array}{c} t\in[t_{6l},t_{6l+2}),\\ t\in[t_{6l+2},t_{6l+5}),\\ t\in[t_{6l+5},t_{6(l+1)}), \end{array}\end{cases} \] where $l\in\mathbb{N}$. The network switching process is demonstrated in Figure \ref{fig:switching-network}. Examining the dimensions of the null spaces of $L(\mathcal{G}_{1})$, $L(\mathcal{G}_{2})$ and $L(\mathcal{G}_{3})$, we have $\text{{\bf null}}(L(\mathcal{G}_{1}))\not=\mathcal{R}$, $\text{{\bf null}}(L(\mathcal{G}_{2}))\not=\mathcal{R}$ and $\text{{\bf null}}(L(\mathcal{G}_{3}))\not=\mathcal{R}$. However, note that from Figure \ref{fig:integral-network}, the integral graph of $\mathcal{G}(t)$ over the time span $[t_{6l},t_{6(l+1)})$, where $l\in\mathbb{N}$, denoted by $\widetilde{\mathcal{G}}$, has a positive spanning tree $\mathcal{T}(\widetilde{\mathcal{G}})$. Therefore, according to Theorem \ref{thm:graph condition-integral graph}, the multi-agent system \eqref{equ:matrix-consensus-overall} admits an average consensus solution at $[0.6958,\thinspace0.3382]^{\top}$; see Figure \ref{fig:trajectory}.
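As a numerical sanity check on this example, the matrix-valued Laplacians and the integral network can be reproduced directly from the edge weights. The sketch below is our own construction (the helper `matrix_laplacian` and the dwell-time-weighted average are not from the paper; rescaling the integral Laplacian by the period does not change its null space). It verifies that no single $L(\mathcal{G}_{i})$ has null space equal to the $d$-dimensional consensus subspace $\mathcal{R}$, while the integral Laplacian does.

```python
import numpy as np

def matrix_laplacian(n, d, edges):
    """Assemble the nd x nd matrix-valued Laplacian: diagonal blocks sum the
    incident edge weights, off-diagonal blocks are the negated weights."""
    L = np.zeros((n * d, n * d))
    for i, j, A in edges:
        A = np.asarray(A, dtype=float)
        L[i*d:(i+1)*d, i*d:(i+1)*d] += A
        L[j*d:(j+1)*d, j*d:(j+1)*d] += A
        L[i*d:(i+1)*d, j*d:(j+1)*d] -= A
        L[j*d:(j+1)*d, i*d:(i+1)*d] -= A
    return L

n, d = 4, 2  # four agents, states in R^2
L1 = matrix_laplacian(n, d, [(0, 1, [[1, 1], [1, 2]]), (1, 2, [[1, 1], [1, 1]])])
L2 = matrix_laplacian(n, d, [(1, 3, [[1, 0], [0, 2]]), (2, 3, [[1, 0], [0, 0]])])
L3 = matrix_laplacian(n, d, [(1, 2, [[1, -1], [-1, 2]])])

# Dwell times 2*dt, 3*dt, dt over one switching period T = 6*dt.
L_int = (2 * L1 + 3 * L2 + 1 * L3) / 6.0

nullity = n * d - np.linalg.matrix_rank(L_int)
print(nullity)  # 2: null(L_int) equals the consensus subspace R
```

Each individual Laplacian has nullity strictly larger than $d=2$, so no single network suffices, yet the union over one period carries a positive spanning tree and the integral Laplacian's nullity drops to exactly $d=2$.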
\begin{figure}[tbh] \begin{centering} \begin{tikzpicture}[scale=0.7] \node (n1) at (1 ,0.3) [circle,circular drop shadow,fill=black!20,draw] {1}; \node (n2) at (0,-1.5) [circle,circular drop shadow,fill=black!20,draw] {2}; \node (n3) at (2,-1.5) [circle,circular drop shadow,fill=black!20,draw] {3}; \node (n4) at (3 ,0.3) [circle,circular drop shadow,fill=black!20,draw] {4}; \node (G2) at (1.2,-2.5) {{$\widetilde{\mathcal{G}}$}}; \draw[-, ultra thick, color=black!70] (n1) -- (n2); \draw[-, ultra thick, color=black!70] (n2) -- (n3); \draw[-, ultra thick, color=black!70] (n2) -- (n4); \draw[-, ultra thick, dashed, color=black!70] (n3) -- (n4); \end{tikzpicture}\,\,\,\,\,\,\,\,\,\,\,\,\,\begin{tikzpicture}[scale=0.7] \node (n1) at (1 ,0.3) [circle,circular drop shadow,fill=black!20,draw] {1}; \node (n2) at (0,-1.5) [circle,circular drop shadow,fill=black!20,draw] {2}; \node (n3) at (2,-1.5) [circle,circular drop shadow,fill=black!20,draw] {3}; \node (n4) at (3 ,0.3) [circle,circular drop shadow,fill=black!20,draw] {4}; \node (G2) at (1.2,-2.5) {{$\mathcal{T}(\widetilde{\mathcal{G}})$}}; \draw[-, ultra thick, color=black!70] (n1) -- (n2); \draw[-, ultra thick, color=black!70] (n2) -- (n3); \draw[-, ultra thick, color=black!70] (n2) -- (n4); \end{tikzpicture} \par\end{centering} \caption{The integral graph of $\mathcal{G}(t)$ over the time span $[t_{6l},t_{6(l+1)})$, where $l\in\mathbb{N}$ (left), and the associated positive spanning tree $\mathcal{T}(\widetilde{\mathcal{G}})$ (right).} \label{fig:integral-network} \end{figure} \begin{figure}[tbh] \begin{centering} \includegraphics[width=9cm]{figures/trajectory} \par\end{centering} \caption{State evolution in the multi-agent system \eqref{equ:matrix-consensus-overall}.} \label{fig:trajectory} \end{figure} \section{Conclusion \label{sec:Conclusion}} This paper examines consensus problems on matrix-weighted time-varying networks. For such networks, necessary and/or sufficient conditions for reaching average consensus are provided.
Furthermore, for matrix-weighted periodic time-varying networks, necessary and sufficient algebraic and graph-theoretic conditions are obtained for reaching average consensus. \bibliographystyle{IEEEtran} \phantomsection\addcontentsline{toc}{section}{\refname}
\section{INTRODUCTION} \begin{figure} \centering \includegraphics[scale=0.5]{Figure1.jpg} \caption{An image of a truncated magnetoelastic membrane in a precessing magnetic field. The degree of truncation $S=h/(2R)$, where $h$ is the sagitta length of the removed circular segment, and $R$ is the membrane radius, determines membrane symmetry. The magnetic field $\vec{\bm{H}}$ precesses at the angle $\theta$ around the $z$-axis with a phase given by $\phi=\omega t$, where $\omega$ is the precession frequency and $t$ is time. The field induces an amplitude, $A$, along the membrane perimeter, measured from the $x$-$y$ plane. Coloration indicates z-position (+z in red and -z in blue).} \label{fig:fig1} \end{figure} \begin{figure*} \centering \includegraphics[scale=0.62]{Figure2.jpg} \caption{A circular magnetoelastic membrane in a precessing magnetic field adopts dynamic motion. (a) Transverse waves propagate around the membrane above a critical frequency ($\omega > \omega_c$). (b) A schematic plot showing the phase diagram of a membrane. Above the dotted black curve, the ``wobbling" membrane remains perpendicular to the precession axis and possesses the rotational waves from (a). The wave amplitude maximizes just before the transition. Below this curve, the membrane buckles and rotates asynchronously with the field, hence ``dancers". (c) The bending stiffness controls the shape of the rotational waves. The black arrows indicate the direction of wave propagation. Coloration indicates z-position (+z in red and -z in blue).} \label{fig:fig2} \end{figure*} Magnetically controlled microrobots have applications in drug delivery \cite{klosta2020kinematics,Dreyfus2005micro, jang2015undulatory, bryan2019magneto}, sensing \cite{moerland2019rotating,goubault2003flexible}, micromixing \cite{biswal2004mixing}, detoxification \cite{zhu2015microfish, wu2015nanomotor} and microsurgery \cite{wu2020multifunctional, Vyskocil2020cancer}.
Such versatile use of magnetic microrobots is possible because magnetic fields can penetrate organic matter, do not interfere with biological or chemical functions, do not require fuel, and, most importantly, can be externally controlled. These properties allow for non-invasive and precise spatiotemporal execution of desired function. In particular, superparamagnetic particles are ideal candidates for robotic functions when combined with elastic components \cite{dempster2017contractile,Dreyfus2005micro} due to their lack of residual magnetization, which lowers their propensity to agglomerate, and their lower toxicity compared to ferromagnetic particles \cite{markides2012biocomp}. Magnetoelastic membranes possess a diverse repertoire of possible dynamic motion under time-dependent magnetic fields \cite{brisbois2019actuation,Hu2018small}, making them particularly suited for designing multifunctional microrobots. Navigating a viscous environment requires a magnetoelastic microrobot to use competing magnetic and elastic interactions to induce non-reciprocal motion \cite{purcell1977life}. That is, the sequence of configurations that the robot adopts must break time-reversal symmetry to swim at low Reynolds numbers (Re $\ll 1$). For example, magnetoelastic filaments achieve non-reciprocal motion with a non-homogeneous distribution of magnetic components or with shape asymmetry \cite{Dreyfus2005micro, jang2015undulatory, bryan2019magneto, yang2020reconfig, cebers2005flexible}. These asymmetries induce bending waves that propagate along the chain. In nature, microscopic organisms such as euglenids \cite{Arroyo2012reverse} swim using self-propagating waves directed along their cellular membrane. G. I. Taylor was the first to model such organisms using a transverse wave traveling along an infinite 2D sheet \cite{taylor1951analysis}. Taylor found that the wave induced motion in the sheet opposite to the propagating wave direction.
Subsequent works expanded upon Taylor's findings \cite{lauga2009hydro} and developed a rotational counterpart \cite{Corsi2019neutrally} that produces a hydrodynamic torque on circular membranes with prescribed waves traveling around their perimeter. In this article, we study rotational waves in homogeneous superparamagnetic membranes under precessing magnetic fields. We investigate the dynamic modes of the membrane, separated by a critical precession frequency $\omega_c$, below which the membrane motion is asynchronous with the field, and above which rotational waves propagate in-phase with the field precession. Breaking the membrane's center of inversion symmetry, by removing part of the circle (Fig. 1), allows for locomotion in the fast frequency phase ($\omega > \omega_c$). Shape asymmetry is needed to disrupt the inversion symmetry of the magnetic forces experienced by a circular membrane. We show that the torque and velocity of the membrane counterintuitively resemble those of the linear Taylor sheet rather than its rotational analogue. Furthermore, by controlling a magnetoviscous parameter and the membrane shape asymmetry, we demonstrate swimming directed by a programmed magnetic field and diagram its non-reciprocal path through conformation space. The paper is organized as follows. In Sec. II, we establish the phase diagram of a circular magnetoelastic membrane in a precessing magnetic field and determine the transition frequency $\omega_c$. In Sec. III, we introduce hydrodynamic interactions and observe circular locomotion in asymmetric membranes. In Sec. IV, we demonstrate a programmed magnetic field that directs a membrane swimmer along a predetermined path. Finally, we make concluding remarks on the necessary conditions for superparamagnetic swimmers in Sec. V.
\section{PHASE SPACE FOR UNTRUNCATED MEMBRANE} \begin{figure} \centering \includegraphics[scale=0.65]{Figure3.jpg} \caption{The synchronous-asynchronous (wobbler-dancer) transition frequency $\omega_c$ for a magnetoelastic membrane. (a) Molecular dynamics calculation of $\omega_c$ as a function of the field precession angle $\theta$. The solid and dashed lines indicate a dipole magnitude of $\mu=$ 2 and $\mu=$ 1, respectively. The inset shows the dimensionless transition frequency $\omega_c/\Omega$, where $\Omega$ is the membrane's characteristic rotation frequency. The green dashed line represents the theoretical transition at $\omega_c/\Omega=2/\pi$, which, near $\theta=$ 90$^\circ$, is independent of bending stiffness ($\kappa=$ 1, orange; $\kappa=$ 100, blue/red). The black squares show the transition calculated from lattice-Boltzmann simulations. (b) Supercritical and subcritical behavior of the total energy U (magnetic + bending). The precession frequency is close to the critical frequency, $0.029 < \omega_c < 0.030$ ($\theta=$ 80$^\circ$). Fourier transform of the rotational wave amplitude (bottom).} \label{fig:fig3} \end{figure} We construct the phase diagram for the dynamic modes of the membrane using molecular dynamics (MD) without hydrodynamics to efficiently search for non-reciprocal actuation relevant to locomotion. Actuation of magnetoelastic membranes in time-dependent magnetic fields necessitates a model that captures elastic bending in response to magnetic forces, which are imparted by the dipolar interactions of embedded magnetic colloids. The membrane is composed of a hexagonal close-packed monolayer of hard spherical colloids, each of diameter $\sigma$ and possessing a point dipole moment $\bm{\mu}$ at its center. The bonds between the colloids are approximately inextensible, but able to bend with rigidity $\kappa$.
We model an implicit, uniform magnetic field by constraining the orientation of the colloids' dipole moments in the direction of the field, $\bm{H}=\bm{\mu}/\chi$, where $\chi$ is the magnetic susceptibility of the material and $\bm{\mu}$ is the dipole moment with magnitude $\mu$. The instantaneous dipole orientation is given by $\bm{\hat{\mu}} = \sin{\theta}\sin{\omega t}~\bm{\hat{i}} + \sin{\theta}\cos{\omega t}~\bm{\hat{j}} + \cos{\theta}~\bm{\hat{k}}$, where $\theta$ is the field precession angle, $\omega$ is the precession frequency, and $t$ is time. All quantities herein are expressed in dimensionless units (see Appendix A). A diverse set of possible dynamic motions develops depending on the radius $R$ of the thin membrane and the magnetic field parameters ($\mu$, $\theta$, $\omega$). While varying these parameters, we solve the equations of motion for an overdamped system with a friction force imparted on each colloid given by $-\xi v(t)$, where $v(t)$ is the colloid velocity, and $\xi$ is the damping coefficient. Within this approximation, two dynamic mode regimes develop. At fast frequencies ($\omega > \omega_c$), the membrane motion synchronizes with the field to produce transverse waves that propagate around the membrane (Fig. 2a). At slow frequencies ($\omega < \omega_c$), we observe a collection of modes that are asynchronous with the field. We find a critical frequency, $\omega_c$, where there is an abrupt change in the membrane's dynamic motion (Fig. 2b). As the field precesses, the forces along the membrane perimeter generate internal buckling and create a torque that rotates the membrane around its diameter. If the magnetic field precession is fast ($\omega > \omega_c$), the continuous change in the direction of the axis of rotation leads to the development of a constant-amplitude wave traveling along the membrane perimeter; see Video 1 in Ref. \cite{video1}.
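For concreteness, the constrained dipole orientation $\bm{\hat{\mu}}(t)$ above can be written as a few lines of Python. This is a sketch of the stated formula only, not the authors' simulation code, and the parameter values are illustrative:

```python
import numpy as np

def dipole_orientation(theta, omega, t):
    """Unit dipole direction for a field precessing about z at angle theta."""
    return np.array([
        np.sin(theta) * np.sin(omega * t),
        np.sin(theta) * np.cos(omega * t),
        np.cos(theta),
    ])

theta, omega = np.deg2rad(70.0), 0.1  # illustrative field parameters
mu_hat = dipole_orientation(theta, omega, t=5.0)

# The direction is always a unit vector with fixed z-component cos(theta):
# only the in-plane components rotate, at the precession frequency omega.
print(round(float(np.linalg.norm(mu_hat)), 12))  # 1.0
```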
On average, the membrane remains perpendicular to the precession axis and simply ``wobbles", synchronous to the field, and with no significant rotation around the precession axis. This state closely resembles acoustically levitated granular rafts \cite{lim2021acoustically}. The direction of the propagating wave matches the handedness of precession because the dipole-dipole forces, which cause buckling, point in the direction of the magnetic field. However, the field polarity does not affect the magnitude or travel direction of the wave since the superparamagnetic dipoles are always oriented in the same direction as the field. Hence, the force due to the dipole-dipole interactions, ${\bm{F}}_{dipole}$, remains unchanged (${\bm{F}}_{dipole} \propto (\bm{\mu} \cdot \bm{r})\bm{\mu} = (-\bm{\mu} \cdot \bm{r})(-\bm{\mu})$, where $\bm{r}$ is the displacement vector between dipoles \cite{yung1998analytical}). In addition to the rotational waves, the wobbling mode also manifests radially propagating (inward) bending waves (Fig. 2c) that terminate at the membrane center. The wave shape weakly depends on the membrane stiffness $\kappa$; the wave form is better defined as $\kappa$ decreases. However, totally compliant membranes ($\kappa\rightarrow 0$) do not transmit bending waves and therefore this phenomenon exists only for intermediate $\kappa$. \begin{figure*} \centering \includegraphics[scale=0.65]{Figure4.jpg} \caption{Fluid flow around a magnetoelastic membrane in the ``wobbler" regime. The top images show the total force vector for each colloid (blue arrows) alongside the dipole orientation (cyan arrows) for a precessing field ($\mu=$1, $\theta=$ 70$^\circ$, $\omega=$ 0.1). The bottom images show streamlines around the membrane, where the color indicates flow speed (red $>$ blue). (a) A snapshot of a circular membrane.
(b) Two snapshots of a truncated circular membrane separated by a shift in the field precession $\Delta\phi=\omega t=6\pi/5$.} \label{fig:fig4} \end{figure*} If the precession is slow ($\omega < \omega_c$), the membrane has enough time to rotate completely parallel to the precession axis and will adopt new configurations due to elastic buckling. How the membrane buckles depends on the magnetoelastic parameter \cite{vazquez2018flexible}, $\Gamma = M L^2 / \kappa$, which characterizes the ratio between the membrane's magnetic and bending energies, where $M$ is the magnetic modulus, and $L^2$ is the membrane area. If the magnitude of $\Gamma$ is very small ($\Gamma \ll 1$) or very large ($\Gamma \gg 1$), we observe hard disk behavior because bending distortions become impossible due to mechanical stiffness or due to unfavorable magnetic interactions, respectively. While not investigated here, strong magnetic coupling \cite{park2020dna,messina2015quant} between colloids will adversely affect membrane synthesis. At intermediate $\Gamma$, membrane edges buckle several times per precession period and produce magnetically stabilized conformations that, while periodic, run out-of-sync with the field, see Video 2 in Ref. \cite{video2}. Much of this back-and-forth ``dancing" motion is essentially reciprocal and is therefore generally a poor candidate for studying swimming at small Re. Therefore, here we seek to formally define $\omega_c$ and focus on the wobbling regime ($\omega > \omega_c$). To accurately determine the transition frequency $\omega_c$ that separates the wobblers from the dancers, we investigate how the magnetic field parameters (precession angle $\theta$, dipole magnitude $\mu$) and membrane radius $R$ (Fig. 3a) contribute to the characteristic response time $\tau$ of the rotating membrane. When the membrane rotation time $\tau$ increases, a slower field is necessarily required to keep the membrane in the wobbling mode, decreasing $\omega_c$.
A larger $\tau$ can be achieved by weakening the magnetic torque ($\theta$ closer to $\pi/2$ or smaller $\mu$) or increasing the drag on the membrane (larger $R$). Similarly, a smaller $\tau$ implies a fast membrane response from a strong field or a small membrane. We observe that $\omega_c$ diverges as $\theta$ approaches the magic angle, partly due to instability of the wobbling phase at angles below the magic angle \cite{cimurs2013dynamics}. The transition to the wobbling state is characterized by the abrupt shift in the membrane's potential energy from a time-dependent function to a constant value (Fig. 3b, top). When the potential energy does not change, the shape of the membrane conformation becomes invariant in the rotating field reference frame. This change in the dynamic buckling results in a single Fourier mode for the displacement of the colloids parallel to the precession axis (Fig. 3b, bottom). This resembles the transition between the synchronous and asynchronous motion for oblate magnetic particles \cite{cimurs2013dynamics}. \begin{figure*} \centering \includegraphics[scale=0.65]{Figure5.jpg} \caption{Actuation drives circular locomotion of truncated magnetoelastic membranes through a viscous fluid. (a) The average rotational (``wobble") wave amplitude $A_{avg}$, scaled by the membrane radius $R$, depends inversely on the magnetoviscous parameter $\tau \omega$. Data points from lattice Boltzmann simulations are compared to our analytical model (solid blue line). The coloration of the simulation data indicates the degree of truncation $S$. The inset shows the variation in $A/\sigma$ over time based on membrane geometry ($S=$ 0.05, black; $S=$ 0.5, gray), where $\sigma$ is the colloid diameter. (b) The path taken by a membrane in a precessing field. The arrow indicates the travel direction with velocity $V$. The inset shows the radius $\rho$ of this path as a function of $S$.
(c) The membrane velocity is proportional to $A_{avg}^2 \propto (\tau \omega)^{-2}$ and scales with $S^{3/2}$ due to changes in the length of the membrane perimeter. The data point shapes are coded by the membrane radius ($R=$ 7, triangle; $R=$ 9, square; $R=$ 12, circle). The line shows our analytical prediction (slope = 1.0). The inset shows the continuous inversion symmetry measure for a flat truncated membrane.} \label{fig:fig5} \end{figure*} When the precession angle approaches $\pi/2$, the membrane motion becomes independent of the stiffness of the membrane; the membrane remains flat at all times and for all values of $\omega$. As the field precesses, the forces perpendicular to the membrane plane vanish near $\theta = \pi/2$, preventing significant radial bending and, consequently, changing $\kappa$ does not shift $\omega_c$ (Fig. 3a, inset). By solving an Euler-Lagrange equation with Rayleigh dissipation (Appendix B), we derive an equation of motion for a membrane in a field precessing at a large angle. It reveals a characteristic frequency of membrane motion, $\Omega = 6 \zeta(3)\mu_0\mu^2\sin{2\theta} / (\pi^2 \eta R^2 \sigma^4)$, where $\mu_0$ is the magnetic permeability of free space, $R$ is the radius of the membrane, $\eta$ is the viscosity, and $\zeta(x)$ is the Riemann zeta function. The frequency $\Omega$ comes from the magnetic ($\propto \mu_0\mu^2\sin{2\theta} R^2/\sigma^5$) and drag ($\propto \eta R^4/\sigma$) potential functions. The $\omega_c$ curves in Fig. 3a can be scaled by $\Omega$ to obtain a dimensionless transition frequency $\omega_c / \Omega = 2/\pi$ (Fig. 3a, inset). This provides a single number with which to predict the dynamic motion of a membrane and defines the membrane response time $\tau = \Omega^{-1}$.
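The scaling just described can be made concrete with a short numeric sketch of $\Omega$ and the predicted transition $\omega_c = (2/\pi)\,\Omega$. Everything is in the paper's dimensionless units; setting $\mu_0 = \eta = \sigma = 1$ is our illustrative assumption:

```python
import math

ZETA3 = 1.2020569031595943  # Riemann zeta(3), Apery's constant

def char_frequency(mu, theta, R, eta=1.0, sigma=1.0, mu0=1.0):
    """Omega = 6 zeta(3) mu0 mu^2 sin(2 theta) / (pi^2 eta R^2 sigma^4)."""
    return 6 * ZETA3 * mu0 * mu**2 * math.sin(2 * theta) / (
        math.pi**2 * eta * R**2 * sigma**4)

def transition_frequency(mu, theta, R, **kw):
    """Wobbler-dancer transition: omega_c = (2 / pi) * Omega."""
    return (2 / math.pi) * char_frequency(mu, theta, R, **kw)

# Omega ~ mu^2 / R^2: doubling the dipole magnitude quadruples omega_c,
# while doubling the radius divides it by four.
w = transition_frequency(mu=1.0, theta=math.radians(80), R=7.0)
print(round(transition_frequency(2.0, math.radians(80), 7.0) / w, 6))  # 4.0
```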
\section{HYDRODYNAMIC EFFECTS ON ``WOBBLING" MEMBRANES} It is useful to investigate the broad range of dynamic motions accessible to a magnetoelastic membrane using a simple overdamped system to highlight relevant transitions in motion. Afterwards, we confirm the dimensionless transition $\omega_c / \Omega$ using the more computationally expensive hydrodynamic simulations (Fig. 3a, black squares) and change the characteristic frequency to $\Omega = 27 \zeta(3)\mu_0 \mu^2\sin{2\theta}/(64\eta R\sigma^5)$ to include hydrodynamic interactions (see Appendix B). Using the same magnetic potential as the previous section, this change in $\Omega$ is due to the torque on the membrane in a viscous fluid ($\propto \eta R^3$). We will use this definition for $\Omega$ hereafter. To observe the effect of the wobbler's non-reciprocal motion on the surrounding fluid, we add hydrodynamic interactions to our simulations by coupling the MD model to the lattice Boltzmann method (LBM) \cite{Mackay2013hydrodynamic}. This technique, which comes from a discretization of the Boltzmann transport equation, reproduces the incompressible Navier-Stokes equation in the macroscopic limit. LBM calculates the evolution of a discrete-velocity distribution function, $f_i$, at each fluid node that fills the simulation box on a square lattice mesh with a spacing of $\Delta x$. The surface of each colloid acts as a boundary and is defined by surface nodes that interact with the fluid using the model developed by Peskin \cite{peskin2002immersed}. Care must be taken when implementing LBM with MD because compliant springs can cause translation of the membrane due to in-plane stretching. This mechanism has been observed in systems of a few colloids \cite{Grosjean2018surface}. Therefore, the stiffness of the springs must be large enough to eliminate this effect for an inextensible membrane model, which requires the use of a smaller simulation timestep.
Simultaneously, small Re in LBM is achieved by decreasing the Mach number, set by the speed of sound $c_s=\frac{1}{\sqrt3}\frac{\Delta x}{\Delta t}$ \cite{kruger}. Therefore, we rely on a small timestep that is compatible with both schemes. See Appendix C for a complete description of the model. The fluid flow around the membrane is determined by its symmetry and actuation. The precessing magnetic field induces a torque along the membrane perimeter that circulates fluid around an axis of rotation through the membrane diameter (Fig. 4a). This axis of rotation moves continuously with the field, producing circulating flows above and below the membrane. The rising peaks push the fluid up (+z) and it flows towards the falling peak (-z) on the other side of the membrane. This flow simultaneously resembles analytical predictions for rotating hard disks \cite{tanzosh1996general} and the flow vorticity from Taylor's swimming sheet \cite{lauga2009hydro}. \begin{figure*} \includegraphics[scale=0.5]{Figure6.jpg} \caption{A two-step magnetic field directs a swimming membrane along a path. (a) First, a membrane wobbler moves under a precessing magnetic field. After it rotates a half-turn (\#1), the precession switches to a fast frequency at $\theta=\pi/2$ while the axis rotates to flip the membrane (\#2). (b) We define the angles that the normal vector $\bm{n}$ and the truncation vector $\bm{S}$ make with the $x$-axis as $\zeta_n$ and $\zeta_S$, respectively. (c) The path in conformation space over the two-step field. (d) Repeated cycles from (a) move the membrane against the Brownian motion of a thermalized fluid. The upper panel shows the motion of the membrane in the $x$-$y$ plane. The black arrow indicates the direction of motion. The lower panel shows the displacement in the $z$-direction.} \label{fig:fig6} \end{figure*} The centrosymmetry of a magnetoelastic membrane generates a flow that prevents its center of mass from translating.
To induce locomotion, we truncate the membrane by removing a circular segment with a sagitta of length $h$. We normalize $h$ by the diameter of the circle to define the degree of truncation of the circular membranes as $S = h/(2R)$. In contrast to the circular membrane case, the shape of the fluid flow in the truncated membrane changes during a single precession period, leading to an asymmetric flow field that depends on the relative orientation between the field and the truncation cut (Fig. 4b). The amplitude of propagating waves is particularly relevant for predicting the translational \cite{taylor1951analysis} or rotational \cite{Corsi2019neutrally} velocity of a membrane. Here, the wobble amplitude can be calculated by balancing the magnetic \cite{yung1998analytical} and drag \cite{tanzosh1996general} torque in a viscous fluid (see Appendix D). Under small amplitudes for the rotational wave, we obtain the simple relation \begin{equation} \frac{A}{R}=\frac{C}{\tau\omega} \label{eq:amplitude} \end{equation} where $A$ is the amplitude, and $C=32/(9\pi^2)$ (Fig. 5a). It is reasonable to assume that, in the limit of small deformations, the bending contribution to the torque along the edge is negligible, unless $\kappa \rightarrow \infty$. Furthermore, the amplitude is independent of the membrane size since $\tau \propto R$. However, the membrane is not free to increase in radius arbitrarily. The small Re condition implies that $\nu \gg R^2/\tau$, where $\nu$ is the kinematic viscosity. Obeying this constraint on $\tau$, we can define a magnetoviscous parameter $\tau \omega$ and use it to predict locomotion of the membrane. Asymmetry in the fluid flow due to a degree of truncation $S>0$ leads to locomotion of the membrane. The membrane travels with a net velocity in the direction of the truncation cut. This net motion is due to the decrease in the amplitude of the waves traveling along the truncated edge.
Since the truncated edge is closer to the center of mass and $\kappa$ is homogeneous, the membrane will bend to a lesser extent along the truncation. This manifests as a net motion every $2\pi/\omega$, where reversing the handedness of the field reverses the locomotive direction. However, the membrane follows a curved path. While the membrane torque due to the underlying colloidal lattice is negligible, the membrane can rotate significantly by choosing $\omega$ close to $\omega_c$. This rotation emerges exclusively due to the magnetic interactions perpendicular to the wobbling membrane. If the projection of the forces, visualized in Fig. 4, on the $x$-$y$ plane is non-zero, the membrane will rotate. Over many precession periods, the membrane moves in a circular path around a central point (Fig. 5b). The radius $\rho$ of the path depends on $S$ and $A_{avg}$. Untruncated ($S=0$) and fully truncated ($S=1$) membranes do not translate, resulting in $\rho=0$. Hence, a maximum for $\rho$ exists at intermediate $S$ values (Fig. 5b inset). Since the membrane is composed of colloids, irregularities in the $\rho \left(S,A_{avg}\right)$ curve appear because the symmetry of the membrane changes in discrete steps. The magnetic field controls how quickly the membrane travels along the circular path and affects its angular velocity. Together with the truncation $S$, the velocity $V$ at which the membrane translates along the path can be determined using a singularity method. With a nearest-neighbors assumption for the magnetic interactions and treating them as point-disturbances, the advective flow through the center of mass leads to the velocity \begin{equation} \frac{V}{R\omega} = \frac{C^2}{12 \zeta(3)} \frac{S^{3/2}}{{(\tau\omega)}^2} \label{eq:velocity} \end{equation} where the $S^{3/2}$ dependence comes from the number of uncompensated point forces formed by truncation. The velocity is normalized by the phase speed $R\omega$.
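Both relations can be evaluated directly. The sketch below implements the amplitude and velocity formulas above (our own transcription; the parameter values are illustrative) and checks the quoted $V \propto A_{avg}^2 \propto (\tau\omega)^{-2}$ scaling:

```python
import math

C = 32 / (9 * math.pi**2)       # prefactor from the amplitude relation
ZETA3 = 1.2020569031595943      # Riemann zeta(3)

def wobble_amplitude(tau_omega):
    """A / R: inversely proportional to the magnetoviscous parameter."""
    return C / tau_omega

def swim_velocity(S, tau_omega):
    """V / (R * omega): scales as S^(3/2) / (tau * omega)^2."""
    return C**2 / (12 * ZETA3) * S**1.5 / tau_omega**2

# Halving tau*omega doubles the amplitude and quadruples the velocity,
# consistent with V being proportional to the square of the amplitude.
v_ratio = swim_velocity(0.5, 1.0) / swim_velocity(0.5, 2.0)
print(round(v_ratio, 6))  # 4.0
```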
The inverse squared relation on $\tau \omega$ for the velocity is a result of the dependence on the product of the magnetic force and the wave amplitude, which itself depends on the magnetic force. Here, we recover the velocity dependence on the square of the wave amplitude \cite{taylor1951analysis}, but with a lower velocity ($V \leq V_{Taylor}/6$). Appendix E contains the full derivation. We see a deviation from the relationship obtained in Eq. \ref{eq:velocity} at large values of $S^{3/2}/(\tau \omega)^2$ owing to either a high degree of truncation (a linear polymer) or a small magnetoviscous parameter (``dancer") (Fig. 5c). The direction of $V$ is dictated by the handedness of the precessing field and is an example of magnetically induced symmetry breaking. We find that the continuous symmetry measurement \cite{zabrodsky1992continuous} can predict relative changes in the velocity of locomotion. When the inversion asymmetry increases, $V$ increases (Fig. 5c, inset) because the conformational path taken by the membrane widens, leading to greater net work done on the fluid \cite{grosjean2016realization}. \section{MEMBRANE SWIMMING} Here, we give an example of how a programmed magnetic field can produce a non-reciprocal conformational path that results in linear swimming. In Fig. 6a, we show that a precessing field can rotate the membrane 180$^\circ$ from its initial configuration. Then, the precession frequency is increased and $\theta$ is set to $\pi/2$. This keeps the membrane flat in the precession plane while the precession axis is rotated to flip the membrane. This field is on for a period of $\pi/\omega_s$ to flip the membrane orientation, where $\omega_s > \omega$. Once the membrane resembles the starting configuration, the 2-step field is repeated. After half the orbit from Fig. 5b is obtained, the membrane's center of mass has shifted $\sim 2 \rho$. The ``flip" from the second field places the membrane back into its original configuration.
This recovery stroke moves the membrane back towards its original position, but not entirely, leading to a net translation. The chirality and duration of the magnetic field precession control the in-plane displacement, while the flip direction controls the direction of the out-of-plane displacement. The fastest achievable velocity using this method is $V_{max}=(2/\pi)V$, but this is reduced by the time taken during the recovery step. This cycle forms a closed loop in configuration space spanned by two independent degrees of freedom: the angles $\zeta_n$ and $\zeta_S$ that the normal vector $\bm{n}$ and the truncation vector $\bm{S}$ make with the $x$-axis, respectively (Fig. 6b). Note that this configuration loop is in addition to the already present non-reciprocal motion of the wobbling mode, but it is needed because the latter only follows circular paths. Thermalizing the LB fluid to 1 $k_BT$ by the method of Adhikari et al. \cite{Adhikari2005fluctuating}, for $S^{3/2}/(\tau \omega)^2 \approx$ 10$^{-2}$, shows a swimming membrane as $\zeta_n$ and $\zeta_S$ change (Fig. 6c). In this instance, the path during the rotation step, which changes $\zeta_S$, is dominated by Brownian motion. The largest displacements occur during the flipping step, which changes $\zeta_n$. Additionally, each flip shifts the membrane along the $z$-axis, where the traveling direction is determined by the handedness of the flip. By controlling the precession axis orientation, a membrane may be directed along an arbitrary path. The useful swimming regime is bounded by the P\'{e}clet number and the dimensionless transition frequency. In other words, the system parameters, in particular the field frequency $\omega$, must be large enough to maintain the wobbling mode, but not so large as to attenuate the wobble amplitude below an efficient swimming velocity. In practice, this implies operating at a driving frequency just above $\omega_c$.
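The maximum speed $V_{max}=(2/\pi)V$ quoted above follows from chord-versus-arc geometry: over half an orbit the membrane covers an arc of length $\pi\rho$ at speed $V$, but its net displacement is only the diameter $2\rho$. A minimal numeric check (assuming, for simplicity, an instantaneous recovery flip):

```python
from math import pi, cos, sin, hypot, isclose

# Moving at speed V along a circle of radius rho, half an orbit takes
# t = pi*rho/V, but the net displacement is only the diameter 2*rho,
# so the cycle-averaged speed is (2*rho) / (pi*rho/V) = (2/pi) * V.
V, rho = 1.0, 3.0                       # arbitrary units; the ratio is universal
t_half = pi * rho / V                   # duration of the half-orbit power stroke
start = (rho * cos(0.0), rho * sin(0.0))
end = (rho * cos(pi), rho * sin(pi))    # position after half an orbit
net = hypot(end[0] - start[0], end[1] - start[1])
assert isclose(net, 2.0 * rho)
assert isclose(net / t_half, (2.0 / pi) * V)
```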
The range for the frequency can be written as $2\Omega/\pi<\omega<C^2\eta R^3S^{3/2}/\left(\sqrt{2}\,\zeta(3)\tau^2k_BT\right)$. Here, we calculate the P\'{e}clet number using the swimming velocity from Appendix E, take the membrane radius as the characteristic length, and set the diffusion coefficient using the radius of gyration of a disk \cite{capuani2006disks}. For example, a membrane of $R=$ 1 $\upmu$m composed of 25 nm magnetite nanoparticles at $25^{\circ}$C in water, subject to a 50 mT field \cite{susan2019from} precessing at 80$^\circ$, gives an effective frequency range of 1--10 kHz. \section{SWIMMING IN HOMOGENEOUS MEMBRANES} Homogeneous superparamagnetic membranes require both non-reciprocal motion and shape asymmetry to swim in viscous fluids. While the Scallop Theorem \cite{purcell1977life} establishes the necessity of non-reciprocal motion, implementing such motion without modifying the elastic or magnetic homogeneity implies using a ``non-reciprocal'' magnetic field, where the field vector returns to its starting position without retracing its path. Using a field that does not self-retrace imparts a change in membrane conformation that breaks time-reversal symmetry. However, this type of external magnetic field will still generate centrosymmetric forces within a symmetric membrane. Therefore, shape asymmetry is also needed to displace the membrane center of mass during one precession period, where more asymmetry leads to a larger per-period displacement. \acknowledgements{We would like to acknowledge Mykola Tasinkevych and Eleftherios Kyrkinis for helpful discussions. We thank the Sherman Fairchild Foundation for computational support. This project was funded by the Department of Energy's Center for Bio-Inspired Energy Science (DE-SC0000989).}
\section{Introduction} \label{sec:Introduction} Massive, young stars are sites of nucleosynthesis, not just of the stable nuclides but of radioactive isotopes as well. Long-lived radioisotopes with decay times of billions of years or longer, such as $^{40}$K, mix into the interstellar medium (ISM), accumulating and providing a very low-level source of energetic radiation in all gas \citep{Cameron62,Umebayashi81}. More unstable isotopes are also synthesized; some have decay times of a few years or less and cannot reach most of the ISM. In between those two extremes are the short-lived radionuclides (SLRs) of the ISM: with $\sim \textrm{Myr}$ decay times, they can be present in a galaxy's gas, but only as long as star formation replenishes their abundances. The most prominent of the SLRs is \textrm{$^{26}$Al}, which is detected in gamma-ray decay lines from star-formation regions throughout the Milky Way \citep{Mahoney84,Diehl06}. Another SLR that has recently been detected in gamma-ray lines is $^{60}$Fe \citep{Harris05}. SLRs are not just passive inhabitants of the ISM. By releasing energetic particles and radiation when they decay, they inject power that can heat or ionize the surrounding matter. In the Milky Way's molecular clouds, the radioactivity is overwhelmed by that of cosmic rays, which sustain an ionization rate of $\zeta_{\rm CR} \approx 5 \times 10^{-17}\ \sec^{-1}$. Rapidly star-forming galaxies known as starbursts may have elevated levels of cosmic rays and ionization rates a thousand times or more higher than the Milky Way (\citealt*{Suchkov93}; \citealt{Papadopoulos10-CRDRs}). However, it is possible that cosmic rays are stopped by the abundant gas in starbursts before they enter the densest molecular gas.
Gamma rays can provide ionization through large columns; but while the gamma-ray ionization rate can reach up to $\zeta_{\gamma} \approx 10^{-16}\ \sec^{-1}$ in the densest starbursts, in most starbursts gamma rays sustain relatively weak ionization \citep{Lacki12-GRDRs}. SLRs like \textrm{$^{26}$Al} can in principle provide ionization through any column of gas, and if abundant enough, maintain moderate levels of ionization. The major open question for SLR-induced ionization is how well mixed the SLRs are with the gas of the galaxy, a process which can take up to 100 Myr; if the mixing times are longer than a few Myr, the SLRs are not abundant in most star-forming cores \citep{Meyer00,Huss09}. Meteorites recording the composition of the primordial Solar system demonstrate that SLRs were present during its formation. Assuming the SLRs were not created in situ by energetic radiation from the Sun \citep{Lee98}, the SLRs provide evidence that the Solar system formed near a star-forming region with young massive stars \citep[e.g.,][]{Adams10}. In fact, \textrm{$^{26}$Al} was overabundant by a factor $\sim 6$ in the primordial Solar system, with $X (\textrm{$^{26}$Al}) \approx 10^{-10}$ ($\textrm{$^{26}$Al} / ^{27}{\rm Al} \approx 5 \times 10^{-5}$), compared to its present day abundances in the Milky Way (e.g., \citealt*{Lee77}; \citealt*{MacPherson95}; \citealt{Diehl06}; \citealt{Huss09}). Their quick decay times also indicate that the Solar system formed quickly, within a few Myr. SLRs, particularly \textrm{$^{26}$Al}, were a primary source of ionization in the Solar Nebula \citep{Stepinski92,Finocchi97,Umebayashi09}, affecting the conductivity and ultimately the accretion rate in the protoplanetary disc.
Moreover, \textrm{$^{26}$Al} and other SLRs may have regulated the early geological evolution of the Solar system by being a major source of heat in early planetesimals, driving their differentiation and rock metamorphism \citep[e.g.,][]{Hutcheon89,Grimm93,Shukolyukov93}. The contemporary Milky Way, with a star-formation rate (SFR) of a few solar masses per year, is not a typical environment for most of the star-formation in the history of the Universe, however. Roughly 5-20\% of star-formation at all times occurred in rapid starbursts mainly driven by galaxy-galaxy interactions and mergers \citep{Rodighiero11,Sargent12}. Furthermore, most of the star-formation in `normal' galaxies occurred in massive galaxies with a much higher star-formation rate ($\ga 10\ \textrm{M}_{\sun}\ \textrm{yr}^{-1}$) at redshifts $z$ of 1 and higher, when most star-formation took place \citep[e.g.,][]{Magnelli09}. These high star-formation rates translate into large masses of SLRs present in these galaxies. I will show that \textrm{$^{26}$Al} in these galaxies, if it is well mixed with the gas, can sustain rather high ionization rates in their ISMs. This has consequences for both star formation and planet formation. When necessary, I assume a Hubble constant of $H_0 = 72\ \textrm{km}~\textrm{s}^{-1}\ \textrm{Mpc}^{-1}$, a matter density of $\Omega_M = 0.25$, and a cosmological constant $\Omega_{\Lambda} = 0.75$ for the cosmology. \section{The Equilibrium Abundance of SLR\lowercase{s}} In a one-zone model of a galaxy, which disregards spatial inhomogeneities, the complete equation for the SLR mass $M_{\rm SLR}$ in the ISM is \begin{equation} \label{eqn:MeqSLRFull} \frac{dM_{\rm SLR}}{dt} = Q_{\rm SLR}(t) - \frac{M_{\rm SLR} (t)}{\tau_{\rm SLR}}, \end{equation} where $\tau_{\rm SLR}$ is the lifetime of the SLR in the galaxy.
$Q_{\rm SLR} (t)$, the injection rate of the SLR, depends on the past star-formation history: \begin{equation} \label{eqn:QSLR} Q_{\rm SLR} (t) = \int_{-\infty}^t Y_{\rm SLR}(t - t^{\prime}) \times {\rm SFR}(t^{\prime}) dt^{\prime}. \end{equation} For a coeval stellar population of age $t$, the yield $Y_{\rm SLR}(t)$ is the mass ejection rate of the SLR into the interstellar medium per unit stellar mass \citep{Cervino00}. If there are no big fluctuations in the star-formation rate over the past few Myr, then the SLR abundance approaches a steady state. The equilibrium mass of a SLR in a galaxy is proportional to its star-formation (or supernova) rate averaged over the previous few Myr. We can then parametrize the injection of SLRs in the ISM by a yield $\Upsilon_{\rm SLR}$ per supernova: \begin{equation} \Upsilon_{\rm SLR} = \varepsilon^{-1} \int_0^{\infty} Y(t^{\prime\prime}) dt^{\prime\prime}, \end{equation} regardless of whether SNe are actually the source of SLRs. The $\varepsilon$ factor is the ratio of the supernova rate $\Gamma_{\rm SN}$ to the star-formation rate. Then the equilibrium SLR mass is given by \citep[e.g.,][]{Diehl06} \begin{equation} \label{eqn:MeqSLR} M_{\rm SLR}^{\rm eq} = \Gamma_{\rm SN} \Upsilon_{\rm SLR} \tau_{\rm SLR}. \end{equation} The supernova rate is proportional to the star-formation rate, so $\Gamma_{\rm SN} = \varepsilon {\rm SFR}$. The abundance of an SLR is given by $X_{\rm SLR} = M_{\rm SLR}^{\rm eq} m_H / (M_{\rm gas} m_{\rm SLR})$, where $m_{\rm SLR}$ is the mass of one atom of the SLR, $m_H$ is the mass of a hydrogen atom, and $M_{\rm gas}$ is the gas mass in the galaxy. Therefore the abundance of an SLR is \begin{equation} X_{\rm SLR} = \varepsilon \frac{\rm SFR}{M_{\rm gas}} \frac{\Upsilon_{\rm SLR} \tau_{\rm SLR} m_H}{m_{\rm SLR}}. \end{equation} The quantity $M_{\rm gas} / {\rm SFR} = \tau_{\rm gas}$ is the gas consumption time.
Note that it is related to the specific star formation rate, ${\rm SSFR} = {\rm SFR} / M_{\star}$, as $\tau_{\rm gas} = f_{\rm gas} / ((1 - f_{\rm gas}) {\rm SSFR})$, where $M_{\star}$ is the stellar mass and $f_{\rm gas} = M_{\rm gas} / (M_{\rm gas} + M_{\star})$ is the gas fraction. Therefore, we can express the equilibrium abundance of the SLR in a galaxy as \begin{equation} X_{\rm SLR} = \frac{\varepsilon \Upsilon_{\rm SLR} \tau_{\rm SLR} m_H}{\tau_{\rm gas} m_{\rm SLR}} = \varepsilon \frac{1 - f_{\rm gas}}{f_{\rm gas}} {\rm SSFR} \frac{\Upsilon_{\rm SLR} \tau_{\rm SLR} m_H}{m_{\rm SLR}}. \end{equation} Finally, the ratio of the SLR abundance in a galaxy to that in the Milky Way is \begin{eqnarray} \nonumber \frac{X_{\rm SLR}}{X_{\rm SLR}^{\rm MW}} & = & \frac{\tau_{\rm gas}^{\rm MW}}{\tau_{\rm gas}} \frac{\tau_{\rm SLR}}{\tau_{\rm SLR}^{\rm MW}}\\ & = & \frac{1 - f_{\rm gas}}{1 - f_{\rm gas}^{\rm MW}} \frac{f_{\rm gas}^{\rm MW}}{f_{\rm gas}} \frac{\rm SSFR}{\rm SSFR^{\rm MW}} \frac{\tau}{\tau^{\rm MW}}, \end{eqnarray} with a MW superscript referring to values in the present-day Milky Way. Thus, galaxies with short gas consumption times (and generally those with high SSFRs) should have high abundances of SLRs. The reason is that in such galaxies, more of the gas is converted into stars and SLRs within the residence time of an SLR. The greatest uncertainty in these abundances is the residence time $\tau_{\rm SLR}$. In the Milky Way, these times are just the radioactive decay times, defined here as the e-folding time. In starburst galaxies, however, much of the volume is occupied by a hot, low density gas which forms into a galactic wind with characteristic speeds $v$ of several hundred kilometres per second (e.g., \citealt{Chevalier85}; \citealt*{Heckman90}; \citealt{Strickland09}). If massive stars emit SLRs at random locations in the starburst, most of them will dump their SLRs into the wind phase of the ISM.
The wind-crossing time is $\tau_{\rm wind} = 330\ \textrm{kyr}\ (h / 100\ \textrm{pc}) (v / 300\ \textrm{km}~\textrm{s}^{-1})^{-1}$, where $h$ is the gas scale-height. The equilibrium time in starburst galaxies is then $\tau = [\tau_{\rm decay}^{-1} + \tau_{\rm wind}^{-1}]^{-1}$. Furthermore, the SLRs ejected into the wind may never mix with the molecular gas, so the fraction of SLRs injected into the molecular medium may be $\ll 1$ (I discuss this issue further in section~\ref{sec:Mixing}). However, very massive stars are found close to their birth environments where there is a lot of molecular gas to enrich, and these may be the source of \textrm{$^{26}$Al}, as supported by the correlation of the 1.809 MeV \textrm{$^{26}$Al} decay line emission and free-free emission from massive young stars \citep{Knoedlseder99}. Turning to the specific example of \textrm{$^{26}$Al}, I note that the yield of \textrm{$^{26}$Al} is thought to be $\Upsilon_{\rm Al-26} \approx 1.4 \times 10^{-4}\ \textrm{M}_{\sun}$ per supernova \citep{Diehl06}. For a Salpeter initial mass function from $0.1 - 100\ \textrm{M}_{\sun}$, the supernova rate is $\Gamma_{\rm SN} = 0.0064\ \textrm{yr}^{-1} ({\rm SFR} / \textrm{M}_{\sun}\ \textrm{yr}^{-1})$, or $\Gamma_{\rm SN} = 0.11\ \textrm{yr}^{-1} (L_{\rm TIR} / 10^{11}\ \textrm{L}_{\sun})$ in terms of the total infrared ($8 - 1000\ \mu\textrm{m}$) luminosity $L_{\rm TIR}$ of starbursts (\citealt{Kennicutt98}; \citealt*{Thompson07}). 
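These quoted numbers can be cross-checked with a few lines of Python. The sketch below recomputes the wind-crossing time, the effective residence time, and, assuming full retention and M82-like inputs (an SFR of 10.5~$\textrm{M}_{\sun}\ \textrm{yr}^{-1}$ and a gas mass of $2 \times 10^8\ \textrm{M}_{\sun}$, the values used later in Table~\ref{table:Al26Abundances}), the equilibrium \textrm{$^{26}$Al} mass and abundance:

```python
# Cross-check of the quoted 26Al numbers.  Inputs: yield 1.4e-4 Msun per
# supernova, eps = 0.0064 SN per Msun of stars formed (Salpeter 0.1-100 Msun),
# a 26Al e-folding decay time of 1.04 Myr, and M82-like SFR and gas mass.
PC_CM = 3.086e18          # cm per parsec
YR_S = 3.156e7            # seconds per year

# Wind-crossing time tau_wind = h / v:
h_cm = 100.0 * PC_CM      # scale height h = 100 pc
v_cms = 300.0 * 1e5       # wind speed v = 300 km/s
tau_wind_myr = h_cm / v_cms / YR_S / 1e6
assert 0.32 < tau_wind_myr < 0.34              # ~330 kyr, as quoted

# Effective residence time: decay and wind losses act in parallel.
tau_decay_myr = 1.04
tau_eff = 1.0 / (1.0 / tau_decay_myr + 1.0 / tau_wind_myr)
assert tau_eff < tau_wind_myr < tau_decay_myr

# Equilibrium 26Al mass and abundance, assuming full retention:
eps = 0.0064              # SN per Msun of stars formed
Y_al26 = 1.4e-4           # Msun of 26Al per SN
sfr, m_gas = 10.5, 2e8    # Msun/yr and Msun (M82-like)
m_eq = eps * sfr * Y_al26 * tau_decay_myr * 1e6    # Msun of 26Al
x_al26 = m_eq / (26.0 * m_gas)                     # abundance per H atom
assert 9.5 < m_eq < 10.0                           # ~9.8 Msun
assert 1.8e-9 < x_al26 < 2.0e-9                    # ~1.9e-9
```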
If I suppose all of the \textrm{$^{26}$Al} is retained by the molecular gas, so that the residence time is the \textrm{$^{26}$Al} decay time of 1.04 Myr, then the equilibrium abundance of \textrm{$^{26}$Al} in a galaxy is just \begin{equation} \label{eqn:XAl26Numer} X (\textrm{$^{26}$Al}) = 3.4 \times 10^{-11} \left(\frac{\tau_{\rm gas}}{\textrm{Gyr}}\right)^{-1} = 1.7 \times 10^{-9} \left(\frac{\tau_{\rm gas}}{20\ \textrm{Myr}}\right)^{-1} \end{equation} \subsection{High-Redshift Normal Galaxies} \label{sec:MSGalaxies} The star-formation rates and stellar masses of normal star-forming galaxies lie on a `main sequence' with a characteristic SSFR that varies weakly, if at all, with stellar mass \citep[e.g.,][]{Brinchmann04}. However, the characteristic SSFR evolves rapidly with redshift \citep[e.g.,][]{Daddi07,Noeske07,Karim11}, with ${\rm SSFR} \propto (1 + z)^{2.8}$ out to $z \approx 2.5$ -- a rise of factor $\sim 30$ \citep{Sargent12}. At $z \ga 2.5$, the SSFR of the main sequence then seems to remain constant \citep{Gonzalez10}. Countering this rise in the SSFR, the gas fractions of normal galaxies at high $z$ were also higher: the high equilibrium masses of SLRs are diluted to some extent by higher gas masses. \citet{Hopkins10} provide a convenient equation, motivated by the Schmidt law \citep{Kennicutt98}, to describe the evolution of gas fraction: \begin{equation} \label{eqn:fGas} f_{\rm gas} (z) = f_0 [1 - (t_L (z) / t_0) (1 - f_0^{3/2})]^{-2/3}, \end{equation} assuming a gas fraction $f_0$ at $z = 0$, with a look back time of $t_L (z) = \int_0^z dz^{\prime} / [H_0 (1 + z^{\prime}) \sqrt{\Omega_{\Lambda} + \Omega_M (1 + z^{\prime})^3}]$ and a current cosmic age of $t_0$ \citep[see also][]{Hopkins09}. Since the gas fractions of normal galaxies at present are small, the evolution at low redshifts can be approximated as $f_{\rm gas} (z) = f_0 [1 - (t_L (z) / t_0)]^{-2/3}$. 
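A short numerical sketch of equation~\ref{eqn:fGas}, with the lookback-time integral evaluated by a simple midpoint rule for the cosmology adopted above ($H_0 = 72\ \textrm{km}~\textrm{s}^{-1}\ \textrm{Mpc}^{-1}$, $\Omega_M = 0.25$, $\Omega_{\Lambda} = 0.75$):

```python
from math import sqrt

# Gas-fraction evolution f_gas(z), with the lookback time t_L(z)
# integrated numerically for the cosmology used in the text.
H0_INV_GYR = 978.0 / 72.0     # 1/H0 in Gyr (1/H0 = 978/h Gyr for H0 = 100h)
OM, OL = 0.25, 0.75

def lookback_gyr(z, steps=100000):
    """t_L(z) by a midpoint rule over the integrand in the text."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz
        total += dz / ((1.0 + zp) * sqrt(OL + OM * (1.0 + zp) ** 3))
    return total * H0_INV_GYR

T0 = lookback_gyr(1e4)        # cosmic age, approximated by t_L(z -> infinity)
assert 13.5 < T0 < 14.1       # ~13.8 Gyr for this cosmology

def f_gas(z, f0):
    """Gas fraction at redshift z, given a present-day fraction f0."""
    return f0 * (1.0 - (lookback_gyr(z) / T0) * (1.0 - f0 ** 1.5)) ** (-2.0 / 3.0)

# At z ~ 2 the gas fraction is roughly twice its z = 0 value, as stated:
assert 2.0 < f_gas(2.0, 0.1) / 0.1 < 3.0
```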
After calculating the mean abundances of SLRs in normal galaxies, I find that the rapid SSFR evolution overwhelms the modest evolution in $f_{\rm gas}$ at high $z$: the SLR abundances of normal galaxies evolve quickly. These enhancements are plotted in Fig.~\ref{fig:NormGalaxy}. Observational studies of high redshift main sequence galaxies indicate a slower evolution in $\tau_{\rm gas}$, resulting from a quicker evolution of $f_{\rm gas}$. Although equation~\ref{eqn:fGas} implies that $f_{\rm gas}$ was about twice as high ten billion years ago at $z \approx 2$, massive disc galaxies are observed with gas fractions of $\sim 40$ -- $50\%$, which is 3 to 10 times greater than at present \citep[e.g.,][]{Tacconi10,Daddi10}. According to \citet{Genzel10}, the typical (molecular) gas consumption time at redshifts 1 to 2.5 was $\sim 500\ \textrm{Myr}$. In the \citet{Daddi10} sample of BzK galaxies at $z \approx 1.5$, gas consumption times are likewise $\sim 300$ -- $700\ \textrm{Myr}$. To compare, the molecular gas consumption times at present are estimated to be 1.5 to 3 Gyr \citep{Diehl06,Genzel10,Bigiel11,Rahman12}, implying an enhancement of a factor of 3 to 6 in SLR abundances at $z \ga 1$. But note that the BzK galaxies are not the direct ancestors of galaxies like the present Milky Way, which are less massive. The SSFR, when observed to have any mass dependence, is greater in low mass galaxies at all $z$ \citep{Sargent12}. This means that lower mass galaxies at all $z$ have shorter $\tau_{\rm gas}$, as indicated by observations of present galaxies \citep{Saintonge11}. The early Milky Way therefore may have had a gas consumption time smaller than $500\ \textrm{Myr}$. So far, I have ignored possible metallicity $Z$ dependencies in the yield $\Upsilon$ of SLRs. It may be generally expected that star-forming galaxies had lower metallicity in the past, since less of the gas has been processed by stellar nucleosynthesis.
However, observations of the age-metallicity relation of G dwarfs near the Sun reveal that they have nearly the same metallicity at ages approaching 10 Gyr (e.g., \citealt{Twarog80,Haywood06}; \citealt*{Holmberg07}), though the real significance of the lack of a trend remains unclear, since there is a wide scatter in metallicity with age \citep[see the discussion by][]{Prantzos09}. Observations of external star-forming galaxies find weak evolution at constant stellar mass, with metallicity $Z$ decreasing by $\sim 0.1-0.2$ dex per unit redshift (\citealt*{Lilly03}; \citealt{Kobulnicky04}). After adopting a metallicity dependence of $Z(z) = Z(0) \times 10^{-0.2 z}$ (0.2 dex decrease per unit redshift), I show the revised SLR abundances in Fig.~\ref{fig:NormGalaxy}, assuming that the SLR yield goes as $Z^{-1}$, $Z^{-0.5}$, $Z^{0.5}$, $Z$, $Z^{1.5}$, and $Z^2$. If the yields are smaller at lower metallicity, the SLR abundances are still elevated at high redshift, though not by as much as for metallicity-independent yields. As an example, the yield of \textrm{$^{26}$Al} in the winds of Wolf-Rayet stars is believed to scale as $\Upsilon \propto Z^{1.5}$ \citep{Palacios05}. According to \citet{Limongi06}, these stellar winds contribute only a minority of the \textrm{$^{26}$Al} yield, so it is unclear how the \textrm{$^{26}$Al} yield really scales. \citet{Martin10} considered the \textrm{$^{26}$Al} and \textrm{$^{60}$Fe} yields from stars with half Solar metallicity. They found that, because reduced metallicity lowers wind losses, more SLRs are produced in supernovae. This mostly compensates for the reduced wind \textrm{$^{26}$Al} yield, and actually raises the synthesized amount of \textrm{$^{60}$Fe}. \begin{figure} \centerline{\includegraphics[width=8cm]{f1.eps}} \caption{Plot of the SLR abundance enhancements in normal galaxies lying on the `main sequence', for a gas fraction evolution described by equation~\ref{eqn:fGas}.
The rapid evolution of SSFRs leads to big enhancements of SLRs at high $z$. Even during the epoch of Solar system formation, the mean SLR abundance was twice the present value. The different lines are for different $f_{\rm gas}$ at $z = 0$, assuming SLR yields are independent of metallicity: 0.05 (dotted), 0.1 (solid), 0.2 (dashed). The shading shows the abundances for $0.05 \le f_{\rm gas} \le 0.2$ when the SLR yield depends on metallicity, assuming a 0.2 dex decrease in metallicity per unit redshift. \label{fig:NormGalaxy}} \end{figure} \subsection{Starbursts} \begin{table*} \begin{minipage}{170mm} \caption{$^{26}$A\lowercase{l} Abundances and Associated Ionization Rates} \label{table:Al26Abundances} \begin{tabular}{lccccccccc} \hline Starburst & SFR & $\Gamma_{\rm SN}$ & $M_{\rm Al-26}^{\rm eq}$ & $M_H$ & $\tau_{\rm gas}$ & $X (\textrm{$^{26}$Al})^a$ & $\displaystyle \frac{^{26}\rm Al}{^{27}\rm Al}^b$ & $\zeta_{\rm Al-26}(e^+)^c$ & $\zeta_{\rm Al-26}(e^+ \gamma)^d$\\ & ($\textrm{M}_{\sun}\ \textrm{yr}^{-1}$) & (yr$^{-1}$) & ($\textrm{M}_{\sun}$) & ($\textrm{M}_{\sun}$) & ($\textrm{Myr}$) & & & $(\sec^{-1})$ & $(\sec^{-1})$\\ \hline Milky Way ($z = 0$)$^e$ & 3.0 & 0.019 & 2.8 & $4.5 \times 10^9$ & 1500 & $2.4 \times 10^{-11}$ & $9.4 \times 10^{-6}$ & $1.9 \times 10^{-20}$ & $7.1 \times 10^{-20}$\\ Galactic Centre CMZ$^f$ & 0.071 & $4.6 \times 10^{-4}$ & 0.067 & $3 \times 10^7$ & 420 & $8.6 \times 10^{-11}$ & $3.4 \times 10^{-5}$ & $6.9 \times 10^{-20}$ & $2.6 \times 10^{-19}$\\ NGC 253 core$^{g,h}$ & 3.6 & 0.023 & 3.3 & $3 \times 10^7$ & 8.3 & $4.3 \times 10^{-9}$ & $1.8 \times 10^{-3}$ & $3.4 \times 10^{-18}$ & $1.3 \times 10^{-17}$\\ M82$^{g,i}$ & 10.5 & 0.067 & 9.8 & $2 \times 10^8$ & 19 & $1.9 \times 10^{-9}$ & $7.5 \times 10^{-4}$ & $1.5 \times 10^{-18}$ & $5.7 \times 10^{-18}$\\ Arp 220 nuclei$^j$ & 50 & 0.3 & 44 & $10^9$ & 20 & $1.7 \times 10^{-9}$ & $6.7 \times 10^{-4}$ & $1.3 \times 10^{-18}$ & $5.0 \times 10^{-18}$\\ Submillimeter galaxy$^k$ & 1000 & 
6.4 & 930 & $2.5 \times 10^{10}$ & 25 & $1.4 \times 10^{-9}$ & $5.7 \times 10^{-4}$ & $1.1 \times 10^{-18}$ & $4.3 \times 10^{-18}$\\ BzK galaxies$^l$ & 200 & 1 & 200 & $7 \times 10^{10}$ & 400 & $1 \times 10^{-10}$ & $4 \times 10^{-5}$ & $8 \times 10^{-20}$ & $3 \times 10^{-19}$\\ \hline \end{tabular} \\$^a$: Mean abundance of \textrm{$^{26}$Al}, calculated assuming the \textrm{$^{26}$Al} is well-mixed with the gas and resides there for a full decay time (instead of, for example, a wind-crossing time). \\$^b$: Calculated assuming Solar metallicity with $\log_{10} [N(^{27}{\rm Al})/N(H)] = -5.6$. \\$^c$: Ionization rate from \textrm{$^{26}$Al} with the derived abundance, with ionization only from MeV positrons released by the decay, assuming effective stopping. \\$^d$: Ionization rate from \textrm{$^{26}$Al} with the derived abundance, where ionization from the 1.809 MeV decay line and 0.511 MeV positron annihilation gamma rays is included, assuming they are all stopped. \\$^e$: Supernova rate and gas mass from \citet{Diehl06}; SFR calculated from supernova rate using Salpeter IMF for consistency. \\$^f$: Inner 100 pc of the Milky Way. SFR and $\Gamma_{\rm SN}$ from IR luminosity in \citet*{Launhardt02}; gas mass from \citet{Molinari11}. \citet{PiercePrice00} gives a gas mass of $5 \times 10^7\ \textrm{M}_{\sun}$. \\$^g$: SFR and $\Gamma_{\rm SN}$ from IR luminosity in \citet{Sanders03}. \\$^h$: Gas mass from \citet*{Harrison99}. \\$^i$: Gas mass from \citet{Weiss01}. \\$^j$: Assumes IR luminosity of $3 \times 10^{11}\ \textrm{L}_{\sun}$ for SFR and $\Gamma_{\rm SN}$ and gas mass given in \citet{Downes98}. \\$^k$: Typical gas mass and SFR of submillimetre galaxies from \citet{Tacconi06}. \\$^l$: Mean SFR and gas mass of the 6 BzK galaxies in \citet{Daddi10}, which are representative of main sequence galaxies at $z \approx 1.5$.
\end{minipage} \end{table*} The true starbursts, driven by galaxy mergers and galaxies interacting with each other, represent about $\sim 10\%$ of star formation at all redshifts \citep{Rodighiero11,Sargent12}. They have SSFRs that are up to an order of magnitude higher than $z = 2$ normal galaxies. The mean, background abundances of SLRs in starbursts are therefore about 100 times greater than the present day Milky Way. I show the \textrm{$^{26}$Al} abundances in some nearby starburst galaxies in Table~\ref{table:Al26Abundances}. In the Galactic Centre region, the \textrm{$^{26}$Al} abundance is only twice that of the present Milky Way as a whole. However, the \textrm{$^{26}$Al} abundances are extremely high in the other starbursts, $\sim 2 \times 10^{-9}$, about twenty times that of the primordial Solar nebula. The $^{26}{\rm Al}/^{27}{\rm Al}$ ratio in these starbursts is also very high. Assuming Solar metallicity with an $^{27}$Al abundance of $\log_{10} [N(^{27}{\rm Al})/N(H)] = -5.6$ \citep{Diehl06}, this ratio is $\sim (0.6 - 1.8) \times 10^{-3}$. Again, this ratio for Solar metallicity gas is $\sim 10 - 30$ times higher than that of the early Solar Nebula, $\sim 5 \times 10^{-5}$. \section{Systematic Uncertainties} \subsection{Effects of Variable Star-Formation Rates} The steady-state abundance (equation~\ref{eqn:MeqSLR}) is only appropriate when the star-formation rate is slowly varying on time-scales of a few Myr. Since young stellar populations produce SLRs for several Myr, and since \textrm{$^{26}$Al} and \textrm{$^{60}$Fe} themselves survive for $\ga 1\ \textrm{Myr}$, the injection rate of SLRs is smoothed over those time-scales (equation~\ref{eqn:QSLR}). Very high frequency fluctuations in the SFR therefore have little effect on the abundance of SLRs. In the opposite extreme, when the fluctuations in SLRs are slow compared to a few Myr, we can simply take the present SFR and use it in equation~\ref{eqn:MeqSLR} for accurate results. 
However, intermediate frequency variability invalidates the use of equation~\ref{eqn:MeqSLR}, and can result in the SLR abundance being out of phase with the SFR. Normal main sequence galaxies at high redshift built up their stellar populations over Gyr times, evolving secularly \citep[c.f.,][]{Wuyts11}. They are also large enough to contain many stellar clusters, so that stochastic effects average out. It is reasonable to suppose that they have roughly constant SFRs over the relevant time-scales. True starbursts, on the other hand, are violent events that last no more than $\sim 100\ \textrm{Myr}$, as evinced by their short $\tau_{\rm gas}$. They are relatively small, so stochastic fluctuations in their star-formation rates are more likely. \citet{Forster03} studied the nearby, bright starburst M82 and concluded that its star-formation history is in fact bursty. The star-formation histories of other starbursts are poorly known, but \citet{Mayya04} present evidence for large fluctuations on $\sim 4\ \textrm{Myr}$ times. I estimate the magnitude of these fluctuations for the prototypical starburst M82 with the full equation for SLR mass in a one-zone model (equation~\ref{eqn:MeqSLRFull}). The solution to equation~\ref{eqn:MeqSLRFull} for $M_{\rm SLR}$ is \begin{equation} M_{\rm SLR} (t) = \int_{-\infty}^t {\rm SFR}(t^{\prime}) \times m_{\rm SLR}(t^{\prime}) dt^{\prime}, \end{equation} where \begin{equation} m_{\rm SLR}(t^{\prime}) = \int_{-t^{\prime}}^0 Y_{\rm SLR}(t^{\prime\prime}) \exp\left(-\frac{t^{\prime} - t^{\prime\prime}}{\tau_{\rm SLR}}\right) dt^{\prime\prime}. \end{equation} The quantity $m_{\rm SLR}(t^{\prime})$ represents the SLR mass in the ISM from a coeval stellar population of unit mass and age $t^{\prime}$. It is given by \citet{Cervino00} and \citet{Voss09} for \textrm{$^{26}$Al} and $^{60}$Fe. I use the star-formation history derived by \citet{Forster03} for the `3D region' of M82, which consists of two peaks at 4.7 Myr ago and 8.9 Myr ago. 
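The behaviour of this convolution can be illustrated with a toy model. The code below replaces the \citet{Cervino00} and \citet{Voss09} yield tables with a crude boxcar (constant \textrm{$^{26}$Al} injection for population ages 3--10 Myr; an illustrative assumption only, not the published yields) and convolves it with the two Gaussian bursts of the adopted history:

```python
from math import exp, pi, sqrt

# Toy evaluation of M_SLR(t) = int SFR(t') m_SLR(t - t') dt'.  Each coeval
# population injects 26Al at a constant rate between ages 3 and 10 Myr
# (an illustrative stand-in for the published yield tables), and every
# injected parcel then decays with tau = 1.04 Myr.  The SFR is two
# unit-mass Gaussian bursts at 4.7 and 8.9 Myr ago.
TAU = 1.04                                   # 26Al e-folding time, Myr

def sfr(t):                                  # t in Myr; t = 0 is "now"
    def g(mu, sig):
        return exp(-0.5 * ((t - mu) / sig) ** 2) / (sig * sqrt(2.0 * pi))
    return g(-4.7, 0.561) + g(-8.9, 0.867)

def m_slr(age, dt=0.01):
    """26Al mass per unit stellar mass for a population of a given age."""
    total, a = 0.0, 3.0
    while a < min(age, 10.0):                # inject from age 3 to 10 Myr...
        total += dt * exp(-(age - a) / TAU)  # ...each parcel decays until `age`
        a += dt
    return total

DT = 0.05
AGES = [i * DT for i in range(1, int(30 / DT))]
MSLR = [m_slr(a) for a in AGES]              # precomputed single-population mass

def mass_now(t):
    return sum(sfr(t - a) * m for a, m in zip(AGES, MSLR)) * DT

peak = max(mass_now(-8.0 + 0.5 * i) for i in range(33))
assert mass_now(-20.0) < 1e-3 * peak   # no 26Al long before the bursts
assert mass_now(0.0) > 0.2 * peak      # still a sizeable fraction today
```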
The peaks are modelled as Gaussians with the widths given in \citet{Forster03} (standard deviations $\sigma$ of 0.561 Myr for the more recent burst, and 0.867 Myr for the earlier burst). I convert the star-formation rate from a Salpeter IMF from 1 to 100$\ \textrm{M}_{\sun}$ given in \citet{Forster03} to a Salpeter IMF from 0.1 to 100$\ \textrm{M}_{\sun}$ for consistency with the rest of the paper.\footnote{I ignore the relatively small difference between the upper mass limit of 100$\ \textrm{M}_{\sun}$ in \citet{Forster03} and 120$\ \textrm{M}_{\sun}$ in \citet{Cervino00} and \citet{Voss09}. Since stars with masses 100 to 120$\ \textrm{M}_{\sun}$ can affect stellar diagnostics, converting to that IMF may require an adjustment to the star-formation history beyond a simple mass scaling.} This region does not include the entire starburst; it has roughly $1/3$ of the luminosity of the starburst, but the stellar mass formed within the 3D region over the past 10 Myr gives an average SFR of 10 $\textrm{M}_{\sun}\ \textrm{yr}^{-1}$ in the \citet{Forster03} history. Note that \citet*{RodriguezMerino11} derives different age distributions for stellar clusters (compare with \citealt{Satyapal97}). \citet{Strickland09} has also argued that the star-formation history of M82's starburst core is not well constrained before 10 Myr ago (as observed from Earth), and may have extended as far back as 60 Myr ago. Thus, I take the \citet{Forster03} history merely as a representative example of fluctuating SFRs. \begin{figure} \centerline{\includegraphics[width=8cm]{f2.eps}} \caption{History of the SLR masses in M82's `3D region' ISM for the star-formation history given in \citet{Forster03}. The black lines are for \textrm{$^{26}$Al} and grey lines are for $^{60}$Fe; solid lines are using the yields in \citet{Voss09} and dashed lines are using the \citet{Cervino00} yields.
We presently observe M82 at $t = 0$; I assume there are no bursts of star-formation after then, so that the masses inevitably decay away. \label{fig:M82SLRHistory}} \end{figure} The calculated \textrm{$^{26}$Al} (black) and \textrm{$^{60}$Fe} (grey) masses are plotted in Fig.~\ref{fig:M82SLRHistory}. At first, there is no SLR mass in the starburst, because it takes a few Myr for SLR injection to start. With the \citet{Voss09} yields, the SLR masses rise quickly and peak $\sim 5\ \textrm{Myr}$ ago (as observed from Earth). The SLR masses drop afterwards. Yet they are still within a factor of 1.7 of their peak values even now, $\sim 5\ \textrm{Myr}$ after the last star-formation burst. If there is no further star-formation, the SLRs will mostly vanish over the next 10 Myr. The \citet{Cervino00} yields predict a greater role for supernovae from lower mass stars, so the fluctuations are not as great; the \textrm{$^{60}$Fe} mass remains roughly the same even 10 Myr from now. As long as there has been recent star-formation in the past $\sim 5\ \textrm{Myr}$, the SLR abundances are at least half of those predicted by the steady-state assumption. There is a more fundamental reason to expect that the steady-state SLR abundances are roughly correct for starbursts. A common way of estimating star-formation rates in starbursts is to use the total infrared luminosity \citep{Kennicutt98}, which is nearly the bolometric luminosity for these dust-obscured galaxies. Young stellar populations, containing very massive stars, are brighter and contribute disproportionately to the bolometric luminosity. Therefore, both the luminosity and the SLR abundances primarily trace young stars. To compare the bolometric luminosity, I ran a Starburst99 (v6.04) model of a $Z = 0.02$ metallicity coeval stellar population with a Salpeter IMF ($dN/dM \propto M^{-2.35}$) between 0.1 and 120$\ \textrm{M}_{\sun}$ \citep{Leitherer99}.
I then calculate the SFR that would be derived from these luminosities using the \citet{Kennicutt98} conversion, and then from that, the expected steady-state SLR masses from equation~\ref{eqn:MeqSLR}. The `bolometric' \textrm{$^{26}$Al} masses are compared to the actual masses in Fig.~\ref{fig:LBolVsMAl26}. \begin{figure} \centerline{\includegraphics[width=8cm]{f3.eps}} \caption{How the bolometric luminosity traces \textrm{$^{26}$Al} mass for a coeval stellar population with age $t$. The grey line is the steady-state \textrm{$^{26}$Al} mass I would predict from the bolometric luminosity of the population, whereas the black lines are the actual mass of \textrm{$^{26}$Al} (solid for \citealt{Voss09} and dashed for \citealt{Cervino00}).\label{fig:LBolVsMAl26}} \end{figure} Although the very youngest stellar populations are bright but not yet making SLRs, the bolometric luminosity (grey) is a good tracer of \textrm{$^{26}$Al} mass (black) for stellar populations with ages between 3 and 20 Myr. For most of the interval, the bolometric \textrm{$^{26}$Al} masses are within a factor 2 of the actual masses. For populations between 15 Myr and 20 Myr, the \citet{Voss09} and \citet{Cervino00} predictions envelop the bolometric \textrm{$^{26}$Al} masses. At 20 to 25 Myr old, the bolometric \textrm{$^{26}$Al} masses are about twice the true masses. For older populations still, the true \textrm{$^{26}$Al} masses finally die away while the bolometric luminosity only slowly declines. Note that, if stars have been forming continuously for the past 100 Myr, over half of the luminosity comes from stars younger than 20 Myr. Thus, the use of the bolometric luminosities introduces a factor $\la 3$ systematic error. In short, the use of bolometric luminosity as a SFR indicator, and the natural variability in the star-formation rates of starbursts, can lead to overestimations of the SLR abundances by a factor $\sim 3$.
But I estimate the SLR abundances of true starbursts are a hundred times higher than in the present Milky Way (equation~\ref{eqn:XAl26Numer} and Table~\ref{table:Al26Abundances}). The ratio is so great that the systematic effects do not undermine the basic conclusion that SLR abundances are much larger in true starbursts. \subsection{Are SLRs mixed quickly enough into the gas?} \label{sec:Mixing} Although the average levels of SLRs in starbursts and high-$z$ normal galaxies are high, that does not by itself mean the SLRs influence the environments for star-formation. While SLRs can play an important role in star-forming regions, by elevating the ionization rates and by being incorporated into solid bodies, SLRs trapped in ionized gas are irrelevant for these processes. The mixing of metals from young stars into the ISM gas mass is usually thought to be very slow in the present Milky Way, compared to SLR lifetimes. The massive stars responsible for making SLRs often live in star clusters, which blow hot and rarefied bubbles in the ISM. Supernovae also excavate the coronal phase of the ISM \citep{McKee77}. Turbulence within the bubbles mixes the SLRs and homogenizes their abundances \citep[e.g.,][]{Martin10}, over a time scale $t_{\rm mix} \approx L / v_{\rm turb}$, where $L$ is the outer scale of turbulence (typical size of the largest eddies) and $v_{\rm turb}$ is the turbulent speed \citep{Roy95,Pan10}. The large outer scale of turbulence, $\sim$100 -- 1000$\ \textrm{pc}$, and the slow turbulent speeds ($\sim 5$--$10\ \textrm{km}~\textrm{s}^{-1}$) in the Milky Way imply mixing times of $\sim 10$ -- $200\ \textrm{Myr}$. Even if the \textrm{$^{26}$Al} is homogenized within the superbubbles, this low density hot gas requires a long time to mechanically affect cold star-forming clouds, because of the large density contrast \citep{deAvillez02}. 
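As a quick sanity check on these scales, the mixing time $t_{\rm mix} \approx L / v_{\rm turb}$ quoted above can be evaluated directly. This is an illustrative sketch using the Milky Way outer scales and turbulent speeds given in the text; the unit conversion constants are standard values, not from the paper.

```python
# Turbulent mixing time t_mix ~ L / v_turb, for the Milky Way scales in the text.
PC = 3.086e18    # cm per parsec
KMS = 1.0e5      # cm/s per km/s
MYR = 3.156e13   # s per Myr

def t_mix_myr(L_pc, v_kms):
    """Mixing time in Myr for outer scale L (pc) and turbulent speed v (km/s)."""
    return (L_pc * PC) / (v_kms * KMS) / MYR

# Milky Way: outer scales ~100--1000 pc, speeds ~5--10 km/s
t_short = t_mix_myr(100, 10)    # ~10 Myr
t_long = t_mix_myr(1000, 5)     # ~200 Myr
```

These endpoints reproduce the $\sim 10$ -- $200\ \textrm{Myr}$ range quoted for the Milky Way, much longer than the $\sim$Myr lifetimes of SLRs like \textrm{$^{26}$Al}.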
Mixing between the phases, particularly warm and hot gas, is accelerated by Rayleigh-Taylor and Kelvin-Helmholtz instabilities \citep{Roy95}, but overall, mixing takes tens of Myr to operate in the Milky Way (\citealt{deAvillez02}; see also \citealt{Clayton83}, which considers mixing times between the warm ISM from evaporated H I clouds, cool large H I clouds, and molecular clouds). Thus, SLRs like \textrm{$^{26}$Al} are thought to decay long before they are mixed thoroughly with the star-forming gas. Indeed, studies of the abundances of longer lived isotopes in the primordial Solar system support longer mixing times of $\sim 50$ -- 100$\ \textrm{Myr}$ \citep[e.g.,][]{Meyer00,Huss09}. It is thought that these obstacles existed, at least qualitatively, in the $z \approx 0.44$ Milky Way, when the Solar system formed. These problems are part of the motivation for invoking a local source of SLRs, including energetic particles from the Sun itself \citep{Lee98}, injection from a nearby AGB star (\citealt*{Busso03}), or injection from an anomalously nearby supernova \citep{Cameron77} or Wolf-Rayet star (\citealt*{Arnould97}). Recently, though, several authors proposed models that might overcome the mixing obstacle, in which young stars are able to inject SLRs into star-forming clouds. A motivation behind these models is the idea that molecular clouds are actually intermittent high density turbulent fluctuations in the ISM \citep[e.g.,][]{MacLow04}, and the supernovae that partly drive the turbulence -- indirectly forming the molecular clouds -- are also the sources of SLRs \citep{Gounelle09}. In the model of \citet{Gounelle09}, old superbubbles surrounding stellar clusters form into molecular clouds after ploughing through the ISM for $\sim$10 Myr. Supernovae continue going off in the star clusters, adding their SLRs into these newborn molecular clouds \citep[see also][]{Gounelle12}.
Simulations by \citet*{Vasileiadis13} also demonstrate that SLRs from supernovae going off very near giant molecular clouds are mixed thoroughly with the star-forming gas. On a different note, \citet{Pan12} argued that supernova remnants are clumpy, and that clumps could penetrate into molecular clouds surrounding star clusters and inject SLRs. If these scenarios are common, then a large fraction of the produced SLRs reaches the star-forming gas before decaying. In fact, these mechanisms may be so efficient that SLRs are concentrated only into star-forming molecular gas, a minority of the Milky Way gas mass. If so, then the abundance of SLRs within Galactic molecular gas ($M_{\rm SLR} / M_{\rm H2}$) is greater than the mean background level ($M_{\rm SLR} / M_{\rm gas}$); in this way, SLR abundances could reach the elevated levels that existed in the early Solar system \citep{Gounelle09,Vasileiadis13}. { On the other hand, young stars can trigger star-formation in nearby gas without polluting it. This can occur when a shock from a supernova or from an overpressured H II region propagates into a molecular cloud, causing the cores within it to collapse (\citealt{Bertoldi89}; \citealt*{Elmegreen95}). This process has been inferred to happen in the Upper Scorpius OB association \citep{Preibisch99}. Since the cores are pre-existing, they may not be enriched with SLRs (although supernova shocks can also inject SLRs into a molecular cloud; see \citealt{Boss10} and \citealt{Boss12}). The triggering can also occur before any supernovae enrich the material with SLRs.
Star formation can also be triggered when a shock from an H II region sweeps up a shell of material, which eventually becomes gravitationally unstable and collapses (\citealt{Elmegreen77}; see also \citealt{Gritschneder09}).} I note, however, that the homogeneity of the \textrm{$^{26}$Al} abundance in the early Solar system is controversial; if the abundance was inhomogeneous, that is inconsistent with efficient SLR mixing within the Solar system's progenitor molecular cloud. Although \citet*{Villeneuve09} conclude that \textrm{$^{26}$Al} had a constant abundance, \citet{Makide11} find that the \textrm{$^{26}$Al} abundance varied during the earliest stages of Solar system formation, when aluminium first condensed from the Solar nebula. What is even less clear, though, is how similar the Milky Way is to starbursts and the massive high-$z$ normal galaxies that host the majority of the cosmic star formation. As in the Milky Way, supernovae in starbursts like M82 probably blast out a hot phase. But the hot phase escapes in a rapid hot wind in starbursts with high star-formation densities ($\ga 0.1\ \textrm{M}_{\sun}\ \textrm{yr}^{-1}\ \textrm{kpc}^{-2}$; \citealt{Chevalier85,Heckman90}). There is direct evidence for this hot wind from X-ray observations \citep{Strickland07,Strickland09}. Furthermore, supernova remnants are observed to expand quickly in M82, implying that they are going off in a very low density environment (e.g., \citealt{Beswick06}; compare with \citealt{Chevalier01}). Cool and warm gas is observed outflowing alongside the hotter wind, possibly entrained by the hot wind \citep{Strickland09}. Whereas the edges of superbubbles eventually cool and fade back into the ISM in the Milky Way after 10 -- 100 Myr, any superbubble gas that does cool in these starbursts could still be pushed out by the wind. If the SLRs are trapped in the wind, they reside in these starbursts for only a few hundred kyr.
But the ISM conditions in the starbursts are very different, with higher densities in all phases, higher pressures, and higher turbulent speeds \citep[e.g.,][]{Smith06}. Starbursts are physically small, with radii of $\sim 200\ \textrm{pc}$ at $z = 0$ -- comparable to the size of some individual superbubbles in the Milky Way. The eddy sizes therefore must be smaller and mixing processes could be faster. To demonstrate just how small superbubbles are in starbursts, \citet*{Silich07} modelled the superbubble surrounding the star cluster M82 A-1 in M82, which has a mass of $2 \times 10^6\ \textrm{M}_{\sun}$. They find that the wind propagates only for a few parsecs before being shocked and cooled. Stellar ejecta in the core of the cluster also cool rapidly in their model. The turbulent mixing time is therefore much smaller, $\sim 200\ \textrm{kyr}$ for an eddy length of 10 pc and a turbulent speed of $50\ \textrm{km}~\textrm{s}^{-1}$. Conditions are even more extreme in present day compact Ultraluminous Infrared Galaxy starbursts, where the ISM is so dense ($\sim 10^4\ \textrm{cm}^{-3}$; \citealt{Downes98}) that a hot phase may be unable to form (\citealt{Thornton98}; \citealt*{Thompson05}). Instead, the ISM is almost entirely molecular \citep{Downes98}. Indeed, observations of supernovae in Arp 220 indicate they are going off in average density molecular gas \citep{Batejat11}. Supernova remnants fade into the ISM within a few tens of kyr, due to powerful radiative losses \citep{McKee77}. The SLRs then are incorporated into the molecular ISM in a turbulent mixing time, the whole process taking just a few hundred kyr. The main uncertainty is then not whether the SLRs are injected into the molecular gas, but whether these SLR-polluted regions of the molecular gas fill the entire starburst.
Turbulent mixing smooths abundances over regions the size of the largest eddies \citep{Pan10}, but if the distribution of SLR injection sites varies over larger scales, the final SLR abundance may also vary on these large scales. We know very little about the conditions in high redshift galaxies. At high redshift, star formation in main sequence galaxies is dominated by massive galaxies with large star-formation rates. These massive galaxies are several kpc in radius, but contain large amounts of molecular gas \citep{Daddi10}. They also have large star-formation densities and host winds. In the more extreme galaxies, radiative bremsstrahlung losses stall any hot wind \citep{Silich10}. Turbulent speeds in these galaxies are high \citep{Green10}, implying faster turbulent mixing than in the Milky Way. But it is not clear which phase the SLRs are injected into or how long it takes for them to mix throughout star formation regions. The effects of clustering in the ISM are also uncertain, but clustering is probably important in these galaxies, where huge clumps ($\ga 10^8\ \textrm{M}_{\sun}$ and a kpc wide) are observed \citep[e.g.,][]{Genzel11}. To summarize, while there are reasons to expect that most of the SLRs synthesized by young stars in the Milky Way decay before reaching star-forming gas, this is not necessarily true in starbursts or high-$z$ normal galaxies. Turbulent mixing is probably fast, at least in compact starbursts, which are physically small. On the other hand, winds might blow out SLRs before they reach the star-forming gas, at least in the weaker starbursts. Clearly, this issue deserves further study. \section{Implications} \subsection{Implications for the early Solar system and Galactic stars of similar age} The rapid evolution in SSFRs implies that Galactic background SLR abundances were up to twice as high during the epoch of Solar system formation (4.56 Gyr ago; $z \approx 0.44$).
If the evolution of the Milky Way's gas fraction was comparable to that in observed massive main sequence galaxies, the enhancement may have been only $\sim 50\%$ above present values (see the discussion in section~\ref{sec:MSGalaxies}; \citealt{Genzel10}). The inferred primordial abundances of $^{60}$Fe in the Solar system are in fact up to twice as high as in the contemporary Milky Way, as determined with gamma-ray lines \citep{Williams08,Gounelle08}. This overabundance is cited as evidence for an individual, rare event enriching the early Solar system, or the gas that formed into the Solar system, with SLRs. However, my calculations show this is not necessarily the case: the twice-as-high abundances of $^{60}$Fe in the early Solar system can arise \emph{simply because the Galaxy was more efficient at converting gas into stars 4.5 Gyr ago.} The primordial abundance of \textrm{$^{26}$Al} in the Solar system was about six times higher than the mean Galactic value at present \citep{Diehl06}, or three times higher than the mean Galactic abundance at $z = 0.44$ assuming that equation~\ref{eqn:fGas} holds. Even so, the normal effects of galaxy evolution are a potential contributor to the greater \textrm{$^{26}$Al} abundances, assuming efficient mixing of \textrm{$^{26}$Al} with the molecular gas of the Milky Way. Furthermore, the high abundances of \textrm{$^{26}$Al} in the early Solar system are actually typical of star-formation at $z \approx 1 - 2$ -- when most cosmic star-formation occurred. In this sense, the early Solar system's \textrm{$^{26}$Al} abundance may be normal for most planetary systems in the Universe. As I have discussed in Section~\ref{sec:Mixing}, it is not clear whether the background abundances of \textrm{$^{26}$Al} and other SLRs actually represent those of typical star-forming gas; if mixing takes more than a few Myr, these SLRs could not have affected star and planet formation \citep{Meyer00,deAvillez02,Huss09}.
But although there may have been a wide distribution of abundances if mixing is inefficient, the mean of the distribution is still higher simply because there were more supernovae and young stars per unit gas mass. Thus, a greater fraction of star formation occurred above any given threshold in the past. In addition, the Galactic background level can be meaningful if a large fraction of the SLRs from young stars make it into the cold gas, and \citet{Gounelle09}, \citet{Gounelle12}, \citet{Pan12}, and \citet{Vasileiadis13} have presented mechanisms where this can happen. Although there is suggestive evidence that these mechanisms did not operate for the Solar system \citep{Makide11}, there is no reason they could not have worked for other star systems of similar ages. Then the Solar system's relatively high abundance of SLRs, and \textrm{$^{60}$Fe} in particular, may be common for Galactic stars of similar ages, even if through a different process. Finally, as I noted, these conclusions depend on how the yields of SLRs from massive stars change with metallicity, and what the mean Galactic metallicity was at the epoch of Solar system formation. \subsection{The Ionization Rate and Physical Conditions in Starbursts' Dense Clouds} Radioactive decay from SLRs injects energy in the form of daughter particles into the ISM. The decay particles, with typical energies of order an MeV, ionize gas and ultimately heat the ISM, if they do not escape entirely. The high abundances of SLRs, including \textrm{$^{26}$Al}, can alter the ionization state of molecular gas in these galaxies. The ionization rate, in particular, is important in determining whether magnetic fields can thread through the densest gas. I focus here on the contribution from \textrm{$^{26}$Al}, which dominated the ionization rate from SLRs in the primordial Solar system \citep{Umebayashi09}.
For the sake of discussion, I assume that the SLRs are well-mixed into the gas, despite the uncertainties (section~\ref{sec:Mixing}). Each \textrm{$^{26}$Al} decay releases an energy $E_{\rm decay}$ into the ISM in the form of MeV positrons and gamma rays. If each atom in the ISM takes an energy $E_{\rm ion}$ to be ionized, each \textrm{$^{26}$Al} decay can therefore ionize $E_{\rm decay} / E_{\rm ion}$ atoms. In 82\% of the decays, a positron with kinetic energy of 1.16 MeV is released, and the positron is slowed by ionization losses \citep{Finocchi97}. After accounting for this branching ratio, the minimum energy per decay that goes into ionization is $E_e^{\rm min} = 0.95\ \textrm{MeV}$, when the medium stops the positron (inflight annihilation losses are negligible at these energies; \citealt{Beacom06}). The annihilation of the positron with an electron in the ISM produces gamma rays of total energy $2 \times 0.511 = 1.022$ MeV. In addition, in very nearly all \textrm{$^{26}$Al} decays, a 1.809 MeV gamma ray is produced. These gamma rays only interact with the ISM over columns of several $\textrm{g}~\textrm{cm}^{-2}$, so only in particularly dense regions will they contribute to the ionization \citep{Finocchi97}. When they do, $E_{\rm decay}^{e \gamma} = 3.60\ \textrm{MeV}$. \citet{Stepinski92} gives $E_{\rm ion}$ as 36.3 eV, so that the ionization rate is \begin{equation} \zeta_{\rm Al-26} = \frac{X_{\rm Al-26} E_{\rm decay}}{(36.3\ \textrm{eV}) \tau_{\rm decay}}. \end{equation} In terms of the gas consumption time, the ionization rate from \textrm{$^{26}$Al} is \begin{equation} \zeta = (1.4 - 5.1) \times 10^{-18}\ \sec^{-1}\ \left(\frac{\tau_{\rm gas}}{20\ \textrm{Myr}}\right)^{-1}. \end{equation} My results for the mean ionization rate from \textrm{$^{26}$Al} of some characteristic starbursts are shown in Table~\ref{table:Al26Abundances}; they are in the range $10^{-18} - 10^{-17}\ \sec^{-1}$.
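An order-of-magnitude check of the first expression can be done numerically. This sketch assumes a starburst abundance $X(\textrm{$^{26}$Al}) \approx 10^{-9}$ (the value quoted in the conclusions) and a \textrm{$^{26}$Al} mean lifetime of $\sim 1$ Myr, which is not quoted in this section and is an assumption here.

```python
# Order-of-magnitude evaluation of zeta = X * (E_decay / E_ion) / tau_decay.
# Assumed inputs: X ~ 1e-9 (starburst value from the conclusions) and a
# 26Al mean lifetime of ~1.0 Myr (an assumption, not stated in this section).
MYR = 3.156e13      # s per Myr
E_ION = 36.3        # eV per ionization (Stepinski 1992 value from the text)

def zeta(X, E_decay_MeV, tau_Myr=1.0):
    """Ionization rate in s^-1 per H atom from SLR decays."""
    return X * (E_decay_MeV * 1e6 / E_ION) / (tau_Myr * MYR)

z_min = zeta(1e-9, 0.95)   # positron kinetic energy only
z_max = zeta(1e-9, 3.60)   # positrons plus absorbed gamma rays
```

Both endpoints come out near $10^{-18}\ \sec^{-1}$, consistent with the quoted $10^{-18} - 10^{-17}\ \sec^{-1}$ range given that the abundance itself scales with the gas consumption time.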
The maximal ionization rates are roughly an order of magnitude higher than that found in the early Solar system, a dense environment with $\zeta \approx (0.6 - 1) \times 10^{-18}\ \sec^{-1}$ \citep{Finocchi97,Umebayashi09}. Even if the \textrm{$^{26}$Al} is in the cold star-forming gas, it could actually be condensed into dust grains instead of existing in the gas phase. Yet the decay products still escape into the ISM from within the grain. The attenuation of gamma rays at $\sim 1$ MeV is dominated by Compton scattering, requiring columns of a few $\textrm{g}~\textrm{cm}^{-2}$ to be important. Thus, gamma rays pass freely through dust grains that are smaller than a centimetre. The energy loss rate of relativistic electrons or positrons in neutral matter is approximately \begin{equation} \frac{dK}{ds} = \frac{9}{4} m_e c^2 \sigma_T \sum_j \frac{\rho_j Z_j}{A_j m_H} \left[\ln \frac{K + m_e c^2}{m_e c^2} + \frac{2}{3} \ln \frac{m_e c^2}{\mean{E_j}}\right] \end{equation} from \citet{Schlickeiser02}, where $K$ is the particle kinetic energy, $s$ is the physical length, and $\sigma_T$ is the Thomson cross section. The sum is over elements $j$; for heavy elements $Z_j \approx A_j / 2$, $\rho_j$ is the partial density of each element within the grain, and $\mean{E_j}$ is related to the atomic properties of the element. I take the bracketed term to have a value $\sim 5$ and $\rho \approx 3\ \textrm{g}~\textrm{cm}^{-3}$. Then the stopping length is $K / (dK / ds) \approx 0.3\ \textrm{cm}\ (K / \textrm{MeV})$, much bigger than the typical grain radius. Thus, \textrm{$^{26}$Al} (and other SLRs) in dust grains still contribute to the ionization of the ISM. On the other hand, are the positrons actually stopped in starburst molecular clouds, or do they escape? For neutral interstellar gas, the stopping column of MeV positrons is $\sim K \rho / (dK / ds) \approx 0.2\ \textrm{g}~\textrm{cm}^{-2}$ through ionization and excitation of atoms \citep{Schlickeiser02}.
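The $\approx 0.3\ \textrm{cm}$ stopping length can be reproduced directly from the loss formula. This sketch takes the bracketed term as 5, $\rho = 3\ \textrm{g}~\textrm{cm}^{-3}$, and $Z/A = 1/2$, as in the text; the physical constants are standard values.

```python
# Positron stopping length in grain material from the quoted loss rate,
# dK/ds = (9/4) m_e c^2 sigma_T * (rho Z / (A m_H)) * [bracket],
# with [bracket] ~ 5, rho ~ 3 g/cm^3, Z/A ~ 1/2 (values from the text).
ME_C2 = 0.511           # MeV, electron rest energy
SIGMA_T = 6.652e-25     # cm^2, Thomson cross section
M_H = 1.673e-24         # g, hydrogen mass

rho, bracket = 3.0, 5.0
n_e = rho * 0.5 / M_H                             # electrons per cm^3 in the grain
dK_ds = 2.25 * ME_C2 * SIGMA_T * n_e * bracket    # MeV per cm
stop_len = 1.0 / dK_ds                            # cm, for K = 1 MeV
```

The result is $\approx 0.3$ cm per MeV of kinetic energy, confirming that decay positrons escape even centimetre-sized grains.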
{ In cold molecular gas, ionization and excitation continue to cool the positrons until $K \approx 10\ \textrm{eV}$, at which point they start annihilating by charge exchange reactions or they thermalize \citep*{Guessoum05}.} The column densities of starbursts range from $\sim 0.1 - 10\ \textrm{g}~\textrm{cm}^{-2}$ \citep[e.g.,][]{Kennicutt98}, and the columns through individual molecular clouds are expected to be similar to those of the galaxies (\citealt*{Hopkins12}). In the denser molecular clouds, positrons are stopped even if they are not confined at all. In massive main sequence galaxies, the columns are $\sim 0.1\ \textrm{g}~\textrm{cm}^{-2}$ \citep{Daddi10}, insufficient to stop positrons moving in straight lines. If magnetic waves with a wavelength near the positron gyroradius scale exist in these clouds, they efficiently scatter positrons and confine them. { As a result, the propagation of the positrons can be described with a diffusion constant, as widely used when interpreting the Galactic GeV positron spectrum (e.g., \citealt{Moskalenko98}; \citealt*{Hooper09}), although it is unclear how relevant these studies are for MeV positrons \citep{Jean09,Prantzos11,Martin12}.} However, these waves are probably damped quickly in dense neutral gas (see { \citealt{Jean09}; \citealt*{Higdon09};} \citealt{Prantzos11} and references therein). On the other hand, positrons move along magnetic field lines, and if the lines themselves are twisted on a scale $\lambda_B$, the positrons are forced to random walk with a similar mean free path { \citep{Martin12}}. As long as $\lambda_B$ is less than about a third of the molecular cloud size, then positrons are stopped in these galaxies. If it is well mixed with the molecular gas, does \textrm{$^{26}$Al} dominate the ionization rate in molecular gas in starbursts, and if so, what physical conditions does it induce? 
The starburst \textrm{$^{26}$Al} ionization rates are about an order of magnitude lower than the canonical cosmic ray-sustained ionization rates in most Milky Way molecular clouds, but in some of the densest Galactic starless cores, the ionization rate drops to $\sim 10^{-18}\ \sec^{-1}$ \citep{Caselli02,Bergin07}. Cosmic rays in starbursts can sustain much higher ionization rates (up to $\sim 10^{-14}\ \sec^{-1}$; cf. \citealt{Papadopoulos10-CRDRs}), but cosmic rays can be deflected by magnetic fields, possibly preventing them from penetrating high columns. { Aside from cosmic rays themselves, starbursts are also bright sources of GeV gamma rays, which are generated when cosmic rays interact with the prevalent gas \citep[e.g.,][]{Ackermann12}. These gamma rays can penetrate molecular clouds and induce low levels of ionization \citep{Lacki12-GRDRs}.} In \citet{Lacki12-GRDRs}, I found that the gamma-ray ionization rate in starbursts can be anywhere from $10^{-22} - 10^{-16}\ \sec^{-1}$, with values of $\sim (1 - 3) \times 10^{-19}\ \sec^{-1}$ in M82 and $\sim (5 - 8) \times 10^{-17}\ \sec^{-1}$ in Arp 220's radio nuclei. In the dense clouds of most starbursts, \textrm{$^{26}$Al} radioactivity could exceed the ionization rate from gamma rays, setting a floor to the ionization rate. In the most extreme starbursts, with mean gas surface densities of $\ga 3\ \textrm{g}~\textrm{cm}^{-2}$ (cf. equation 10 of \citealt{Lacki12-GRDRs}), however, gamma-ray ionization is more important, since the gamma-ray ionization rate depends strongly on the density of gas and compactness of the starbursts. Unlike the uncertainty of how SLRs and their positron decay products are transported and mixed with the gas of starbursts, gamma rays propagate relatively simply, so the gamma-ray ionization rates are more secure. An \textrm{$^{26}$Al}-dominated ionization rate has implications for the physical conditions of star-forming clouds.
According to \citet{McKee89}, the ionization fraction of a cloud with hydrogen number density $n_H$ is \begin{equation} \label{eqn:xELow} x_e \approx 1.4 \times 10^{-8} \left(\frac{\zeta}{10^{-18}\ \sec^{-1}}\right)^{1/2} \left(\frac{n_H}{10^4\ \textrm{cm}^{-3}}\right)^{-1/2} \end{equation} when the ionization rate is low. We see that the ionization fraction of a cloud with density $n_H = 10^4\ \textrm{cm}^{-3}$ is $(1 - 4) \times 10^{-8}$, if the ionization is powered solely by \textrm{$^{26}$Al} decay. For these ionization fractions, the ambipolar diffusion time of a molecular core, the time for magnetic fields to slip from the gas, is a few times its free-fall time. Since clouds with strong magnetic fields do not collapse until the field slips away by ambipolar diffusion \citep{Mestel56,Mouschovias76}, this means that \textrm{$^{26}$Al}-ionized clouds in starbursts collapse quasi-statically, as in the Milky Way. On the other hand, the energy injection from \textrm{$^{26}$Al} has essentially no effect on the gas temperature. \citet{Papadopoulos10-CRDRs} gives the minimum temperature of gas as: \begin{equation} T_k^{\rm min} = 6.3\ \textrm{K}\ [(0.0707 n_4^{1/2} \zeta_{-18} + 0.186^2 n_4^3)^{1/2} - 0.186 n_4^{3/2}]^{2/3}, \end{equation} which was derived under the assumption that there is no heating from interactions with dust grains or the dissipation of turbulence in the gas, for gas with density $n_4 = (n_H / 10^4\ \textrm{cm}^{-3})$ and ionization rate $\zeta_{-18} = (\zeta / 10^{-18}\ \sec^{-1})$. In typical starbursts, I find that \textrm{$^{26}$Al} decay alone heats gas of density $n_H = 10^{4}\ \textrm{cm}^{-3}$ to $\sim 2 - 5\ \textrm{K}$ ($0.1 - 0.5\ \textrm{K}$ for $n_H = 10^6\ \textrm{cm}^{-3}$). As I note in \citet{Lacki12-GRDRs}, under such conditions, dust heating is more likely to set the temperature of the gas than ionization, raising the temperature to the dust temperature for densities $\ga 40000\ \textrm{cm}^{-3}$.
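Plugging the starburst \textrm{$^{26}$Al} ionization rates into these two expressions reproduces the numbers quoted above; a direct evaluation, using $\zeta_{-18} = 1.4$ -- $5.1$ from the earlier scaling relation:

```python
import math

def x_e(zeta, n_H):
    """Low-rate ionization fraction (McKee 1989 scaling quoted in the text)."""
    return 1.4e-8 * math.sqrt((zeta / 1e-18) / (n_H / 1e4))

def T_k_min(n4, zeta18):
    """Minimum gas temperature (Papadopoulos 2010 expression in the text)."""
    inner = math.sqrt(0.0707 * math.sqrt(n4) * zeta18 + 0.186**2 * n4**3)
    return 6.3 * (inner - 0.186 * n4**1.5) ** (2.0 / 3.0)

# For n_H = 1e4 cm^-3 and zeta = 1e-18 -- 1e-17 s^-1:
xe_range = (x_e(1e-18, 1e4), x_e(1e-17, 1e4))      # ~(1.4e-8, 4.4e-8)
T_range = (T_k_min(1.0, 1.4), T_k_min(1.0, 5.1))   # a few K
```

The ionization fraction lands in the quoted $(1 - 4) \times 10^{-8}$ range, and the minimum temperatures are a few K, within the $\sim 2 - 5\ \textrm{K}$ interval given in the text.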
\section{Conclusions} The high SSFRs of starbursts and high-$z$ normal galaxies imply high abundances of \textrm{$^{26}$Al} and other SLRs in their ISMs. In true starbursts, these abundances are enormous, with $X (\textrm{$^{26}$Al}) \approx 10^{-9}$ and $^{26}$Al/$^{27}$Al $\approx 10^{-3}$. The SSFRs of normal galaxies evolve rapidly with $z$; even taking into account higher gas fractions, the SLR abundances were about 3 -- 10 times higher at $z \approx 2$ than in the present Milky Way. Even at the epoch of Solar system formation, the mean SLR abundance of the Milky Way was 1.5 to 2 times as high as at present (Fig.~\ref{fig:NormGalaxy}). This alone could explain the high abundances of $^{60}$Fe in the early Solar system, and reduce the discrepancy in the \textrm{$^{26}$Al} abundances from a factor of $\sim 6$ to $\sim 3$. In this way, the cosmic evolution of star-forming galaxies may have direct implications for the early geological history of the Solar system. The first main uncertainty is whether the SLRs produced by massive stars are well-mixed with the molecular gas: they may instead be ejected by the galactic winds known to be present in starbursts and high-$z$ galaxies, or decay before they can propagate far from their injection sites or penetrate cold gas. I discussed these uncertainties in section~\ref{sec:Mixing}. The other uncertainty is how SLR yields depend on metallicity. The most direct way to test the high \textrm{$^{26}$Al} abundances of starburst galaxies is to detect the 1.809 MeV gamma-ray line produced by the decay of \textrm{$^{26}$Al}, directly informing us of the equilibrium mass of \textrm{$^{26}$Al}. Whether most of the \textrm{$^{26}$Al} is ejected by the superwind can be resolved with spectral information.
The turbulent velocities of molecular gas in starbursts, $\sim 100\ \textrm{km}~\textrm{s}^{-1}$ \citep{Downes98}, are much smaller than the bulk speed of the superwind, which is hundreds or even thousands of $\textrm{km}~\textrm{s}^{-1}$ \citep{Chevalier85}. Unfortunately, the \textrm{$^{26}$Al} line fluxes predicted for even the nearest external starbursts ($\sim 10^{-8}\ \textrm{cm}^{-2}\ \sec^{-1}$) are too low to detect with any planned instrument (\citealt*{Lacki12-MeV}). However, \citet{Crocker11-Wild} have argued that the inner 100 pc of the Galactic Centre is an analogue of starburst galaxies, launching a strong superwind. The \textrm{$^{26}$Al} line from this region should have a flux of $\sim 2 \times 10^{-5}\ \textrm{cm}^{-2}\ \sec^{-1}$, easily achievable with possible future MeV telescopes like the Advanced Compton Telescope (ACT; \citealt{Boggs06}) or the Gamma-Ray Burst Investigation via Polarimetry and Spectroscopy (GRIPS; \citealt{Greiner11}). A search for the \textrm{$^{26}$Al} signal from this region may inform us about its propagation, since it is nearby and resolved spatially.\footnote{\citet{Naya96} reported that the \textrm{$^{26}$Al} signal from the inner Galaxy had line widths corresponding to speeds of several hundred $\textrm{km}~\textrm{s}^{-1}$, but this was not verified by later observations by RHESSI \citep{Smith03} and INTEGRAL \citep{Diehl06-Linewidth}. Instead, recent observations indicate that Galactic \textrm{$^{26}$Al} is swept up in superbubbles expanding at 200 $\textrm{km}~\textrm{s}^{-1}$ into the interarm regions \citep{Kretschmer13}. However, this signal is from the entire inner Galactic disc on kiloparsec scales; the inner 100 pc of the Galactic Centre is a much smaller region and just a small part of this signal, so the kinematics of its \textrm{$^{26}$Al} are currently unconstrained.
Current observations of the \textrm{$^{26}$Al} decay signal from the Galactic Centre region are summarized in \citet{Wang09}.} If the \textrm{$^{26}$Al} generated in the Centre region is actually advected away by the wind, the `missing' \textrm{$^{26}$Al} should be visible several hundred pc above and below the Galactic Plane near the Galactic Centre.\footnote{This will also be true in other starbursts, but these starbursts would not be resolved spatially by proposed MeV telescopes. The total amount of \textrm{$^{26}$Al} line emission from other starbursts would therefore not inform us of whether it is in the starburst proper or in the superwind; other information, such as the Doppler width of the line, is necessary to determine that.} Resolved measurements of the Galactic Centre can also tell us whether the \textrm{$^{26}$Al} is present in all molecular clouds in the region (and therefore is well-mixed), or is just trapped near a few injection sites. If the SLRs do mix with the star-forming molecular gas of these galaxies, there are implications for both their star formation and planet formation. Ionization rates of $10^{-18} - 10^{-17}\ \sec^{-1}$, like those in some Milky Way starless cores, result from the energy injection of \textrm{$^{26}$Al} decay in starbursts. While cosmic ray ionization rates can easily exceed those ionization rates by orders of magnitude in gas they penetrate into, and while gamma rays produce higher ionization rates in the most extreme starbursts like Arp 220, \textrm{$^{26}$Al} might dominate the ionization rate in the dense clouds shielded from cosmic rays in typical starbursts. In starbursts' protoplanetary discs, \textrm{$^{26}$Al} can provide moderate ionization through all columns, possibly eliminating the `dead zones' where there is little accretion (e.g., \citealt{Gammie96}; \citealt*{Fatuzzo06}). Any planetesimals that do form in starbursts may have much higher radiogenic heating from \textrm{$^{26}$Al}.
Admittedly, studying the geological history of planets, much less planetesimals, in other galaxies is very difficult for the foreseeable future. However, at $z \approx 2$ the Milky Way likely had background SLR abundances $\sim 10$ times higher than at present, so the effects of elevated SLR abundances may be studied in planetary systems around old Galactic stars. On a final point, \citet{Gilmour09} propose that the elevated abundances of \textrm{$^{26}$Al} in the early Solar system are mandated by anthropic selection, since high SLR abundances are necessary for planetesimal differentiation and the loss of volatiles, but that explanation may be difficult to maintain. If high \textrm{$^{26}$Al} abundances, far from being very rare, are actually typical of high-$z$ and starburst solar systems (and indeed, much of the star-formation in the Universe's history), the anthropic principle would imply that most technological civilizations would develop in solar systems formed in these environments. Instead of asking why we find ourselves in a system with an \textrm{$^{26}$Al} abundance just right to power differentiation and evaporate volatiles, we must ask why we find ourselves in one of the rare solar systems with sufficient \textrm{$^{26}$Al} that formed in a normal spiral galaxy at $z \approx 0.4$, instead of the common \textrm{$^{26}$Al}-enriched solar systems formed at $z \approx 2$ or in starbursts. \section*{Acknowledgements} During this work, I was supported by a Jansky Fellowship from the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation.
ArXiv
\section{Introduction} Restoring forces play a fundamental role in the study of vibrations of mechanical systems. If a system is moved from its equilibrium position, a restoring force will tend to bring the system back toward equilibrium. For decades, if not centuries, springs have been the most common example of this type of mechanical system, and have been used extensively to study the nature of restoring forces. In fact, the use of springs to demonstrate Hooke's law is an integral part of every elementary physics lab. However, despite the fact that many papers have been written on this topic, and several experiments designed to verify that the extension of a spring is, in most cases, directly proportional to the force exerted on it~\cite{Mills:404,Cushing:925,Easton:494,Hmurcik:135, Sherfinski:552,Glaser:164,Menz:483,Wagner:566,Souza:35,Struganova:516, Freeman:224,Euler:57}, not much has been written about experiments concerning springs connected in series. Perhaps the most common reason why little attention has been paid to this topic is that a mathematical description of the physical behaviour of springs in series can be derived easily~\cite{Gilbert:430}. Most textbooks in fundamental physics rarely discuss the topic of springs in series, leaving it as an end-of-chapter problem for the student~\cite{Giancoli,Serway}. One question that often arises from spring experiments is, ``If a uniform spring is cut into two or three segments, what is the spring constant of each segment?'' This paper describes a simple experiment to study the combination of springs in series using only \textit{one} single spring. The goal is to prove experimentally that Hooke's law is satisfied not only by each individual spring of the series, but also by the \textit{combination} of springs as a whole. 
To make the experiment effective and easy to perform, first we avoid cutting a brand new spring into pieces, which is a waste of resources and equipment; second, we avoid combining in series several springs with dissimilar characteristics. That would not only introduce additional difficulties in the physical analysis of the problem (different mass densities of the springs), but it would also be a source of random error, since the points at which the springs join do not form coils and the segment elongations might not be recorded with accuracy. Moreover, contact forces (friction) at these points might affect the position readings as well. Instead, we use one single spring with paint marks placed on the coils that allow us to divide it into different segments, and consider it as a collection of springs connected in series. The static Hooke's law experiment is then carried out on the spring to observe how each segment elongates under a suspended mass. In the experiment, two different scenarios are examined: the mass-spring system with an ideal massless spring, and the realistic case of a spring whose mass is comparable to the hanging mass. The graphical representation of force against elongation, used to obtain the spring constant of each individual segment, shows, in excellent agreement with the theoretical predictions, that the inverse of the spring constant of the entire spring equals the sum of the reciprocals of the spring constants of the individual segments. Furthermore, the experimental results allow us to verify that the ratio of the spring constant of a segment to the spring constant of the entire spring equals the ratio of the total number of coils of the spring to the number of coils of the segment. 
The experiment discussed in this article has some educational benefits that may make it attractive for a high school or a first-year college laboratory: It is easy for students to perform, makes use of only one spring for the investigation, helps students to develop measuring skills, encourages students to use computational tools to do linear regression and propagation of error analysis, helps to understand how springs work using the relationship between the spring constant and the number of coils, complements the traditional static Hooke's law experiment with the study of combinations of springs in series, and explores the contribution of the spring mass to the total elongation of the spring. \section{The model} When a spring is stretched, it resists deformation with a force proportional to the amount of elongation. If the elongation is not too large, this can be expressed by the approximate relation $F = -k\,x$, where $F$ is the restoring force, $k$ is the spring constant, and $x$ is the elongation (displacement of the end of the spring from its equilibrium position)~\cite{Symon}. Because most of the springs available today are \textit{preloaded}, that is, when in the relaxed position, almost all of the adjacent coils of the helix are in contact, application of only a minimum amount of force (weight) is necessary to stretch the spring to a position where all of the coils are separated from each other~\cite{Glanz:1091,Prior:601,Froehle:368}. At this new position, the spring response is linear, and Hooke's law is satisfied. It is not difficult to show that, when two or more springs are combined in series (one after another), the resulting combination has a spring constant less than any of the component springs. 
In fact, if $p$ ideal springs are connected in sequence, the expression \begin{equation} \frac{1}{k} = \sum_{i=1}^p \frac{1}{k_i} \label{Eq:1/k} \end{equation} relates the spring constant $k$ of the combination with the spring constant $k_i$ of each individual segment. In general, for a cylindrical spring of spring constant $k$ having $N$ coils, which is divided into smaller segments, having $n_i$ coils, the spring constant of each segment can be written as \begin{equation} k_i = \frac{N}{n_i} k\,. \label{Eq:ki} \end{equation} Excluding the effects of the material from which a spring is made, the diameter of the wire and the radius of the coils, this equation expresses the fact that the spring constant $k$ is a parameter that depends on the number of coils $N$ in a spring, but not on the way in which the coils are wound (i.e. tightly or loosely)~\cite{Gilbert:430}. In an early paper, Galloni and Kohen~\cite{Galloni:1076} showed that, under \textit{static} conditions, the elongation sustained by a non-null mass spring is equivalent to assuming that the spring is massless and a fraction of one-half of the spring mass should be added to the hanging mass. That is, if a spring of mass $m_{\mathrm{s}}$ and relaxed length $l$ (neither stretched nor compressed) is suspended vertically from one end in the Earth's gravitational field, the mass per unit length becomes a function of the position, and the spring stretches \textit{non-uniformly} to a new length $l' = l + \Delta l$. When a mass $m$ is hung from the end of the spring, the total elongation $\Delta l$ is found to be \begin{equation} \Delta l = \int_0^l \xi(x)\,\rmd x = \frac{(m + \frac{1}{2} m_{\mathrm{s}})\,g}{k}\,, \label{Eq:Dl1} \end{equation} where \begin{equation} \xi(x) = \frac{m + m_{\mathrm{s}}(l-x)/l}{k\,l}\,g \label{Eq:xi} \end{equation} is the \textit{dimensionless elongation factor} of the element of length between $x$ and $x + \rmd x$, and $g$ is the acceleration due to gravity. 
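The relations above are easy to check numerically. The following sketch, with assumed values for the coil counts, spring constant, masses, and length (none taken from the experiment), verifies the series combination rule $1/k = \sum_i 1/k_i$ with $k_i = (N/n_i)\,k$, and integrates the elongation factor $\xi(x)$ with the midpoint rule to recover the half-mass correction $(m + m_{\mathrm{s}}/2)g/k$.

```python
# Numerical sketch of the relations above; all parameter values are
# assumed for illustration, not taken from the experiment.
N, n_i, k = 36, [12, 24], 3.40        # total coils, coils per segment, k (N/m)

# k_i = (N / n_i) k, and the series combination recovers k because sum(n_i) = N.
k_i = [N / n * k for n in n_i]
assert abs(1.0 / sum(1.0 / ki for ki in k_i) - k) < 1e-12

m, m_s = 0.050, 0.0045                # hanging mass and spring mass (kg)
l, g = 0.25, 9.81                     # relaxed length (m), gravity (m/s^2)

def xi(x):
    # Elongation factor of the element between x and x + dx.
    return (m + m_s * (l - x) / l) * g / (k * l)

# Midpoint-rule integral of xi over [0, l] ...
n_steps = 100000
dx = l / n_steps
delta_l = sum(xi((i + 0.5) * dx) for i in range(n_steps)) * dx

# ... matches the closed form: half the spring mass adds to the load.
assert abs(delta_l - (m + 0.5 * m_s) * g / k) < 1e-9
```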
A considerable number of papers dealing with the static and dynamic effects of the spring mass have been written in the physics education literature. Expressions for the spring elongation as a function of the $n$th coil and the mass per unit length of the spring have also been derived~\cite{Edwards:445,Heard:1102,Lancaster:217, Mak:994,French:244,Hosken:327,Ruby:140,Sawicki:276,Ruby:324,Toepker:16, Newburgh:586,Bowen:1145,Christensen:818,Rodriguez:100,Gluck:178,Essen:603}. \section{The Experiment} We want to show that, with just \textit{one} single spring, it is possible to confirm experimentally the validity of equations~\eref{Eq:1/k} and~\eref{Eq:ki}. This approach differs from Souza's work~\cite{Souza:35} in that the constants $k_i$ are determined from the same single spring, and there is no need to cut the spring into pieces; and from the standard experiment in which more than one spring is required. A soft spring is \textit{divided} into three separate segments by placing a paint mark at selected points along its surface (see~\fref{Fig1}). These points are chosen by counting a certain number of coils for each individual segment such that the original spring is now composed of three marked springs connected in series, with each segment represented by an index $i$ (with $i=1,2,3$), and consisting of $n_i$ coils. An initial mass $m$ is suspended from the spring to stretch it into its \textit{linear} region, where the equation $F_i=-k_i\Delta x_i$ is satisfied by each segment. Once the spring is brought into this region, the traditional static Hooke's law experiment is performed for several different suspended masses, ranging from $1.0$ to $50.0\,\mbox{g}$. The initial positions of the marked points $x_i$ are then used to measure the \textit{relative} displacement (elongation) of each segment after they are stretched by the additional masses suspended from the spring~(\fref{Fig2}). 
The displacements are determined by the equations \begin{equation} \Delta x_i = (x'_i - x'_{i-1}) - l_i\,, \label{Eq:Dxi1} \end{equation} where the primed variables $x'_i$ represent the new positions of the marked points, $l_i = x_i - x_{i-1}$ are the initial lengths of the spring segments, and $x_0 = 0$, by definition. Representative graphs used to determine the spring constant of each segment are shown in figures~\ref{Fig3},~\ref{Fig4}, and~\ref{Fig5}. \section{Dealing with the effective mass} As pointed out by some authors~\cite{Galloni:1076,Mak:994,French:244,Sawicki:276,Newburgh:586}, it is important to note that there is a difference in the total mass hanging from each segment of the spring. The reason is that each segment supports not only the mass of the segments below it, but also the mass attached to the end of the spring. For example, if a spring of mass $m_{\mathrm{s}}$ is divided into three \textit{identical} segments, and a mass $m$ is suspended from the end of it, the total mass $M_1$ hanging from the first segment becomes $m + \frac{2}{3}m_{\mathrm{s}}$. Similarly, for the second and third segments, the total masses turn out to be $M_2 = m + \frac{1}{3}m_{\mathrm{s}}$ and $M_3 = m $, respectively. However, in a more realistic scenario, the mass of the spring and its effect on the elongation of the segments must be considered, and equation~\eref{Eq:Dl1} should be incorporated into the calculations. Therefore, for each individual segment, the elongation should be given by \begin{equation} \Delta x_i = \frac{(M_i + \frac{1}{2} m_i)\,g}{k_i}, \label{Eq:Dxi2} \end{equation} where $m_i$ is the mass of the $i$th segment, $M_i$ is its corresponding total hanging mass, and $k_i$ is the segment's spring constant. 
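As a small illustration of the displacement bookkeeping above, the sketch below applies $\Delta x_i = (x'_i - x'_{i-1}) - l_i$ to hypothetical position readings (these numbers are invented for illustration, not the measured data):

```python
# Hypothetical position readings (m): x holds the marks on the spring
# before the extra weight, xp the same marks after it is added.
x  = [0.000, 0.060, 0.120, 0.180]          # x_0 .. x_3
xp = [0.000, 0.075, 0.150, 0.225]          # x'_0 .. x'_3

l_seg = [x[i] - x[i - 1] for i in range(1, len(x))]   # initial lengths l_i
dx = [(xp[i] - xp[i - 1]) - l_seg[i - 1]               # Delta x_i
      for i in range(1, len(xp))]

# Sanity check: the segment elongations add up to the elongation of the
# whole spring, measured between the end marks.
assert abs(sum(dx) - ((xp[-1] - xp[0]) - (x[-1] - x[0]))) < 1e-9
```

With these readings each segment stretches by the same 0.015 m, as expected for identical segments under a common external load.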
Consequently, for the spring divided into three identical segments ($m_i = \frac{1}{3} m_{\mathrm{s}}$), the total masses hanging from the first, second and third segments are now $m + \frac{5}{6} m_{\mathrm{s}}$, $m + \frac{1}{2} m_{\mathrm{s}}$ and $m + \frac{1}{6} m_{\mathrm{s}}$, respectively. This can be explained by the following simple consideration: If a mass $m$ is attached to the end of a spring of length $l$ and spring constant $k$, for three identical segments with elongations $\Delta l_1$, $\Delta l_2$, and $\Delta l_3$, the total spring elongation is given by \begin{eqnarray} \Delta l &= \Delta l_1 + \Delta l_2 + \Delta l_3 \nonumber \\[10pt] &= \int_0^{\frac{l}{3}} \xi(x)\,\rmd x + \int_{\frac{l}{3}}^{\frac{2l}{3}} \xi(x)\,\rmd x + \int_{\frac{2l}{3}}^l \xi(x)\,\rmd x \nonumber \\[10pt] &= \frac{(m + \frac{5}{6} m_{\mathrm{s}})\,g}{3\,k} + \frac{(m + \frac{1}{2} m_{\mathrm{s}})\,g}{3\,k} + \frac{(m + \frac{1}{6} m_{\mathrm{s}})\,g}{3\,k} \nonumber \\[10pt] &= \frac{(m + \frac{1}{2} m_{\mathrm{s}})\,g}{k}\,. \label{Eq:Dl2} \end{eqnarray} As expected, equation~\eref{Eq:Dl2} is in agreement with equation~\eref{Eq:Dl1}, and reveals the contribution of the mass of each individual segment to the total elongation of the spring. It is also observed from this equation that \begin{equation} \Delta l_1 - \Delta l_2 = \Delta l_2 - \Delta l_3 = \frac{(\frac{1}{3} m_{\mathrm{s}})g}{3\,k} = \mbox{const.} \label{Eq:Dl123} \end{equation} As we know, $\frac{1}{3} m_{\mathrm{s}}$ is the mass of each identical segment, and $k_1 = k_2 = k_3 = 3\,k$ is the spring constant for each. Therefore, the spring stretches non-uniformly under its own weight, but uniformly under the external load, as also indicated by Sawicki~\cite{Sawicki:276}. \section{Results and Discussion} Two particular cases were studied in this experiment. First, we considered a spring-mass system in which the spring mass was small compared with the hanging mass, and so it was ignored. 
In the second case, the spring mass was comparable with the hanging mass and included in the calculations. We started with a configuration of three approximately \textit{identical} spring segments connected in series, with each segment having $12$ coils ($n_1 = n_2 = n_3 = 12$).\footnote{Although the three segments had the same number of coils, the first and third segments had an additional portion of wire where the spring was attached and the masses suspended. This added extra mass to these segments, making them slightly different from each other and from the second segment.} When the spring was stretched by different weights, the elongation of the segments increased linearly, as expected from Hooke's law. Within the experimental error, each segment experienced the same displacement, as predicted by~\eref{Eq:Dl123}. An example of the experimental data obtained is shown in~\tref{Table01}. Simple linear regression was used to determine the slope of each trend line fitting the data points of the force versus displacement graphs. \Fref{Fig3}(a) clearly shows the linear response of the first segment of the spring, with a resulting spring constant of $k_1=10.3\,\pm\,0.1\,\mbox{N/m}$. A similar behaviour was observed for the second and third segments, with spring constants $k_2=10.1\,\pm\,0.1\,\mbox{N/m}$, and $k_3=10.2\,\pm\,0.1\,\mbox{N/m}$, respectively. For the entire spring, the spring constant was $k=3.40\pm\,0.01\,\mbox{N/m}$, as shown in~\fref{Fig3}(b). The uncertainties in the spring constants were calculated using the \textit{correlation coefficient} $R$ of the linear regressions, as explained in Higbie's paper ``Uncertainty in the linear regression slope''~\cite{Higbie:184}. Comparing the spring constant of each segment with that for the total spring, we obtained $k_1=3.03\,k$, $k_2=2.97\,k$ and $k_3=3.00\,k$. 
As predicted by~\eref{Eq:ki}, each segment had a spring constant three times larger than the resulting combination of the segments in series, that is, $k_i = 3\,k$. The reason why the uncertainty in the spring constant of the entire spring is smaller than the corresponding spring constants of the segments may be explained by the fact that the displacements of the spring as a whole have smaller ``relative errors'' than those of the individual segments. \Tref{Table01} shows that, whereas the displacements of the individual segments $\Delta x_i$ are of the same order of magnitude as the uncertainty in the measurement of the elongation ($\pm 0.002\,\mbox{m}$), the displacements of the whole spring $\Delta x_{\mathrm{s}}$ are much larger than this uncertainty. We next considered a configuration of two spring segments connected in series with $12$ and $24$ coils, respectively ($n_1 = 12$, $n_2 = 24$). \Fref{Fig4}(a) shows a graph of force against elongation for the second segment of the spring. We obtained $k_2=5.07\,\pm\,0.03\,\mbox{N/m}$ using linear regression. For the first segment and the entire spring, the spring constants were $k_1=10.3\,\pm\,0.1\,\mbox{N/m}$ and $k=3.40\,\pm\,0.01\,\mbox{N/m}$, respectively, as shown in~\fref{Fig4}(b). We then observed that $k_1 = 3.03\,k$ and $k_2 = 1.49\,k$. Once again, these experimental results proved equation~\eref{Eq:ki} correct ($k_1 = 3\,k$ and $k_2 = \frac{3}{2}\,k$). We finally considered the same two-segment spring configuration as above, but unlike the previous trial, this time the spring mass ($4.5 \pm 0.1\,\mbox{g}$) was included in the experimental calculations. Figures~\ref{Fig5}(a)--(b) show results for the two spring segments, including spring masses, connected in series ($n_1 = 12$, $n_2 = 24$). Using this method, the spring constant for the whole spring was found to be slightly different from that obtained when the spring was assumed ideal (massless). 
This difference may be explained by the corrections made to the total mass as given by~\eref{Eq:Dl2}. The spring constants obtained for the segments were $k_1 = 2.94\,k$ and $k_2 = 1.51\,k$ with $k = 3.34 \pm 0.04\,\mbox{N/m}$ for the entire spring. These experimental results were also consistent with equation~\eref{Eq:ki}. The experimental data obtained is shown in~\tref{Table02}. When the experiment was performed by the students, measuring the positions of the paint marks on the stretched spring was perhaps the most difficult part of the activity. Every time an extra weight was added to the end of the spring, the starting point of each individual segment changed its position. For the students, keeping track of these new positions was a laborious task. Most of the experimental systematic error came from this portion of the activity. To obtain the elongation of the segments, using equation~\eref{Eq:Dxi1} substantially facilitated the calculation and tabulation of the data for subsequent analysis. The use of computational tools (spreadsheets) to do the linear regression also considerably simplified the calculations. \section{Conclusions} In this work, we studied experimentally the validity of the static Hooke's law for a system of springs connected in series using a simple single-spring scheme to represent the combination of springs. We also verified experimentally that the reciprocal of the spring constant of the entire spring equals the sum of the reciprocals of the spring constants of the segments, including well-known corrections (due to the finite mass of the spring) to the total hanging mass. Our results quantitatively show the validity of Hooke's law for combinations of springs in series [equation~\eref{Eq:1/k}], as well as the dependence of the spring constant on the number of coils in a spring [equation~\eref{Eq:ki}]. 
The experimental results were in excellent agreement, within the standard error, with those predicted by theory. The experiment is designed to provide several educational benefits to the students, such as helping to develop measuring skills, encouraging the use of computational tools to perform linear regression and error propagation analysis, and stimulating creativity and logical thinking by exploring Hooke's law in a combined system of \textit{springs in series} simulated by a \textit{single} spring. Because of its easy setup, this experiment is easy to adopt in any high school or undergraduate physics laboratory, and can be extended to any number of segments within the same spring such that all segments represent a combination of springs in series. \ack The authors gratefully acknowledge the School of Mathematical and Natural Sciences at the University of Arkansas-Monticello (\#11-2225-5-M00) and the Department of Physics at Eastern Illinois University for providing funding and support for this work. Comments on earlier versions of the paper were gratefully received from Carol Trana. The authors are also indebted to the anonymous referee for the valuable comments and suggestions made. \section*{References}
\section*{Acknowledgments} The authors would like to thank Andreas Winter for invaluable discussions on the problem of enumerating the extremal rays of polyhedral convex cones. This work was supported by NSF CAREER award CCF 1652560.
\section{Introduction} With the continuous development of mobile communication technology and the rapid growth of the mobile Internet, mobile terminals such as smart phones, tablet computers, laptops, and smart assistants have become ubiquitous. However, mobile terminals are constrained by factors such as volume, weight, performance, and power, and their computing capability still falls short of users' increasing demands. Although mobile terminals have made great progress in hardware (for example, successive generations of CPUs and GPUs, and chip manufacturing processes improving from 28nm to 14nm to the current 7nm and 5nm\cite{8776580}), this progress remains far from sufficient. Moreover, with the emergence of new applications such as autonomous driving, telemedicine, and Industry 4.0, which require ultra-high reliability and low latency\cite{7847322}, ordinary devices are even less able to support the required workloads. Meanwhile, with the rise of machine learning, artificial intelligence, and other emerging technologies\cite{7951770}, applications such as image recognition and speech recognition are developing rapidly, and virtual reality and augmented reality applications keep emerging. These applications require large amounts of computing and storage resources and are all computationally intensive. Owing to the limitations of mobile terminals and other devices, when computationally intensive applications\cite{7951770} run on smart terminals, both the battery life of the terminal and the performance of the application suffer. How to overcome these resource and energy constraints has become a major challenge. Most of the literature proposes optimizing task offloading algorithms by changing the task allocation, transmission power, or clock frequency. 
In \cite{8279411}--\cite{8638800}, the authors proposed task splitting, a way of changing the task structure to reduce latency and improve the energy efficiency of the local device. However, they only split the task and did not consider the redundancy of the information sources. In this paper, we propose splitting each task into its smallest executable components, called units, and then selecting which units to process by using the correlation between them. Furthermore, both time-domain and task-domain correlation are considered in our work to select the units that actually need to be processed. \section{Problem Formulation and System Models} In this section, the definitions of task and unit are introduced first to help understand the proposed models. Both a task and a unit are processes of collecting data, processing data, and sending instructions, but a unit is the smallest section that can form a task and cannot be divided further. That means a task can be composed of, and split into, one or more units. Consider $N$ devices (users), denoted by the set $\mathbb{N}=\{1, 2, \ldots, N\}$, where device $i$ has $M_i$ tasks at the same time, $i\in \mathbb{N}$, denoted by the set $\mathbb{M}=\{M_1, M_2, \ldots, M_N\}$; each of the $M_i$ tasks is composed of $M_s$ different units, denoted by the set $\mathbb{K}=\{k_{M_1},k_{M_2}, \ldots, k_{M_s}\}$, $i\in \mathbb{N}$. After some tasks are split, identical units may be generated, which gives rise to task-domain correlation. In our work, the correlation between units is exploited to improve the energy efficiency of the local devices and to reduce latency. Moreover, it is assumed here that devices communicate with the MEC server orthogonally. The main target is to improve the energy efficiency of the local devices while fulfilling the stringent latency requirements of each unit and each device. 
The energy cost minimization problem is formulated as: \begin{equation}\label{eqn01} OPT-1\ \ \ \ \ \ \ \ \ \min \limits_{\mathcal{A,\ F,\ P}}\sum_{n=1}^{n=N}E_n^L \end{equation} Subject to \begin{equation} \begin{split} C1: LT_{n,j}^m&\leq T_{n,j\ max},\ \ \ \ \ j \in A, n\in \mathbb{N}\\ C2: LT_{n,k}^l&\leq T_{n,k\ max},\ \ \ \ \ k \in B, n\in \mathbb{N}\\ C3: LT_{n}&\leq T_{n\ max},\ \ \ \ \ n\in\mathbb{N}\\ C4:f_{n,l}&\leq f_{n,l\ max},\ \ \ \ \ \ n\in \mathbb{N}\\ C5:p^t_{n}&\leq p^t_{n\ max}, \ \ \ \ \ \ n\in \mathbb{N} \end{split} \end{equation} Here $E_n^L$ is the local energy consumption of user $n$; $LT_{n,j}^m$ is the latency of unit $j$ of user $n$ processed on the MEC, and $LT_{n,k}^l$ is the latency of unit $k$ of user $n$ processed on the local device; $T_{n,j\ max}$ is the latency requirement of unit $j$; $A$ is the collection of all units processed on the MEC, and $B$ is the collection of all units processed on the local device; $LT_n$ is the latency of user $n$ and $T_{n\ max}$ its latency requirement; $f_{n,l}$ is the computation capability of user $n$ (the number of CPU cycles per second); $p^t_{n}$ is the transmission power of user $n$; and $f_{n,l\ max}$ and $p^t_{n\ max}$ are the maximum values of $f_{n,l}$ and $p^t_{n}$, respectively. $\mathcal{F}=\{f_{n,l}|n\in \mathbb{N}\}$, $\mathcal{P}=\{p^t_{n}|n\in \mathbb{N}\}$, and $\mathcal{A}= \{A_{n,j}|n\in \mathbb{N},j \in \mathbb{M}2\}$. Constraints C1 and C2 ensure that the latency of each unit does not exceed its requirement, C3 ensures that the user's overall latency does not exceed its requirement, and C4 and C5 ensure that the processing power and transmission power of the local device do not exceed their maximum values. 
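For intuition, the sketch below solves a toy single-user instance of OPT-1 by exhaustive search over the binary local/MEC decisions, using a simplified serial transmission-and-computation model of the kind detailed later in this section. All parameter values are assumed, and the per-unit deadlines C1--C2 are collapsed into a single overall deadline.

```python
from itertools import product

# Toy single-user instance of OPT-1: choose, for each unit, local (0)
# or MEC (1) execution so that total local energy is minimized subject
# to an overall deadline. All numbers are illustrative assumptions.
w = [2e8, 5e8, 3e8]       # CPU cycles per unit
d = [1e6, 4e6, 2e6]       # input bits per unit
f_loc, f_mec = 1e9, 5e9   # local and MEC CPU speeds (cycles/s)
r, p_t = 20e6, 0.5        # uplink rate (bit/s) and transmit power (W)
kappa = 1e-27             # effective switched capacitance
T_max = 1.0               # overall deadline (s)

def evaluate(decision):
    """Return (local energy, completion time): local units run
    back-to-back on the device; offloaded units are transmitted
    back-to-back and then computed one at a time on the MEC."""
    t_local = sum(w[j] / f_loc for j in range(len(w)) if decision[j] == 0)
    e_local = sum(kappa * w[j] * f_loc ** 2
                  for j in range(len(w)) if decision[j] == 0)
    t_tx, t_mec_done, e_tx = 0.0, 0.0, 0.0
    for j in range(len(w)):
        if decision[j] == 1:
            t_tx += d[j] / r                       # waiting + transmission
            e_tx += p_t * d[j] / r
            t_mec_done = max(t_tx, t_mec_done) + w[j] / f_mec
    return e_local + e_tx, max(t_local, t_mec_done)

# Brute force over the 2^3 assignments that meet the deadline.
best = min((evaluate(a), a) for a in product((0, 1), repeat=len(w))
           if evaluate(a)[1] <= T_max)
```

With these numbers offloading everything is both feasible and cheapest for the device, because the per-unit transmission energy is far below the local computation energy.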
\subsection{Transmission Model} When interference is not considered, the signal-to-noise ratio (SNR) of user $n$ is \begin{equation}\label{eqn07} SNR_n=\frac{p_n^th_{n,m}^2}{B_w\mathcal{N}_0} \end{equation} and the uplink transmission rate of user $n$ is calculated as: \begin{equation}\label{eqn08} r_n=B_w\log_2(1+ SNR_n). \end{equation} In \eqref{eqn07} and \eqref{eqn08}, $B_w$ is the bandwidth of the channel, $h_{n, m}$ is the channel gain from user $n$ to the MEC, $\mathcal{N}_0$ is the noise spectral density, and $p_n^t$ is the transmission power of user $n$. Because the amount of data that needs to be offloaded is much larger than the amount that needs to be downloaded, the downlink is ignored in this paper\cite{7524497,7553459}. \subsection{Computation Model} Let $w_{n,j}$ denote the number of CPU cycles required to compute unit $j$ of user $n$, and $d_{n,j}$ the computation input data (in bits) of the $j$th unit of the $n$th user. \subsubsection*{Local Computing} When the user chooses to process locally, let $f_{n,l}$ denote the CPU clock speed of the user's device, so the local computing latency can be represented as: \begin{equation}\label{eqn09} t_{n,j}^l=\frac{w_{n,j}}{f_{n,l}}. \end{equation} where $t_{n,j}^l$ denotes the time required to complete unit $j$ of user $n$ on the local device. The energy required to complete unit $j$ on the local device can be expressed as \begin{equation}\label{eqn10} E_{n,j}^l=\kappa w_{n,j}f^2_{n,l} \end{equation} where $\kappa$ is the effective switched capacitance depending on the chip architecture\cite{miettinen2010energy}. 
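A minimal numerical sketch of the transmission-rate and local-computing formulas above; every parameter value here is an assumed example, not a value from the paper.

```python
import math

# Assumed example values for the transmission model.
p_t = 0.5         # transmit power p^t_n (W)
h   = 1e-3        # channel gain h_{n,m}
B_w = 10e6        # bandwidth B_w (Hz)
N0  = 1e-17       # noise spectral density (W/Hz)

snr = p_t * h ** 2 / (B_w * N0)     # SNR_n
r = B_w * math.log2(1 + snr)        # uplink rate r_n (bit/s)

# Local computing: latency w/f and energy kappa * w * f^2.
w, f_loc, kappa = 5e8, 1e9, 1e-27
t_local = w / f_loc
e_local = kappa * w * f_loc ** 2
```

With these numbers the SNR is 5000, giving an uplink rate of roughly $1.2\times10^8$ bit/s, and the unit costs 0.5 s and 0.5 J when computed locally.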
\subsubsection*{MEC Computing} When a user decides to offload tasks to the MEC and transmits a unit through the wireless network, corresponding transmission latency and energy consumption are incurred. According to the communication model, the uplink transmission latency when user $n$ offloads a unit is: \begin{equation}\label{eqn11} t_{n,j}^t=\frac{d_{n,j}}{r_n}, \end{equation} and the energy required to transmit unit $j$ can be expressed as: \begin{equation}\label{eqn12} E_{n,j}^{t}=P_{n,j}^{t}t_{n,j}^t. \end{equation} where $t_{n,j}^t$ represents the time required to transmit the $j$th unit of the $n$th user. After a unit is offloaded to the MEC, the MEC allocates certain computing resources to it; the computing resources allocated by the MEC to each user are considered fixed. Let $f_{MEC}$ denote the computing resources allocated by the MEC; the latency for the MEC to execute the $j$th unit of the $n$th user is \begin{equation}\label{eqn13} t_{n,j}^m=\frac{w_{n,j}}{f_{MEC}}. \end{equation} Because the MEC has a constant energy supply, its energy consumption need not be considered in this paper. It is assumed that the MEC can only process one unit at a time, and the channel can only support the transmission of one unit at a time, as shown in Fig. \ref{figure1} and Fig. \ref{figure2}. Blue blocks represent transmission time, green ones computing time, and black and yellow ones queuing for transmission and queuing for computing, respectively. At the beginning of this section, it was explained that each user generates multiple tasks, which are then split into multiple units that need to be processed. 
However, due to channel and processor limitations, these units cannot be processed at the same time. Two waiting times are therefore defined: one for the units to be processed locally and one for the units to be processed on the MEC. Suppose $A$ is the set of units to be processed on the MEC, $A=\{1,2,\ldots,a\}$, and $B$ is the set of units to be processed on the local device, $B=\{1,2,\ldots,b\}$. \par \textbf{Definition 1} (waiting time of a unit to be processed locally): the total time elapsed before the unit is processed by the local device. $WT_{n,k}^l$ denotes the waiting time of unit $k$ of device $n$ to be processed locally. \par \textbf{Definition 2} (waiting time of a unit to be processed on the MEC): the total time elapsed before the unit is processed by the MEC. $WT_{n,j}^m$ denotes the waiting time of unit $j$ of device $n$ to be processed on the MEC. \begin{figure}[t] \centering \includegraphics[width=8cm,height=3cm]{figure1.png} \vspace{-1.5em}\caption{Tasks which are processed on the MEC} \label{figure1} \end{figure} \begin{figure}[t] \centering \vspace{-1em}\includegraphics[width=8cm,height=3cm]{figure2.png} \vspace{-1em}\caption{Tasks which are processed locally} \label{figure2} \end{figure} According to Definition 2, the waiting time of a unit to be processed on the MEC is divided into two parts: the time spent waiting for transmission plus the transmission time itself ($WT3_{n,j}^m$), and the time between the MEC receiving the unit and starting to process it ($WT4_{n,j}^m$); that is, $WT_{n,j}^m$ can be divided into $WT3_{n,j}^m+ WT4_{n,j}^m$. 
Assume that $unit_j\in A$.\\ When $j=1$: \begin{equation}\label{eqn14} WT3_{n,j}^m=t_{n,j}^t,\ \ \ \ \ WT4_{n,j}^m=0 \end{equation} When $j >1$: \begin{equation}\label{eqn15} \begin{split} WT3_{n,j}^m&=WT3_{n,j-1}^m+t_{n,j}^t\\ WT4_{n,j}^m&=max(WT3_{n,j}^m, LT_{n,j-1}^m)-WT3_{n,j}^m \end{split} \end{equation} In \eqref{eqn15}, $max$ denotes the larger of the two values. \begin{equation}\label{eqn18} \begin{split} WT_{n,j}^m&=WT3_{n,j}^m+WT4_{n,j}^m\\ LT_{n,j}^m&=WT_{n,j}^m+t_{n,j}^m \end{split} \end{equation} $LT_{n,j}^m$ denotes the latency of $unit_j$, $unit_j\in A$. Now assume that $unit_k\in B$. In this case, \begin{equation}\label{eqn20} WT_{n,k}^l=\sum_{c=1}^{c=k-1}t_{n,c}^l \end{equation} so that the latency of unit $k$ can be expressed as ($unit_k\in B$): \begin{equation}\label{eqn21} LT_{n,k}^l=\sum_{c=1}^{c=k-1}t_{n,c}^l+t_{n,k}^l=\sum_{c=1}^{c=k}t_{n,c}^l \end{equation} The latency of the whole system of user $n$ can then be represented as: \begin{equation}\label{eqn22} Ts_n=max(LT_{n,b}^l, LT_{n,a}^m) \end{equation} Let $E_n^L$ represent the total energy consumption of local device $n$: \begin{equation}\label{eqn23} E_n^L=\sum_{a=1}^{a=j} E^t_{n,a}+\sum_{b=1}^{b=k} E^l_{n,b} \end{equation} \section{Proposed task offloading algorithm} In this section, several preprocessing methods are first proposed that, without affecting the reliability of the tasks, reduce the amount of computation and transmission before offloading; the proposed task offloading algorithm is then introduced. \subsection{Correlation in Time Domain of Tasks} As time passes, a task continuously updates its information source and must be processed anew. 
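The $WT3/WT4$ recursion above can be played out for a small offloaded batch. The sketch below, with assumed per-unit transmission and MEC computing times (illustrative values only), computes the latency of each offloaded unit for a single user.

```python
# Sketch of the MEC-side pipeline recursion: a transmission queue
# feeding the MEC processor. Per-unit times are assumed examples.
t_tx  = [0.05, 0.20, 0.10]   # t^t_{n,j}: uplink time of each offloaded unit
t_mec = [0.04, 0.10, 0.06]   # t^m_{n,j}: MEC compute time of each unit

WT3, LT = [], []
for tt, tm in zip(t_tx, t_mec):
    wt3 = (WT3[-1] if WT3 else 0.0) + tt           # arrival time at the MEC
    wt4 = max(wt3, LT[-1] if LT else 0.0) - wt3    # queueing at the MEC
    WT3.append(wt3)
    LT.append(wt3 + wt4 + tm)                      # latency of this unit
```

Here the MEC is never the bottleneck, so each unit starts computing as soon as it arrives and the last unit finishes at 0.41.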
Traditional task offloading considers only how to change the task allocation, processing frequency, and transmission power in order to reduce latency and improve energy efficiency; the data size itself does not change, which to a large extent restricts the development of task offloading. This paper provides a new way to filter out unnecessary information and reduce the data size, so as to further reduce latency and energy consumption. The correlation coefficient is a suitable tool for this purpose. The correlation coefficient introduced in \cite{7000704} is defined as: \begin{equation}\label{eq1} r(x,y) \triangleq \frac{\mathsf{cov}(x,y)}{\sqrt{\mathsf{var}(x)\mathsf{var}(y)}}\ ,\ r(x,y) \in [-1,1]. \end{equation} In this formula, $x$ and $y$ are the information sources to be compared. Suppose there is a task, denoted $Task\ A$, and denote its instances at different moments as $Task\ A1$, $Task\ A2$, $Task\ A3$, $Task\ A4$ in time order, with $A1$ first and $A4$ last. To reduce the number of processing operations, the correlation coefficient between them decides whether these tasks are processed in full, in part, or not at all. There are two ways to use the correlation coefficient. The first way sets a single threshold to decide whether to process a task: choose a threshold $\alpha$, process $Task\ A1$, and then calculate the correlation coefficient of $Task\ A1$ and $Task\ A2$.
If the correlation coefficient between them is greater than $\alpha$, $Task\ A2$ is not processed; $Task\ A1$ is kept in memory and the correlation coefficient between $Task\ A1$ and $Task\ A3$ is calculated. If that is also greater than $\alpha$, $Task\ A3$ is skipped as well and the procedure continues with the correlation coefficient between $Task\ A1$ and $Task\ A4$. Otherwise, $Task\ A3$ is processed and kept in memory as the new reference, the correlation coefficient between $Task\ A3$ and $Task\ A4$ is calculated, and the procedure repeats for all following tasks. The second way sets multiple thresholds to decide how to process a task; take two thresholds as an example. First choose two thresholds $\alpha$ and $\beta$ with $\alpha >\beta$, and process the first task. When the correlation coefficient between two adjacent tasks is greater than $\beta$ but smaller than $\alpha$, only the differing part of the two tasks needs to be processed. When the correlation coefficient is smaller than $\beta$, the information of the two tasks can be considered totally different, so the new task must be processed in full. When it is greater than $\alpha$, the information of the two tasks can be considered identical, and the new task need not be processed again. \subsection{Correlation in Task Domain of Tasks} The correlation coefficient above is applied in the time domain to reduce the amount of data; the next step is task splitting, which yields the units of the remaining necessary tasks. As noted earlier, different tasks may split into the same unit, which gives rise to correlation in the task domain.
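The single-threshold time-domain scheme described above amounts to keeping a reference task and skipping any new instance that correlates strongly with it. A minimal sketch follows; the task data, the threshold value, and the function names are illustrative assumptions:

```python
# Single-threshold filter: process a task instance only if it differs
# enough (Pearson correlation <= alpha) from the stored reference.

def pearson(x, y):
    """Pearson correlation coefficient r(x, y) as in Eq. (24)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def filter_tasks(tasks, alpha):
    """Return indices of the task instances that must actually be processed."""
    processed = [0]        # Task A1 is always processed
    ref = tasks[0]         # and kept in memory as the reference
    for i, task in enumerate(tasks[1:], start=1):
        if pearson(ref, task) <= alpha:   # too different: process, update reference
            processed.append(i)
            ref = task
    return processed
```

For example, with four instances where the second and fourth are near-copies of their predecessors, only the first and third need processing.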
How to use these correlations to further optimize the algorithm is the main content of this part. We use $C_{o,p}$ to represent the correlation between unit $o$ and unit $p$, with $C_{o,p}\in \{0, 0.5, 1\}$. When $C_{o,p}=0$, there is no relationship between unit $o$ and unit $p$, and both units need to be processed. When $C_{o,p}=1$, units $o$ and $p$ are identical, so only one of them needs to be processed and the result is shared. When $C_{o,p}=0.5$, units $o$ and $p$ are different units but use the same source information. \subsection{Units Allocation} After arranging the units in the order required by the latency, we determine, starting from the first unit, whether each unit should be assigned to the MEC for processing. A tree diagram is drawn at this point, as shown in Fig. \ref{figure42}, using four units as an example. Starting from the root, each unit has two branches: one (the unit is processed on the MEC server) and zero (the unit is processed on the local device). If the unit latency along a path meets that unit's requirement, the path is kept and the next level is examined; if the latency does not meet the requirement, the path is dropped. After the last unit, we check whether the system latency meets its requirement: if yes, the leaf is kept; if not, it is dropped. This yields all feasible solutions. Then, to obtain the optimal solution, some further optimization of transmission power and clock frequency {\color{black}{needs to be done; it is introduced in the following subsections.
}} \begin{figure}[t] \centering \includegraphics[scale=0.27]{figure42.png} \vspace{-1.5em}\caption{Binary Tree} \label{figure42} \end{figure} In the situation above there is no correlation between units; we now turn to the situation where correlations exist. When $C_{o,p}=1$, the duplicate units are deleted, and the remaining units are allocated in the same way as before. When $C_{o,p}=0.5$, the units with correlation 0.5 are merged into one large task, which then goes through the same allocation process as before. If this large task is allocated to the MEC server, the latency and energy consumption caused by communication are reduced, because the merged units use the same source information. If it is allocated to the local device, the energy consumption and latency do not change. \subsection{Clock Frequency Allocation} According to (\ref{eqn10}), if $w_{n,j}$ (the number of CPU cycles required to compute the unit) does not change, a smaller $f_n^t$ yields a lower energy consumption. \subsection{Transmission Power Allocation} Following (\ref{eqn07}), (\ref{eqn08}), and (\ref{eqn12}), let $D$ denote the total volume of data transmitted to the MEC; the energy consumed for transmission is then \begin{equation} E1 = \frac{p_n^t\, D}{B_w \log_2 \eta}, \qquad \eta=1+\frac{p_n^th_{n,m}^2}{{B_w}\mathcal{N}_0}. \end{equation} We then examine the monotonic behavior of $E1$ with respect to $p_n^t$, and take the value of $p_n^t$ that minimizes $E1$ under conditions C1 to C5.
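As a numerical illustration of this monotonicity argument, the sketch below evaluates $E1$ over a power grid and picks the cheapest transmission power that still meets an (assumed) transmission deadline. All parameter values and helper names are illustrative assumptions:

```python
# E1(p) = p * D / (B_w * log2(1 + p * h^2 / (B_w * N0))):
# transmit power times transmission time at the Shannon rate.
import math

def e1(p, D, Bw, h2, N0):
    rate = Bw * math.log2(1.0 + p * h2 / (Bw * N0))   # achievable rate
    return p * D / rate

def best_power(D, Bw, h2, N0, t_max, p_grid):
    """Cheapest power on the grid whose transmission time meets deadline t_max."""
    feasible = [p for p in p_grid
                if D / (Bw * math.log2(1.0 + p * h2 / (Bw * N0))) <= t_max]
    return min(feasible, key=lambda p: e1(p, D, Bw, h2, N0))
```

Since $E1$ increases with $p_n^t$, the search returns the smallest power that still satisfies the latency constraint, consistent with the monotonic-interval argument in the text.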
\subsection{Algorithm Flow} Using the maximum $p_n^t$ and the maximum $f_n^t$ of the local device, find all combinations that satisfy conditions C1--C5 and record them as $\mathcal{V}$; then apply the algorithms proposed above to find the smallest energy consumption achievable after optimization within $\mathcal{V}$, and take the combination with the least energy consumption as the final result. This yields the optimal clock frequency, transmission power, and unit allocation. \section{Simulation results} In this section, we evaluate our proposed task-offloading algorithm against three state-of-the-art baselines through MATLAB-based simulations. The simulated system contains four users and a single MEC. Each user has a different number of tasks, and different types of task have different sizes. Each user has a dedicated orthogonal channel to the MEC. The channel bandwidth is set to 20\,MHz, and the channel follows a Rayleigh distribution. The SNR of the user terminal is set to 10\,dB, 20\,dB, 30\,dB, 40\,dB, and 50\,dB in our simulations. The maximum computing rate of a user device is $2\times10^9$\,cycles/s \cite{7914660}, the maximum computing rate of the MEC server is $20\times10^9$\,cycles/s \cite{7914660}, the size of each task is between 1 and 3\,M, the number of CPU cycles each task requires is between 1500 and 4500 \cite{7914660}, and $\kappa =10^{-11}$ \cite{miettinen2010energy}. There are two latency requirements, 50\,ms and 100\,ms.
\begin{figure}[t] \centering \vspace{-1.5em} \subfigure[SNR=20,30dB]{ \begin{minipage}[t]{1\linewidth} \centering \includegraphics[width=0.825\textwidth]{SNRM2030.png} \label{20} \end{minipage}% }% \quad \subfigure[SNR=40,50dB]{ \begin{minipage}[t]{1\linewidth} \centering \includegraphics[width=0.825\textwidth]{SNRM4050.png} \label{40} \end{minipage} }% \centering \vspace{-1em}\caption{\color{black}{Energy consumption of five different algorithms }} \label{20304050} \end{figure} In the experiment, the two proposed approaches are simulated alongside three baselines. Method 1 (Baseline 1): the most basic task offloading algorithm, without any optimization of transmission power or other parameters. Method 2 (Baseline 2): further optimizes the processing frequency and transmission power on top of Method 1 \cite{7524497}. Method 3 (Baseline 3): building on Methods 1 and 2, considers the impact of task splitting on task offloading \cite{8279411}. Method 4: considers the time-domain correlation of tasks to reduce the amount of data. Method 5: building on Method 3, considers the correlation both in the time domain and across different tasks to reduce the amount of data. {\color{black}{Fig. \ref{20} and Fig. \ref{40} show the energy consumption of the five algorithms on the local device when the SNR at the user terminal is 20\,dB, 30\,dB, 40\,dB, and 50\,dB, respectively. Fig. \ref{SNRmethod} shows the probability of task failure under different methods and SNRs. From Fig. \ref{20304050} and Fig. \ref{SNRmethod}, at the same SNR our proposed algorithms perform better in both energy saving and reliability. As the SNR increases, the energy cost of a given method may also increase; this is because we record the energy consumption of tasks whose failure is already known at the decision-making stage as zero.
The larger the SNR, the more tasks can be processed, which in turn causes more energy consumption.}} \section{Conclusion} In this paper, we studied how to improve the energy efficiency of local devices and reduce latency in the process of task offloading. A novel task offloading algorithm was proposed that exploits task correlation in both the time domain and the task domain to reduce the number of tasks to be transmitted. Moreover, a joint optimization of task allocation, transmission power, and clock frequency was studied. MATLAB-based simulation results demonstrate that the proposed task offloading algorithm reduces device energy consumption across various transmit-power-versus-noise-variance setups compared with conventional approaches. \begin{figure}[t] \centering \includegraphics[width=9.8cm,height=6.17 cm]{SNRmethod.png} \vspace{-1.8em}\caption{{\color{black}{ Probability of task processing failure under different methods and SNR}}} \label{SNRmethod} \end{figure} {\color{black}{\section*{Acknowledgment} The work was supported in part by the European Commission under the framework of the Horizon 2020 5G-Drive project, and in part by the 5G Innovation Centre (5GIC) HEFEC grant.}} \balance \bibliographystyle{IEEEtran}
\section{Introduction} The purpose of this paper is to present an introduction to the subject of the $Z_N$ symmetry\cite{zn} which is present in the study of $SU_N$ Gauge Theories at finite temperature. I begin with a brief review of the Euclidean Path Integral formulation of pure $SU_N$ Gauge Theories (without fermions) and of the Wilson--Polyakov Line which is used as an order parameter for this theory. This is followed by a discussion of the $Z_N$ symmetry, its relationship to Confinement and the breaking of this symmetry at high temperatures. $Z_N$ domain walls and $Z_N$ bubbles are then introduced. I then discuss what happens when fermions are introduced including the presence of metastable configurations in the Euclidean Partition Function. The paper concludes with a discussion of the physical interpretation of these metastable states, and of $Z_N$ domains in general, in Minkowski space. \section{Pure $SU_N$ Gauge Theories} Pure $SU_N$ Gauge Theories at a finite temperature $T=\beta^{-1}$ are usually studied via the Partition Function\cite{ftZ} \equation Z(\beta)~=~{\rm Tr}~{\rm e}^{-\beta H}~ \propto~\int_{A_i(\tau=0)=A_i(\tau=\beta)} {\cal D}A_\mu ~ {\rm exp}\left[-S_E(\beta)\right] \endequation with the Euclidean Action $S_E(\beta)$ given by \equation S_E(\beta)~=~{1\over g^2}\int_0^\beta d\tau\int d^3x~ {1\over 4}~F_{\mu\nu}^aF^{\mu\nu}_a \endequation where $F_{\mu\nu}^a=\partial_\mu A_\nu^a-\partial_\nu A_\mu^a +(A_\mu \times A_\nu)^a$ are the Field Strengths. The above expression for $Z(\beta)$ is derived by considering only the states $\vert\psi>$ which satisfy Gauss' Law $D_iE_i\vert\psi>=0$. It thus represents the Partition Function for the theory in the {\bf absence} of any external sources. The Free Energy of such a system is given by $F(\beta)=-(1/\beta){\rm log}\left[Z(\beta)\right]$. The Partition Function in the presence of a single external ``quark'' source (i.e.
a static source in the fundamental representation of $SU_N$) at a spatial point $\vec x$ is given by\cite{wpline} \equation Z_q(\beta)~ \propto~\int_{A_i(\tau=0)=A_i(\tau=\beta)} {\cal D}A_\mu ~ {\rm exp}\left[-S_E(\beta)\right] ~\times~ \left[{\rm Tr}~L(\vec x)\right] \endequation where \equation L(\vec x)~=~\left[ {1\over N}~P{\rm exp}\left( i\int_0^\beta A_0(\vec x,\tau) d\tau\right)\right] \endequation is called the Wilson--Polyakov Line. Under a gauge transformation $U(\vec x, \tau)$, $L(\vec x)$ transforms as \equation L(\vec x)\rightarrow U(\vec x,0)L(\vec x)U^\dagger(\vec x,\beta) \endequation It thus follows that ${\rm Tr}~L(\vec x)$ is invariant under gauge transformations which are {\bf periodic} in time i.e. $U(\vec x,0)=U(\vec x,\beta)$. The increase in Free Energy $\delta F$ when adding such an external source is thus given by \equation {\rm e}^{-\beta\left(\delta F\right)}~=~\langle{\rm Tr}~L(\vec x) \rangle~=~ {{Z_q(\beta)}\over{Z(\beta)}} \endequation Similarly the excess Free Energy $V(\vert\vec x-\vec y\vert)$ of a ``quark'' and an ``antiquark'' source at locations $\vec x$ and $\vec y$ respectively is given by \equation \langle {\rm Tr}~L^\dagger(\vec y)~{\rm Tr}~L(\vec x)\rangle \propto {\rm e}^{-\beta~V(\vert\vec x-\vec y\vert)} \endequation It follows from the above discussion that $\langle {\rm Tr}~L(\vec x)\rangle$ is a useful order parameter for distinguishing a confining from a deconfining phase in Gauge Theories without fermions. If $\langle {\rm Tr}~L\rangle=0$ then $\delta F$ is infinite and it costs an infinite amount of energy to introduce a single source. Furthermore $\langle {\rm Tr}~L^\dagger (\vec y)~{\rm Tr}~L(\vec x)\rangle\rightarrow 0$ as $\vert\vec x-\vec y\vert \rightarrow \infty$ so that $V(\vert\vec x-\vec y\vert)\rightarrow \infty$ at large distances. This signals a confining phase. If, on the other hand, $\langle {\rm Tr}~L\rangle \ne 0$ then the potential energy $V$ is finite at large distance.
This signals a deconfining phase. \vfill \eject \section {$Z_N$ Symmetry} It so happens that the Euclidean Path Integral possesses a discrete symmetry which implies that $\langle {\rm Tr}~L\rangle = 0$. For $SU_N$ this symmetry transforms \equation L\rightarrow {\rm e}^{2\pi i {\rm k}/N} \times L \label{ltrans} \endequation for $k=1,2...N$ leaving the action $S_E$ invariant. To see this note that in our formalism the Partition Function is a sum over only {\bf periodic} Gauge Potentials $A_i$. If, however, we consider a nonperiodic Gauge Transformation $U(\vec x,\tau)$ such that \equation U(\vec x,\tau=0)=I;~~~~U(\vec x,\tau=\beta)=M(\vec x)\ne I \label{zntrans} \endequation then the periodic boundary conditions on $A_i$ will be maintained if and only if $M(\vec x)$ commutes with {\bf any} $A_i$. This can only happen if $M$ is in the Center of the group $SU_N$ \equation M= {\rm e}^{2\pi i {\rm k}/N} \times I \endequation for $k=1,2...N$. This is the group $Z_N$. Under this transformation both the action and the boundary conditions are invariant but the order parameter $\langle {\rm Tr}~L\rangle$ is not invariant but transforms as in Eq. (\ref{ltrans}). This then implies that $\langle {\rm Tr}~L\rangle =0$ which seems to imply confinement. The loophole is that this symmetry is a discrete, global symmetry and thus it is entirely possible that this symmetry is spontaneously broken at some temperatures. It thus follows that the issue of Confinement is intimately tied to the question of whether the $Z_N$ symmetry is spontaneously broken. Spontaneous breaking of the symmetry leads to $\langle {\rm Tr}~L\rangle \ne 0$ which signals a deconfined phase. It has been shown\cite{znbrkg} that the $Z_N$ symmetry is indeed broken in weak coupling perturbation theory which is expected to be valid at high temperatures. The basic reason for this is that perturbation theory is an expansion around $A_\mu =0$ which implies that $\langle {\rm Tr}~L\rangle=1$-(perturbative corrections). 
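The step from the transformation law (\ref{ltrans}) to $\langle {\rm Tr}~L\rangle =0$ amounts to averaging over the $N$ center phases, which can be checked numerically. A minimal sketch (the function name is ours):

```python
# If the vacuum averages over all N center phases, <Tr L> picks up a factor
# (1/N) * sum_k exp(2*pi*i*k/N), which vanishes for every N >= 2.
import cmath

def center_phase_average(N):
    return sum(cmath.exp(2j * cmath.pi * k / N) for k in range(1, N + 1)) / N
```

The vanishing of this average is what forces $\langle {\rm Tr}~L\rangle =0$ whenever the $Z_N$ symmetry is unbroken.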
Perturbative calculations of the Effective Potential for $A_0$ or, equivalently for $L$ show a minimum at $\langle {\rm Tr}~L\rangle=1$ and at the $Z_N$ symmetric points $\langle {\rm Tr}~L\rangle={\rm exp}(2\pi i{\rm k}/N)$. A typical curve (for $SU_2$) is shown in Figure 1. The presence of this minimum in the deconfined (high temperature) phase and its absence in the confined (low temperature) phase is confirmed by numerical Lattice simulations. It is interesting to note that, unlike most other symmetries, this $Z_N$ symmetry is broken in the low temperature phase and unbroken in the high temperature phase. \section{Domain Walls and Bubbles} The form of the Effective Potential shown in Figure 1 in the high temperature phase implies the existence of domain walls in the Euclidean Path Integral. If we imagine forcing boundary conditions on our system such that for $x_3\rightarrow -\infty$ the system sits near the minimum at $\langle {\rm Tr}~L\rangle=1$ whereas for $x_3\rightarrow +\infty$ the system is near another of the minima of the Effective Potential, say at $\langle {\rm Tr}~L\rangle= {\rm exp}(2\pi i{\rm k}/N)$ then there will be a Domain Wall separating these two ``vacua''. Calculations of these Domain Wall energies have been carried out both in perturbation theory\cite{pka} and numerically and they will be discussed in another paper at this workshop by C. Korthals--Altes. \vspace{1in} \epsfysize=4in \epsfbox[-125 0 624 576]{fig1.ps} \vspace{-1in} \centerline{\bf Figure 1} \centerline{\bf Free energy versus $q\propto {\rm log} \langle {\rm Tr}~L\rangle$ for $SU(2)$ with no Fermions} \vskip .4in A consequence of the existence of these domain walls is that there also exist $Z_N$ bubbles. These bubbles are regions in {\bf space} for which $\langle {\rm Tr}~L\rangle \simeq 1$ outside the bubble but at the center of the bubble $\langle {\rm Tr}~L\rangle \simeq {\rm exp}(2\pi i{\rm k}/N)$.
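For comparison, the shape of the $SU_2$ curve in Figure 1 in the deconfined phase can be sketched from the one-loop (Weiss) potential quoted in the literature, $V(q)/T^4=(4\pi^2/3)\,q^2(1-q)^2$ with ${\rm Tr}~L=\cos(\pi q)$; the normalization here is an assumption taken from outside this paper, but the locations of the extrema are robust:

```python
# One-loop effective potential for SU(2) (Weiss form; normalization assumed).
# q parametrizes the background A_0, with Tr L = cos(pi * q) for q in [0, 1].
import math

def weiss_su2(q):
    return (4.0 * math.pi**2 / 3.0) * q**2 * (1.0 - q)**2

qs = [i / 100.0 for i in range(101)]
vals = [weiss_su2(q) for q in qs]
# Degenerate minima at q = 0 and q = 1 (the two Z_2 points, Tr L = +/-1),
# with the maximum at the confining point q = 1/2 (Tr L = 0).
```

The two degenerate minima at the $Z_2$ points and the barrier between them are exactly what makes the domain walls discussed next possible.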
As the temperature $T$ decreases it becomes more likely to form these bubbles. This happens because the probability to form a bubble is proportional to ${\rm exp}(-S/g^2)$. The action is proportional to $T^4$ and $g^2$ increases as $T$ is decreased. As the temperature is lowered more of these bubbles will be present and they will eventually ``randomize'' $L$ until at some critical temperature the symmetry is restored and $\langle {\rm Tr}~L\rangle =0$. This is the confining--deconfining transition. \section {Gauge Theories with Fermions} The situation described above for pure Gauge Theories changes significantly when Fermions (in the fundamental representation of $SU_N$) are introduced. We consider a theory with $N_f$ flavours of ``quarks'' described by Fermionic fields $\psi$. The Partition Function is given by \equation Z_f(\beta)~=~{\rm e}^{-\beta F_f(\beta)}~ \propto~\int_{\buildrel{A_i(0)=A_i(\beta)}\over {\psi(0)=-\psi(\beta)}} {\cal D}A_\mu {\cal D}\psi {\cal D}\psi^\dagger ~ {\rm exp}\left[-S_E^f(\beta)\right] \endequation where $S_E^f(\beta)$ is the usual Euclidean Action for QCD. Note the antiperiodic boundary conditions which $\psi$ satisfies. We expect, both on physical and on mathematical grounds, that $\langle {\rm Tr}~L\rangle \ne 0$ in this case. Physically this is a result of the fact that the potential energy $V(r)$ of two static sources separated by a distance $r$ does not grow as $r\rightarrow\infty$ due to screening by the dynamical fermions. Furthermore the ground state in the presence of a single quark source has a finite energy since it includes a dynamical antiquark to which it is bound. This physical expectation is realized mathematically by the fact that there is no $Z_N$ symmetry for this system and thus there is no mathematical reason to suppose that $\langle {\rm Tr}~L\rangle = 0$. (There is, in fact, no known order parameter which distinguishes a confining from a deconfining phase in the presence of quarks.)
The reason for the loss of $Z_N$ symmetry is that even though both the Action and the periodic boundary conditions on $A_i$ are maintained by the transformation (\ref{zntrans}) the antiperiodic boundary conditions on $\psi$ are not preserved since \equation \psi(\beta)\rightarrow U(\beta)\psi(\beta) = - {\rm e}^{2\pi i{\rm k}/N}\psi(0) \endequation which is a simple reflection of the fact that fields transforming under the fundamental representation of $SU_N$ are not invariant under its center. This lack of $Z_N$ invariance can be vividly demonstrated by computing the Effective Potential for $\langle {\rm Tr}~L\rangle $. The specific case of $N=3$ is plotted in Figure 2 for various values of $N_f$. In this figure $F_f/T^4$ is plotted as a function of a variable $q\propto {\rm log}(\langle {\rm Tr}~L\rangle)$. Note the presence of ``metastable minima'' at $q=1$ and $2$. These minima actually become maxima for sufficiently large $N_f$ although this is not shown in the figure. \section {Interpretation of the $Z_N$ Domains in Minkowski Space} It is tempting to treat the metastable extrema of an Effective Potential such as that of Figure 2 as a physical metastable state in Minkowski space. There have been several attempts to do this and interesting physical and cosmological consequences of this have been suggested\cite{dolk}. Despite these attempts I believe that these metastable {\bf Euclidean} extrema have no {\bf direct} physical interpretation\cite{bksw}. First note that if we {\bf do} interpret these metastable states physically we run into very serious trouble. The reason is that the Free Energy $F_f\propto T^4$. For example for $SU_3$ with $N_f=4$, the metastable extremum at $q=1$ has a free energy $F=\gamma T^4$ {\bf with $\gamma > 0$}. This happens for a very large class of values for $N_f$ and $N$. This is a disaster if these points represent true metastable states of the system. 
The reason is that if $F=\vert \gamma \vert T^4$ then the pressure $p=-\vert \gamma \vert T^4$, the internal energy $E=-3\vert \gamma \vert T^4$, the specific heat $c=-12\vert \gamma \vert T^3$ but worst of all the entropy \equation S~=~{{E-F}\over T}~=~-4\vert \gamma \vert T^3 \endequation This negative entropy implies, among other things, that there is less than one state available to the system. The above values for the thermodynamic quantities are clearly unphysical. \epsfysize=4in \epsfbox[-125 0 624 576]{fig2.ps} \vskip .2in \centerline{\bf Figure 2} \centerline{\bf Free energy versus $q\propto {\rm log} \langle {\rm Tr}~L\rangle$ for $SU(3)$ with various numbers} \centerline{\bf of fermion flavours} \vskip .4in This problem (which is caused by the ``wrong'' sign of $F$) can be {\bf artificially} fixed by adding extra particles to the system whose entropy $S>0$. This is what happens, for example, in the Standard Electroweak Model. But this does not solve the fundamental problem that a physical interpretation of these states is untenable. The basic problem is that $\langle {\rm Tr}~L\rangle$ is {\bf fundamentally} a Euclidean object. It is, in fact, nonlocal in Euclidean time. In fact a Euclidean $A_0\ne 0$ (which leads to a nontrivial $\langle {\rm Tr}~L\rangle)$ corresponds to an {\bf imaginary} $A_0$ in Minkowski space. Thus in Minkowski space a constant $A_0$ looks like a purely imaginary chemical potential for ``color'' charge! This is clearly unphysical for a Minkowski object. In fact if we use as an example the case when $N$ is even and $q=N/2$ (i.e.\ $\langle {\rm Tr}~L\rangle = -1$) then the Fermi distribution in such a constant background $A_0$ field turns into a Bose distribution but with \equation n(p)~=~ -{1\over{{\rm e}^{\beta E(p)}-1}} \endequation Note the minus sign in front of the expression! This is related to the negative entropy of the system.
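The quoted thermodynamic quantities follow directly from treating $F=\vert\gamma\vert T^4$ as a free-energy density (a sketch, with volume factors suppressed):

```latex
p = -F = -\vert\gamma\vert T^4, \qquad
S = -\frac{\partial F}{\partial T} = -4\vert\gamma\vert T^3, \\
E = F + TS = \vert\gamma\vert T^4 - 4\vert\gamma\vert T^4
  = -3\vert\gamma\vert T^4, \qquad
c = \frac{\partial E}{\partial T} = -12\vert\gamma\vert T^3 .
```

Each quantity inherits the ``wrong'' sign from $F>0$, which is the source of the negative entropy emphasized above.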
The lesson of the above discussion is that $Z_N$ bubbles should {\bf not} be interpreted as physical bubbles any more than Instantons should be interpreted as real Minkowski objects. But these $Z_N$ bubbles {\bf do} contribute to the Partition Function and to expectation values of observables if they are calculated using the Euclidean Path Integral. They should thus be included in a non--perturbative analysis of the thermodynamics of QCD. The physical interpretation of these bubbles is further discussed in a paper by A.V. Smilga presented at this Workshop. \section{Summary} \noindent -- The $Z_N$ symmetry for $SU_N$ gauge theories plays an important role in the {\bf Euclidean} analysis of its thermodynamics. \medskip \noindent -- $Z_N$ bubbles are likely to play a role in the confining--deconfining phase transition. \medskip \noindent -- Fermions in the Fundamental representation break the $Z_N$ symmetry. \medskip \noindent -- The resulting metastable extrema of the Effective Potential should {\bf not} be interpreted as physically attainable states nor should the $Z_N$ bubbles which attain these extrema in their cores. \medskip \noindent -- These metastable extrema {\bf do} however contribute to the thermodynamics of the system. \vspace{1cm} \noindent {\bf REFERENCES}
\section{Introduction} \label{sec:intro} A major open question in the study of exoplanets is the origin of their apparent obliquity properties---the distribution of the angle $\lambda$ between the stellar spin and the planet's orbital angular momentum vectors as projected on the sky (see, e.g., the review by \citealt{WinnFabrycky15}). Measurements of the Rossiter--McLaughlin effect in hot Jupiters (HJs, defined here as planets with masses $M_\mathrm{p}\gtrsim0.3\,M_\mathrm{J}$ that have orbital periods $P_\mathrm{orb} \lesssim 10\,$days) have indicated that $\lambda$ spans the entire range from~$0^\circ$ to~$180^\circ$, in stark contrast with the situation in the solar system (where the angle between the planets' total angular momentum vector and that of the Sun is only $\sim$$6^\circ$). In addition, there is a marked difference in the distribution of $\lambda$ between G~stars, where $\sim$$1/2$ of systems are well aligned ($\lambda < 20^\circ$) and the rest are spread out roughly uniformly over the remainder of the $\lambda$ range, and F~stars of effective temperature $T_\mathrm{eff} \gtrsim 6250\,$K, which exhibit only a weak excess of well-aligned systems. There is, however, also evidence for a dependence of the obliquity distribution on the properties of the planets and not just on those of the host star; in particular, only planets with $M_\mathrm{p} < 3\,M_\mathrm{J}$ have apparent retrograde orbits ($\lambda > 90^\circ$). Various explanations have been proposed to account for the broad range of observed obliquities, but the inferred dependences on $T_\mathrm{eff}$ and $M_\mathrm{p}$ provide strong constraints on a viable model. In one scenario \cite[][]{Winn+10, Albrecht+12}, HJs arrive in the vicinity of the host star on a misaligned orbit and subsequently act to realign the host through a tidal interaction, which is more effective in cool stars than in hot ones. 
In this picture, HJs form at large radii and either migrate inward through their natal disk while maintaining nearly circular orbits or are placed on a high-eccentricity orbit after the gaseous disk dissipates---which enables them to approach the center and become tidally trapped by the star (with their orbits getting circularized by tidal friction; e.g., \citealt{FordRasio06}).\footnote{ The possibility of HJs forming at their observed locations has also been considered in the literature \citep[e.g.,][]{Boley+16,Batygin+16}, but the likelihood of this scenario is still being debated.} The processes that initiate high-eccentricity migration (HEM), which can be either planet--planet scattering \citep[e.g.,][]{Chatterjee+08, JuricTremaine08, BeaugeNesvorny12} or secular interactions that involve a stellar binary companion or one or more planetary companions (such as Kozai-Lidov oscillations --- e.g., \citealt{WuMurray03, FabryckyTremaine07, Naoz+11, Petrovich15b}---and secular chaos---e.g., \citealt{WuLithwick11, LithwickWu14, Petrovich15a, Hamers+17}), all give rise to HJs with a distribution of misaligned orbits. In the case of classical disk migration, the observed obliquities can be attributed to a primordial misalignment of the natal disk that occurred during its initial assembly from a turbulent interstellar gas \citep[e.g.,][]{Bate+10, Fielding+15} or as a result of magnetic and/or gravitational torques induced, respectively, by a tilted stellar dipolar field and a misaligned companion \citep[e.g.,][]{Lai+11, Batygin12, BatyginAdams13, Lai14, SpaldingBatygin14}. The tidal realignment hypothesis that underlies the above modeling framework was challenged by the results of \citet{Mazeh+15}, who examined the rotational photometric modulations of a large number of {\it Kepler}\/ sources. 
Their analysis indicated that the common occurrence of aligned systems around cool stars characterizes the general population of planets and not just HJs, and, moreover, that this property extends to orbital periods as long as $\sim$$50\,$days, about an order of magnitude larger than the maximum value of $P_\mathrm{orb}$ for which tidal interaction with the star remains important. To reconcile this finding with the above scenario, \citet{MatsakosKonigl15} appealed to the results of planet formation and evolution models, which predict that giant planets form efficiently in protoplanetary disks and that most of them migrate rapidly to the disk's inner edge, where, if the arriving planet's mass is not too high ($\lesssim 1\,M_\mathrm{J}$), it could remain stranded near that radius for up to $\sim$$1\,$Gyr---until it gets tidally ingested by the host star. They proposed that the ingestion of a stranded HJ (SHJ)---which is accompanied by the transfer of its orbital angular momentum to the star---is the dominant spin-realignment mechanism. In this picture, the dichotomy in the obliquity properties between cool and hot stars is a direct consequence of the higher efficiency of magnetic braking and lower moment of inertia of the former in comparison with the latter. By applying a simple dynamical model to the observed HJ distributions in~G and F~stars, \citet{MatsakosKonigl15} inferred that $\sim$50\% of planetary systems harbor an SHJ with a typical mass of $\sim$$0.6\,M_\mathrm{J}$. 
In this picture, the obliquity properties of currently observed HJs---and the fact that they are consistent with those of lower-mass and more distant planets---are most naturally explained if most of the planets in a given system---including any SHJ that may have been present---are formed in, and migrate along the plane of, a primordially misaligned disk.\footnote{ This explanation does not necessarily imply that all planets that reached the vicinity of the host star must have moved in by classical migration, although SHJs evidently arrived in this way. In fact, \citet{MatsakosKonigl16} inferred that most of the planets that delineate the boundary of the so-called sub-Jovian desert in the orbital-period--planet-mass plane got in by a secular HEM process (one that, however, did not give rise to high orbital inclinations relative to the natal disk plane).} This interpretation is compatible with the properties of systems like Kepler-56, in which two close-in planets have $\lambda \approx 45^\circ$ and yet are nearly coplanar \citep{Huber+13}, and 55~Cnc, a coplanar five-planet system with $\lambda \approx 72^\circ$ \citep[e.g.,][]{Kaib+11, BourrierHebrard14}.\footnote{ The two-planet system KOI-89 \citep{Ahlers+15} may be yet another example.} It is also consistent with the apparent lack of a correlation between the obliquity properties of observed HJs and the presence of a massive companion \citep[e.g.,][]{Knutson+14, Ngo+15, Piskorz+15}. In this paper we explore a variant of the primordial disk misalignment model first proposed by \citet{Batygin12}, in which, instead of the tilting of the entire disk by a distant ($\sim$500\,au) stellar companion on an inclined orbit, we consider the gravitational torque exerted by a much closer ($\sim$5\,au) \emph{planetary} companion on such an orbit, which acts to misalign \emph{only the inner region} of the protoplanetary disk. 
This model is motivated by the inferences from radial velocity surveys and adaptive-optics imaging data (\citealt{Bryan+16}; see also \citealt{Knutson+14}) that $\sim$70\% of planetary systems harboring a transiting HJ have a companion with mass in the range 1--13\,$M_\mathrm{J}$ and semimajor axis in the range $1$--$20$\,au, and that $\sim$50\% of systems harboring one or two planets detected by the radial velocity method have a companion with mass in the range $1$--$20\,M_\mathrm{J}$ and semimajor axis in the range $5$--$20$\,au. Further motivation is provided by the work of \citet{LiWinn16}, who re-examined the photometric data analyzed by \citet{Mazeh+15} and found indications that the good-alignment property of planets around cool stars does not hold for large orbital periods, with the obliquities of planets with $P_\mathrm{orb} \gtrsim 10^2\,$days appearing to tend toward a random distribution. One possible origin for a giant planet on an inclined orbit with a semimajor axis $a$ of a few au is planet--planet scattering in the natal disk. Current theories suggest that giant planets may form in tightly packed configurations that can become dynamically unstable and undergo orbit crossing (see, e.g., \citealt{Davies+14} for a review). The instabilities start to develop before the gaseous disk component dissipates \citep[e.g.,][]{Matsumura+10, Marzari+10}, and it has been argued \citep{Chatterjee+08} that the planet--planet scattering process may, in fact, peak before the disk is fully depleted of gas (see also \citealt{Lega+13}). A close encounter between two giant planets is likely to result in a collision if the ratio $(M_\mathrm{p}/M_*)(a/R_\mathrm{p})$ (the Safronov number) is $< 1$ (where $M_*$ is the stellar mass and $R_\mathrm{p}$ is the planet's radius), and in a scattering if this ratio is $> 1$ \citep[e.g.,][]{FordRasio08}. 
The scattering efficiency is thus maximized when a giant planet on a comparatively wide orbit is involved \citep[cf.][]{Petrovich+14}. High inclinations might also be induced by resonant excitation in giant planets that become trapped in a mean-motion resonance through classical (Type II) disk migration \citep{ThommesLissauer03, LibertTsiganis09}, and this process could, moreover, provide an alternative pathway to planet--planet scattering \citep{LibertTsiganis11}. In these scenarios, the other giant planets that were originally present in the disk can be assumed to have either been ejected from the system in the course of their interaction with the remaining misaligned planet or else reached the star at some later time through disk migration. As we show in this paper, a planet on an inclined orbit can have a significant effect on the orientation of the disk region interior to its orbital radius when the mass of that region decreases to the point where the inner disk's angular momentum becomes comparable to that of the planet. For typical mass depletion rates in protoplanetary disks \citep[e.g.,][] {BatyginAdams13}, this can be expected to happen when the system's age is $\sim$$10^6$--$10^7\,$yr, which is comparable to the estimated formation time of Jupiter-mass planets at $\gtrsim 5\,$au. In the proposed scenario, a planet of mass $M_\mathrm{p} \gtrsim M_\mathrm{J}$ is placed on a high-inclination orbit at a time $t_0 \gtrsim 1\,$Myr that, on the one hand, is late enough for the disk mass interior to the planet's location to have decreased to a comparable value, but that, on the other hand, is early enough for the inner disk to retain sufficient mass after becoming misaligned to enforce the orbital misalignment of existing planets and/or form new planets in its reoriented orbital plane (including any Jupiter-mass planets destined to become an HJ or an SHJ). 
The dynamical model adopted in this paper is informed by the smoothed-particle-hydrodynamics simulations carried out by \citet{Xiang-GruessPapaloizou13}. They considered the interaction between a massive ($1$--$6\,M_\mathrm{J}$) planet that is placed on an inclined, circular orbit of radius $5$\,au and a low-mass ($0.01\,M_*$) protoplanetary disk that extends to $25$\,au. A key finding of these simulations was that the disk develops a warped structure, with the regions interior and exterior to the planet's radial location behaving as separate, rigid disks with distinct inclinations; in particular, the inner disk was found to exhibit substantial misalignment with respect to its initial direction when the planet's mass was large enough and its initial inclination was intermediate between the limits of $0^\circ$ and $90^\circ$ at which no torque is exerted on the disk. Motivated by these results, we construct an analytic model for the gravitational interaction between the planet and the two separate parts of the disk. The general effect of an interaction of this type between a planet on an inclined orbit and a rigid disk is to induce a precession of the planet's orbit about the total angular momentum vector. In contrast with \citet{Xiang-GruessPapaloizou13}, whose simulations only extended over a fraction of a precession period, we consider the long-term evolution of such systems. In particular, we use our analytic model to study how the ongoing depletion of the disk's mass affects the orbital orientations of the planet and of the disk's two parts. We describe the model in Section~\ref{sec:model} and present our calculations in Section~\ref{sec:results}. We discuss the implications of these results for planet obliquity measurements and for the alignment properties of debris disks in Section~\ref{sec:discussion}, and summarize in Section~\ref{sec:conclusion}. 
\section{Modeling approach} \label{sec:model} \subsection{Assumptions} \label{subsec:assumptions} \begin{figure} \includegraphics[width=\columnwidth]{initial_fig1.eps} \caption{ Schematic representation (not to scale) of the initial configuration of our model. See text for details. \label{fig:initial}} \end{figure} The initial configuration that we adopt is sketched in Figure~\ref{fig:initial}. We consider a young star (subscript s) that is surrounded by a Keplerian accretion disk, and a Jupiter-mass planet (subscript p) on a circular orbit. The disk consists of two parts: an inner disk (subscript d) that extends between an inner radius $r_\mathrm{d,in}$ and an outer radius $r_\mathrm{d,out}$, and an outer disk (subscript h) that extends between $r_\mathrm{h,in}$ and $r_\mathrm{h,out}$; they are separated by a narrow gap that is centered on the planet's orbital radius $a$. The two parts of the disk are initially coplanar, with their normals aligned with the stellar angular momentum vector $\boldsymbol{S}$, whereas the planet's orbital angular momentum vector $\boldsymbol{P}$ is initially inclined at an angle $\psi_\mathrm{p0}$ with respect to $\boldsymbol{S}$ (where the subscript $0$ denotes the time $t = t_0$ at which the planet is placed on the inclined orbit). We assume that, during the subsequent evolution, each part of the disk maintains a flat geometry and precesses as a rigid body. 
The rigidity approximation is commonly adopted in this context and is attributed to efficient communication across the disk through the propagation of bending waves or the action of a viscous stress (e.g., \citealt{Larwood+96}; see also \citealt{Lai14} and references therein).\footnote{ One should, however, bear in mind that real accretion disks are inherently fluid in nature and therefore cannot strictly obey the rigid-body approximation; see, e.g., \citet{Rawiraswattana+16}.} Based on the simulation results presented in \citet{Xiang-GruessPapaloizou13}, we conjecture that this communication is severed at the location of the planet. This outcome is evidently the result of the planet's opening up a gap in the disk, although it appears that the gap need not be fully evacuated for this process to be effective. In fact, the most strongly warped simulated disk configurations correspond to comparatively high initial inclination angles, for which the planet spends a relatively small fraction of the orbital time inside the disk, resulting in gaps that are less deep and wide than in the fully embedded case. Our calculations indicate that, during the disk's subsequent evolution, its inner and outer parts may actually detach as a result of the precessional oscillation of the inner disk. This oscillation is particularly strong in the case of highly mass-depleted disks on which we focus attention in this paper: in the example shown in Figure~\ref{fig:all-m} below, the initial amplitude of this oscillation is $\sim$$40^\circ$. The planet's orbital inclination is subject to damping by dynamical friction \citep{Xiang-GruessPapaloizou13}, although the damping rate is likely low for the high values of $\psi_\mathrm{p0}$ that are of particular interest to us \citep{Bitsch+13}. 
Furthermore, in cases where the precessional oscillation of the inner disk causes the disk to split at the orbital radius of the planet, one can plausibly expect the local gas density to become too low for dynamical friction to continue to play a significant role on timescales longer than the initial oscillation period ($\sim$$10^4$\,yr for the example shown in Figure~\ref{fig:all-m}). In light of these considerations, and in the interest of simplicity, we do not include the effects of dynamical friction in any of our presented models. As a further simplification, we assume that the planet's orbit remains circular. A planet ejected from the disk by either of the two mechanisms mentioned in Section~\ref{sec:intro} may well have a nonnegligible initial orbital eccentricity. However, the simulations performed by \citet{Bitsch+13} indicate that the dynamical friction process damps eccentricities much faster than inclinations, so that the orbit can potentially be circularized on a timescale that is shorter than the precession time (i.e., before the two parts of the disk can become fully separated). On the other hand, even if the initial eccentricity is zero, it may be pumped up by the planet's gravitational interaction with the outer disk if $\psi_\mathrm{p0}$ is high enough ($\gtrsim 20^\circ$; \citealt{Teyssandier+13}). This is essentially the Kozai-Lidov effect, wherein the eccentricity undergoes periodic oscillations in antiphase with the orbital inclination \citep{TerquemAjmia10}. These oscillations were noticed in the numerical simulations of \citet{Xiang-GruessPapaloizou13} and \citet{Bitsch+13}. Their period can be approximated by $\tau_\mathrm{KL} \sim (r_\mathrm{h,out}/ r_\mathrm{h,in})^2 (2\pi/|\Omega_\mathrm{ph}|)$ \citep{TerquemAjmia10}, where we used the expression for the precession frequency $\Omega_\mathrm{ph}$ (Equation~(\ref{eq:omega_ph})) that corresponds to the torque exerted by the outer disk on the misaligned planet. 
For the parameters of the representative mass-depleted disk model shown in Figure~\ref{fig:all-m}, $\tau_\mathrm{KL} \sim 10^6$\,yr. This time is longer by a factor of $\sim$$10^2$ than the initial precession period of the inner disk in this example, implying that the Kozai-Lidov process will have little effect on the high-amplitude oscillations of $\psi_\mathrm{p}$. Kozai-Lidov oscillations might, however, modify the details of the long-term behavior of the inner disk, since $\tau_\mathrm{KL}$ is comparable to the mass-depletion time $\tau$ (Equation~(\ref{eq:deplete})) that underlies the secular evolution of the system. Our model takes into account the tidal interaction of the spinning star with the inner and outer disks and with the planet, which was not considered in the aforementioned simulations. The inclusion of this interaction is motivated by the finding \citep{BatyginAdams13, Lai14, SpaldingBatygin14} that an evolving protoplanetary disk with a binary companion on an inclined orbit can experience a resonance between the disk precession frequency (driven by the companion) and the stellar precession frequency (driven by the disk), and that this resonance crossing can generate a strong misalignment between the angular momentum vectors of the disk and the star. As it turns out (see Section~\ref{sec:results}), in the case that we consider---in which the companion is a Jupiter-mass planet with an orbital radius of a few au rather than a solar-mass star at a distance of a few hundred au---this resonance is not encountered. We also show that, even in the case of a binary companion, the misalignment effect associated with the resonance crossing is weaker than that inferred in the above works when one also takes into account the torque that the \emph{star} exerts on the inner disk (see Appendix~\ref{app:resonance}). 
\subsection{Equations} We model the dynamics of the system by following the temporal evolution of the angular momenta ($\boldsymbol{S}$, $\boldsymbol{D}$, $\boldsymbol{P}$, and $\boldsymbol{H}$) of the four constituents (the star, the inner disk, the planet, and the outer disk, respectively) due to their mutual gravitational torques. Given that the orbital period of the planet is much shorter than the characteristic precession time scales of the system, we approximate the planet as a ring of uniform density, with a total mass equal to that of the planet and a radius equal to its semimajor axis. The evolution of the angular momentum $\boldsymbol L_k$ of an object $k$ under the influence of a torque $\boldsymbol T_{ik}$ exerted by an object $i$ is given by $d\boldsymbol L_k/dt = \boldsymbol T_{ik}$. The set of equations that describes the temporal evolution of the four angular momenta is thus \begin{equation} \frac{d\boldsymbol S}{dt} = \boldsymbol T_\mathrm{ds} + \boldsymbol T_\mathrm{ps} + \boldsymbol T_\mathrm{hs}\,, \end{equation} \begin{equation} \frac{d\boldsymbol D}{dt} = \boldsymbol T_\mathrm{sd} + \boldsymbol T_\mathrm{pd} + \boldsymbol T_\mathrm{hd}\,, \end{equation} \begin{equation} \frac{d\boldsymbol P}{dt} = \boldsymbol T_\mathrm{sp} + \boldsymbol T_\mathrm{dp} + \boldsymbol T_\mathrm{hp}\,, \end{equation} \begin{equation} \frac{d\boldsymbol H}{dt} = \boldsymbol T_\mathrm{sh} + \boldsymbol T_\mathrm{dh} + \boldsymbol T_\mathrm{ph}\,, \end{equation} where $\boldsymbol T_{ik} = -\boldsymbol T_{ki}$. The above equations can also be expressed in terms of the precession frequencies $\Omega_{ik}$: \begin{equation} \frac{d\boldsymbol L_k}{dt} = \sum_i\boldsymbol T_{ik} = \sum_i\Omega_{ik}\frac{\boldsymbol L_i\times\boldsymbol L_k}{J_{ik}}\,, \label{eq:precession} \end{equation} where $J_{ik} = |\boldsymbol L_i + \boldsymbol L_k| = (L_i^2 + L_k^2 + 2L_iL_k\cos{\theta_{ik}})^{1/2}$ and $\Omega_{ik} = \Omega_{ki}$. 
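As a concreteness check on the structure of Equation~(\ref{eq:precession}), the reduced disk--planet pair can be integrated directly. The following sketch (not from the paper; the frequency $\Omega_\mathrm{dp}$ is held constant at a placeholder value rather than evaluated from the appendix expression, and the angular momentum magnitudes are in arbitrary units) advances $\boldsymbol{D}$ and $\boldsymbol{P}$ with a fourth-order Runge--Kutta step and illustrates that the equal-and-opposite torques conserve $\boldsymbol{J}_\mathrm{dp} = \boldsymbol{D} + \boldsymbol{P}$, the individual magnitudes, and the mutual angle $\theta_\mathrm{dp}$, while both vectors precess about $\boldsymbol{J}_\mathrm{dp}$:

```python
import numpy as np

# Placeholder precession frequency (~9,000-yr period, cf. model DP-M);
# in the full model Omega_dp depends on the system parameters.
OMEGA_DP = 2 * np.pi / 9.0e3   # rad / yr

def derivs(D, P):
    """dD/dt and dP/dt from the mutual torque pair T_pd = -T_dp."""
    J = np.linalg.norm(D + P)
    dD = OMEGA_DP * np.cross(P, D) / J
    dP = OMEGA_DP * np.cross(D, P) / J
    return dD, dP

psi_p0 = np.radians(60.0)                        # initial orbital tilt
D = np.array([0.0, 0.0, 1.3])                    # disk along the z axis
P = np.array([0.0, np.sin(psi_p0), np.cos(psi_p0)])
D0, P0 = D.copy(), P.copy()

dt = 10.0                                        # yr
for _ in range(2000):                            # ~2.2 precession periods
    k1 = derivs(D, P)
    k2 = derivs(D + 0.5 * dt * k1[0], P + 0.5 * dt * k1[1])
    k3 = derivs(D + 0.5 * dt * k2[0], P + 0.5 * dt * k2[1])
    k4 = derivs(D + dt * k3[0], P + dt * k3[1])
    D = D + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    P = P + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6

# Antisymmetry of the torques conserves J_dp = D + P to machine precision,
# while |D|, |P|, and the mutual angle theta_dp are also invariant.
```

Since the depletion terms are switched off here, this reproduces the rigid precession that dominates the evolution on timescales much shorter than $\tau$.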
In Appendix~\ref{app:torques} we derive analytic expressions for the torques $\boldsymbol T_{ik}$ and the corresponding precession frequencies $\Omega_{ik}$. \subsection{Numerical Setup} The host is assumed to be a protostar of mass $M_* = M_\odot$, radius $R_* = 2R_\odot$, rotation rate $\Omega_* = 0.1(GM_*/R_*^3)^{1/2}$, and angular momentum \begin{eqnarray} S &=& k_*M_*R_*^2\Omega_* = 1.71 \times 10^{50}\\ &\times& \left(\frac{k_*}{0.2}\right) \left(\frac{M_*}{M_\odot}\right) \left(\frac{R_*}{2R_\odot}\right)^2 \left(\frac{\Omega_*}{0.1\sqrt{GM_\odot/(2R_\odot)^3}}\right)\, \mathrm{erg\,s}\nonumber\,, \end{eqnarray} where $k_* \simeq 0.2$ for a fully convective star (modeled as a polytrope of index $n = 1.5$). The planet is taken to have Jupiter's mass and radius, $M_\mathrm{p} = M_\mathrm{J}$ and $R_\mathrm{p} = R_\mathrm{J}$, and a fixed semimajor axis, $a = 5$\,au, so that its orbital angular momentum is \begin{eqnarray} P &=& M_\mathrm{p}(GM_*a)^{1/2} = 1.89 \times 10^{50} \label{eq:P}\\ &&\times \left(\frac{M_\mathrm{p}}{M_\mathrm{J}}\right) \left(\frac{M_*}{M_\odot}\right)^{1/2} \left(\frac{a}{5\,\mathrm{au}}\right)^{1/2}\,\mathrm{erg\,s}\,.\nonumber \end{eqnarray} We consider two values for the total initial disk mass: (1) $M_\mathrm{t0} = 0.1\,M_*$, corresponding to a comparatively massive disk, and (2) $M_\mathrm{t0} = 0.02\,M_*$, corresponding to a highly evolved system that has entered the transition-disk phase. In both cases we take the disk surface density to scale with radius as $r^{-1}$. The inner disk extends from $r_\mathrm{d,in} = 4R_\odot$ to $r_\mathrm{d,out} = a$, and initially has $10\%$ of the total mass. 
Its angular momentum is \begin{eqnarray} D &=& \frac{2}{3}M_\mathrm{d}\left(GM_*\right)^{1/2} \frac{r_\mathrm{d,out}^{3/2} - r_\mathrm{d,in}^{3/2}} {r_\mathrm{d,out} - r_\mathrm{d,in}} \label{eq:D}\\ &\simeq& 1.32 \times 10^{51}\, \left(\frac{M_\mathrm{d}}{0.01M_\odot}\right) \left(\frac{M_*}{M_\odot}\right)^{1/2} \left(\frac{a}{5\,\mathrm{au}}\right)^{1/2}\, \mathrm{erg\,s} \nonumber \,. \end{eqnarray} The outer disk has edges at $r_\mathrm{h,in} = a$ and $r_\mathrm{h,out} = 50$\,au, and angular momentum \begin{eqnarray} H &=& \frac{2}{3}M_\mathrm{h}\left(GM_*\right)^{1/2} \frac{r_\mathrm{h,out}^{3/2} - r_\mathrm{h,in}^{3/2}} {r_\mathrm{h,out} - r_\mathrm{h,in}}\\ &\simeq& 3.76 \times 10^{52}\, \left(\frac{M_\mathrm{h}}{0.09M_\odot}\right) \left(\frac{M_*}{M_\odot}\right)^{1/2} \left(\frac{r_\mathrm{h,out}}{50\,\mathrm{au}}\right)^{1/2}\, \mathrm{erg\,s} \nonumber\,. \end{eqnarray} We model mass depletion in the disk using the expression first employed in this context by \citet{BatyginAdams13}, \begin{equation} M_\mathrm{t}(t) = \frac{M_{\mathrm{t}}(t=0)}{1 + t/\tau}\,, \label{eq:deplete} \end{equation} where we adopt $M_{\mathrm{t}}(t=0)=0.1\,M_\sun$ and $\tau = 0.5$\,Myr as in \citet{Lai14}. We assume that this expression can also be applied separately to the inner and outer parts of the disk. The time evolution of the inner disk's angular momentum due to mass depletion is thus given by \begin{equation} \label{eq:dDdt} \left(\frac{d\boldsymbol{D}}{dt}\right)_\mathrm{depl} = -\frac{D_0}{\tau(1 + t/\tau)^2}\hat{\boldsymbol{D}} = -\frac{\boldsymbol{D}}{\tau+t}\,. 
\end{equation} For the outer disk we assume that the presence of the planet inhibits efficient mass accretion, and we consider the following limits: (1) the outer disk's mass remains constant, and (2) the outer disk loses mass (e.g., through photoevaporation) at the rate given by Equation~(\ref{eq:deplete}).\footnote{ After the inner disk tilts away from the outer disk, the inner rim of the outer disk becomes exposed to the direct stellar radiation field, which accelerates the evaporation process \citep{Alexander+06}. According to current models, disk evaporation is induced primarily by X-ray and FUV photons and occurs at a rate of $\sim$$10^{-9}$--$10^{-8}\,M_\sun\,\mathrm{yr}^{-1}$ for typical stellar radiation fields (see \citealt{Gorti+16} for a review). Even if the actual rate is near the lower end of this range, the outer disk in our low-$M_{\rm t0}$ models would be fully depleted of mass on a timescale of $\sim$$10$\,Myr; however, a similar outcome for the high-$M_\mathrm{t0}$ models would require the mass evaporation rate to be near the upper end of the estimated range.} We assume that any angular momentum lost by the disk is transported out of the system (for example, by a disk wind). We adopt a Cartesian coordinate system ($x,\,y,\,z$) as the ``lab'' frame of reference (see Figure~\ref{fig:initial}). Initially, the equatorial plane of the star and the planes of the inner and outer disks coincide with the $x$--$y$ plane (i.e., $\psi_\mathrm{s0} = \psi_\mathrm{d0} = \psi_\mathrm{h0} = 0$, where $\psi_k$ denotes the angle between $\boldsymbol{L}_k$ and the $z$ axis), and only the orbital plane of the planet has a finite initial inclination ($\psi_\mathrm{p0}$). The $x$ axis is chosen to coincide with the initial line of nodes of the planet's orbital plane. 
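The fiducial angular momentum magnitudes quoted above can be reproduced directly from their definitions. The sketch below (assumed cgs constants; not part of the paper) evaluates $S$, $P$, $D$, and $H$. Note that the numerical prefactor quoted for $H$ in the text corresponds to dropping the $r_\mathrm{h,in}$ terms (consistent with the $(r_\mathrm{h,out}/50\,\mathrm{au})^{1/2}$ scaling shown there), so the full expression comes out slightly ($\sim$8\%) higher:

```python
import math

# Assumed cgs constants (approximate standard values).
G    = 6.674e-8    # cm^3 g^-1 s^-2
Msun = 1.989e33    # g
Rsun = 6.957e10    # cm
Mjup = 1.898e30    # g
au   = 1.496e13    # cm

Mstar, Rstar = Msun, 2 * Rsun
Omega_star = 0.1 * math.sqrt(G * Mstar / Rstar**3)

# Stellar spin: S = k_* M_* R_*^2 Omega_*, with k_* = 0.2 (n = 1.5 polytrope).
S = 0.2 * Mstar * Rstar**2 * Omega_star

# Planetary orbit: P = M_p (G M_* a)^(1/2), circular orbit at a = 5 au.
a = 5 * au
P = Mjup * math.sqrt(G * Mstar * a)

def disk_L(Md, r_in, r_out):
    """Angular momentum of a Keplerian disk with surface density ~ 1/r."""
    return (2.0 / 3.0) * Md * math.sqrt(G * Mstar) * \
        (r_out**1.5 - r_in**1.5) / (r_out - r_in)

D = disk_L(0.01 * Mstar, 4 * Rsun, a)      # inner disk: 10% of M_t0 = 0.1 M_*
H = disk_L(0.09 * Mstar, a, 50 * au)       # outer disk

print(f"S = {S:.2e} erg s")   # ~1.7e50
print(f"P = {P:.2e} erg s")   # ~1.9e50
print(f"D = {D:.2e} erg s")   # ~1.3e51
print(f"H = {H:.2e} erg s")   # ~4.0e52 (quoted 3.76e52 neglects r_in terms)
```

These values also give the initial ratios $J_\mathrm{dp}/D \simeq 1.1$ and $\simeq 1.5$ invoked later for models \texttt{DP-M} and \texttt{DP-m}, respectively.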
\begin{table*} \begin{center} \caption{Model parameters\label{tab:models}} \begin{tabular}{l|cccccccccc} \hline\hline Model & $\boldsymbol{S}$ & $\boldsymbol{D}$ & $\boldsymbol{P}$ & $\boldsymbol{H}$ & $M_\mathrm{d0} \ [M_*] $ & $M_\mathrm{h0}\ [M_*]$ & $M_\mathrm{t0}\ [M_*]$ & $M_\mathrm{p}$ & $a$ [au] & $\psi_\mathrm{p0}\ [^\circ]$ \\ \hline \texttt{DP-M} & -- & $\surd$ & $\surd$ & -- & $0.010\downarrow$ & -- & -- & $M_\mathrm{J}$ & $5$ & $60$ \\ \texttt{DP-m} & -- & $\surd$ & $\surd$ & -- & $0.002\downarrow$ & -- & -- & $M_\mathrm{J}$ & $5$ & $60$ \\ \texttt{all-M} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.010\downarrow$ & $0.090\downarrow$ & $0.10$ & $M_\mathrm{J}$ & $5$ & $60$ \\ \texttt{all-m} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.002\downarrow$ & $0.018\downarrow$ & $0.02$ & $M_\mathrm{J}$ & $5$ & $60$ \\ \texttt{all-Mx} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.010\downarrow$ & $0.090$ -- & $0.10$ & $M_\mathrm{J}$ & $5$ & $60$ \\ \texttt{all-mx} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.002\downarrow$ & $0.018$ -- & $0.02$ & $M_\mathrm{J}$ & $5$ & $60$ \\ \texttt{retrograde} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.002\downarrow$ & $0.018\downarrow$ & $0.02$ & $M_\mathrm{J}$ & $5$ & $110$ \\ \texttt{binary} & $\surd$ & $\surd$ & $\surd$ & -- & -- & -- & $\ \ \,0.10\downarrow$ & $M_\odot$ & $300$ & $10$ \\ \hline \end{tabular} \end{center} \end{table*} Table~\ref{tab:models} presents the models we explore and summarizes the relevant parameters. Specifically, column 1 contains the models' designations (with the letters \texttt{M} and \texttt{m} denoting, respectively, high and low disk masses at time $t=t_0$), columns 2--5 indicate which system components are being considered, columns 6--9 list the disk and planet masses (with the arrow indicating active mass depletion), and columns 10 and~11 give the planet's semimajor axis and initial misalignment angle, respectively. 
The last listed model (\texttt{binary}) does not correspond to a planet misaligning the inner disk but rather to a binary star tilting the entire disk. This case is considered for comparison with the corresponding model in \citet{Lai14}. \section{Results} \label{sec:results} The gravitational interactions among the different components of the system that we consider (star, inner disk, planet, and outer disk) can result in a highly nonlinear behavior. To gain insight into these interactions we start by analyzing a much simpler system, one consisting only of the inner disk and the (initially misaligned) planet. The relevant timescales that characterize the evolution of this system are the precession period $\tau_\mathrm{dp} \equiv 2\pi/\Omega_\mathrm{dp}$ (Equation~(\ref{eq:omega_dp})) and the mass depletion timescale $\tau = 5\times 10^5\,$yr (Equation~(\ref{eq:deplete})). \begin{figure*} \includegraphics[width=\textwidth]{DP-M_fig2.eps} \caption{ Time evolution of a ``reduced'' system, consisting of just a planet and an inner disk, for an initial disk mass $M_\mathrm{d0} = 0.01\,M_*$ (model~\texttt{DP-M}). Top left: the angles that the angular momentum vectors $\boldsymbol{D}$, $\boldsymbol{P}$ and $\boldsymbol{J}_\mathrm{dp}$ form with the $z$ axis (the initial direction of $\boldsymbol{D}$), as well as the angle between $\boldsymbol{D}$ and $\boldsymbol{P}$. Top right: the projections of the angular momentum unit vectors onto the $x$--$y$ plane. Bottom left: the characteristic precession frequency. Bottom right: the magnitudes of the angular momentum vectors. In the left-hand panels, the initial $0.1$\,Myr of the evolution is displayed at a higher resolution. 
\label{fig:DP-M}} \end{figure*} Figure~\ref{fig:DP-M} shows the evolution of such a system for the case (model~\texttt{DP-M}) where a Jupiter-mass planet on a misaligned orbit ($\psi_\mathrm{p0} = 60^\circ$) torques an inner disk of initial mass $M_\mathrm{d0} = 0.01\,M_*$ (corresponding to $M_\mathrm{t0} = 0.1\,M_*$, i.e., to $t_0 = 0$ when $M_* = M_\sun$; see Equation~(\ref{eq:deplete})). The top left panel exhibits the angles $\psi_\mathrm{d}$ and $\psi_\mathrm{p}$ (blue: inner disk; red: planet) as a function of time. In this and the subsequent figures, we show results for a total duration of $10$\,Myr. This is long enough in comparison with $\tau$ to capture the secular evolution of the system, which is driven by the mass depletion in the inner disk. To capture the details of the oscillatory behavior associated with the precession of the individual angular momentum vectors ($\boldsymbol{D}$ and $\boldsymbol{P}$) about the total angular momentum vector $\boldsymbol{J}_\mathrm{dp} = \boldsymbol{D} + \boldsymbol{P}$ (subscript j)---which takes place on the shorter timescale $\tau_\mathrm{dp}$ ($\simeq 9\times 10^3$\,yr at $t = t_0$)---we display the initial $0.1$\,Myr in the top left panel using a higher time resolution and, in addition, show the projected trajectories of the unit vectors $\hat{\boldsymbol{D}}$, $\hat{\boldsymbol{P}}$, and $\hat{\boldsymbol{J}}_\mathrm{dp}$ in the $x$--$y$ plane during this time interval in the top right panel. Given that $0.1\,{\rm Myr} \ll \tau$, the vectors $\hat{\boldsymbol{D}}$ and $\hat{\boldsymbol{P}}$ execute a circular motion about $\hat{\boldsymbol{J}}_\mathrm{dp}$ with virtually constant inclinations with respect to the latter vector (given by the angles $\theta_\mathrm{jd}$ and $\theta_\mathrm{jp}$, respectively), and the orientation of $\hat{\boldsymbol{J}}_\mathrm{dp}$ with respect to the $z$ axis (given by the angle $\psi_\mathrm{j}$) also remains essentially unchanged. 
(The projection of $\hat{\boldsymbol{J}}_\mathrm{dp}$ on the $x$--$y$ plane is displaced from the center along the $y$ axis, reflecting the fact that the planet's initial line of nodes coincides with the $x$ axis.) As the vectors $\hat{\boldsymbol{D}}$ and $\hat{\boldsymbol{P}}$ precess about $\hat{\boldsymbol{J}}_\mathrm{dp}$, the angles $\psi_\mathrm{d}$ and $\psi_\mathrm{p}$ oscillate in the ranges $|\psi_\mathrm{j} - \theta_\mathrm{jd}| \leq \psi_\mathrm{d} \leq \psi_\mathrm{j} + \theta_\mathrm{jd}$ and $|\psi_\mathrm{j} - \theta_\mathrm{jp}| \leq \psi_\mathrm{p} \leq \psi_\mathrm{j} + \theta_\mathrm{jp}$, respectively. \begin{figure} \begin{center} \includegraphics{misalignment_fig3.eps} \end{center} \caption{ Schematic sketch of the change in the total angular momentum vector $\boldsymbol{J}_\mathrm{dp}$ that is induced by mass depletion from the disk in the limit where the precession period $\tau_{\rm dp}$ is much shorter than the characteristic depletion time $\tau$. The two depicted configurations are separated by $0.5\,\tau_\mathrm{dp}$. \label{fig:vectors}} \end{figure} A notable feature of the evolution of this system on a timescale $\gtrsim \tau$ is the increase in the angle $\psi_\mathrm{d}$ (blue line in the top left panel)---indicating progressive misalignment of the disk with respect to its initial orientation---as the magnitude of the angular momentum $\boldsymbol{D}$ decreases with the loss of mass from the disk (blue line in the bottom right panel). At the same time, the orbital plane of the planet (red line in the top left panel) tends toward alignment with $\boldsymbol{J}_\mathrm{dp}$. The magenta lines in the top left and bottom right panels indicate that the orientation of the vector $\boldsymbol{J}_\mathrm{dp}$ remains fixed even as its magnitude decreases (on a timescale $\gtrsim \tau$) on account of the decrease in the magnitude of $\boldsymbol{D}$. 
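The oscillation ranges quoted above follow from spherical trigonometry: a unit vector precessing on a cone of half-angle $\theta$ about an axis inclined at $\psi_\mathrm{j}$ to the $z$ axis reaches a minimum tilt of $|\psi_\mathrm{j} - \theta|$ and a maximum of $\psi_\mathrm{j} + \theta$. A quick numerical check (with arbitrary illustrative angles, not the model values):

```python
import numpy as np

psi_j, theta = np.radians(25.0), np.radians(35.0)   # illustrative values

# Precession axis J-hat tilted by psi_j from z, in the y-z plane
# (as for the projected J_dp in the text).
j_hat = np.array([0.0, np.sin(psi_j), np.cos(psi_j)])

# Orthonormal basis perpendicular to j_hat.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.cross(j_hat, e1)

# Sweep the precession phase and record the tilt psi relative to z.
phase = np.linspace(0.0, 2.0 * np.pi, 3601)
v = (np.cos(theta) * j_hat[None, :]
     + np.sin(theta) * (np.cos(phase)[:, None] * e1[None, :]
                        + np.sin(phase)[:, None] * e2[None, :]))
psi = np.degrees(np.arccos(v[:, 2]))

print(psi.min(), psi.max())   # -> 10.0 and 60.0 degrees
```

The extremes recover $|\psi_\mathrm{j} - \theta| = 10^\circ$ and $\psi_\mathrm{j} + \theta = 60^\circ$, as claimed.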
As we demonstrate analytically in Appendix~\ref{app:Jdp}, the constancy of $\psi_\mathrm{j}$ is a consequence of the inequality $\tau_\mathrm{dp} \ll \tau$. To better understand the evolution of the disk and planet orientations, we consider the (small) variations in $\boldsymbol{D}$ and $\boldsymbol{J}_\mathrm{dp}$ that are induced by mass depletion over a small fraction of the precession period. On the left-hand side of Figure~\ref{fig:vectors} we show a schematic sketch of the orientations of the vectors $\boldsymbol{D}$, $\boldsymbol{P}$, and $\boldsymbol{J}_\mathrm{dp}$ at some given time (denoted by the subscript 1) and a short time later (subscript 2). During that time interval the vector $\boldsymbol{J}_\mathrm{dp}$ tilts slightly to the left, and as a result it moves away from $\boldsymbol{D}$ and closer to $\boldsymbol{P}$. The sketch on the right-hand side of Figure~\ref{fig:vectors} demonstrates that, if we were to consider the same evolution a half-cycle later, the same conclusion would be reached: in this case the vector $\boldsymbol{J}_{\mathrm{dp}3}$ moves slightly to the right (to become $\boldsymbol{J}_{\mathrm{dp}4}$), with the angle between $\boldsymbol{J}_\mathrm{dp}$ and $\boldsymbol{D}$ again increasing even as the angle between $\boldsymbol{J}_\mathrm{dp}$ and $\boldsymbol{P}$ decreases. The angles between the total angular momentum vector and the vectors $\boldsymbol{D}$ and $\boldsymbol{P}$ are thus seen to undergo a systematic, secular variation. The sketch in Figure~\ref{fig:vectors} also indicates that the vector $\boldsymbol{J}_\mathrm{dp}$ undergoes an oscillation over each precession cycle. However, when $\tau_\mathrm{dp} \ll \tau$ and the fractional decrease in $M_\mathrm{d}$ over a precession period remains $\ll 1$, the amplitude of the oscillation is very small and $\boldsymbol{J}_\mathrm{dp}$ practically maintains its initial direction (see Appendix~\ref{app:Jdp} for a formal demonstration of this result). 
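The fixed direction of $\boldsymbol{J}_\mathrm{dp}$, and the accompanying alignment of the planet with it, can also be verified numerically. The sketch below (dimensionless placeholder scales chosen so that $\tau_\mathrm{dp} \ll \tau$; $\Omega_\mathrm{dp}$ is held constant for simplicity, whereas in the full model it evolves with $M_\mathrm{d}$) adds the depletion term of Equation~(\ref{eq:dDdt}) to the mutual precession:

```python
import numpy as np

# Dimensionless sketch: precession period 100, depletion time tau = 2000,
# so tau_dp << tau as assumed in the text.
OMEGA, TAU = 2 * np.pi / 100.0, 2000.0

def derivs(D, P, t):
    J = np.linalg.norm(D + P)
    dD = OMEGA * np.cross(P, D) / J - D / (TAU + t)   # torque + depletion
    dP = OMEGA * np.cross(D, P) / J
    return dD, dP

psi_p0 = np.radians(60.0)
D = np.array([0.0, 0.0, 1.3])
P = np.array([0.0, np.sin(psi_p0), np.cos(psi_p0)])
j0_hat = (D + P) / np.linalg.norm(D + P)              # initial J_dp direction

t, dt = 0.0, 1.0
while t < 20 * TAU:                                   # |D| drops by ~21x
    k1 = derivs(D, P, t)
    k2 = derivs(D + 0.5 * dt * k1[0], P + 0.5 * dt * k1[1], t + 0.5 * dt)
    k3 = derivs(D + 0.5 * dt * k2[0], P + 0.5 * dt * k2[1], t + 0.5 * dt)
    k4 = derivs(D + dt * k3[0], P + dt * k3[1], t + dt)
    D = D + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    P = P + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    t += dt

J_hat = (D + P) / np.linalg.norm(D + P)
theta_jp = np.degrees(np.arccos(np.dot(P, J_hat) / np.linalg.norm(P)))
drift = np.degrees(np.arccos(np.clip(np.dot(J_hat, j0_hat), -1.0, 1.0)))
# J_dp keeps its initial direction (drift well under a degree here), while
# theta_jp falls from ~34 degrees toward zero as the disk is depleted.
```

The depletion term is parallel to $\boldsymbol{D}$, so $\theta_\mathrm{dp}$ stays constant while $|\boldsymbol{D}| \propto (1 + t/\tau)^{-1}$; averaged over a precession cycle, the angular momentum lost is along $\hat{\boldsymbol{J}}_\mathrm{dp}$, which is why the total vector shrinks without tilting.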
In the limit where the disk mass becomes highly depleted and $D \to 0$, $\boldsymbol{J}_\mathrm{dp} \to \boldsymbol{P}$, i.e., the planet aligns with the initial direction of $\boldsymbol{J}_\mathrm{dp}$ ($\theta_\mathrm{jp} \to 0$ and $\psi_\mathrm{p} \to \psi_\mathrm{j}$). The disk angular momentum vector then precesses about $\boldsymbol{P}$, with its orientation angle $\psi_\mathrm{d}$ (blue line in top left panel of Figure~\ref{fig:DP-M}) oscillating between $|\psi_\mathrm{p} - \theta_\mathrm{dp}|$ and $\psi_\mathrm{p} + \theta_\mathrm{dp}$.\footnote{ The angle $\theta_\mathrm{dp}$ between $\boldsymbol{D}$ and $\boldsymbol{P}$ (cyan line in the top left panel of Figure~\ref{fig:DP-M}) remains constant because there are no torques that can modify it.} Note that the precession frequency is also affected by the disk's mass depletion and decreases with time (see Equation~(\ref{eq:omega_dp})); the time evolution of $\Omega_{\rm dp}$ is shown in the bottom left panel of Figure~\ref{fig:DP-M}. \begin{figure*} \includegraphics[width=\textwidth]{DP-m_fig4.eps} \caption{ Same as Figure~\ref{fig:DP-M}, except that $M_\mathrm{d0} = 0.002\,M_*$ (model~\texttt{DP-m}). \label{fig:DP-m}} \end{figure*} Figure~\ref{fig:DP-m} shows the evolution of a similar system---model~\texttt{DP-m}---in which the inner disk has a lower initial mass, $M_\mathrm{d0} = 0.002\,M_*$ (corresponding to $M_\mathrm{t0} = 0.02\,M_*$, i.e., to $t_0=2$\,Myr when $M_*=M_\sun$; see Equation~(\ref{eq:deplete})). The initial oscillation frequency in this case is lower than in model \texttt{DP-M}, as expected from Equation~(\ref{eq:omega_dp}), but it attains the same asymptotic value (bottom left panel), corresponding to the limit $J_\mathrm{dp} \to P$ in which $\Omega_\mathrm{dp}$ becomes independent of $M_\mathrm{d}$. The initial value of $J_\mathrm{dp}/D$ is higher in the present model than in the model considered in Figure~\ref{fig:DP-M} ($\simeq 1.5$ vs. 
$\simeq 1.1$; see Equations~(\ref{eq:P}) and~(\ref{eq:D})), which results in a higher value of $\psi_\mathrm{j}$ (and, correspondingly, a higher initial value of $\theta_\mathrm{jd}$ and lower initial value of $\theta_\mathrm{jp}$). The higher value of $\psi_\mathrm{j}$ is the reason why the oscillation amplitude of $\psi_\mathrm{d}$ and the initial oscillation amplitude of $\psi_\mathrm{p}$ (top left panel) are larger in this case. The higher value of $J_\mathrm{dp}/D_0$ in Figure~\ref{fig:DP-m} also accounts for the differences in the projection map shown in the top right panel (a larger $y$ value for the projection of $\hat{\boldsymbol{J}}_\mathrm{dp}$, a larger area encircled by the projection of $\hat{\boldsymbol{D}}$, and a smaller area encircled by the projection of $\hat{\boldsymbol{P}}$). \begin{figure*} \includegraphics[width=\textwidth]{all-M_fig5.eps} \caption{ Time evolution of the full system (star, inner disk, planet, outer disk) for an initial inner disk mass $M_\mathrm{d0} = 0.01\,M_*$ and initial total disk mass $M_\mathrm{t0} = 0.1\,M_*$ (model~\texttt{all-M}). Panel arrangement is the same as in Figure~\ref{fig:DP-M}, although the details of the displayed quantities---which are specified in each panel and now also include the angular momenta of the star ($\boldsymbol{S}$) and the outer disk ($\boldsymbol{H}$)---are different. \label{fig:all-M}} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{all-m_fig6.eps} \caption{ Same as Figure~\ref{fig:all-M}, except that $M_\mathrm{d0} = 0.002\,M_*$ and $M_\mathrm{t0} = 0.02\,M_*$ (model~\texttt{all-m}). 
\label{fig:all-m}} \end{figure*} We now consider the full system for two values of the total disk mass: $M_\mathrm{t0} = 0.1\,M_*$ (model~\texttt{all-M}, corresponding to $t_0 = 0$; Figure~\ref{fig:all-M}) and $M_\mathrm{t0} = 0.02\,M_*$ (model~\texttt{all-m}, corresponding to $t_0 = 2$\,Myr; Figure~\ref{fig:all-m}), assuming that both parts of the disk lose mass according to the relation given by Equation~(\ref{eq:deplete}). The inner disks in these two cases correspond, respectively, to the disk masses adopted in model~\texttt{DP-M} (Figure~\ref{fig:DP-M}) and model~\texttt{DP-m} (Figure~\ref{fig:DP-m}). The merit of first considering the simpler systems described by the latter models becomes apparent from a comparison between the respective figures. It is seen that the basic behavior of model~\texttt{all-M} is similar to that of model~\texttt{DP-M}, and that the main differences between model~\texttt{all-M} and model~\texttt{all-m} are captured by the way in which model~\texttt{DP-m} is distinct from model~\texttt{DP-M}. The physical basis for this correspondence is the centrality of the torque exerted on the inner disk by the planet. According to Equation~(\ref{eq:precession}), the relative magnitudes of the torques acting on the disk at sufficiently late times (after $D$ becomes smaller than the angular momentum of each of the other system components) are reflected in the magnitudes of the corresponding precession frequencies. 
The dominance of the planet's contribution can thus be inferred from the plots in the bottom left panels of Figures~\ref{fig:all-M} and~\ref{fig:all-m}, which show that, after the contribution of $D$ becomes unimportant (bottom right panels), the precession frequency induced by the planet exceeds those induced by the outer disk and by the star.\footnote{ The star--planet and star--outer-disk precession frequencies ($\Omega_\mathrm{sp}$ and~$\Omega_\mathrm{sh}$; see Equations~(\ref{eq:omega_sp}) and~(\ref{eq:omega_sh})) are not shown in these figures because they are too low to fit in the plotted range.} While the basic disk misalignment mechanism is the same as in the planet--inner-disk system, the detailed behavior of the full system is understandably more complex. One difference that is apparent from a comparison of the left-hand panels in Figures~\ref{fig:all-M} and~\ref{fig:DP-M} is the higher oscillation frequency of $\psi_\mathrm{p}$ and $\psi_\mathrm{d}$ in the full model (with the same frequency also seen in the timeline of $\psi_\mathrm{s}$). In this case the planet--outer-disk precession frequency $\Omega_\mathrm{ph}$ (Equation~(\ref{eq:omega_ph})) and the inner-disk--outer-disk precession frequency $\Omega_\mathrm{dh}$ (Equation~(\ref{eq:omega_dh})) are initially comparable and larger than $\Omega_\mathrm{dp}$, and $\Omega_\mathrm{ph}$ remains the dominant frequency throughout the system's evolution. The fact that the outer disk imposes a precession on both $\boldsymbol{P}$ and $\boldsymbol{D}$ has the effect of weakening the interaction between the planet and the inner disk, which slows down the disk misalignment process. Another difference is revealed by a comparison of the top right panels: in the full system, $\hat{\boldsymbol{J}}_\mathrm{dp}$ precesses on account of the torque induced by the outer disk, so it no longer corresponds to just a single point in the $x$--$y$ plane. 
This, in turn, increases the sizes of the regions traced in this plane by $\hat{\boldsymbol{D}}$ and $\hat{\boldsymbol{P}}$. The behavior of the lower-$M_\mathrm{t0}$ model shown in Figure~\ref{fig:all-m} is also more involved. In this case, in addition to the strong oscillations of the angles $\psi_i$ already manifested in Figure~\ref{fig:DP-m}, the different precession frequencies $\Omega_{ik}$ also exhibit large-amplitude oscillations, reflecting their dependence on the angles $\theta_{ik}$ between the angular momentum vectors. In both of the full-system models, the strongest influence on the star is produced by its interaction with the inner disk, but the resulting precession frequency ($\Omega_\mathrm{sd}$) remains low. Therefore, the stellar angular momentum vector essentially retains its original orientation, which implies that the angle $\psi_\mathrm{d}$ is a good proxy for the angle between the primordial stellar spin and the orbit of any planet that eventually forms in the inner disk. \begin{figure} \includegraphics[width=\columnwidth]{all-Mx_fig7.eps} \caption{ Time evolution of the full system in the limit where only the inner disk undergoes mass depletion and the mass of the outer disk remains unchanged, for the same initial conditions as in Figure~\ref{fig:all-M} (model~\texttt{all-Mx}). The top and bottom panels correspond, respectively, to the top left and bottom left panels of Figure~\ref{fig:all-M}, but in this case the initial $0.1$\,Myr of the evolution are not displayed at a higher resolution. \label{fig:all-Mx}} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{all-mx_fig8.eps} \caption{ Same as Figure~\ref{fig:all-Mx}, but for the initial conditions of Figure~\ref{fig:all-m} (model~\texttt{all-mx}). 
\label{fig:all-mx}} \end{figure} We repeated the calculations shown in Figures~\ref{fig:all-M} and~\ref{fig:all-m} under the assumption that only the inner disk loses mass while $M_\mathrm{h}$ remains constant (models~\texttt{all-Mx} and~\texttt{all-mx}; Figures~\ref{fig:all-Mx} and~\ref{fig:all-mx}, respectively). At the start of the evolution, the frequencies $\Omega_\mathrm{ph}$ and $\Omega_\mathrm{dh}$ are $\propto$$M_\mathrm{h}$, whereas $\Omega_\mathrm{dp}$ scales linearly (or, in the case of the lower-$M_\mathrm{d0}$ model, close to linearly) with $M_\mathrm{d}$ (see Appendix~\ref{app:torques}). In the cases considered in Figures~\ref{fig:all-M} and~\ref{fig:all-m} all these frequencies decrease with time, so the relative magnitude of $\Omega_\mathrm{dp}$ remains comparatively large throughout the evolution. In contrast, in the cases shown in Figures~\ref{fig:all-Mx} and~\ref{fig:all-mx} the frequencies $\Omega_\mathrm{ph}$ and $\Omega_\mathrm{dh}$ remain constant and only $\Omega_\mathrm{dp}$ decreases with time. As the difference between $\Omega_\mathrm{dp}$ and the other two frequencies starts to grow, the inner disk misalignment process is aborted, and thereafter the mean values of $\psi_\mathrm{d}$ and $\psi_\mathrm{p}$ remain constant. This behavior is consistent with our conclusion about the central role that the torque exerted by the planet plays in misaligning the inner disk: when the fast precession that the outer disk induces in the orbital motions of both the planet and the inner disk comes to dominate the system dynamics, the direct coupling between the planet and the inner disk is effectively broken and the misalignment process is halted. Note, however, from Figure~\ref{fig:all-mx} that, even in this case, the angle $\psi_\mathrm{d}$ can attain a high value (as part of a large-amplitude oscillation) when $M_\mathrm{t0}$ is small. 
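The contrast between the two depletion modes can be captured schematically. In the sketch below (illustrative only; the normalizations are arbitrary and only the linear scalings quoted above are used), $\Omega_\mathrm{dp} \propto M_\mathrm{d}$ while $\Omega_\mathrm{ph} \propto M_\mathrm{h}$:

```python
def freq_ratio(Md, Mh):
    """Schematic Omega_dp / Omega_ph, using Omega_dp ∝ M_d and
    Omega_ph ∝ M_h (arbitrary common normalization)."""
    return Md / Mh

# Both disk parts depleted together (models all-M, all-m): the relative
# magnitude of the planet-induced frequency is preserved.
assert abs(freq_ratio(0.01, 0.09) - freq_ratio(0.001, 0.009)) < 1e-12

# Only the inner disk depleted (models all-Mx, all-mx): Omega_dp drops away
# from Omega_ph, and the planet--inner-disk coupling is eventually broken.
assert freq_ratio(0.001, 0.09) < freq_ratio(0.01, 0.09)
```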
\begin{figure*} \includegraphics[width=\textwidth]{retrograde_fig9.eps} \caption{ Time evolution with the same initial conditions as in Figure~\ref{fig:all-m}, except that the planet is initially on a retrograde orbit ($\psi_{\mathrm{p}0}$ is changed from $60^\circ$ to $110^\circ$; model~\texttt{retrograde}). The display format is the same as in Figure~\ref{fig:all-Mx}, but in this case the panels also show a zoomed-in version of the evolution around the time of the jumps in $\psi_\mathrm{p}$ and $\psi_\mathrm{d}$. The dashed line in the top panel marks the transition between prograde and retrograde orientations ($90^\circ$). \label{fig:retrograde}} \end{figure*} To determine whether the proposed misalignment mechanism can also account for disks (and, eventually, planets) on retrograde orbits, we consider a system in which the companion planet is placed on such an orbit (model~\texttt{retrograde}, which is the same as model~\texttt{all-m} except that $\psi_{\mathrm{p}0}$ is changed from $60^\circ$ to $110^\circ$). As Figure~\ref{fig:retrograde} demonstrates, the disk in this case evolves to a retrograde configuration ($\psi_\mathrm{d} > 90^\circ$) at late times even as the planet's orbit reverts to prograde motion. A noteworthy feature of the plotted orbital evolution (shown in the high-resolution portion of the figure) is the rapid increase in the value of $\psi_\mathrm{d}$ (which is an adequate proxy for $\theta_\mathrm{sd}$ also in this case)---and corresponding fast decrease in the value of $\psi_\mathrm{p}$---that occurs when the planet's orbit transitions from a retrograde to a prograde orientation. This behavior can be traced to the fact that $\cos{\theta_\mathrm{ph}}$ vanishes at essentially the same time that $\psi_\mathrm{p}$ crosses $90^\circ$ because the outer disk (which dominates the total angular momentum) remains well aligned with the $z$ axis. 
This, in turn, implies (see Equation~(\ref{eq:omega_ph})) that, at the time of the retrograde-to-prograde transition, the planet becomes dynamically decoupled from the outer disk and only retains a coupling to the inner disk. Its evolution is, however, different from that of a ``reduced'' system, in which only the planet and the inner disk interact, because the inner disk remains dynamically ``tethered'' to the outer disk ($\theta_\mathrm{dh}\ne 90^\circ$). As we verified by an explicit calculation, the evolution of the reduced system remains smooth when $\psi_\mathrm{p}$ crosses $90^\circ$. The jump in $\psi_\mathrm{p}$ exhibited by the full system leads to a significant increase in the value of $\cos{\theta_\mathrm{ph}}$ and hence of $\Omega_\mathrm{ph}$, which, in turn, restores (and even enhances) the planet's coupling to the outer disk after its transition to prograde motion (see bottom panel of Figure~\ref{fig:retrograde}). The maximum value attained by $\theta_\mathrm{sd}$ in this example is $\simeq 172^\circ$, which, just as in the prograde case shown in Figure~\ref{fig:all-m}, exceeds the initial misalignment angle of the planetary orbit (albeit to a much larger extent in this case). It is, however, worth noting that not all model systems in which the planet is initially on a retrograde orbit give rise to a retrograde inner disk at the end of the prescribed evolution time; in particular, we found that the outcome of the simulated evolution (which depends on whether $\psi_\mathrm{p}$ drops below $90^\circ$) is sensitive to the value of the initial planetary misalignment angle $\psi_{\mathrm{p}0}$ (keeping all other model parameters unchanged). In concluding this section it is instructive to compare the results obtained for our model with those found for the model originally proposed by \citet{Batygin12} (see Section~\ref{sec:intro} for references to additional work on that model).
We introduced our proposed scenario as a variant of the latter model, with a close-by giant planet taking the place of a distant stellar companion. In the original proposal the disk misalignment was attributed to the precessional motion that is induced by the torque that the binary companion exerts on the disk. In this picture the spin--orbit angle oscillates (on a timescale $\sim$1\,Myr for typical parameters) between $0^\circ$ and roughly twice the binary orbital inclination, so it can be large if observed at the ``right'' time. Our model retains this feature of the earlier proposal, particularly in cases where the companion planet is placed on a high-inclination orbit after the disk has already lost much of its initial mass, but it also exhibits a novel feature that gives rise to a secular (rather than oscillatory) change in the spin--orbit angle (which can potentially lead to a substantial increase in this angle). This new behavior represents an ``exchange of orientations'' between the planet and the inner disk that is driven by the mass loss from the inner disk and corresponds to a decrease of the inner disk's angular momentum from a value higher than that of the planet to a lower value (with the two remaining within an order of magnitude of each other for representative parameters). This behavior is not found in a binary system because of the large mismatch between the angular momenta of the companion and the disk in that case (and, in fact, it is also suppressed in the case of a planetary companion when the mass of the outer disk is not depleted). As we already noted in Section~\ref{subsec:assumptions}, \citet{BatyginAdams13} suggested that the disk misalignment in a binary system can be significantly increased due to a resonance between the star--disk and binary--disk precession frequencies. 
(We can use Equations~(\ref{eq:omega_sd}) and~(\ref{eq:omega_dp}), respectively, to evaluate these frequencies, plugging in values for the outer disk radius, companion orbital radius, and companion mass that are appropriate for the binary case.) \citet{Lai14} clarified the effect of this resonance and emphasized that, for plausible system parameters, it can be expected to be crossed as the disk becomes depleted of mass. However, for the planetary-companion systems considered in this paper the ratio $|\Omega_\mathrm{sd}/\Omega_\mathrm{dp}|$ remains $< 1$ throughout the evolution, so no such resonance is encountered in this case. In both of these systems $\Omega_\mathrm{sd}$ is initially $\propto M_\mathrm{d}$, so it decreases during the early evolution. The same scaling also characterizes $\Omega_\mathrm{dp}$ in the planetary case, which explains why the corresponding curves do not cross. In contrast, in the binary case (for which the sum of the disk and companion angular momenta is dominated by the companion's contribution) the frequency $\Omega_\mathrm{dp}$ does not scale with the disk mass and it thus remains nearly constant, which makes it possible for the corresponding curves to cross (see Figure~\ref{fig:binary} in Appendix~\ref{app:resonance}). Since our formalism also encompasses the binary case, we examined one such system (model~\texttt{binary})---using the parameters adopted in figure~3 of \citet{Lai14}---for comparison with the results of that work. Our findings are presented in Appendix~\ref{app:resonance}. \section{Discussion} \label{sec:discussion} The model considered in this paper represents a variant of the primordial disk misalignment scenario of \citet{Batygin12} in which the companion is a nearby planet rather than a distant star and only the inner region of the protoplanetary disk (interior to the planet's orbit) becomes inclined. 
In this section we assess whether this model provides a viable framework for interpreting the relevant observations. The first---and most basic---question that needs to be addressed is whether the proposed misalignment mechanism is compatible with the broad range of apparent spin--orbit angles indicated by the data. In Section~\ref{sec:results} we showed that the spin--orbit angle $\theta_\mathrm{sd}$ can deviate from its initial value of $0^\circ$ either because of the precessional motion that is induced by the planet's torque on the disk or on account of the secular variation that is driven by the mass depletion process. In the ``reduced'' disk--planet model considered in Figures~\ref{fig:DP-M} and~\ref{fig:DP-m}, for which the angle $\psi_\mathrm{d}$ is taken as a proxy for the intrinsic spin--orbit angle, the latter mechanism increases $\theta_\mathrm{sd}$ to $\sim$$45^\circ$--$50^\circ$ on a timescale of $10$\,Myr for an initial planetary inclination $\psi_\mathrm{p0} = 60^\circ$. The maximum disk misalignment is, however, increased above this value by the precessional oscillation, whose amplitude is higher the lower the initial mass of the disk. Based on the heuristic discussion given in connection with Figure~\ref{fig:vectors}, the maximum possible value of $\psi_\mathrm{d}$ (corresponding to the limit $J_\mathrm{dp} \to P$) is given by \begin{equation} \label{eq:psi_max} \psi_\mathrm{d,max} = \arccos\frac{D_0 + P\cos\psi_\mathrm{p0}} {(D_0^2 + P^2 + 2D_0P\cos\psi_\mathrm{p0})^{1/2}} + \psi_\mathrm{p0}\,. \end{equation} For the parameters of Figure~\ref{fig:DP-m}, $\psi_\mathrm{d,max} \approx 84.5^\circ$, which can be compared with the actual maximum value ($\simeq 72^\circ$) attained over the course of the $10$-Myr evolution depicted in this figure.\footnote{ The intrinsic spin--orbit angle is not directly measurable, so its value must be inferred from that of the apparent (projected) misalignment angle $\lambda$ \citep{FabryckyWinn09}. 
In the special case of a planet whose orbital plane contains the line of sight---an excellent approximation for planets observed by the transit method---the apparent obliquity cannot exceed the associated intrinsic misalignment angle (i.e., $\lambda \le \theta_\mathrm{sd}$).} Although the behavior of the full system (which also includes the outer disk and the star) is more complicated, we found (see Figures~\ref{fig:all-M} and~\ref{fig:all-m}) that, if the outer disk also loses mass, the maximum value attained by $\theta_{\rm sd}$ ($\simeq 67^\circ$) is not much smaller than in the simplified model. Note that in the original primordial-misalignment scenario the maximum value of $\theta_\mathrm{sd}$ ($\simeq 2\,\psi_\mathrm{p0}$) would have been considerably higher ($\simeq 120^\circ$) for the parameters employed in our example. However, as indicated by Equation~(\ref{eq:psi_max}), the maximum value predicted by our model depends on the ratio $P/D_0$ and can in principle exceed the binary-companion limit if $D_0$ is small and $P$ is sufficiently large.\footnote{ $D_0$, the magnitude of the initial angular momentum of the inner disk, cannot be much smaller than the value adopted in models~\texttt{DP-m} and~\texttt{all-m} in view of the minimum value of $M_\mathrm{d0}$ that is needed to account for the observed misaligned planets in the primordial-disk-misalignment scenario (and also for the no-longer-present HJ in the SHJ picture).} Repeating the calculations shown in Figure~\ref{fig:all-m} for higher values of $M_\mathrm{p}$, we found that the maximum value of $\theta_\mathrm{sd}$ is $\sim$$89^\circ$, $104^\circ$, and~$125^\circ$ when $M_\mathrm{p}/M_\mathrm{J}$ increases from~1 to~2, 3, and~4, respectively.
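As a consistency check on Equation~(\ref{eq:psi_max}), the quoted limit can be reproduced numerically. The sketch below adopts $\psi_\mathrm{p0}=60^\circ$ and infers $P/D_0$ from the (rounded) value $J_\mathrm{dp}/D_0 \simeq 1.5$ quoted for model~\texttt{DP-m}, so the result only approximates the $\psi_\mathrm{d,max} \approx 84.5^\circ$ cited above:

```python
import math

psi_p0 = math.radians(60.0)   # initial planetary inclination
j = 1.5                       # J_dp / D_0 quoted for model DP-m (rounded)

# Invert J_dp^2 = D_0^2 + P^2 + 2 D_0 P cos(psi_p0) for x = P / D_0:
#   x^2 + 2 cos(psi_p0) x + (1 - j^2) = 0
c = math.cos(psi_p0)
x = -c + math.sqrt(c * c - (1.0 - j * j))

# Equation (psi_max) with D_0 = 1 and P = x (the denominator is then just j):
psi_d_max = math.degrees(math.acos((1.0 + x * c) / j)) + 60.0
print(round(psi_d_max, 1))   # 84.7, close to the ~84.5 deg quoted above
```

The small discrepancy simply reflects the rounding of $J_\mathrm{dp}/D_0$.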
These results further demonstrate that the disk can be tilted to a retrograde configuration even when $\psi_\mathrm{p0} < 90^\circ$ if the planet is sufficiently massive, although a retrograde disk orientation can also be attained (including in the case of $M_\mathrm{p} \lesssim M_\mathrm{J}$) if the planet's orbit is initially retrograde (see Figure~\ref{fig:retrograde}). A low initial value of the disk angular momentum $D$ arises naturally in the leading scenarios for placing planets in inclined orbits, which favor comparatively low disk masses (see Section~\ref{sec:intro}). The distributions of $\psi_\mathrm{p0}$ and of the occurrence rate, mass, and orbital radius of planets on inclined orbits are required for determining the predicted distribution of primordial inner-disk misalignment angles in this scenario, for comparison with observations.\footnote{ \citet{MatsakosKonigl15} were able to reproduce the observed obliquity distributions of HJs around G and F stars within the framework of the SHJ model under the assumption that the intrinsic spin--orbit angle has a random distribution (corresponding to a flat distribution of $\lambda$; see \citealt{FabryckyWinn09}).} However, this information, as well as data on the relevant values of $M_\mathrm{d0}$, is not yet available, so our results for $\theta_\mathrm{sd}$ are only a first step (a proof of concept) toward validating this interpretation of the measured planet obliquities. Our proposed misalignment mechanism is most effective when the disk mass within the planetary orbit drops to $\sim$$M_\mathrm{p}$. In the example demonstrating this fact (Figure~\ref{fig:all-m}), $M_\mathrm{d0} \approx 2\,M_\mathrm{J}$. In the primordial disk misalignment scenario, $M_\mathrm{d0}$ includes the mass that would eventually be detected in the form of an HJ (or a lower-mass planet) moving around the central star on a misaligned orbit.
Furthermore, if the ingestion of an HJ on a misaligned orbit is as ubiquitous as inferred in the SHJ picture, that mass, too, must be included in the tally. These requirements are consistent with the fact that the typical disk misalignment time in our model (a few Myr) is comparable to the expected giant-planet formation time, but this similarity also raises the question of whether the torque exerted by the initially misaligned planet has the same effect on the gaseous inner disk and on a giant planet embedded within it. This question was considered by several authors in the context of a binary companion \citep[e.g.,][]{Xiang-GruessPapaloizou14, PicognaMarzari15, Martin+16}. A useful gauge of the outcome of this dynamical interaction is the ratio of the precession frequency induced in the embedded planet (which we label $\Omega_\mathrm{pp}$) to $\Omega_\mathrm{dp}$ \citep{PicognaMarzari15}. We derive an expression for $\Omega_\mathrm{pp}$ by approximating the inclined and embedded planets as two rings with radii $a$ and $a_1 < a$, respectively (see Appendix~\ref{app:torques}), and evaluate $\Omega_\mathrm{dp}$ under the assumption that the disk mass has been sufficiently depleted for the planetary contribution ($P$) to dominate $J_\mathrm{dp}$. This leads to $\Omega_\mathrm{pp}/\Omega_\mathrm{dp} \simeq 2\,(a_1/r_\mathrm{d,out})^{3/2}$, which is the same as the estimate obtained by \citet{PicognaMarzari15} for a binary system. In the latter case, this ratio is small ($\lesssim 0.1$) for typical parameters, implying that the embedded planet cannot keep up with the disk precession and hence that its orbit develops a significant tilt with respect to the disk's plane. However, when the companion is a planet, the above ratio equals $(a_1/a)^{3/2}$ and may be considerably larger ($\lesssim 1$), which suggests that the embedded planet can remain coupled to the disk in this case. 
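The two frequency-ratio estimates can be compared with representative numbers. The radii below are placeholders chosen for illustration (not values adopted in this paper); only the scalings quoted above are taken from the text:

```python
# Hypothetical radii, for illustration only (not the paper's adopted values):
a1      = 3.0    # orbital radius of the embedded planet [au]
a       = 5.0    # orbital radius of the inclined companion planet [au]
r_d_out = 50.0   # outer disk radius entering the binary-companion estimate [au]

# Binary companion: Omega_pp / Omega_dp ≈ 2 (a1 / r_d,out)^(3/2)
ratio_binary = 2.0 * (a1 / r_d_out) ** 1.5

# Planetary companion: the ratio becomes (a1 / a)^(3/2)
ratio_planet = (a1 / a) ** 1.5

print(round(ratio_binary, 3))   # 0.029: embedded planet decouples from the disk
print(round(ratio_planet, 3))   # 0.465: much closer to unity; coupling can persist
```

With these placeholder radii the planetary-companion ratio exceeds the binary-companion one by more than an order of magnitude, in line with the argument above.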
A key prediction of our proposed scenario---which distinguishes it from the original \citet{Batygin12} proposal---is that there would in general be a difference in the obliquity properties of ``nearby'' and ``distant'' planets, corresponding to the different orientations attained, respectively, by the inner and outer disks. This prediction is qualitatively consistent with the finding of \citet{LiWinn16} that the good spin--orbit alignment inferred in cool stars from an analysis of rotational photometric modulations in \textit{Kepler} sources \citep{Mazeh+15} becomes weaker (with the inferred orientations possibly tending toward a nearly random distribution) at large orbital periods ($P_\mathrm{orb} \gtrsim 10^2\,$days). The interpretation of these results in our picture is that the outer planets remain aligned with the original stellar-spin direction, whereas the inner planets---and, according to the SHJ model, also the stellar spin in $\sim$50\% of sources---assume the orientation of the misaligned inner disk (which samples a broad range of angles with respect to the initial spin direction). Further observations and analysis are required to corroborate and refine these findings so that they can be used to place tighter constraints on the models. The result reported by \citet{LiWinn16} is seemingly at odds with another set of observational findings---the discovery that the orbital planes of debris disks (on scales $\gtrsim 10^2\,$au) are by and large well aligned with the spin axis of the central star \citep{Watson+11, Greaves+14}. This inferred alignment also seemingly rules out any interpretation of the obliquity properties of exoplanets (including the SHJ model) that appeals to a tidal realignment of the host star by a misaligned HJ. These apparent difficulties can, however, be alleviated in the context of the SHJ scenario and our present model.
Specifically, in the SHJ picture the realignment of the host star occurs on a relatively long timescale ($\lesssim 1\,$Gyr; see \citealt{MatsakosKonigl15}). This is much longer than the lifetime ($\sim$1--10\,Myr) of the gaseous disk that gives rise to both the misaligned ``nearby'' planets and the debris disk (which, in the scenario considered in this paper, are associated with the inner and outer parts of the disk, respectively). The inferred alignment properties of debris disks can be understood in this picture if these disks are not much older than $\sim$1\,Gyr, so that the stellar spin axis still points roughly along its original direction (which coincides with the symmetry axis of the outer disk). We searched the literature for age estimates of the 11 uniformly observed debris disks tabulated in \citet{Greaves+14} and found that only two (10~CVn and 61~Vir) are definitely much older than $1$\,Gyr. Now, \citet{MatsakosKonigl15} estimated that $\sim$50\% of systems ingest an SHJ and should exhibit spin--orbit alignment to within $20^\circ$, with the rest remaining misaligned. Thus, the probability of observing an aligned debris disk in an older system is $\sim 1/2$, implying that the chance of detecting 2 out of 2 such systems is $\sim 1/4$. It is, however, worth noting that the two aforementioned systems may not actually be well aligned: based on the formal measurement uncertainties quoted in \citet{Greaves+14}, the misalignment angle could be as large as $36^\circ$ in 10~CVn and $31^\circ$ in 61~Vir. Further measurements that target old systems might be able to test the proposed explanation, although one should bear in mind that additional factors may affect the observational findings. For example, in the tidal-downsizing scenario of planet formation, debris disks are less likely to exist around stars that host giant planets \citep[see][] {FletcherNayakshin16}. 
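The probability estimate above is simple binomial arithmetic, assuming the two old systems are independent:

```python
p_aligned = 0.5   # SHJ model: fraction of systems realigned (to within ~20 deg)
n_old     = 2     # debris-disk systems much older than ~1 Gyr (10 CVn, 61 Vir)

# Chance that all n_old independent old systems appear aligned:
p_all_aligned = p_aligned ** n_old
print(p_all_aligned)   # 0.25
```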
\section{Conclusion} \label{sec:conclusion} In this paper we conduct a proof-of-concept study of a variant of the primordial disk misalignment model of \citet{Batygin12}. In that model, a binary companion with an orbital radius of a few hundred au exerts a gravitational torque on a protoplanetary disk that causes its plane to precess and leads to a large-amplitude oscillation of the spin--orbit angle $\theta_\mathrm{sd}$ (the angle between the angular momentum vectors of the disk and the central star). Motivated by recent observations, we explore an alternative model in which the role of the distant binary is taken by a giant planet with an orbital radius of just a few au. Such a companion likely resided originally in the disk, and its orbit most probably became inclined away from the disk's plane through a gravitational interaction with other planets (involving either scattering or resonant excitation). Our model setup is guided by indications from numerical simulations \citep{Xiang-GruessPapaloizou13} that, in the presence of the misaligned planet, the disk separates at the planet's orbital radius into inner and outer parts that exhibit distinct dynamical behaviors even as each can still be well approximated as a rigid body. We integrate the secular dynamical evolution equations in the quadrupole approximation for a system consisting of the inclined planet, the two disk parts, and the spinning star, with the disk assumed to undergo continuous mass depletion. We show that this model can give rise to a broad range of values for the angle between the angular momentum vectors of the inner disk and the star (including values of $\theta_\mathrm{sd}$ in excess of $90^\circ$), but that the orientation of the outer disk remains virtually unchanged. 
We demonstrate that the misalignment is induced by the torque that the planet exerts on the inner disk and that it is suppressed when the mass depletion time in the outer disk is much longer than in the inner disk, so that the outer disk remains comparatively massive and the fast precession that it induces in the motions of the inner disk and the planet effectively breaks the dynamical coupling between the latter two. Our calculations reveal that the largest misalignments are attained when the initial disk mass is low (on the order of that of observed systems at the onset of the transition-disk phase). We argue that, when the misalignment angle is large, the inner and outer parts of the disk become fully detached and damping of the planet's orbital inclination by dynamical friction effectively ceases. This suggests a consistent primordial misalignment scenario: the inner region of a protoplanetary disk can be strongly misaligned by a giant planet on a high-inclination orbit if the disk's mass is low (i.e., late in the disk's evolution); in turn, the planet's orbital inclination is least susceptible to damping in a disk that undergoes a strong misalignment. We find that, in addition to the precession-related oscillations seen in the binary-companion model, the spin--orbit angle also exhibits a secular growth in the planetary-companion case, corresponding to a monotonic increase in the angle between the inner disk's and the total (inner disk plus planet) angular momentum vectors (accompanied by a monotonic decrease in the angle between the planet's and the total angular momentum vectors). This behavior arises when the magnitude of the inner disk's angular momentum is initially comparable to that of the planet but drops below it as a result of mass depletion (on a timescale that is long in comparison with the precession period).
This does not happen when the companion is a binary, since in that case the companion's angular momentum far exceeds that of the inner disk at all times. On the other hand, in the binary case the mass depletion process can drive the system to a resonance between the disk--planet and star--disk precession frequencies, which has the potential of significantly increasing the maximum value of $\theta_\mathrm{sd}$ \citep[e.g.,][]{BatyginAdams13, Lai14}. We show that this resonance is not encountered when the companion is a nearby planet because---in contrast with the binary-companion case, in which the disk--binary precession frequency remains constant---both of these precession frequencies decrease with time in the planetary-companion case. However, we also show that when the torque that the star exerts on the disk is taken into account (and not just that exerted by the companion, as in previous treatments), the misalignment effect of the resonance crossing in the binary case is measurably weaker. A key underlying assumption of the primordial disk-misalignment model is that the planets embedded in the disk remain confined to its plane as the disk's orientation shifts, so that their orbits become misaligned to the same extent as that of the gaseous disk. However, the precession frequency that a binary companion induces in the disk can be significantly higher than the one induced by its direct interaction with an embedded planet, which would lead to the planet's orbital plane separating from that of the disk: this argument was used to critique the original version of the primordial misalignment model \citep[e.g.,][]{PicognaMarzari15}. However, this potential difficulty is mitigated in the planetary-companion scenario, where the ratio of these two frequencies is typically substantially smaller. 
The apparent difference in the obliquity properties of HJs around cool and hot stars can be attributed to the tidal realignment of a cool host star by an initially misaligned HJ \citep[e.g.,][]{Albrecht+12}. The finding \citep{Mazeh+15} that this dichotomy is exhibited also by lower-mass planets and extends to orbital distances where tidal interactions with the star are very weak motivated the SHJ proposal \citep{MatsakosKonigl15}, which postulates that $\sim$50\% of systems contain an HJ that arrives through migration in the protoplanetary disk and becomes stranded near its inner edge for a period of $\lesssim 1$\,Gyr---during which time the central star continues to lose angular momentum by magnetic braking---until the tidal interaction with the star finally causes it to be ingested (resulting in the transfer of the planet's orbital angular momentum to the star and in the realignment of the stellar spin in the case of cool stars). This picture fits naturally with the primordial misalignment model discussed in this paper. In this broader scenario, the alignment properties of currently observed planets (which do not include SHJs) can be explained if these planets largely remain confined to the plane of their primordial parent disk. In the case of cool stars the planets exhibit strong alignment on account of the realignment action of a predecessor SHJ, whereas in the case of hot stars they exhibit a broad range of spin--orbit angles, reflecting the primordial range of disk misalignment angles that was preserved on account of the ineffectiveness of the tidal realignment process in these stars. 
A distinguishing prediction of the planetary-companion variant of the primordial misalignment model in the context of this scenario arises from the expected difference in the alignment properties of the inner and outer disks, which implies that the good alignment exhibited by planets around cool stars should give way to a broad range of apparent spin--orbit angles above a certain orbital period. There is already an observational indication of this trend \citep{LiWinn16}, but additional data are needed to firm it up. A complementary prediction, which is potentially also testable, is that the range of obliquities exhibited by planets around hot stars would narrow toward $\lambda=0^\circ$ at large orbital periods. This scenario may also provide an explanation for another puzzling observational finding---that large-scale debris disks are by and large well aligned with the spin vector of the central star---which, on the face of it, seems inconsistent with the spin-realignment hypothesis. In this interpretation, debris disks are associated with the outer parts of protoplanetary disks and should therefore remain aligned with the central star---as a general rule for hot stars, but also in the case of cool hosts that harbor a stranded HJ if they are observed before the SHJ realigns the star. This explanation is consistent with the fact that the great majority of observed debris disks have inferred ages $\ll 1$\,Gyr, but the extent to which it addresses the above finding can be tested through its prediction that a sufficiently large sample of older systems should also contain misaligned disks. \acknowledgements We are grateful to Dan Fabrycky, Tsevi Mazeh, and Sean Mills for fruitful discussions. We also thank Gongjie Li and Josh Winn for helpful correspondence, and the referee for useful comments. 
This work was supported in part by NASA ATP grant NNX13AH56G and has made use of NASA's Astrophysics Data System Bibliographic Services and of \texttt{matplotlib}, an open-source plotting library for Python \citep{Hunter07}. \bibliographystyle{apj}
\section{Introduction} \label{sec-intr} The evolution of an actual system is likely to depend not only on the present state but also on the historical state. Motivated by this, differential equations with delays have been widely studied. Compared with differential equations without delays, an interesting problem is to study the effects of small delays on the dynamical behaviors of systems. Much effort has been made in the past few decades: for example, retarded differential equations were studied in \cite{Arino-Pituk,Casal-Corsi-Llave-20,Chicone03,Driver1968,Driver1976,Guoetal,Ouifki,Ryabov1965}, delay partial differential equations were considered in \cite{Bessaihetal,Faria-Huang,Li-Shi,Wangetal}, and specific applications were discussed in \cite{Campbelletal,Erneuxetal,Fowler,Hill-Shafer-18,Li-Kloeden,Mao}. Recently, several works have paid more attention to neutral differential equations with small delays; see \cite{Chen-Shen-20,Gyori-Pitukl,Hale-Lunel2001,Hale-Lunel2002,Liu} and the references therein. As shown in \cite{JKHale-Verduyn}, neutral differential equations can be seen as infinite-dimensional systems if the dynamical behavior is described in the space of continuous functions. In the framework of infinite-dimensional systems, one of the interesting problems is finite-dimensional reduction. On this topic, it is important to study the existence and smoothness of inertial manifolds. As defined in \cite{Foiaseatal}, an inertial manifold is a finite-dimensional smooth submanifold of the phase space which is invariant with respect to the family of solution operators and possesses the exponential tracking property, i.e., the inertial manifold attracts any trajectory that starts outside of the manifold exponentially fast. In this paper, we will consider the existence and smoothness of global inertial manifolds for neutral differential equations. 
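The finite-dimensional reduction and exponential tracking described above can be illustrated numerically in the simplest retarded case. The sketch below is our own illustration, not taken from the paper: the scalar test equation $\dot x(t)=-x(t-r)$, the step size, and all function names are assumptions. It integrates the equation by the method of steps (explicit Euler) and computes the real ``slow'' characteristic root $\mu$ solving $\mu+e^{-\mu r}=0$; for small $r$ every solution collapses onto the one-dimensional slow mode $t\mapsto ce^{\mu t}$ after a short transient, which is the inertial-manifold picture in miniature.

```python
import math

def euler_dde(r=0.1, dt=1e-3, t_end=3.0, history=lambda th: 1.0):
    """Integrate x'(t) = -x(t-r) by the method of steps (explicit Euler).

    The history segment on [-r, 0] is stored so that the delayed value
    x(t - r) can be looked up at every step.
    """
    lag = int(round(r / dt))
    xs = [history(-r + i * dt) for i in range(lag + 1)]  # samples on [-r, 0]
    for _ in range(int(round(t_end / dt))):
        x_now = xs[-1]          # x(t)
        x_delayed = xs[-1 - lag]  # x(t - r)
        xs.append(x_now - dt * x_delayed)
    return xs[lag:]  # samples of x on [0, t_end]

def slow_root(r=0.1):
    """Real root of mu + exp(-mu*r) = 0 by bisection (the slow eigenvalue)."""
    a, b = -2.0, -0.5  # g(a) < 0 < g(b) for g(mu) = mu + exp(-mu * r)
    for _ in range(100):
        m = 0.5 * (a + b)
        if m + math.exp(-m * r) < 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)
```

With $r=0.1$ the slow root is $\mu\approx-1.118$, and the computed solution with constant history $1$ decays like $ce^{\mu t}$ with $c$ close to $1$; the remaining (fast) characteristic roots have real parts of order $-1/r$ and are invisible after a transient.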
For some constant $r>0$, let $\mathcal{C}$ denote the set of all continuous maps from $[-r,0]$ into $\mathbb{R}^n$, which is a Banach space endowed with the supremum norm $|\phi|:=\sup_{\theta\in[-r,0]}|\phi(\theta)|$. The notation $|\cdot|$ is always used to denote norms in different spaces, but no confusion should arise. We first consider a nonautonomous neutral differential equation of the form \begin{eqnarray}\label{NA-NDE-1} \frac{d}{d t}\left\{x(t)-L(t)x_{t}\right\}= F(t,x_t), \end{eqnarray} where the section $x_t(\theta):= x(t+\theta)$ for $\theta\in[-r,0]$, $F:\mathbb{R}\times \mathcal{C} \rightarrow \mathbb{R}^n$ is a continuous map and there is a constant $K>0$ such that \begin{eqnarray*} |F(t,\phi)-F(t,\psi)|\leq K|\phi-\psi| \ \ \mbox{ for any }t\in \mathbb{R} \ \mbox{ and } \ \phi, \psi\in \mathcal{C}, \end{eqnarray*} and for each $t\in \mathbb{R}$, the operator $L(t):\mathcal{C}\to \mathbb{R}^n$ is defined by \begin{eqnarray}\label{L-expres} L(t)\phi=\int^{0}_{-r} d\eta(t,\theta)\phi(\theta) ,\ \ \ \phi \in \mathcal{C}. \end{eqnarray} As was done in \cite[p.255]{JKHale-Verduyn}, we assume that the kernel $\eta: \mathbb{R}\times \mathbb{R}\to \mathbb{R}^{n\times n}$ is measurable and normalized so that $\eta(t,\theta)$ satisfies $\eta(t,\theta)=\eta(t,-r)$ for $\theta\leq -r$, $\eta(t,\theta)=0$ for $\theta\geq 0$, $\eta(t,\cdot)$ is continuous from the left on $(-r,0)$ and has bounded variation uniformly in $t$ such that $t\mapsto L(t)\phi$ is continuous for each $\phi\in \mathcal{C}$. Furthermore, the kernel $\eta$ is uniformly nonatomic at zero, i.e., for every $\epsilon>0$, there exists a constant $\delta>0$ such that the total variation of $\eta(t,\cdot)$ on $[-\delta,0]$ is less than $\epsilon$ for all $t\in \mathbb{R}$. It then follows from \cite[Theorem 8.1, p.61]{JKHale-Verduyn} and \cite[Theorem 8.3, p.65]{JKHale-Verduyn} that the Cauchy problem of equation (\ref{NA-NDE-1}) is well-posed. 
Then for each $t_0\in \mathbb{R}$ and $\phi\in\mathcal{C}$, there exists a unique solution $x(\cdot\,;t_0,\phi): [-r,+\infty) \rightarrow \mathbb{R}^n$ of equation (\ref{NA-NDE-1}) with the initial value $\phi$ at $t_0$, i.e., $x(\cdot\,;t_0,\phi)$ with $x_{t_0}=\phi$ is a continuous map, and $x(t)-L(t)x_{t}$ is continuously differentiable and satisfies equation (\ref{NA-NDE-1}) on $[t_0,+\infty)$. If $L(t)\phi=(0,...,0)^T\in \mathbb{R}^{n}$ for any $t\in \mathbb{R}$ and $\phi\in \mathcal{C}$, then equation (\ref{NA-NDE-1}) reduces to a retarded differential equation. Ryabov (\cite{Ryabov1965}) and Driver (\cite{Driver1968}) showed that retarded differential equations with small delays have a Lipschitz inertial manifold. Later on, Chicone (\cite{Chicone03}) generalized this result to a $C^{1,1}$ inertial manifold if $F$ is $C^{1,1}$ with respect to the second variable by using the Fiber Contraction Theorem (see \cite{Hirsch-Pugh}). However, to approximate inertial manifolds, we need to obtain higher-order smoothness of inertial manifolds. To this end, in the current paper, we will prove that the neutral differential equation (\ref{NA-NDE-1}) with small delay $r$ possesses a global $C^{k,1}$ inertial manifold if $F$ is $C^{k,1}$ with respect to the second variable. We further assume that $L(t)\phi= A\phi(-r)$ for any $\phi\in \mathcal{C}$ and $F(t,x_t)=f(x(t),x(t-r))$ for $t\in \mathbb{R}$ and $r>0$. Then equation (\ref{NA-NDE-1}) is rewritten as \begin{eqnarray}\label{A-NDE-1} \frac{d}{d t}\left\{x(t)-Ax(t-r)\right\}= f(x(t),x(t-r)), \end{eqnarray} where $A$ is an $n\times n$ real matrix and $f$ is a Lipschitz continuous function, i.e., \begin{eqnarray*} |f(y_1,z_1)-f(y_2,z_2)|\leq K \max\{|y_1-y_2|, \ |z_1-z_2|\} \ \ \mbox{ for any } (y_1,z_1) \mbox{ and } (y_2,z_2) \mbox{ in } \mathbb{R}^{n}\times \mathbb{R}^{n}. 
\end{eqnarray*} In the second part of this paper, we will show that the inertial manifold of equation (\ref{A-NDE-1}) with small delay is $C^{k,1}$ with respect to the delay $r$ if $f$ is $C^{k,1}$. When we consider the differentiability of the inertial manifold with respect to the delay $r$, we need to look for an appropriate space on which the inertial manifold is smooth in $r$. As early as 1991, on the space $W^{1,\infty}([-r,0],\mathbb{R}^n)$, where all functions are absolutely continuous and their derivatives are essentially bounded, Hale and Ladeira (\cite{Hale-Ladeira}) used the Uniform Contraction Principle to prove that the solutions are $C^{k-1}$ in $r$ provided that the map $f$ is $C^k$. Then Hartung and Turi (\cite{Hartung-Turi}) studied the differentiability of solutions with respect to parameters for state-dependent delay equations. Chicone (\cite{Chicone03}) used the Fiber Contraction Theorem on a certain weighted Sobolev-type space and proved that the inertial manifolds of retarded differential equations with small delays are $C^{1,1}$ with respect to delays. Subsequently, Chicone (\cite{Chicone04}) further studied the smoothness of the inertial manifold for the delay equation $\dot x(t)= f(x(t),x(t-r))$ via the slow manifold of the $N$th-order ordinary differential equation obtained by replacing the right-hand side of the delay equation by the $N$th-order Taylor polynomial of the function $\tau\mapsto f(x(t), x(t-\tau))$ at $\tau=0$. Unlike Chicone (\cite{Chicone03}), we study the smoothness of the inertial manifold for equation (\ref{NA-NDE-1}) (resp. equation (\ref{A-NDE-1})) on a certain $C^{k,1}$ space endowed with a weighted supremum norm. 
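A neutral equation of this form can be integrated numerically by a neutral variant of the method of steps, which also makes the role of the difference operator $x(t)-Ax(t-r)$ concrete: one advances the differentiated quantity $y(t)=x(t)-Ax(t-r)$ and recovers $x$ from the stored delayed samples. The following minimal sketch rests entirely on our own assumptions (scalar case $n=1$, $A=a=0.3$, $r=0.1$, test nonlinearity $f(x,x_d)=-x$, explicit Euler); none of these choices come from the paper.

```python
import math

def euler_neutral(a=0.3, r=0.1, dt=1e-3, t_end=3.0,
                  history=lambda th: 1.0, f=lambda x, xd: -x):
    """Integrate d/dt [x(t) - a*x(t-r)] = f(x(t), x(t-r)) by explicit Euler.

    We step the difference y(t) = x(t) - a*x(t-r), the quantity that the
    scalar analogue of the neutral equation differentiates, and recover
    x(t+dt) = y(t+dt) + a*x(t+dt-r) from the stored delayed samples.
    """
    lag = int(round(r / dt))
    xs = [history(-r + i * dt) for i in range(lag + 1)]  # x on [-r, 0]
    for _ in range(int(round(t_end / dt))):
        x_now, x_del = xs[-1], xs[-1 - lag]      # x(t), x(t-r)
        y_next = (x_now - a * x_del) + dt * f(x_now, x_del)
        x_del_next = xs[-lag]                    # x(t + dt - r)
        xs.append(y_next + a * x_del_next)
    return xs[lag:]  # samples of x on [0, t_end]
```

With these parameters the solution starting from the constant history $1$ decays monotonically to zero, governed by the slow real root of $\mu(1-ae^{-\mu r})=-1$; the fast characteristic roots of the neutral part cluster near $\ln(a)/r$ and damp out immediately, which is the separation of scales the inertial-manifold construction exploits.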
To prove the completeness of these spaces with the metrics induced by the corresponding norms, we use Henry's lemma (see \cite[Lemma 6.1.6]{Henry1981}), which is widely used to study the smoothness of invariant manifolds; see Chow and Lu \cite{Chow-Lu}, Barreira and Valls \cite{Barreira-Valls2006,Barreira-Valls2007} and Elbialy \cite{Elbialy2001}. Moreover, to guarantee that the Contraction Mapping Principle is valid, the multivariate Fa\`a di Bruno Formula (see \cite[Theorem 2.1]{Constaintine-Savits}) is applied to estimate the desired derivatives. The paper is organized as follows. In section \ref{sec-main result}, we state the main results of this paper, see Theorems \ref{thm-1}-\ref{thm-2}. Before proving these results, we first make some preparations in section 3. In sections \ref{sec-pf-thm-1}-\ref{sec-pf-thm3}, we prove Theorems \ref{thm-1}-\ref{thm-2}, respectively. To illustrate the applications of the main results, we study the dynamics of the van der Pol oscillator model with small delay in section 7. In section 8, we give a discussion of directions for further study. \section{Main results} \label{sec-main result} In this section, we state the main results of this paper. We first consider a nonautonomous neutral differential equation of the form \begin{eqnarray}\label{NA-NDE} \frac{d}{d t}\left\{x(t)-L(t)x_{t}\right\}= F(t,x_t), \end{eqnarray} where the section $x_t(\theta):= x(t+\theta)$ for $\theta\in[-r,0]$. To study the existence and the higher-order smoothness of the inertial manifold for equation (\ref{NA-NDE}) with small delay, we need the following hypotheses. \begin{enumerate} \item[{\bf (H1)}] Let $L(t)$ be given by (\ref{L-expres}) and satisfy the conditions stated in the introduction. Assume that $\sup_{t\in \mathbb{R}}|L(t)|=M$ with $0<M<1$ and that $F:\mathbb{R}\times \mathcal{C} \to \mathbb{R}^n$ is a continuous map which is $C^{k}$ ($k\in \mathbb{N}$) in the second variable. 
There exist positive constants $M_j$, $j=0,1,2,...,k+1$, such that for any $t\in \mathbb{R}$ and $\phi,\psi \in \mathcal{C}$, $|F(t,0)|\leq M_0 e^{\lambda|t|}$, $|D_2^jF(t,\phi)|\leq M_j$ for $j=1,...,k$ and $|D_2^kF(t,\phi)-D_2^k F(t,\psi)|\leq M_{k+1}|\phi-\psi|$, where $D_2^{j}$ denotes the {\it j-}th partial derivative with respect to the second variable. Let the constants $r_0$ and $x^{*}$ satisfy $0<M_1r_0<H(-\ln M/(k+1))$ and $x^{*}\in \left(x_1(r_0),-\ln M/(k+1)\right) \subset \left(x_1(r_0),x_2(r_0)\right)$, where $H$, $x_1(r_0)$ and $x_2(r_0)$ are defined in Appendix A. The delay $r$ and the constant $\lambda$ satisfy $0<r\leq r_0$ and $r\lambda=x^*$. \end{enumerate} Since $x^{*}\in (x_1(r_0),x_2(r_0))$, by Appendix A, we find $x^{*} e^{-x^{*}}-Mx^{*}>M_1r_0$, which implies $(Mx^{*}e^{x^{*}}+M_1r_0e^{x^{*}})/x^{*}<1$. Then from the condition $r\lambda=x^{*}$ it follows that \begin{equation}\label{x-star} \begin{split} Me^{r\lambda}+M_1e^{r\lambda}/\lambda =&\,(Mr\lambda e^{r\lambda}+M_1re^{r\lambda})/(r\lambda) =(Mx^{*} e^{x^{*}}+M_1re^{x^{*}})/x^{*} \\ \leq &\, (Mx^{*}e^{x^{*}}+M_1r_0e^{x^{*}})/x^{*}<1. \end{split} \end{equation} Moreover, $x^{*}< -\ln M/(k+1)$ and $r\lambda=x^*$ imply that $Me^{(k+1)r\lambda}=Me^{(k+1)x^*}<1$. \vskip 0.2cm Our main results on equation (\ref{NA-NDE}) are summarized as follows. \begin{thm}\label{thm-1} Assume that {\bf (H1)} holds. Then there exists a constant $\delta$ with $0<\delta\leq r_0$ such that for equation (\ref{NA-NDE}) with $0<r<\delta$, there is a continuous map $\Psi: \mathbb{R}\times \mathbb{R}^{n}\to \mathbb{R}^n$ satisfying that for each $\xi\in\mathbb{R}^{n}$, $\Psi(\cdot\,,\xi)$ is the solution of equation (\ref{NA-NDE}) with the condition $x(0)-L(0)x_{0}=\xi$, and for each $t\in\mathbb{R}$, $\Psi(t,\cdot\,)$ is a $C^{k,1}$ map on $\mathbb{R}^{n}$. \end{thm} \begin{thm}\label{thm-1-2} Assume that {\bf (H1)} holds. 
Then for the solution $x(\cdot\,; 0,\phi)$ of equation (\ref{NA-NDE}) with $x_{0}=\phi$, there exists a unique $\xi\in \mathbb{R}^n$ such that \begin{eqnarray*} \sup_{t\geq0}|x(t; 0,\phi)-\Psi(t,\xi)|e^{\lambda t}<+\infty. \end{eqnarray*} \end{thm} \begin{rmk} Assume that {\bf (H1)} holds. By Theorem \ref{thm-1} and Theorem \ref{thm-1-2}, we clearly see that the graph of $\Psi$ defined in Theorem \ref{thm-1} forms the $C^{k,1}$ inertial manifold of equation (\ref{NA-NDE}) with small delay. \end{rmk} We further assume that $L(t)\phi= A\phi(-r)$ for any $\phi\in \mathcal{C}$ and $F(t,x_t)=f(x(t),x(t-r))$ for $t\in \mathbb{R}$ and $r>0$. Then equation (\ref{NA-NDE}) is rewritten as \begin{eqnarray}\label{A-NDE} \frac{d}{d t}\left\{x(t)-Ax(t-r)\right\}= f(x(t),x(t-r)), \end{eqnarray} where $A$ is an $n\times n$ real matrix. We will show that the inertial manifold of equation (\ref{A-NDE}) is smooth with respect to the delay $r$. To this end, we need the following hypothesis. \begin{enumerate} \item[{\bf (H2)}] The function $f: \mathbb{R}^n\times \mathbb{R}^n \to \mathbb{R}^n$ is $C^k$ and satisfies that there exist constants $M_j$, $j=1,...,k+1$, such that $|D^jf|\leq M_j$ and $|D^{k}f(y_1,z_1)-D^{k}f(y_2,z_2)|\leq M_{k+1}|(y_1,z_1)-(y_2,z_2)|$, where $D^{j}$ denotes the {\it j-}th derivative of $f$, and the matrix $A$ satisfies $|A|=M$ with $0<M<1/(2\cdot 3^{k})$. Let the constants $r_0$ and $x^{*}$ satisfy $0<M_1r_0<H(-\ln(2\cdot 3^{k}M)/(k+1))$ and $x^{*}\in \left(x_1(r_0),-\ln(2\cdot 3^{k}M)/(k+1)\right) \subset \left(x_1(r_0), x_2(r_0) \right)$, where $H$, $x_1(r_0)$ and $x_2(r_0)$ are defined in Appendix A. The constant $\lambda$ satisfies $\delta\lambda=x^{*}$ for $0<\delta\leq r_0$. \end{enumerate} Without loss of generality, we assume that the positive integer $k\geq 2$ in {\bf (H2)}. Note that $0<M<1/(2\cdot 3^{k})<1$. This implies that $-\ln(2\cdot 3^{k}M)/(k+1)<-\ln M$. 
By Appendix A, we obtain \begin{eqnarray}\label{x-star-1} (Mx^{*}e^{x^{*}}+M_1\delta e^{x^{*}})/x^{*}\leq(Mx^{*}e^{x^{*}}+M_1r_0e^{x^{*}})/x^{*}<1. \end{eqnarray} In addition, note that $x^{*}<-\ln(2\cdot 3^{k}M)/(k+1)$ in {\bf (H2)}. Then \begin{eqnarray}\label{x-star-2} 2\cdot 3^{k}Me^{(k+1)x^{*}}<1. \end{eqnarray} \vskip 0.2cm Finally, we summarize our main results on equation (\ref{A-NDE}) as follows. \begin{thm}\label{thm-2} Assume that {\bf (H2)} holds. Then for a sufficiently small $\delta$ with $0<\delta\leq r_0$, there exists a continuous map $\Psi: \mathbb{R}\times (0\,,\delta) \times \mathbb{R}^{n}\to \mathbb{R}^n$ such that \begin{itemize} \item[(i)] for each $(r,\xi)\in(0\,,\delta) \times \mathbb{R}^{n}$, $\Psi(\cdot\,,r,\xi)$ is the solution of equation (\ref{A-NDE}) with the condition $x(0)-Ax(-r)=\xi$; \item[(ii)] for each $(t,r)\in\mathbb{R}\times (0\,,\delta)$, $\Psi(t,r,\cdot\,)$ is a $C^{k,1}$ map on $\mathbb{R}^{n}$; \item[(iii)] for each $\xi\in \mathbb{R}^{n}$, $\Psi(\cdot\,,\cdot\,, \xi)$ is a $C^{k,1}$ map on $\mathbb{R}\times (0\,,\delta)$. \end{itemize} \end{thm} \begin{rmk} \label{rmk-cut-off} In many practical applications, we are often interested in the dynamics of equation (\ref{NA-NDE}) (resp.\,(\ref{A-NDE})) in some bounded regions, for instance when looking for special bounded solutions such as periodic orbits, homoclinic loops and heteroclinic loops. In such cases we only require that the assumptions on the function $F$ (resp.\,$f$) in equation (\ref{NA-NDE}) (resp.\,(\ref{A-NDE})) hold in some bounded regions. In fact, by the cut-off technique we can modify the original equation (\ref{NA-NDE}) (resp.\,(\ref{A-NDE})) so that the modified equation not only satisfies the assumptions but also coincides with the original one in some large bounded regions. 
On the other hand, based on the higher-order smoothness obtained in Theorem \ref{thm-2}, we can also give an effective approximation of the inertial manifold for equation (\ref{A-NDE}). In fact, by the invariance of the inertial manifold and the so-called post-Newtonian expansion used for retarded differential equations with small delays in \cite{Chicone03,Chicone04}, we can expand the restriction of equation (\ref{A-NDE}) to the inertial manifold in a series with respect to $r$, where a sequence of slow-fast systems is involved. Then we can approximate the inertial manifold of equation (\ref{A-NDE}) with small delay by analyzing the slow manifolds of these slow-fast systems. \end{rmk} \section{Preliminaries} Before proving our main results, we first make some preparations in this section. Let $B_{i}$, $i=1,...,n$, and $E$ be Banach spaces and let $B=B_1\times\cdots\times B_n$ be the product space with the norm $|v|:=\max_{1\leq i\leq n}|v_i|$ for ${v}=(v_1,...,v_n)\in B$. Let $U$ denote an open subset of $B$ and let $h$ be a $C^{k}$ map from $U$ to $E$. As shown in \cite[p.181]{Dieudonne}, the {\it k-}th derivative $D^k h$ and the partial derivative with respect to the {\it j-}th variable $D_{j}h$ are identified with elements of $L^{k}(B; E)$ and $L(B_j; E)$, respectively, where $L^{k}(B; E)$ denotes the set of all $k$-multilinear continuous maps from $B$ to $E$, which is a Banach space equipped with the norm $|\mathcal{L}|=\inf\left\{K\in \mathbb{R}: |\mathcal{L}(u_1,...,u_k)|\leq K |u_1|\cdots|u_k|\mbox{ for any } u_i\in B\right\}$ for each $\mathcal{L}\in L^{k}(B;E)$, and $L(B_j; E)$ denotes the set of all continuous linear maps from $B_{j}$ to $E$. Let $\boldsymbol{\nu}=(\nu_1,...,\nu_n)\in \mathbb{N}_0^n$ with $|\boldsymbol{\nu}|=\nu_1+\cdots+\nu_n\leq k$, where $\mathbb{N}_0$ denotes the set of nonnegative integers. 
$D^{\boldsymbol{\nu}}_{u}h:=\frac{\partial^{|\boldsymbol{\nu}|}}{\partial u_1^{\nu_1}...\partial u_n^{\nu_n}}h$ is said to be the partial derivative of order $|\boldsymbol{\nu}|$ of $h$, and we set $D^{\boldsymbol{\nu}}_{u}h=h$ for $\boldsymbol{\nu}=(0,...,0)$. The following lemma can be found in \cite[p.182]{Dieudonne}. \begin{lm}\label{derivt} Let $U$ be an open set of $\mathbb{R}^n$ and $E$ denote a Banach space. If the map $h: U \rightarrow E$ is $C^k$, then for each $x\in U$ and $\xi_i=(\xi_{i1},...,\xi_{in})^{T}\in \mathbb{R}^n$, $i=1,...,k$, we have $$ D^kh(x)(\xi_1,...,\xi_k)=\sum_{(j_1,...,j_k)} D_{j_1}D_{j_2}\cdots D_{j_k}h(x)\xi_{1,j_1}\cdots \xi_{k,j_k}, $$ where the sum extends over all $n^k$ distinct sequences $(j_1,...,j_k)$ of integers from the set of indices $\{1,...,n\}$. \end{lm} As was done in \cite{Elbialy2000}, for each $0<\alpha\leq1$ and $k\in \mathbb{N}$, where $\mathbb{N}$ denotes the set of positive integers, let $C^{k,\alpha}(U,E)$ be the set of $C^k$ maps $\phi: U\to E$ satisfying \begin{eqnarray*} H_{\alpha}(D^k\phi):=\sup_{x\neq x',\, x,\, x'\in U}\frac{|D^k \phi(x)-D^k \phi(x')|}{|x-x'|^{\alpha}}<\infty. \end{eqnarray*} For any $b>0$, we define the complete metric space $$C^{k,\alpha}_{b}:=\{\phi\in C^{k,\alpha}(U,E): |\phi|_{k,\alpha}\leq b\},$$ where $|\phi|_{k,\alpha}:=\max\left\{|\phi|_{\infty},|D\phi|_{\infty},..., |D^k\phi|_{\infty}, H_{\alpha}(D^k\phi)\right\}$ and $|\cdot|_{\infty}$ denotes the supremum norm. The following lemma is from \cite{Elbialy2000}. \begin{lm}\label{Henry-lm} {\rm({\bf Henry's lemma})} {\rm (i)} Suppose that a sequence $\{\phi_m\}_{m=1}^{+\infty} \subset C^{k,\alpha}_{b}$ and a map $\phi: U \to E$ satisfy $|\phi_m-\phi|_{\infty}\to 0$ as $m\to +\infty$. Then $\phi\in C^{k,\alpha}_{b}$ and $D^k\phi_m(u)\to D^k\phi(u)$ as $m\to+\infty$ for each $u\in U$. {\rm (ii)} Let $U_0\subset U$ be a subset which is uniformly bounded away from the boundary of $U$. 
Then $D^k\phi_m(u)\to D^k\phi(u)$ uniformly on $U_0$. \end{lm} Assume that $B_1=\cdots =B_n=E=\mathbb{R}$. To state a multivariate Fa\`a di Bruno Formula (see \cite{Constaintine-Savits}), we need to introduce some notation. Let $\boldsymbol{\nu}!=\prod_{i=1}^{n}(\nu_i!)$ and $x^{\boldsymbol{\nu}}=\prod_{i=1}^{n}x_i^{\nu_i}$ for $\boldsymbol{\nu}=(\nu_1,...,\nu_n)\in\mathbb{N}^n_0$ and $x=(x_1,...,x_n)\in \mathbb{R}^n$. Let $\boldsymbol{\nu}=(\nu_1,...,\nu_n)$ and $\boldsymbol{\mu}=(\mu_1,...,\mu_n)$ belong to $\mathbb{N}^n_0$. We write $\boldsymbol{\mu} \prec \boldsymbol{\nu}$ if one of the following holds: {(i)} $|\boldsymbol{\mu}|<|\boldsymbol{\nu}|$; {(ii)} $|\boldsymbol{\mu}|=|\boldsymbol{\nu}|$ and $\mu_1<\nu_1$; {(iii)} $|\boldsymbol{\mu}|=|\boldsymbol{\nu}|$, $\mu_1=\nu_1$,..., $\mu_j=\nu_j$ and $\mu_{j+1}<\nu_{j+1}$ for some $1\leq j<n$. Let $h(x_1,...,x_n)=g(g_1(x_1,...,x_n),...,g_m(x_1,...,x_n))$ for $x=(x_1,...,x_n)\in \mathbb{R}^n$, where $g_i:\mathbb{R}^n\to \mathbb{R}$, $i=1,...,m$, are $C^k$ maps defined in a neighborhood of $x^0=(x^0_1,...,x^0_n)$ and $g:\mathbb{R}^m\to \mathbb{R}$ is a $C^k$ map defined in a neighborhood of $y^0:=(g_1(x^0),...,g_m(x^0))\in\mathbb{R}^m$. We define $h_{\boldsymbol{\nu}}:=D^{\boldsymbol{\nu}}_{x}h(x^0)$, $g_{\boldsymbol{\omega}}:=D^{\boldsymbol{\omega}}_{y}g(y^0)$, $g^{(i)}_{\boldsymbol{\mu}}:=D^{\boldsymbol{\mu}}_{x}g_{i}(x^0)$ and ${\bf g}_{\boldsymbol{\mu}}:=(g^{(1)}_{\boldsymbol{\mu}}, ...,g^{(m)}_{\boldsymbol{\mu}})$. Set $0^0=1$. The following lemma gives the explicit expression of an arbitrary partial derivative of the composite function $h$; it can be found in \cite[Theorem 2.1]{Constaintine-Savits}. \begin{lm}\label{partial-devt} Let $h$ be given as above. Then we have \begin{eqnarray*} h_{\boldsymbol{\nu}} = \!\!\sum_{1\leq|\boldsymbol{\omega}|\leq |\boldsymbol{\nu}|}g_{\boldsymbol{\omega}}\sum_{s=1}^{|\boldsymbol{\nu}|}\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})}(\boldsymbol{\nu}!) 
\prod_{j=1}^{s}\frac{({\bf g}_{\boldsymbol{l_j}})^{\boldsymbol{k_j}}}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}}, \end{eqnarray*} where \begin{eqnarray*} p_s(\boldsymbol{\nu},\boldsymbol{\omega})\!\!\!&=&\!\!\! \{(\boldsymbol{k_1},...,\boldsymbol{k_s};\boldsymbol{l_1},...,\boldsymbol{l_s}): |\boldsymbol{k_j}|>0, 0\prec \boldsymbol{l_1}\prec...\prec \boldsymbol{l_s},\\ \!\!\!& &\!\!\! \sum_{j=1}^{s} \boldsymbol{k_j}=\boldsymbol{\omega} \mbox{ and } \sum_{j=1}^{s} |\boldsymbol{k_j}|\boldsymbol{l_j}=\boldsymbol{\nu} \}. \end{eqnarray*} \end{lm} \section{Proof of Theorem \ref{thm-1}} \label{sec-pf-thm-1} For any fixed constant $d>0$, set $V^{0}_d:=\{\xi\in \mathbb{R}^n: |\xi|<d\}$. Let $\mathcal{B}_{d,\lambda}$ denote the set of continuous maps $x$ from $\mathbb{R}\times V^{0}_d$ to $\mathbb{R}^n$ that are $C^k$ in the second variable and satisfy, for some constants $\beta_{j}>0$, \begin{eqnarray} |x|_{\mathcal{B}_{d,\lambda}}\!\!\!&:=&\!\!\! \sup_{(t,\xi)\in \mathbb{R}\times V^{0}_d} |x(t,\xi)|e^{-\lambda|t|} \leq \beta_0,\label{norm-x}\\ |x|_{\mathcal{B}_{d,\lambda},j}\!\!\!&:=&\!\!\! \sup_{(t,\xi)\in \mathbb{R}\times V^{0}_d} |D_2^jx(t,\xi)|e^{-j\lambda|t|} \leq \beta_j, \ \ j=1,2,...,k,\label{bd-xj}\\ |x|_{\mathcal{B}_{d,\lambda},k+1}\!\!\!\!\!&:=&\!\!\!\! \sup_{t\in \mathbb{R},\, \xi_1,\,\xi_2\in V^{0}_d,\,\xi_1\neq\xi_2} \!\!\frac{|D_2^k x(t,\xi_1)-D_2^k x(t,\xi_2)|}{|\xi_1-\xi_2|}e^{-(k+1)\lambda|t|} \leq \beta_{k+1},\label{bd-lip-x} \end{eqnarray} where the constant $\lambda$ is given as in {\bf (H1)}. \begin{lm}\label{lm-4-complete} Let the map $\rho$ be defined by $\rho(x,y)=|x-y|_{\mathcal{B}_{d,\lambda}}$ for any $x, y\in \mathcal{B}_{d,\lambda}$. Then the set $\mathcal{B}_{d,\lambda}$ endowed with the metric $\rho$ is a complete metric space. \end{lm} \begin{proof} Clearly, the map $\rho$ on $\mathcal{B}_{d,\lambda} \times \mathcal{B}_{d,\lambda}$ is well defined and induces a metric. 
In the following we prove the completeness of the metric space $(\mathcal{B}_{d,\lambda}, \rho)$. Let $\{g_m\}_{m=1}^{+\infty}$ be a Cauchy sequence in $\mathcal{B}_{d,\lambda}$, that is, for any $\epsilon>0$, there is a positive integer $N(\epsilon)$ such that for any positive integers $m, m'\geq N(\epsilon)$, \begin{eqnarray*}\label{Cauchy-seq-1} \rho(g_{m'},g_m)=|g_{m'}-g_m|_{\mathcal{B}_{d,\lambda}}= \sup_{(t,\xi)\in \mathbb{R}\times V^{0}_d} |g_{m'}(t,\xi)-g_m(t,\xi)|e^{-\lambda|t|}<\epsilon. \end{eqnarray*} Then $\{g_m(t,\xi)e^{-\lambda|t|}\}_{m=1}^{+\infty}$ is a Cauchy sequence in the Banach space $C_b(\mathbb{R}\times V^{0}_d,\mathbb{R}^n):= \{f \in C(\mathbb{R}\times V^{0}_d,\mathbb{R}^n): \sup_{(t,\xi)\in \mathbb{R}\times V^{0}_d} |f(t,\xi)|<+\infty\}$. Let $\tilde{g}_0(t, \xi)$ be the limit of $g_{m}(t,\xi)e^{-\lambda|t|}$ in $C_b(\mathbb{R}\times V^{0}_d,\mathbb{R}^n)$ and set $g_0(t,\xi):=\tilde{g}_0(t,\xi)e^{\lambda|t|}$. This implies that $\rho(g_{m},g_{0})\to 0$ as $m\to +\infty$. Next we prove that $g_0\in\mathcal{B}_{d,\lambda}$. For any $g\in \mathcal{B}_{d,\lambda}$ and fixed $t\in \mathbb{R}$, we define the map $\widetilde{g}:V^{0}_d\to \mathbb{R}^n$ by $\widetilde{g}(\xi)=g(t,\xi)$; then $|\widetilde{g}(\xi)|=|g(t,\xi)|\leq |g|_{\mathcal{B}_{d,\lambda}}\,e^{\lambda|t|}\leq \beta_0e^{\lambda|t|}.$ Recall that $\rho(g_{m},g_{0})\to 0$ as $m\to +\infty$. Then we have $|\widetilde{g}_m-\widetilde{g}_0|_{\infty}\to0$ as $m\to+\infty$. From (\ref{norm-x})-(\ref{bd-lip-x}) it follows that $\{\widetilde{g}_m\}_{m=1}^{+\infty}\subset C_{\beta(t)}^{k,1}$, where $\beta(t)=\max\{\beta_{0}e^{\lambda|t|}, \beta_je^{j\lambda|t|} \mbox{ for } j=1,2,...,k+1\}$. Hence, by Lemma \ref{Henry-lm} we obtain $\widetilde{g}_0\in C_{\beta(t)}^{k,1}$ and $D^j\widetilde{g}_m(\xi)\to D^j\widetilde{g}_0(\xi)$ as $m\to +\infty$ for each $\xi\in V^{0}_d$ and $j=1,2,...,k$. Together with (\ref{norm-x})-(\ref{bd-lip-x}) again, we obtain $g_0\in\mathcal{B}_{d,\lambda}$. 
Therefore, the proof is complete. \end{proof} On the space $\mathcal{B}_{d,\lambda}$, we define a map $\mathcal{T}$ by \begin{eqnarray}\label{map-1} \mathcal{T}(x)(t,\xi)=\xi+L(t)x_{t}+\int_0^{t} F(s,x_s)ds. \end{eqnarray} We will show that the unique fixed point of $\mathcal{T}$ in $\mathcal{B}_{d,\lambda}$ is the desired map $\Psi$ in Theorem \ref{thm-1}. To this end, we first need to prove the following lemmas. \begin{lm}\label{lm-GB4-1} Assume that {\bf (H1)} holds. Then for any $x\in \mathcal{B}_{d,\lambda}$ and $(t,\xi) \in \mathbb{R}\times V^{0}_d$, the following estimates hold: \begin{itemize} \item[(i)] $|\mathcal{T}(x)(t,\xi)|e^{-\lambda|t|}\leq d+M_0/\lambda+(Me^{r\lambda}+M_1e^{r\lambda}/\lambda)\beta_0.$ \item[(ii)] $|D_2\mathcal{T}(x)(t,\xi)|e^{-\lambda|t|}\leq 1+(Me^{r\lambda}+M_1e^{r\lambda}/\lambda)\beta_1.$ \item[(iii)] For any positive integer $m$ with $2\leq m\leq k$, we have $$|D_2^{m}\mathcal{T}(x)(t,\xi)|e^{-m\lambda|t|}\leq M\beta_{m}e^{mr\lambda}+A_m/\lambda,$$ where \begin{eqnarray*} A_m\!\!\!&=&\!\!\!(m-1)!e^{mr\lambda}\sum_{j=1}^{m} M_j\sum_{p(m,j)}\prod_{i=1}^{m}\frac{\beta_i^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}},\\ p(m,j)\!\!\!&=&\!\!\!\left\{(\omega_1,...,\omega_m): \omega_i\in\mathbb{N}_0, \sum_{i=1}^{m}\omega_i=j, \sum_{i=1}^{m}i\omega_i=m\right\}. \end{eqnarray*} \end{itemize} \end{lm} \begin{proof} For any $x\in \mathcal{B}_{d,\lambda}$ and $(t,\xi) \in \mathbb{R}\times V^{0}_d$, we have \begin{eqnarray*} |\mathcal{T}(x)(t,\xi)|\!\!\!&\leq&\!\!\! |\xi|+|L(t)x_{t}|+|\int_0^{t} |F(s,x_s)-F(s,0)|ds|+|\int_0^{t} |F(s,0)|ds|\\ \!\!\!&\leq&\!\!\! d+M\beta_0e^{\lambda(r+|t|)}+M_1|\int_0^{t} \beta_0e^{\lambda(r+|s|)}ds|+M_0e^{\lambda|t|}/\lambda\\ \!\!\!&\leq&\!\!\! \left(d+M_0/\lambda+(Me^{r\lambda}+M_1e^{r\lambda}/\lambda)\beta_0\right)e^{\lambda|t|}. \end{eqnarray*} Thus, result (i) is proved. 
By Leibniz's Rule (\cite[Theorem 8.11.2, p.177]{Dieudonne}) and the linearity of $L(t)$, we see that \begin{eqnarray*} D_2\mathcal{T}(x)(t,\xi)=I+L(t)(D_2x_{t})+\int_0^{t} D_2F(s,x_s)D_2 x_s ds. \end{eqnarray*} Then we have \begin{eqnarray*} |D_2\mathcal{T}(x)(t,\xi)|\!\!\!&\leq&\!\!\! 1 +M|D_2x_{t}|+|\int_0^{t} |D_2F(s,x_s)||D_2 x_s| ds|\\ \!\!\!&\leq&\!\!\! 1 +M\beta_{1}e^{\lambda(r+|t|)}+|\int_0^{t} M_1\beta_1e^{\lambda(r+|s|)}ds| \leq \left(1+(Me^{r\lambda}+M_1e^{r\lambda}/\lambda)\beta_1\right)e^{\lambda|t|}. \end{eqnarray*} Thus, result (ii) is proved. For any positive integer $m$ with $2\leq m\leq k$, by Leibniz's Rule and the univariate Fa\`a di Bruno Formula (see (1.1) in \cite[p.503]{Constaintine-Savits}), we have \begin{eqnarray*} |D^m_2\mathcal{T}(x)(t,\xi)|\!\!\!&\leq&\!\!\!|L(t)||D_2^{m}x_{t}|+m!|\int_0^{t} \sum_{j=1}^{m} |D^j_2F(s,x_s)|\sum_{p(m,j)}\prod_{i=1}^{m} \frac{|D^{i}_2 x_s|^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}} ds|\\ \!\!\!&\leq&\!\!\! M\beta_{m}e^{m\lambda(r+|t|)}+m!|\int_0^{t}\sum_{j=1}^{m} M_j\sum_{p(m,j)}\prod_{i=1}^{m}\frac{(\beta_ie^{i\lambda(r+|s|)})^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}ds|\\ \!\!\!&\leq&\!\!\! M\beta_{m}e^{m\lambda(r+|t|)}+m!e^{mr\lambda}\sum_{j=1}^{m} M_j\sum_{p(m,j)}\prod_{i=1}^{m}\frac{\beta_i^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}|\int_0^{t} e^{m\lambda|s|} ds|\\ \!\!\!&\leq&\!\!\! \left(M\beta_{m}e^{mr\lambda}+A_{m}/\lambda\right)e^{m\lambda|t|}. \end{eqnarray*} Thus, result (iii) is proved. Then the proof is complete. \end{proof} \begin{lm}\label{lm-GB4-2} For any $t\in \mathbb{R}$, $\xi_1,\,\xi_2\in \mathbb{R}^{n}$ and $x\in \mathcal{B}_{d,\lambda}$, we have $$ |D_2^k\mathcal{T}(x)(t,\xi_1)-D_2^k\mathcal{T}(x)(t,\xi_2)| \leq \left(M\beta_{k+1}e^{(k+1)r\lambda}+A_{k+1}/\lambda\right) e^{(k+1)\lambda|t|}|\xi_1-\xi_2|, $$ where $A_{k+1}:=(A^1_{k+1}+A^2_{k+1})/(k+1)$, \begin{eqnarray*} A^1_{k+1}\!\!\!&:=&\!\!\! 
\beta_1e^{(k+1)r\lambda}(k!)\sum_{m=1}^{k} M_{m+1}\sum_{p(k,m)}\prod_{i=1}^{k}\frac{\beta_i^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}},\\ A^2_{k+1}\!\!\!&:=&\!\!\! e^{(k+1)r\lambda}(k!)\sum_{m=1}^{k} M_{m}\sum_{p(k,m)}\sum_{j=1}^{k} \frac{\omega_{j}\beta_{j+1}\beta_j^{\omega_j-1}}{(\omega_j!)(j!)^{\omega_j}} \left(\prod_{i=0}^{j-1}\frac{\beta_i^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\right) \left(\prod_{i=j+1}^{k+1}\frac{\beta_i^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\right), \end{eqnarray*} and $\omega_{0}=\omega_{k+1}:=0$. \end{lm} \begin{proof} For any $\xi_1,\,\xi_2\in \mathbb{R}^{n}$ and $x\in \mathcal{B}_{d,\lambda}$, let $y_t:=x_t(\cdot\,,\xi_1)$, $z_t:=x_t(\cdot\,,\xi_2)$, $G(t,\xi_1):=F(t,y_t)$ and $G(t,\xi_2):=F(t,z_t)$. By Appendix B, we have \begin{eqnarray}\label{est-G-dev} |D^k_2G(t,\xi_1)-D^k_2G(t,\xi_2)|\leq I_1+I_2, \end{eqnarray} where \begin{equation} \begin{split} I_1:=&\,k!\sum_{m=1}^{k}|D^m_2F(t,y_t)-D^m_2F(t,z_t)| \sum_{p(k,m)}\prod_{i=1}^{k}\frac{|D^{i}_2y_t|^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}},\\ I_2:=&\,k!\sum_{m=1}^{k} |D^m_2F(t,z_t)|\sum_{p(k,m)}\sum_{j=1}^{k} \frac{Q_{j}}{(\omega_j!)(j!)^{\omega_j}} \left(\prod_{i=0}^{j-1} \frac{|D^{i}_2z_t|^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\right) \left(\prod_{i=j+1}^{k+1} \frac{|D^{i}_2y_t|^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\right),\label{I2}\\ Q_{j}:=&\, |D^{j}_2y_t-D^{j}_2z_t|\sum_{l=0}^{\omega_{j}-1}|D^{j}_2z_t|^{l}|D^{j}_2y_t|^{\omega_{j}-l-1}. \end{split} \end{equation} For $I_1$, we observe that \begin{equation} \begin{split} I_1 \leq&\,k!\sum_{m=1}^{k}M_{m+1}|y_t-z_t| \sum_{p(k,m)}\prod_{i=1}^{k}\frac{\beta_i^{\omega_i}e^{i\omega_i\lambda(r+|t|)}}{(\omega_i!)(i!)^{\omega_i}}\\ \leq&\, k!\sum_{m=1}^{k}M_{m+1}\beta_1e^{\lambda(r+|t|)}|\xi_1-\xi_2| \sum_{p(k,m)}\prod_{i=1}^{k}\frac{\beta_i^{\omega_i}e^{i\omega_i\lambda(r+|t|)}}{(\omega_i!)(i!)^{\omega_i}}= A_{k+1}^1 e^{(k+1)\lambda|t|}|\xi_1-\xi_2|\label{I1-est}. 
\end{split} \end{equation} To estimate $I_2$, by (\ref{bd-xj}) and (\ref{bd-lip-x}) we note that \begin{eqnarray*} Q_{j} \!\!\!&\leq&\!\!\! \beta_{j+1}e^{(j+1)\lambda(r+|t|)}|\xi_1-\xi_2| \sum_{l=0}^{\omega_{j}-1}(\beta_je^{j\lambda(r+|t|)})^{l}(\beta_je^{j\lambda(r+|t|)})^{\omega_{j}-l-1}\nonumber\\ \!\!\!&\leq&\!\!\! \omega_{j}\beta_{j+1}\beta_j^{\omega_{j}-1} e^{(1+j\omega_{j})\lambda(r+|t|)}|\xi_1-\xi_2|, \end{eqnarray*} which, together with (\ref{bd-xj}), (\ref{I2}) and {\bf (H1)}, gives \begin{equation} \begin{split} I_2 \leq&\, (k!)\sum_{m=1}^{k}\! M_{m}\!\!\sum_{p(k,m)}\sum_{j=1}^{k} \frac{\omega_{j}\beta_{j+1}\beta_j^{\omega_j-1}e^{(1+j\omega_{j})\lambda(r+|t|)}|\xi_1-\xi_2|}{(\omega_j!)(j!)^{\omega_j}} \\ &\, \times \left(\prod_{i=0}^{j-1}\!\frac{(\beta_ie^{i\lambda(r+|t|)})^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\right) \left(\prod_{i=j+1}^{k+1}\!\!\frac{(\beta_ie^{i\lambda(r+|t|)})^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\right) \leq A_{k+1}^2e^{(k+1)\lambda|t|}|\xi_1-\xi_2|.\label{I2-est} \end{split} \end{equation} Finally, applying Leibniz's Rule, in view of (\ref{bd-lip-x}), (\ref{I1-est}) and (\ref{I2-est}) we have \begin{eqnarray*} \lefteqn{|D_2^k\mathcal{T}(x)(t,\xi_1)-D_2^k\mathcal{T}(x)(t,\xi_2)|}\\ \!\!\!&\leq&\!\!\! |L(t)||D_2^{k}y_{t}-D_2^{k}z_{t}|+ |\int_0^{t} \left(D^k_2G(s,\xi_1)-D^k_2G(s,\xi_2)\right) ds|\\ \!\!\!&\leq&\!\!\!\!\! M\beta_{k+1}e^{(k+1)\lambda(r+|t|)}|\xi_1-\xi_2|+\!\!|\int_0^{t}\!\!(A_{k+1}^1+A_{k+1}^2) e^{(k+1)\lambda|s|}|\xi_1-\xi_2| ds|\\ \!\!\!&\leq&\!\!\!\!\! \left(M\beta_{k+1}e^{(k+1)r\lambda}+A_{k+1}/\lambda\right) e^{(k+1)\lambda|t|}|\xi_1-\xi_2|. \end{eqnarray*} Then Lemma \ref{lm-GB4-2} is established. \end{proof} At the end of this section, we prove Theorem \ref{thm-1} by the Contraction Mapping Principle. \begin{proof}[Proof of Theorem \ref{thm-1}] The proof of this theorem is divided into four steps.
\vskip 0.2cm \noindent{\bf Step (i).} We first choose a suitable constant $\delta$ with $0<\delta\leq r_{0}$ to give the smallness condition. Recall that the constants $r_0$ and $x^{*}$ defined in {\bf (H1)} only depend on $M$ and $M_{1}$. Let the constants $\beta_{j}$, $j=1,...,k+1$, satisfy the following conditions: \begin{eqnarray*} \beta_1\geq x^{*}/(x^{*}-Mx^{*}e^{x^{*}}-M_1r_0e^{x^{*}})>0,\ \ \ \beta_{j}>0, \ \ \ j=2,...,k+1. \end{eqnarray*} By (\ref{x-star}) we also have $\beta_{1}>0$. We define the constant $\delta$ by \begin{eqnarray*} \delta:=\min\left\{r_0, x^{*}(1-Me^{mx^{*}})\beta_{m}/A_{m} \mbox{ for } m=2,...,k+1\right\}, \end{eqnarray*} where the constants $A_m$ are defined in Lemmas \ref{lm-GB4-1} and \ref{lm-GB4-2}. Since each $A_m$ is determined only by $\beta_{j}$ for $j=1,...,m$, and $1-Me^{mx^{*}}>0$ for each $m=2,...,k+1$, the constant $\delta$ is well defined and positive. \vskip 0.2cm \noindent{\bf Step (ii).} Secondly, for the delay $r$ satisfying the smallness condition $r\in (0,\delta)$ and each fixed $d>0$, we construct the desired complete metric space $\mathcal{B}_{d,\lambda}$, where the constant $\lambda=x^{*}/r$. By (\ref{x-star}) we can take a sufficiently large $\beta_{0}$ such that the following inequality holds: \begin{eqnarray}\label{beta-0-restr} d+M_0r_0/x^*+\beta_0(Mx^{*}e^{x^{*}}+M_1r_0e^{x^{*}})/x^{*}\leq \beta_0. \end{eqnarray} Define the constants $\beta_{j}$, $j=0,...,k+1$, associated with $\mathcal{B}_{d,\lambda}$ in the above way. Then by Lemma \ref{lm-4-complete}, the set $\mathcal{B}_{d,\lambda}$ endowed with the metric $\rho$, which is induced by (\ref{norm-x}), is a complete metric space. \vskip 0.2cm \noindent{\bf Step (iii).} Thirdly, we prove that the operator $\mathcal{T}$ defined by (\ref{map-1}) maps $\mathcal{B}_{d,\lambda}$ to itself. Clearly, for each $x\in \mathcal{B}_{d,\lambda}$, $\mathcal{T}(x)$ is a continuous map from $\mathbb{R}\times V^{0}_d$ to $\mathbb{R}^n$.
Note that $F$ and $x$ are $C^k$ maps in the second variable, and $L(t)$ is a linear map for each $t\in \mathbb{R}$. Then $\mathcal{T}(x)$ is a $C^k$ map with respect to the second variable. For $r\in (0,\delta)$, by the condition $r\lambda=x^{*}$, (\ref{x-star}) and (\ref{beta-0-restr}) we have \begin{eqnarray*} d+M_0/\lambda+(Me^{r\lambda}+M_1e^{r\lambda}/\lambda)\beta_0 \leq d+M_0r_0/x^{*}+\beta_0(Mx^{*} e^{x^{*}}+M_1r_0e^{x^{*}})/x^{*}\leq \beta_0. \end{eqnarray*} In view of Lemma \ref{lm-GB4-1} (i), we obtain $|\mathcal{T}(x)|_{\mathcal{B}_{d,\lambda}}\leq \beta_0$. Recall that $\beta_1\geq x^{*}/(x^{*}-Mx^{*}e^{x^{*}}-M_1r_0e^{x^{*}})>0$. Then $(Mx^{*}e^{x^{*}}+M_1r_0e^{x^{*}})/x^{*}\leq 1-1/\beta_1$, which together with (\ref{x-star}) yields \begin{eqnarray*} 1+(Me^{r\lambda}+M_1e^{r\lambda}/\lambda)\beta_1 < 1+\beta_1(Mx^{*}e^{x^{*}}+M_1r_0e^{x^{*}})/x^{*}\leq 1+\beta_1(1-1/\beta_1)=\beta_1. \end{eqnarray*} It follows from Lemma \ref{lm-GB4-1} (ii) that $|\mathcal{T}(x)|_{\mathcal{B}_{d,\lambda},1}\leq \beta_1$. For $m=2,...,k+1$, note that $0<r< x^{*}(1-Me^{mx^{*}})\beta_{m}/A_{m}$; then \begin{eqnarray*} M\beta_{m}e^{mr\lambda}+A_m/\lambda=M\beta_{m}e^{mx^{*}}+A_mr/x^{*} \leq M\beta_{m}e^{mx^{*}}+A_mx^{*}(1-Me^{mx^{*}})\beta_{m}/(A_{m}x^{*}) \leq \beta_{m}. \end{eqnarray*} Then by Lemmas \ref{lm-GB4-1} (iii) and \ref{lm-GB4-2}, we have $|\mathcal{T}(x)|_{\mathcal{B}_{d,\lambda},m}\leq \beta_m$ for each $m=2,...,k+1$. Thus, the operator $\mathcal{T}$ maps $\mathcal{B}_{d,\lambda}$ to itself. \vskip 0.2cm \noindent{\bf Step (iv).} We finally prove that $\mathcal{T}$ is a contraction.
For any $x, y \in \mathcal{B}_{d,\lambda}$ and $(t,\xi) \in \mathbb{R}\times V^{0}_d$, we observe that \begin{equation} \begin{split} |\mathcal{T}(x)(t,\xi)-\mathcal{T}(y)(t,\xi)| \leq&\, |L(t)||x_t-y_t|+ |\int_0^{t} |F(s,x_s)-F(s,y_s)|ds|\\ \leq&\, M |x-y|_{\mathcal{B}_{d,\lambda}}e^{\lambda(r+|t|)}+|\int_0^{t} M_1|x_s-y_s|ds|\\ \leq&\, M |x-y|_{\mathcal{B}_{d,\lambda}}e^{\lambda(r+|t|)}+|\int_0^{t} M_1|x-y|_{\mathcal{B}_{d,\lambda}}e^{\lambda(r+|s|)}ds|\\ \leq&\, (Me^{r\lambda}+M_1e^{r\lambda}/\lambda)|x-y|_{\mathcal{B}_{d,\lambda}}e^{\lambda|t|}.\label{T-contr} \end{split} \end{equation} Then by (\ref{x-star}) and (\ref{T-contr}), \begin{eqnarray*} |\mathcal{T}(x)-\mathcal{T}(y)|_{\mathcal{B}_{d,\lambda}}\leq((Mx^{*}e^{x^{*}}+M_1r_0e^{x^{*}})/x^{*})|x-y|_{\mathcal{B}_{d,\lambda}}, \end{eqnarray*} which together with (\ref{x-star}) yields that $\mathcal{T}$ is a contraction. By the Contraction Mapping Principle, $\mathcal{T}$ has a unique fixed point in the complete metric space $(\mathcal{B}_{d,\lambda}, \rho)$, which we denote by $\Psi$. By (\ref{map-1}), we can check that $\Psi$ satisfies equation (\ref{NA-NDE}) and $\Psi(0,\xi)=\xi+L(0)\Psi_0$. Therefore, Theorem \ref{thm-1} is proved. \end{proof} \begin{rmk}\label{rk-unique} Assume that {\bf (H1)} holds. By a method similar to that used in the proof of Theorem \ref{thm-1}, we can check that for each $\xi\in \mathbb{R}^{n}$, there exists a unique solution $y$ of equation (\ref{NA-NDE}) defined on $\mathbb{R}$ with $\xi=y(0)-L(0)y_{0}$ and $\sup_{t\leq 0}|y(t)|e^{\lambda t}<+\infty$. More precisely, $y(t)=\Psi(t,\xi)$ for $t\in \mathbb{R}$, where $\Psi$ is defined in Theorem \ref{thm-1}. In fact, to obtain this result, we only need to consider the case $k=0$ in the proof of Theorem \ref{thm-1}.
\end{rmk} \section{Proof of Theorem \ref{thm-1-2}} \label{sec-pf-thm-1-2} For any fixed $\phi\in \mathcal{C}$, let $x(t)=x(t;0,\phi)$ for $t\in [-r,+\infty)$ denote the solution of equation (\ref{NA-NDE}) with $x_0=\phi$. Throughout this section, let the constant $\lambda$ be defined as in {\bf (H2)}. Let $\mathcal{S}_{\lambda}$ denote the set of continuous maps $y:\mathbb{R}\to \mathbb{R}^n$ with \begin{eqnarray*} |y-x|_{\mathcal{S}_{\lambda},1}:=\sup_{t\geq 0}|y(t)-x(t)|e^{\lambda t}<+\infty \ \mbox{ and } \ |y|_{\mathcal{S}_{\lambda},2}:= \sup_{t\leq 0}|y(t)|e^{\lambda t}<+\infty. \end{eqnarray*} We define a map $\tilde{\rho}:\mathcal{S}_{\lambda}\times \mathcal{S}_{\lambda}\to \mathbb{R}$ by $ \tilde{\rho}(f,g):=\sup_{t\in \mathbb{R}}|f(t)-g(t)|e^{\lambda t} $ for any $f, g$ in $\mathcal{S}_{\lambda}$. Note that for $f, g$ in $\mathcal{S}_{\lambda}$, we have \begin{eqnarray*} &&|f(t)-g(t)|e^{\lambda t}\leq |f(t)-x(t)|e^{\lambda t}+|g(t)-x(t)|e^{\lambda t} \ \ \ \mbox{ for } t\geq 0,\\ &&|f(t)-g(t)|e^{\lambda t}\leq |f(t)|e^{\lambda t}+|g(t)|e^{\lambda t} \ \ \ \mbox{ for } t\leq 0, \end{eqnarray*} so from the definition of the set $\mathcal{S}_{\lambda}$ it follows that the map $\tilde{\rho}$ is well defined. Furthermore, we have the following result. \begin{lm} $(\mathcal{S}_{\lambda}, \tilde{\rho})$ is a complete metric space. \end{lm} \begin{proof} Clearly, the map $\tilde{\rho}$ induces a metric on the set $\mathcal{S}_{\lambda}$. To prove the completeness of the metric space $(\mathcal{S}_{\lambda}, \tilde{\rho})$, take any Cauchy sequence $\{g_m\}_{m=1}^{+\infty}$ of $\mathcal{S}_{\lambda}$, that is, for any $\epsilon>0$, there is a positive integer $N(\epsilon)$ such that for any positive integers $m, m'\geq N(\epsilon)$, $\tilde{\rho}(g_{m'},g_m)<\epsilon$, which implies \begin{eqnarray*}\label{sup-ym} \sup_{t\geq 0}|g_{m'}(t)-g_{m}(t)|e^{\lambda t}<\epsilon, \ \ \ \sup_{t\leq 0}|g_{m'}(t)-g_{m}(t)|e^{\lambda t}<\epsilon.
\end{eqnarray*} Similarly to Lemma \ref{lm-4-complete}, we observe that $\{g_m(t)e^{\lambda t}\}_{m=1}^{+\infty}$ is a Cauchy sequence in $C_b(\mathbb{R})$. Let $\tilde{g}_0(t)$ be the limit of $g_{m}(t)e^{\lambda t}$ in $C_b(\mathbb{R})$ and $g_0(t):=\tilde{g}_0(t)e^{-\lambda t}$. Then we have $\tilde{\rho}(g_{m},g_{0})\to 0$ as $m\to +\infty$. By the definition of the set $\mathcal{S}_{\lambda}$, we have for sufficiently large $m$, \begin{eqnarray*} &&\sup_{t\geq 0}|g_{0}(t)-x(t)|e^{\lambda t} \leq \sup_{t\geq 0}|g_{0}(t)-g_m(t)|e^{\lambda t}+\sup_{t\geq 0}|g_{m}(t)-x(t)|e^{\lambda t}<+\infty,\\ &&\sup_{t\leq 0}|g_{0}(t)|e^{\lambda t} \leq \sup_{t\leq 0}|g_{0}(t)-g_m(t)|e^{\lambda t}+\sup_{t\leq 0}|g_{m}(t)|e^{\lambda t}<+\infty, \end{eqnarray*} which implies that $g_0\in \mathcal{S}_{\lambda}$. Therefore, the proof is complete. \end{proof} Next we prove Theorem \ref{thm-1-2} by constructing a contraction operator on the complete metric space $(\mathcal{S}_{\lambda}, \tilde{\rho})$, and then applying the statements in Remark \ref{rk-unique}. A similar method has been widely used to establish the existence of the desired solutions of differential equations (see, for instance, \cite{Burton-85,Driver1976}). \begin{proof}[Proof of Theorem \ref{thm-1-2}] For any $y\in \mathcal{S}_{\lambda}$, we define the operator $\mathcal{Q}$ by \begin{eqnarray}\label{eq-def-T} \mathcal{Q}(y)(t) =\left\{ \begin{array}{ll} L(t)y_t+x(t)-L(t)x_t-\int_{t}^{+\infty}(F(s,y_s)-F(s,x_s)) ds, & t>0, \\ x(0)-L(0)x_0-\int_{0}^{+\infty}(F(s,y_s)-F(s,x_s)) ds +L(t)y_t \\ +\int_{0}^{t}F(s,y_s) ds, & t\leq0. \end{array} \right. \end{eqnarray} For any $y\in \mathcal{S}_{\lambda}$ and $t_2\geq t_1\geq r$, we note that \begin{eqnarray*} \lefteqn{|\int_{t_1}^{t_2}(F(s,y_s)-F(s,x_s)) ds|}\\ &\leq&\!\!\!\! |\int_{t_1}^{t_2} M_1|y_s-x_s| ds|\leq M_1|\int_{t_1}^{t_2}|y-x|_{\mathcal{S}_{\lambda},1}\, e^{\lambda(r-s)} ds| \leq M_1|y-x|_{\mathcal{S}_{\lambda},1}\,e^{\lambda(r-t_1)}/\lambda.
\end{eqnarray*} By the Cauchy Convergence Principle, we see that the integral $\int_{t}^{+\infty}(F(s,y_s)-F(s,x_s)) ds$ is well defined for $t\geq 0$. Moreover, for $t \geq 0$ we find that \begin{equation} \begin{split} |\int_{t}^{+\infty}(F(s,y_s)-F(s,x_s)) ds| \leq&\, M_1\int_{t}^{+\infty}|y_s-x_s| ds\leq M_1\int_{0}^{+\infty}|y_s-x_s| ds\\ =&\, M_1\int_{0}^{r}|y_s-x_s| ds+M_1\int_{r}^{+\infty}|y_s-x_s| ds\\ \leq &\, M_1r\left(\max_{t\in[-r,0]}|y(t)-x(t)|+\max_{t\in[0,r]}|y(t)-x(t)|\right) \\ &\,+M_1|y-x|_{\mathcal{S}_{\lambda},1}/\lambda\\ \leq &\, M_1r\left(|y|_{\mathcal{S}_{\lambda},2}e^{r\lambda}+|\phi|+|y-x|_{\mathcal{S}_{\lambda},1}\right)+M_1|y-x|_{\mathcal{S}_{\lambda},1}/\lambda. \label{est-Fy-Fx} \end{split} \end{equation} Since $x$ and $y$ are continuous on $[-r,+\infty)$ and $\mathbb{R}$, respectively, we have \begin{equation} \begin{split} \mathcal{Q}(y)(0^+)=&\,L(0)y_0+x(0)-L(0)x_0-\int_{0}^{+\infty}(F(s,y_s)-F(s,x_s)) ds\label{Ty-0}\\ =&\,\mathcal{Q}(y)(0^-)=\mathcal{Q}(y)(0). \end{split} \end{equation} Thus $\mathcal{Q}(y)$ is continuous at $t=0$. Furthermore, by the continuity of $x$ and $y$, $\mathcal{Q}(y)$ is continuous on $\mathbb{R}$. For $t\geq r$, we have \begin{eqnarray*} |\mathcal{Q}(y)(t)-x(t)|\!\!\!&\leq&\!\!\! |L(t)(y_t-x_t)| +|\int_{t}^{+\infty}(F(s,y_s)-F(s,x_s)) ds|\\ \!\!\!&\leq&\!\!\! M|y_t-x_t|+|\int_{t}^{+\infty}M_1|y_s-x_s|ds|\\ \!\!\!&\leq&\!\!\! (M+ M_1/\lambda)|y-x|_{\mathcal{S}_{\lambda},1}\,e^{\lambda(r-t)}, \end{eqnarray*} which implies $|\mathcal{Q}(y)-x|_{\mathcal{S}_{\lambda},1}<+\infty$.
For $t\leq 0$, by (\ref{est-Fy-Fx}), (\ref{Ty-0}) and the properties of the space $\mathcal{S}_{\lambda}$, we find that \begin{equation} \label{Ty-est} \begin{split} |\mathcal{Q}(y)(t)| \leq&\, |x(0)-L(0)x_0-\int_{0}^{+\infty}(F(s,y_s)-F(s,x_s)) ds|\\ &\, +|L(t)y_t|+|\int_{0}^{t}(F(s,y_s)-F(s,0)) ds|+|\int_0^{t} |F(s,0)|ds|\\ \leq&\, |x(0)-L(0)x_0|+\int_{0}^{+\infty} M_1 |y_s-x_s| ds\\ &\, +M|y|_{\mathcal{S}_{\lambda}, 2}e^{\lambda(r-t)}+|\int_{0}^{t}M_1|y|_{\mathcal{S}_{\lambda},2}e^{\lambda(r-s)} ds|+|\int_0^t M_0 e^{\lambda |s|}ds|\\ &\leq\, (1+M)|\phi|+M_1r\left(|y|_{\mathcal{S}_{\lambda},2}e^{r\lambda}+|\phi|+|y-x|_{\mathcal{S}_{\lambda},1}\right) +M_1|y-x|_{\mathcal{S}_{\lambda},1}/\lambda\\ &\, +M|y|_{\mathcal{S}_{\lambda}, 2}e^{\lambda(r-t)}+M_1|y|_{\mathcal{S}_{\lambda}, 2}\,e^{\lambda(r-t)} /\lambda+M_0e^{-\lambda t}/\lambda\\ =&\, \left\{(1+M+M_1r)|\phi|+M_0/\lambda+M_1(r+1/\lambda)|y-x|_{\mathcal{S}_{\lambda},1}\right.\\ &\,\left.+(M_1re^{r\lambda}+Me^{r\lambda}+M_1e^{r\lambda} /\lambda)|y|_{\mathcal{S}_{\lambda},2}\right\}e^{-\lambda t}, \end{split} \end{equation} which yields that $|\mathcal{Q}(y)|_{\mathcal{S}_{\lambda},2}<+\infty$. Therefore, $\mathcal{Q}$ maps $\mathcal{S}_{\lambda}$ into itself. To prove that $\mathcal{Q}$ is a contraction, for any $y,z \in \mathcal{S}_{\lambda}$ and $t\in \mathbb{R}$, we see that \begin{eqnarray*} |\mathcal{Q}(y)(t)-\mathcal{Q}(z)(t)| \!\!\!&\leq&\!\!\!\! |L(t)(y_t-z_t)|+|\int_{t}^{+\infty}(F(s,y_s)-F(s,z_s))ds|\\ \!\!\!&\leq&\!\!\!\! M\tilde{\rho}(y,z)e^{\lambda(r-t)}+|\!\int_{t}^{+\infty}\!\!\! M_1\tilde{\rho}(y,z)e^{\lambda(r-s)}ds|\\ \!\!\!&\leq&\!\!\!\! \left(Me^{r\lambda}+M_1e^{r\lambda}/\lambda\right)e^{-\lambda t} \tilde{\rho}(y,z), \end{eqnarray*} which together with (\ref{x-star}) yields $$ |\mathcal{Q}(y)(t)-\mathcal{Q}(z)(t)|e^{\lambda t} \leq \left((Mx^{*}e^{x^{*}}+M_1r_0e^{x^{*}})/x^{*}\right)\tilde{\rho}(y,z)<\tilde{\rho}(y,z). $$ Thus $\mathcal{Q}$ is a contraction.
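This last step relies only on the Contraction Mapping Principle: once an operator is shown to shrink distances by a factor $q<1$, Picard iteration converges geometrically to its unique fixed point. As a toy illustration (a minimal scalar sketch, not the operator $\mathcal{Q}$ itself; the map `T` below is our own example):

```python
import math

def picard_fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = T(x_n) until successive iterates agree within tol.
    For a contraction with constant q < 1 this converges geometrically."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# A toy contraction on R with constant q = 1/2, since |d/dx (cos x)/2| <= 1/2.
T = lambda x: 0.5 * math.cos(x)
x_star = picard_fixed_point(T, 0.0)
# x_star satisfies x_star = 0.5 * cos(x_star)
```

The same iteration scheme, run in the weighted metric $\tilde{\rho}$, is what produces the fixed point of $\mathcal{Q}$ below.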
Applying the Contraction Mapping Principle, $\mathcal{Q}$ has a unique fixed point in the complete metric space $(\mathcal{S}_{\lambda}, \tilde{\rho})$, denoted by $y$. By (\ref{eq-def-T}) we can check that $y$ satisfies equation (\ref{NA-NDE}) for $t\in \mathbb{R}$ and by (\ref{Ty-est}) we have $|y|_{\mathcal{S}_{\lambda},2}<+\infty$. Take $\xi=x(0)-L(0)x_0-\int_{0}^{+\infty}(F(s,y_s)-F(s,x_s)) ds\in\mathbb{R}^{n}$. Using (\ref{eq-def-T}) again, we find that $\xi=y(0)-L(0)y_0$. Furthermore, by Remark \ref{rk-unique} we have $y(t)=\Psi(t,\xi)$ for $t\in \mathbb{R}$. Finally, by the definition of $\mathcal{S}_{\lambda}$, we obtain $\sup_{t\geq0}|x(t; 0,\phi)-y(t)|e^{\lambda t}<+\infty$. Therefore, the proof of Theorem \ref{thm-1-2} is complete. \end{proof} \section{Proof of Theorem \ref{thm-2}} \label{sec-pf-thm3} For any fixed constant $d>1$, set $V_{0}:=V_{d}^0=\{\xi\in \mathbb{R}^n: |\xi|<d\}$ and $V_{1}:=\mathbb{R}^n\backslash V_d^0$. Let the constants $r_0$, $\delta$, $\lambda$ be given in {\bf (H2)} and let $\Omega$ denote the interval $(0,\delta)$. For each $\gamma\in\{0,1\}$, let $\mathcal{E}_{\gamma,\lambda}$ be the set of continuous maps $x$ from $\mathbb{R}\times\Omega\times V_{\gamma}$ to $\mathbb{R}^n$ such that $x$ is $C^k$ in $(t,r) \in \mathbb{R}\times\Omega$ for each $\xi\in V_{\gamma}$ and, for some constants $\varepsilon_{j}>0$, $j=0,1,...,k+1$, \begin{eqnarray} |x|_{\mathcal{E}_{\gamma,\lambda}}\!\!\!&:=&\!\!\! \sup_{(t,r,\xi)\in \mathbb{R}\times\Omega\times V_{\gamma}} |x(t,r,\xi)|(e^{\lambda|t|}|\xi|^{\gamma})^{-1} \leq \varepsilon_0,\label{2-norm-x}\\ |x|_{\mathcal{E}_{\gamma,\lambda},j}\!\!\!&:=&\!\!\! \sup_{(t,r,\xi)\in \mathbb{R}\times\Omega\times V_{\gamma}} |D^jx(t,r,\xi)|(e^{\lambda|t|}|\xi|^{\gamma})^{-j} \leq \varepsilon_j, \ \ \ j=1,2,...,k,\label{2-bd-xj}\\ |x|_{\mathcal{E}_{\gamma,\lambda},k+1}\!\!\!&:=&\!\!\!\!\!\!\!\!
\!\!\!\sup_{\xi\in V_{\gamma},\, (t_1,r_1)\neq (t_2,r_2), (t_i,r_i)\in\mathbb{R}\times\Omega}\!\!\!\!\!\!\!\!\! \frac{|D^k x(t_1,r_1,\xi)-D^k x(t_2,r_2,\xi)|}{|(t_1,r_1)-(t_2,r_2)|}(e^{\lambda|t_{*}|}|\xi|^{\gamma})^{-(k+1)} \!\! \leq \varepsilon_{k+1},\label{2-bd-lip-x} \end{eqnarray} where $|t_*|=\max\{|t_1|,|t_2|\}$ and for simplicity, in this section $D^{j}x$ denote the {\it j-}th derivative of $x$ with respect to $(t,r)$. \begin{lm}\label{lm-5-complete} For each $\gamma \in \{0,1\}$, the metric space $(\mathcal{E}_{\gamma,\lambda}, \rho_{\gamma})$ is complete, where $\rho_{\gamma}(x,y)=|x-y|_{\mathcal{E}_{\gamma,\lambda}}$ for any $x, y\in \mathcal{E}_{\gamma,\lambda}$. \end{lm} \begin{proof} Clearly, for each $\gamma \in \{0,1\}$, $\rho_{\gamma}$ is well defined and induces a metric for the set $\mathcal{E}_{\gamma,\lambda}$. To prove the completeness of the metric space $(\mathcal{E}_{\gamma,\lambda}, \rho_{\gamma})$, take any Cauchy sequence $\{g_m\}_{m=1}^{+\infty}$ of $\mathcal{E}_{\gamma,\lambda}$, that is, for any $\epsilon>0$, there is a positive integer $N(\epsilon)$ such that for any $m, m' \geq N(\epsilon)$, \begin{eqnarray}\label{Cauchy-seq-2} \rho_{\gamma}(g_{m'},g_m)=\sup_{(t,r,\xi)\in \mathbb{R}\times\Omega\times V_{\gamma}} |g_{m'}(t,r,\xi)-g_m(t,r,\xi)|(e^{\lambda|t|}|\xi|^{\gamma})^{-1} <\!\epsilon. \end{eqnarray} Similarly to Lemma \ref{lm-4-complete}, there exists a continuous map $g_0$ from $\mathbb{R}\times\Omega\times V_{\gamma}$ to $\mathbb{R}^n$ such that $\rho_{\gamma}(g_{m},g_{0})\to 0$ as $m\to+\infty$. Next we claim that $g_0 \in \mathcal{E}_{\gamma,\lambda}$. For any $g\in \mathcal{E}_{\gamma,\lambda}$ and any $t_0>0$, let $\xi\in V_{\gamma}$ be fixed and $\widetilde{g}(t,r)=g(t,r,\xi)$ for $(t,r)\in(-t_0,t_0)\times(0,\delta)$. Then we see that $|\widetilde{g}(t,r)|=|g(t,r,\xi)| \leq |g|_{\mathcal{E}_{\gamma,\lambda}} e^{\lambda|t_0|}|\xi|^{\gamma}.$ Recall that $\rho_{\gamma}(g_{m},g_{0})\to 0$ as $m\to+\infty$. 
Then we have $|\widetilde{g}_m-\widetilde{g}_0|_{\infty}\to0$ as $m\to+\infty$. From (\ref{2-norm-x})-(\ref{2-bd-lip-x}) it follows that $\{\widetilde{g}_m\}_{m=1}^{+\infty}\subset C_{\varepsilon(t_0,\xi)}^{k,1}$, where $\varepsilon(t_0,\xi)=\max\{\varepsilon_{0}e^{\lambda|t_0|}|\xi|^{\gamma}, \varepsilon_j(e^{\lambda|t_0|}|\xi|^{\gamma})^{j} \mbox{ for }j=1,2,...,k+1\}$. Hence, by Lemma \ref{Henry-lm}, we obtain $\widetilde{g}_0\in C_{\varepsilon(t_0,\xi)}^{k,1}$ and $D^j\widetilde{g}_m(t,r)\to D^j\widetilde{g}_0(t,r)$ as $m\to +\infty$ for each $(t,r)\in (-t_0,t_0)\times(0,\delta)$ and $j=1,2,...,k$. Since $t_0$ is arbitrary, using (\ref{2-norm-x})-(\ref{2-bd-lip-x}) again we obtain that $g_0\in \mathcal{E}_{\gamma,\lambda}$. Thus, the claim is true and Lemma \ref{lm-5-complete} is established. \end{proof} On each $\mathcal{E}_{\gamma,\lambda}$, we define a map $\mathcal{F}$ of the form \begin{eqnarray}\label{map-2} \mathcal{F}(x)(t,r,\xi):=\xi+Ax(t-r,r,\xi)+\int_0^{t}f(x(s,r,\xi),x(s-r,r,\xi))ds. \end{eqnarray} To prove Theorem \ref{thm-2}, we first make some preparations. \begin{lm}\label{lm-GB5-1} Let $x\in \mathcal{E}_{\gamma,\lambda}$ and $g(t,r,\xi)=x(t-r,r,\xi)$ for $(t,r,\xi)\in \mathbb{R}\times\Omega\times V_{\gamma}$.
Then for each $\boldsymbol{\nu}=(\nu_1,\nu_2)\in\mathbb{N}_0^2$ with $1\leq|\boldsymbol{\nu}|\leq k$, the following results hold: \begin{itemize} \item[(i)] For any $(t,r,\xi) \in \mathbb{R}\times\Omega\times V_{\gamma}$, \begin{eqnarray}\label{deriv-xtr} \frac{\partial^{\nu_1+\nu_2}}{\partial t^{\nu_1}\partial r^{\nu_2} }g(t,r,\xi) =\sum_{j=0}^{\nu_2}\frac{(-1)^{j}\nu_2!}{j!(\nu_2-j)!} \frac{\partial^{\nu_1+\nu_2}}{\partial y_1^{\nu_1+j}\partial y_2^{\nu_2-j}} x(t-r,r,\xi), \end{eqnarray} and $ |\frac{\partial^{\nu_1+\nu_2}}{\partial t^{\nu_1}\partial r^{\nu_2} }g(t,r,\xi)| \leq \varepsilon_{|\boldsymbol{\nu}|}S_{\boldsymbol{\nu}}\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{\nu}|}, $ where the constant $S_{\boldsymbol{\nu}}=2^{\nu_2}e^{|\boldsymbol{\nu}|x^{*}}$ and $x^{*}$ is defined in {\bf (H2)}. \item[(ii)] For any $(t_1,r_1,\xi),\,(t_2,r_2,\xi)$ in $\mathbb{R}\times\Omega\times V_{\gamma}$ with $(t_1,r_1)\neq(t_2,r_2)$ and $|\boldsymbol{\nu}|=k$, \begin{eqnarray*} \!\!\! |\frac{\partial^{k}}{\partial t^{\nu_1}\partial r^{\nu_2} }g(t_1,r_1,\xi) -\frac{\partial^{k}}{\partial t^{\nu_1}\partial r^{\nu_2} }g(t_2,r_2,\xi)| \leq \varepsilon_{k+1}S_{(\nu_1,\nu_2+1)}\left(e^{\lambda|t_{*}|}|\xi|^{\gamma}\right)^{k+1} |(t_1,r_1)-(t_2,r_2)|, \end{eqnarray*} where $|t_{*}|=\max\{|t_1|,|t_2|\}$. \item[(iii)] For any $(t,r,\xi) \in \mathbb{R}\times\Omega\times V_{\gamma}$, $ |\frac{\partial^{\nu_1+\nu_2}}{\partial t^{\nu_1}\partial r^{\nu_2} }f(x(t,r,\xi),x(t-r,r,\xi))| \leq T_{\boldsymbol{\nu}}\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{\nu}|}, $ where the constant $$ T_{\boldsymbol{\nu}}= \sum_{1\leq |\boldsymbol{\omega}|\leq |\boldsymbol{\nu}|}M_{|\boldsymbol{\omega}|} \sum_{s=1}^{|\boldsymbol{\nu}|}\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})}(\boldsymbol{\nu}!)
\prod_{j=1}^{s}\frac{1}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} \varepsilon_{|\boldsymbol{l_j}|}^{k_{j1}}\left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}}\right)^{k_{j2}}, $$ and $\boldsymbol{\omega}=(\omega_1,\omega_2)$, $\boldsymbol{k_j}=(k_{j1},k_{j2})$, $\boldsymbol{l_j}=(l_{j1},l_{j2})$, $p_s(\boldsymbol{\nu},\boldsymbol{\omega})$ are defined in Lemma \ref{partial-devt}. \end{itemize} \end{lm} \begin{proof} For each $\boldsymbol{\nu}=(\nu_1,\nu_2)\in\mathbb{N}_0^2$ with $1\leq|\boldsymbol{\nu}|\leq k$ and any $(t,r,\xi) \in \mathbb{R}\times\Omega\times V_{\gamma}$, we note that \begin{eqnarray*} \frac{\partial^{\nu_1+\nu_2}}{\partial t^{\nu_1}\partial r^{\nu_2} }g(t,r,\xi)= \frac{\partial^{\nu_2}}{\partial r^{\nu_2}}\left(\frac{\partial^{\nu_1}}{\partial t^{\nu_1}}x(t-r,r,\xi) \right) =\sum_{j=0}^{\nu_2}\frac{(-1)^{j}\nu_2!}{j!(\nu_2-j)!} \frac{\partial^{\nu_1+\nu_2}}{\partial y_1^{\nu_1+j}\partial y_2^{\nu_2-j}} x(t-r,r,\xi), \end{eqnarray*} which implies that \begin{eqnarray*} |\frac{\partial^{\nu_1+\nu_2}}{\partial t^{\nu_1}\partial r^{\nu_2}}g(t,r,\xi)| \leq \sum_{j=0}^{\nu_2}\frac{\nu_2!}{j!(\nu_2-j)!}\varepsilon_{|\boldsymbol{\nu}|} \!\left(e^{\lambda(r+|t|)}|\xi|^{\gamma}\right)\!^{|\boldsymbol{\nu}|} \leq 2^{\nu_2}e^{|\boldsymbol{\nu}|r\lambda}\varepsilon_{|\boldsymbol{\nu}|}\!\left(e^{\lambda|t|}|\xi|^{\gamma}\right)\!^{|\boldsymbol{\nu}|}. \end{eqnarray*} In view of $0<r<\delta$ and $\delta\lambda=x^{*}$, result (i) is proved. For any $(t_1,r_1,\xi),\,(t_2,r_2,\xi)$ in $\mathbb{R}\times\Omega\times V_{\gamma}$ with $(t_1,r_1)\neq(t_2,r_2)$ and $|\boldsymbol{\nu}|=k$, by (\ref{deriv-xtr}) we have \begin{eqnarray*} \lefteqn{|\frac{\partial^{k}}{\partial t^{\nu_1}\partial r^{\nu_2} }g(t_1,r_1,\xi) -\frac{\partial^{k}}{\partial t^{\nu_1}\partial r^{\nu_2} }g(t_2,r_2,\xi)|}\\ \!\!\!&\leq&\!\!\!
\sum_{j=0}^{\nu_2}\frac{\nu_2!}{j!(\nu_2-j)!} |\frac{\partial^{k}}{\partial y_1^{\nu_1+j}\partial y_2^{\nu_2-j}}x(t_1-r_1,r_1,\xi) -\frac{\partial^{k}}{\partial y_1^{\nu_1+j}\partial y_2^{\nu_2-j}}x(t_2-r_2,r_2,\xi)|\\ \!\!\!&\leq&\!\!\! \sum_{j=0}^{\nu_2}\frac{\nu_2!}{j!(\nu_2-j)!}\varepsilon_{k+1} \left(e^{\lambda(\delta+|t_{*}|)}|\xi|^{\gamma}\right)^{k+1}\max\{|t_1-t_2-r_1+r_2|,|r_1-r_2|\}\\ \!\!\!&\leq&\!\!\varepsilon_{k+1} S_{(\nu_1,\nu_2+1)}\left(e^{\lambda|t_{*}|}|\xi|^{\gamma}\right)^{k+1} \max\{|t_1-t_2|,|r_1-r_2|\}. \end{eqnarray*} Then result (ii) is proved. By Lemma \ref{partial-devt}, we obtain \begin{eqnarray*} \lefteqn{|\frac{\partial^{\nu_1+\nu_2}}{\partial t^{\nu_1}\partial r^{\nu_2} }f(x(t,r,\xi),x(t-r,r,\xi))|}\\ \!\!\!\!&\leq&\!\!\!\!\!\!\! \sum_{1\leq |\boldsymbol{\omega}|\leq |\boldsymbol{\nu}|}\! |\frac{\partial^{|\boldsymbol{\omega}|}}{\partial y_1^{\omega_1}\partial y_2^{\omega_2}}f| \sum_{s=1}^{|\boldsymbol{\nu}|}\!\!\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})}(\boldsymbol{\nu}!)\\ \!\!\!\!& &\!\!\! \times \prod_{j=1}^{s}\frac{1}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} |\frac{\partial^{|\boldsymbol{l_j}|}}{\partial t^{l_{j1}}\partial r^{l_{j2}} }x(t,r,\xi)|^{k_{j1}} |\frac{\partial^{|\boldsymbol{l_j}|}}{\partial t^{l_{j1}}\partial r^{l_{j2}} }x(t-r,r,\xi)|^{k_{j2}}\\ \!\!\!&\leq&\!\!\!\!\! \sum_{1\leq |\boldsymbol{\omega}|\leq |\boldsymbol{\nu}|}M_{|\boldsymbol{\omega}|} \sum_{s=1}^{|\boldsymbol{\nu}|}\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})}(\boldsymbol{\nu}!) 
\prod_{j=1}^{s}\frac{1}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} \left(\varepsilon_{|\boldsymbol{l_j}|}\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_j}|}\right)^{k_{j1}} \!\!\!\left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}}\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_j}|}\right)^{k_{j2}}, \end{eqnarray*} which, together with $\sum_{j=1}^{s} |\boldsymbol{k_j}|\boldsymbol{l_j}=\boldsymbol{\nu}$, yields result (iii). Therefore, the proof is complete. \end{proof} \begin{lm}\label{lm-GB5-2} Let $x\in \mathcal{E}_{\gamma,\lambda}$ and $f$ satisfy the conditions in {\bf (H2)}. Then for any $(t,r_1,\xi)$ and $(t,r_2,\xi)$ in $\mathbb{R}\times\Omega\times V_{\gamma}$ with $r_1\neq r_2$, \begin{eqnarray*} &&\!\!\!\!\!\!\!\!\! |\frac{\partial^{k}}{\partial r^{k}} f(x(t,r_1,\xi),x(t-r_1,r_1,\xi)) -\!\frac{\partial^{k}}{\partial r^{k}} f(x(t,r_2,\xi),x(t-r_2,r_2,\xi))| \leq T_{(0,k+1)}\!\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{k+1}\!\!|r_1-r_2|, \end{eqnarray*} where \begin{eqnarray*} T_{(0,k+1)}\!\!\!&=&\!\!\! (k!)\!\sum_{1\leq|\boldsymbol{\omega}|\leq k} M_{|\boldsymbol{\omega}|+1}\varepsilon_{1}S_{(0,1)} \sum_{s=1}^{k}\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})} \prod_{j=1}^{s} \frac{\left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}}\right)^{|\boldsymbol{k_j}|}} {(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} +(k!)\!\sum_{1\leq|\boldsymbol{\omega}|\leq k}M_{|\boldsymbol{\omega}|} \sum_{s=1}^{k}\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})} \mathcal{J}_s,\\ \mathcal{K}_{j}\!\!\!&=&\!\!\! \sum_{i=1}^{2n}\!k_{j,i} \varepsilon_{|\boldsymbol{l_j}|+1}S_{(l_{j1},l_{j2}+1)} \left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}}\right)^{k_{j,i}-1} \prod_{m=0}^{i-1} \left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}}\right)^{k_{j,m}} \prod_{m=i+1}^{2n+1}\left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}}\right)^{k_{j,m}},\\ \mathcal{J}_{s}\!\!\!&=&\!\!\!
\sum_{j=1}^{s}\frac{\mathcal{K}_{j}} {(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} \prod_{i=0}^{j-1} \frac{\left(\varepsilon_{|\boldsymbol{l_i}|}S_{\boldsymbol{l_i}}\right)^{|\boldsymbol{k_i}|}} {(\boldsymbol{k_i}!)(\boldsymbol{l_i}!)^{|\boldsymbol{k_i}|}} \prod_{i=j+1}^{s+1} \frac{\left(\varepsilon_{|\boldsymbol{l_i}|}S_{\boldsymbol{l_i}}\right)^{|\boldsymbol{k_i}|}} {(\boldsymbol{k_i}!)(\boldsymbol{l_i}!)^{|\boldsymbol{k_i}|}}, \end{eqnarray*} $\boldsymbol{\omega}=(\omega_1,...,\omega_{2n})$, $\boldsymbol{\nu}=(0,k)$, $\boldsymbol{k_j}=(k_{j,1},...,k_{j,2n})$, $\boldsymbol{l_j}=(l_{j,1}, l_{j,2})$, $\boldsymbol{k_0}=\boldsymbol{k_{s+1}}=(0,...,0)$ and $k_{j,0}=k_{j,2n+1}=0$. \end{lm} \begin{proof} Let $x(t, r, \xi)=(x_1(t, r, \xi),...,x_n(t, r, \xi))^{T}\in \mathbb{R}^{n}$ and ${\bf g}(t,r,\xi)=(g_1(t,r,\xi),...,g_{2n}(t,r,\xi)) =(x_1(t,r,\xi),...,x_n(t,r,\xi),x_1(t-r,r,\xi),...,x_n(t-r,r,\xi))^{T}$. To simplify the notations, we also denote $f(r):=f(x(t,r,\xi),x(t-r,r,\xi))$ and ${\bf g}(r):={\bf g}(t,r,\xi)$. By Appendix C, we obtain \begin{equation}\label{k-lip} \begin{split} \lefteqn{ |\frac{\partial^{k}}{\partial r^{k}} f(x(t,r_1,\xi),x(t-r_1,r_1,\xi)) -\frac{\partial^{k}}{\partial r^{k}} f(x(t,r_2,\xi),x(t-r_2,r_2,\xi))|} \\ \leq&\, \sum_{1\leq|\boldsymbol{\omega}|\leq k}\!\! |f_{\boldsymbol{\omega}}(r_1)-f_{\boldsymbol{\omega}}(r_2)| \sum_{s=1}^{k}\!\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})} (k!)\prod_{j=1}^{s} \frac{|({\bf g}_{\boldsymbol{l_j}}(r_1))^{\boldsymbol{k_j}}|}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} +\!\!\!\sum_{1\leq|\boldsymbol{\omega}|\leq k}\!\! |f_{\boldsymbol{\omega}}(r_2)| \sum_{s=1}^{k}\!\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})}(k!) \Delta_{s}, \end{split} \end{equation} where \begin{eqnarray*} \Delta_{s}\!\!\!&:=&\!\!\! 
\sum_{j=1}^{s}\left(\frac{\Theta_{j}} {(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}}\right) \prod_{i=0}^{j-1} \frac{|({\bf g}_{\boldsymbol{l_i}}(r_2))^{\boldsymbol{k_i}}|}{(\boldsymbol{k_i}!)(\boldsymbol{l_i}!)^{|\boldsymbol{k_i}|}} \prod_{i=j+1}^{s+1} \frac{|({\bf g}_{\boldsymbol{l_i}}(r_1))^{\boldsymbol{k_i}}|}{(\boldsymbol{k_i}!)(\boldsymbol{l_i}!)^{|\boldsymbol{k_i}|}},\\ \Theta_{j}\!\!\!&:=&\!\!\! \sum_{i=1}^{2n}\Theta_{i,j} \prod_{m=0}^{i-1} |g_{\boldsymbol{l_j}}^{(m)}(r_2)|^{k_{j,m}} \prod_{m=i+1}^{2n+1}|g_{\boldsymbol{l_j}}^{(m)}(r_1)|^{k_{j,m}},\\ \Theta_{i,j}\!\!\!&:=&\!\!\! |g^{(i)}_{\boldsymbol{l_j}}(r_1)-g^{(i)}_{\boldsymbol{l_j}}(r_2)| \sum_{m=0}^{k_{j,i}-1}|g^{(i)}_{\boldsymbol{l_j}}(r_1)|^{m}\,|g^{(i)}_{\boldsymbol{l_j}}(r_2)|^{k_{j,i}-m-1}, \end{eqnarray*} $\boldsymbol{k_0}=\boldsymbol{k_{s+1}}=(0,...,0)$ and $k_{j,0}=k_{j,2n+1}=0$. We first note that for any $1\leq i\leq n$, \begin{eqnarray*} \Theta_{i,j} \!\!\!&\leq&\!\!\! \varepsilon_{|\boldsymbol{l_j}|+1} \left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_j}|+1}\!\!|r_1-r_2| \sum_{m=0}^{k_{j,i}-1} \left(\varepsilon_{|\boldsymbol{l_j}|} \left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_j}|}\right)^{k_{j,i}-1}\nonumber\\ \!\!\!&=&\!\!\! k_{j,i}\, \varepsilon_{|\boldsymbol{l_j}|+1} \varepsilon_{|\boldsymbol{l_j}|}^{k_{j,i}-1} \left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{k_{j,i}|\boldsymbol{l_j}|+1}|r_1-r_2|. \end{eqnarray*} Similarly, we have $\Theta_{i,j}\leq k_{j,i} \varepsilon_{|\boldsymbol{l_j}|+1}S_{(l_{j1},l_{j2}+1)} \left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}}\right)^{k_{j,i}-1} \left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{k_{j,i}|\boldsymbol{l_j}|+1}|r_1-r_2|$ for $n+1\leq i \leq 2n$. In view of $1\leq S_{(l_{j1},l_{j2}+1)}$ and $1\leq S_{\boldsymbol{l_j}}$, applying Lemma \ref{lm-GB5-1} (i), we obtain \begin{eqnarray*} \Theta_{j} \!\!\!&\leq&\!\!\! 
\sum_{i=1}^{2n}\!k_{j,i} \varepsilon_{|\boldsymbol{l_j}|+1}S_{(l_{j1},l_{j2}+1)} \left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}}\right)^{k_{j,i}-1} \!\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{k_{j,i}|\boldsymbol{l_j}|+1} \!\prod_{m=0}^{i-1} \left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}}\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_j}|}\right)^{k_{j,m}} \nonumber\\ \!\!\!& &\!\!\! \times \prod_{m=i+1}^{2n+1}\left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}} \left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_j}|}\right)^{k_{j,m}} |r_1-r_2| \nonumber\\ \!\!\!&=&\!\!\! \mathcal{K}_{j}\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_j}||\boldsymbol{k_j}|+1}|r_1-r_2|.\nonumber\\ \end{eqnarray*} Applying Lemma \ref{lm-GB5-1} (i) again, we have \begin{eqnarray}\label{Delta-est} \begin{split} \Delta_{s} \leq&\, \sum_{j=1}^{s}\left(\frac{\mathcal{K}_{j}\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_j}||\boldsymbol{k_j}|+1}|r_1-r_2|} {(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}}\right) \prod_{i=0}^{j-1} \frac{\left(\varepsilon_{|\boldsymbol{l_i}|}S_{\boldsymbol{l_i}} \left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_i}|}\right)^{|\boldsymbol{k_i}|}} {(\boldsymbol{k_i}!)(\boldsymbol{l_i}!)^{|\boldsymbol{k_i}|}}\\ &\, \times \prod_{i=j+1}^{s+1} \frac{\left(\varepsilon_{|\boldsymbol{l_{i}}|}S_{\boldsymbol{l_i}} \left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_i}|}\right)^{|\boldsymbol{k_i}|}} {(\boldsymbol{k_i}!)(\boldsymbol{l_i}!)^{|\boldsymbol{k_i}|}}= \mathcal{J}_s\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{k+1}|r_1-r_2|. \end{split} \end{eqnarray} Finally, combining (\ref{k-lip}), (\ref{Delta-est}) and Lemma \ref{lm-GB5-1}, we get that \begin{eqnarray*} \lefteqn{ |\frac{\partial^{k}}{\partial r^{k}} f(x(t,r_1,\xi),x(t-r_1,r_1,\xi)) -\frac{\partial^{k}}{\partial r^{k}} f(x(t,r_2,\xi),x(t-r_2,r_2,\xi))|}\\ \!\!\!&\leq&\!\!\!\!\!\!\! 
\sum_{1\leq|\boldsymbol{\omega}|\leq k} M_{|\boldsymbol{\omega}|+1} \max\{|x(t,r_1,\xi)-x(t,r_2,\xi)|,|x(t-r_1,r_1,\xi)-x(t-r_2,r_2,\xi)|\} \sum_{s=1}^{k}\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})}(k!)\\ \!\!\!&&\!\!\!\!\!\!\! \times \prod_{j=1}^{s} \frac{\left(\varepsilon_{|\boldsymbol{l_j}|}S_{\boldsymbol{l_j}}\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{|\boldsymbol{l_j}|}\right)^{|\boldsymbol{k_j}|}} {(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} +\sum_{1\leq|\boldsymbol{\omega}|\leq k}M_{|\boldsymbol{\omega}|} \sum_{s=1}^{k}\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})} (k!) \mathcal{J}_s\left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{k+1}|r_1-r_2|\\ \!\!\!&\leq&\!\!\!\! T_{(0,k+1)} \left(e^{\lambda|t|}|\xi|^{\gamma}\right)^{k+1}|r_1-r_2|. \end{eqnarray*} Therefore, the proof is complete. \end{proof} \begin{lm}\label{lm-GB5-3} Assume that {\bf (H2)} holds. Then for any $x\in \mathcal{E}_{\gamma,\lambda}$ and any $(t,r,\xi)$, $(t_1,r_1,\xi)$ and $(t_2,r_2,\xi)$ in $\mathbb{R}\times\Omega\times V_{\gamma}$ with $(t_1,r_1)\neq (t_2,r_2)$, the following assertions hold: \begin{itemize} \item[(i)] $|\mathcal{F}(x)(t,r,\xi)| \leq \left(d+\varepsilon_0(Mx^{*}e^{x^{*}}+M_1\delta e^{x^{*}})/x^{*}+|f(0,0)|\delta /(ex^{*})\right)e^{\lambda|t|}|\xi|^{\gamma}.$ \item[(ii)] $|\frac{\partial}{\partial t}\mathcal{F}(x)(t,r,\xi)|\leq \left(Me^{x^{*}}\varepsilon_1+M_1e^{x^{*}}\varepsilon_0+|f(0,0)|\right)e^{\lambda|t|}|\xi|^{\gamma}$. \item[(iii)] For $\boldsymbol{\nu}=(\nu_1,\nu_2)$ with $\nu_1=0$ and $\nu_2=|\boldsymbol{\nu}|\leq k$, we have \begin{eqnarray*} |\frac{\partial^{|\boldsymbol{\nu}|}}{\partial r^{|\boldsymbol{\nu}|}}\mathcal{F}(x)(t,r,\xi)| \leq \left(MS_{(0,|\boldsymbol{\nu}|)}\varepsilon_{|\boldsymbol{\nu}|}+T_{(0,|\boldsymbol{\nu}|)}\delta /(|\boldsymbol{\nu}|x^{*})\right) (e^{\lambda|t|}|\xi|^{\gamma})^{|\boldsymbol{\nu}|}.
\end{eqnarray*} \item[(iv)] For $\boldsymbol{\nu}=(\nu_1,\nu_2)$ with $1\leq\nu_i\leq |\boldsymbol{\nu}|-1$ and $2\leq|\boldsymbol{\nu}|\leq k$, we have \begin{eqnarray*} |\frac{\partial^{|\boldsymbol{\nu}|}}{\partial t^{\nu_1}\partial r^{\nu_2}}\mathcal{F}(x)(t,r,\xi)| \leq \left(MS_{(\nu_1,\nu_2)}\varepsilon_{|\boldsymbol{\nu}|}+T_{(\nu_1-1,\nu_2)}\right) (e^{\lambda|t|}|\xi|^{\gamma})^{|\boldsymbol{\nu}|}. \end{eqnarray*} \item[(v)] For $\boldsymbol{\nu}=(\nu_1,\nu_2)$ with $\nu_1=0$ and $\nu_2=k$, we have \begin{eqnarray*} \lefteqn{|\frac{\partial^{k}}{\partial r^{k}}\mathcal{F}(x)(t_1,r_1,\xi) -\frac{\partial^{k}}{\partial r^{k}}\mathcal{F}(x)(t_2,r_2,\xi)|}\\ \!\!\!&&\!\!\!\!\!\!\!\leq \left(MS_{(0,k+1)}\varepsilon_{k+1}+T_{(0,k+1)}\delta /((k+1)x^{*})+T_{(0,k)}\right) \left(e^{\lambda|t_*|}|\xi|^{\gamma}\right)^{k+1}|(t_1,r_1)-(t_2,r_2)|. \end{eqnarray*} where $|t_*|=\max\{|t_1|,|t_2|\}$. \item[(vi)] For $\boldsymbol{\nu}=(\nu_1,\nu_2)$ with $1\leq \nu_1\leq k$ and $\nu_1+\nu_2=k$, we have \begin{eqnarray*} \lefteqn{|\frac{\partial^{k}}{\partial t^{\nu_1}\partial r^{\nu_2}}\mathcal{F}(x)(t_1,r_1,\xi) -\frac{\partial^{k}}{\partial t^{\nu_1}\partial r^{\nu_2}}\mathcal{F}(x)(t_2,r_2,\xi)|}\\ \!\!\!&&\!\!\!\!\!\!\!\leq \left(MS_{(\nu_1,\nu_2+1)}\varepsilon_{k+1}+T_{(\nu_1,\nu_2)}+T_{(\nu_1-1,\nu_2+1)}\right) \left(e^{\lambda|t_*|}|\xi|^{\gamma}\right)^{k+1}|(t_1,r_1)-(t_2,r_2)|. \end{eqnarray*} where $|t_*|=\max\{|t_1|,|t_2|\}$. \end{itemize} \end{lm} \begin{proof} For any $x\in \mathcal{E}_{\gamma,\lambda}$ and $(t,r,\xi) \in \mathbb{R}\times\Omega\times V_{\gamma}$, we have \begin{equation} \begin{split} \!|\mathcal{F}(x)(t,r,\xi)| \leq&\, |\xi|\!+\!|Ax(t-r,r,\xi)|\!+\!|\!\int_0^{t}\!|f(x(s,r,\xi),x(s-r,r,\xi))\!-\!f(0,0)|ds|\! 
\\ &+\!|\!\int_0^{t}\!|f(0,0)|ds|\\ \leq&\, |\xi|+M\varepsilon_{0}e^{\lambda(r+|t|)}|\xi|^{\gamma} +|\int_0^{t}M_1 \varepsilon_{0}e^{\lambda(r+|s|)}|\xi|^{\gamma} ds|+|f(0,0)t|\\ \leq&\, |\xi|+M\varepsilon_{0}e^{\lambda(r+|t|)}|\xi|^{\gamma} +M_1 \varepsilon_{0}e^{\lambda(r+|t|)}|\xi|^{\gamma}/\lambda +|f(0,0)|e^{\lambda|t|}/(\lambda e)\label{Fx-est}, \end{split} \end{equation} where the last inequality follows from the fact that $\max_{t\in \mathbb{R}}|t|e^{-\lambda|t|}=1/(\lambda e)$. When $\gamma=0$, by (\ref{Fx-est}) we obtain \begin{eqnarray*} |\mathcal{F}(x)(t,r,\xi)|e^{-\lambda|t|} \leq d+(Me^{r\lambda}+M_1e^{r\lambda}/\lambda) \varepsilon_{0}+|f(0,0)|/(\lambda e), \end{eqnarray*} which, together with $0<r<\delta$ and $\delta \lambda=x^{*}$, yields that result (i) holds in the case $\gamma=0$. When $\gamma=1$, by (\ref{Fx-est}) we have \begin{eqnarray*} |\mathcal{F}(x)(t,r,\xi)|(e^{\lambda|t|}|\xi|)^{-1} \leq 1+(Me^{r\lambda}+M_1e^{r\lambda}/\lambda) \varepsilon_{0}+|f(0,0)|/(\lambda e|\xi|). \end{eqnarray*} Note that $0<r<\delta$, $\delta \lambda=x^{*}$ and $|\xi|>d>1$ for $\gamma=1$. This implies that result (i) also holds in the case $\gamma=1$. Since $x\in \mathcal{E}_{\gamma,\lambda}$ and $f$ satisfies {\bf (H2)}, we obtain \begin{eqnarray*} |\frac{\partial}{\partial t}\mathcal{F}(x)(t,r,\xi)| \!\!\!&=&\!\!\!|A\frac{\partial}{\partial t}x(t-r,r,\xi)+f(x(t,r,\xi),x(t-r,r,\xi))|\\ \!\!\!&\leq&\!\!\!M\varepsilon_1e^{\lambda(r+|t|)}|\xi|^{\gamma}+|f(x(t,r,\xi),x(t-r,r,\xi))-f(0,0)|+|f(0,0)|\\ \!\!\!&\leq&\!\!\!M\varepsilon_1e^{\lambda(r+|t|)}|\xi|^{\gamma}+M_1\varepsilon_0e^{\lambda(r+|t|)}|\xi|^{\gamma}+|f(0,0)|\\ \!\!\!&\leq&\!\!\!\left(Me^{r\lambda}\varepsilon_1+M_1e^{r\lambda}\varepsilon_0+|f(0,0)|\right)e^{\lambda|t|}|\xi|^{\gamma}, \end{eqnarray*} where the last inequality follows from the fact that $|\xi|>d>1$ in the case $\gamma=1$. Then by the conditions $0<r<\delta$ and $\delta \lambda=x^{*}$, result (ii) is established.
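As an aside, the elementary fact $\max_{t\in \mathbb{R}}|t|e^{-\lambda|t|}=1/(\lambda e)$ used in the estimates above follows by one differentiation (by symmetry it suffices to maximize over $t\geq 0$):

```latex
\frac{d}{dt}\left(t\,e^{-\lambda t}\right)=(1-\lambda t)\,e^{-\lambda t}=0
\quad\Longleftrightarrow\quad t=\frac{1}{\lambda},
\qquad\text{so}\qquad
\max_{t\in\mathbb{R}}|t|\,e^{-\lambda|t|}
=\frac{1}{\lambda}\,e^{-1}=\frac{1}{\lambda e}.
```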
For $\boldsymbol{\nu}=(\nu_1,\nu_2)$ with $\nu_1=0$ and $\nu_2=|\boldsymbol{\nu}|\leq k$, using the Leibniz rule and Lemma \ref{lm-GB5-1}, we have \begin{eqnarray*} |\frac{\partial^{|\boldsymbol{\nu}|}}{\partial r^{|\boldsymbol{\nu}|}}\mathcal{F}(x)(t,r,\xi)| \!\!\!&=&\!\!\!|A\frac{\partial^{|\boldsymbol{\nu}|}}{\partial r^{|\boldsymbol{\nu}|}}x(t-r,r,\xi) +\int_{0}^{t}\frac{\partial^{|\boldsymbol{\nu}|}}{\partial r^{|\boldsymbol{\nu}|}}f(x(s,r,\xi),x(s-r,r,\xi))ds|\\ \!\!\!&\leq&\!\!\! MS_{(0,|\boldsymbol{\nu}|)}\varepsilon_{|\boldsymbol{\nu}|}(e^{\lambda|t|}|\xi|^{\gamma})^{|\boldsymbol{\nu}|} +|\int_{0}^{t}T_{(0,|\boldsymbol{\nu}|)}(e^{\lambda|s|}|\xi|^{\gamma})^{|\boldsymbol{\nu}|}ds|\\ \!\!\!&\leq&\!\!\! \left(MS_{(0,|\boldsymbol{\nu}|)}\varepsilon_{|\boldsymbol{\nu}|}+T_{(0,|\boldsymbol{\nu}|)}/(|\boldsymbol{\nu}|\lambda)\right) (e^{\lambda|t|}|\xi|^{\gamma})^{|\boldsymbol{\nu}|}. \end{eqnarray*} Combining this with $\delta \lambda=x^{*}$, result (iii) follows. For $\boldsymbol{\nu}=(\nu_1,\nu_2)$ with $1\leq\nu_i\leq |\boldsymbol{\nu}|-1$ and $2\leq|\boldsymbol{\nu}|\leq k$, using the Leibniz rule and Lemma \ref{lm-GB5-1} again, we have \begin{eqnarray*} |\frac{\partial^{|\boldsymbol{\nu}|}}{\partial t^{\nu_1}\partial r^{\nu_2}}\mathcal{F}(x)(t,r,\xi)| \!\!\!&=&\!\!\! |A\frac{\partial^{|\boldsymbol{\nu}|}}{\partial t^{\nu_1}\partial r^{\nu_2}}x(t-r,r,\xi) +\frac{\partial^{|\boldsymbol{\nu}|-1}}{\partial t^{\nu_1-1}\partial r^{\nu_2}}f(x(t,r,\xi),x(t-r,r,\xi))|\\ \!\!\!&\leq&\!\!\! MS_{(\nu_1,\nu_2)}\varepsilon_{|\boldsymbol{\nu}|}(e^{\lambda|t|}|\xi|^{\gamma})^{|\boldsymbol{\nu}|}+T_{(\nu_1-1,\nu_2)} (e^{\lambda|t|}|\xi|^{\gamma})^{|\boldsymbol{\nu}|}. \end{eqnarray*} So result (iv) holds.
For $\boldsymbol{\nu}=(\nu_1,\nu_2)$ with $\nu_1=0$ and $\nu_2=k$, applying Lemma \ref{lm-GB5-1} and Lemma \ref{lm-GB5-2}, we have \begin{eqnarray*} \lefteqn{|\frac{\partial^{k}}{\partial r^{k}}\mathcal{F}(x)(t_1,r_1,\xi) -\frac{\partial^{k}}{\partial r^{k}}\mathcal{F}(x)(t_2,r_2,\xi)|}\nonumber\\ \!\!\!\!&\leq &\!\!\!\! |A||\frac{\partial^{k}}{\partial r^{k}}x(t_1-r_1,r_1,\xi)-\frac{\partial^{k}}{\partial r^{k}}x(t_2-r_2,r_2,\xi)| +|\int^{t_2}_{t_1} \frac{\partial^{k}}{\partial r^{k}} f(x(s,r_2,\xi),x(s-r_2,r_2,\xi))ds|\\ \!\!\!\!&&\!\!\!\! +|\int^{t_1}_{0} \left( \frac{\partial^{k}}{\partial r^{k}} f(x(s,r_1,\xi),x(s-r_1,r_1,\xi)) - \frac{\partial^{k}}{\partial r^{k}} f(x(s,r_2,\xi),x(s-r_2,r_2,\xi))\right)ds|\\ \!\!\!\!&\leq &\!\!\!\! MS_{(0,k+1)}\varepsilon_{k+1}\left(e^{\lambda|t_*|}|\xi|^{\gamma}\right)^{k+1}\max\{|t_1-t_2|,|r_1-r_2|\} +T_{(0,k)}\left(e^{\lambda|t_*|}|\xi|^{\gamma}\right)^{k+1}|t_1-t_2| \\ \!\!\!\!& &\!\!\!\! +|\int^{t_1}_{0} T_{(0,k+1)}\left(e^{\lambda|s|}|\xi|^{\gamma}\right)^{k+1}|r_1-r_2|ds|\\ \!\!\!\!&\leq &\!\!\!\! \left(MS_{(0,k+1)}\varepsilon_{k+1}+T_{(0,k+1)}/((k+1)\lambda)+T_{(0,k)}\right) \left(e^{\lambda|t_*|}|\xi|^{\gamma}\right)^{k+1}\max\{|t_1-t_2|,|r_1-r_2|\}, \end{eqnarray*} which implies that result (v) is true. In the end, using Lemma \ref{lm-GB5-1}, we have for $\boldsymbol{\nu}=(\nu_1,\nu_2)$ with $1\leq \nu_1\leq k$ and $\nu_1+\nu_2=k$, \begin{eqnarray*} \lefteqn{|\frac{\partial^{k}}{\partial t^{\nu_1}\partial r^{\nu_2}}\mathcal{F}(x)(t_1,r_1,\xi) -\frac{\partial^{k}}{\partial t^{\nu_1}\partial r^{\nu_2}}\mathcal{F}(x)(t_2,r_2,\xi)|}\\ \!\!\!\!&\leq &\!\!\!\! |A||\frac{\partial^{k}}{\partial t^{\nu_1}\partial r^{\nu_2}}x(t_1-r_1,r_1,\xi) -\frac{\partial^{k}}{\partial t^{\nu_1}\partial r^{\nu_2}}x(t_2-r_2,r_2,\xi)|\\ \!\!\!\!& &\!\!\!\! 
+|\frac{\partial^{k-1}}{\partial t^{\nu_1-1}\partial r^{\nu_2}} f(x(t_1,r_1,\xi),x(t_1-r_1,r_1,\xi)) - \frac{\partial^{k-1}}{\partial t^{\nu_1-1}\partial r^{\nu_2}} f(x(t_2,r_1,\xi),x(t_2-r_1,r_1,\xi))|\\ \!\!\!\!& &\!\!\!\! +|\frac{\partial^{k-1}}{\partial t^{\nu_1-1}\partial r^{\nu_2}} f(x(t_2,r_1,\xi),x(t_2-r_1,r_1,\xi)) - \frac{\partial^{k-1}}{\partial t^{\nu_1-1}\partial r^{\nu_2}} f(x(t_2,r_2,\xi),x(t_2-r_2,r_2,\xi))|\\ \!\!\!\!&\leq&\!\!\!\! \left(MS_{(\nu_1,\nu_2+1)}\varepsilon_{k+1}+T_{(\nu_1,\nu_2)}+T_{(\nu_1-1,\nu_2+1)}\right) \left(e^{\lambda|t_*|}|\xi|^{\gamma}\right)^{k+1}\max\{|t_1-t_2|,|r_1-r_2|\}. \end{eqnarray*} Therefore, result (vi) holds. Then the proof of Lemma \ref{lm-GB5-3} is complete. \end{proof} For any constant $d_0>0$, we define $$\mathcal{E}^{\lambda}_{d_0}:=\{x\in C(\mathbb{R}\times\Omega\times V^{0}_{d_0}, \mathbb{R}^n): \sup_{(t,r,\xi)\in\mathbb{R}\times\Omega\times V^{0}_{d_0}} |x(t,r,\xi)|e^{-\lambda|t|}<+\infty\}$$ equipped with the norm $$|x|_{\mathcal{E}^{\lambda}_{d_0}}= \sup_{(t,r,\xi)\in\mathbb{R}\times\Omega\times V^{0}_{d_0}} |x(t,r,\xi)|e^{-\lambda|t|}.$$ Similarly to Lemma \ref{lm-4-complete}, we can prove $(\mathcal{E}^{\lambda}_{d_0}, |\cdot|_{\mathcal{E}^{\lambda}_{d_0}})$ is a Banach space. \begin{lm}\label{lm-GB5-4} Assume that {\bf (H2)} holds and $0<r<\delta\leq r_0$. Then $\mathcal{F}$ in (\ref{map-2}) has a unique fixed point in $\mathcal{E}^{\lambda}_{d_0}$. \end{lm} \begin{proof} Obviously, for each $x\in \mathcal{E}^{\lambda}_{d_0}$, $\mathcal{F}(x)$ is a continuous map from $\mathbb{R}\times\Omega\times V^{0}_{d_0}$ to $\mathbb{R}^n$. Using the same procedure as for Lemma \ref{lm-GB5-3} (i), we have \begin{eqnarray*} |\mathcal{F}(x)(t,r,\xi)| \leq \left(d_0+|x|_{\mathcal{E}^{\lambda}_{d_0}}(Mx^{*}e^{x^{*}}+M_1r_0 e^{x^{*}})/x^{*}+|f(0,0)|r_0 /(ex^{*})\right)e^{\lambda|t|}. \end{eqnarray*} Then $\mathcal{F}$ maps $\mathcal{E}^{\lambda}_{d_0}$ into $\mathcal{E}^{\lambda}_{d_0}$. 
Moreover, for any $x,y\in\mathcal{E}^{\lambda}_{d_0}$, we observe that \begin{eqnarray*} \lefteqn{|\mathcal{F}(x)(t,r,\xi)-\mathcal{F}(y)(t,r,\xi)|}\nonumber\\ \!\!\!&\leq&\!\!\!\!\! |A||x(t-r,r,\xi)-y(t-r,r,\xi)|+|\!\! \int_0^{t} \!\!|f(x(s,r,\xi),x(s-r,r,\xi))-f(y(s,r,\xi),y(s-r,r,\xi))|ds|\nonumber\\ \!\!\!&\leq&\!\!\!\!\! M|x-y|_{\mathcal{E}^{\lambda}_{d_0}}e^{\lambda (r+|t|)}+ |\int_0^{t} M_1 |x-y|_{\mathcal{E}^{\lambda}_{d_0}} e^{\lambda (r+|s|)} ds|\nonumber\\ \!\!\!&\leq&\!\!\!\!\! \left(Me^{r\lambda}+M_1e^{r\lambda}/\lambda\right)e^{\lambda |t|}|x-y|_{\mathcal{E}^{\lambda}_{d_0}}, \label{T-contr-lm6-5} \end{eqnarray*} which implies that $|\mathcal{F}(x)-\mathcal{F}(y)|_{\mathcal{E}^{\lambda}_{d_0}}\leq (Me^{x^{*}}+M_1r_0 e^{x^{*}}/x^{*})|x-y|_{\mathcal{E}^{\lambda}_{d_0}}$. Using (\ref{x-star-1}) yields that $\mathcal{F}$ is a contraction. By the Contraction Mapping Principle, $\mathcal{F}$ has a unique fixed point in $\mathcal{E}^{\lambda}_{d_0}$. Then the proof of Lemma \ref{lm-GB5-4} is complete. \end{proof} Next we prove Theorem \ref{thm-2}. \begin{proof}[Proof of Theorem \ref{thm-2}] We prove this theorem in three steps. \vskip 0.2cm \noindent{\bf Step (i).} We first prove that there exists a continuous map $\Psi_1$ satisfying Theorem \ref{thm-2} (i) and (iii). Let the constant $r_0$ satisfy Hypothesis {\bf (H2)}. By (\ref{x-star-1}), we see $(Mx^{*}e^{x^{*}}+M_1r_0e^{x^{*}})/x^{*}<1$. Then we can choose a sufficiently large $\varepsilon_0>0$ such that \begin{eqnarray}\label{esp-0-est} (1-(Mx^{*}e^{x^{*}}+M_1r_0 e^{x^{*}})/x^{*})\varepsilon_0\geq d+|f(0,0)|r_0 /(ex^{*}). \end{eqnarray} By (\ref{x-star-2}), we note that for $\boldsymbol{\nu}=(\nu_1,\nu_2)$ with $1\leq|\boldsymbol{\nu}|\leq k$, \begin{eqnarray*} 1-Me^{x^{*}}-MS_{(0,1)}>0,\ \ \ \ 1-M\sum_{\nu_1=0}^{|\boldsymbol{\nu}|}\frac{|\boldsymbol{\nu}|!S_{\boldsymbol{\nu}}}{(\nu_1!) 
(\nu_2!)}>0, \end{eqnarray*} and for $\boldsymbol{\nu}=(\nu_1,\nu_2)$ with $|\boldsymbol{\nu}|=k$, \begin{eqnarray*} 1-M\sum_{\nu_1=0}^{k}\frac{k!S_{(\nu_1,\nu_2+1)}}{(\nu_1!) (\nu_2!)}>0. \end{eqnarray*} Then we can choose the constants $\varepsilon_{j}$ and $\delta_{j}$ for $j=1,...,k+1$ in the following way: \begin{equation}\label{e-1-d-1} \begin{split} (1-Me^{x^{*}}-MS_{(0,1)})\varepsilon_1/2 &\geq M_1e^{x^{*}}\varepsilon_0+|f(0,0)|, \\ \delta_1 &:=(1-Me^{x^{*}}-MS_{(0,1)})\varepsilon_1x^{*}/(2T_{(0,1)}), \end{split} \end{equation} \begin{equation}\label{e-v-d-v} \begin{split} \left(1-M\sum_{\nu_1=0}^{|\boldsymbol{\nu}|}\frac{|\boldsymbol{\nu}|!S_{\boldsymbol{\nu}}}{(\nu_1!) (\nu_2!)}\right) \varepsilon_{|\boldsymbol{\nu}|} & \geq 2 \sum_{\nu_1=1}^{|\boldsymbol{\nu}|}\frac{|\boldsymbol{\nu}|!T_{(\nu_1-1,\nu_2)}}{(\nu_1!) (\nu_2!)}, \\ \delta_{|\boldsymbol{\nu}|} & :=\frac{\left(1-M\sum_{\nu_1=0}^{|\boldsymbol{\nu}|}\frac{|\boldsymbol{\nu}|!S_{\boldsymbol{\nu}}}{(\nu_1!) (\nu_2!)}\right) \varepsilon_{|\boldsymbol{\nu}|} |\boldsymbol{\nu}| x^{*}} {2T_{(0,|\boldsymbol{\nu}|)}},\, 2\leq |\boldsymbol{\nu}|\leq k, \end{split} \end{equation} \begin{equation}\label{d-k+1} \begin{split} \left(1-M\sum_{\nu_1=0}^{k}\frac{k!S_{(\nu_1,\nu_2+1)}}{(\nu_1!) (\nu_2!)}\right)\varepsilon_{k+1} &\geq 2 \sum_{\nu_1=1}^{k}\frac{k!\left(T_{(\nu_1,\nu_2)}+T_{(\nu_1-1,\nu_2+1)}\right)}{(\nu_1!) (\nu_2!)}+2T_{(0,k)},\\ \delta_{k+1}&:= \frac{\left(1-M\sum_{\nu_1=0}^{k}\frac{k!S_{(\nu_1,\nu_2+1)}}{(\nu_1!) (\nu_2!)}\right)\varepsilon_{k+1}(k+1) x^{*}}{2T_{(0,k+1)}}. \end{split} \end{equation} By Lemma \ref{lm-GB5-1}, we further observe that for each $\boldsymbol{\nu}$, $T_{\boldsymbol{\nu}}$ only depends on $\varepsilon_0$,...,$\varepsilon_{|\boldsymbol{\nu}|}$, which guarantees that all $\varepsilon_{j}$ and $\delta_{j}$ are well defined. Take $\delta:=\min\{r_0, \delta_1, \delta_2, \ldots, \delta_{k+1}\}$.
Clearly, $\delta$ is independent of $\xi\in \mathbb{R}^{n}$. Applying the above parameters, for each $\gamma \in \{0,1\}$ we define the set $\mathcal{E}_{\gamma,\lambda}$. According to Lemma \ref{lm-5-complete}, for each $\gamma \in \{0,1\}$, $(\mathcal{E}_{\gamma,\lambda}, \rho_{\gamma})$ is a complete metric space. Obviously, for each $x\in \mathcal{E}_{\gamma,\lambda}$, $\mathcal{F}(x)$ is a continuous map from $\mathbb{R}\times\Omega\times V_{\gamma}$ to $\mathbb{R}^n$ and $\mathcal{F}(x)(t,r,\xi)$ is $C^k$ in $(t,r)$ for each $\xi\in V_{\gamma}$. By (\ref{esp-0-est}) and Lemma \ref{lm-GB5-3} (i), we find that $|\mathcal{F}(x)|_{\mathcal{E}_{\gamma,\lambda}}\leq \varepsilon_0$. By (\ref{e-1-d-1})-(\ref{d-k+1}), Lemma \ref{derivt} and Lemma \ref{lm-GB5-3}, for $0<r<\delta$ we find that $|\mathcal{F}(x)|_{\mathcal{E}_{\gamma,\lambda},j}\leq \varepsilon_j$ for $j=1,...,k+1$. Then $\mathcal{F}$ maps $\mathcal{E}_{\gamma,\lambda}$ into itself. Moreover, similarly to Lemma \ref{lm-GB5-4}, one can prove that $\mathcal{F}$ is a contraction in $\mathcal{E}_{\gamma,\lambda}$. By the Contraction Mapping Principle, $\mathcal{F}$ has a unique fixed point in the complete metric space $(\mathcal{E}_{\gamma,\lambda}, \rho_{\gamma})$. We denote this fixed point by $\Psi_1$. Then $\Psi_1$ satisfies Theorem \ref{thm-2} (i) and (iii). \vskip 0.2cm \noindent{\bf Step (ii).} Secondly, we prove that there exists a continuous map $\Psi_2$ satisfying Theorem \ref{thm-2} (ii).
For any constant $d_0>0$, by a method similar to that of Theorem \ref{thm-1}, there exists a constant $\delta$ which is independent of $d_0$ such that the operator $\mathcal{F}$ has a unique fixed point $\Psi_2$, which is a continuous map from $\mathbb{R}\times\Omega\times V^{0}_{d_0}$ to $\mathbb{R}^n$, and for each fixed $(t,r)\in\mathbb{R}\times\Omega$, $\Psi_2(t,r,\cdot\,)$ is $C^{k,1}$ and there exists a sequence $\{\beta_{j}\}_{j=0}^{k+1}$ such that \begin{eqnarray*} \sup_{(t,r,\xi)\in\mathbb{R}\times\Omega\times V^{0}_{d_0}} |\Psi_2(t,r,\xi)|e^{-\lambda|t|} \!\!\! &\leq&\!\!\! \beta_0,\\ \sup_{(t,r,\xi)\in\mathbb{R}\times\Omega\times V^{0}_{d_0}} |D_2^j\Psi_2(t,r,\xi)|e^{-j\lambda|t|} \!\!\!&\leq&\!\!\! \beta_j, \ \ j=1,2,...,k,\nonumber\\ \sup_{\xi_1\neq\xi_2}\frac{|D_2^k \Psi_2(t,r,\xi_1)-D_2^k \Psi_2(t,r,\xi_2)|}{|\xi_1-\xi_2|}e^{-(k+1)\lambda|t|} \!\!\!&\leq&\!\!\! \beta_{k+1}, \end{eqnarray*} where the constant $\lambda$ satisfies the conditions stated in {\bf (H2)}. Without loss of generality, we assume that $\delta$ in this step is the same as the one in Step (i); otherwise, we take the smaller of the two. \vskip 0.2cm \noindent{\bf Step (iii).} Finally, we prove $\Psi_1=\Psi_2$ on $\mathbb{R}\times\Omega\times V^{0}_{d_0}$. By Step (i) we see that $\Psi_1\in \mathcal{E}_{\gamma,\lambda}$. Then by the property of $\mathcal{E}_{\gamma,\lambda}$, we have $\Psi_1\in \mathcal{E}^{\lambda}_{d_0}$. By Step (ii), we also have $\Psi_2\in \mathcal{E}^{\lambda}_{d_0}$. In view of Lemma \ref{lm-GB5-4}, we find $\Psi_1=\Psi_2$ on $\mathbb{R}\times\Omega\times V^{0}_{d_0}$. By the arbitrariness of $d_0$ and the fact that $\delta$ is independent of $d_0$, we have $\Psi_1=\Psi_2$ on $\mathbb{R}\times\Omega\times \mathbb{R}^{n}$. Therefore, the proof of Theorem \ref{thm-2} is complete.
\end{proof} \section{An illustrative example} In this section we apply the main results obtained in Theorem \ref{thm-2} to study the dynamics of the following neutral differential system \begin{eqnarray} \label{VDP-1} \begin{split} \frac{d}{dt}\left\{\tilde{x}_{1}(t)-c\tilde{x}_{1}(t-\tilde{r})\right\} &= \tilde{x}_{2}(t)-\tilde{\varepsilon}\left\{\frac{1}{2}\tilde{x}_{1}^{2}(t)+\frac{1}{3}\tilde{x}_{1}^{3}(t)\right\}, \\ \frac{d}{dt} \tilde{x}_{2}(t) &= b-\tilde{x}_{1}(t-\tilde{r}), \end{split} \end{eqnarray} where $(\tilde{x}_{1},\tilde{x}_{2})\in\mathbb{R}^{2}$, $b\in\mathbb{R}$, $1>c>0$, $\tilde{\varepsilon}>0$ and $\tilde{r}>0$. Letting $c=0$ and $\tilde{r}=0$, system (\ref{VDP-1}) becomes the well-known van der Pol (abbreviated as vdP) oscillator model. We therefore call system (\ref{VDP-1}) a vdP oscillator model of neutral type. Here we are interested in the case $1/\tilde{r}\gg \tilde{\varepsilon}^{3}\gg 1$, that is, the vdP oscillator model (\ref{VDP-1}) is in the relaxation regime and has a small time delay. By the rescaling $(\tilde{x}_{1},\tilde{x}_{2},t)\to (\tilde{x}_{1},\tilde{\varepsilon}\tilde{x}_{2},t/\tilde{\varepsilon})$, the vdP oscillator model (\ref{VDP-1}) is transformed into \begin{eqnarray} \label{VDP-2} \begin{split} \frac{d}{dt}\left\{x_{1}(t)-cx_{1}(t-r)\right\} &= x_{2}(t)-\left\{\frac{1}{2}x_{1}^{2}(t)+\frac{1}{3}x_{1}^{3}(t)\right\}:=f_{1}(x_{1}(t),x_{2}(t)), \\ \frac{d}{dt} x_{2}(t) &= \varepsilon(b- x_{1}(t-r)):=f_{2}(x_{1}(t-r)), \end{split} \end{eqnarray} where $x_{1}(t)=\tilde{x}_{1}(t/\tilde{\varepsilon})$, $x_{2}(t)=\tilde{x}_{2}(t/\tilde{\varepsilon})$, $\varepsilon=1/\tilde{\varepsilon}^2$, $r=\tilde{\varepsilon}\tilde{r}$ and $0<r\ll \varepsilon\ll 1$. When $r=0$, system (\ref{VDP-2}) reduces to the standard slow-fast system \begin{eqnarray} \label{VDP-3} \begin{split} (1-c)\frac{d}{dt}x_{1}(t) &= x_{2}(t)-\left\{\frac{1}{2}x_{1}^{2}(t)+\frac{1}{3}x_{1}^{3}(t)\right\}, \\ \frac{d}{dt} x_{2}(t) &= \varepsilon(b- x_{1}(t)).
\end{split} \end{eqnarray} In distinct parameter regions, complex oscillation phenomena occur near the following curve \begin{eqnarray*} \mathcal{W}_{1}:=\left\{(x_{1},x_{2})\in\mathbb{R}^{2}: x_{2}=\frac{1}{2}x_{1}^{2}+\frac{1}{3}x_{1}^{3}, \ -3/2\leq x_{1}\leq 1/2\right\}, \end{eqnarray*} which contains all non-hyperbolic points of the critical manifold for the slow-fast system (\ref{VDP-3}). In particular, under some suitable conditions the slow-fast system (\ref{VDP-3}) has hyperbolic periodic orbits, such as relaxation oscillations and canard cycles (see, for instance, \cite{Dumortieretal-Roussarie-96}). An interesting problem is to study the effect of the small delay $r$ on these periodic orbits near the curve $\mathcal{W}_{1}$. Without loss of generality, we consider the restriction of system (\ref{VDP-2}) on the set $$\mathcal{W}_{2}:=\left\{(x_{1},x_{2})\in\mathbb{R}^{2}: -2\leq x_{1}\leq 1, \ -1/2\leq x_{2}\leq 1/2\right\},$$ the interior of which includes the set $\mathcal{W}_{1}$. Since both $f_{1}$ and $f_{2}$ are polynomials, for a large positive integer $k$ there exist constants $M_{j}$, $j=1,...,k+1$, such that the restriction of $f=(f_{1},f_{2})^{T}$ on the set $\mathcal{W}_{2}$ satisfies {\bf (H2)}. By applying the cut-off technique, we can obtain a modified system which coincides with the original system (\ref{VDP-2}) on the set $\mathcal{W}_{2}$ and satisfies {\bf (H2)}. More specifically, let the cut-off function $\chi:[0,+\infty)\to [0,1]$ satisfy the following properties: {\rm (i')} $\chi\in C^{\infty}$; {\rm (ii')} $\chi(u)=1$ for $u\in[0,1]$ and $\chi(u)=0$ for $u\in[2,+\infty)$; {\rm (iii')} $\sup_{u \in [0,+\infty)} |\chi'(u)| \le 2$. We define the modified function $\tilde{f}$ of $f$ by $\tilde{f}(\phi(0),\phi(-r))=f(\chi(|\phi|/\kappa)\phi(0),\chi(|\phi|/\kappa)\phi(-r))$ for each $\phi\in\mathcal{C}$ and a certain positive constant $\kappa$.
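A cut-off function with properties (i')--(iii') can be realized with the classical $e^{-1/s}$ bump. The following Python sketch is one such construction (our own choice for illustration; the text does not fix a specific $\chi$), together with a finite-difference check that the derivative bound (iii') holds, the maximal slope $2$ being attained at $u=3/2$:

```python
import math

def psi(s: float) -> float:
    """C^infinity helper: exp(-1/s) for s > 0 and 0 otherwise."""
    return math.exp(-1.0 / s) if s > 0 else 0.0

def chi(u: float) -> float:
    """Smooth cut-off: chi = 1 on [0, 1] and chi = 0 on [2, +inf)."""
    a, b = psi(2.0 - u), psi(u - 1.0)
    return a / (a + b) if a + b > 0 else 0.0

# finite-difference check of (iii') on the transition interval [1, 2]
# (elsewhere chi is constant, so chi' = 0)
h = 1e-6
slopes = [abs(chi(1.0 + i / 1000 + h) - chi(1.0 + i / 1000 - h)) / (2 * h)
          for i in range(1001)]
assert max(slopes) <= 2.0 + 1e-6
```

With this $\chi$, the modified nonlinearity $\tilde{f}$ agrees with $f$ for $|\phi|\leq\kappa$ and freezes to the constant $f(0,0)$ for $|\phi|\geq 2\kappa$.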
Then by Theorem \ref{thm-2}, the modified system has a two-dimensional inertial manifold which is $C^{k,1}$ in $r$. Considering the restriction of the modified system on this inertial manifold and letting $r=0$ in system (\ref{VDP-2}), we obtain that the zeroth-order approximation of the restricted system in the set $\mathcal{W}_{2}$ has the form (\ref{VDP-3}). The high-order approximations of the restricted system can be obtained by methods similar to those used in \cite{Chicone03,Chicone04} for retarded differential equations. Since the restricted system is a two-dimensional ordinary differential system, by the structural stability of hyperbolic periodic orbits we obtain that system (\ref{VDP-2}) with small delay $r$ also has periodic orbits near the hyperbolic relaxation oscillations and hyperbolic canard cycles arising in the slow-fast system (\ref{VDP-3}). As a result, Theorem \ref{thm-2} is useful for studying the dynamics of the vdP oscillator model (\ref{VDP-1}) with small delay. \section{Discussion} We hope that the method used here can be applied to establish the existence of smooth invariant manifolds, such as stable manifolds, unstable manifolds, center manifolds and so on, for other classes of evolutionary equations. In particular, to the best of our knowledge, the existence and the smoothness of inertial manifolds for neutral differential equations with arbitrary delays are not yet understood. We also point out that our results are not sharp. It is interesting to give a sharp smallness condition such that equations (\ref{NA-NDE}) and (\ref{A-NDE}) both have $C^{k,1}$ inertial manifolds. It may also be possible to impose a weaker condition on $M$ to obtain smooth inertial manifolds for equations (\ref{NA-NDE}) and (\ref{A-NDE}). All these conditions on $M$ in {\bf (H1)} and {\bf (H2)} are used to guarantee that the Contraction Mapping Principle is valid.
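Computationally, the Contraction Mapping Principle invoked throughout is just Picard iteration. A toy Python sketch of that scheme (the map `math.cos` is a stand-in scalar contraction for illustration only, unrelated to the operator $\mathcal{F}$ of this paper):

```python
import math

def fixed_point(F, x0, tol=1e-12, max_iter=1000):
    """Picard iteration x_{n+1} = F(x_n); converges when F is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# math.cos is a contraction on [0, 1] (Lipschitz constant sin(1) < 1),
# so the iteration converges to its unique fixed point there
x_star = fixed_point(math.cos, 1.0)
assert abs(math.cos(x_star) - x_star) < 1e-10
```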
It should be noted especially that, to guarantee the smoothness of the inertial manifold for equation (\ref{A-NDE}) with respect to the delay $r$, we require that the contraction condition on $M$ in {\bf (H2)} be sharper than the one in {\bf (H1)}. We hope that all these conditions on $M$ can be relaxed. It is also interesting to study whether, under suitable smallness conditions, the inertial manifold of equation (2.3) is analytic (resp. $C^{\infty}$) in the delay $r$ if the function $f$ is analytic (resp. $C^{\infty}$). However, for the retarded differential equation (2.3) (with $A=0$), our method can prove that equation (2.3) admits a $C^{\infty}$ inertial manifold in the delay $r$ if the function $f$ is $C^{\infty}$. Complex oscillations arising from the vdP oscillator model (\ref{VDP-2}) and the difference between the original system and its approximate systems should also be further studied. \subsection*{Appendix A}\label{sec-app} \renewcommand\theequation{A.1} Let $H(x):=x e^{-x}-Mx$, $x\in \mathbb{R}$. We consider the following equation \begin{eqnarray}\label{exp-poly-1} H(x)=M_1r, \ \ \ \ x\in \mathbb{R}, \end{eqnarray} where the constants $M>0$, $M_1>0$ and $r>0$ are defined in Hypothesis {\bf (H1)} (resp. {\bf (H2)}). Then equation (\ref{exp-poly-1}) has positive real roots if and only if $0<M<1$ and $0<M_1r\leq H(x_0)$, where $x_0\in(0,1)$ is the unique positive real zero of $H'$, the derivative of $H$ with respect to $x$. If $0<M<1$ and $0<M_1r< H(x_0)$, then there exist exactly two positive real roots of equation (\ref{exp-poly-1}), which are denoted by $x_1(r)$ and $x_2(r)$ with $0<x_1(r)<x_0<x_2(r)<-\ln M$, and $H(x)>M_1r$ for $x\in (x_1(r),x_2(r))$. \begin{proof} (Necessity) Suppose that equation (\ref{exp-poly-1}) has positive real roots; we first prove $0<M<1$. Otherwise, suppose that the constant $M$ satisfies $M\geq 1$.
Clearly, the first- and second-order derivatives of $H$ with respect to $x$ are $H'(x)=(1-x)e^{-x}-M$ and $H''(x)=(x-2)e^{-x}$ for $x\in \mathbb{R}$, respectively. Then, we see that $H'(0)=1-M>-M$, $H'(x)\leq -M$ for $x\geq 1$ and $H'$ decreases monotonically in the interval $[0,1]$. Thus, $\max_{x\geq0}H'(x)=H'(0)=1-M\leq 0$, which implies $H(x)\leq H(0)=0< M_1r$ for $x\geq 0$. Hence, equation (\ref{exp-poly-1}) has no positive real roots, which is a contradiction. Thus, the constant $M$ satisfies $0<M<1$. Note that for $0<M<1$, $H'(0)=1-M>0$, $H'(x)\leq -M<0$ for $x\geq 1$, and $H'$ decreases monotonically in the interval $[0,1]$. Then there is exactly one zero of $H'$ on $[0,+\infty)$, which is denoted by $x_0$ with $0<x_0<1$; moreover, $H'(x)\geq 0$ for $x\in [0, x_0]$ and $H'(x)\leq 0$ for $x\in [x_0, +\infty)$. Hence, $\max_{x\geq0}H(x)=H(x_0)>0$. Thus, if $H(x)=M_1 r$ has positive real roots, then it is necessary that $M_1 r\leq H(x_0)$. Therefore, the necessity is proved. (Sufficiency) If $0<M<1$, from the proof of the necessity, we see that $x_0$ is well defined. Note that for $0<M<1$, $H$ has a positive real zero $-\ln M$; since $H$ is continuous with $H(0)=H(-\ln M)=0$ and $\max_{x\geq 0}H(x)=H(x_0)$, the intermediate value theorem yields a positive root of equation (\ref{exp-poly-1}) whenever $0<M_1r\leq H(x_0)$. Thus the sufficiency is proved. Assume $0<M<1$. Then, by the statements above, we see that $H$ increases monotonically in $[0,x_0]$ and decreases monotonically in $[x_0,+\infty)$, $\max_{x\geq0} H(x)=H(x_0)>H(0)=0$, and $H$ has exactly two nonnegative real zeros, namely $0$ and $-\ln M$. Hence, for $0<M_1r< H(x_0)$, equation (\ref{exp-poly-1}) has exactly two positive real roots $x_1(r)$ and $x_2(r)$ with $0<x_1(r)<x_0<x_2(r)<-\ln M$, and $H(x)> M_1r$ for $x\in (x_1(r),x_2(r))$. Therefore, the proof is now complete. \end{proof} \subsection*{Appendix B.
Proof of (\ref{est-G-dev})} By Lemma \ref{partial-devt}, we obtain \renewcommand\theequation{B.1} \begin{equation} \begin{split} \label{prop-2-1} \lefteqn{D^k_2G(t,\xi_1)-D^k_2G(t,\xi_2)}\\ =&\, k!\sum_{m=1}^{k}D^m_2F(t,y_t)\sum_{p(k,m)}\prod_{i=1}^{k}\frac{(D^{i}_2y_t)^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}} -k!\sum_{m=1}^{k}D^m_2F(t,z_t)\sum_{p(k,m)}\prod_{i=1}^{k}\frac{(D^{i}_2z_t)^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\\ =&\, k!\sum_{m=1}^{k}(D^m_2F(t,y_t)-D^m_2F(t,z_t))\sum_{p(k,m)}\prod_{i=1}^{k}\frac{(D^{i}_2y_t)^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}} +k!\sum_{m=1}^{k}D^m_2F(t,z_t)\sum_{p(k,m)}\widetilde{I}_{2}^{k}, \end{split} \end{equation} where \renewcommand\theequation{B.2} \begin{equation} \begin{split} \label{prop-2-2} \widetilde{I}_{2}^{k} :=&\, \prod_{i=1}^{k} \frac{(D^{i}_2y_t)^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}} -\prod_{i=1}^{k} \frac{(D^{i}_2z_t)^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\\ =&\, \frac{\widetilde{Q}_{1}}{(\omega_1!)(1!)^{\omega_1}}\prod_{i=2}^{k} \frac{(D^{i}_2y_t)^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}} +\frac{(D^{1}_2z_t)^{\omega_1}}{(\omega_1!)(1!)^{\omega_1}}\frac{\widetilde{Q}_{2}}{(\omega_2!)(2!)^{\omega_2}} \prod_{i=3}^{k} \frac{(D^{i}_2y_t)^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\\ & +\cdot\cdot\cdot+\prod_{i=1}^{k-1} \frac{(D^{i}_2z_t)^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\frac{\widetilde{Q}_{k}}{(\omega_k!)(k!)^{\omega_k}}\\ =&\, \sum_{j=1}^{k} \left(\prod_{i=0}^{j-1} \frac{(D^{i}_2z_t)^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\right) \left(\frac{\widetilde{Q}_{j}}{(\omega_j!)(j!)^{\omega_j}}\right) \left(\prod_{i=j+1}^{k+1} \frac{(D^{i}_2y_t)^{\omega_i}}{(\omega_i!)(i!)^{\omega_i}}\right),\\ \end{split} \end{equation} \renewcommand\theequation{B.3} \begin{equation} \begin{split} \label{prop-2-3} \widetilde{Q}_{j} :=&\, (D^{j}_2y_t)^{\omega_j}-(D^{j}_2z_t)^{\omega_j}\\ =&\, (D^{j}_2y_t-D^{j}_2z_t)(D^{j}_2y_t)^{\omega_j-1}+D^{j}_2z_t(D^{j}_2y_t-D^{j}_2z_t)(D^{j}_2y_t)^{\omega_j-2}\\ 
&\,+\cdot\cdot\cdot+(D^{j}_2z_t)^{\omega_j-1}(D^{j}_2y_t-D^{j}_2z_t). \end{split} \end{equation} Along with (\ref{prop-2-1})-(\ref{prop-2-3}), it is easy to verify that (\ref{est-G-dev}) holds. \subsection*{Appendix C. Proof of (\ref{k-lip})} By Lemma \ref{partial-devt}, we have \renewcommand\theequation{C.1} \begin{equation}\label{prop-3-1} \begin{split} & \lefteqn{ \frac{\partial^{k}}{\partial r^{k}} f(x(t,r_1,\xi),x(t-r_1,r_1,\xi)) -\frac{\partial^{k}}{\partial r^{k}} f(x(t,r_2,\xi),x(t-r_2,r_2,\xi))}\\ =&\, \sum_{1\leq|\boldsymbol{\omega}|\leq k} f_{\boldsymbol{\omega}}(r_1) \sum_{s=1}^{k}\!\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})} (k!)\prod_{j=1}^{s} \frac{({\bf g}_{\boldsymbol{l_j}}(r_1))^{\boldsymbol{k_j}}}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}}\\ &\, -\sum_{1\leq|\boldsymbol{\omega}|\leq k} f_{\boldsymbol{\omega}}(r_2) \sum_{s=1}^{k}\!\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})} (k!)\prod_{j=1}^{s} \frac{({\bf g}_{\boldsymbol{l_j}}(r_2))^{\boldsymbol{k_j}}}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}}\\ =&\, \sum_{1\leq|\boldsymbol{\omega}|\leq k}\!\! (f_{\boldsymbol{\omega}}(r_1)-f_{\boldsymbol{\omega}}(r_2)) \sum_{s=1}^{k}\!\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})} (k!)\prod_{j=1}^{s} \frac{({\bf g}_{\boldsymbol{l_j}}(r_1))^{\boldsymbol{k_j}}}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}}\\ &\,+\sum_{1\leq|\boldsymbol{\omega}|\leq k}\!\! f_{\boldsymbol{\omega}}(r_2) \sum_{s=1}^{k}\!\sum_{p_s(\boldsymbol{\nu},\boldsymbol{\omega})}(k!) 
\widetilde{\Delta}_{s}, \end{split} \end{equation} where \renewcommand\theequation{C.2} \begin{equation} \begin{split} \label{prop-3-2} \widetilde{\Delta}_{s}: =&\, \prod_{j=1}^{s}\frac{({\bf g}_{\boldsymbol{l_j}}(r_1))^{\boldsymbol{k_j}}}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} -\prod_{j=1}^{s}\frac{({\bf g}_{\boldsymbol{l_j}}(r_2))^{\boldsymbol{k_j}}}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}}\\ =&\, \frac{\widetilde{\Theta}_{1}}{(\boldsymbol{k_1}!)(\boldsymbol{l_1}!)^{|\boldsymbol{k_1}|}} \prod_{j=2}^{s}\frac{({\bf g}_{\boldsymbol{l_j}}(r_1))^{\boldsymbol{k_j}}}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} +\frac{({\bf g}_{\boldsymbol{l_1}}(r_2))^{\boldsymbol{k_1}}}{(\boldsymbol{k_1}!)(\boldsymbol{l_1}!)^{|\boldsymbol{k_1}|}} \frac{\widetilde{\Theta}_{2}}{(\boldsymbol{k_2}!)(\boldsymbol{l_2}!)^{|\boldsymbol{k_2}|}} \prod_{j=3}^{s}\frac{({\bf g}_{\boldsymbol{l_j}}(r_1))^{\boldsymbol{k_j}}}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} \\ &\, +\cdots+ \prod_{j=1}^{s-1}\frac{({\bf g}_{\boldsymbol{l_j}}(r_2))^{\boldsymbol{k_j}}}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}} \frac{\widetilde{\Theta}_{s}}{(\boldsymbol{k_s}!)(\boldsymbol{l_s}!)^{|\boldsymbol{k_s}|}} \\ =&\, \sum_{j=1}^{s} \left(\prod_{i=1}^{j-1} \frac{({\bf g}_{\boldsymbol{l_i}}(r_2))^{\boldsymbol{k_i}}}{(\boldsymbol{k_i}!)(\boldsymbol{l_i}!)^{|\boldsymbol{k_i}|}}\right) \left(\frac{\widetilde{\Theta}_{j}}{(\boldsymbol{k_j}!)(\boldsymbol{l_j}!)^{|\boldsymbol{k_j}|}}\right) \left(\prod_{i=j+1}^{s} \frac{({\bf g}_{\boldsymbol{l_i}}(r_1))^{\boldsymbol{k_i}}}{(\boldsymbol{k_i}!)(\boldsymbol{l_i}!)^{|\boldsymbol{k_i}|}} \right), \end{split} \end{equation} \renewcommand\theequation{C.3} \begin{equation} \begin{split} \label{prop-3-3} \widetilde{\Theta}_{j}:=& ({\bf g}_{\boldsymbol{l_j}}(r_1))^{\boldsymbol{k_j}}-({\bf g}_{\boldsymbol{l_j}}(r_2))^{\boldsymbol{k_j}} \\ =&\, 
\widetilde{\Theta}_{1,j}\prod_{m=2}^{2n}(g_{\boldsymbol{l_j}}^{(m)}(r_1))^{k_{j,m}} +(g_{\boldsymbol{l_j}}^{(1)}(r_2))^{k_{j,1}}\widetilde{\Theta}_{2,j}\prod_{m=3}^{2n}(g_{\boldsymbol{l_j}}^{(m)}(r_1))^{k_{j,m}}\\ &\, +\cdots+ \prod_{m=1}^{2n-1}(g_{\boldsymbol{l_j}}^{(m)}(r_2))^{k_{j,m}}\widetilde{\Theta}_{2n,j} \end{split} \end{equation} \begin{equation*} \begin{split} =&\, \sum_{i=1}^{2n} \left(\prod_{m=1}^{i-1} (g_{\boldsymbol{l_j}}^{(m)}(r_2))^{k_{j,m}}\right) \widetilde{\Theta}_{i,j} \left(\prod_{m=i+1}^{2n}(g_{\boldsymbol{l_j}}^{(m)}(r_1))^{k_{j,m}}\right), \end{split} \end{equation*} \renewcommand\theequation{C.4} \begin{equation} \label{prop-3-4} \begin{split} \widetilde{\Theta}_{i,j}: =&\, (g^{(i)}_{\boldsymbol{l_j}}(r_1))^{k_{j,i}}-(g^{(i)}_{\boldsymbol{l_j}}(r_2))^{k_{j,i}}\\ =&\,(g^{(i)}_{\boldsymbol{l_j}}(r_1)-g^{(i)}_{\boldsymbol{l_j}}(r_2)) \sum_{m=0}^{k_{j,i}-1}(g^{(i)}_{\boldsymbol{l_j}}(r_1))^{m}(g^{(i)}_{\boldsymbol{l_j}}(r_2))^{k_{j,i}-m-1}. \end{split} \end{equation} Substituting (\ref{prop-3-2})--(\ref{prop-3-4}) into (\ref{prop-3-1}), it is straightforward to verify that (\ref{k-lip}) holds. \bibliographystyle{plain} {\footnotesize
\section{FedAGM} \label{sec:method} This section first describes the FedAGM algorithm and discusses how it addresses the problem of client heterogeneity. We then show that FedAGM is well suited to real-world federated learning settings. \subsection{Acceleration of global momentum} To reduce the gap between the local and global objective functions, our main idea is to incorporate global gradient information into local updates by accelerating the current model with global momentum. The pseudo-code of our proposed method, FedAGM, is shown in~\cref{alg:proposed_method}. In each round $t \in \{0,1,\dots,T\}$, the central server computes the server update direction $\Delta^t=-(\theta^{t}-\theta^{t-1})$ and broadcasts the accelerated global model $\theta^t - \lambda \Delta^t$ to the currently active clients $S_{t}$. Starting from the accelerated model as an initial point, each participating client optimizes a local objective function, defined as the sum of its local empirical loss and a penalty term based on the local online model and the received server model: \begin{equation} {\underset{\theta_{i,k}^t}{\operatorname{argmin}}}~ L_i(\theta_{i,k}^t) = \alpha f_i(\theta_{i,k}^t ) + \frac{\beta}{2}\|\theta_{i,k}^t-(\theta^t-\lambda\Delta^t)\|^2, \label{eq:local_objective} \end{equation} where $\alpha$ and $\beta$ control the relative importance of the individual terms. After $K$ local updates, each client uploads its trained model $\theta_{i,K}^t$ to the server, and the server then constructs the next server model $\theta^{t+1}$ as shown in~\cref{alg:proposed_method}. 
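As a concrete illustration, the client-side optimization of the local objective above can be sketched as follows (Python; the names `local_gradient` and `local_update` and the toy quadratic loss are our illustrative assumptions, not the authors' code):

```python
import numpy as np

def local_gradient(theta, theta_anchor, grad_fi, alpha, beta):
    """Gradient of the FedAGM-style local objective
    L_i(theta) = alpha * f_i(theta) + (beta/2) * ||theta - theta_anchor||^2,
    where theta_anchor = theta^t - lambda * Delta^t is the accelerated server model."""
    return alpha * grad_fi(theta) + beta * (theta - theta_anchor)

def local_update(theta_anchor, grad_fi, alpha, beta, lr=0.1, num_steps=5):
    """K gradient-descent steps on L_i, starting from the accelerated model."""
    theta = theta_anchor.copy()
    for _ in range(num_steps):
        theta = theta - lr * local_gradient(theta, theta_anchor, grad_fi, alpha, beta)
    return theta

# Toy client loss f_i(theta) = 0.5 * ||theta - c||^2, so grad f_i(theta) = theta - c;
# for this quadratic, L_i is minimized at (alpha*c + beta*theta_anchor)/(alpha+beta).
theta_anchor = np.zeros(2)
c = np.array([1.0, 2.0])
theta_K = local_update(theta_anchor, lambda th: th - c,
                       alpha=1.0, beta=1.0, lr=0.1, num_steps=200)
```

The quadratic example makes the role of $\beta$ visible: the larger it is, the closer the local solution stays to the broadcast model.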
\setlength{\textfloatsep}{10pt} \begin{algorithm}[t] \caption{FedAvg} \label{alg:fedavg} \SetAlgoLined \KwIn{$\theta^0$, number of clients $N$, number of local iterations $K$, number of communication rounds $T$} \For{$\text{each round}~t = 0, \dots ,T$}{ Sample subset of clients $S_t \subseteq \{1, \dots, N\}$ Server sends $\theta^t$ to all clients $i \in S_t$ \For{$\text{each client}~ i \in S_t,~\textbf{in parallel}$}{ \For{$k = 0, \dots ,K-1$}{ {${\underset{\theta_{i,k}^t}{\operatorname{argmin}}}\ f_i(\theta_{i,k}^t )$} } Client sends $\theta_{i,K}^t$ back to the server} \textbf{In server:} \quad $\theta^{t+1}$ = $\frac{1}{|S_t|} \Sigma_{i \in S_t}\theta_{i,K}^{t}$ } \textbf{Return} ~ $\theta^{t+1}$ \end{algorithm} Since we define the server model update direction as $\Delta^{t+1}=-(\theta^{t+1}-\theta^t)$, the following Lemma~\ref{thm:delta_momentum} holds. \begin{lemma} \label{thm:delta_momentum} Let the averaged local update of the clients be $\nabla (\theta^t) =-\frac{1}{|S_t|}\underset{i \in S_t}{\Sigma}(\theta_{i,K}^{t}-(\theta^t-\lambda\Delta^t))$. Then $\Delta^t$ is the exponential moving average of $\nabla (\theta^t)$, i.e., \begin{equation*} \Delta^{t+1}=\tau\nabla(\theta^t)+\lambda\Delta^t. \end{equation*} \end{lemma} \begin{myproof} \begin{align*} \Delta^{t+1}&=-(\theta^{t+1}-\theta^{t}) \\ &= -\frac{\tau}{|S_t|} \Sigma_{i \in S_t}\theta_{i,K}^{t} - (1-\tau) (\theta^t-\lambda\Delta^t)+\theta^t \\ &= -\frac{\tau}{|S_t|} \Sigma_{i \in S_t}(\theta_{i,K}^{t} - (\theta^t-\lambda\Delta^t)) + \lambda\Delta^t \\&=\tau\nabla(\theta^t) + \lambda \Delta^t \end{align*} \end{myproof} Lemma~\ref{thm:delta_momentum} implies that $\Delta^t$ is an exponential moving average of the total local updates computed at the projected point with the decay coefficient $\lambda$ and server learning rate $\tau$. 
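Lemma~\ref{thm:delta_momentum} is a purely algebraic identity, so it can be checked numerically for a single round; the following sketch (Python, with random stand-ins for the client models $\theta_{i,K}^t$; all values are made up) verifies it:

```python
import numpy as np

# One synthetic round with hypothetical values.
rng = np.random.default_rng(0)
tau, lam = 0.9, 0.85
theta_t = rng.normal(size=4)                 # current global model θ^t
delta_t = rng.normal(size=4)                 # current momentum Δ^t
clients = [rng.normal(size=4) for _ in range(5)]  # θ_{i,K}^t after local training

anchor = theta_t - lam * delta_t             # accelerated model broadcast to clients
# Server aggregation rule of FedAGM:
theta_next = tau * np.mean(clients, axis=0) + (1 - tau) * anchor
delta_next = -(theta_next - theta_t)

# Averaged local update, as defined in the lemma.
nabla = -np.mean([c - anchor for c in clients], axis=0)
assert np.allclose(delta_next, tau * nabla + lam * delta_t)
```

The assertion holds for arbitrary values, mirroring the two-line algebraic proof above.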
Although only a subset of clients participates in each communication round, $\Delta^t$ maintains past global gradient information, which serves as an approximation to the gradient of the global loss function $f(\theta)$. Therefore, integrating $\lambda \Delta^t$ into the global model $\theta^t$ at server-to-client transmission can be viewed as taking a lookahead toward the interim parameters to which the accumulated global velocity will lead the global model. This anticipatory update on the client side leads each client to find a local minimum adjacent to the trajectory of the global gradients, which helps FedAGM avoid inconsistent local updates. \begin{algorithm}[t] \caption{FedAGM} \label{alg:proposed_method} \SetAlgoLined \KwIn{$\alpha$, $\beta$, $\lambda$, $\tau$, $\theta^0$, number of clients $N$, number of local iterations $K$, number of communication rounds $T$} Initialize $\Delta^{0}=0$ \For{$\text{each round}~t = 0, \dots ,T$}{ Sample subset of clients $S_t \subseteq \{1, \dots, N\}$ Server sends $\theta^t-\lambda\Delta^t$ to all clients $i \in S_t$ \For{$\text{each client}~ i \in S_t,~\textbf{in parallel}$}{ Set $\theta_{i,0}^t=\theta^t-\lambda\Delta^t$ \For{$k = 0, \dots ,K-1$}{ {${\underset{\theta_{i,k}^t}{\operatorname{argmin}}}~ L_i(\theta_{i,k}^t) = \newline \hspace*{3.33em} \alpha f_i(\theta_{i,k}^t ) + \frac{\beta}{2}\|\theta_{i,k}^t-(\theta^t-\lambda\Delta^t)\|^2$ } } Client sends $\theta_{i,K}^t$ back to the server} \textbf{In server:} \quad $\theta^{t+1}$ = $\frac{\tau}{|S_t|} \Sigma_{i \in S_t}\theta_{i,K}^{t} + (1-\tau) (\theta^t-\lambda\Delta^t)$ \quad $\Delta^{t+1}=-(\theta^{t+1}-\theta^{t})$ } \textbf{Return} ~$\theta^{t+1}$ \end{algorithm} \subsection{Local regularization with global momentum} In addition to accelerating the initial point for local training, our proposed local objective function in~\cref{eq:local_objective} also takes advantage of the global gradient information to further align the gradients of individual clients. In 
detail, due to the regularization term in the local objective function, the local update direction is as follows: \begin{equation} \begin{aligned} \nabla L_{i}(\theta_{i, k}^{t}) &=\alpha \nabla f_{i}(\theta_{i, k}^{t})+\beta(\theta_{i, k}^{t}-(\theta^{t}-\lambda \Delta^{t})) \\ &=\alpha \nabla f_{i}(\theta_{i, k}^{t}) +\beta \lambda \Delta^{t} +\beta (\theta_{i, k}^{t}-\theta^{t}) \\ &\approx(\alpha + \beta\lambda) \nabla f_{i}(\theta_{i, k}^{t}) +\beta \lambda (\nabla f(\theta) - \nabla f_{i}(\theta_{i, k}^{t})) \\ &\quad +\beta (\theta_{i, k}^{t}-\theta^{t}). \end{aligned} \end{equation} This implies that FedAGM corrects the local gradient toward the global gradient direction at every local gradient step, which also prevents each client from falling into its own biased minimum. \subsection{Discussion} While our formulation is related to existing works that also handle client heterogeneity by employing global gradient information for the local update, FedAGM has the following major advantages. First, the server and clients only communicate model weights, without imposing additional network overhead for transmitting gradients and other information~\cite{karimireddy2019scaffold, xu2021fedcm}. Note that increased communication cost challenges many realistic federated learning applications involving clients with limited network bandwidth. Also, FedAGM does not require the server to compute or maintain any historical information about the model, which saves additional computational cost on the server. Second, FedAGM is robust to low client participation rates and allows newly arriving clients to join the training process immediately without warmup because, unlike~\cite{karimireddy2019scaffold, acar2020federated, li2021model}, the clients are not required to store their local states. 
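The first two equalities in the decomposition of $\nabla L_{i}(\theta_{i,k}^{t})$ above are exact rearrangements (only the last step uses the approximation $\Delta^t \approx \nabla f(\theta)$); a quick numerical sanity check with arbitrary stand-in values (Python sketch) is:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, lam = 1.0, 0.01, 0.85
theta_ik = rng.normal(size=3)   # current local model θ_{i,k}^t
theta_t = rng.normal(size=3)    # global model θ^t
delta_t = rng.normal(size=3)    # global momentum Δ^t
grad_fi = rng.normal(size=3)    # stand-in for ∇f_i(θ_{i,k}^t)

# The regularizer gradient splits exactly into a momentum part and a proximal part.
lhs = alpha * grad_fi + beta * (theta_ik - (theta_t - lam * delta_t))
rhs = alpha * grad_fi + beta * lam * delta_t + beta * (theta_ik - theta_t)
assert np.allclose(lhs, rhs)
```

This makes explicit that the single quadratic penalty simultaneously injects the global momentum ($\beta\lambda\Delta^t$) and a proximal pull toward $\theta^t$.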
\section{Conclusion} \label{sec:conclusion} This paper tackles a realistic federated learning scenario, where a large number of clients with heterogeneous data and limited participation constraints hurt the convergence and performance of the model. To address this problem, we proposed a novel federated learning framework, which naturally aggregates previous global gradient information and incorporates it to guide client updates. The proposed algorithm transmits the global gradient information without additional communication cost by simply adding it to the current model when broadcasting the model to clients. We showed that the proposed method is well suited to realistic federated learning scenarios since it imposes no additional communication or memory overhead. 
We demonstrate the effectiveness of the proposed method in terms of robustness and communication-efficiency in the presence of client heterogeneity through extensive evaluation on multiple benchmarks. \section{Preliminaries} \label{sec:preliminary} \subsection{Problem setting and notations} The goal of federated learning is to construct a single model that minimizes the following objective function. \begin{equation} \label{global_objective} \underset{\theta\in \mathbb{R}^d}{\operatorname{argmin}}\left[f(\theta)=\frac{1}{N}\underset{i=1}{\overset{N}{\Sigma}}f_i(\theta)\right], \end{equation} where $f_i(\theta)=\mathbb{E}_{z \sim D_i}[f_i(\theta,z)]$ is the loss function of the $i$-th client. Note that clients may have heterogeneous data distributions, but communication of training data between clients and the central server is strictly prohibited due to privacy concerns. All the basic notations throughout this paper are listed in~\cref{tab:notation}. 
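For intuition, a minimal sketch (Python, with toy quadratic client losses; all names are illustrative) of the global objective $f(\theta)=\frac{1}{N}\Sigma_i f_i(\theta)$ shows that under heterogeneity the global minimizer generally differs from every single client's own minimizer:

```python
import numpy as np

# Toy setup: client i has f_i(θ) = 0.5 * ||θ - c_i||^2 with distinct centers c_i,
# so the global objective (average of the f_i) is minimized at the mean of the c_i.
centers = [np.array([0.0, 0.0]), np.array([2.0, 4.0]), np.array([4.0, 2.0])]
client_losses = [lambda th, c=c: 0.5 * np.sum((th - c) ** 2) for c in centers]

def global_objective(theta):
    """f(theta) = (1/N) * sum_i f_i(theta)."""
    return np.mean([f_i(theta) for f_i in client_losses])

theta_star = np.mean(centers, axis=0)  # global minimizer for these quadratics
```

Each client's own optimum $c_i$ gives a strictly worse global loss than $\theta^\star$, which is the gap that client drift tends toward.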
\begin{table}[t] \centering \caption{Summary of notation used in the paper} \scalebox{0.9}{ \begin{tabular}{@{}cl@{}} \hline $N$ & total number of clients \\ $T$ & total number of communication rounds \\ $S_t$ & set of sampled clients in round $t$\\ $K$ & total number of local iterations \\ $i$ & client index \\ $D_i$ & data distribution of the $i$th client \\ $\theta^t$ & shared model parameters in round $t$\\ $\theta_{i,k}^t$ & local model of the $i$th client in round $t$ and step $k$ \\ \hline \end{tabular}} \label{tab:notation} \end{table} \subsection{FedAvg algorithm} A standard algorithm for solving federated learning tasks is FedAvg~\cite{mcmahan2017communication}, described by the pseudo-code in~\cref{alg:fedavg}. Specifically, a central server sends a common model $\theta^t$ to the clients. Each client performs several steps of gradient descent to minimize its local loss function and then returns the resultant model parameters $\theta_{i,K}^t$. A new common model for the next round of training is constructed by averaging the models sent by all participants in the current round. While taking multiple local updates in FedAvg before aggregation cuts down the communication cost required for training, in practice this property leads to the so-called client-drift~\cite{karimireddy2019scaffold} issue, where the individual client updates do not align well due to over-fitting on the local client data. This phenomenon inhibits FedAvg from converging to the optimum of the average loss over all clients. \section{Related work} \label{sec:related_work} Federated learning was first proposed by McMahan~\etal~\cite{mcmahan2017communication}, who introduced the key properties of federated learning as non-iid client data, massive distribution, and partial participation, and then proposed the FedAvg algorithm as a solution. 
Several works explore the negative influence of heterogeneity in federated learning empirically~\cite{zhao2018federated} and derive convergence rates depending on the amount of heterogeneity~\cite{li2018federated, wang2019adaptive, khaled2019first, li2019convergence, hsieh2020non, wang2020tackling}. In this work, we focus on the problem of non-iidness, also known as client statistical heterogeneity, where participating clients have different data distributions. To improve FedAvg in the presence of heterogeneous clients, there is a long line of work that penalizes local models for drifting towards their local minima by regularizing the local objectives. These approaches regularize the client loss function with a proximal term~\cite{li2018federated} or use primal-dual approaches~\cite{zhang2020fedpd, acar2020federated}. From the activation-regularization perspective, the client update is regularized to produce activations similar to the downloaded global model via contrastive learning~\cite{li2021model}, mixup with global statistics~\cite{yoon2021fedmix}, or generative models~\cite{zhu2021data}. Another line of work reduces inter-client variance to eliminate inconsistent updates across clients. Several approaches use control variates~\cite{karimireddy2019scaffold, liang2019variance, karimireddy2020mime, li2019feddane} or global gradient momentum~\cite{xu2021fedcm} to reduce biases in client updates. \cite{khanduri2021stem, das2020faster} apply the STORM algorithm~\cite{cutkosky2019momentum} to reduce the variance caused by both server-level and client-level SGD procedures. Another way to de-bias client updates is to estimate the global posterior using local posterior sampling by running Markov chain Monte Carlo (MCMC) on the client side~\cite{al2020federated}. However, most of these methods require full participation, additional communication cost, or client storage, which can be problematic in realistic federated learning tasks. 
On the other hand, several works incorporate momentum~\cite{wang2019slowmo, hsu2019measuring} and adaptive gradient methods~\cite{reddi2021adaptive} into server optimization to accelerate the convergence of federated learning. While these methods only involve server-level optimization, our method incorporates the momentum of global gradient information into both server- and client-level optimization. \section{Introduction} \label{sec:introduction} Federated learning, first introduced in~\cite{mcmahan2017communication}, is an emerging large-scale machine learning framework in which a central server learns a shared model, without direct observation of training examples, using a large number of remote clients with separate datasets. This decentralized learning concept allows federated learning to achieve a basic level of data privacy. Since remote clients such as mobile or IoT devices have limited communication bandwidth, federated learning algorithms are particularly sensitive to communication cost. FedAvg~\cite{mcmahan2017communication} is known as the baseline algorithm of federated learning. In FedAvg, a subset of the clients update their models by gradient descent using their local data and then upload the resultant models to the server, which updates the global model parameters via model averaging. As discussed in the extensive analyses of the convergence of FedAvg~\cite{stich2018local, yu2019parallel, wang2021cooperative, stich2019error, basu2020qsparse}, multiple local updates conducted before aggregation greatly reduce the communication cost required for training a model in the server. However, federated learning faces two key challenges: high heterogeneity in the data distributed over clients and a limited participation rate, the so-called cross-device setting~\cite{kairouz2019advances}, where only a subset of clients is active at a given time due to limited communication bandwidth. 
Several studies~\cite{zhao2018federated, karimireddy2019scaffold} have shown that multiple local updates in clients with non-iid data lead to client drift, i.e., diverging updates in the individual clients. Such a phenomenon introduces high variance in the averaging step of FedAvg for the global update, which hampers convergence to the optimal average loss over all clients~\cite{li2018federated, wang2019adaptive, khaled2019first, li2019convergence, hsieh2020non, wang2020tackling}. The client-drift challenge is exacerbated when the participation rate per communication round is low due to unreliable connections between the server and the clients. To properly address the client heterogeneity issue, we propose a novel optimization algorithm for federated learning, Federated Averaging with Acceleration of Global Momentum (FedAGM), which conveys the momentum of global gradient information to clients and enables the momentum to be incorporated into the local updates in the individual clients. This approach turns out to be effective for reducing the gap between the global and local losses. 
Contrary to the existing methods that require additional steps to compute the momentum, FedAGM transmits the global model integrated with the momentum and consequently saves the extra communication and computational cost. In addition, FedAGM incorporates a regularization term in the objective function of clients to make the global and local gradients more consistent. FedAGM incurs no extra computation and communication costs on either the server or the client side. Although there have been a growing number of works that handle client heterogeneity in federated learning, FedAGM has the following major advantages. Unlike previous methods focusing on strategies for either client-level optimization~\cite{xu2021fedcm, acar2020federated, karimireddy2019scaffold, li2021model, li2018federated, zhang2020fedpd, karimireddy2020mime, li2019feddane, yang2020federated, liang2019variance} or server-level updates~\cite{reddi2021adaptive, wang2019slowmo, hsu2019measuring}, FedAGM incorporates the momentum based on the global gradient information into both server and client updates. This feature allows the proposed method to achieve the same level of task-specific performance with fewer communication rounds. 
Moreover, while most existing methods impose additional requirements compared to FedAvg, such as full participation~\cite{liang2019variance, zhang2020fedpd, khanduri2021stem}, additional communication bandwidth~\cite{xu2021fedcm, karimireddy2019scaffold, zhu2021data, karimireddy2020mime, li2019feddane, das2020faster}, or storage costs on clients to store local states~\cite{acar2020federated, karimireddy2019scaffold, li2021model}, FedAGM is completely free from any additional communication and memory overhead, which ensures compatibility with large-scale and low-participation federated learning scenarios. The main contributions of this paper are summarized as follows. \begin{itemize} \item[$\bullet$] We propose a communication-efficient federated optimization algorithm, which deals with heterogeneous clients effectively. The proposed approach computes and transmits the global model with a momentum efficiently, which facilitates the optimization in clients. \item[$\bullet$] We also revise the objective function of clients by augmenting it with a regularization term on the local gradient direction, which further aligns the gradients of the server and the individual clients. \item[$\bullet$] We show that the proposed approach does not require any additional communication cost or memory overhead, which is desirable for real-world settings of federated learning. \item[$\bullet$] Through extensive evaluation on multiple benchmarks, we demonstrate that our optimization technique is communication-efficient and robust to client heterogeneity, especially when the participation ratio is low. \end{itemize} The rest of this paper is organized as follows. 
We first review the core algorithmic idea of federated learning, FedAvg, and its variants in~\cref{sec:related_work,sec:preliminary}. Then, we formally describe the proposed federated learning framework and demonstrate the effectiveness of our method in~\cref{sec:method,sec:experiment}. Finally, we conclude the paper in~\cref{sec:conclusion}. \section{Experiments} \label{sec:experiment} This section presents empirical evaluations of FedAGM and competing federated learning methods, highlighting the robustness of the proposed method to data heterogeneity in terms of both performance and communication efficiency. \subsection{Experimental setup} \paragraph{Datasets and baselines} We conduct a set of experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet\footnote{\url{https://www.kaggle.com/c/tiny-imagenet}}~\cite{le2015tiny} with various data heterogeneity levels and participation rates. Note that Tiny-ImageNet ($200$ classes with $10,000$ samples) is more natural and realistic than the simpler datasets, such as MNIST and CIFAR, used for evaluation in many previous works~\cite{mcmahan2017communication, karimireddy2019scaffold}. We generate the IID data split by randomly assigning training data to individual clients without replacement. For the non-IID data, we simulate data heterogeneity by sampling the label ratios from a Dirichlet distribution with parameter \{0.3, 0.6\}, following~\cite{hsu2019measuring}. We keep the training data balanced, so each client holds the same amount of data. We compare our method, FedAGM, with several state-of-the-art federated learning techniques, including FedAvg~\cite{mcmahan2017communication}, FedProx~\cite{li2018federated}, FedAvgm~\cite{hsu2019measuring}, FedAdam~\cite{reddi2021adaptive}, FedDyn~\cite{acar2020federated}, and FedCM~\cite{xu2021fedcm}. We adopt a standard ResNet-18~\cite{he2016deep} as the backbone network for all benchmarks, but we replace batch normalization with group normalization as suggested in~\cite{hsieh2020non}.
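The Dirichlet-based non-IID split above can be sketched as follows. This is a minimal sketch of the common protocol from Hsu et al., with a hypothetical function name; for each class, a proportion vector over clients is drawn from $\mathrm{Dir}(\alpha)$, so smaller $\alpha$ yields more skewed client label distributions. The paper additionally balances client sizes, which this sketch omits.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.3, seed=0):
    """Partition sample indices into label-skewed non-IID client shards."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        # shuffle this class's indices, then cut them per the Dirichlet draw
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx
```

With $\alpha = 0.3$ most clients end up dominated by a few classes, which is the heterogeneity level used in the main experiments.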
\begin{table*}[t!] \centering \caption{Comparison of FedAGM with baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet with a moderate number of clients. In these experiments, the number of clients, participation rate, and Dirichlet parameter are set to 100, 5\%, and $0.3$, respectively. Accuracy at the target round and the communication round to reach the target test accuracy are based on a running exponential moving average with parameter $0.9$. The arrows indicate whether higher ($\uparrow$) or lower ($\downarrow$) is better. The best performance in each column is denoted in {\bf bold}.} \vspace{-0.2cm} \label{tab:clients100_prate5} \scalebox{0.9}{ \begin{tabular}{lcccccccccccc} \multirow{3}{*}{Method} & \multicolumn{4}{c}{CIFAR10} & \multicolumn{4}{c}{CIFAR100 } & \multicolumn{4}{c}{Tiny-ImageNet} \\ \cline{2-13} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$)} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} \\ & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{1000R} & 84\% & 87\% & 500R & \multicolumn{1}{c}{1000R} & 47\% & 53\% & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{800R} & \multicolumn{1}{c}{35\%} & \multicolumn{1}{c}{38\%} \\ \cline{1-13} FedAvg~\cite{mcmahan2017communication} & 74.36 & 82.53 & 840 & 1000+ & 41.88 & 47.83 & 924 & 1000+ & 33.94 & 35.37 & 645 & 1000+ \\ FedProx~\cite{li2018federated} & 73.70 & 82.68 & 826 & 1000+ & 42.43 & 48.32 & 881 & 1000+ & 34.14 & 35.53 & 613 & 1000+ \\ FedAvgm~\cite{hsu2019measuring} & 80.56 & 85.48 & 519 & 828 & 46.98 & 53.29 & 515 & 936 & 36.32 & 38.51 & 416 & 829 \\ FedAdam~\cite{reddi2021adaptive} & 73.18 & 81.14 & 878 & 1000+ & 44.80 & 52.48 & 691 & 1000+ & 33.22 & 38.91 & 658 & 945 \\ FedDyn~\cite{acar2020federated} & {\bf85.67} & 88.84 & 340 & 794 & 52.66 & 58.46 & 293 & 504 & 38.86 & 42.82 & 327 & 440 \\ FedCM~\cite{xu2021fedcm} &
78.92 & 83.71 & 624 & 1000+ & 52.44 & 58.06 & 293 & 572 & 31.61 & 37.87 & 694 & 1000+\\ FedAGM (ours) & 85.13 & {\bf89.10} & {\bf326} & {\bf450} & {\bf55.79} & {\bf62.51} & {\bf260} & {\bf389} & {\bf42.26}& \bf{46.31} & {\bf226} & {\bf331}\\ \cline{1-13} \end{tabular}} \vspace{0.3cm} \end{table*} \begin{table*}[t!] \centering \caption{Comparison of FedAGM with baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet with a large number of clients. In these experiments, number of clients, participation rate, and Dirichlet parameter are set to 500, 2\%, and $0.3$, respectively. } \vspace{-0.2cm} \label{tab:clients500_prate2} \scalebox{0.9}{ \begin{tabular}{lcccccccccccc} \multirow{3}{*}{Method} & \multicolumn{4}{c}{CIFAR10} & \multicolumn{4}{c}{CIFAR100 } & \multicolumn{4}{c}{Tiny-ImageNet} \\ \cline{2-13} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$)} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} \\ & 500R & \multicolumn{1}{c}{1000R} & 73\% & 77\% & 500R & \multicolumn{1}{c}{1000R} & 36\% & 40\% & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{800R} & 24\% & 30\% \\ \cline{1-13} FedAvg~\cite{mcmahan2017communication} & 58.74 & 71.45 & 1000+ & 1000+ & 30.16& 38.11& 842& 1000+ & 23.63 &29.48 & 523 & 1000+ \\ FedProx~\cite{li2018federated}& 57.88 & 70.75 & 1000+ & 1000+ & 29.28 & 36.16 & 966 & 1000+ &25.45 &31.71 &445 &799 \\ FedAvgm~\cite{hsu2019measuring} & 65.85 & 77.49 & 753 & 959 & 31.80 & 40.54 & 724 & 955 & 26.75 & 33.26 & 386 & 687 \\ FedAdam~\cite{reddi2021adaptive} & 61.53 & 69.94 & 1000+ & 1000+ & 24.40 & 30.83 & 1000+ & 1000+ &21.88 &28.08 & 648& 1000+ \\ FedDyn~\cite{acar2020federated} & 69.14 & 82.56 & 600 & 719 & 33.52 & 45.01 & 576 & 714 & 27.74 & 37.25 & 387 & 580 \\ FedCM~\cite{xu2021fedcm} & 69.27 & 76.57 & 742 & 1000+ & 27.23 & 38.79 & 872 & 1000+ &19.41 &24.09 & 
975 & 1000+ \\ FedAGM (ours) & \textbf{76.50} & \textbf{83.93} & \textbf{410} & \textbf{519} & \textbf{35.68} & \textbf{48.40} & \textbf{505} & \textbf{616} &\textbf{31.47} &\textbf{38.48} & \textbf{246} & \textbf{447}\\ \hline \end{tabular}} \vspace{-0.1cm} \end{table*} \paragraph{Validation metrics} To evaluate the generalization performance of the methods, we use the entire test sets of the CIFAR-10~\cite{krizhevsky2009learning}, CIFAR-100~\cite{krizhevsky2009learning}, and Tiny-ImageNet datasets. Since both the speed of learning and the final performance are important quantities for federated learning, we measure: (i) the performance attained at a specified number of rounds, and (ii) the number of rounds needed for an algorithm to attain a desired level of target accuracy, following~\cite{al2020federated}. For methods that could not achieve the target accuracy within the maximum number of communication rounds, we append the communication round with a $+$ sign. We also report the communication savings of FedAGM compared to each method in parentheses. Note that all the compared methods except FedCM have the same communication cost per round, whereas FedCM costs twice as much as the other methods in server-to-client communication due to the additional transmission of global gradient information. \paragraph{Implementation details} We use PyTorch~\cite{paszke2019pytorch} to implement FedAGM and the other baselines. We follow~\cite{acar2020federated, xu2021fedcm} for the evaluation protocol. For local updates, we use the SGD optimizer with a learning rate of $0.1$ for all approaches on the three benchmarks. We apply exponential decay to the local learning rate, and the decay parameter is selected from $\{1.0, 0.998, 0.995\}$. We apply no momentum for local SGD, but apply a weight decay of $0.001$ to prevent overfitting. We also use gradient clipping to increase the stability of the algorithms.
The number of local training epochs for each client update is set to $5$, and the batch size is chosen so that the total number of local iterations is $50$ for all experiments. We set the global learning rate to $1$ for all methods except FedAdam, for which it is set to $0.01$. We list the details of the hyperparameters specific to FedAGM and the baselines in the supplementary materials. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=1\linewidth]{figures/cifar10_mid_100.pdf} \caption{5\% participation, 100 clients} \end{subfigure} \hspace{0.05cm} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=1\linewidth]{figures/cifar10_low_500.pdf} \caption{2\% participation, 500 clients} \end{subfigure} \hspace{0.05cm} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=1\linewidth]{figures/cifar10_low_100.pdf} \caption{1\% participation, 100 clients} \end{subfigure} \caption{ The convergence plot of FedAGM and comparison methods on CIFAR-10 with different client heterogeneity.
} \label{fig:moderate_curve} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=1\linewidth]{figures/cifar100_mid_100.pdf} \caption{5\% participation, 100 clients} \end{subfigure} \hspace{0.05cm} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=1\linewidth]{figures/cifar100_low_500.pdf} \caption{2\% participation, 500 clients} \end{subfigure} \hspace{0.05cm} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=1\linewidth]{figures/cifar100_low_100.pdf} \caption{1\% participation, 100 clients} \end{subfigure} \caption{ The convergence plot of FedAGM and comparison methods on CIFAR-100 with different client heterogeneity. } \label{fig:massive_curve} \end{figure*} \subsection{Evaluation on a moderate number of clients} We first present the performance of the proposed approach, FedAGM, on CIFAR-10, CIFAR-100, and Tiny-ImageNet in comparison to the baseline methods in a moderate-scale federated learning setting, with 100 devices and a constant participation rate per round. \cref{tab:clients100_prate5}~provides a detailed comparison of FedAGM to the baselines in terms of the attained performance and the speed of learning on the three benchmark tasks, and shows that FedAGM outperforms the competing methods in most cases. In particular, FedAGM outperforms the momentum-based methods that incorporate momentum in either the server-side update (FedAvgM, FedAdam) or the client-side update (FedCM). This is partly because, in FedAGM, the accelerated global model enables each client to look ahead along the global update trajectory and find local minima near it, which aligns client updates. Note also that FedCM requires twice the communication cost, since the server transmits both the current model and the associated momentum at each server-to-client communication.
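The rounds-to-target numbers in these tables are computed on accuracy curves smoothed with a running exponential moving average (parameter $0.9$, as stated in the table captions). A minimal sketch of this metric, with a hypothetical function name and the common convention of initializing the average at the first observation:

```python
def rounds_to_target(acc_per_round, target, decay=0.9):
    """First communication round at which the EMA-smoothed accuracy
    reaches the target; returns None if it is never reached (reported
    as '1000+' in the tables)."""
    ema = None
    for r, acc in enumerate(acc_per_round, start=1):
        ema = acc if ema is None else decay * ema + (1 - decay) * acc
        if ema >= target:
            return r
    return None
```

The smoothing prevents a single lucky round from counting as "reaching" the target.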
\subsection{Evaluation on a large number of clients} To further validate the effectiveness of the proposed method in handling client heterogeneity, we perform experiments with a large number of clients, which is a more realistic federated learning scenario. In this setting, since the total number of clients is five times larger than in the previous experiments, the amount of training data per client is reduced by 80\%; specifically, each client holds 100 data points in both CIFAR-10 and CIFAR-100. \begin{table}[t] \centering \caption{Comparison of FedAGM with baselines on CIFAR-10 for a more limited participation rate (1\%). The number of clients and the Dirichlet parameter are set to 100 and $0.3$, respectively.} \label{tab:cifar10_p_0.01} \scalebox{0.9}{ \begin{tabular}{lcc} \multirow{2}{*}{Method} & accuracy (\%, $\uparrow$ ) & rounds (\#, $\downarrow$) \\ & 1000R & 70\% \\ \cline{1-3} FedAvg~\cite{mcmahan2017communication} & 64.54 & 1000+ ($>$ 1.43$\times$) \\ FedProx~\cite{li2018federated} & 65.47 & 1000+ ($>$ 1.43$\times$) \\ FedAvgm~\cite{hsu2019measuring} & 63.73 & 1000+ ($>$ 1.43$\times$) \\ FedAdam~\cite{reddi2021adaptive} & 69.29 & 1000+ ($>$ 1.43$\times$) \\ FedDyn~\cite{acar2020federated} & 72.18 & 854 (1.22$\times$) \\ FedCM~\cite{xu2021fedcm} & 55.03 & 1000+ ($>$ 1.43$\times$) \\ FedAGM (ours) & {\bf76.72} & {\bf696} \\ \cline{1-3} \end{tabular}} \end{table} \begin{table}[t] \centering \caption{Comparison of FedAGM with baselines on CIFAR-100 for a more limited participation rate (1\%). The number of clients and the Dirichlet parameter are set to 100 and $0.3$, respectively.
} \label{tab:cifar100_p_0.01} \scalebox{0.9}{ \begin{tabular}{lcc} \multirow{2}{*}{Method} & accuracy (\%, $\uparrow$ ) & rounds (\#, $\downarrow$) \\ & 1000R & 40\% \\ \cline{1-3} FedAvg~\cite{mcmahan2017communication} & 40.44 & 994 (1.30$\times$) \\ FedProx~\cite{li2018federated} & 40.16 & 995 (1.30$\times$) \\ FedAvgm~\cite{hsu2019measuring} & 36.30 & 1000+ ($>$ 1.30$\times$) \\ FedAdam~\cite{reddi2021adaptive} & 18.02 & 1000+ ($>$ 1.30$\times$) \\ FedDyn~\cite{acar2020federated} & 42.64 & 849 (1.11$\times$) \\ FedCM~\cite{xu2021fedcm} & 27.16 & 1000+ ($>$ 1.30$\times$) \\ FedAGM (ours) & {\bf45.15} & {\bf768} \\ \cline{1-3} \end{tabular}} \vspace{+0.2cm} \end{table} \begin{table*}[t] \centering \caption{Comparison of FedAGM with baselines on CIFAR-10 with 100 clients of Dirichlet 0.6 split on 10\%, 5\%, and 1\% client participation. The arrows indicate whether higher ($\uparrow$) or lower ($\downarrow$) is better. The best performance in each column is denoted in {\bf bold}. } \vspace{-0.2cm} \label{tab:cifar10_d_0.6} \scalebox{0.9}{ \begin{tabular}{lcccccccccccc} \multirow{4}{*}{Method }&\multicolumn{12}{c}{Participation rate}\\ \cline{2-13} & \multicolumn{4}{c}{10 \%} & \multicolumn{4}{c}{5 \%} & \multicolumn{4}{c}{1 \%} \\ \cline{2-13} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$)} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} \\ & 500R & \multicolumn{1}{c}{1000R} & 83\% & 87\% & 500R & \multicolumn{1}{c}{1000R} & 81\% & 85\% & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{1000R} & 71\% & 75\% \\ \cline{1-13} FedAvg~\cite{mcmahan2017communication} &83.09 & 87.15 & 497 &907 &80.56 & 85.97& 520& 832 & 62.84 &76.92 & 767 & 943 \\ FedProx~\cite{li2018federated} &83.22 &87.33 & 494 & 855 & 80.39 & 85.53 & 524 & 889 &61.27 &75.16 &780 &994 \\ FedAvgm~\cite{hsu2019measuring} &
86.58 & 89.70 &346 & 568& 84.65 & 87.96 & 355 & 541 & 62.67 & 75.05 & 804&998 \\ FedAdam~\cite{reddi2021adaptive} & 84.97 & 87.59 & 340& 715 & 80.25 & 83.52 &526& 1000+ & 60.32& 75.16 &828 &952 \\ FedDyn~\cite{acar2020federated} & \bf{89.98} & 90.78 & 244 & 303 & 80.10 & 86.47 & 551 & 826& 67.50&79.57 & 580 & 661 \\ FedCM~\cite{xu2021fedcm} & 87.50 & 89.29 & 226 &452 & 82.84 &86.64 & 385 & 714 & 42.57 &53.75 &1000+ &1000+ \\ FedAGM (ours) & 89.24 & \bf{91.10} & \bf{184} & \bf{286} & \bf{87.57} & \bf{90.56} & \bf{218} & \bf{332}& \bf{73.59}& \bf{81.50} &\bf{463} &\bf{584} \\ \cline{1-13} \end{tabular}} \vspace{0.3cm} \end{table*} \begin{table*}[t]\centering \caption{Comparison of FedAGM with baselines on CIFAR-10 with 100 clients of IID split on 10\%, 5\%, and 1\% client participation. The arrows indicate whether higher ($\uparrow$) or lower ($\downarrow$) is better. The best performance in each column is denoted in {\bf bold}. } \vspace{-0.2cm} \label{tab:cifar10_iid} \scalebox{0.9}{ \begin{tabular}{lcccccccccccc} \multirow{4}{*}{Method }&\multicolumn{12}{c}{Participation rate}\\ \cline{2-13} & \multicolumn{4}{c}{10 \%} & \multicolumn{4}{c}{5 \%} & \multicolumn{4}{c}{1 \%} \\ \cline{2-13} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$)} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} \\ & 500R & \multicolumn{1}{c}{1000R} & 84\% & 88\% & 500R & \multicolumn{1}{c}{1000R} & 82\% & 86\% & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{1000R} & 82\% & 86\% \\ \cline{1-13} FedAvg~\cite{mcmahan2017communication} & 86.83 & 89.84 & 374 &600 &85.28 &88.69 & 372& 545 &77.03 & 85.06& 724 & 1000+ \\ FedProx~\cite{li2018federated} & 86.93 & 89.71 & 359 & 606 & 84.79 &87.99 & 384 & 594 & 79.2 &86.13 &645 & 977\\ FedAvgm~\cite{hsu2019measuring} & 88.51 & 90.23 & 256& 429&87.67&89.96 & 258 & 
375 & 79.09 & 85.79 &685 & 1000+ \\ FedAdam~\cite{reddi2021adaptive} & 89.05 & 90.87 & 204& 397 & 85.29 & 87.97 & 286 & 633 &70.78 & 83.01 &912 &1000+ \\ FedDyn~\cite{acar2020federated} & \bf{91.57} &91.98 & 233 & 295 & 88.41 & 89.91 & 265 &355 &76.49 & 84.79& 747 & 1000+ \\ FedCM~\cite{xu2021fedcm} &89.93 & 91.18 & 166 &296 & 87.38 & 89.65& 181 & 352&69.99 & 76.19& 1000+&1000+ \\ FedAGM (ours) & 91.08 & \bf{92.26} & \bf{154} & \bf{239} & \bf{90.57} & \bf{92.29} & \bf{157} & \bf{295} & \bf{81.05} & \bf{87.01} & \bf{536} & \bf{892} \\ \cline{1-13} \end{tabular}} \end{table*} \cref{tab:clients500_prate2}~shows the results of FedAGM and the other methods on CIFAR-10 and CIFAR-100. We first observe that the overall performance of all algorithms is lower than with a moderate number of clients. This is because, as the amount of training data for each client decreases, each client is more likely to fall into its own distinct local optimum, which intensifies client drift. For instance, it takes FedAGM 450 rounds to achieve 84\% with 100 devices, while it takes 519 rounds to achieve 77\% with 500 devices. A similar trend is observed for CIFAR-100 and the other methods. Nevertheless, FedAGM outperforms the compared methods consistently on all benchmarks in terms of accuracy and communication efficiency. This is partly because FedAGM effectively aligns the gradients of the server and the individual clients. \subsection{Evaluation on the low participation rate} To validate the robustness to the partial-participation nature of federated learning, we simulate an extremely constrained federated learning scenario with a very low participation level ($1\%$) on 100 clients. \cref{tab:cifar10_p_0.01,tab:cifar100_p_0.01}~show the results of FedAGM and the competing methods on CIFAR-10 and CIFAR-100, respectively.
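At a $1\%$ participation rate over 100 clients, a single client is active per round, which is what makes this setting so hostile to methods that depend on fresh per-client state. A minimal, illustrative helper for sampling the active subset each round (the function name and rounding convention are assumptions, not from the paper):

```python
import random

def sample_clients(n_clients, participation, seed=None):
    """Sample the active-client subset for one communication round,
    without replacement; at least one client is always selected."""
    k = max(1, round(n_clients * participation))
    rng = random.Random(seed)
    return rng.sample(range(n_clients), k)
```

For example, 100 clients at $1\%$ yields one active client per round, while 500 clients at $2\%$ yields ten.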
Although the overall accuracy at 1000 rounds is degraded for all methods due to the limited number of participants per round, the proposed algorithm outperforms all compared methods on all datasets in terms of both generalization performance and communication efficiency. The performance gaps between FedAGM and the baselines are much larger than in the higher-participation experiments~(\cref{tab:clients100_prate5}). Note that FedDyn shows a large performance drop as the participation rate drops; this is mainly because it needs to store local states, which can easily become stale and hurt the convergence of the algorithm. In contrast, our method does not rely on past information stored on local devices and is relatively unaffected by this issue, which validates our claim that the proposed method is robust to the low-participation property of federated learning. \subsection{Convergence plots} Convergence plots of FedAGM and the competing baselines on CIFAR-10 and CIFAR-100 under various settings are provided in~\cref{fig:moderate_curve,fig:massive_curve}. We observe that FedAGM outperforms the other strong baselines across different participation rates and heterogeneity levels. Note that the performance gap between FedCM and the baselines grows with 1\% participation and 100 devices, whereas FedAGM remains stable; this also validates our claim that FedAGM is robust to the low-participation cases of federated learning. \begin{table*}[t]\centering \caption{Comparison of FedAGM with baselines on CIFAR-100 with 100 clients of Dirichlet 0.6 split on 10\%, 5\%, and 1\% client participation. The arrows indicate whether higher ($\uparrow$) or lower ($\downarrow$) is better. The best performance in each column is denoted in {\bf bold}.
} \vspace{-0.2cm} \label{tab:cifar100_d_0.6} \scalebox{0.9}{ \begin{tabular}{lcccccccccccc} \multirow{4}{*}{Method }&\multicolumn{12}{c}{Participation rate}\\ \cline{2-13} & \multicolumn{4}{c}{10 \%} & \multicolumn{4}{c}{5 \%} & \multicolumn{4}{c}{1 \%} \\ \cline{2-13} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$)} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} \\ & 500R & \multicolumn{1}{c}{1000R} & 56\% & 60\% & 500R & \multicolumn{1}{c}{1000R} & 54\% & 58\% & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{1000R} & 38\% & 42\% \\ \cline{1-13} FedAvg~\cite{mcmahan2017communication} & 45.22&50.02 &1000+ &1000+ & 43.91 & 49.18& 1000+ &1000+ &29.98 & 44.11 & 744 &911 \\ FedProx~\cite{li2018federated} & 44.43 & 49.24 & 1000+ &1000+ & 43.15 &48.45 &1000+ & 1000+ & 29.77 & 42.61 & 790 & 962 \\ FedAvgm~\cite{hsu2019measuring} & 47.27&51.80 &1000+ &1000+ &46.66 & 52.49 & 1000+ & 1000+ &28.41& 44.56 & 771 &915 \\ FedAdam~\cite{reddi2021adaptive} & 53.74 & 58.82 & 659 & 1000+ &45.95 & 51.63 & 1000+& 1000+ &18.89 & 24.53 & 1000+& 1000+ \\ FedDyn~\cite{acar2020federated} & \bf{59.86} & \bf{62.68} & \bf{355} & \bf{506} & 53.74 &59.70 & 518& 776 &33.96 &47.23 &592& 768 \\ FedCM~\cite{xu2021fedcm} &58.33 &61.75 & 377 & 647 &53.75 &60.48 & 511& 740 & 19.77 & 27.20 & 1000+ & 1000+ \\ FedAGM (ours) & 58.21 & 62.60 & 397 & 595 &\bf{58.82} &\bf{63.88} & \bf{325}& \bf{471} & \bf{36.75} & \bf{49.34} & \bf{544} & \bf{672} \\ \cline{1-13} \end{tabular}} \vspace{0.3cm} \end{table*} \begin{table*}[t]\centering \caption{Comparison of FedAGM with baselines on CIFAR-100 with 100 clients of IID split on 10\%, 5\%, and 1\% client participation. The arrows indicate whether higher ($\uparrow$) or lower ($\downarrow$) is better. The best performance in each column is denoted in {\bf bold}. 
} \vspace{-0.2cm} \label{tab:cifar100_iid} \scalebox{0.9}{ \begin{tabular}{lcccccccccccc} \multirow{4}{*}{Method }&\multicolumn{12}{c}{Participation rate}\\ \cline{2-13} & \multicolumn{4}{c}{10 \%} & \multicolumn{4}{c}{5 \%} & \multicolumn{4}{c}{1 \%} \\ \cline{2-13} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$)} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} \\ & 500R & \multicolumn{1}{c}{1000R} & 56\% & 60\% & 500R & \multicolumn{1}{c}{1000R} & 52\% & 56\% & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{1000R} & 42\% & 48\% \\ \cline{1-13} FedAvg~\cite{mcmahan2017communication} & 45.38 &50.37 & 1000+ &1000+ & 43.60& 48.01& 1000+&1000+ & 35.92 & 49.15& 678 &940 \\ FedProx~\cite{li2018federated} &44.96 & 49.16 & 1000+ & 1000+ & 43.27 & 47.23 &1000+ & 1000+ & 36.50 &48.38 &691 & 984\\ FedAvgm~\cite{hsu2019measuring} & 47.19 & 51.48 &1000+ &1000+ & 47.43 & 52.83 & 880 & 1000+ & 39.07& 51.75 &578 &822 \\ FedAdam~\cite{reddi2021adaptive} & 55.40 & 60.08 & 545& 997 & 52.23 & 57.73 & 496 & 835 &25.84 & 35.08 &1000+ & 1000+ \\ FedDyn~\cite{acar2020federated} & \bf{59.91}& 63.37 &335 &\bf{513} & 54.88 & 60.69 & 412 &558 &34.50 & 43.91& 779 & 1000+ \\ FedCM~\cite{xu2021fedcm} &59.37 & 63.32 & 335 &531 & 57.10 &62.48 & 288 & 466 & 22.68&34.37 & 1000+&1000+ \\ FedAGM (ours) & 59.64& \bf{63.50} &\bf{321}& 523& \bf{58.23} & \bf{62.48} & \bf{280} & \bf{391} & \bf{40.07}& \bf{52.75} & \bf{536} & \bf{769}\\ \cline{1-13} \end{tabular}} \vspace{-0.1cm} \end{table*} \subsection{Evaluation on less heterogeneous setting} \cref{tab:cifar10_d_0.6,tab:cifar10_iid,tab:cifar100_d_0.6,tab:cifar100_iid}~show that FedAGM matches or outperforms the performance of competitive methods when data heterogeneity is not severe (Dirichlet 0.6) or absent (IID) in most cases. 
Note that, while the compared methods show performance degradation as the participation rate decreases, FedAGM shows little degradation on both benchmarks. This implies that FedAGM is more robust to low participation rates than the other baselines. This is partly because low client heterogeneity reduces noise in the momentum of the global gradient, which contributes to a smooth global update trajectory. Since FedAGM effectively incorporates the momentum into local updates, it is relatively unaffected by the partial participation of federated learning. \subsection{Ablation study} \paragraph{Sensitivity to decay coefficient $\lambda$} As shown in~\cref{alg:proposed_method}, $\lambda$ controls how much the server projects the current model toward the global gradient to initialize the local update. To analyze the effect of $\lambda$ on the performance of FedAGM, we evaluate the generalization performance of FedAGM after $1000$ rounds, varying $\lambda$ over \{0.75, 0.8, 0.85, 0.9, 0.95\}, on CIFAR-10 with 500 clients under different data splits at $2\%$ participation. \cref{tab:lamb_ablation} shows that FedAGM converges well for all choices of $\lambda$, with a small performance drop when $\lambda$ is set to $0.95$. We note that selecting too large a $\lambda$ harms the performance of FedAGM, since excessive acceleration toward the momentum can cause oscillation in the global optimization procedure. Despite this, the proposed method still outperforms most existing algorithms.
\begin{table}[t] \centering \caption{$\lambda$ sensitivity of FedAGM for test accuracy at 1000 rounds on CIFAR-10 with a large number of clients.} \label{tab:lamb_ablation} \scalebox{0.9}{ \begin{tabular}{cccccc} $\lambda$ & 0.75 & 0.8 & 0.85 & 0.9 & 0.95 \\ \hline Dirichlet (0.3) & 81.32 & 82.52 & 82.80 & 81.82 & 78.25 \\ Dirichlet (0.6) & 83.97 & 84.89 & 85.28 & 84.56 & 82.07 \\ IID & 85.52 & 86.92 & 86.83 & 87.08 & 84.37 \\ \hline \end{tabular}} \end{table} \begin{table}[t] \centering \caption{Effect of regularization in the local objective function on CIFAR-100. `P' and `C' denote the participation rate and the number of clients, respectively.} \label{abl:beta} \scalebox{0.9}{ \begin{tabular}{cccc} \multirow{2}{*}{P / C} & \multirow{2}{*}{Method} & accuracy (\%, $\uparrow$ ) & rounds (\#, $\downarrow$) \\ && 1000R & 42\% \\ \cline{1-4} \multirow{2}{*}{5\% / 100}&$\beta= 0$ & 59.50 & 200 \\ &$\beta \neq 0$ & \bf{62.51} & \bf{197} \\ \cline{1-4} \multirow{2}{*}{1\% / 100}&$\beta= 0$ & 42.95 & 914 \\ &$\beta \neq 0$ & \bf{45.15} & \bf{895} \\ \cline{1-4} \multirow{2}{*}{2\% / 500}&$\beta= 0$ & 46.80 & 736 \\ &$\beta \neq 0$ & \bf{48.40} & \bf{678} \\ \cline{1-4} \end{tabular}} \end{table} \paragraph{Effect of adding regularization to the local objective function} To analyze the effect of incorporating the regularization term into the local objective function, we compare the performance with $\beta=0$ and $\beta \neq 0$ on CIFAR-100. \cref{abl:beta}~shows that FedAGM with the regularization term achieves better performance in terms of convergence rate and accuracy at all levels of participation. This implies that correcting local gradients at every local update helps to mitigate inconsistent local updates across clients.
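The regularized client objective being ablated here can be sketched as a proximal-style penalty toward the accelerated global model. This is a hedged sketch under the assumption that the regularizer is $\frac{\beta}{2}\lVert w - w_{\text{accel}}\rVert^2$, consistent with the $L_2$-to-accelerated-point description earlier in the paper; the exact form and the name `local_objective` are illustrative, and $\beta = 0$ recovers the unregularized local update.

```python
import numpy as np

def local_objective(w, task_loss, w_accel, beta=0.01):
    """Client loss = task loss + (beta/2) * ||w - w_accel||^2.

    The penalty pulls local iterates toward the accelerated global
    model transmitted by the server, aligning client updates.
    """
    reg = 0.5 * beta * np.sum((w - w_accel) ** 2)
    return task_loss + reg
```

Its gradient contribution, $\beta (w - w_{\text{accel}})$, acts as the local correction term added to each SGD step.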
\section{Evaluation on Low Participation Rate with a Large Number of Clients} The low
participation rate and the large number of clients are both factors that hinder the convergence of federated learning. To demonstrate robustness to client heterogeneity and participation rates, we perform experiments with both factors present, that is, when the total number of clients is 500 and the participation rate is 1\%. Local epochs and local iterations are set to 5 and 50, respectively. Note that this setting leaves each client with a very small number of training examples and increases client heterogeneity significantly. \cref{tab:abl_500cl_1partrate} again shows that FedAGM achieves the best performance on all metrics. Note that the performance gap between FedAGM and its strongest competitor, FedDyn, becomes larger than when the participation rate is 2\%: from 1.37\%p to 1.68\%p on CIFAR-10 and from 3.39\%p to 4.76\%p on CIFAR-100 at round 1000. \begin{table*}[h] \centering \caption{Comparison of federated learning methods on CIFAR-10 and CIFAR-100 with a large number of clients (500): Dirichlet (0.3) split with low client participation (1\%). Accuracy at the target round and the communication round to reach the target test accuracy are based on a running exponential moving average with parameter $0.9$. The arrows indicate whether higher ($\uparrow$) or lower ($\downarrow$) is better.
The best performance in each column is denoted by {\bf bold}.} \label{tab:abl_500cl_1partrate} \scalebox{0.9}{ \begin{tabular}{lcccccccc} \multirow{3}{*}{Method} & \multicolumn{4}{c}{CIFAR-10} & \multicolumn{4}{c}{CIFAR-100} \\ \cline{2-9} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$)} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} \\ & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{1000R} & 64\% & 68\% & 500R & \multicolumn{1}{c}{1000R} & 30\% & 35\% \\ \cline{1-9} FedAvg~\cite{mcmahan2017communication} & 54.71 & 68.96 & 792 & 949 & 26.94 & 35.69 & 636 & 950 \\ FedProx~\cite{li2018federated} & 55.18 & 69.80 & 773 & 919 & 26.92 & 35.41 & 648 & 963 \\ FedAvgm~\cite{hsu2019measuring} & 57.82 & 71.12 & 669 & 812 & 29.29 & 39.36 & 530 & 755 \\ FedAdam~\cite{reddi2021adaptive} & 47.97 & 55.11 & 1000+ & 1000+ & 17.72 & 23.92 & 1000+ & 1000+ \\ FedDyn~\cite{acar2020federated} & 58.28 & 74.77 & 621 & 752 & 29.68 & 40.42 & 512 & 703 \\ FedCM~\cite{xu2021fedcm} & 49.21 & 60.38 & 1000+ & 1000+ & 16.32 & 22.59 & 1000+ & 1000+ \\ FedAGM (ours) & {\bf63.70} & {\bf76.45} & {\bf509} & {\bf618} & {\bf 31.74} & {\bf45.18} & {\bf458} & {\bf581} \\ \cline{1-9} \end{tabular}} \end{table*} \section{Effect of More Local Iterations} Increasing the number of local iterations in non-iid environments tends to cause more divergence across client models and to degrade the performance of an algorithm. We evaluate the accuracy and communication efficiency of the proposed method under aggravated client heterogeneity by varying the number of local iterations, \textit{i.e.}, $K \in \{50, 70, 100\}$, on CIFAR-100. In these experiments, the number of clients, the participation rate, and the Dirichlet parameter are set to 100, 5\%, and $0.3$, respectively. \cref{tab:abl_local_iterations}~shows that FedAGM consistently outperforms the compared methods in all cases in terms of accuracy and communication efficiency.
\begin{table*}[h] \centering \caption{Comparison of federated learning methods with three different local iterations on CIFAR-100: Dirichlet (0.3) split on 5\% client participation out of 100 clients. Accuracy at the target round and the communication round to reach target test accuracy are based on running exponential moving average with parameter $0.9$. The arrows indicate whether higher ($\uparrow$) or lower ($\downarrow$) is better. The best performance in each column is denoted in {\bf bold}.} \label{tab:abl_local_iterations} \scalebox{0.9}{ \begin{tabular}{lcccccccccccc} Local iterations & \multicolumn{4}{c}{$K$ = 50} & \multicolumn{4}{c}{$K$ = 70} & \multicolumn{4}{c}{$K$ = 100} \\ \cline{2-13} \multirow{2}{*}{Methods} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} &\multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$)} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} \\ & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{1000R} & 47\% & 53\% & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{1000R} & 47\% & 53\% & 500R & \multicolumn{1}{c}{1000R} & 47\% & 53\% \\ \cline{1-13} FedAvg~\cite{mcmahan2017communication} & 41.88 & 47.83 & 924 & 1000+ & 42.45 & 48.29 & 852 & 1000+ & 41.92 & 48.16 & 896 & 1000+ \\ FedProx~\cite{li2018federated} & 42.43 & 48.32 & 881 & 1000+ & 43.31 & 49.54 & 752 & 1000+ & 42.01 & 48.17 & 888 & 1000+ \\ FedAvgm~\cite{hsu2019measuring} & 46.98 & 53.29 & 515 & 936 & 46.17 &52.34 & 544 & 1000+ & 45.72 & 52.74 & 578 & 1000+ \\ FedAdam~\cite{reddi2021adaptive} & 44.80 & 52.48 & 691 & 1000+ & 43.76 & 49.19& 756 & 1000+ & 43.00 & 47.51 & 994 & 1000+ \\ FedDyn~\cite{acar2020federated} & 52.66 & 58.46 & 293 & 504 & 52.85 & 59.98 &326 &509 & 51.98 & 59.13 &347 & 532 \\ FedCM~\cite{xu2021fedcm} & 52.44 & 58.06 & 293 & 572 & 48.30 & 54.89 & 467 & 812 & 46.90 & 54.20 & 502 & 893 \\ FedAGM (ours) & {\bf55.79} & 
{\bf62.51} & {\bf260} & {\bf389} & \bf{54.23} & \bf{61.23} & \bf{295} & \bf{459} & \bf{54.71} & \bf{63.12} & \bf{284} & \bf{461} \\ \cline{1-13} \end{tabular}} \end{table*} \section{Evaluation on Tiny-ImageNet} We found a small mistake in our implementation for the experiments on the Tiny-ImageNet dataset and report the corrected numbers in \cref{tab:abl_tiny_imagenet}. Also, for consistency with the results on the other datasets, we present the results at rounds 500 and 1000. To validate the performance of the proposed method on a more realistic dataset, we compare FedAGM with the baselines on Tiny-ImageNet in two federated settings: one with 100 total clients and 5\% client participation, and the other with 500 clients and 2\% client participation.
In both settings, local epochs and local iterations are set to 5 and 50, respectively. \cref{tab:abl_tiny_imagenet}~shows that FedAGM achieves state-of-the-art performance in all cases. This validates that FedAGM not only works well on small datasets but also performs well on more realistic large-scale datasets. \cref{fig:tiny_curve} also shows that FedAGM consistently outperforms the compared methods at every communication round. Note that FedAGM converges faster than the other algorithms, especially in the early stage of training. \begin{table*}[h] \centering \caption{Comparison of federated learning methods on Tiny-ImageNet in two different settings. Accuracy at the target round and the communication round to reach the target test accuracy are based on a running exponential moving average with parameter $0.9$. The arrows indicate whether higher ($\uparrow$) or lower ($\downarrow$) is better. The best performance in each column is denoted by {\bf bold}. Note that Tab. 1 and 2 in the main paper report incorrect results for FedAGM on Tiny-ImageNet due to an implementation mistake; the corrected results are reported here, denoted by an asterisk (*).
We apologize for the confusion.} \label{tab:abl_tiny_imagenet} \scalebox{0.9}{ \begin{tabular}{lcccccccc} \multirow{3}{*}{Method} & \multicolumn{4}{c}{5$\%$,100 clients} & \multicolumn{4}{c}{2$\%$,500 clients } \\ \cline{2-9} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$ )} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} & \multicolumn{2}{c}{accuracy (\%, $\uparrow$)} & \multicolumn{2}{c}{rounds (\#, $\downarrow$)} \\ & \multicolumn{1}{c}{500R} & \multicolumn{1}{c}{1000R} & 35\% & 38\% & 500R & \multicolumn{1}{c}{1000R} & 24\% & 30\% \\ \cline{1-9} FedAvg~\cite{mcmahan2017communication} & 33.94 & 35.42 & 645 & 1000+&23.63 & 29.48 & 523 & 1000+ \\ FedProx~\cite{li2018federated} & 34.14 & 35.53 & 613 & 1000+ &25.45 & 31.71 & 445 & 799 \\ FedAvgm~\cite{hsu2019measuring} & 36.32 & 38.51 & 416 & 829 & 26.75 & 33.26 & 386 & 687 \\ FedAdam~\cite{reddi2021adaptive} & 33.22 & 38.91 & 658& 945 & 21.88 & 28.08 & 648 & 1000+ \\ FedDyn~\cite{acar2020federated} & 38.86 & 42.82 & 327 & 440 & 27.74 & 37.25 & 387 & 580 \\ FedCM~\cite{xu2021fedcm} & 31.61 & 37.87 & 694 & 1000+ & 19.41 & 24.09 & 975 & 1000+ \\ FedAGM* (ours) & \bf{42.26} & \bf{46.31} & \bf{226} & \bf{331} &\bf{31.47} & \bf{38.48} & \bf{246} & \bf{447} \\ \cline{1-9} \end{tabular}} \end{table*} \begin{figure*}[h] \centering \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[width=1\linewidth]{figures/tiny_mid_100.pdf} \caption{5\% participation, 100 clients} \end{subfigure} \hspace{0.05cm} \begin{subfigure}[b]{0.4\linewidth} \centering \includegraphics[width=1\linewidth]{figures/tiny_low_500.pdf} \caption{2\% participation, 500 clients} \end{subfigure} \caption{ The convergence plot of FedAGM and the compared methods on Tiny-ImageNet with the different levels of client heterogeneity. 
} \label{fig:tiny_curve} \end{figure*} \section{Sensitivity to $\lambda$} $\lambda$ controls how much the server projects
the current model toward the global gradient when initializing the local update. To further analyze the effect of $\lambda$ on the performance of the FedAGM algorithm, we evaluate the accuracy of the proposed method under different levels of client heterogeneity by varying $\lambda$ over \{0.75, 0.8, 0.85, 0.9, 0.95\} on CIFAR-10. \cref{tab:lamb_ablation} shows that the proposed algorithm outperforms most baselines regardless of $\lambda$, and that its results are stable with respect to the choice of $\lambda$ across different levels of client heterogeneity. \begin{table}[h] \centering \caption{Sensitivity to $\lambda$ of FedAGM in terms of accuracy [\%] at round 1000 on CIFAR-10. Three different levels of client heterogeneity are tested, and the number of clients and the participation rate are set to 500 and 2\%, respectively.} \label{tab:lamb_ablation} \scalebox{0.9}{ \begin{tabular}{cccccc} $\lambda$ & 0.75 & 0.8 & 0.85 & 0.9 & 0.95 \\ \hline Dirichlet (0.3) & 81.32 & 82.52 & 82.80 & 81.82 & 78.25 \\ Dirichlet (0.6) & 83.97 & 84.89 & 85.28 & 84.56 & 82.07 \\ IID & 85.52 & 86.92 & 86.83 & 87.08 & 84.37 \\ \hline \end{tabular}} \end{table} \section{Hyperparameter Setting} Our hyperparameter setting is as follows. For the experiments on CIFAR-10 and CIFAR-100, we choose 5 local training epochs (50 iterations) and a local learning rate of 0.1. We set the batch size of the local update to 50 and 10 for the 100-client and 500-client settings, respectively. The learning rate decay parameter of each algorithm is selected from \{0.995, 0.998, 1\} to achieve the best performance. The global learning rate is set to 1, except for FedAdam, which is tested with 0.01. For the experiments on Tiny-ImageNet, we match the total number of local iterations with the other benchmarks by setting the batch size of the local update to 100 and 20 for the 100-client and 500-client settings, respectively.
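The batch sizes above are chosen to equalize the number of local iterations per round across settings. A quick sanity check of this arithmetic, assuming an even split of the standard training sets (50,000 images for CIFAR-10/100 and 100,000 for Tiny-ImageNet; the dataset sizes are our assumption, not stated in the text):

```python
def local_iterations(train_size, num_clients, batch_size, local_epochs=5):
    """Total local iterations per round when the data is split evenly across clients."""
    per_client = train_size // num_clients          # examples held by one client
    return local_epochs * per_client // batch_size  # epochs x (batches per epoch)

# CIFAR: 100 clients with batch 50, 500 clients with batch 10
assert local_iterations(50_000, 100, 50) == 50
assert local_iterations(50_000, 500, 10) == 50
# Tiny-ImageNet: 100 clients with batch 100, 500 clients with batch 20
assert local_iterations(100_000, 100, 100) == 50
assert local_iterations(100_000, 500, 20) == 50
```

Every configuration thus runs the same 50 local iterations (5 epochs) per round, so the comparisons across client counts and datasets hold the local computation budget fixed.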
As for algorithm-dependent hyperparameters, $\alpha$ in FedCM is selected from \{0.1, 0.3, 0.5\}, $\alpha$ in FedDyn is selected from \{0.001, 0.01, 0.1\}, and $\alpha$ in FedAGM is selected from \{0.9, 1.0\}. $\tau$ in FedAdam is set to 0.001. $\beta$ in FedAvgM is selected from \{0.4, 0.6, 0.8\}, and $\beta$ in FedProx and FedAGM is selected from \{0.1, 0.01, 0.001\}. For FedAGM, $\lambda$ is selected from \{0.8, 0.85, 0.9\} and $\tau$ is selected from \{0.2, 1.0\}. \paragraph{Reproducibility statement} More details on the implementation and hyperparameters are provided with the demo code in the supplementary material, and we will release the source code to facilitate reproduction of our results. {\small \bibliographystyle{ieee_fullname}
Abstract: In my PhD thesis I studied cooperative phenomena arising in complex systems using the methods of statistical and computational physics. The aim of my work was also to study the critical behaviour of interacting many-body systems during their phase transitions and to describe their universal features analytically and by means of numerical calculations. In order to do so, I completed studies in four different subjects. The first subject was a study of non-equilibrium phase transitions in weighted scale-free networks. The second problem I examined was the ferromagnetic random bond Potts model with large values of q on evolving scale-free networks, a problem equivalent to an optimal cooperation problem. The third problem was also related to the large-q state random bond Potts model: I examined the critical density of clusters touching a border of a perpendicular strip-like geometry, which is expected to follow analytical forms deduced from conformal invariance. The last subject was a study of the non-equilibrium dynamical behaviour of the antiferromagnetic Ising model on the two-dimensional triangular lattice at zero temperature, in the absence of an external field, and at the Kosterlitz-Thouless phase transition point.
\section{Introduction} In \cite{lodha2016nonamenable}, Lodha and Moore introduced the group $G_0$ consisting of piecewise projective homeomorphisms of the real projective line. This group is a finitely presented torsion-free counterexample to the von Neumann-Day problem \cite{Neumann1929, day1950means}, which asks whether every nonamenable group contains nonabelian free subgroups. Although counterexamples to the von Neumann-Day problem are known \cite{MR586204, adian1979burnside, MR682486, ol2003non, ivanov2005embedding, monod2013groups}, it is still an open question whether the Thompson group $F$ is a new counterexample. The Lodha--Moore group $G_0$ has similar properties to the Thompson group $F$. Indeed, the Lodha--Moore group can also be defined as a finitely generated group consisting of homeomorphisms of the space of infinite binary sequences, whose generating set is obtained by adding an element to the well-known finite generating set of $F$. Both have (small) finite presentations \cite{brown1987finiteness, lodha2016nonamenable}, normal forms with infinite presentations \cite{brown1984infinite, lodha2020nonamenable}, simple commutator subgroups \cite{cannon1996introductory, burillo2018commutators}, trivial homotopy groups at infinity \cite{brown1984infinite, zaremsky2016hnn}, no nonabelian free subgroups \cite{brin1985groups, lodha2016nonamenable}, and are of type $F_\infty$ \cite{brown1984infinite, lodha2020nonamenable}. On the other hand, there exist various generalizations of the Thompson group $F$. One of the most natural ones is the $n$-adic Thompson group $F(n)$, which is obtained by replacing infinite binary sequences with infinite $n$-ary sequences. It remains an active topic of study whether properties of $F$ also hold for these generalizations and what interesting new properties arise under the generalization. In this paper, we generalize the Lodha--Moore group similarly and study its properties.
Namely, we define the $n$-adic analogue $G_0(n)$ of the Lodha--Moore group $G_0$ and show that several properties which hold for $G_0$ also hold for $G_0(n)$. We remark that $G_0(2)$ is isomorphic to $G_0$. Let $n, m \geq 2$. We show the following: \begin{theorem} \begin{enumerate}[font=\normalfont] \item The group $G_0(n)$ admits an infinite presentation with a normal form of elements. \item The group $G_0(n)$ is finitely presented. \item The group $G_0(n)$ is nonamenable. \item The group $G_0(n)$ has no free subgroups. \item The group $G_0(n)$ is torsion-free. \item The groups $G_0(n)$ and $G_0(m)$ are isomorphic if and only if $n=m$ holds. \item The commutator subgroup of the group $G_0(n)$ is simple. \item The center of the group $G_0(n)$ is trivial. \item There does not exist any nontrivial direct product decomposition of the group $G_0(n)$. \item There does not exist any nontrivial free product decomposition of the group $G_0(n)$. \end{enumerate} \end{theorem} This paper is organized as follows. In Section \ref{section_F(n)}, we generalize Dehornoy's infinite presentation of $F$ to $F(n)$, which will be used to construct that of $G_0(n)$. To the best of the author's knowledge, this is a new presentation of $F(n)$. In Section \ref{section_G0(n)}, we first recall the definition of the group $G_0$ and define the group $G_0(n)$. Then, by using the infinite presentation of $F(n)$ constructed in Section \ref{section_F(n)}, we generalize Lodha's method to obtain that of $G_0(n)$ and its normal form. Finally, in Section \ref{section_G0(n)_properties}, we study several properties of $G_0(n)$. Let us mention some open problems; the corresponding statements are known to hold for $G_0$. First, it is an interesting question whether $G_0(n)$ can be realized as a subgroup of the group of piecewise projective homeomorphisms of the real projective line. The second problem is whether this group is of type $F_{\infty}$ and whether all of its homotopy groups at infinity are trivial.
If it has these two properties, then ${G_0(n)}$ is an example of an (infinite) family of groups satisfying all of Geoghegan's conjectures for the Thompson group $F$. Furthermore, we can consider some groups related to $G_0(n)$. The first one is constructed by using another definition of the map $y$ defined in Section \ref{subsection_def_G_0(n)}. Although we define the map so that $G_0$ is naturally a subgroup of $G_0(n)$, we can consider several different generalizations. We can also construct groups that contain $G_0(n)$. In \cite{lodha2016nonamenable}, the group $G$ is defined, where $G$ contains $G_0$ as a subgroup. For our group, by adding some of the generators $y_0$, $y_{(n-1)1}$, $\dots$, $y_{(n-1)(n-2)}$, $y_{(n-1)}$, we can define not only the group $G(n)$, which corresponds to $G$, but also the groups ``between'' $G_0(n)$ and $G(n)$. \section{The generalized Thompson group $F(n)$}\label{section_F(n)} \subsection{Definition} \label{subsection_F(n)_definition} Let $n \geq 2$. There exist several ways to define the generalized Thompson group $F(n)$. In this paper, we define it as a group of homeomorphisms on the $n$-adic Cantor set. We use tree diagrams to represent elements of the group visually. We define $\bm{N}$ to be the set $\{0, 1, \dots, n-1 \}$. We endow $\bm{N}$ with the discrete topology and endow $\N^\mathbb{N}$ with the product topology. Note that $\N^\mathbb{N}$ and the Cantor set are homeomorphic. We also consider the set of all finite sequences on $\bm{N}$ and write $\N^{<\mathbb{N}}$ for it. For $s \in \N^{<\mathbb{N}}$ and $t \in \N^{<\mathbb{N}}$ (or $\N^\mathbb{N}$), the concatenation is denoted by $st$.
The group $F(n)$ is a finitely generated group that is generated by the following $n$ homeomorphisms on $\N^\mathbb{N}$: \begin{align*} x_0(\zeta)&= \begin{cases} 0\eta & ( \zeta=00\eta )\\ 1\eta & ( \zeta=01\eta ) \\ &\vdots \\ (n-2)\eta & (\zeta=0(n-2)\eta) \\ (n-1)0\eta & (\zeta=0(n-1)\eta) \\ (n-1)1\eta & (\zeta=1\eta) \\ &\vdots \\ (n-1)(n-1)\eta & (\zeta=(n-1)\eta), \end{cases} \\ x_1(\zeta)&= \begin{cases} 0\eta & (\zeta=0\eta) \\ 1\eta & (\zeta=10\eta) \\ 2\eta & (\zeta=11\eta) \\ &\vdots \\ (n-2)\eta & (\zeta=1(n-3)\eta) \\ (n-1)0\eta & (\zeta=1(n-2)\eta) \\ (n-1)1\eta & (\zeta=1(n-1)\eta) \\ (n-1)2\eta & (\zeta=2\eta) \\ &\vdots \\ (n-1)(n-1)\eta & (\zeta=(n-1)\eta), \end{cases} \\ &\vdots \\ x_{n-2}(\zeta)&= \begin{cases} 0\eta & (\zeta=0\eta) \\ &\vdots \\ (n-3)\eta & (\zeta=(n-3)\eta) \\ (n-2)\eta & (\zeta=(n-2)0\eta) \\ (n-1)0\eta & (\zeta=(n-2)1\eta) \\ &\vdots \\ (n-1)(n-2)\eta & (\zeta=(n-2)(n-1)\eta) \\ (n-1)(n-1)\eta & (\zeta=(n-1)\eta), \end{cases} \shortintertext{and} \x{0}{(n-1)}(\zeta)&= \begin{cases} (n-1)x_0(\eta) & (\zeta=(n-1)\eta) \\ \zeta & (\zeta \neq (n-1)\eta). \end{cases} \end{align*} These maps are represented by tree diagrams as in Figure \ref{generator_Fn}. \begin{figure}[tbp] \centering \includegraphics[width=150mm]{generator_of_Fn.pdf} \caption{Tree diagrams of the homeomorphisms. } \label{generator_Fn} \end{figure} Here, we briefly review the definition of tree diagrams. See \cite{burillo2001metrics} for details. An \textit{$n$-ary tree} is a finite tree with a top vertex (\textit{root}) with $n$ edges, and all vertices except the root have degree only $1$ (\textit{leaves}) or $n+1$. We define a \textit{caret} to be an $n$-ary tree with no vertices whose degree is $n+1$ (see Figure \ref{n-caret}). \begin{figure}[tbp] \centering \includegraphics[width=30mm]{n-caret.pdf} \caption{A caret. } \label{n-caret} \end{figure} Then, each $n$-ary tree is obtained by attaching carets to a leaf of a caret. 
We always assume that the root is the top and the others are descendants. Each $n$-ary tree can be regarded as a finite subset of $\N^{<\mathbb{N}}$. To do this, we label each edge of each caret by $0, 1, \dots, n-1$ from the left. Since every leaf corresponds to a unique path from the root to the leaf, we can regard it as an element in $\N^{<\mathbb{N}}$. Let $T_+$ and $T_-$ be $n$-ary trees with $m$ leaves. Let $a_1, \dots, a_m$ be elements in $\N^{<\mathbb{N}}$ with lexicographic order corresponding to the leaves of $T_+$. For $T_-$, define $b_1, \dots, b_m$ in the same way. Then, for every $\zeta \in \N^\mathbb{N}$, there exists a unique $i$ such that $\zeta=a_i\eta$ for some $\eta \in \N^\mathbb{N}$. Thus we obtain a homeomorphism $a_i \eta \mapsto b_i \eta$. It is known that every homeomorphism obtained in this way from two $n$-ary trees with the same number of leaves can be written as a composition of $x_0, x_1, \dots, x_{n-2}, \x{0}{(n-1)}$. See \cite[Corollary 10.9]{meier2008groups} for the case $n=2$. We define $\epsilon$ to be the empty word. Let $i$ be in $\{0, \dots, n-2 \}$ and $\alpha$ be in $\N^{<\mathbb{N}} \cup \{\epsilon \}$. We define the map $\x{i}{\alpha}: \N^\mathbb{N} \to \N^\mathbb{N}$ by \begin{align*} \x{i}{\alpha}(\zeta)&= \begin{cases} \alpha x_i(\eta) & (\zeta=\alpha \eta) \\ \zeta & (\zeta \neq \alpha \eta), \end{cases} \end{align*} and we define \begin{align*} X(n)&:= \left\{ \x{i}{\alpha} \mid i = 0, \dots, n-2, \alpha \in \N^{<\mathbb{N}} \cup \{ \epsilon \} \right\}. \end{align*} This set contains the well-known infinite generating set of $F(n)$, which we describe below, and is the key generating set in Section \ref{subsubsection_New_presentation_F(n)}. Let $X^\prime(n):=\{ \x{0}{s}, \dots, \x{n-2}{s} \mid s=\epsilon, (n-1), (n-1)(n-1), \dots \}$. We denote each element as $x_0=X_0, \dots, x_{n-2}=X_{n-2}, \x{0}{(n-1)}=X_{n-1}, \x{1}{(n-1)}=X_{n}, \dots$ only in this section for the sake of simplicity.
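The prefix-substitution rules defining $x_0, \dots, x_{n-2}$ admit a uniform description: $x_i$ fixes sequences starting with a digit $j<i$, sends $id\eta$ to $(i+d)\eta$ when $i+d\le n-2$ and to $(n-1)\,(d-(n-1-i))\,\eta$ otherwise, and sends $j\eta$ to $(n-1)j\eta$ for $j>i$. The following sketch is our own illustration of this action on finite prefixes (long enough to resolve the rules), with words acting on the right, i.e., applied left to right; it also lets one check the relation $X_i^{-1}X_jX_i=X_{j+n-1}$ for $i<j$ in the equivalent form $X_jX_i = X_iX_{j+n-1}$.

```python
def x(i, n, seq):
    """Apply the generator x_i of F(n) to a (sufficiently long) list of digits."""
    j = seq[0]
    if j < i:
        return list(seq)                          # fixed on subtrees left of i
    if j > i:
        return [n - 1] + list(seq)                # j eta -> (n-1) j eta
    d = seq[1]
    if i + d <= n - 2:
        return [i + d] + list(seq[2:])            # i d eta -> (i+d) eta
    return [n - 1, d - (n - 1 - i)] + list(seq[2:])

def x_at(i, prefix, n, seq):
    """x_{i,alpha}: apply x_i below the prefix alpha, identity elsewhere."""
    k = len(prefix)
    if list(seq[:k]) == list(prefix):
        return list(prefix) + x(i, n, seq[k:])
    return list(seq)

def X(k, n, seq):
    """X_0=x_0, ..., X_{n-2}=x_{n-2}, X_{n-1}=x_{0,(n-1)}, X_n=x_{1,(n-1)}, ..."""
    q, r = divmod(k, n - 1)
    return x_at(r, [n - 1] * q, n, seq)

# definition check for n=3: x_0 sends 02.eta to 20.eta
print(x(0, 3, [0, 2, 1, 0]))  # -> [2, 0, 1, 0]

# relation check (right actions, applied left to right): X_j X_i = X_i X_{j+n-1}
s = [2, 1, 0, 1, 0, 0, 0, 0]
print(X(0, 3, X(1, 3, s)) == X(1 + 2, 3, X(0, 3, s)))  # -> True
```

Exhaustively checking this identity over all length-8 ternary prefixes for several pairs $(i,j)$ gives an elementary consistency check of the case-by-case definitions above.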
For $i<j$, we have $X_i^{-1}X_jX_i=X_{j+n-1}$. This implies that every element in $F(n)$ has the following form: \begin{align*} \X{i_1}{r_1}\X{i_2}{r_2}\cdots \X{i_m}{r_m}\X{j_k}{-s_k}\cdots\X{j_2}{-s_2}\X{j_1}{-s_1} \end{align*} where $i_1<i_2<\cdots<i_m\neq j_k>\cdots>j_2>j_1$ and $r_1, \dots, r_m, s_1, \dots, s_k>0$. We require that this form satisfies the following additional condition: if there exist $X_i$ and $X_i^{-1}$, then there also exists one of \begin{align*} X_{i+1}, \X{i+1}{-1}, X_{i+2}, \X{i+2}{-1}, \dots, X_{i+n-1}, \X{i+n-1}{-1}. \end{align*} It is known that this form with the additional condition always exists and is unique. Thus we call this form the \textit{normal form} of an element of $F(n)$. The proof for the case $n=2$ is in \cite[Section 1]{brown1984infinite}. \subsection{A presentation of the generalized Thompson group} While the group $F(n)$ has the well-known infinite and finite presentations, we construct another presentation with respect to $X(n)$ for Section \ref{infinite_presentation}. We list the relations of the elements in $F(n)$ as follows. Here, the $A_i$ are defined at the beginning of Section \ref{subsubsection_New_presentation_F(n)}, and the shift-map is defined in Definition \ref{Def_shiftmap}. \begin{table}[H] {\small \begin{tabular}{ccccc} $A_0A_1=\A{1}{(n-1)}A_0$, & $A_1A_2=\A{2}{(n-1)}A_1$, & $\dots$, & $A_{n-2}\A{0}{(n-1)}=\A{0}{(n-1)^2}A_{n-2}$, & $\dots$ \\ $A_0A_2=\A{2}{(n-1)}A_0$, & $A_1A_3=\A{3}{(n-1)}A_1$, & $\dots$, & $A_{n-3}\A{0}{(n-1)}=\A{0}{(n-1)^2}A_{n-3}$, & $\dots$ \\ $A_0A_3=\A{3}{(n-1)}A_0$, & $\dots$ \\ $\vdots$ & $\ddots$ \\ $A_0A_{n-2}=\A{n-2}{(n-1)}A_0$, \\ $A_0\A{0}{(n-1)}=\A{0}{(n-1)^2}A_0$, \\ $A_0\A{1}{(n-1)}=\A{1}{(n-1)^2}A_0$, \\ $\vdots$ \end{tabular} } \centering \caption{Moving to a right column corresponds to the shift-map.
} \label{relations_1} \end{table} \begin{table}[H] {\small \begin{tabular}{cccc} $A_0A_0=\A{0}{(n-1)}A_0\A{0}{0}$, & $A_1A_0=\A{0}{(n-1)}A_0\A{1}{0}$, & $\dots$, & $A_{n-2}A_0=\A{0}{(n-1)}A_0\A{n-2}{0}$, \\ $A_1A_1=\A{1}{(n-1)}A_1\A{0}{1}$, \\ $\vdots$ & $\ddots$ \\ \end{tabular} } \centering \caption{Moving to a lower row corresponds to the shift-map. } \label{relations_2} \end{table} \begin{table}[H] {\small \begin{tabular}{ccc} $\A{k}{0\alpha}A_1=A_1\A{k}{0\alpha}$, & $\A{k}{1\alpha}A_2=A_2\A{k}{1\alpha}$, & $\dots$ \\ $\A{k}{0\alpha}A_2=A_2\A{k}{0\alpha}$, & $\A{k}{1\alpha}A_3=A_3\A{k}{1\alpha}$, & $\dots$ \\ $\A{k}{0\alpha}A_3=A_3\A{k}{0\alpha}$, \\ $\vdots$ & $\ddots$ \end{tabular} } \centering \caption{For each $\alpha$ in $\N^{<\mathbb{N}} \cup \{\epsilon \}$ and $k$ in $\{0, \dots, n-2\}$, moving to a right column corresponds to the shift-map. } \label{relations_3} \end{table} \begin{table}[H] {\small \begin{tabular}{ccc} $\A{k}{0\alpha}A_0=A_0\A{k}{00\alpha}$, & $\A{k}{1\alpha}A_1=A_1\A{k}{10\alpha}$, & $\dots$ \\ $\A{k}{1\alpha}A_0=A_0\A{k}{01\alpha}$, & $\A{k}{2\alpha}A_1=A_1\A{k}{11\alpha}$, & $\dots$ \\ $\A{k}{2\alpha}A_0=A_0\A{k}{02\alpha}$, \\ $\vdots$ & $\ddots$ \\ $\A{k}{(n-2)\alpha}A_0=A_0\A{k}{0(n-2)\alpha}$, \\ $\A{k}{(n-1)0\alpha}A_0=A_0\A{k}{0(n-1)\alpha}$, \end{tabular} } \centering \caption{For each $\alpha$ in $\N^{<\mathbb{N}} \cup \{\epsilon \}$ and $k$ in $\{0, \dots, n-2\}$, moving to a right column corresponds to the shift-map. } \label{relations_4} \end{table} \subsubsection{Dehornoy's results} For the ``geometric'' presentation of $F(n)$, we generalize Dehornoy's method in \cite{dehornoy2005geometric}. Section 1 of \cite{dehornoy2005geometric} provides a method for finding a presentation of a group, so we recall the general setting and this method. \begin{definition}\cite[Section 1.1]{dehornoy2005geometric} Let $G$ be a group (monoid) and let $T$ be a set. 
We define a \textit{partial} (\textit{right}) \textit{action} to be a map $\phi$ from $G$ to the set of injections $\{f: T^\prime \to T \mid T^\prime \subset T \}$ such that the following are satisfied (in the following, we write $t \cdot g$ for the image of $t$ under $\phi(g)$ if it is defined): \begin{itemize} \item[$({PA}_1)$] For every $t \in T$, $t \cdot e=t$ holds; \item[$({PA}_2)$] For every $g, h \in G$, and $t \in T$, if $t \cdot g$ is defined, then $(t\cdot g)\cdot h$ is defined if and only if $t\cdot gh$ is defined, and if one of them is defined, we have $(t\cdot g)\cdot h=t \cdot gh$; \item[$({PA}_3)$] For every finite family $g_1, \dots, g_m$, there exists $t \in T$ such that $t \cdot g_1, \dots, t \cdot g_m$ are defined. \end{itemize} \end{definition} Dehornoy also introduced a stronger condition on partial actions. \begin{definition}\cite[Section 1.3]{dehornoy2005geometric} We assume that $G$ has a partial action on $T$. Then we call a subset $S \subset T$ \textit{discriminating} if: \begin{enumerate} \item In (${PA}_3$), we can take $t$ in the set $S \cdot G=\{s \cdot g \mid s \in S, \mbox{$g \in G$ such that $s \cdot g$ is defined} \}$; \item Each $G$-orbit contains at most one element of $S$; \item For every $s \in S$, its stabilizer is trivial. \end{enumerate} \end{definition} \begin{remark} In the above setting, for every $t \in S \cdot G$, there exist a unique $s \in S$ and a unique $g \in G$ such that $t=s \cdot g$. \end{remark} When $R$ is a family of relations of a group, we write $w\equiv_R z$ if one can rewrite $w$ to $z$ by using elements in $R$. The following theorem holds. \begin{theorem}[{\cite[Proposition 1.4]{dehornoy2005geometric}}] \label{dehornoy_presentation} Let $G$ be a group with a partial action on a set $T$. Let $X$ be a subset of $G$ and let $R$ be a collection of relations satisfied in $G$ by the elements of $X$. 
Assume that $S$ is a discriminating subset of $T$ and that, for each $s$ in $S$ and $t$ in the $G$-orbit of $s$, a word $w_t$ on $X$ is chosen so that $t=s \cdot w_t$ holds. Then $\langle X \mid R \rangle$ is a presentation of $G$ if and only if for all $t, t^\prime$ in $S \cdot G$ and $x$ in $X$, \begin{align} \label{condition} \mbox{$t^\prime=t \cdot x$ implies $w_{t^\prime} \equiv_R w_t \times x$}, \end{align} where $\times$ denotes the concatenation of words in $X$. \end{theorem} \subsubsection{A presentation of $F(n)$}\label{subsubsection_New_presentation_F(n)} In this section, we construct a partial action of $F(n)$ and give a presentation by using Theorem \ref{dehornoy_presentation}. Except for the construction of $w_t$, the discussions are almost the same as in the case $n=2$. Let $T(n)$ be the set consisting of all $n$-ary trees and the root (the graph with a single vertex). We first define partial actions of the $n-1$ elements $A_0, \dots, A_{n-2}$ on $T(n)$ as illustrated in Figure \ref{partial_action}. \begin{figure}[tbp] \centering \includegraphics[width=160mm]{partial_action.pdf} \caption{Partial actions of $A_1, \dots, A_{n-2}$. } \label{partial_action} \end{figure} We note that the actions of $A_i$ ($i=0, \dots, n-2$) on a single caret are not defined. Let $\alpha$ be in $\N^{<\mathbb{N}}$. Then, for $n$-ary trees which contain $\alpha$ as a subpath, we define a partial action of $\A{i}{\alpha}$ to be the partial mapping obtained by applying $A_i$ to the subtree of the element of $T(n)$ positioned just below $\alpha$ (if this is defined). See Figure \ref{monoid_def} and observe that the bottom caret is the only one moved by the action. \begin{figure}[tbp] \centering \includegraphics[width=60mm]{monoid_def_color.pdf} \caption{Example of the action of $\A{1}{2}$ for $n=4$. 
} \label{monoid_def} \end{figure} \begin{definition} We define $\mathscr{G}_n(\mathscr{A})$ to be the monoid generated by $X(n):=\{\A{i}{\alpha} \mid i \in \{0, \dots, n-2\}, \alpha \in \N^{<\mathbb{N}} \cup \{ \epsilon \} \}$ and their inverses, where $\epsilon$ is the empty word and $\A{i}{\epsilon}:=A_i$. \end{definition} From the construction, the monoid $\mathscr{G}_n(\mathscr{A})$ naturally has a partial action on $T(n)$. In order to make $\mathscr{G}_n(\mathscr{A})$ into a group, we introduce a congruence on $\mathscr{G}_n(\mathscr{A})$. \begin{definition} We assume that a group $G$ has a partial action on a set $T$. If there exists $t \in T$ such that $t \cdot g$ and $t \cdot g^\prime$ are both defined, and $t \cdot g = t \cdot g^\prime$ holds for every such $t$, then we say that $g$ and $g^\prime$ are \textit{near-equal} and write $g \approx g^\prime$. \end{definition} As in \cite[Corollary 2.4]{dehornoy2005geometric}, we can show that $\mathscr{G}_n(\mathscr{A})$ and the set of all-right trees $S(n)$ satisfy the assumptions in \cite[Lemma 2.2]{dehornoy2005geometric} (see Figure \ref{T_m} for an example of an all-right tree). Thus, near-equality is a congruence on $\mathscr{G}_n(\mathscr{A})$, and the quotient monoid $\mathscr{G}_n(\mathscr{A}) / {\approx}$ is a group. We write $G_n(\mathscr{A})$ for this group. Moreover, $S(n)$ is discriminating for the induced partial action of $G_n(\mathscr{A})$. We omit the proofs because they are the same as in the case $n=2$, but we recall the induced action for the reader's convenience. \begin{definition} For $t, t^\prime$ in $T(n)$ and $x$ in $G_n(\mathscr{A})$, we define $t \cdot x=t^\prime$ if $t \cdot g =t^\prime$ holds for some representative $g$ in $\mathscr{G}_n(\mathscr{A})$ of $x$ (if such $g$ exists). \end{definition} By the definitions, it is easy to see that $G_n(\mathscr{A})$ and $F(n)$ are isomorphic. For the sake of simplicity, we simply write $g$ for the class of $g$ in $G_n(\mathscr{A})$. 
In order to give a presentation of $G_n(\mathscr{A})$, we first define the ``shift-map'' on $G_n(\mathscr{A})$. \begin{definition}\label{Def_shiftmap} We define the map $[1]$ as follows: \begin{align*} A_0 \overset{[1]}{\mapsto} &A_1 \overset{[1]}{\mapsto} \cdots \overset{[1]}{\mapsto} A_{n-2} \overset{[1]}{\mapsto} \A{0}{(n-1)} \overset{[1]}{\mapsto} \A{1}{(n-1)} \overset{[1]}{\mapsto} \cdots \\ &\overset{[1]}{\mapsto} \A{n-2}{(n-1)} \overset{[1]}{\mapsto} \A{0}{(n-1)(n-1)} \overset{[1]}{\mapsto} \cdots, \end{align*} and for each $k \in \{0, \dots, n-2\}$ and $\alpha \in \N^{<\mathbb{N}} \cup \{ \epsilon \}$, \begin{align*} \A{k}{0\alpha} \overset{[1]}{\mapsto} &\A{k}{1\alpha} \overset{[1]}{\mapsto} \cdots \overset{[1]}{\mapsto} \A{k}{(n-2)\alpha} \overset{[1]}{\mapsto} \A{k}{(n-1)0\alpha} \overset{[1]}{\mapsto} \A{k}{(n-1)1\alpha} \overset{[1]}{\mapsto} \cdots \\ &\overset{[1]}{\mapsto} \A{k}{(n-1)(n-2)\alpha} \overset{[1]}{\mapsto} \A{k}{(n-1)(n-1)0\alpha} \overset{[1]}{\mapsto} \cdots \overset{[1]}{\mapsto} \A{k}{(n-1)(n-1)1\alpha} \overset{[1]}{\mapsto} \cdots. \end{align*} For the empty word $\epsilon$, define $[1](\epsilon)=\epsilon$. Furthermore, for an element $\A{k_1}{\alpha_1} \cdots \A{k_m}{\alpha_m}$ in $G_n(\mathscr{A})$, we define $[1](\A{k_1}{\alpha_1} \dots \A{k_m}{\alpha_m})=[1](\A{k_1}{\alpha_1}) \cdots [1](\A{k_m}{\alpha_m})$. The map $[i]$ denotes the $i$-fold composition of $[1]$. \end{definition} \begin{remark}\label{shift_n-1} By the definition, for each $\A{k}{\alpha}$ (even if $\alpha$ is empty), we have \begin{align*} [n-1](\A{k}{\alpha})=\A{k}{(n-1)\alpha}. \end{align*} This fact is useful for checking the relations. \end{remark} Next, for an $n$-ary tree $t$, we inductively define two words $w_t$ and $w_t^\ast$ as follows: \begin{definition} \begin{enumerate} \item If $t$ is a single vertex, we define \begin{align*} w_t=w_t^\ast=\epsilon. 
\end{align*} \item If $t$ is an $n$-ary tree, as in Figure \ref{inductive_tree}, we define \begin{align*} w_t&=w_{t_0}^\ast \times [1](w_{t_1}^\ast) \times [2](w_{t_2}^\ast) \times \cdots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}), \\ w_t^\ast&=w_{t_0}^\ast \times [1](w_{t_1}^\ast) \times [2](w_{t_2}^\ast) \times \cdots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}^\ast) \times A_0, \end{align*} where $\times$ denotes the concatenation. \end{enumerate} \end{definition} \begin{figure}[tbp] \centering \includegraphics[width=30mm]{inductive_tree.pdf} \caption{An $n$-ary tree $t$ with subtrees (or leaves) $t_0, \dots, t_{n-1}$} \label{inductive_tree} \end{figure} Now, we show that $t=s \cdot w_t$ holds. We note that $s$ is an $n$-ary tree with the same number of carets as $t$. \begin{lemma} \label{lemma1_presentation} Let $t$ be in $T(n)$ with $m$ carets. We define $T_m$ to be an all-right tree with $m$ carets if $m \geq 1$ and a single vertex if $m=0$. Then we have $T_m \cdot w_t=t$, and, for every $t^\prime$ in $T(n)$, the equality in Figure $\ref{presentation_assumption}$ holds. \begin{figure}[tbp] \centering \includegraphics[width=80mm]{presentation_assumption.pdf} \caption{The second claim in Lemma \ref{lemma1_presentation}. } \label{presentation_assumption} \end{figure} \begin{proof} We proceed by induction on $m$. For the base case, since $t$ and $T_m$ are single vertices and $w_t=w_t^\ast=\epsilon$, the equality is clear. Let $t$ be as in Figure \ref{inductive_tree} and let $n_i$ be the number of carets of $t_i$ for $i=0, \dots, n-1$. By the definitions, \begin{align*} w_t=w_{t_0}^\ast \times [1](w_{t_1}^\ast) \times [2](w_{t_2}^\ast) \times \cdots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}) \end{align*} holds and $T_m$ is as in Figure \ref{T_m}. \begin{figure}[tbp] \centering \includegraphics[width=70mm]{n1nn-1carets.pdf} \caption{The $n$-ary tree $T_m$. 
} \label{T_m} \end{figure} By the inductive hypothesis for $w_{t_0}^\ast$, it acts on $T_m$ as in Figure \ref{wt0_action}. \begin{figure}[tbp] \centering \includegraphics[width=125mm]{wt0_action.pdf} \caption{The action of $w_{t_0}^\ast$. } \label{wt0_action} \end{figure} Since $w_t$ and $w_t^\ast$ are words on $\{A_i, \A{i}{(n-1)}, \A{i}{(n-1)(n-1)}, \dots \mid i=0, \dots, n-2 \}$, the shift-map $[1]$ shifts indices by exactly one. Thus, by applying the inductive hypothesis repeatedly, $w_{t_0}^\ast \times [1](w_{t_1}^\ast) \times [2](w_{t_2}^\ast) \times \cdots \times [n-2](w_{t_{n-2}}^\ast)$ acts on $T_m$ as in Figure \ref{wto_wtn-2_action}, and we write $\tilde{t}$ for the resulting tree. \begin{figure}[tbp] \centering \includegraphics[width=160mm]{wto_wtn-2_action.pdf} \caption{The action of $w_{t_0}^\ast \times [1](w_{t_1}^\ast) \times [2](w_{t_2}^\ast) \times \cdots \times [n-2](w_{t_{n-2}}^\ast)$. } \label{wto_wtn-2_action} \end{figure} By Remark \ref{shift_n-1}, the action of $[n-1](w_{t_{n-1}})$ on $\tilde{t}$ is obtained from that of $w_{t_{n-1}}$ on the subtree $T_{n_{n-1}}$ in $\tilde{t}$. By the inductive hypothesis for $w_{t_{n-1}}$, we have $T_{n_{n-1}} \cdot w_{t_{n-1}}=t_{n-1}$. This completes the proof of the first claim, namely that $w_t$ satisfies the desired equality. By an argument similar to that for the action of $w_t$, the action of $w_t^\ast$ is calculated as shown in Figure \ref{wtast}. \begin{figure}[tbp] \centering \includegraphics[width=160mm]{wtast.pdf} \caption{The action of $w_t^\ast$. } \label{wtast} \end{figure} \end{proof} \end{lemma} It remains to prove that the condition \eqref{condition} in Theorem \ref{dehornoy_presentation} holds for a collection of relations. Let $R(n)$ be the set of the elements in Tables \ref{relations_1}, \ref{relations_2}, \ref{relations_3}, and \ref{relations_4}. It is easy to see that all elements in $R(n)$ are relations of $F(n)$. \begin{lemma} Let $t^\prime=t \cdot \A{k}{\alpha}$. 
Then we have \begin{align*} w_{t^\prime} &\equiv_{R(n)} w_t \cdot \A{k}{\alpha}, \\ {w_{t^\prime}}^\ast &\equiv_{R(n)} {w_t}^\ast \cdot \A{k}{0\alpha}. \end{align*} \begin{proof} When we rewrite a word $w$ to $z$ by applying a relation in Table $i$ ($1 \leq i \leq 4$), we denote by $w \equiv_i z$. We show by induction on the length of $\alpha$ as an $n$-ary sequence. For the base case, first, we assume $k=0$. Then we can illustrate trees $t, t^\prime$ as in Figure \ref{t_tprime}. \begin{figure}[tbp] \centering \includegraphics[width=100mm]{t_tprime.pdf} \caption{The $n$-ary trees $t$ (left) and $t^\prime$ (right) for $\alpha=\epsilon$ and $k=0$. } \label{t_tprime} \end{figure} Let $\tilde{t}$ be the subtree of $t^\prime$ consisting of $t_0, \dots, t_{n-1}$ (as in Figure \ref{inductive_tree}). Then we have \begin{align*} w_{t^\prime} &=w_{\tilde{t}}^\ast \times [1](w_{t_n}^\ast) \times \dots \times [n-2](w_{t_{2n-3}}^\ast) \times [n-1](w_{t_{2n-2}}) \\ &=w_{t_0}^\ast \times [1](w_{t_1}^\ast)\times \dots \times [n-1]({w_{t_{n-1}}}^\ast) \times A_0 \\ &\; \times [1](w_{t_n}^\ast) \times \dots \times [n-2](w_{t_{2n-3}}^\ast) \times [n-1](w_{t_{2n-2}}) \\ &\equiv_1 w_{t_0}^\ast \times [1](w_{t_1}^\ast)\times \dots \times [n-1]({w_{t_{n-1}}}^\ast) \\ &\; \times[n-1]\left([1](w_{t_n}^\ast) \times \dots \times [n-2](w_{t_{2n-3}}^\ast) \times [n-1](w_{t_{2n-2}})\right) \times A_0 \\ &= w_{t_0}^\ast \times [1](w_{t_1}^\ast)\times \cdots \\ &\; \times [n-1]\bigl(({w_{t_{n-1}}}^\ast) \times[1](w_{t_n}^\ast) \times \dots \times [n-2](w_{t_{2n-3}}^\ast) \times [n-1](w_{t_{2n-2}})\bigr) \times A_0 \\ &= w_t \cdot A_0, \end{align*} and \begin{align*} w_{t^\prime}^\ast &=w_{t_0}^\ast \times [1](w_{t_1}^\ast)\times \dots \times [n-1]({w_{t_{n-1}}}^\ast) \times A_0 \\ &\; \times [1](w_{t_n}^\ast) \times \dots \times [n-2](w_{t_{2n-3}}^\ast) \times [n-1](w_{t_{2n-2}}^\ast) \times A_0 \\ &\equiv_1 w_{t_0}^\ast \times [1](w_{t_1}^\ast)\times \cdots \\ &\; \times 
[n-1]\bigl(({w_{t_{n-1}}}^\ast) \times[1](w_{t_n}^\ast) \times \dots \times [n-2](w_{t_{2n-3}}^\ast) \times [n-1](w_{t_{2n-2}}^\ast)\bigr) \times A_0A_0 \\ &\equiv_2 w_{t_0}^\ast \times [1](w_{t_1}^\ast)\times \cdots \\ &\; \times [n-1]\bigl(({w_{t_{n-1}}}^\ast) \times[1](w_{t_n}^\ast) \times \dots \times [n-2](w_{t_{2n-3}}^\ast) \times [n-1](w_{t_{2n-2}}^\ast)\bigr) \\ &\; \times \A{0}{(n-1)}A_0\A{0}{0} \\ &=w_{t_0}^\ast \times [1](w_{t_1}^\ast)\times \cdots \\ &\; \times [n-1]\bigl(({w_{t_{n-1}}}^\ast) \times[1](w_{t_n}^\ast) \times \dots \times [n-2](w_{t_{2n-3}}^\ast) \times [n-1](w_{t_{2n-2}}^\ast)\times A_0 \bigr) \\ &\; \times A_0\A{0}{0} \\ &=w_t^\ast \times \A{0}{0}. \end{align*} The case $k\geq1$ can be proved in the same way. Let $\alpha=0\beta$. That is, we consider the case when the condition $t_0^\prime=t_0 \cdot \A{k}{\beta}$ holds for $t$ and $t^\prime$, as shown in Figure \ref{inductive_t_tprime}. \begin{figure}[tbp] \centering \includegraphics[width=80mm]{inductive_t_tprime.pdf} \caption{The $n$-ary trees $t$ (left) and $t^\prime$ (right) for $\alpha=0\beta$. } \label{inductive_t_tprime} \end{figure} When we rewrite a word by applying the inductive hypothesis, we use $\equiv_I$ to denote its equality. We note that \begin{align*} [1](w_{t_1}^\ast) \times \dots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}) \shortintertext{and} [1](w_{t_1}^\ast) \times \dots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}^\ast) \end{align*} are words on the set \begin{align*} \{ A_1, \dots, A_{n-2}, \A{i}{\alpha} \mid i= 0, \dots, n-2, \alpha=(n-1), (n-1)^2, \dots \}. 
\end{align*} Then we have \begin{align*} w_{t^\prime}&=w_{t_0^\prime}^\ast \times [1](w_{t_1}^\ast) \times \dots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}) \\ &\equiv_I w_{t_0}^\ast \times \A{k}{0\beta} \times [1](w_{t_1}^\ast) \times \dots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}) \\ &\equiv_3 w_{t_0}^\ast \times [1](w_{t_1}^\ast) \times \dots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}) \times \A{k}{0\beta} \\ &=w_t \times \A{k}{\alpha}, \shortintertext{and} w_{t^\prime}^\ast&=w_{t_0^\prime}^\ast \times [1](w_{t_1}^\ast) \times \dots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}^\ast) \times A_0 \\ &\equiv_I w_{t_0}^\ast \times \A{k}{0\beta} \times [1](w_{t_1}^\ast) \times \dots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}^\ast) \times A_0 \\ &\equiv_3 w_{t_0}^\ast \times [1](w_{t_1}^\ast) \times \dots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}^\ast) \times \A{k}{0\beta}A_0 \\ &\equiv_4 w_{t_0}^\ast \times [1](w_{t_1}^\ast) \times \dots \times [n-2](w_{t_{n-2}}^\ast) \times [n-1](w_{t_{n-1}}^\ast) \times A_0 \A{k}{00\beta} \\ &=w_{t}^\ast \times \A{k}{0\alpha}. \end{align*} Since the collection of relations is closed under the shift-map, we can apply the inductive hypothesis when considering any other $i \beta$ ($1 \leq i \leq n-1$). Although only the case of $w_t$ with $i=n-1$ is rewritten slightly differently (since $w_{t_{n-1}}$ appears in the word $w_t$ instead of $w_{t_{n-1}}^\ast$), all cases can be shown similarly. This completes the proof. \end{proof} \end{lemma} \section{The Lodha--Moore group and its generalization}\label{section_G0(n)} \subsection{The Lodha--Moore group} In this section, we briefly review the original Lodha--Moore group $G_0$. In some papers, this group is denoted by $G$. Let $x_0$, $\x{0}{1}$ be the maps defined in Section \ref{subsection_F(n)_definition} for $n=2$. The group $G_0$ is generated by these two maps and one more generator called $y_{10}$. 
To define this generator, we first define the homeomorphism called $y$. \begin{definition}\label{definiton_y_2} The map $y$ and its inverse $y^{-1}$ are defined recursively by the following rules: \begin{align*} &y: 2^\mathbb{N} \to 2^\mathbb{N} & &y^{-1}: 2^\mathbb{N} \to 2^\mathbb{N} \\ &y(00\zeta)=0y(\zeta) & &y^{-1}(0\zeta)=00y^{-1}(\zeta) \\ &y(01\zeta)=10y^{-1}(\zeta) & &y^{-1}(10\zeta)=01y(\zeta) \\ &y(1\zeta)=11y(\zeta), & &y^{-1}(11\zeta)=1y^{-1}(\zeta). \end{align*} \end{definition} For each $s$ in $2^{<\mathbb{N}}$, we also define the map $y_s$ by setting \begin{align*} y_s(\xi)&= \left \{ \begin{array}{cc} s y(\eta), & \xi=s\eta \\ \xi, & \mbox{otherwise}. \end{array} \right. \end{align*} We give an example of a calculation of $y_{001}$ applied to $00101101 \cdots$: \begin{align*} y_{001}(00101101\cdots)&=001y(01101\cdots)\\ &=00110y^{-1}(101\cdots)\\ &=0011001y(1\cdots). \end{align*} \begin{definition} The group $G_0$ is the group generated by $x_0$, $\x{0}{1}$, and $y_{10}$. \end{definition} The group $G_0$ is also realized as a group of piecewise projective homeomorphisms. \begin{proposition}[{\cite[Proposition 3.1]{lodha2016nonamenable}}]\label{proposition_piecewiseprojective} The group $G_0$ is isomorphic to the group generated by the following three maps of $\mathbb{R}$: \begin{align*} a(t)&=t+1, & b(t) &= \left \{ \begin{array}{cc} t & \mbox{if $t \leq 0$} \\ \frac{t}{1-t} & \mbox{if $0 \leq t \leq \frac{1}{2}$} \\ 3-\frac{1}{t} & \mbox{if $\frac{1}{2} \leq t \leq 1$} \\ t+1 & \mbox{if $1 \leq t$}, \end{array} \right. & c(t) &= \left \{ \begin{array}{cc} \frac{2t}{1+t} & \mbox{if $0 \leq t \leq 1$} \\ t & \mbox{otherwise}. \end{array} \right. 
\end{align*} \end{proposition} This proposition is shown by identifying $2^\mathbb{N}$ with $\mathbb{R}$ by the following maps: \begin{align*} &\varphi: 2^\mathbb{N} \to [0, \infty] & &\Phi: 2^\mathbb{N} \to \mathbb{R} \cup \{ \infty \} \\ &\varphi(0\xi)=\frac{1}{1+\frac{1}{\varphi(\xi)}} & &\Phi(0\xi)=-\varphi(\tilde{\xi}) \\ &\varphi(1\xi)=1+\varphi(\xi) & &\Phi(1\xi)=\varphi(\xi), \end{align*} where $\mathbb{R} \cup \{ \infty \}$ denotes the real projective line. We note that for every $x \in \mathbb{R} \cup \{ \infty \}$, the inverse image $\Phi^{-1}(\{x\})$ is either a one-point set or a two-point set. \subsection{$n$-adic Lodha--Moore group} \label{subsection_def_G_0(n)} In order to define a generalization of the Lodha--Moore group, we first define a homeomorphism on $\N^\mathbb{N}={\{0, \dots, n-1 \}}^\mathbb{N}$ corresponding to $y$ for the case of $n=2$. We fix $n \geq 2$ and also denote this map by $y$ as in the case of $n=2$. \begin{definition} The map $y$ and its inverse map $y^{-1}$ are defined recursively by the following rules: \begin{align*} y: \N^\mathbb{N} &\to \N^\mathbb{N} & y^{-1}: \N^\mathbb{N} &\to \N^\mathbb{N} \\ y(00\zeta)&=0y(\zeta) & y^{-1}(0\zeta)&=00y^{-1}(\zeta) \\ y(01\zeta)&=1\zeta & y^{-1}(1\zeta)&=01\zeta \\ &\;\vdots & &\;\vdots \\ y(0(n-2)\zeta)&=(n-2)\zeta & y^{-1}((n-2)\zeta)&=0(n-2)\zeta \\ y(0(n-1)\zeta)&=(n-1)0y^{-1}(\zeta) & y^{-1}((n-1)0\zeta)&=0(n-1)y(\zeta) \\ y(1\zeta)&=(n-1)1\zeta & y^{-1}((n-1)1\zeta)&=1\zeta \\ &\;\vdots & &\;\vdots \\ y((n-2)\zeta)&=(n-1)(n-2)\zeta & y^{-1}((n-1)(n-2)\zeta)&=(n-2)\zeta \\ y((n-1)\zeta)&=(n-1)(n-1)y(\zeta) & y^{-1}((n-1)(n-1)\zeta)&=(n-1)y^{-1}(\zeta) \end{align*} \end{definition} \begin{remark} Note that if we restrict the domain to $\{0, n-1\}^\mathbb{N}$, $y$ is exactly the map defined in Definition \ref{definiton_y_2} under the identification of $n-1$ with $1$. We will use this fact to reduce the discussion to the case of $n=2$. 
\end{remark} For each $s$ in $\N^{<\mathbb{N}}$, we define the map $y_s$ by setting \begin{align} \label{n-adicy_definition} y_s(\xi)&= \left \{ \begin{array}{cc} s y(\eta), & \xi=s\eta \\ \xi, & \mbox{otherwise}. \end{array} \right. \end{align} \begin{definition} We define $G_0(n)$ to be the group generated by the $n+1$ elements $x_0, \dots, x_{n-2}, \x{0}{(n-1)}$, and $y_{(n-1)0}$. We call this group the \textit{$n$-adic Lodha--Moore group}. \end{definition} For the infinite presentation of $G_0(n)$ described in Section \ref{infinite_presentation}, we introduce an infinite generating set of this group. Let $i$ be in $\{0, \dots, n-2 \}$ and let $\alpha$ be in $\N^{<\mathbb{N}} \cup \{\epsilon \}$. We recall that the map $\x{i}{\alpha}$ is defined as follows: \begin{align*} \x{i}{\alpha}(\zeta)&= \begin{cases} \alpha x_i(\eta) & (\zeta=\alpha \eta) \\ \zeta & (\zeta \neq \alpha \eta). \end{cases} \end{align*} We also recall that the group $F(n)$ is generated by the following infinite set: \begin{align*} X(n)= \left\{ \x{i}{\alpha} \mid i = 0, \dots, n-2, \alpha \in \N^{<\mathbb{N}} \cup \{ \epsilon \} \right\}. \end{align*} In addition, let \begin{align*} Y(n):= \left\{ y_\alpha \;\middle|\; \begin{array}{l} \alpha \in \N^{<\mathbb{N}}, \\ \alpha \neq 0\cdots0, (n-1)\cdots(n-1), \epsilon, \\ \mbox{the sum of the entries of $\alpha$ is equal to $0 \bmod {n-1}$} \end{array} \right\}. \end{align*} We remark that for $\alpha=\alpha_1 \cdots \alpha_m \in \N^{<\mathbb{N}}$, the actions of $x_0, \dots, x_{n-2}, \x{0}{n-1}$, and $y$ preserve the value of $\alpha_1+\cdots+\alpha_m \bmod {n-1}$. Then the set $Z(n):=X(n) \cup Y(n)$ also generates the group $G_0(n)$. \subsection{Infinite presentation and normal form}\label{infinite_presentation} In this section, we give the unique word with ``good properties'' for each element of $G_0(n)$ (Definition \ref{normal_form}). In this process, we also give an infinite presentation of $G_0(n)$ (Corollary \ref{G_0(n)_presentation}). 
Although almost all results in the rest of this section follow along the lines of the arguments in \cite{lodha2020nonamenable, lodha2016nonamenable}, we write them down for the convenience of the reader. Let $s$ be in $\N^{<\mathbb{N}}$ and let $t$ be in $\N^{<\mathbb{N}}$ or $\N^\mathbb{N}$. We write $s \subset t$ if $s$ is a proper prefix of $t$ and write $s \subseteq t$ if $s \subset t$ or $s=t$. We say that $s$ and $t$ are \textit{independent} if one of the following holds: \begin{itemize} \item $s, t \in \N^{<\mathbb{N}}$ and neither $s \subseteq t$ nor $t \subseteq s$ holds. \item $s \in \N^{<\mathbb{N}}$, $t \in \N^\mathbb{N}$ and $s$ is not a prefix of $t$. \item $s, t \in \N^\mathbb{N}$ and $s\neq t$. \end{itemize} In all cases, we write $s \perp t$. Let $s(i)$ and $t(i)$ denote the $i$-th entries of $s$ and $t$, respectively. Then we say $s<t$ if one of the following is true: \begin{itemize} \item[(a)] $t \subset s$; \item[(b)] $s \perp t$ and $s(i)<t(i)$, where $i$ is the smallest integer such that $s(i) \neq t(i)$. \end{itemize} We note that this order is transitive. For elements in $\N^\mathbb{N}$, we use the same symbol to denote the lexicographical order. We claim that the following collection of relations gives a presentation of $G_0(n)$ (Corollary \ref{G_0(n)_presentation}): \begin{enumerate} \item the relations of $F(n)$ in Tables \ref{relations_1}, \ref{relations_2}, \ref{relations_3}, and \ref{relations_4}; \item $y_t\x{i}{s}=\x{i}{s}y_{\x{i}{s}(t)}$ for all $i$ and $s, t \in \N^{<\mathbb{N}}$ such that $y_t \in Y(n)$ and $\x{i}{s}(t)$ is defined; \item $y_s y_t =y_t y_s$ for all $s, t \in \N^{<\mathbb{N}}$ such that $y_s, y_t \in Y(n)$ and $s\perp t$; \item $y_s=\x{0}{s} y_{s0} y_{s(n-1)0}^{-1}y_{s(n-1)(n-1)}$ for all $s \in \N^{<\mathbb{N}}$ such that $y_s \in Y(n)$. \end{enumerate} We note that $\x{0}{n-1}((n-1)0)$ is not defined, for example. All relations can be verified directly. We write $R(n)$ for the collection of these relations. 
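The recursive rules defining $y$ and $y^{-1}$ lend themselves to a lazy, stream-based implementation, which is convenient for spot-checking identities on prefixes of infinite words. Below is a minimal Python sketch (our own illustrative code, not notation from the text); it checks that $y^{-1} \circ y$ is the identity on a sample stream, and reproduces the first step of the $n=2$ computation $y(01101\cdots)=10y^{-1}(101\cdots)=1001y(\cdots)$ from the example of $y_{001}$ above.

```python
import itertools
import random

def y(n, w):
    """Lazily apply y on {0,...,n-1}^N: y(00z)=0y(z); y(0kz)=kz for 0<k<n-1;
    y(0(n-1)z)=(n-1)0y^{-1}(z); y(kz)=(n-1)kz for 0<k<n-1;
    y((n-1)z)=(n-1)(n-1)y(z)."""
    w = iter(w)
    a = next(w)
    if a == 0:
        b = next(w)
        if b == 0:
            yield 0
            yield from y(n, w)
        elif b == n - 1:
            yield n - 1
            yield 0
            yield from y_inv(n, w)
        else:            # y vanishes on a digit in 1..n-2
            yield b
            yield from w
    elif a == n - 1:
        yield n - 1
        yield n - 1
        yield from y(n, w)
    else:
        yield n - 1
        yield a
        yield from w

def y_inv(n, w):
    """Lazily apply y^{-1}, following the mirrored recursion."""
    w = iter(w)
    a = next(w)
    if a == 0:
        yield 0
        yield 0
        yield from y_inv(n, w)
    elif a == n - 1:
        b = next(w)
        if b == 0:
            yield 0
            yield n - 1
            yield from y(n, w)
        elif b == n - 1:
            yield n - 1
            yield from y_inv(n, w)
        else:
            yield b
            yield from w
    else:
        yield 0
        yield a
        yield from w

# y^{-1} after y gives back the original stream (checked on a long prefix).
rng = random.Random(0)
zeta = [rng.randrange(4) for _ in range(1000)]
assert list(itertools.islice(y_inv(4, y(4, iter(zeta))), 50)) == zeta[:50]

# The n = 2 step from the example: y(01101...) starts with 1001,
# matching 10 y^{-1}(101...) = 10 01 y(1...).
assert list(itertools.islice(y(2, iter([0, 1, 1, 0, 1] + [0] * 50)), 4)) == [1, 0, 0, 1]
```

Since each output symbol consumes at most two input symbols, finitely many prefix symbols always suffice, which is why a lazy generator faithfully models the map on $\N^\mathbb{N}$.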
We first define a form with which it is easy to compute compositions of maps. \begin{definition} A word $\Omega$ on $Z(n)$ is in \textit{standard form} if $\Omega$ is a word of the form $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ where $f$ is a word on $X(n)$ and $\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ is a word on $Y(n)$ with the condition that $s_i<s_j$ if $i<j$. \end{definition} In some cases, it is helpful to use the following weaker form. \begin{definition} A word $\Omega$ on $Z(n)$ is in \textit{weak standard form} if $\Omega$ is a word of the form $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ where $f$ is a word on $X(n)$ and $\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ is a word on $Y(n)$ with the condition that if $s_j \subset s_i$, then $i<j$. \end{definition} We can always make a word in weak standard form into one in standard form. \begin{lemma}[{\cite[Lemma 3.11]{lodha2020nonamenable}} for $n=2$]\label{weakstandard_standard} We can rewrite a weak standard form into a standard form of the same length by just switching the letters $($i.e., relation $(3)$$)$ finitely many times. \begin{proof} Let $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ be a word in weak standard form. We proceed by induction on $m$. It is obvious if $m=0, 1$. For $m \geq 2$, by the induction hypothesis, we get a word $f\y{s_1^\prime}{t_1^\prime}\cdots\y{s_{m-1}^\prime}{t_{m-1}^\prime}\y{s_m}{t_m}$ where $f\y{s_1^\prime}{t_1^\prime}\cdots\y{s_{m-1}^\prime}{t_{m-1}^\prime}$ is in standard form. Then one of the following holds: \begin{enumerate} \item $s_{m-1}^\prime \supset s_m$; \item $s_{m-1}^\prime \perp s_m$ and $s_{m-1}^\prime(i)<s_m(i)$, where $i$ is the smallest number such that $s_{m-1}^\prime (i) \neq s_m(i)$; \item $s_{m-1}^\prime \perp s_m$ and $s_{m-1}^\prime(i)>s_m(i)$, where $i$ is the smallest number such that $s_{m-1}^\prime(i)\neq s_m(i)$. \end{enumerate} In cases (1) and (2), $f\y{s_1^\prime}{t_1^\prime}\cdots\y{s_{m-1}^\prime}{t_{m-1}^\prime}\y{s_m}{t_m}$ is also in standard form. 
In case (3), by applying relation (3) to $\y{s_{m-1}^\prime}{t_{m-1}^\prime}\y{s_m}{t_m}$ and using the induction hypothesis for the first $m-1$ letters again, we get a word in standard form. \end{proof} \end{lemma} For an infinite $n$-ary word $w$ in $\N^\mathbb{N}$ and a word $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ in weak standard form, we define their \textit{calculation} as follows: First, we apply $f$ to $w$. Then apply $\y{s_1}{t_1}, \dots, \y{s_m}{t_m}$ to $f(w)$ in this order, where ``apply'' here means rewriting each $y_{s_i}$ by using its definition in equation \eqref{n-adicy_definition}; no rewriting is performed by using the definition of the map $y$ itself. \begin{example} Let $n=4$. For $w=3002\cdots$ and $y_{300}^{-1}y_{30}y_1$, we apply as follows: \begin{align*} 3002\cdots \xrightarrow{y_{300}^{-1}}300y^{-1}(2\cdots) \xrightarrow{y_{30}} 30(y(0(y^{-1}(2\cdots)))) \xrightarrow{y_1} 30(y(0(y^{-1}(2\cdots)))). \end{align*} Therefore the calculation of $w=3002\cdots$ and $y_{300}^{-1}y_{30}y_1$ is $30(y(0(y^{-1}(2\cdots))))$. \end{example} We write such an element as $30y0y^{-1}2\cdots$ and sometimes regard it as a word on $\N \cup \{y, y^{-1}\}$. We also define calculations for finite words in $\N^{<\mathbb{N}}$ and weak standard forms in the same way, although finite words are not in the domain of $y$. We note that, unlike infinite words, not all calculations of finite words can be defined. For example, for $n=4$, the calculation of $w=3002$ and $y_{300}^{-1}y_{30}y_1$ is $30y0y^{-1}2$. However, that of $w=3$ and $y_{300}^{-1}y_{30}y_1$ is not defined. We call the operation of rewriting a calculation once by using the definition of $y$ a \textit{substitution}, and we call the element in $\N^\mathbb{N}$ obtained by repeating substitutions the \textit{output string}. 
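The calculation of a finite word under a product of $y_s^{\pm1}$'s is a purely symbolic procedure, so it is easy to mechanize. The following Python sketch (our own illustration; the token conventions are ours) reproduces the two finite cases above: the calculation of $3002$ under $y_{300}^{-1}y_{30}y_1$, and the undefined calculation of $3$.

```python
def apply_ys(word, s, inv=False):
    """Apply y_s (or y_s^{-1} if inv) to a calculation word, given as a list
    of digits and the symbols 'y', 'y^-1'.  If the digits of s form a proper
    prefix of the word, insert the corresponding symbol after them; if s is
    visibly not a prefix, the word is fixed; otherwise (the word is too short,
    or s would run into a y symbol) the calculation is undefined."""
    k = len(s)
    head = word[:k]
    if all(isinstance(t, int) for t in head):
        if head == list(s) and len(word) > k:
            return list(s) + (['y^-1'] if inv else ['y']) + list(word[k:])
        if len(head) == k and head != list(s):
            return list(word)   # s is not a prefix: y_s fixes the word
    raise ValueError('calculation undefined')

# The calculation of 3002 (n = 4) under y_300^{-1} y_30 y_1, applied left to right:
w = [3, 0, 0, 2]
w = apply_ys(w, (3, 0, 0), inv=True)   # 3002 -> 300 y^-1 2
w = apply_ys(w, (3, 0))                # -> 30 y 0 y^-1 2
w = apply_ys(w, (1,))                  # 1 is not a prefix: the word is fixed
assert w == [3, 0, 'y', 0, 'y^-1', 2]  # i.e. 30 y 0 y^-1 2

# The calculation of the too-short word 3 under y_300^{-1} is undefined.
try:
    apply_ys([3], (3, 0, 0), inv=True)
    raise AssertionError('expected undefined calculation')
except ValueError:
    pass
```

Note that no substitutions by the definition of $y$ are performed here; that further rewriting is exactly the substitution step introduced in the next paragraph.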
\begin{example} For $w=0^4(n-1)0^4(n-1)\cdots$ and $y^2$, we have the following substitutions: \begin{align*} y^20^4(n-1)0^4(n-1)\cdots \to y 0 y 0^2(n-1)0^4(n-1)\cdots \to y0^2 y(n-1)0^4(n-1)\cdots. \end{align*} The output string is $0(n-1)^40(n-1)^4 \cdots$. \end{example} As described in \cite[Section 3.6]{lodha2020nonamenable}, the map $y$ can be expressed as a finite state transducer. For the sake of simplicity, assume $n\geq3$. Then our transducer is illustrated in Figure \ref{transducer_y}. \begin{figure}[tbp] \centering \includegraphics[width=120mm]{transducer_y.pdf} \caption{The transducer of $y$, where $i=1, \dots, n-2$. } \label{transducer_y} \end{figure} In this setting, each edge is labeled by an expression of the form $\sigma \mid \tau$, where $\sigma$ and $\tau$ are in $\N^{<\mathbb{N}}$. The element $\sigma$ represents the input string, and $\tau$ represents the output string. The difference from the case of $n=2$ is that there exists an accepting state (the double circle mark). This difference corresponds to the fact that $y$ vanishes in $y \sigma$ by substitutions if $\sigma$ is not in $\{0, n-1 \}^{\mathbb{N}}$ or $\{0, n-1 \}^{<\mathbb{N}}$. \begin{definition} A calculation contains a \textit{potential cancellation} if there exists a subword of the form \begin{align*} y^{t_1}\sigma y^{t_2}, \sigma \in \N^{<\mathbb{N}}, t_i \in \{1, -1 \} \end{align*} such that we get the word $\sigma^\prime y^{-t_2}$ by substituting $y^{t_1}\sigma$ finitely many times. \end{definition} \begin{example} The subwords $y0(n-1)y$ and $y00y^{-1}$ are potential cancellations. The subword $y0y$ is not a potential cancellation since we cannot apply any substitutions. The subword $y^{-1}(n-1)00(n-1)y$ is also not a potential cancellation since we have $y^{-1}(n-1)00(n-1)y \to 0(n-1)y0(n-1)y \to 0(n-1)(n-1)0y^{-1}y$. \end{example} \begin{remark} \label{remark_PCA} If a calculation contains a potential cancellation, then $\sigma$ is in $\{0, n-1\}^{<\mathbb{N}}$. Indeed, if not, $y$ vanishes. 
\end{remark} The following holds. \begin{lemma}[{\cite[Lemma 5.9]{lodha2016nonamenable}} for $n=2$] \label{substitution_NPCA} Let $\Lambda$ be a calculation that contains no potential cancellations. Then the word $\Lambda^{\prime}$ obtained by substitution at any $y^{\pm1}$ again contains no potential cancellations. \begin{proof} By Remark \ref{remark_PCA}, it is sufficient to consider the case $\sigma \in \{0, n-1\}^{<\mathbb{N}}$. Then we can identify $\{0, n-1 \}$ with $\{0, 1\}$. By \cite[Lemma 5.9]{lodha2016nonamenable}, we have the desired result. \end{proof} \end{lemma} We generalize the notion of potential cancellation to the case of weak standard forms. \begin{definition} A weak standard form $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ has a \textit{potential cancellation} if there exists $w$ in $\N^\mathbb{N}$ such that the calculation of $w$ by $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ contains a potential cancellation. \end{definition} To construct unique words, we define the moves obtained from the relations of $G_0(n)$. \begin{definition} \label{definition_five_moves} We assume that every map in the following is defined. \begin{description} \item[Rearranging move] $\y{t}{i}\x{j}{s}^{\pm1} \to \x{j}{s}^{\pm1}\y{\x{j}{s}^{\pm1}(t)}{i}$; \item[Expansion move] $y_s \to \x{0}{s}y_{s0}\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}$ and $y_s^{-1} \to \x{0}{s}^{-1}\y{s00}{-1}y_{s0(n-1)}\y{s(n-1)}{-1}$; \item[Commuting move] $y_uy_v\leftrightarrow y_v y_u$ (if $u \perp v$); \item[Cancellation move] $\y{s}{\pm i}\y{s}{\mp i} \to \epsilon$; \item[ER moves] \begin{enumerate} \item $f(\y{s_1}{t_1}\cdots\y{s_k}{t_k})y_u \to f\x{0}{u}(\y{\x{0}{u}(s_1)}{t_1}\cdots \y{\x{0}{u}(s_k)}{t_k})(y_{u0}\y{u(n-1)0}{-1}y_{u(n-1)(n-1)})$ \item $f(\y{s_1}{t_1}\cdots\y{s_k}{t_k})\y{u}{-1} \to f\x{0}{u}^{-1}(\y{\x{0}{u}^{-1}(s_1)}{t_1}\cdots \y{\x{0}{u}^{-1}(s_k)}{t_k})(\y{u00}{-1}y_{u0(n-1)}\y{u(n-1)}{-1})$. \end{enumerate} \end{description} ER moves are a combination of an expansion move and rearranging moves. 
\end{definition} By the definition of potential cancellations, we note that if we apply an ER move to a standard form that contains a potential cancellation, then either the resulting word also contains a potential cancellation, or we can apply a cancellation move to the resulting word. As with the Thompson group $F$, each element of $G_0(n)$ can be represented by tree diagrams. Then the expansion move for $y_s$ is the replacement from the left diagram to the right diagram in Figure \ref{image_expansion_move}. \begin{figure}[tbp] \centering \includegraphics[width=110mm]{image_expansion_move.pdf} \caption{The black and white circles indicate that we apply $y$ and $y^{-1}$, respectively, to the words corresponding to the edges below them. } \label{image_expansion_move} \end{figure} See \cite[Section 4]{lodha2016nonamenable} for details of the case $n=2$. We introduce the following notion, which makes it easy to check whether the moves are defined. \begin{definition} For $s$ in $\N^{<\mathbb{N}}$, we write $\| s\|$ for the length of $s$. We define the \textit{depth} of a weak standard form $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ to be the integer $\min_{1 \leq i \leq m} \|s_i \|$. If a weak standard form is on $X(n)$, we define its depth to be $\infty$. \end{definition} We show that an arbitrary word on $Z(n)$ can be rewritten in standard form. In order to do this, we prepare two lemmas. \begin{lemma}[{\cite[Lemma 5.2]{lodha2016nonamenable}} for $n=2$] \label{standard_base_case} Let $y_s$ be in $Y(n)$ and let $l$ be in $\mathbb{N}\cup\{0\}$. Then there exists a standard form $\Omega$ obtained by applying expansion moves and rearranging moves to $y_s^{\pm1}$ finitely many times that satisfies the following: \begin{enumerate}[font=\normalfont] \item If there exists $\x{i}{u}^t$ in $\Omega$, then $s \subset u$ holds. \item If there exists $y_u$ or $\y{u}{-1}$, then it is the only one, and $s \subseteq u$ and $\|u \| \geq l$ hold.
\item If there exist $y_u, y_v$ with $u \neq v$, then $u \perp v$ holds. \end{enumerate} \begin{proof} We show this by induction on $l-\|s\|$. If $l-\|s\| \leq 0$, then $\Omega=y_s^{\pm1}$ satisfies all the conditions. We assume that $l-\|s\|>0$ holds. Because we can discuss $y_s^{-1}$ in almost the same way, we consider only $y_s$. We apply the expansion move to $y_s$ and get the word $\x{0}{s}y_{s0}\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}$. Since we have \begin{align*} \max \{ l-\|s0\|,~ l-\|s(n-1)0\|,~ l-\|s(n-1)(n-1)\| \} \leq l-\|s\|, \end{align*} by the induction hypothesis, there exist three standard forms $\Omega_{s0}, \Omega_{s(n-1)0}, \Omega_{s(n-1)(n-1)}$ for $y_{s0}$, $y_{s(n-1)0}^{-1}$, $y_{s(n-1)(n-1)}$, respectively, all of which satisfy the conditions. Thus the word is rewritten from $\x{0}{s}y_{s0}\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}$ to $\x{0}{s}\Omega_{s0}\Omega_{s(n-1)0}\Omega_{s(n-1)(n-1)}$. If there exists $\x{i}{u}^t$ in $\Omega_{s(n-1)0}$, then $u \supset s(n-1)0$ holds. If there exists $\y{v}{\pm1}$ in $\Omega_{s0}$, then $v \supseteq s0$ holds. Since $u \perp v$, $\x{i}{u}^t(v)$ is defined and equals $v$. Thus, we can apply rearranging moves to this $\x{i}{u}^t$. A similar process can be done for the $\x{i}{u}^t$ in $\Omega_{s(n-1)(n-1)}$. This completes the proof. \end{proof} \end{lemma} \begin{lemma}[{\cite[Lemma 5.3]{lodha2016nonamenable}} for $n=2$] \label{standard_depth_lemma} Let $\Lambda$ be a word on $X(n)$, and let $k$ be the length of $\Lambda$. Then there exists $l_0$ in $\mathbb{N}$ such that the following holds: If $\Omega$ is a standard form with depth $l \geq l_0$, then we have a standard form $\Omega^\prime$ obtained from $\Omega \Lambda$ by rearranging moves, with the depth of $\Omega^\prime$ being at least $l-k$. \begin{proof} We show this by induction on the length of $\Lambda$. For the base case, let $\Lambda=\x{i}{s}^{\pm1}$. If $\x{i}{s}^{\pm1}(t)$ is not defined, then $t=si$ holds.
This means that if $t \in \N^{<\mathbb{N}}$ with $\|t \| \geq \|s\|+2$, then $\x{i}{s}(t)$ is defined. Hence let $l_0=\|s\|+2$. We note that \begin{align*} \|t\|-1 \leq \|\x{i}{s}^{\pm1}(t)\| \leq \| t\|+1 \end{align*} holds. Then we can apply rearranging moves to $\x{i}{s}^{\pm1}$ with every $y_t$ in $\Omega$. The depth of the resulting standard form $\Omega^\prime$ is at least $l-1$. We assume that the claim holds for $k-1$. Let $\Lambda=\Lambda^\prime\x{i}{s}^{\pm1}$. By the induction hypothesis, there exists $l_0^\prime$ for $\Lambda^\prime$ such that we can apply moves $\Omega\Lambda^\prime\x{i}{s}^{\pm1}\to\Omega^{\prime \prime}\x{i}{s}^{\pm1}$, where the depth of $\Omega$ is $l \geq l_0^\prime$ and that of $\Omega^{\prime \prime}$ is at least $l-(k-1)$. Then, let $l_0=\max\{l_0^\prime, k+\|s\|+1\}$ and assume that the depth of $\Omega$ is $l \geq l_0$. Since we have $l-(k-1) \geq \|s\|+2$, for the same reason as in the base case, we can apply rearranging moves and obtain a standard form $\Omega^\prime$ whose depth is at least $l-(k-1)-1=l-k$. \end{proof} \end{lemma} Now we rewrite arbitrary words into standard forms. \begin{proposition}[{\cite[Lemma 5.4]{lodha2016nonamenable}} for $n=2$] \label{Prop_standard_forms_from_words} Let $l$ be a natural number and $w$ be a word on $Z(n)$. Then we can rewrite $w$ into a standard form whose depth is at least $l$ by applying moves. \begin{proof} If $w$ is in $X(n)$, there is nothing to do since the depth of $w$ is $\infty$. Hence we assume that $w$ is not in $X(n)$. We show the claim by induction on the length $m$ of $w$. For the base case, since $w$ is not in $X(n)$, we have $w=\y{s}{\pm1}$. By Lemma \ref{standard_base_case}, the claim holds. Let $m>1$. By dividing $a^{\pm(k+1)}$ in $w$ as $a^{\pm k}a^{\pm1}$ if necessary, we can decompose $w$ into two words $\Omega_0\Omega_1$ of positive length. First, we assume that $\Omega_1$ is not on $X(n)$.
By the induction hypothesis, we obtain a standard form $\Lambda\Upsilon$ from $\Omega_1$, where $\Lambda$ is on $X(n)$, and $\Upsilon$ is on $Y(n)$, with the depth being at least $l$. Let $k$ be the length of $\Lambda$ and $r=\max\{\|u\| \mid \mbox{$\y{u}{i}$ belongs to $\Upsilon$} \}+1$. By the induction hypothesis, there exists a standard form $\Omega_0^\prime$ obtained from $\Omega_0$, with the depth being at least $\max\{l_0, r+k\}$, where $l_0$ is a natural number for $\Lambda$ in Lemma \ref{standard_depth_lemma}. Then we rewrite $\Omega_0^\prime\Lambda \to \Omega_0^{\prime\prime}$, where $\Omega_0^{\prime\prime}$ is in standard form with the depth being at least $r$. The form $\Omega_0^{\prime\prime}\Upsilon$ obtained through the previous transformations $w \to \Omega_0\Omega_1 \to \Omega_0^\prime\Lambda\Upsilon \to \Omega_0^{\prime\prime}\Upsilon$ is in weak standard form. Indeed, since $\Omega_0^{\prime\prime}$ and $\Upsilon$ (on $Y(n)$) are in standard form, by the definition of $r$, the claim holds. By Lemma \ref{weakstandard_standard}, we get the standard form without changing the depth. If $\Omega_1$ is in $X(n)$, then we set $\Lambda=\Omega_1$, $\Upsilon=\epsilon$, and $r=l$. Then we can apply the above argument. \end{proof} \end{proposition} Next, we list three lemmas about ER moves. These are useful for getting words in standard form without potential cancellations. \begin{lemma}[{\cite[Lemma 3.21]{lodha2020nonamenable}} for $n$=2]\label{ER_move_preserve_wsf} Let $f(\y{s_1}{t_1}\cdots\y{s_k}{t_k})y_u^{\pm1}(\y{p_1}{q_1}\cdots\y{p_m}{q_m})$ be in weak standard form. Then the word obtained from $f(\y{s_1}{t_1}\cdots\y{s_k}{t_k})y_u^{\pm1}(\y{p_1}{q_1}\cdots\y{p_m}{q_m})$ by the ER move on $y_u^{\pm1}$ is also in weak standard form. \begin{sproof} We show only the case of $y_u$.
Let \begin{align*} f\x{0}{u}(\y{\x{0}{u}(s_1)}{t_1}\cdots \y{\x{0}{u}(s_k)}{t_k})(y_{u0}\y{u(n-1)0}{-1}y_{u(n-1)(n-1)})(\y{p_1}{q_1}\cdots\y{p_m}{q_m}) \end{align*} be the word obtained from the given word by the ER move. We show this by considering whether each $s_i$ is independent of $u$ or not. If $s_i \perp u$ holds, then $\x{0}{u}(s_i)=s_i$, and there is nothing to do. If $s_i \supset u$ holds, since $\x{0}{u}(s_i)$ is defined, $\x{0}{u}(s_i) \supset u$ holds, and it can never be a proper prefix of $u0$, $u(n-1)0$, or $u(n-1)(n-1)$. Since $p_j$ satisfies either $p_j\perp u$ or $p_j \subseteq u$, the claim holds in both cases. \end{sproof} \end{lemma} \begin{lemma}[{\cite[Lemma 3.22]{lodha2020nonamenable}} for $n$=2]\label{lem_ERmove_preserve_NPCA} Let $f \lambda_1$ be in weak standard form with no potential cancellations and $g\lambda_2$ be a word obtained from $f \lambda_1$ by an ER move. Then $g\lambda_2$ is in weak standard form with no potential cancellations. \begin{proof} Let $\lambda_1=\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ with every $t_i$ in $\{\pm1\}$. We apply the ER move to $\y{s_i}{t_i}$. Since $f$ does not affect the existence of potential cancellations, we assume that $f$ is the empty word. We consider only the case $t_i=1$, since the case $t_i=-1$ is shown similarly. Let $s_j^{\prime}:=\x{0}{s_i}(s_j)$ $(1 \leq j \leq i-1)$. By the definition of the ER move, we have \begin{align*} g\lambda_2=\x{0}{s_i}(\y{s_1^\prime}{t_1}\cdots\y{s_{i-1}^\prime}{t_{i-1}})(y_{s_i0}\y{s_i(n-1)0}{-1}y_{s_i(n-1)(n-1)})(\y{s_{i+1}}{t_{i+1}}\cdots\y{s_m}{t_m}). \end{align*} It is clear by Lemma \ref{ER_move_preserve_wsf} that $g\lambda_2$ is in weak standard form, so suppose that $g\lambda_2$ has a potential cancellation.
Let $\tau \in \N^\mathbb{N}$ be an element such that the calculation $\Lambda$ of $\tau$ by $(\y{s_1^\prime}{t_1}\cdots\y{s_{i-1}^\prime}{t_{i-1}})(y_{s_i0}\y{s_i(n-1)0}{-1}y_{s_i(n-1)(n-1)})(\y{s_{i+1}}{t_{i+1}}\cdots\y{s_m}{t_m})$ contains a potential cancellation. Then, $s_i$ is a prefix of $\tau$. Indeed, we have the following: \begin{enumerate} \item if $s_j \perp s_i$, then $\y{s_j^\prime}{t_j}=\y{s_j}{t_j}$; \item if $s_j \perp s_i$, then $\y{s_j}{t_j}(\tau^\prime)\perp s_i$ where $\tau^\prime$ is in $\N^\mathbb{N}$ with $\tau^\prime \perp s_i$; \item if $s_j \supset s_i$, since $\x{0}{s_i}(s_j) \supset s_i$, we have $\y{s_j^\prime}{t_j}(\tau^\prime)=\y{\x{0}{s_i}(s_j)}{t_j}(\tau^\prime)=\tau^\prime$ and $y_{s_j}(\tau^\prime)=\tau^\prime$ where $\tau^\prime$ is in $\N^\mathbb{N}$ with $\tau^\prime \perp s_i$; \item $y_{s_i}$, $y_{s_i0}$, $y_{s_i(n-1)0}^{-1}$, and $y_{s_i(n-1)(n-1)}$ all fix $\tau^\prime$ where $\tau^\prime$ is in $\N^\mathbb{N}$ with $\tau^\prime \perp s_i$. \end{enumerate} Since the ER move is defined, each $s_j$ satisfies either $s_j\perp s_i$ or $s_j \supset s_i$. If $s_i$ is not a prefix of $\tau$, then $\tau \perp s_i$ holds. Then the calculations of $\tau$ with $f\lambda_1$ and with $g\lambda_2$ are the same. This contradicts the assumption that $f\lambda_1$ does not have a potential cancellation. Hence $s_i$ is a prefix of $\tau$. Let $\Lambda^\prime$ be the calculation of $\x{0}{s_i}^{-1}(\tau)$ by $\y{s_1}{t_1}\cdots\y{s_m}{t_m}$. By the assumption, this calculation contains no potential cancellations. On the other hand, $\Lambda$ is obtained from $\Lambda^\prime$ by a single substitution, so $\Lambda$ contains no potential cancellations by Lemma \ref{substitution_NPCA}, a contradiction. \end{proof} \end{lemma} \begin{lemma} \label{lem_depth_ER_moves} Let $l$ be an arbitrary natural number. Then, by applying ER moves to the weak standard form $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ finitely many times, we obtain a weak standard form whose depth is at least $l$.
\begin{sproof} If $\|s_i\|< l$ holds, we apply ER moves to $\y{s_i}{t_i}$ (if $t_i \neq \pm1$, we apply them to the last letter $y_{s_i}^{\pm1}$). We note that the move may not be defined. In that case, we first apply ER moves to the $y_{s_j}$ $(j<i)$ that causes the ER move on $y_{s_i}$ to be undefined; if that move is also undefined, we repeat the process. Since we can always apply ER moves to $fy_{s_1}$, this process terminates. We repeat this procedure until the depth of the obtained word is at least $l$. By Lemma \ref{ER_move_preserve_wsf}, it is also in weak standard form. \end{sproof} \end{lemma} Since subwords play an essential role in the notion of potential cancellations, we introduce the following definitions for simplicity. \begin{definition} Let $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ be in weak standard form. We say that the pair $(\y{s_j}{t_j}, \y{s_i}{t_i})$ is \textit{adjacent} if the following two conditions hold: \begin{enumerate} \item $s_i \subset s_j$, \item if $u$ in $\N^{<\mathbb{N}}$ satisfies $s_i \subset u \subset s_j$, then $u \notin \{ s_1, \dots, s_m\}$. \end{enumerate} \end{definition} We say that an adjacent pair $(\y{s_j}{t_j}, \y{s_i}{t_i})$ is a \textit{potential cancellation} if $y^{t_i}\sigma y^{t_j}$ is a potential cancellation, where $\sigma$ is the word satisfying $s_i\sigma=s_j$. It is clear from the definition that a weak standard form contains a potential cancellation if and only if some adjacent pair is a potential cancellation. By the definition, for example, if $(\y{j}{t}, y_s)$ is a potential cancellation, then either $\y{j}{t}=\y{s00}{-1}$, $y_{s0(n-1)}$, or $\y{s(n-1)}{-1}$ holds, or $(\y{\x{0}{s}(j)}{t}, y_{s0})$, $(\y{\x{0}{s}(j)}{t}, \y{s(n-1)0}{-1})$, or $(\y{\x{0}{s}(j)}{t}, y_{s(n-1)(n-1)})$ is also a potential cancellation. As mentioned in Remark \ref{remark_PCA}, we note that if an adjacent pair is a potential cancellation, then $\sigma$ is in $\{0, n-1\}^{<\mathbb{N}}$.
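The prefix relations $u \perp v$ and $u \subset v$, the relation ``$u$ dominates $s$'' used later, and the adjacency condition above are elementary predicates on address strings. A minimal Python sketch (addresses encoded as digit strings; this encoding is an assumption made only for illustration):

```python
def is_proper_prefix(u, s):
    """u is a proper prefix of s (written u \subset s in the text)."""
    return len(u) < len(s) and s.startswith(u)

def independent(u, v):
    """u \perp v: neither address is a prefix of the other."""
    return not (u.startswith(v) or v.startswith(u))

def dominates(u, s):
    """u dominates s: u \perp s or u \supset s."""
    return independent(u, s) or is_proper_prefix(s, u)

def adjacent_pairs(addresses):
    """All pairs (s_j, s_i) such that s_i is a proper prefix of s_j and no
    recorded address lies strictly between them in the prefix order
    (condition (2) of the definition of adjacency)."""
    return [(sj, si)
            for sj in addresses for si in addresses
            if is_proper_prefix(si, sj)
            and not any(is_proper_prefix(si, u) and is_proper_prefix(u, sj)
                        for u in addresses)]

# Addresses of the weak standard form y_300 y_3030^{-1} y_3031 y_3033 y_30 (n = 4):
print(adjacent_pairs(["300", "3030", "3031", "3033", "30"]))
# -> [('300', '30'), ('3030', '30'), ('3031', '30'), ('3033', '30')]
```

For these addresses every adjacent pair has second entry $30$: the address $30$ is a proper prefix of all the others, and no recorded address lies strictly between.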
We introduce the moves which are the ``inverse'' of ER moves. First, we define the conditions under which the moves are defined. \begin{definition}\label{def_potential_contraction} We say that a weak standard form contains a \textit{potential contraction} if it satisfies either of the following: \begin{enumerate} \item there exists a subword $y_{s0}\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}$, but there does not exist $\y{s(n-1)}{\pm1}$; \item there exists a subword $\y{s00}{-1}y_{s0(n-1)}\y{s(n-1)}{-1}$, but there does not exist $\y{s0}{\pm1}$. \end{enumerate} We say the same when one of the conditions above becomes satisfied after applying commuting moves. \end{definition} \begin{example} \label{example_potential_contraction} Let $n=4$. A weak standard form $y_{300}y_{3030}^{-1}y_{3031}y_{3033}y_{30}$ contains a potential contraction. \end{example} We now define the moves. \begin{definition}\label{def_contraction_move} Let $f\lambda$ be in standard form. We assume that this form contains a potential contraction in the sense of (1). Using commuting moves, we move every $y_u^{v}$ in the standard form with $s0<u \leq s(n-1)(n-1)$, except those in the subword, to the left of $y_{s0}$ while preserving their order, and we replace $y_{s0}\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}$ with $\x{0}{s}^{-1}y_s$. Then we move $\x{0}{s}^{-1}$ to the position just after $f$ by rearranging moves. Finally, we apply cancellation moves or moves $x^ax^b \to x^{a+b}$ if necessary. The rearranging moves are defined, and the word obtained by the above sequence of moves is in standard form. We call this sequence of moves a \textit{contraction move}. We also define the contraction move for (2) in the same way. \end{definition} The moves and the claim in this definition are justified by the following lemma. We describe only case (1); a similar statement holds for case (2). \begin{lemma}\label{lem_contraction_move} In Definition \ref{def_contraction_move}, the rearranging moves are defined, and the resulting word is again in standard form.
\begin{proof} By the definition of standard form, for $y_u^v$, we have either $s0\perp u$ or $s0 \subseteq u$. In the former case, we have either $s \perp u$ or $s \subset u$. In the latter case, we have $s \subset u$. By the definition of potential contraction, there does not exist $\y{s(n-1)}{\pm1}$. Hence $\x{0}{s}^{-1}(u)$ is defined, and the rearranging moves can be done. Suppose that we have finished applying the commuting moves. Let $s_j^\prime:=\x{0}{s}^{-1}(s_j)$, and let $\x{0}{s}^{-1}(\y{s_1^\prime}{t_1}\cdots\y{s_k^\prime}{t_k})y_s(\y{p_1}{q_1}\cdots\y{p_m}{q_m})$ be the resulting word. If $s_k^\prime=s_k$, then $s_k\perp s$ holds. Since the original word is in standard form, we have $s_k^\prime <s$. If $s_k^\prime\neq s_k$, then $s_k^\prime \supset s$. By the transitivity of this order, $\x{0}{s}^{-1}(\y{s_1^\prime}{t_1}\cdots\y{s_k^\prime}{t_k})y_s$ is in standard form. We show that $y_s(\y{p_1}{q_1}\cdots\y{p_m}{q_m})$ is in standard form. We note that $s(n-1)(n-1)<p_1$ holds. If $s(n-1)(n-1) \supset p_1$, either $s=p_1$ or $s \supset p_1$ holds. In the former case, by $y_s\y{p_1}{q_1}\to\y{s}{q_1+1}$ or $\to \epsilon$, we get the standard form. In the latter case, we have $s<p_1$. If $s(n-1)(n-1) \perp p_1$, since $(n-1)$ is the largest number, we have $s<p_1$. By the transitivity, $\x{0}{s}^{-1}(\y{s_1^\prime}{t_1}\cdots\y{s_k^\prime}{t_k})y_s(\y{p_1}{q_1}\cdots\y{p_m}{q_m})$ is in standard form. \end{proof} \begin{example} In Example \ref{example_potential_contraction}, we obtain $\x{0}{30}^{-1}y_{301}\y{30}{2}$ by applying a contraction move. \end{example} \end{lemma} Contraction moves have the following property. \begin{lemma}\label{lem_contraction_move_preserve_NPCA} Let $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ be in standard form which contains a potential contraction and no potential cancellations. Then the word obtained by a contraction move contains no potential cancellations. \begin{proof} We show only the case of potential contraction condition (1).
Suppose that applying a contraction move has produced a potential cancellation. If there exists such an adjacent pair, the pair must be $(\y{s}{k}, \y{s^\prime}{k^\prime})$, where $s^\prime$ satisfies $s^\prime \subset s$. Indeed, if not, each adjacent pair is either $(\y{u_1}{j_1}, \y{u_2}{j_2})$ where $u_1, u_2 \neq s$, or $(y_{s^{\prime\prime}}^{k^{\prime\prime}}, y_s^k)$ where $s^{\prime\prime}$ satisfies $s^{\prime\prime} \supset s$. In the former case, either $\x{0}{s}(u_i)\neq u_i$ for both $i=1, 2$ or $\x{0}{s}(u_i)= u_i$ for both $i=1, 2$ holds, so the pair corresponds to an adjacent pair of the original word; this contradicts the assumption that $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ contains no potential cancellations. In the latter case, we apply the ER move to $\y{s}{k}$. Since the contraction move is the inverse of the ER move, if $y_{s^{\prime\prime}}^{k^{\prime\prime}}$ is $\y{s0}{-1}$, $y_{s(n-1)0}$, or $\y{s(n-1)(n-1)}{-1}$, then it contradicts the assumption that $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ is in standard form. If $y_{s^{\prime\prime}}^{k^{\prime\prime}}$ is none of them, then it contradicts that $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ contains no potential cancellations. Moreover, there does not exist $\y{s}{k_1}$ in $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$. Indeed, if there exists $y_s^{k_1}$ ($k_1<0$), since there exists $y_{s0}$ by the assumption of potential contraction, $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ contains a potential cancellation. If there exists $\y{s}{k_1}$ ($k_1>0$), since $(\y{s}{k}, \y{s^\prime}{k^\prime})$ is a potential cancellation in the word after applying the contraction move, the adjacent pair $(y_s^{k_1}, \y{s^\prime}{k^\prime})$ is a potential cancellation in $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$. This contradicts the assumption that there are no potential cancellations. Now we consider the adjacent pair $(y_s, y_{s^\prime}^{k^\prime})$. Let $\sigma$ be the finite word such that $s^\prime\sigma=s$ holds. By the assumption, $y\sigma y$ or $y^{-1}\sigma y$ is a potential cancellation.
In the former case, by substitutions, we have $y\sigma=\sigma^\prime y^{-1}$. Then, we have $y\sigma 0y=\sigma^\prime y^{-1}0y=\sigma^\prime00y^{-1}y$. This contradicts the assumption that the standard form contains no potential cancellations. Similarly, in the latter case, we have $y^{-1}\sigma=\sigma^\prime y^{-1}$, which also contradicts the assumption. \end{proof} \end{lemma} Our goal in the rest of this section is to give, for each element of $G_0(n)$, a unique word satisfying the following: \begin{definition} \label{normal_form} Let $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ be in standard form with no potential cancellations and no potential contractions, such that $f$ is in the normal form in the sense of $F(n)$. Then we say that $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ is in the \textit{normal form}. \end{definition} See Section \ref{subsection_F(n)_definition} for the definition of the normal form of elements in $F(n)$. We obtain a normal form from an arbitrary word on $Z(n)$ through the following four steps: \begin{description} \item[Step 1] Convert an arbitrary word into a standard form (Proposition \ref{Prop_standard_forms_from_words}); \item[Step 2] Convert a standard form into a standard form that contains no potential cancellations; \item[Step 3] Convert a standard form that contains no potential cancellations into a standard form $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ which contains no potential cancellations and no potential contractions; \item[Step 4] Convert $f$ into $g$, where $g$ is the unique normal form in $F(n)$ (Section \ref{subsection_F(n)_definition}). \end{description} It remains to carry out steps 2 and 3.
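Step 1 is carried out by the moves of Definition \ref{definition_five_moves}; the expansion move, in particular, is a purely symbolic rewrite on generator words. A minimal Python sketch of this single move, with generators encoded as (letter, address, exponent) triples (the encoding is an assumption made only for illustration):

```python
def expand_y(s, n, sign=+1):
    """Expansion move of Definition (five moves):
       y_s      -> x_{0,s}      y_{s0}       y_{s(n-1)0}^{-1}  y_{s(n-1)(n-1)}
       y_s^{-1} -> x_{0,s}^{-1} y_{s00}^{-1} y_{s0(n-1)}       y_{s(n-1)}^{-1}
    Addresses are digit strings; d stands for the digit n-1."""
    d = str(n - 1)
    if sign == +1:
        return [("x0", s, +1),
                ("y", s + "0", +1),
                ("y", s + d + "0", -1),
                ("y", s + d + d, +1)]
    return [("x0", s, -1),
            ("y", s + "00", -1),
            ("y", s + "0" + d, +1),
            ("y", s + d, -1)]

# Expanding y_30 for n = 4 (so d = 3):
print(expand_y("30", 4))
# -> [('x0', '30', 1), ('y', '300', 1), ('y', '3030', -1), ('y', '3033', 1)]
```

Note that the expansion of $y_{30}$ produces exactly the addresses $300$, $3030$, $3033$ appearing in Example \ref{example_potential_contraction}; the contraction move undoes this rewrite.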
\begin{lemma}[{\cite[Lemma 4.5]{lodha2020nonamenable}} for $n$=2]\label{lem_for_step2} Let $(\y{s_1}{t_1}\cdots\y{s_m}{t_m})\y{s}{t}$ be in standard form such that the following hold: \begin{enumerate}[font=\normalfont] \item $t_1, \dots, t_m, t \in \{1, -1\}$; \item any two $s_i, s_j$ are independent; \item for any $\y{s_i}{t_i}$, the pair $(\y{s_i}{t_i}, \y{s}{t})$ is an adjacent pair that is a potential cancellation. \end{enumerate} Then by applying ER moves and cancellation moves, we obtain a standard form $f\y{u_1}{v_1}\cdots\y{u_k}{v_k}$ such that any two $u_i, u_j$ are independent and $v_1, \dots, v_k \in \{1, -1\}$. \begin{proof} We consider only the case $t=1$. We claim that $\x{0}{s}(s_i)$ is defined for every $i$. Indeed, for $s_i$ such that $s \subset s_i$, the only case where $\x{0}{s}(s_i)$ is not defined is the case $s_i=s0$. Then since the adjacent pair $(\y{s0}{t_i}, y_s)$ is not a potential cancellation whether $t_i$ is $1$ or $-1$, this contradicts assumption (3). Let $s_i^\prime=\x{0}{s}(s_i)$. By applying the ER move to $y_s$, we have \begin{align*} (\y{s_1}{t_1}\cdots\y{s_m}{t_m})y_s=\x{0}{s}(\y{s_1^\prime}{t_1}\cdots\y{s_m^\prime}{t_m})y_{s0}\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}. \end{align*} We note that $s0$, $s(n-1)0$, and $s(n-1)(n-1)$ are independent of each other. For every $\y{s_i^\prime}{t_i}$, one of $y_{s0}$, $\y{s(n-1)0}{-1}$, or $y_{s(n-1)(n-1)}$ corresponds to it, either as its inverse or as the other member of an adjacent pair. Let $\y{\sigma}{\tau}$ be the corresponding one. If it is the inverse, we apply a cancellation move. If it forms an adjacent pair, the distance in the $n$-ary tree between $s_i^\prime$ and $\sigma$ is smaller than the distance between $s_i$ and $s$. Thus, by iterating this process, we obtain the desired result. \end{proof} \end{lemma} The following lemma completes step 2.
\begin{lemma}[{\cite[Lemma 4.6]{lodha2020nonamenable}} for $n$=2]\label{lem_step2} By applying moves, any weak standard form can be rewritten into a standard form that contains no potential cancellations. \begin{proof} Let $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ $(t_i \in \{-1, 1 \})$ be a weak standard form. We proceed by induction on $m$. If $m\leq1$, since there exists no adjacent pair, the claim is clear. We assume that $m>1$ holds. By applying the induction hypothesis and Lemmas \ref{lem_ERmove_preserve_NPCA} and \ref{lem_depth_ER_moves} to $f\y{s_1}{t_1}\cdots\y{s_{m-1}}{t_{m-1}}$, we obtain a weak standard form $g\y{p_1}{q_1}\cdots \y{p_k}{q_k}$ with depth at least $\|s_m\|+1$ and without potential cancellations. Since $p_i \subset s_m$ does not hold by its depth, $g\y{p_1}{q_1}\cdots \y{p_k}{q_k}\y{s_m}{t_m}$ is in weak standard form. If there exists an adjacent pair that is a potential cancellation, it is only of the form $(\y{p_i}{q_i}, \y{s_m}{t_m})$. We record all such $\y{p_i}{q_i}$. By applying commuting moves, we obtain a weak standard form $h(\y{v_1}{k_1}\cdots\y{v_l}{k_l})(\y{u_1}{v_1}\cdots\y{u_o}{v_o})\y{s_m}{t_m}$ which satisfies the following: \begin{enumerate} \item $v_1, \dots, v_o \in \{ -1, 1\}$; \item For $j=1, \dots, o$, $\y{u_j}{v_j}$ and $\y{s_m}{t_m}$ form an adjacent pair that is a potential cancellation; \item All other adjacent pairs are not potential cancellations. \end{enumerate} Indeed, since $s_m$ is the shortest word, each adjacent element is the ``second shortest.'' Hence we can apply commuting moves to obtain such a word. We again use Lemmas \ref{lem_ERmove_preserve_NPCA} and \ref{lem_depth_ER_moves}: we can increase the depth by ER moves while remaining in weak standard form without potential cancellations. We apply ER moves to $h(\y{v_1}{k_1}\cdots\y{v_l}{k_l})(\y{u_1}{v_1}\cdots\y{u_o}{v_o})\y{s_m}{t_m}$ in two phases.
First, we apply ER moves to the part $h(\y{v_1}{k_1}\cdots\y{v_l}{k_l})$ so that we can apply ER moves to the word on $X(n)$ that appears when we apply Lemma \ref{lem_for_step2} to $(\y{u_1}{v_1}\cdots\y{u_o}{v_o})\y{s_m}{t_m}$. Secondly, we apply Lemma \ref{lem_for_step2} to $(\y{u_1}{v_1}\cdots\y{u_o}{v_o})\y{s_m}{t_m}$. By the same argument as in Lemma \ref{lem_ERmove_preserve_NPCA}, no new potential cancellations are produced in this process. Finally, by Lemma \ref{weakstandard_standard}, we have the desired result. \end{proof} \end{lemma} The following lemma completes step 3. \begin{lemma}[{\cite[Lemma 4.7]{lodha2020nonamenable}} for $n$=2]\label{lem_step3} Any standard form which contains no potential cancellations can be rewritten into a standard form that contains no potential cancellations and no potential contractions. \begin{proof} We apply contraction moves repeatedly. By Lemmas \ref{lem_contraction_move} and \ref{lem_contraction_move_preserve_NPCA}, the resulting word is again in standard form and contains no potential cancellations. Since each contraction move makes the word on $Y(n)$ in the standard form strictly shorter, the process terminates. \end{proof} \end{lemma} The only thing that remains to be proved is the uniqueness of the normal form. We will show this by contradiction. In order to do this, we define an ``invariant'' of the forms. \begin{definition} A calculation that contains no potential cancellations has \textit{exponent $m$} if $m$ is the number of occurrences of $y^{\pm1}$ after which the only symbols appearing are $0$, $n-1$, and $y^{\pm1}$. \end{definition} \begin{example} Let $n=4$. The string $y0y02y^{-1}(n-1)y00\cdots$ has exponent $2$. The string $y100\cdots$ has exponent $0$. \end{example} For the proof of uniqueness, we prepare two lemmas. \begin{lemma}[{\cite[Lemma 5.10]{lodha2016nonamenable}} for $n=2$]\label{lem_exponent_1} Let $\Lambda$ be a finite word on $\bm{N} \cup \{y, y^{-1}\}$ that contains no potential cancellations.
Let $m$ be the exponent of $\Lambda$. Then there exist a finite word $u$ on $\{0, n-1\}$ and $v$ in $\N^{<\mathbb{N}}$ such that $\Lambda u$ can be rewritten into $vy^m$ by substitutions. \begin{proof} By substitutions, we rewrite $\Lambda$ into $v^\prime\Lambda^\prime$, where $v^\prime$ is in $\N^{<\mathbb{N}}$ and $\Lambda^\prime$ is a word on $\{0, n-1, y, y^{-1}\}$. By the definition of exponent, $\Lambda^\prime$ also has exponent $m$, and by Lemma \ref{substitution_NPCA}, it also contains no potential cancellations. Hence, by identifying $\{0, n-1\}$ with $\{0, 1\}$, the claim reduces to the case $n=2$. \end{proof} \end{lemma} For $u, s$ in $\N^{<\mathbb{N}}$, we say that \textit{$u$ dominates $s$} if the condition $u \perp s$ or $u \supset s$ is satisfied. \begin{lemma}[{\cite[Lemma 4.8]{lodha2020nonamenable}} for $n=2$]\label{lem_exponent_2} Let $f\y{s_1}{t_1}\cdots\y{s_l}{t_l}$ and $g\y{p_1}{q_1}\cdots\y{p_m}{q_m}$ be in standard form which represent the same element in $G_0(n)$. Let $u$ be in $\N^{<\mathbb{N}}$ such that the following hold: \begin{enumerate}[font=\normalfont] \item $f(u)=:u_1$ and $g(u)=:u_2$ are defined; \item $u_1$ dominates $s_1, \dots, s_l$; \item $u_2$ dominates $p_1, \dots, p_m$. \end{enumerate} Let $\Theta$ be the calculation of $u$ and $f\y{s_1}{t_1}\cdots\y{s_l}{t_l}$, and let $\Lambda$ be the calculation of $u$ and $g\y{p_1}{q_1}\cdots\y{p_m}{q_m}$. Assume that these calculations contain no potential cancellations. Then the exponents of the two calculations are the same. \begin{proof} Let $e(\Lambda)$ and $e(\Theta)$ be the two corresponding exponents. We show this by contradiction. We can assume that $e(\Lambda)>e(\Theta)$ without loss of generality. Since exponents are non-negative integers, we have $k:=e(\Lambda)>0$. By Lemma \ref{lem_exponent_1}, we have $wy^k$ obtained from $\Lambda v$ by substitutions, where $v$ is a finite word on $\{0, n-1\}$.
Since $f\y{s_1}{t_1}\cdots\y{s_l}{t_l}$ and $g\y{p_1}{q_1}\cdots\y{p_m}{q_m}$ are equal as elements of $G_0(n)$, the output strings of \begin{align*} \Theta v 0^{2^k}(n-1)0^{2^k}(n-1)\cdots \shortintertext{and} \Lambda v 0^{2^k}(n-1)0^{2^k}(n-1)\cdots \end{align*} are equal as elements of $\N^\mathbb{N}$. Then the latter is \begin{align*} \Lambda v 0^{2^k}(n-1)0^{2^k}(n-1)\cdots = w y^k 0^{2^k}(n-1)0^{2^k}(n-1)\cdots=w0(n-1)^{2^k}0(n-1)^{2^k}\cdots. \end{align*} Since $e(\Lambda)>e(\Theta)$ holds, by the definition of $y$, this yields a contradiction. \end{proof} \end{lemma} \begin{corollary} \label{G_0(n)_presentation} The group obtained from the presentation $\langle Z(n) \mid R(n)\rangle$ is the $n$-adic Lodha--Moore group $G_0(n)$. \begin{proof} We show that any standard form which contains no potential cancellations and represents an element of $F(n)$ is always a word on $X(n)$. Let $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}=g$ be an equality of standard forms, where $f, g \in F(n)$ and $m \geq 1$. Assume that they contain no potential cancellations. Let $u=s_100\cdots$ in $\N^\mathbb{N}$. Then the exponent of the calculation of $f^{-1}(u)$ and $f\y{s_1}{t_1}\cdots\y{s_m}{t_m}$ is strictly greater than $0$, and that of the calculation of $f^{-1}(u)$ and $g$ is $0$. By Lemma \ref{lem_exponent_2}, this is a contradiction. In particular, for a word on $Z(n)$ representing the identity element of $G_0(n)$, we can rewrite it into a standard form that contains no potential cancellations by Proposition \ref{Prop_standard_forms_from_words} and Lemma \ref{lem_step2}. Since this standard form is a word on $X(n)$, as shown above, we can reduce it to the empty word by using the relations of $F(n)$. \end{proof} \end{corollary} \begin{remark}\label{remark_uniqueF(n)} From the above argument, the uniqueness of the normal form of the elements of $F(n)$ in $G_0(n)$ follows. \end{remark} The following completes the proof of the uniqueness.
\begin{theorem}[{\cite[Theorem 4.4]{lodha2020nonamenable}} for $n=2$]\label{thm_normal_form_uniqueness} For each element in $G_0(n)$, its normal form is unique. \begin{proof} By Remark \ref{remark_uniqueF(n)}, we can assume that every normal form in the following argument is not in $F(n)$. We show this by contradiction. Let $f\y{s_1}{t_1}\cdots\y{s_l}{t_l}$ and $g\y{p_1}{q_1}\cdots\y{p_m}{q_m}$ be different normal forms representing the same element. We can assume that $s_l \leq p_m$ without loss of generality. One of the following three holds: \begin{enumerate} \item $s_l=p_m$ and $t_l=q_m$; \item $s_l=p_m$ and $t_l \neq q_m$; \item $s_l<p_m$. \end{enumerate} First, we show that it is sufficient to consider only case (3). In case (1), since $\y{s_l}{t_l}=\y{p_m}{q_m}$, we start from $f\y{s_1}{t_1}\cdots\y{s_{l-1}}{t_{l-1}}$ and $g\y{p_1}{q_1}\cdots\y{p_{m-1}}{q_{m-1}}$. They are two different standard forms representing the same element. They contain no potential cancellations but possibly contain a potential contraction. Since we only cancel $\y{s_l}{t_l}=\y{p_m}{q_m}$, the only case that contains a potential contraction at this step is when $\y{s_l}{t_l}$ or $\y{p_m}{q_m}$ or both play the role of $\y{s(n-1)}{\pm1}$ for some $s$ (see Definition \ref{def_potential_contraction} (1)). By the assumption of the standard form, the case with $\y{s0}{\pm 1}$ (Definition \ref{def_potential_contraction} (2)) does not occur.
We further divide case (1) into the following four subcases: \begin{description} \item[\rm (1-1)] Both $f\y{s_1}{t_1}\cdots\y{s_{l-1}}{t_{l-1}}$ and $g\y{p_1}{q_1}\cdots\y{p_{m-1}}{q_{m-1}}$ contain a potential contraction, and $t_{l-1}=q_{m-1}$; \item[\rm (1-2)] Both $f\y{s_1}{t_1}\cdots\y{s_{l-1}}{t_{l-1}}$ and $g\y{p_1}{q_1}\cdots\y{p_{m-1}}{q_{m-1}}$ contain a potential contraction, and $t_{l-1}\neq q_{m-1}$; \item[\rm (1-3)] $f\y{s_1}{t_1}\cdots\y{s_{l-1}}{t_{l-1}}$ contains a potential contraction, and $g\y{p_1}{q_1}\cdots\y{p_{m-1}}{q_{m-1}}$ contains no potential contractions, or vice versa; \item[\rm (1-4)] $f\y{s_1}{t_1}\cdots\y{s_{l-1}}{t_{l-1}}$ and $g\y{p_1}{q_1}\cdots\y{p_{m-1}}{q_{m-1}}$ contain no potential contractions. \end{description} In case (1-1), we consider $f\y{s_1}{t_1}\cdots\y{s_{l-2}}{t_{l-2}}$ and $g\y{p_1}{q_1}\cdots\y{p_{m-2}}{q_{m-2}}$ instead of the original words in order to eliminate the potential contraction part. By the assumption of containing a potential contraction and being in standard form, $s_l=p_m=s(n-1)$, $s_{l-1}=p_{m-1}=s(n-1)(n-1)$, and $t_{l-1}=q_{m-1}>0$ hold. Then $f\y{s_1}{t_1}\cdots\y{s_{l-2}}{t_{l-2}}$ and $g\y{p_1}{q_1}\cdots\y{p_{m-2}}{q_{m-2}}$ are different normal forms. Indeed, the only case where a potential contraction is generated by canceling $\y{s_{l-1}}{t_{l-1}}=\y{p_{m-1}}{q_{m-1}}$ is when $\y{s_{l-1}}{t_{l-1}}=\y{p_{m-1}}{q_{m-1}}$ plays the role of $y_{s^\prime(n-1)}$ for $s^\prime=s(n-1)$, but this does not occur since $\y{s^\prime0}{k_1}=\y{s(n-1)0}{k_1}$ and $\y{s^\prime0}{k_2}=\y{s(n-1)0}{k_2}$ occur in the two forms, respectively, and $k_1, k_2<0$ hold by the assumption of potential contractions. Thus, we get ``shorter'' normal forms that satisfy all the assumptions. In case (1-2), we can assume that $0<t_{l-1}<q_{m-1}$ without loss of generality. We consider $f\y{s_1}{t_1}\cdots\y{s_{l-2}}{t_{l-2}}$ and $g\y{p_1}{q_1}\cdots\y{p_{m-1}}{q_{m-1}-t_{l-1}}$ instead.
For the same reason as for case (1-1), the former is in normal form. By the assumption of a potential contraction, $\y{s_{l-2}}{t_{l-2}}=\y{s(n-1)i\sigma}{t_{l-2}}$, where $i$ is in $\N$ and $\sigma$ is a word in $\N^{<\mathbb{N}} \cup \{\epsilon\}$. Note that $s(n-1)0 \leq s(n-1)i \sigma$ holds. For the latter, by performing contraction moves as in step 3, the last letter is $\y{u}{k}$ where $u \subseteq s$. This is the situation in case (3). In case (1-3), we only consider the former situation. By the assumptions, $s_l=p_m=s(n-1)$ holds for some $s$. By applying contraction moves to $f\y{s_1}{t_1}\cdots\y{s_{l-1}}{t_{l-1}}$ as in step 3, the last letter is $\y{u}{k}$ where $u \subseteq s \subset s_l=p_m$. Since $g\y{p_1}{q_1}\cdots\y{p_{m-1}}{q_{m-1}}$ is in normal form, and in particular in standard form, we have $y_{p_{m-1}}\neq y_u$. This is the situation in case (3). In case (1-4), both are ``shorter'' normal forms that satisfy all the assumptions. Therefore, any subcase of case (1) either yields shorter words or reduces to case (3). In case (2), we can assume that neither $0<q_m<t_l$ nor $t_l<q_m<0$ holds, without loss of generality. Consider $f\y{s_1}{t_1}\cdots\y{s_{l-1}}{t_{l-1}}$ and $g\y{p_1}{q_1}\cdots\y{p_{m}}{q_{m}-t_{l}}$ instead. The latter is in normal form, but as in case (1), the former may contain a potential contraction. Similarly, we divide case (2) into two subcases: \begin{description} \item[\rm (2-1)] $f\y{s_1}{t_1}\cdots\y{s_{l-1}}{t_{l-1}}$ contains a potential contraction; \item[\rm (2-2)] $f\y{s_1}{t_1}\cdots\y{s_{l-1}}{t_{l-1}}$ contains no potential contractions. \end{description} In case (2-1), the same argument as in (1-3) can be applied. In case (2-2), since $s_{l-1}\neq p_m$, this is the situation in case (3). We note that case (1-1) or (1-4) happens only finitely many times due to the uniqueness of the normal form of $F(n)$. Thus, we consider case (3). We only consider the case $q_m>0$.
We note that $f\y{s_1}{t_1}\cdots\y{s_l}{t_l}\y{p_m}{-1}$ is in standard form since $s_l < p_m$. One of the following holds: \begin{enumerate}[label=(\roman*)] \item there exists an infinite word $\sigma$ on $\{0, n-1\}$ such that for any finite word $\sigma_1 \subset \sigma$, $p_m\sigma_1$ is not in $\{s_1, \dots, s_l \}$; \item there exists an adjacent pair of the form $p_m, s_i$ where $p_mu=s_i$ for $u$ on $\{0, n-1\}$ which is not a potential cancellation. \end{enumerate} Indeed, if both are false, it contradicts that $f\y{s_1}{t_1}\cdots\y{s_l}{t_l}$ contains no potential contractions. Then, in either case, there exists a finite word $w$ on $\{0, n-1\}$ such that the calculation $\Lambda$ of $f\y{s_1}{t_1}\cdots\y{s_l}{t_l}\y{p_m}{-1}$ and $f^{-1}(p_mw)$ contains no potential cancellations. By expanding $w$ if necessary, we can consider the following three calculations: \begin{enumerate}[label=(\alph*)] \item $\Theta$ is the calculation of $g\y{p_1}{q_1}\cdots\y{p_m}{q_m-1}$ and $f^{-1}(p_mw)$; \item $\Lambda^\prime$ is the calculation of $f\y{s_1}{t_1}\cdots\y{s_l}{t_l}$ and $f^{-1}(p_mw)$; \item $\Theta^\prime$ is the calculation of $g\y{p_1}{q_1}\cdots\y{p_m}{q_m}$ and $f^{-1}(p_mw)$. \end{enumerate} None of these calculations contains a potential cancellation. Indeed, for $\Lambda^\prime$ and $\Theta^\prime$, it is clear from the assumption of normal form. For $\Theta$, $q_m-1$ is either zero or strictly greater than zero since $q_m>0$. In both cases, it is clear from the definitions and the assumption that $g\y{p_1}{q_1}\cdots\y{p_m}{q_m}$ contains no potential cancellations. Let $e(\Theta)$, $e(\Theta^\prime)$, $e(\Lambda)$, and $e(\Lambda^\prime)$ be their exponents, respectively. Since we have \begin{align*} f\y{s_1}{t_1}\cdots\y{s_l}{t_l}\y{p_m}{-1}=g\y{p_1}{q_1}\cdots\y{p_m}{q_m-1} \end{align*} as elements of $G_0(n)$, by Lemma \ref{lem_exponent_2}, $e(\Lambda)=e(\Theta)$ holds. Similarly, we have $e(\Theta^\prime)=e(\Lambda^\prime)$.
By the construction of $\Lambda$ and $\Lambda^\prime$, we have $e(\Lambda)>e(\Lambda^\prime)$. Similarly, we have $e(\Theta)\leq e(\Theta^\prime)$ since $q_m>0$. Combining them, we obtain \begin{align*} e(\Theta)\leq e(\Theta^\prime)=e(\Lambda^\prime)<e(\Lambda), \end{align*} which is a contradiction. If $q_m<0$, we can prove the same for $f\y{s_1}{t_1}\cdots\y{s_l}{t_l}y_{p_m}$ and $g\y{p_1}{q_1}\cdots\y{p_m}{q_m+1}$. \end{proof} \end{theorem} \section{Several properties of $G_0(n)$}\label{section_G0(n)_properties} We show that $G_0(n)$ has ``expected'' properties. \subsection{The finite presentation of $G_0(n)$} Using the infinite presentation given in Section \ref{infinite_presentation}, we construct a finite presentation of $G_0(n)$ with respect to the generating set $\{x_0, \dots, x_{n-2}, \x{0}{(n-1)}, y_{(n-1)0}\}$. As in \cite[Theorem 3.3]{lodha2016nonamenable}, we use two properties of $F(n)$. Since $G_0=G_0(2)$ is already known to be finitely presented, we assume that $n>2$. For $s$ in $\N^{<\mathbb{N}}$ that is neither $0\cdots0$ nor $(n-1)\cdots(n-1)$, we define a map $f_s$ in $F(n)$ as follows: Let $a$ be the sum of the entries of $s$, and let $d, k$ be the integers such that $a=(n-1)d+k$ holds, where $0\leq d$ and $0\leq k<n-1$. Then $f_s$ is defined as in Figure \ref{definition_fs}. \begin{figure}[tbp] \centering \includegraphics[width=100mm]{definition_fs.pdf} \caption{The definition of $f_s$. The rightmost carets are for adjustment. The triangle labeled $s$ denotes the minimal $n$-ary tree containing $s$. } \label{definition_fs} \end{figure} We fix a word on $\{x_0, \dots, x_{n-2}, \x{0}{(n-1)}\}$ representing $f_s$. Since the leaf corresponding to $s$ in the minimal $n$-ary tree containing $s$ is the $a$-th leaf counting from the left (from $0$), we note that $f_s((n-1)k)=s$ holds. \begin{lemma}\label{lem_finitely_presented1} For the above $f_s$, we have $\x{i}{(n-1)k}f_s=f_s \x{i}{s}$ and $y_{(n-1)0}f_s=f_sy_s$.
\begin{sproof} We note that we have $f_s^{-1}\x{i}{(n-1)k}f_s=\x{i}{f_s((n-1)k)}=\x{i}{s}$. Indeed, if $w=sw^\prime \in \N^\mathbb{N}$, the left-hand side sends $w$ to $sx_i(w^\prime)$ via $(n-1)kw^\prime$ and $(n-1)kx_i(w^\prime)$. If $w \neq sw^\prime$, since $f_s^{-1}(w)$ does not contain $(n-1)k$ as a prefix, $\x{i}{(n-1)k}(f_s^{-1}(w))=f_s^{-1}(w)$. The other statement can be shown in the same way. \end{sproof} \end{lemma} \begin{lemma}\label{lem_finitely_presented2} Let $u, v$ be in $\N^{<\mathbb{N}}$ such that $u<v$, $u \perp v$, and $y_u, y_v$ are in $Y(n)$. Then there exists $g$ in $F(n)$ such that $g(0(n-1))=u$ and $g((n-1)0)=v$ hold. \begin{sproof} As in the construction of $f_s$, by adding some carets, we can obtain two trees (which may not yet form a tree diagram) such that $0(n-1) \mapsto u$. Since $n>2$, we can add some carets between the two leaves corresponding to $0(n-1)$ and $(n-1)0$ so that $(n-1)0 \mapsto v$. Finally, we add some carets to the rightmost part if necessary, to make it a tree diagram. See Figure \ref{construction_g_uv} for a sketch of the construction of $g$. \end{sproof} \end{lemma} \begin{figure}[tbp] \centering \includegraphics[width=80mm]{construction_g_uv.pdf} \caption{The construction of $g$ in Lemma \ref{lem_finitely_presented2}. } \label{construction_g_uv} \end{figure} \begin{theorem}[{\cite[Theorem 3.3]{lodha2016nonamenable}} for $n=2$]\label{Thm_finitely_presented_G0(n)} $G_0(n)$ admits a finite presentation. \begin{proof} Since $F(n)$ is finitely presented \cite[Theorem 4.17]{brown1987finiteness}, relation (1) in $R(n)$ is expressed by finitely many relations. Thus we only consider the other three relations (2), (3), and (4). For the relation $y_t\x{i}{s}=\x{i}{s}y_{\x{i}{s}(t)}$, we can rewrite as follows: \begin{align*} y_t\x{i}{s}&=f_t^{-1}y_{(n-1)0}f_t \x{i}{s}, \\ \x{i}{s}y_{\x{i}{s}(t)}&=\x{i}{s}f^{-1}_{\x{i}{s}(t)}y_{(n-1)0}f_{\x{i}{s}(t)}.
\end{align*} Thus, it is sufficient to show that \begin{align} y_{(n-1)0}f_t \x{i}{s}f^{-1}_{\x{i}{s}(t)}=f_t \x{i}{s}f^{-1}_{\x{i}{s}(t)}y_{(n-1)0}, \label{relation2_into_finite} \end{align} by using a finite number of relations. We note that $f_t \x{i}{s}f^{-1}_{\x{i}{s}(t)}$ maps $(n-1)0$ to itself. Therefore, $f_t \x{i}{s}f^{-1}_{\x{i}{s}(t)} \in F(n)$ is rewritten into a word on the finite set \begin{align*} X_{(n-1)0}(n):= &\{x_0x_i^{-1}, \x{i}{(n-1)} \mid i=1, \dots, n-2 \} \\ &\cup \{\x{i}{0}, \x{i}{1}, \dots, \x{i}{n-2}, \x{i}{(n-1)1}, \dots, \x{i}{(n-1)(n-1)}\mid i=0, \dots, n-2 \} \\ &\cup \{ \x{0}{0(n-1)}, \dots, \x{0}{(n-2)(n-1)}, \x{0}{(n-1)1(n-1)}, \dots, \x{0}{(n-1)(n-1)(n-1)}\}, \end{align*} by finitely many relations of $F(n)$. Since each element commutes with $y_{(n-1)0}$, the finite collection of relations $hy_{(n-1)0}=y_{(n-1)0}h$, where $h$ is in $X_{(n-1)0}(n)$, implies equation \eqref{relation2_into_finite}. For the relation $y_sy_t=y_ty_s$ where $s \perp t$, we note that we have $y_{(n-1)0}y_{0(n-1)}=y_{0(n-1)}y_{(n-1)0}$. By using this relation and relation (2), we have \begin{align*} y_sy_t=f^{-1}_sy_{(n-1)0}f_sf^{-1}_ty_{(n-1)0}f_t=g^{-1}y_{0(n-1)}gg^{-1}y_{(n-1)0}g=g^{-1}y_{(n-1)0}gg^{-1}y_{0(n-1)}g=y_ty_s, \end{align*} where $g$ is an element such that $g(0(n-1))=s$ and $g((n-1)0)=t$ (Lemma \ref{lem_finitely_presented2}). Finally, for the relation $y_s=\x{0}{s}y_{s0}\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}$, we note that we have $y_{(n-1)0}=\x{0}{(n-1)0}y_{(n-1)00}\y{(n-1)0(n-1)0}{-1}y_{(n-1)0(n-1)(n-1)}$. Since $f_s((n-1)0w)=sw$ for $w \in \N^{<\mathbb{N}}$, by using relation (2), we have \begin{align*} y_s&=f^{-1}_sy_{(n-1)0}f_s \\ &=f^{-1}_s\x{0}{(n-1)0}y_{(n-1)00}\y{(n-1)0(n-1)0}{-1}y_{(n-1)0(n-1)(n-1)}f_s \\ &=f^{-1}_s\x{0}{(n-1)0}f_sf^{-1}_sy_{(n-1)00}f_sf^{-1}_s\y{(n-1)0(n-1)0}{-1}f_sf^{-1}_sy_{(n-1)0(n-1)(n-1)}f_s \\ &=\x{0}{s}y_{s0}\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}. \end{align*} This completes the proof.
\end{proof} \end{theorem} \subsection{Nonamenability of $G_0(n)$} In this section, we discuss embeddings of the groups $G_0(n)$. For nonamenability, we only use the fact that a subgroup of an amenable group is also amenable \cite[Theorem 18.29 (1)]{dructu2018geometric}. The idea for the following theorem comes from \cite{burillo2001metrics}. \begin{theorem}\label{prop_embedding_G_0(p)G_0(q)} Let $p, q \geq 2$ and assume that there exists $d$ in $\mathbb{N}$ such that $q-1=d(p-1)$ holds. Then there exists an embedding $I_{p, q}: G_0(p) \to G_0(q)$. Moreover, the equality $I_{p,r}=I_{p, q}I_{q, r}$ holds for the maps $I_{p, q}$, $I_{q, r}$, and $I_{p, r}$ defined for $r\geq q \geq p \geq 2$ such that each pair satisfies the condition. \begin{proof} For the sake of clarity, we label elements of $G_0(p)$ with ``tildes'' and elements of $G_0(q)$ with ``hats.'' We first recall the definition of the embedding $F(p) \to F(q)$ given in \cite[Section 3, example (3)]{burillo2001metrics}. This homomorphism is defined by \begin{align*} &\tilde{x}_0\mapsto\hat{x}_0 & &\tilde{x}_1 \mapsto \hat{x}_d & &\cdots & &\tilde{x}_{p-2} \mapsto \hat{x}_{d(p-2)}& &\tx{0}{(p-1)} \mapsto \hx{0}{(q-1)}, \end{align*} and extends to a (quasi-isometric) embedding \cite[Theorem 6]{burillo2001metrics}. By considering $F(p)$ and $F(q)$ as pairs of $p$-ary trees and $q$-ary trees, this embedding is regarded as a ``caret replacement.'' Indeed, for every pair of $p$-ary trees, inserting $d-1$ edges between every pair of adjacent edges of each $p$-caret corresponds to the embedding. See Figure \ref{embedding_F3F7}, for example. \begin{figure}[tbp] \centering \includegraphics[width=120mm]{embedding_F3F7.pdf} \caption{Example of the embedding for $p=3$ and $q=7$.
} \label{embedding_F3F7} \end{figure} Define the map $i_{p, q}: \Pset^{<\mathbb{N}}=\{0, \dots, p-1\}^{<\mathbb{N}} \to \Qset^{<\mathbb{N}}=\{0, \dots, q-1\}^{<\mathbb{N}}$ by setting \begin{align*} w_1 w_2\cdots w_k \mapsto (dw_1)(dw_2)\cdots (dw_k), \end{align*} namely, we multiply each entry of a given word by $d$. In addition, define the map $Y(p)\to Y(q)$ by setting $\tilde{y}_s \mapsto \hat{y}_{i_{p, q}(s)}$. By the definition of $i_{p, q}$, we have that $\hat{y}_{i_{p, q}(s)}$ is in $Y(q)$. By combining the two maps $F(p) \to F(q)$ and $Y(p) \to Y(q)$, we obtain the map $I_{p, q}: F(p)\cup Y(p) \to F(q)\cup Y(q)$. This map can be extended to a homomorphism $I_{p, q}: G_0(p) \to G_0(q)$. Indeed, since $i_{p, q}(\tx{i}{s}(t))=\hx{di}{i_{p, q}(s)}(i_{p, q}(t))$ and $I_{p, q}(\tilde{y}_s)=\hat{y}_{i_{p, q}(s)}$ hold, we can verify that the relations of the infinite presentation (Corollary \ref{G_0(n)_presentation}) are preserved under $I_{p, q}$ by direct calculation. We claim that $I_{p, q}: G_0(p) \to G_0(q)$ is injective. We show this by contradiction. Assume that $\operatorname{Ker}(I_{p, q})$ is not trivial. By the construction, the restriction $I_{p, q}\restr{F(p)}$ coincides with the embedding $F(p) \to F(q)$ mentioned above. Hence if there exists $x$ in $\operatorname{Ker}(I_{p, q})$ that is not the identity, then $x$ is not in $F(p)$. We note that the map $I_{p, q}$ preserves being in normal form. In particular, if $x$ is not in $F(p)$, then $I_{p, q}(x)$ is not in $F(q)$ by the uniqueness (Theorem \ref{thm_normal_form_uniqueness}). This implies that $I_{p, q}(x)$ is not the identity, a contradiction. Let $d_1$, $d_2$, and $d_3$ be natural numbers such that $q-1=d_1(p-1)$, $r-1=d_2(q-1)$, and $r-1=d_3(p-1)$ hold, respectively. We note that $d_3=d_1d_2$ holds. By considering the definitions of the map $i_{p, q}$ and the homomorphism from $F(p)$ to $F(q)$, the equality of $I_{p,r}$ and $I_{p, q}I_{q, r}$ follows immediately.
\end{proof} \end{theorem} \begin{corollary} Let $n\geq2$. Then $G_0(n)$ is nonamenable. \begin{proof} Since we have $n-1=(n-1)(2-1)$ for every $n$, by Theorem \ref{prop_embedding_G_0(p)G_0(q)}, we have an embedding from $G_0(2)=G_0$ into $G_0(n)$. This implies that $G_0(n)$ has a subgroup that is isomorphic to $G_0$. Since $G_0$ is nonamenable \cite[Theorem 1.1]{lodha2016nonamenable}, $G_0(n)$ is also nonamenable. \end{proof} \end{corollary} \begin{corollary} Let $s_i:=2^{i-1}+1$. Then the sequence $G_0(s_1), G_0(s_2), \dots$ forms an inductive system of groups. \end{corollary} \subsection{$G_0(n)$ has no free subgroups} We assume $n \geq 3$. In order to show that $G_0(n)$ contains no free subgroups, we will follow \cite[Section 3]{brin1985groups}. The difference in this paper is that the domain and range of the maps are $\N^\mathbb{N}$ instead of $\mathbb{R}$. For an element $g$ in $G_0(n)$, we write the set-theoretic support $\supp{g}$ for the set $\{\xi \in \N^\mathbb{N} \mid g(\xi)\neq\xi\}$. Although we equip $\N^\mathbb{N}$ with the topology homeomorphic to that of the Cantor set, which is totally disconnected, we can consider ``connected components'' of $\supp{g}$, using the total order of $\N^\mathbb{N}$. For $a, b \in \N^\mathbb{N}$ with $a<b$, set $(a, b):=\{\xi \in \N^\mathbb{N} \mid a<\xi<b\}$. \begin{lemma}\label{lem_supp_finite_component} For $g \in G_0(n)$, there exists a sequence $a_1<a_2 \leq a_3<a_4\leq \cdots < a_{2m}$ in $\N^\mathbb{N}$ such that we have \begin{align*} \supp{g}=(a_1, a_2) \sqcup (a_3, a_4) \sqcup \cdots \sqcup (a_{2m-1}, a_{2m}). \end{align*} We call each $(a_{2i-1}, a_{2i})$ a \textit{connected component} of $\supp{g}$. \begin{proof} Let $f\y{s_1}{t_1}\cdots \y{s_m}{t_m}$ be the normal form of $g$.
We decompose $\N^\mathbb{N}$ into disjoint sets $\{n_1\eta \mid \eta \in \N^\mathbb{N} \}, \dots, \{n_p\eta \mid \eta \in \N^\mathbb{N}\}$ by using $n_1, \dots, n_p$ in $\N^{<\mathbb{N}}$ such that the following hold: \begin{enumerate} \item For each $n_l$, $f(n_l)$ is defined; \item For each $s_l$, there exists $n_{l^\prime}$ such that $s_l \leq f(n_{l^\prime})$ holds. \end{enumerate} We show the claim in the lemma by contradiction. Assume that there exist $a_1<b_1<\cdots$ such that each $a_i$ is a fixed point, and each $b_i$ is in $\supp{g}$. Since $\N^\mathbb{N}=\{n_1\eta \mid \eta \in \N^\mathbb{N} \}\cup \dots \cup \{n_p\eta \mid \eta \in \N^\mathbb{N}\}$ holds, we can assume that each $a_i$ and $b_i$ is in some set $\{n_j\eta \mid \eta \in \N^\mathbb{N} \}$ without loss of generality. Since $a_{i}<b_i<a_{i+1}$ holds, we can write each $a_i$ and $b_i$ as $n_ja_i^\prime$ and $n_jb_i^\prime$, respectively. We claim that each $a_i^\prime$ and $b_i^\prime$ can be replaced by words in $\{0, n-1\}^{\mathbb{N}}$, respectively. Indeed, if $a_i^\prime$ is not in $\{0, n-1\}^{\mathbb{N}}$, then there exist $\hat{a}_i \in \{0, n-1\}^{<\mathbb{N}}\cup \{\epsilon\}$, $k \in \{1, \dots, n-2\}$, and $a_i^{\prime \prime} \in \N^\mathbb{N}$ such that $a_i^\prime=\hat{a}_i k a_i^{\prime \prime}$ holds. Then, by the definition of $y$, $g(n_j\hat{a}_i k)$ is defined and equal to $n_j\hat{a}_i k$ since $a_i=n_j \hat{a}_i k a_i^{\prime \prime}$ is a fixed point of $g$. Moreover, for any other $k^\prime \in \{1, \dots, n-2\}$, we have $g(n_j\hat{a}_i k^\prime)=n_j\hat{a}_i k^\prime$. This implies that we have $g(n_j\hat{a}_i 0 \overline{(n-1)})=n_j\hat{a}_i 0 \overline{(n-1)}$, where $\overline{(n-1)}$ denotes the element $(n-1)(n-1)(n-1)\cdots $ in $\N^\mathbb{N}$. By a similar argument for $b_i^\prime$, we have $g(n_j\hat{b}_i k)=n_j\tilde{b}_i k$, where $\hat{b}_i \neq \tilde{b}_i$.
Since we have $g(n_j\hat{b}_i k^\prime)\neq n_j\hat{b}_i k^\prime$ for any other $k^\prime$, $n_j\hat{b}_i 0\overline{(n-1)}$ is in $\supp{g}$. Even if each $a_i$ or $b_i$ is replaced by the above one, the order $a_1<b_1<\cdots$ is still preserved. Let $w_1, \dots, w_t$ be in $\{0, \dots, n-1\}$ such that $f(n_j)=w_1 \cdots w_t$ holds. For $n_j \zeta \in \N^\mathbb{N}$ and $f\y{s_1}{t_1}\cdots \y{s_m}{t_m}$, let $w_1y^{l_1}w_2 y^{l_2}\cdots w_t y^{l_t} \zeta$ be their calculation. Then some $l_q$ may be zero. Assume that $f(n_j)=w_1 \cdots w_t$ is not in $\{0, n-1\}^{<\mathbb{N}}$ and let $z$ be the maximal number such that $w_z$ is in $\{1, \dots, n-2\}$. By applying finitely many substitutions to all $y^{l_{z^\prime}}$ ($z^\prime<z$), we obtain a word $w y^{l_1^\prime} w_2^\prime y^{l_2^\prime} \cdots w_{t^\prime}^\prime y^{l_{t^\prime}^\prime}\zeta$, where $w_j^\prime \in \{0, n-1\}$. Since $n_j a_i$ is the output string of $w y^{l_1^\prime} w_2^\prime y^{l_2^\prime} \cdots w_{t^\prime}^\prime y^{l_{t^\prime}^\prime} a_i$, we have $w \leq n_j$. Indeed, it is clear that $w \perp n_j$ does not hold, and if $w > n_j$ holds, then it contradicts the facts that $a_i$ is in $\{0, n-1\}^\mathbb{N}$ and that the last letter of $w$ is $w_z$. Let $n_j=w n_j^\prime$. Then we note that $n_j^\prime$ is in $\{0, n-1\}^{<\mathbb{N}} \cup \{\epsilon\}$ since $y^{l_1^\prime} w_2^\prime y^{l_2^\prime} \cdots w_{t^\prime}^\prime y^{l_{t^\prime}^\prime}a_i$ is in $\{0, n-1\}^\mathbb{N}$ and its output string equals $n_j^\prime a_i$. We recall that $G_0(n)$ has a subgroup that is isomorphic to $G_0$ (Theorem \ref{prop_embedding_G_0(p)G_0(q)}). We construct an element $g^\prime$ which is in this subgroup and satisfies the assumption of $g$.
Let $g^\prime$ be the element represented by \begin{align*} f^\prime \y{(n-1)0w_2^\prime \cdots w_{t^\prime}^\prime}{l_{t^\prime}^\prime} \y{(n-1)0w_2^\prime \cdots w_{t^\prime-1}^\prime}{l_{t^\prime-1}^\prime} \cdots \y{(n-1)0w_2^\prime}{l_2^\prime} \y{(n-1)0}{l_1^\prime}, \end{align*} where $f^\prime$ is a word on $\{x_0, \x{0}{(n-1)}\}$ satisfying $f^\prime((n-1)0n_j^\prime)=(n-1)0w_2^\prime \cdots w_{t^\prime}^\prime$. Since $n_j^\prime$ and $w_2^\prime \cdots w_{t^\prime}^\prime$ are in $\{0, n-1\}^{<\mathbb{N}}$, such an element $f^\prime$ exists. By the construction, $g^\prime$ is in the subgroup that is isomorphic to $G_0$. In addition, each $(n-1)0n_j^\prime a_i$ is a fixed point of $g^\prime$, and $(n-1)0n_j^\prime b_i$ is in $\supp{g^\prime}$. By the construction of the map $I_{2, n}$ in Theorem \ref{prop_embedding_G_0(p)G_0(q)}, if we restrict the domain of $g^\prime$ to $\{0, n-1\}^\mathbb{N}$ and identify $n-1$ with $1$, then we obtain an element of $G_0$ which satisfies all the assumptions about fixed points and elements of the support. However, this does not happen by Proposition \ref{proposition_piecewiseprojective}, a contradiction. Finally, assume that $f(n_j)=w_1 \cdots w_t$ is in $\{0, n-1\}^{<\mathbb{N}}$. Since $w_1y^{l_1}w_2 y^{l_2}\cdots w_t y^{l_t} a_i$ is in $\{0, n-1\}^{\mathbb{N}}$, $n_j$ is also in $\{0, n-1\}^{<\mathbb{N}}$. Therefore, $f$ is represented by a word on $\{x_0, \x{0}{(n-1)}\}$. We again note that some $l_q$ may be zero. Then $g^\prime=f\y{w_1\cdots w_t}{l_t} \cdots \y{w_1w_2}{l_2} \y{w_1}{l_1}$ is in $G_0(n)$, and an element of $G_0$ can be similarly constructed from $g^\prime$. \end{proof} \end{lemma} \begin{remark} By the construction, the sequence $a_1, \dots, a_{2m}$ in Lemma \ref{lem_supp_finite_component} is uniquely determined. \end{remark} For a group $G$, $[G, G]$ denotes its commutator subgroup. For two elements $x, y \in G$, $[x, y]$ denotes the commutator $xyx^{-1}y^{-1}$.
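The commutator notation just fixed goes together with an elementary fact used repeatedly below: bijections with disjoint set-theoretic supports commute, so their commutator is the identity. A minimal computational sketch over a finite set (all helper names here are hypothetical and purely illustrative, not part of the proof):

```python
def compose(f, g):
    """Return the composition f . g (apply g first) as a dict on g's domain."""
    return {p: f[g[p]] for p in g}

def inverse(f):
    """Invert a bijection given as a dict."""
    return {v: k for k, v in f.items()}

def commutator(x, y):
    """The commutator [x, y] = x y x^{-1} y^{-1}."""
    return compose(compose(x, y), compose(inverse(x), inverse(y)))

def support(f):
    """Set-theoretic support: the points moved by f."""
    return {p for p in f if f[p] != p}

# Two permutations of {0,...,5} with disjoint supports.
a = {0: 1, 1: 0, 2: 2, 3: 3, 4: 4, 5: 5}   # swaps 0 and 1
b = {0: 0, 1: 1, 2: 3, 3: 2, 4: 4, 5: 5}   # swaps 2 and 3

assert support(a).isdisjoint(support(b))
identity = {p: p for p in range(6)}
assert commutator(a, b) == identity        # disjoint supports => trivial commutator
```

The same mechanism, for order-preserving bijections of $\N^\mathbb{N}$, produces the copy of $\mathbb{Z}^2$ in the proof of the next theorem.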
\begin{theorem}\label{Theorem_no_free_group} The group $G_0(n)$ has no free subgroups. \begin{proof} Similarly to \cite{brin1985groups}, we will show that if $G_1$ is a subgroup of $[G_0(n), G_0(n)]$, then either $G_1$ contains a subgroup isomorphic to $\mathbb{Z}^2$ or $[G_1, G_1]$ is trivial. Indeed, in that case, if $G$ is a subgroup of $G_0(n)$, then either $G$ contains a subgroup isomorphic to $\mathbb{Z}^2$ or $[G, G]$ is abelian, and the free group $F_2$ has neither property. Let $G_1$ be a subgroup of $[G_0(n), G_0(n)]$, and assume that $[G_1, G_1]$ is not trivial. Then there exist elements $f$ and $g$ in $G_1$ such that $z:=fgf^{-1}g^{-1}\neq \mathrm{Id}$. For $f$ and $g$, let $a_1, \dots, a_{2s}$ and $b_1, \dots, b_{2t}$ be sequences given in Lemma \ref{lem_supp_finite_component}, respectively. By choosing elements appropriately from $a_1, \dots, a_{2s}, b_1, \dots, b_{2t}$, we obtain $c_1, \dots, c_{2m}$ such that \begin{align*} \supp{f} \cup \supp{g}=(c_1, c_2) \sqcup (c_3, c_4) \sqcup \cdots \sqcup (c_{2m-1}, c_{2m}). \end{align*} Let $d_1, \dots, d_{2l}$ be a sequence given by Lemma \ref{lem_supp_finite_component} for $[f, g]$. Then we have that for each $x \in \supp{[f, g]}$, there exist $i$ and $i^\prime$ such that \begin{align} c_{2i^\prime-1}<d_{2i-1}<x<d_{2i}<c_{2i^\prime} \label{support_condition_commutator} \end{align} holds. Indeed, it is sufficient to show that for each common fixed point $x$ of $f$ and $g$, there exists a prefix $u$ such that $[f, g]$ fixes all the elements of $\{u\xi \mid \xi \in \N^\mathbb{N}\}$. If $x$ is not in $\{0, n-1\}^{\mathbb{N}}$, then $y$ ``vanishes.'' Thus by regarding $f$ and $g$ as piecewise linear maps in $F(n)$, it is clear. If $x$ is in $\{0, n-1\}^{\mathbb{N}}$, by identifying $\{0, n-1\}$ with $\{0, 1\}$, it is also clear by Proposition \ref{proposition_piecewiseprojective}.
We write $\supp{[f, g]} \subsetneq \supp{f} \cup \supp{g}$ for the ``proper'' inclusion of each connected component of the support described above. Let \begin{align*} W:=\{h \in \langle f, g \rangle \mid \supp{h} \subsetneq \supp{f} \cup \supp{g}, h \neq \mathrm{Id} \}, \end{align*} where $\langle f, g \rangle$ is the subgroup generated by $f$ and $g$. The set $W$ is not empty since $z$ is in $W$. Define the function $\kappa: W \to \mathbb{Z}_{\geq 0}$ by setting \begin{align*} \kappa(h)= \# \left\{ i \in \{ 1, 3, \dots, 2m-1\} \mid \supp{h}\cap (c_i, c_{i+1}) \neq \emptyset \right\}. \end{align*} Here $\#$ denotes the cardinality of the set. Let $z^\prime$ be a minimizer of $\kappa$ and let $e_1, \dots, e_{2p}$ denote a sequence given by Lemma \ref{lem_supp_finite_component} for $z^\prime$. Since $z^\prime$ is not the identity map, there exist $i$ and $i^\prime$ such that $c_{2i^\prime-1}<e_{2i-1}<e_{2i}<c_{2i^\prime}$ holds. We note in the following that each element of $G_0(n)$ preserves the order of $\N^\mathbb{N}$. Since there may exist more than one such $i$, we denote the smallest one by $i_-$ and the largest one by $i_+$. Then there exists $w \in \langle f, g \rangle$ such that $w(e_{2{i_-} -1})>e_{2i_+}$ holds. By the construction of $w$, we have \begin{align*} \supp{z^\prime} \cap \supp{w^{-1}z^\prime w} \cap (c_{2i^\prime-1}, c_{2i^\prime})=\emptyset. \end{align*} This implies that we have \begin{align*} \supp{[z^\prime, w^{-1}z^\prime w]} \cap (c_{2i^\prime-1}, c_{2i^\prime})=\emptyset. \end{align*} Indeed, if $x$ is in $\supp{z^\prime}$, then $z^\prime(x)$ is in $\supp{z^\prime}$, thus $z^\prime(x)$ is not in $\supp{w^{-1}z^\prime w}$ and we have $[z^\prime, w^{-1}z^\prime w](x)=x$. If $x$ is in $\supp{w^{-1}z^\prime w}$, then the image of $x$ by $w^{-1}z^\prime w$ is also in $\supp{w^{-1}z^\prime w}$. Since $z^\prime(x)=x$ holds, we have $[z^\prime, w^{-1}z^\prime w](x)=x$.
We note that \begin{align*} \supp{[z^\prime, w^{-1}z^\prime w]} \subsetneq \supp{z^\prime} \cup \supp{w^{-1}z^\prime w} \subsetneq \supp{f} \cup \supp{g}. \end{align*} However, $[z^\prime, w^{-1}z^\prime w]$ is not in $W$. Indeed, if $\supp{z^\prime} \cap (c_{2j-1}, c_{2j}) =\emptyset$ for any other $j$, then we have \begin{align*} \supp{[z^\prime, w^{-1}z^\prime w]}\cap (c_{2j-1}, c_{2j})=\emptyset, \end{align*} since if $x$ is in $(c_{2j-1}, c_{2j})$ then $w(x)$ is also in $(c_{2j-1}, c_{2j})$. Thus if $[z^\prime, w^{-1}z^\prime w]$ were in $W$, it would contradict the minimality of $\kappa(z^\prime)$. This implies that we have \begin{align*} z^\prime(w^{-1}z^\prime w)=(w^{-1}z^\prime w)z^\prime. \end{align*} Therefore, the two maps $z^\prime$ and $w^{-1}z^\prime w$ generate a group isomorphic to $\mathbb{Z}^2$. We have the desired result. \end{proof} \end{theorem} \begin{remark} \label{remark_torsion_free} From this proof, we can see that $G_0(n)$ is torsion-free. Indeed, let $g$ $(g \neq \textrm{Id})$ be in $G_0(n)$ and assume that $g^m= \textrm{Id}$ holds. Since $g$ preserves the order of $\N^\mathbb{N}$, for $x$ in $\supp{g}$, we have either \begin{align*} x < g(x)<g^2(x)< \cdots < g^m(x)=x \shortintertext{or} x>g(x)>g^2(x)>\cdots >g^m(x)=x. \end{align*} This is a contradiction. \end{remark} \begin{remark} If there exists an embedding from $G_0(n)$ into either $G_0$ or the Monod group $H$ \cite{monod2013groups}, the theorem follows immediately because $G_0$ and $H$ contain no free subgroups. \end{remark} \subsection{The abelianization of $G_0(n)$ and simplicity of the commutator subgroup} The idea for the theorems in this section comes from \cite{brown1987finiteness} and \cite{burillo2018commutators}. We note that the group $F(n)$ is called $F_{n, 1}$ in \cite[Section 4D]{brown1987finiteness}. Thus there exists a surjective homomorphism $\phi: F(n) \to \mathbb{Z}^n$. We briefly recall its definition to compute the abelianization of $G_0(n)$.
Let $A_{n+1}$ be the free abelian group generated by $e_-$, $e_+$, and $e_i$ ($i \in \mathbb{Z}$), subject to the relations $e_i =e_j$ if $i \equiv j \pmod {n-1}$. Then the rank of $A_{n+1}$ is $n+1$. From an $n$-ary tree $Y$ and an $n$-ary tree $Z$ which contains $Y$ as a rooted subtree, we define the element $\delta(Z, Y)$ in $A_{n+1}$ as follows: \begin{enumerate} \item Label the leftmost leaf of $Y$ as $-$, the rightmost as $+$, and the other leaves as $1, 2, \dots $ from left to right. \item To construct $Z$ from $Y$, add a caret to some leaf of $Y$. Then record the label of the leaf. \item Regard the obtained tree as $Y$ again. \item Repeat the process (1) to (3) until the tree $Z$ is obtained. \item Add up all the $e_i$ that have the recorded labels as indices. \end{enumerate} Since $e_i=e_j$ if $i \equiv j \pmod{n-1}$, we can add carets in any order in process (2). For example, for the trees $Y$ and $Z$ in Figure \ref{definition_Y_and_Z}, by adding carets from left to right, $\delta(Z, Y)=e_-+e_7+e_{10}+e_+=e_-+2e_1+e_+$. \begin{figure}[tbp] \centering \includegraphics[width=80mm]{definition_Y_and_Z.pdf} \caption{The trees $Y$ and $Z$. } \label{definition_Y_and_Z} \end{figure} The process of this example is in Figure \ref{calculation_delta_Y_Z}. \begin{figure}[tbp] \centering \includegraphics[width=150mm]{calculation_delta_Y_Z.pdf} \caption{A process of the calculation of $\delta(Z, Y)$. } \label{calculation_delta_Y_Z} \end{figure} For the group $F(n)$, we define the homomorphism $\phi: F(n) \to A_{n+1}$ as follows: Let $x$ be in $F(n)$ and $(T_+, T_-)$ be a tree diagram that represents $x$. Then we set \begin{align*} \phi(x):=\delta(W, T_-)-\delta(W, T_+), \end{align*} where $W$ is an $n$-ary tree that contains both $T_+$ and $T_-$. From the construction, this map is independent of the choice of $W$ and $(T_+, T_-)$, and it is a homomorphism.
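The reduction in $A_{n+1}$ can also be sketched computationally: integer indices of the generators $e_i$ are identified modulo $n-1$, while $e_-$ and $e_+$ stay separate. A small illustrative sketch (the helper `a_class` is hypothetical; the worked example assumes $n=4$, so that $7\equiv 10\equiv 1 \pmod{3}$):

```python
from collections import Counter

def a_class(labels, n):
    """Reduce a formal sum of generators e_label of A_{n+1}:
    integer labels i and j are identified when i = j (mod n-1),
    using the representatives 1, ..., n-1; the labels '-' and '+'
    are kept as separate generators.  Returns a Counter of coefficients."""
    out = Counter()
    for lab in labels:
        if lab in ('-', '+'):
            out[lab] += 1
        else:
            r = lab % (n - 1)
            out[r if r != 0 else n - 1] += 1
    return out

# The worked example: e_- + e_7 + e_10 + e_+ = e_- + 2 e_1 + e_+ for n = 4.
assert a_class(['-', 7, 10, '+'], n=4) == Counter({'-': 1, 1: 2, '+': 1})
```

This mirrors step (5) of the recipe above: the recorded leaf labels are summed after reduction modulo $n-1$.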
By calculating $\phi(x_0), \dots, \phi(x_{n-2})$, and $\phi(\x{0}{(n-1)})$, we have that $\operatorname{Im} \phi =\{\sum\lambda_ie_i \mid \sum \lambda_i=0 \} \cong \mathbb{Z}^n$. Thus we obtain a surjective homomorphism $F(n) \to \mathbb{Z}^n$. \begin{theorem}[{\cite[Lemma 2.1]{burillo2018commutators}} for $n=2$] \label{Th_abelianization_G0(n)} The abelianization of $G_0(n)$ is isomorphic to $\mathbb{Z}^{n+1}$. \begin{proof} We define the map $\pi: Z(n) \to \mathbb{Z}^n \oplus \mathbb{Z}$ by \begin{align*} &\x{i}{s} \mapsto (\phi(\x{i}{s}), 0), & &y_s \mapsto (\bm{0}, 1). \end{align*} Since $G_0(n)$ is an $(n+1)$-generated group, its abelianization is a quotient of $\mathbb{Z}^{n+1}$. So if we obtain a surjective homomorphism $G_0(n) \to \mathbb{Z}^{n+1}$, then this map must be its abelianization map. Thus it is sufficient to show that $\pi$ extends to a homomorphism $G_0(n) \to \mathbb{Z}^{n+1}$. In order to do this, we only need to check that the relations in $R(n)$ are satisfied, which is clear except for (4). By the definition of $\phi$, we have $\phi(\x{0}{s})=\bm{0}$ for each $\x{0}{s}$, where $s$ is in $\N^{<\mathbb{N}}$ such that $y_s$ is in $Y(n)$. Indeed, since $s$ is neither $0\cdots0$ nor $(n-1)\cdots(n-1)$, $\phi(\x{0}{s})$ is calculated as in Figure \ref{phi_x0s=0}. \begin{figure}[tbp] \centering \includegraphics[width=140mm]{phi_x0s=0.pdf} \caption{Note that $e_{s}=e_{s+(n-1)}$. } \label{phi_x0s=0} \end{figure} Thus relation (4) is also satisfied. \end{proof} \end{theorem} \begin{corollary} Let $n, m \geq 2$. Then the groups $G_0(n)$ and $G_0(m)$ are isomorphic if and only if $n=m$ holds. \end{corollary} In the following, we show that the commutator subgroup $G_0(n)^\prime=[G_0(n), G_0(n)]$ is simple. As in the case of $n=2$, we first show that the second derived subgroup $G_0(n)^{\prime \prime}=[G_0(n)^\prime, G_0(n)^\prime]$ is simple by using Higman's theorem, and then we show that $G_0(n)^{\prime \prime}=G_0(n)^{\prime}$ holds.
Let $\Gamma$ be a group of bijections of a set $E$. For $\alpha \in \Gamma$, we write $\supp{\alpha}$ for the set-theoretic support $\{x \in E \mid \alpha(x) \neq x\}$. \begin{theorem}[{\cite[Theorem 1]{MR72136}}] \label{Theorem_Higman_simple} Suppose that for every $\alpha, \beta, \gamma \in \Gamma \setminus \{ 1_\Gamma \}$, there exists $\rho \in \Gamma$ such that \begin{align*} \gamma \Bigl( \rho \bigl(\supp{\alpha}\cup \supp{\beta}\bigr) \Bigr) \cap \rho \bigl(\supp{\alpha}\cup \supp{\beta}\bigr) = \emptyset \end{align*} holds. Then the commutator subgroup $\Gamma^\prime$ is simple. \end{theorem} \begin{theorem}[{\cite[Theorem 2]{burillo2018commutators}} for $n=2$] \label{theorem_commutator_simple} The commutator subgroup of the group $G_0(n)$ is simple. \end{theorem} The following two lemmas complete the proof. \begin{lemma} The second derived subgroup $G_0(n)^{\prime \prime}$ is simple. \begin{proof} Let $\alpha, \beta, \gamma \in G_0(n)^\prime$. Choose $x \in \supp{\gamma}$. If $\gamma(x)>x$, then let $I$ be the set $\{x^\prime \in \N^\mathbb{N} \mid x<x^\prime<\gamma(x)\}$. If $\gamma(x)<x$, then let $I$ be the set $\{x^\prime \in \N^\mathbb{N} \mid \gamma(x)<x^\prime<x\}$. Then $\gamma(I) \cap I = \emptyset$ holds. We note that neither $\supp{\alpha}$ nor $\supp{\beta}$ is all of $\N^\mathbb{N} \setminus \{00\cdots, (n-1)(n-1) \cdots\}$, by the argument in Theorem \ref{Theorem_no_free_group} (in particular, see condition \eqref{support_condition_commutator}). Therefore there exists $\rho \in F(n)$ such that \begin{align*} \rho \bigl(\supp{\alpha} \cup \supp{\beta}\bigr) \subset I \end{align*} holds. Then we have \begin{align*} \rho \bigl(\supp{\alpha} \cup \supp{\beta} \bigr) \cap \gamma \Bigl(\rho \bigl(\supp{\alpha} \cup \supp{\beta} \bigr) \Bigr) \subset I \cap \gamma(I)=\emptyset. \end{align*} By Theorem \ref{Theorem_Higman_simple}, $G_0(n)^{\prime \prime}$ is simple. 
\end{proof} \end{lemma} \begin{lemma}[{\cite[Proposition 2.5]{burillo2018commutators}} for $n=2$] $G_0(n)^\prime=G_0(n)^{\prime \prime}$. \begin{proof} We assume that $n \geq 3$. Since $G_0(n)^{\prime \prime} \subset G_0(n)^\prime$ holds, we show $G_0(n)^\prime \subset G_0(n)^{\prime \prime}$. Let $g \in G_0(n)^\prime$ and let $f \y{s_1}{t_1}\cdots \y{s_m}{t_m}$ be its normal form. By the definition of the map $\pi$ in Theorem \ref{Th_abelianization_G0(n)}, $t_1+\cdots +t_m=0$ holds and $f$ is in $F(n)^\prime$. By \cite[Theorem 4.13]{brown1987finiteness}, we have $F(n)^\prime = F(n)^{\prime \prime} \subset G_0(n)^{\prime \prime}$. Therefore, it is sufficient to show that $\y{s_1}{t_1}\cdots \y{s_m}{t_m}$ is in $G_0(n)^{\prime \prime}$. We show this by induction on $k=|t_1|+\cdots + |t_m|$ for words $\y{s_1}{t_1}\cdots \y{s_m}{t_m}$ that satisfy $t_1+\cdots +t_m=0$ (which may not be in normal form). We note that $k$ is an even number. Since the base case $k=0$ is clear, we assume $k>0$. Then, since $G_0(n)^{\prime \prime}$ is a normal subgroup of $G_0(n)$, by cyclically conjugating, we can assume that the word starts with a subword of the form $y_sy_t^{-1}$. For example, we can do the following: \begin{align*} y_{s_1}y_{s_2}\y{s_3}{-1}\y{s_4}{-1} \mapsto \y{s_1}{-1}(y_{s_1}y_{s_2}\y{s_3}{-1}\y{s_4}{-1})y_{s_1}=y_{s_2}\y{s_3}{-1}\y{s_4}{-1}y_{s_1}. \end{align*} By the inductive hypothesis, it is sufficient to show that $y_s \y{t}{-1}$ is in $G_0(n)^{\prime \prime}$. We divide the proof into two cases: Case (1): $s \perp t$. Since $(y_s y_t^{-1})^{-1}=y_t y_s^{-1}$ holds, we can assume that $s<t$ without loss of generality. By the definition of $Y(n)$, $s$ and $t$ are neither $000\cdots$ nor $(n-1)(n-1)(n-1)\cdots$. Then since $n \geq 3$, there exists an element $h^\prime$ in $F(n)$ such that \begin{align*} h^\prime((n-1)000)=s \shortintertext{and} h^\prime((n-1)00(n-1)0)=t \end{align*} hold. Indeed, we can construct it in a similar way to Figure \ref{construction_g_uv}. 
Then we have \begin{align*} y_s y_t^{-1}=({h^\prime}^{-1}y_{(n-1)000} h^\prime)({h^\prime}^{-1}\y{(n-1)00(n-1)0}{-1} h^\prime)={h^\prime}^{-1}y_{(n-1)000} \y{(n-1)00(n-1)0}{-1} h^\prime. \end{align*} Thus it is sufficient to show that $y_{(n-1)000}\y{(n-1)00(n-1)0}{-1}$ is in $G_0(n)^{\prime \prime}$. Let $w=y_{(n-1)00}\y{(n-1)0(n-1)}{-1} \in G_0(n)^\prime$. We note that there exists an element $h$ in $F(n)^\prime$ such that \begin{align*} h\bigl((n-1)00(n-1)(n-1)\bigr)=(n-1)00 \shortintertext{and} h\bigl((n-1)0(n-1)\bigr)=(n-1)0(n-1) \end{align*} hold. Indeed, let $h$ be as in Figure \ref{definition_h_in_simple}. Then since $\phi(h)=\bm{0}$ holds, where $\phi$ is the abelianization map of $F(n)$, the element $h$ is in $F(n)^\prime$. \begin{figure}[tbp] \centering \includegraphics[width=80mm]{definition_h_in_simple.pdf} \caption{The element $h$. } \label{definition_h_in_simple} \end{figure} Since $w$ is in $G_0(n)^\prime$ and $h$ is in $F(n)^\prime \subset G_0(n)^\prime$, the commutator $[w, h]=whw^{-1} h^{-1}$ is in $G_0(n)^{\prime \prime}$. By the construction of $h$, we have $h w h^{-1}=y_{(n-1)00(n-1)(n-1)}\y{(n-1)0(n-1)}{-1}$. Thus we have \begin{align*} [w, h]&=w (hwh^{-1})^{-1} \\ &=y_{(n-1)00}\y{(n-1)0(n-1)}{-1} (y_{(n-1)00(n-1)(n-1)}\y{(n-1)0(n-1)}{-1})^{-1}\\ &=y_{(n-1)00} \y{(n-1)00(n-1)(n-1)}{-1}. \end{align*} By applying the expansion move (in Definition \ref{definition_five_moves}) to $y_{(n-1)00}$, we have \begin{align*} y_{(n-1)00} \y{(n-1)00(n-1)(n-1)}{-1} &=\x{0}{(n-1)00}y_{(n-1)000}\y{(n-1)00(n-1)0}{-1}y_{(n-1)00(n-1)(n-1)}\y{(n-1)00(n-1)(n-1)}{-1} \\ &=\x{0}{(n-1)00}y_{(n-1)000}\y{(n-1)00(n-1)0}{-1}. \end{align*} Since $\x{0}{(n-1)00}$ is in $F(n)^\prime=F(n)^{\prime \prime}$ (see Figure \ref{phi_x0s=0}), this element is in $G_0(n)^{\prime \prime}$. Therefore we have \begin{align*} y_{(n-1)000}\y{(n-1)00(n-1)0}{-1}=\x{0}{(n-1)00}^{-1} [w, h] \in G_0(n)^{\prime \prime}, \end{align*} as required. 
Case (2): $s$ is a prefix of $t$ or vice versa. Since $(y_s y_t^{-1})^{-1}=y_t y_s^{-1}$ holds, we can assume that $s$ is a prefix of $t$ without loss of generality. Let $t=su$. By the expansion move, we have $y_s \y{su}{-1}=\x{0}{s}y_{s0}\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}\y{su}{-1}$. Then since $\x{0}{s}$ is in $F(n)^\prime=F(n)^{\prime \prime} \subset G_0(n)^{\prime \prime}$ (see Figure \ref{phi_x0s=0}), it is enough to show that $y_{s0}\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}\y{su}{-1}$ is in $G_0(n)^{\prime \prime}$. We further divide the proof of case (2) into two subcases. Case (2-1): $0$ is a prefix of $u$. We note that $s0 \perp s(n-1)0$ and $s(n-1)(n-1)\perp su$ hold. Thus $y_{s0}\y{s(n-1)0}{-1}$ and $y_{s(n-1)(n-1)}\y{su}{-1}$ are in $G_0(n)^{\prime \prime}$, by case (1). Case (2-2): $0$ is not a prefix of $u$. By cyclically conjugating, it is sufficient to show that $\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}\y{su}{-1}y_{s0}$ is in $G_0(n)^{\prime \prime}$. Since $su \perp s0$ holds, $\y{s(n-1)0}{-1}y_{s(n-1)(n-1)}$ and $\y{su}{-1}y_{s0}$ are in $G_0(n)^{\prime \prime}$, by case (1). \end{proof} \end{lemma} \subsection{The center of $G_0(n)$}\label{subsec_center_G0(n)} In this section, we show that the center of the group $G_0(n)$ is trivial. The idea for the theorem comes from \cite[Section 4]{cannon1996introductory}. Let $D(n):=\{s \overline{0} \mid s \in \N^{<\mathbb{N}}\}$, where $\overline{0}$ denotes the element $000\cdots $ in $\N^\mathbb{N}$. Then the following holds. \begin{lemma} \label{lemma_rich_Thompson_F(n)} For any $s\overline{0}$ in $D(n)$, there exists $x$ in $F(n)$ such that \begin{align*} \supp{x}=(s\overline{0}, \overline{(n-1)})=\{\xi \in \N^\mathbb{N} \mid s\overline{0}<\xi< \overline{(n-1)}\} \end{align*} holds. \begin{proof} If $s=0, \dots, (n-2)$, then $x_0, \dots, x_{n-2}$ satisfy the claim, respectively. By regarding $(n-1)\overline{0}$ as $(n-1)0\overline{0}$, we can assume that the length of $s$ is greater than or equal to $2$. 
Let $s\overline{0}=s^\prime i \overline{0}$ ($i \in \{0, \dots, n-2 \}$). If $s^\prime=(n-1)\cdots(n-1)$, then $\x{i}{s^\prime}$ satisfies the claim. If $s^\prime \neq (n-1)\cdots(n-1)$, by using $\x{i}{s^\prime}$, we can define an element in $F(n)$ as in Figure \ref{rich_elements_Fn}, which satisfies the claim. \begin{figure}[tbp] \centering \includegraphics[width=90mm]{rich_elements_Fn.pdf} \caption{The construction of an element of $F(n)$ from $\x{i}{s^\prime}$. We add a caret to each of the leaf $s(n-1)$ of the domain tree and the rightmost leaf of the range tree. } \label{rich_elements_Fn} \end{figure} \end{proof} \end{lemma} We note that $D(n)$ is a dense subset of $\N^\mathbb{N}$. \begin{theorem}[{\cite[Proposition 2.7]{burillo2018commutators}} for $n=2$] \label{theorem_center_trivial} The center of $G_0(n)$ is trivial. \begin{proof} Let $f$ be an element of the center of $G_0(n)$. For $g \in G_0(n)$, assume that $\supp{g}=(b_1, \overline{(n-1)})$ holds. Then we have $f(b_1)=b_1$. Indeed, if not, then either $f(b_1)>b_1$ or $f^{-1}(b_1)>b_1$ holds, so $f(b_1)$ or $f^{-1}(b_1)$ lies in $\supp{g}$. Since $g(b_1)=b_1$, in both cases this contradicts that $fg=gf$ holds. By Lemma \ref{lemma_rich_Thompson_F(n)}, for every $s\overline{0} \in D(n)$, there exists $g \in F(n) \subset G_0(n)$ such that $\supp{g}=(s\overline{0}, \overline{(n-1)})$ holds. Thus we have $f(s\overline{0})=s\overline{0}$ for every $s\overline{0} \in D(n)$. Since $D(n)$ is a dense subset of $\N^\mathbb{N}$, we conclude that $f$ is the identity map. \end{proof} \end{theorem} \subsection{Indecomposability with respect to direct products and free products} In this section, we show that $G_0(n)$ admits no nontrivial ``decompositions'', using the theorems above. \begin{theorem} The group $G_0(n)$ admits neither a nontrivial direct product decomposition nor a nontrivial free product decomposition. \begin{proof} Suppose that $G_0(n)$ is isomorphic to $K \times H$ for some groups $K$ and $H$. We first assume that $H$ (or $K$) is abelian. Then the center of $G_0(n)$ contains $\{1\} \times H$. 
Since the center of $G_0(n)$ is trivial (Theorem \ref{theorem_center_trivial}), $H$ must be the trivial group. Next, we assume that neither $K$ nor $H$ is abelian. We note that the commutator subgroup of $G_0(n)=K \times H$ is $[K, K] \times [H, H]$. Since $[K, K]$ and $[H, H]$ are not trivial, the group $\{1\} \times [H, H]$ is a nontrivial normal subgroup of $[K, K] \times [H, H]$. However, this contradicts that $[G_0(n), G_0(n)]$ is simple (Theorem \ref{theorem_commutator_simple}). Finally, we assume that $G_0(n)=K \star H$ for nontrivial groups $K$ and $H$. Let $k \in K \setminus \{1\}$ and $h \in H \setminus \{1\}$. Since $G_0(n)$ is torsion free (Remark \ref{remark_torsion_free}), the elements $k$ and $h$ generate infinite cyclic groups $\langle k \rangle$ and $\langle h \rangle$, respectively. This implies that $G_0(n)$ has a subgroup \begin{align*} \langle k \rangle \star \langle h \rangle \cong \mathbb{Z} \star \mathbb{Z}=F_2. \end{align*} By Theorem \ref{Theorem_no_free_group}, this is a contradiction. \end{proof} \end{theorem} \subsection*{Acknowledgments} I would like to thank Professor Motoko Kato and Professor Shin-ichi Oguni for several comments and suggestions. I would also like to thank my supervisor, Professor Tomohiro Fukaya, for his comments and careful reading of the paper. \bibliographystyle{plain}
\section{Introduction} One of the fundamental ingredients in the theory of non-commutative or quantum geometry is the notion of a differential calculus. In the framework of quantum groups the natural notion is that of a bicovariant differential calculus as introduced by Woronowicz \cite{Wor_calculi}. Due to the allowance of non-commutativity the uniqueness of a canonical calculus is lost. It is therefore desirable to classify the possible choices. The most important piece is the space of one-forms or ``first order differential calculus'' to which we will restrict our attention in the following. (From this point on we will use the term ``differential calculus'' to denote a bicovariant first order differential calculus). Much attention has been devoted to the investigation of differential calculi on quantum groups $C_q(G)$ of function algebra type for $G$ a simple Lie group. Natural differential calculi on matrix quantum groups were obtained by Jurco \cite{Jur} and Carow-Watamura et al.\ \cite{CaScWaWe}. A partial classification of calculi of the same dimension as the natural ones was obtained by Schm\"udgen and Sch\"uler \cite{ScSc2}. More recently, a classification theorem for factorisable cosemisimple quantum groups was obtained by Majid \cite{Majid_calculi}, covering the general $C_q(G)$ case. A similar result was obtained later by Baumann and Schmitt \cite{BaSc}. Also, Heckenberger and Schm\"udgen \cite{HeSc} gave a complete classification on $C_q(SL(N))$ and $C_q(Sp(N))$. In contrast, for $G$ not simple or semisimple the differential calculi on $C_q(G)$ are largely unknown. A particularly basic case is the Lie group $B_+$ associated with the Lie algebra $\lalg{b_+}$ generated by two elements $X,H$ with the relation $[H,X]=X$. The quantum enveloping algebra \ensuremath{U_q(\lalg{b_+})}{} is self-dual, i.e.\ is non-degenerately paired with itself \cite{Drinfeld}. 
This has an interesting consequence: \ensuremath{U_q(\lalg{b_+})}{} may be identified with (a certain algebraic model of) \ensuremath{C_q(B_+)}. The differential calculi on this quantum group and on its ``classical limits'' \ensuremath{C(B_+)}{} and \ensuremath{U(\lalg{b_+})}{} will be the main concern of this paper. We pay hereby equal attention to the dual notion of ``quantum tangent space''. In section \ref{sec:q} we obtain the complete classification of differential calculi on \ensuremath{C_q(B_+)}{}. It turns out that (finite dimensional) differential calculi are characterised by finite subsets $I\subset\mathbb{N}$. These sets determine the decomposition into coirreducible (i.e.\ not admitting quotients) differential calculi characterised by single integers. For the coirreducible calculi the explicit formulas for the commutation relations and braided derivations are given. In section \ref{sec:class} we give the complete classification for the classical function algebra \ensuremath{C(B_+)}{}. It is essentially the same as in the $q$-deformed setting and we stress this by giving an almost one-to-one correspondence of differential calculi to those obtained in the previous section. In contrast, however, the decomposition and coirreducibility properties do not hold at all. (One may even say that they are maximally violated). We give the explicit formulas for those calculi corresponding to coirreducible ones. More interesting perhaps is the ``dual'' classical limit. I.e.\ we view \ensuremath{U(\lalg{b_+})}{} as a quantum function algebra with quantum enveloping algebra \ensuremath{C(B_+)}{}. This is investigated in section \ref{sec:dual}. It turns out that in this setting we have considerably more freedom in choosing a differential calculus since the bicovariance condition becomes much weaker. This shows that this dual classical limit is in a sense ``unnatural'' as compared to the ordinary classical limit of section \ref{sec:class}. 
However, we can still establish a correspondence of certain differential calculi to those of section \ref{sec:q}. The decomposition properties are conserved while the coirreducibility properties are not. We give the formulas for the calculi corresponding to coirreducible ones. Another interesting aspect of viewing \ensuremath{U(\lalg{b_+})}{} as a quantum function algebra is the connection to quantum deformed models of space-time and its symmetries. In particular, the $\kappa$-deformed Minkowski space coming from the $\kappa$-deformed Poincar\'e algebra \cite{LuNoRu}\cite{MaRu} is just a simple generalisation of \ensuremath{U(\lalg{b_+})}. We use this in section \ref{sec:kappa} to give a natural $4$-dimensional differential calculus. Then we show (in a formal context) that integration is given by the usual Lebesgue integral on $\mathbb{R}^n$ after normal ordering. This is obtained in an intrinsic context different from the standard $\kappa$-Poincar\'e approach. A further important motivation for the investigation of differential calculi on \ensuremath{U(\lalg{b_+})}{} and \ensuremath{C(B_+)}{} is the relation of those objects to the Planck-scale Hopf algebra \cite{Majid_Planck}\cite{Majid_book}. This shall be developed elsewhere. In the remaining parts of this introduction we will specify our conventions and provide preliminaries on the quantum group \ensuremath{U_q(\lalg{b_+})}, its deformations, and differential calculi. \subsection{Conventions} Throughout, $\k$ denotes a field of characteristic 0 and $\k(q)$ denotes the field of rational functions in one parameter $q$ over $\k$. $\k(q)$ is our ground field in the $q$-deformed setting, while $\k$ is the ground field in the ``classical'' settings. Within section \ref{sec:q} one could equally well view $\k$ as the ground field with $q\in\k^*$ not a root of unity. This point of view is problematic, however, when obtaining ``classical limits'' as in sections \ref{sec:class} and \ref{sec:dual}. 
The positive integers are denoted by $\mathbb{N}$ while the non-negative integers are denoted by $\mathbb{N}_0$. We define $q$-integers, $q$-factorials and $q$-binomials as follows: \begin{gather*} [n]_q=\sum_{i=0}^{n-1} q^i\qquad [n]_q!=[1]_q [2]_q\cdots [n]_q\qquad \binomq{n}{m}=\frac{[n]_q!}{[m]_q! [n-m]_q!} \end{gather*} For a function of several variables (among them $x$) over $\k$ we define \begin{gather*} (T_{a,x} f)(x) = f(x+a)\\ (\fdiff_{a,x} f)(x) = \frac{f(x+a)-f(x)}{a} \end{gather*} with $a\in\k$ and similarly over $\k(q)$ \begin{gather*} (Q_{m,x} f)(x) = f(q^m x)\\ (\partial_{q,x} f)(x) = \frac{f(x)-f(qx)}{x(1-q)}\\ \end{gather*} with $m\in\mathbb{Z}$. We frequently use the notion of a polynomial in an extended sense. Namely, if we have an algebra with an element $g$ and its inverse $g^{-1}$ (as in \ensuremath{U_q(\lalg{b_+})}{}) we will mean by a polynomial in $g,g^{-1}$ a finite power series in $g$ with exponents in $\mathbb{Z}$. The length of such a polynomial is the difference between highest and lowest degree. If $H$ is a Hopf algebra, then $H^{op}$ will denote the Hopf algebra with the opposite product. \subsection{\ensuremath{U_q(\lalg{b_+})}{} and its Classical Limits} \label{sec:intro_limits} We recall that, in the framework of quantum groups, the duality between enveloping algebra $U(\lalg{g})$ of the Lie algebra and algebra of functions $C(G)$ on the Lie group carries over to $q$-deformations. In the case of $\lalg{b_+}$, the $q$-deformed enveloping algebra \ensuremath{U_q(\lalg{b_+})}{} defined over $\k(q)$ as \begin{gather*} U_q(\lalg{b_+})=\k(q)\langle X,g,g^{-1}\rangle \qquad \text{with relations} \\ g g^{-1}=1 \qquad Xg=qgX \\ \cop X=X\otimes 1 + g\otimes X \qquad \cop g=g\otimes g \\ \cou (X)=0 \qquad \cou (g)=1 \qquad \antip X=-g^{-1}X \qquad \antip g=g^{-1} \end{gather*} is self-dual. 
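As a numerical aside (not part of the original text), one can verify for a generic rational value of $q$ that the coproduct above extends to an algebra map on the normal-ordered basis $X^a g^b$, and that $\cop(X)^2$ reproduces the $q$-binomial pattern. A minimal sketch:

```python
from fractions import Fraction

q = Fraction(2, 3)  # a generic (non-root-of-unity) rational value for q

def mul(p1, p2):
    """Multiply normal-ordered elements of U_q(b+).

    Elements are dicts {(a, b): coeff} encoding sums of monomials X^a g^b
    (b may be negative).  From Xg = qgX we get gX = q^{-1} X g, hence
    (X^a g^b)(X^c g^d) = q^{-bc} X^{a+c} g^{b+d}.
    """
    out = {}
    for (a, b), c1 in p1.items():
        for (c, d), c2 in p2.items():
            key = (a + c, b + d)
            out[key] = out.get(key, 0) + c1 * c2 * q ** (-b * c)
    return {k: v for k, v in out.items() if v != 0}

def tmul(t1, t2):
    """Componentwise multiplication on the tensor square of U_q(b+)."""
    out = {}
    for (u1, u2), c1 in t1.items():
        for (v1, v2), c2 in t2.items():
            for k1, w1 in mul({u1: 1}, {v1: 1}).items():
                for k2, w2 in mul({u2: 1}, {v2: 1}).items():
                    key = (k1, k2)
                    out[key] = out.get(key, 0) + c1 * c2 * w1 * w2
    return {k: v for k, v in out.items() if v != 0}

X, g, one = (1, 0), (0, 1), (0, 0)
copX = {(X, one): 1, (g, X): 1}   # cop X = X(x)1 + g(x)X
copg = {(g, g): 1}                # cop g = g(x)g

# cop respects the relation Xg = qgX:
assert tmul(copX, copg) == {k: q * v for k, v in tmul(copg, copX).items()}

# cop(X)^2 reproduces the q-binomial formula
# cop(X^2) = X^2(x)1 + [2]_q q^{-1} Xg(x)X + g^2(x)X^2:
assert tmul(copX, copX) == {((2, 0), one): 1,
                            ((1, 1), X): (1 + q) / q,
                            ((0, 2), (2, 0)): 1}
```

Working with exact rationals rather than floats keeps the identity checks free of rounding artefacts; since the identities are polynomial in $q$, checking them at a generic value is a reasonable sanity test.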
Consequently, it may alternatively be viewed as the quantum algebra \ensuremath{C_q(B_+)}{} of functions on the Lie group $B_+$ associated with $\lalg{b_+}$. It has two classical limits, the enveloping algebra \ensuremath{U(\lalg{b_+})}{} and the function algebra $C(B_+)$. The transition to the classical enveloping algebra is achieved by replacing $q$ by $e^{-t}$ and $g$ by $e^{tH}$ in a formal power series setting in $t$, introducing a new generator $H$. Now, all expressions are written in the form $\sum_j a_j t^j$ and only the lowest order in $t$ is kept. The transition to the classical function algebra on the other hand is achieved by setting $q=1$. This may be depicted as follows: \[\begin{array}{c @{} c @{} c @{} c} & \ensuremath{U_q(\lalg{b_+})} \cong \ensuremath{C_q(B_+)} && \\ & \diagup \hspace{\stretch{1}} \diagdown && \\ \begin{array}{l} q=e^{-t} \\ g=e^{tH} \end{array} \Big| _{t\to 0} && q=1 &\\ \swarrow &&& \searrow \\ \ensuremath{U(\lalg{b_+})} & <\cdots\textrm{dual}\cdots> && \ensuremath{C(B_+)} \end{array}\] The self-duality of \ensuremath{U_q(\lalg{b_+})}{} is expressed as a pairing $\ensuremath{U_q(\lalg{b_+})}\times\ensuremath{U_q(\lalg{b_+})}\to\k$ with itself: \[\langle X^n g^m, X^r g^s\rangle = \delta_{n,r} [n]_q!\, q^{-n(n-1)/2} q^{-ms} \qquad\forall n,r\in\mathbb{N}_0\: m,s\in\mathbb{Z}\] In the classical limit this becomes the pairing $\ensuremath{U(\lalg{b_+})}\times\ensuremath{C(B_+)}\to\k$ \begin{equation} \langle X^n H^m, X^r g^s\rangle = \delta_{n,r} n!\, s^m\qquad \forall n,m,r\in\mathbb{N}_0\: s\in\mathbb{Z} \label{eq:pair_class} \end{equation} \subsection{Differential Calculi and Quantum Tangent Spaces} In this section we recall some facts about differential calculi along the lines of Majid's treatment in \cite{Majid_calculi}. 
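As another numerical aside (my own sketch, with $q$ specialised to a generic rational value), the stated self-duality pairing can be checked against the Hopf-pairing axiom $\langle ab, c\rangle = \langle a\otimes b, \cop c\rangle$ in a small case; here $\cop(X^2)$ is expanded by hand from $\cop X = X\otimes 1 + g\otimes X$.

```python
from fractions import Fraction

q = Fraction(2, 3)  # a generic rational value for q

def qfact(n):
    """[n]_q! = [1]_q [2]_q ... [n]_q with [k]_q = 1 + q + ... + q^{k-1}."""
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= sum(q ** i for i in range(k))
    return out

def pair(mon1, mon2):
    """<X^n g^m, X^r g^s> = delta_{n,r} [n]_q! q^{-n(n-1)/2} q^{-ms}."""
    (n, m), (r, s) = mon1, mon2
    if n != r:
        return Fraction(0)
    return qfact(n) * q ** (-n * (n - 1) // 2) * q ** (-m * s)

# Check <ab, c> = <a (x) b, cop c> for a = b = X and c = X^2, using
# cop(X^2) = X^2(x)1 + (1+q)q^{-1} Xg(x)X + g^2(x)X^2:
X, X2, Xg, g2, one = (1, 0), (2, 0), (1, 1), (0, 2), (0, 0)
cop_X2 = [(Fraction(1), X2, one), ((1 + q) / q, Xg, X), (Fraction(1), g2, X2)]
lhs = pair(X2, X2)
rhs = sum(c * pair(X, u) * pair(X, v) for c, u, v in cop_X2)
assert lhs == rhs == (1 + q) / q
print(lhs)  # 5/2
```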
Following Woronowicz \cite{Wor_calculi}, first order bicovariant differential calculi on a quantum group $A$ (of function algebra type) are in one-to-one correspondence to submodules $M$ of $\ker\cou\subset A$ in the category $^A_A\cal{M}$ of (say) left crossed modules of $A$ via left multiplication and left adjoint coaction: \[ a\triangleright v = av \qquad \mathrm{Ad_L}(v) =v_{(1)}\antip v_{(3)}\otimes v_{(2)} \qquad \forall a\in A, v\in A \] More precisely, given a crossed submodule $M$, the corresponding calculus is given by $\Gamma=\ker\cou/M\otimes A$ with $\diff a = \pi(\cop a - 1\otimes a)$ ($\pi$ the canonical projection). The right action and coaction on $\Gamma$ are given by the right multiplication and coproduct on $A$, the left action and coaction by the tensor product ones with $\ker\cou/M$ as a left crossed module. In all of what follows, ``differential calculus'' will mean ``bicovariant first order differential calculus''. Alternatively \cite{Majid_calculi}, given in addition a quantum group $H$ dually paired with $A$ (which we might think of as being of enveloping algebra type), we can express the coaction of $A$ on itself as an action of $H^{op}$ using the pairing: \[ h\triangleright v = \langle h, v_{(1)} \antip v_{(3)}\rangle v_{(2)} \qquad \forall h\in H^{op}, v\in A \] Thereby we change from the category of (left) crossed $A$-modules to the category of left modules of the quantum double $A\!\bowtie\! H^{op}$. In this picture the pairing between $A$ and $H$ descends to a pairing between $A/\k 1$ (which we may identify with $\ker\cou\subset A$) and $\ker\cou\subset H$. Further quotienting $A/\k 1$ by $M$ (viewed in $A/\k 1$) leads to a pairing with the subspace $L\subset\ker\cou H$ that annihilates $M$. 
$L$ is called a ``quantum tangent space'' and is dual to the differential calculus $\Gamma$ generated by $M$ in the sense that $\Gamma\cong \Lin(L,A)$ via \begin{equation} A/(\k 1+M)\otimes A \to \Lin(L,A)\qquad v\otimes a \mapsto \langle \cdot, v\rangle a \label{eq:eval} \end{equation} if the pairing between $A/(\k 1+M)$ and $L$ is non-degenerate. The quantum tangent spaces are obtained directly by dualising the (left) action of the quantum double on $A$ to a (right) action on $H$. Explicitly, this is the adjoint action and the coregular action \[ h \triangleright x = h_{(1)} x \antip h_{(2)} \qquad a \triangleright x = \langle x_{(1)}, a \rangle x_{(2)}\qquad \forall h\in H, a\in A^{op},x\in H \] where we have converted the right action to a left action by going from \mbox{$A\!\bowtie\! H^{op}$}-modules to \mbox{$H\!\bowtie\! A^{op}$}-modules. Quantum tangent spaces are subspaces of $\ker\cou\subset H$ invariant under the projection of this action to $\ker\cou$ via \mbox{$x\mapsto x-\cou(x) 1$}. Alternatively, the left action of $A^{op}$ can be converted to a left coaction of $H$ being the comultiplication (with subsequent projection onto $H\otimes\ker\cou$). We can use the evaluation map (\ref{eq:eval}) to define a ``braided derivation'' on elements of the quantum tangent space via \[\partial_x:A\to A\qquad \partial_x(a)={\diff a}(x)=\langle x,a_{(1)}\rangle a_{(2)}\qquad\forall x\in L, a\in A\] This obeys the braided derivation rule \[\partial_x(a b)=(\partial_x a) b + a_{(2)} \partial_{a_{(1)}\triangleright x}b\qquad\forall x\in L, a\in A\] Given a right invariant basis $\{\eta_i\}_{i\in I}$ of $\Gamma$ with a dual basis $\{\phi_i\}_{i\in I}$ of $L$ we have \[{\diff a}=\sum_{i\in I} \eta_i\cdot \partial_i(a)\qquad\forall a\in A\] where we denote $\partial_i=\partial_{\phi_i}$. (This can be easily seen to hold by evaluation against $\phi_i\ \forall i$.) 
\section{Classification on \ensuremath{C_q(B_+)}{} and \ensuremath{U_q(\lalg{b_+})}{}} \label{sec:q} In this section we completely classify differential calculi on \ensuremath{C_q(B_+)}{} and, dually, quantum tangent spaces on \ensuremath{U_q(\lalg{b_+})}{}. We start by classifying the relevant crossed modules and then proceed to a detailed description of the calculi. \begin{lem} \label{lem:cqbp_class} (a) Left crossed \ensuremath{C_q(B_+)}-submodules $M\subseteq\ensuremath{C_q(B_+)}$ by left multiplication and left adjoint coaction are in one-to-one correspondence to pairs $(P,I)$ where $P\in\k(q)[g]$ is a polynomial with $P(0)=1$ and $I\subset\mathbb{N}$ is finite. $\codim M<\infty$ iff $P=1$. In particular $\codim M=\sum_{n\in I}n$ if $P=1$. (b) The finite codimensional maximal $M$ correspond to the pairs $(1,\{n\})$ with $n$ the codimension. The infinite codimensional maximal $M$ are characterised by $(P,\emptyset)$ with $P$ irreducible and $P(g)\neq 1-q^{-k}g$ for any $k\in\mathbb{N}_0$. (c) Crossed submodules $M$ of finite codimension are intersections of maximal ones. In particular $M=\bigcap_{n\in I} M^n$, with $M^n$ corresponding to $(1,\{n\})$. \end{lem} \begin{proof} (a) Let $M\subseteq\ensuremath{C_q(B_+)}$ be a crossed \ensuremath{C_q(B_+)}-submodule by left multiplication and left adjoint coaction and let $\sum_n X^n P_n(g) \in M$, where $P_n$ are polynomials in $g,g^{-1}$ (every element of \ensuremath{C_q(B_+)}{} can be expressed in this form). From the formula for the coaction ((\ref{eq:adl}), see appendix) we observe that for all $n$ and for all $t\le n$ the element \[X^t P_n(g) \prod_{s=1}^{n-t} (1-q^{s-n}g)\] lies in $M$. In particular this is true for $t=n$, meaning that elements of constant degree in $X$ lie separately in $M$. It is therefore enough to consider such elements. Let now $X^n P(g) \in M$. By left multiplication $X^n P(g)$ generates any element of the form $X^k P(g) Q(g)$, where $k\ge n$ and $Q$ is any polynomial in $g,g^{-1}$. 
(Note that $Q(q^kg) X^k=X^k Q(g)$.) We see that $M$ contains the following elements: \[\begin{array}{ll} \vdots & \\ X^{n+2} & P(g) \\ X^{n+1} & P(g) \\ X^n & P(g) \\ X^{n-1} & P(g) (1-q^{1-n}g) \\ X^{n-2} & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\ \vdots & \\ X & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \ldots (1-q^{-1}g) \\ & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \ldots (1-q^{-1}g)(1-g) \end{array} \] Moreover, if $M$ is generated by $X^n P(g)$ as a module then these elements generate a basis for $M$ as a vector space by left multiplication with polynomials in $g,g^{-1}$. (Observe that the application of the coaction to any of the elements shown does not generate elements of new type.) Now, let $M$ be a given crossed submodule. We pick, among the elements in $M$ of the form $X^n P(g)$ with $P$ of minimal length, one with lowest degree in $X$. Then certainly the elements listed above are in $M$. Furthermore, for any element of the form $X^k Q(g)$, $Q$ must contain $P$ as a factor and for $k<n$, $Q$ must contain $P(g) (1-q^{1-n}g)$ as a factor. We continue by picking the smallest $n_2$, so that $X^{n_2} P(g) (1-q^{1-n}g) \in M$. Certainly $n_2<n$. Again, for any element $X^l Q(g)$ in $M$ with $l<n_2$, we have that $P(g) (1-q^{1-n}g) (1-q^{1-n_2}g)$ divides $Q(g)$. We proceed by induction, until we arrive at degree zero in $X$. We obtain the following elements generating a basis for $M$ by left multiplication with polynomials in $g,g^{-1}$ (rename $n_1=n$): \[ \begin{array}{ll} \vdots & \\ X^{n_1+1} & P(g) \\ X^{n_1} & P(g) \\ X^{n_1-1} & P(g) (1-q^{1-{n_1}}g) \\ \vdots & \\ X^{n_2} & P(g) (1-q^{1-{n_1}}g) \\ X^{n_2-1} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-n_2}g)\\ \vdots & \\ X^{n_3} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-{n_2}}g) \\ X^{n_3-1} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-{n_2}}g) (1-q^{1-n_3}g)\\ \vdots & \\ & P(g) (1-q^{1-{n_1}}g) (1-q^{1-n_2}g) (1-q^{1-n_3}g) \ldots (1-q^{1-n_m}g) \end{array} \] We see that the integers $n_1,\ldots,n_m$ uniquely determine the shape of this picture. 
The polynomial $P(g)$ on the other hand can be shifted (by $g$ and $g^{-1}$) or renormalised. To determine $M$ uniquely we shift and normalise $P$ in such a way that it contains no negative powers and has unit constant coefficient. $P$ can then be viewed as a polynomial $\in\k(q)[g]$. We see that the codimension of $M$ is the sum of the lengths of the polynomials in $g$ over all degrees in $X$ in the above picture. Finite codimension corresponds to $P=1$. In this case the codimension is the sum $n_1+\ldots +n_m$. (b) We observe that polynomials of the form $1-q^{j}g$ have no common divisors for distinct $j$. Therefore, finite codimensional crossed submodules are maximal if and only if there is just one integer ($m=1$). Thus, the maximal left crossed submodule of codimension $k$ is generated by $X^k$ and $1-q^{1-k}g$. For an infinite codimensional crossed submodule we certainly need $m=0$. Then, the maximality corresponds to irreducibility of $P$. (c) This is again due to the distinctness of factors $1-q^j g$. \end{proof} \begin{cor} \label{cor:cqbp_eclass} (a) Left crossed \ensuremath{C_q(B_+)}-submodules $M\subseteq\ker\cou\subset\ensuremath{C_q(B_+)}$ are in one-to-one correspondence to pairs $(P,I)$ as in lemma \ref{lem:cqbp_class} with the additional constraint $(1-g)$ divides $P(g)$ or $1\in I$. $\codim M<\infty$ iff $P=1$. In particular $\codim M=(\sum_{n\in I}n)-1$ if $P=1$. (b) The finite codimensional maximal $M$ correspond to the pairs $(1,\{1,n\})$ with $n\ge 2$ the codimension. The infinite codimensional maximal $M$ correspond to pairs $(P,\{1\})$ with $P$ irreducible and $P(g)\neq 1-q^{-k}g$ for any $k\in\mathbb{N}_0$. (c) Crossed submodules $M$ of finite codimension are intersections of maximal ones. In particular $M=\bigcap_{n\in I} M^n$, with $M^n$ corresponding to $(1,\{1,n\})$. \end{cor} \begin{proof} First observe that $\sum_n X^n P_n(g)\in \ker\cou$ if and only if $(1-g)$ divides $P_0(g)$. 
This is to say that $\ker\cou$ is the crossed submodule corresponding to the pair $(1,\{1\})$ in lemma \ref{lem:cqbp_class}. We obtain the classification from that of lemma \ref{lem:cqbp_class} by intersecting everything with this crossed submodule. In particular, this reduces the codimension by one in the finite codimensional case. \end{proof} \begin{lem} \label{lem:uqbp_class} (a) Left crossed \ensuremath{U_q(\lalg{b_+})}-submodules $L\subseteq\ensuremath{U_q(\lalg{b_+})}$ via the left adjoint action and left regular coaction are in one-to-one correspondence to the set $3^{\mathbb{N}_0}\times2^{\mathbb{N}}$. Finite dimensional $L$ are in one-to-one correspondence to finite sets $I\subset\mathbb{N}$ and $\dim L=\sum_{n\in I}n$. (b) Finite dimensional irreducible $L$ correspond to $\{n\}$ with $n$ the dimension. (c) Finite dimensional $L$ are direct sums of irreducible ones. In particular $L=\oplus_{n\in I} L^n$ with $L^n$ corresponding to $\{n\}$. \end{lem} \begin{proof} (a) The action takes the explicit form \[g\triangleright X^n g^k = q^{-n} X^n g^k\qquad X\triangleright X^n g^k = X^{n+1}g^k(1-q^{-(n+k)})\] while the coproduct is \[\cop(X^n g^k)=\sum_{r=0}^{n} \binomq{n}{r} q^{-r(n-r)} X^{n-r} g^{k+r}\otimes X^r g^k\] which we view as a left coaction here. Let now $L\subseteq\ensuremath{U_q(\lalg{b_+})}$ be a crossed \ensuremath{U_q(\lalg{b_+})}-submodule via this action and coaction. For $\sum_n X^n P_n(g)\in L$ invariance under the action by $g$ clearly means that \mbox{$X^n P_n(g)\in L\ \forall n$}. Then from invariance under the coaction we can conclude that if $X^n \sum_j a_j g^j\in L$ we must have $X^n g^j\in L\ \forall j$. I.e.\ elements of the form $X^n g^j$ lie separately in $L$ and it is sufficient to consider such elements. From the coaction we learn that if $X^n g^j\in L$ we have $X^m g^j\in L\ \forall m\le n$. The action by $X$ leads to $X^n g^j\in L \Rightarrow X^{n+1} g^j\in L$ except if $n+j=0$. 
The classification is given by the possible choices we have for each power in $g$. For every positive integer $j$ we can choose whether or not to include the span of $\{ X^n g^j|\forall n\}$ in $L$ and for every non-positive integer we can choose to include either the span of $\{ X^n g^j|\forall n\}$ or just $\{ X^n g^j|\forall n\le -j\}$ or neither. I.e.\ for positive integers ($\mathbb{N}$) we have two choices while for non-positive ones (identified with $\mathbb{N}_0$) we have three choices. Clearly, the finite dimensional $L$ are those where we choose to include only finitely many powers of $g$ and also only finitely many powers of $X$. The latter is only possible for the non-positive powers of $g$. By identifying positive integers $n$ with powers $1-n$ of $g$, we obtain a classification by finite subsets of $\mathbb{N}$. (b) Irreducibility clearly corresponds to including just one power of $g$ in the finite dimensional case. (c) The decomposition property is obvious from the discussion. \end{proof} \begin{cor} \label{cor:uqbp_eclass} (a) Left crossed \ensuremath{U_q(\lalg{b_+})}-submodules $L\subseteq\ker\cou\subset\ensuremath{U_q(\lalg{b_+})}$ via the left adjoint action and left regular coaction (with subsequent projection to $\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to the set $3^{\mathbb{N}}\times2^{\mathbb{N}_0}$. Finite dimensional $L$ are in one-to-one correspondence to finite sets $I\subset\mathbb{N}\setminus\{1\}$ and $\dim L=\sum_{n\in I}n$. (b) Finite dimensional irreducible $L$ correspond to $\{n\}$ with $n\ge 2$ the dimension. (c) Finite dimensional $L$ are direct sums of irreducible ones. In particular $L=\oplus_{n\in I} L^n$ with $L^n$ corresponding to $\{n\}$. \end{cor} \begin{proof} Only a small modification of lemma \ref{lem:uqbp_class} is necessary. Elements of the form $P(g)$ are replaced by elements of the form $P(g)-P(1)$. Monomials with non-vanishing degree in $X$ are unchanged.
The choices for elements of degree $0$ in $g$ are reduced to either including the span of $\{ X^k |\forall k>0 \}$ in the crossed submodule or not. In particular, the crossed submodule characterised by \{1\} in lemma \ref{lem:uqbp_class} is projected out. \end{proof} Differential calculi in the original sense of Woronowicz are classified by corollary \ref{cor:cqbp_eclass} while from the quantum tangent space point of view the classification is given by corollary \ref{cor:uqbp_eclass}. In the finite dimensional case the duality is strict in the sense of a one-to-one correspondence. The infinite dimensional case on the other hand depends strongly on the algebraic models we use for the function or enveloping algebras. It is therefore not surprising that in the present purely algebraic context the classifications are quite different in this case. We will restrict ourselves to the finite dimensional case in the following description of the differential calculi. \begin{thm} \label{thm:q_calc} (a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C_q(B_+)}{} and corresponding quantum tangent spaces $L$ on \ensuremath{U_q(\lalg{b_+})}{} are in one-to-one correspondence to finite sets $I\subset\mathbb{N}\setminus\{1\}$. In particular $\dim\Gamma=\dim L=\sum_{n\in I}n$. (b) Coirreducible $\Gamma$ and irreducible $L$ correspond to $\{n\}$ with $n\ge 2$ the dimension. Such a $\Gamma$ has a right invariant basis $\eta_0,\dots,\eta_{n-1}$ so that the relations \begin{gather*} \diff X=\eta_1+(q^{n-1}-1)\eta_0 X \qquad \diff g=(q^{n-1}-1)\eta_0 g\\ [a,\eta_0]=\diff a\quad \forall a\in\ensuremath{C_q(B_+)}\\ [g,\eta_i]_{q^{n-1-i}}=0\quad \forall i\qquad [X,\eta_i]_{q^{n-1-i}}=\begin{cases} \eta_{i+1} & \text{if}\ i<n-1 \\ 0 & \text{if}\ i=n-1 \end{cases} \end{gather*} hold, where $[a,b]_p := a b - p b a$. 
By choosing the dual basis on the corresponding irreducible $L$ we obtain the braided derivations \begin{gather*} \partial_i\no{f}= \no{Q_{n-1-i,g} Q_{n-1-i,X} \frac{1}{[i]_q!} (\partial_{q,X})^i f} \qquad\forall i\ge 1\\ \partial_0\no{f}= \no{Q_{n-1,g} Q_{n-1,X} f - f} \end{gather*} for $f\in \k(q)[X,g,g^{-1}]$ with normal ordering $\k(q)[X,g,g^{-1}]\to \ensuremath{C_q(B_+)}$ given by \mbox{$g^n X^m\mapsto g^n X^m$}. (c) Finite dimensional $\Gamma$ and $L$ decompose into direct sums of coirreducible respectively irreducible ones. In particular $\Gamma=\oplus_{n\in I}\Gamma^n$ and $L=\oplus_{n\in I}L^n$ with $\Gamma^n$ and $L^n$ corresponding to $\{n\}$. \end{thm} \begin{proof} (a) We observe that the classifications of lemma \ref{lem:cqbp_class} and lemma \ref{lem:uqbp_class} or corollary \ref{cor:cqbp_eclass} and corollary \ref{cor:uqbp_eclass} are dual to each other in the finite (co){}dimensional case. More precisely, for $I\subset\mathbb{N}$ finite the crossed submodule $M$ corresponding to $(1,I)$ in lemma \ref{lem:cqbp_class} is the annihilator of the crossed submodule $L$ corresponding to $I$ in lemma \ref{lem:uqbp_class} and vice versa. $\ensuremath{C_q(B_+)}/M$ and $L$ are dual spaces with the induced pairing. For $I\subset\mathbb{N}\setminus\{1\}$ finite this descends to $M$ corresponding to $(1,I\cup\{1\})$ in corollary \ref{cor:cqbp_eclass} and $L$ corresponding to $I$ in corollary \ref{cor:uqbp_eclass}. For the dimension of $\Gamma$ observe $\dim\Gamma=\dim{\ker\cou/M}=\codim M$. (b) Coirreducibility (having no proper quotient) of $\Gamma$ clearly corresponds to maximality of $M$. The statement then follows from parts (b) of corollaries \ref{cor:cqbp_eclass} and \ref{cor:uqbp_eclass}. 
The formulas are obtained by choosing the basis $\eta_0,\dots,\eta_{n-1}$ of $\ker\cou/M$ as the equivalence classes of \[(g-1)/(q^{n-1}-1),X,\dots,X^{n-1}\] The dual basis of $L$ is then given by \[g^{1-n}-1, X g^{1-n},\dots, q^{k(k-1)} \frac{1}{[k]_q!} X^k g^{1-n}, \dots,q^{(n-1)(n-2)} \frac{1}{[n-1]_q!} X^{n-1} g^{1-n}\] (c) The statement follows from corollaries \ref{cor:cqbp_eclass} and \ref{cor:uqbp_eclass} parts (c) with the observation \[\ker\cou/M=\ker\cou/{\bigcap_{n\in I}}M^n =\oplus_{n\in I}\ker\cou/M^n\] \end{proof} \begin{cor} There is precisely one differential calculus on \ensuremath{C_q(B_+)}{} which is natural in the sense that it has dimension $2$. It is coirreducible and obeys the relations \begin{gather*} [g,\diff X]=0\qquad [g,\diff g]_q=0\qquad [X,\diff X]_q=0\qquad [X,\diff g]_q=(q-1)({\diff X}) g \end{gather*} with $[a,b]_q:=ab-qba$. In particular we have \begin{gather*} \diff\no{f} = {\diff g} \no{\partial_{q,g} f} + {\diff X} \no{\partial_{q,X} f}\qquad\forall f\in \k(q)[X,g,g^{-1}] \end{gather*} \end{cor} \begin{proof} This is a special case of theorem \ref{thm:q_calc}. The formulas follow from (b) with $n=2$. \end{proof} \section{Classification in the Classical Limit} \label{sec:class} In this section we give the complete classification of differential calculi and quantum tangent spaces in the classical case of \ensuremath{C(B_+)}{} along the lines of the previous section. We pay particular attention to the relation to the $q$-deformed setting. The classical limit \ensuremath{C(B_+)}{} of the quantum group \ensuremath{C_q(B_+)}{} is simply obtained by substituting the parameter $q$ with $1$. The classification of left crossed submodules in part (a) of lemma \ref{lem:cqbp_class} remains unchanged, as one may check by going through the proof. 
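As an aside, the substitution $q\to 1$ can be watched concretely on the $q$-derivatives $\partial_{q,g}$, $\partial_{q,X}$ of the natural two-dimensional calculus above. The following sketch assumes the standard Jackson convention $(\partial_q f)(x) = (f(qx)-f(x))/((q-1)x)$, under which $\partial_q x^n = [n]_q\, x^{n-1}$ with the $q$-integer $[n]_q = 1 + q + \dots + q^{n-1}$:

```python
from fractions import Fraction

# Sketch (assuming the Jackson convention for the q-derivative):
# on monomials, d_q x^n = [n]_q x^(n-1), where [n]_q = 1 + q + ... + q^(n-1).
def q_int(n, q):
    return sum(q ** i for i in range(n))

def q_derivative_monomial(n, q):
    """Coefficient of x^(n-1) in d_q x^n, computed from the definition:
    (q^n x^n - x^n) / ((q-1) x) = ((q^n - 1)/(q - 1)) x^(n-1)."""
    return (q ** n - 1) / (q - 1)

q = Fraction(3, 2)
assert q_derivative_monomial(4, q) == q_int(4, q)   # [4]_q
# classical limit: at q = 1 the q-integer [n]_q becomes n,
# so the q-derivative degenerates to the ordinary derivative
assert q_int(5, Fraction(1)) == 5
```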
In particular, we get a correspondence of crossed modules in the $q$-deformed setting with crossed modules in the classical setting as a map of pairs $(P,I)\mapsto (P,I)$ that converts polynomials $\k(q)[g]$ to polynomials $\k[g]$ (if defined) and leaves sets $I$ unchanged. This is one-to-one in the finite dimensional case. However, we did use the distinctness of powers of $q$ in parts (b) and (c) of lemma \ref{lem:cqbp_class} and have to account for this change. The only place where we used it was in observing that factors $1-q^j g$ have no common divisors for distinct $j$. This was crucial to conclude the maximality (b) of certain finite codimensional crossed submodules and the intersection property (c). Now, all those factors become $1-g$. \begin{cor} \label{cor:cbp_class} (a) Left crossed \ensuremath{C(B_+)}-submodules $M\subseteq\ensuremath{C(B_+)}$ by left multiplication and left adjoint coaction are in one-to-one correspondence to pairs $(P,I)$ where $P\in\k[g]$ is a polynomial with $P(0)=1$ and $I\subset\mathbb{N}$ is finite. $\codim M<\infty$ iff $P=1$. In particular $\codim M=\sum_{n\in I}n$ if $P=1$. (b) The infinite codimensional maximal $M$ are characterised by $(P,\emptyset)$ with $P$ irreducible and $P(g)\neq 1-g$. \end{cor} In the restriction to $\ker\cou\subset\ensuremath{C(B_+)}$ corresponding to corollary \ref{cor:cqbp_eclass} we observe another difference from the $q$-deformed setting. Since the condition for a crossed submodule to lie in $\ker\cou$ is exactly to have factors $1-g$ in the $X$-free monomials, this condition may now be satisfied more easily. If the characterising polynomial does not contain this factor, it is now sufficient to have just any non-empty characterising integer set $I$, which need not contain $1$. Consequently, the map $(P,I)\mapsto (P,I)$ does not reach all crossed submodules now.
\begin{cor} \label{cor:cbp_eclass} (a) Left crossed \ensuremath{C(B_+)}-submodules $M\subseteq\ker\cou\subset\ensuremath{C(B_+)}$ are in one-to-one correspondence to pairs $(P,I)$ as in corollary \ref{cor:cbp_class} with the additional constraint $(1-g)$ divides $P(g)$ or $I$ non-empty. $\codim M<\infty$ iff $P=1$. In particular $\codim M=(\sum_{n\in I}n)-1$ if $P=1$. (b) The infinite codimensional maximal $M$ correspond to pairs $(P,\{1\})$ with $P$ irreducible and $P(g)\neq 1-g$. \end{cor} Let us now turn to quantum tangent spaces on \ensuremath{U(\lalg{b_+})}{}. Here, the process to go from the $q$-deformed setting to the classical one is not quite so straightforward. \begin{lem} \label{lem:ubp_class} Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules $L\subset\ensuremath{U(\lalg{b_+})}$ via the left adjoint action and left regular coaction are in one-to-one correspondence to pairs $(l,I)$ with $l\in\mathbb{N}_0$ and $I\subset\mathbb{N}$ finite. $\dim L<\infty$ iff $l=0$. In particular $\dim L=\sum_{n\in I}n$ if $l=0$. \end{lem} \begin{proof} The left adjoint action takes the form \[ X\triangleright X^n H^m = X^{n+1}(H^m-(H+1)^m) \qquad H\triangleright X^n H^m = n X^n H^m \] while the coaction is \[ \cop(X^n H^m) = \sum_{i=0}^n \sum_{j=0}^m \binom{n}{i} \binom{m}{j} X^i H^j\otimes X^{n-i} H^{m-j} \] Let $L$ be a crossed submodule invariant under the action and coaction. The (repeated) action of $H$ separates elements by degree in $X$. It is therefore sufficient to consider elements of the form $X^n P(H)$, where $P$ is a polynomial. By acting with $X$ on an element $X^n P(H)$ we obtain $X^{n+1}(P(H)-P(H+1))$. Subsequently applying the coaction and projecting on the left hand side of the tensor product onto $X$ (in the basis $X^i H^j$ of \ensuremath{U(\lalg{b_+})}) leads to the element $X^n (P(H)-P(H+1))$. Now the degree of $P(H)-P(H+1)$ is exactly the degree of $P(H)$ minus $1$.
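The degree-drop claim is elementary (the leading terms of $P(H)$ and $P(H+1)$ cancel) but easy to verify mechanically; a sketch in plain Python, encoding $P$ by its coefficient list:

```python
from math import comb

# Sketch of the degree-count step: for a polynomial P,
# deg(P(H) - P(H+1)) = deg(P) - 1, since the leading terms cancel.
def shift(coeffs):
    """Coefficients of P(H+1) from those of P(H), by binomial expansion."""
    out = [0] * len(coeffs)
    for n, c in enumerate(coeffs):
        for i in range(n + 1):
            out[i] += c * comb(n, i)
    return out

def degree(coeffs):
    return max(i for i, c in enumerate(coeffs) if c != 0)

P = [7, 5, 3]                        # P(H) = 7 + 5H + 3H^2
diff = [a - b for a, b in zip(P, shift(P))]
assert degree(diff) == degree(P) - 1
```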
Thus we have polynomials $X^n P_i(H)$ of any degree $i=\deg(P_i)\le \deg(P)$ in $L$ by induction. In particular, $X^n H^m\in L$ for all $m\le\deg(P)$. It is thus sufficient to consider elements of the form $X^n H^m$. Given such an element, the coaction generates all elements of the form $X^i H^j$ with $i\le n, j\le m$. For given $n$, the characterising datum is the maximal $m$ so that $X^n H^m\in L$. Due to the coaction this cannot decrease with decreasing $n$ and due to the action of $X$ it can decrease at most by $1$ when increasing $n$ by $1$. This leads to the classification given. For $l\in\mathbb{N}_0$ and $I\subset\mathbb{N}$ finite, the corresponding crossed submodule is generated by \begin{gather*} X^{n_m-1} H^{l+m-1}, X^{n_m+n_{m-1}-1} H^{l+m-2},\dots, X^{(\sum_i n_i)-1} H^{l}\\ \text{and}\qquad X^{(\sum_i n_i)+k} H^{l-1}\quad \forall k\ge 0\quad\text{if}\quad l>0 \end{gather*} as a crossed module. \end{proof} For the transition from the $q$-deformed (lemma \ref{lem:uqbp_class}) to the classical case we observe that the space spanned by $g^{s_1},\dots,g^{s_m}$ with $m$ different integers $s_i\in\mathbb{Z}$ maps to the space spanned by $1, H, \dots, H^{m-1}$ in the prescription of the classical limit (as described in section \ref{sec:intro_limits}). I.e.\ the classical crossed submodule characterised by an integer $l$ and a finite set $I\subset\mathbb{N}$ comes from a crossed submodule characterised by this same $I$ and additionally $l$ other integers $j\in\mathbb{Z}$ for which $X^k g^{1-j}$ is included. In particular, we have a one-to-one correspondence in the finite dimensional case. To formulate the analogue of corollary \ref{cor:uqbp_eclass} for the classical case is essentially straightforward now. However, as for \ensuremath{C(B_+)}{}, we obtain more crossed submodules than those from the $q$-deformed setting. This is due to the degeneracy introduced by forgetting the powers of $g$ and just retaining the number of different powers.
\begin{cor} \label{cor:ubp_eclass} (a) Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules $L\subset\ker\cou\subset\ensuremath{U(\lalg{b_+})}$ via the left adjoint action and left regular coaction (with subsequent projection to $\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to pairs $(l,I)$ with $l\in\mathbb{N}_0$ and $I\subset\mathbb{N}$ finite where $l\neq 0$ or $I\neq\emptyset$. $\dim L<\infty$ iff $l=0$. In particular $\dim L=(\sum_{n\in I}n)-1$ if $l=0$. \end{cor} As in the $q$-deformed setting, we give a description of the finite dimensional differential calculi, where we have a strict duality to quantum tangent spaces. \begin{prop} (a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C(B_+)}{} and finite dimensional quantum tangent spaces $L$ on \ensuremath{U(\lalg{b_+})}{} are in one-to-one correspondence to non-empty finite sets $I\subset\mathbb{N}$. In particular $\dim\Gamma=\dim L=(\sum_{n\in I} n)-1$. The $\Gamma$ with $1\in I$ are in one-to-one correspondence to the finite dimensional calculi and quantum tangent spaces of the $q$-deformed setting (theorem \ref{thm:q_calc}(a)). (b) The differential calculus $\Gamma$ of dimension $n\ge 2$ corresponding to the coirreducible one of \ensuremath{C_q(B_+)}{} (theorem \ref{thm:q_calc}(b)) has a right invariant basis $\eta_0,\dots,\eta_{n-1}$ so that \begin{gather*} \diff X=\eta_1+\eta_0 X \qquad \diff g=\eta_0 g\\ [g, \eta_i]=0\ \forall i \qquad [X, \eta_i]=\begin{cases} 0 & \text{if}\ i=0\ \text{or}\ i=n-1\\ \eta_{i+1} & \text{if}\ 0<i<n-1 \end{cases} \end{gather*} hold. The braided derivations obtained from the dual basis of the corresponding $L$ are given by \begin{gather*} \partial_i f=\frac{1}{i!} \left(\frac{\partial}{\partial X}\right)^i f\qquad \forall i\ge 1\\ \partial_0 f=\left(X \frac{\partial}{\partial X}+ g \frac{\partial}{\partial g}\right) f \end{gather*} for $f\in\ensuremath{C(B_+)}$.
(c) The differential calculus of dimension $n-1$ corresponding to the one in (b) with $1$ removed from the characterising set is the same as the one above, except that we set $\eta_0=0$ and $\partial_0=0$. \end{prop} \begin{proof} (a) We observe that the classifications of corollary \ref{cor:cbp_class} and lemma \ref{lem:ubp_class} or corollary \ref{cor:cbp_eclass} and corollary \ref{cor:ubp_eclass} are dual to each other in the finite (co)dimensional case. More precisely, for $I\subset\mathbb{N}$ finite the crossed submodule $M$ corresponding to $(1,I)$ in corollary \ref{cor:cbp_class} is the annihilator of the crossed submodule $L$ corresponding to $(0,I)$ in lemma \ref{lem:ubp_class} and vice versa. $\ensuremath{C(B_+)}/M$ and $L$ are dual spaces with the induced pairing. For non-empty $I$ this descends to $M$ corresponding to $(1,I)$ in corollary \ref{cor:cbp_eclass} and $L$ corresponding to $(0,I)$ in corollary \ref{cor:ubp_eclass}. For the dimension of $\Gamma$ note $\dim\Gamma=\dim{\ker\cou/M}=\codim M$. (b) For $I=\{1,n\}$ we choose in $\ker\cou\subset\ensuremath{C(B_+)}$ the basis $\eta_0,\dots,\eta_{n-1}$ as the equivalence classes of $g-1,X,\dots,X^{n-1}$. The dual basis in $L$ is then $H,X,\dots,\frac{1}{k!}X^k,\dots,\frac{1}{(n-1)!}X^{n-1}$. This leads to the formulas given. (c) For $I=\{n\}$ we get the same as in (b) except that $\eta_0$ and $\partial_0$ disappear. \end{proof} The classical commutative calculus is the special case of (b) with $n=2$. It is the only calculus of dimension $2$ with $\diff g\neq 0$. Note that it is not coirreducible. \section{The Dual Classical Limit} \label{sec:dual} We proceed in this section to the more interesting point of view where we consider the classical algebras, but with their roles interchanged. I.e.\ we view \ensuremath{U(\lalg{b_+})}{} as the ``function algebra'' and \ensuremath{C(B_+)}{} as the ``enveloping algebra''. 
Due to the self-duality of \ensuremath{U_q(\lalg{b_+})}{}, we can again view the differential calculi and quantum tangent spaces as classical limits of the $q$-deformed setting investigated in section \ref{sec:q}. In this dual setting the bicovariance constraint for differential calculi becomes much weaker. In particular, the adjoint action on a classical function algebra is trivial due to commutativity and the adjoint coaction on a classical enveloping algebra is trivial due to cocommutativity. In effect, the correspondence with the $q$-deformed setting is much weaker than in the ordinary case of section \ref{sec:class}. There are many more differential calculi and quantum tangent spaces than in the $q$-deformed setting. We will not attempt to classify all of them in the following but essentially content ourselves with those objects coming from the $q$-deformed setting. \begin{lem} \label{lem:cbp_dual} Left \ensuremath{C(B_+)}-subcomodules $\subseteq\ensuremath{C(B_+)}$ via the left regular coaction are $\mathbb{Z}$-graded subspaces of \ensuremath{C(B_+)}{} with $|X^n g^m|=n+m$, stable under formal derivation in $X$. By choosing any ordering in \ensuremath{C_q(B_+)}{}, left crossed submodules via left regular action and adjoint coaction are in one-to-one correspondence to certain subcomodules of \ensuremath{C(B_+)}{} by setting $q=1$. Direct sums correspond to direct sums. This descends to $\ker\cou\subset\ensuremath{C(B_+)}$ by the projection $x\mapsto x-\cou(x) 1$. \end{lem} \begin{proof} The coproduct on \ensuremath{C(B_+)}{} is \[\cop(X^n g^k)=\sum_{r=0}^{n} \binom{n}{r} X^{n-r} g^{k+r}\otimes X^r g^k\] which we view as a left coaction. Projecting on the left hand side of the tensor product onto $g^l$ in a basis $X^n g^k$, we observe that coacting on an element $\sum_{n,k} a_{n,k} X^n g^k$ we obtain elements $\sum_n a_{n,l-n} X^n g^{l-n}$ for all $l$.
I.e.\ elements of the form $\sum_n b_n X^n g^{l-n}$ lie separately in a subcomodule and it is sufficient to consider such elements. Writing the coaction on such an element as \[\sum_t \frac{1}{t!} X^t g^{l-t}\otimes \sum_n b_n \frac{n!}{(n-t)!} X^{n-t} g^{l-n}\] we see that the coaction generates all formal derivatives in $X$ of this element. This gives us the classification: \ensuremath{C(B_+)}-subcomodules $\subseteq\ensuremath{C(B_+)}$ under the left regular coaction are $\mathbb{Z}$-graded subspaces with $|X^n g^m|=n+m$, stable under formal derivation in $X$ given by $X^n g^m \mapsto n X^{n-1} g^m$. The correspondence with the \ensuremath{C_q(B_+)} case follows from the trivial observation that the coproduct of \ensuremath{C(B_+)}{} is the same as that of \ensuremath{C_q(B_+)}{} with $q=1$. The restriction to $\ker\cou$ is straightforward. \end{proof} \begin{lem} \label{lem:ubp_dual} The process of obtaining the classical limit \ensuremath{U(\lalg{b_+})}{} from \ensuremath{U_q(\lalg{b_+})}{} is well defined for subspaces and sends crossed \ensuremath{U_q(\lalg{b_+})}-submodules $\subset\ensuremath{U_q(\lalg{b_+})}$ by regular action and adjoint coaction to \ensuremath{U(\lalg{b_+})}-submodules $\subset\ensuremath{U(\lalg{b_+})}$ by regular action. This map is injective in the finite codimensional case. Intersections and codimensions are preserved in this case. This descends to $\ker\cou$. \end{lem} \begin{proof} To obtain the classical limit of a left ideal it is enough to apply the limiting process (as described in section \ref{sec:intro_limits}) to the module generators (we can forget the additional comodule structure). On the one hand, any element generated by left multiplication with polynomials in $g$ corresponds to some element generated by left multiplication with a polynomial in $H$, that is, there will be no more generators in the classical setting.
On the other hand, left multiplication by a polynomial in $H$ comes from left multiplication by the same polynomial in $g-1$, that is, there will be no fewer generators. The maximal left crossed \ensuremath{U_q(\lalg{b_+})}-submodule $\subseteq\ensuremath{U_q(\lalg{b_+})}$ by left multiplication and adjoint coaction of codimension $n$ ($n\ge 1$) is generated as a left ideal by $\{1-q^{1-n}g,X^n\}$ (see lemma \ref{lem:cqbp_class}). Applying the limiting process to this leads to the left ideal of \ensuremath{U(\lalg{b_+})}{} (which is not maximal for $n\neq 1$) generated by $\{H+n-1,X^n\}$ having also codimension $n$. More generally, the picture given for arbitrary finite codimensional left crossed modules of \ensuremath{U_q(\lalg{b_+})}{} in terms of generators with respect to polynomials in $g,g^{-1}$ in lemma \ref{lem:cqbp_class} carries over by replacing factors $1-q^{1-n}g$ with factors $H+n-1$ leading to generators with respect to polynomials in $H$. In particular, intersections go to intersections since the distinctness of the factors for different $n$ is conserved. The restriction to $\ker\cou$ is straightforward. \end{proof} We are now in a position to give a detailed description of the differential calculi induced from the $q$-deformed setting by the limiting process. \begin{prop} (a) Certain finite dimensional differential calculi $\Gamma$ on \ensuremath{U(\lalg{b_+})}{} and quantum tangent spaces $L$ on \ensuremath{C(B_+)}{} are in one-to-one correspondence to finite dimensional differential calculi on \ensuremath{U_q(\lalg{b_+})}{} and quantum tangent spaces on \ensuremath{C_q(B_+)}{}. Intersections correspond to intersections. 
(b) In particular, $\Gamma$ and $L$ corresponding to coirreducible differential calculi on \ensuremath{U_q(\lalg{b_+})}{} and irreducible quantum tangent spaces on \ensuremath{C_q(B_+)}{} via the limiting process are given as follows: $\Gamma$ has a right invariant basis $\eta_0,\dots,\eta_{n-1}$ so that \begin{gather*} \diff X=\eta_1 \qquad \diff H=(1-n)\eta_0 \\ [H, \eta_i]=(1-n+i)\eta_i\quad\forall i\qquad [X, \eta_i]=\begin{cases} \eta_{i+1} & \text{if}\ \ i<n-1\\ 0 & \text{if}\ \ i=n-1 \end{cases} \end{gather*} hold. The braided derivations corresponding to the dual basis of $L$ are given by \begin{gather*} \partial_i\no{f}=\no{T_{1-n+i,H} \frac{1}{i!}\left(\frac{\partial}{\partial X}\right)^i f} \qquad\forall i\ge 1\\ \partial_0\no{f}=\no{T_{1-n,H} f - f} \end{gather*} for $f\in\k[X,H]$ with the normal ordering $\k[X,H]\to \ensuremath{U(\lalg{b_+})}$ via $H^n X^m\mapsto H^n X^m$. \end{prop} \begin{proof} (a) The strict duality between \ensuremath{C(B_+)}-subcomodules $L\subseteq\ker\cou$ given by lemma \ref{lem:cbp_dual} and corollary \ref{cor:uqbp_eclass} and \ensuremath{U(\lalg{b_+})}-modules $\ensuremath{U(\lalg{b_+})}/(\k 1+M)$ with $M$ given by lemma \ref{lem:ubp_dual} and corollary \ref{cor:cqbp_eclass} can be checked explicitly. It is essentially due to mutual annihilation of factors $H+k$ in \ensuremath{U(\lalg{b_+})}{} with elements $g^k$ in \ensuremath{C(B_+)}{}. (b) $L$ is generated by $\{g^{1-n}-1,Xg^{1-n},\dots, X^{n-1}g^{1-n}\}$ and $M$ is generated by $\{H(H+n-1),X(H+n-1),X^n \}$. The formulas are obtained by denoting with $\eta_0,\dots,\eta_{n-1}$ the equivalence classes of $H/(1-n),X,\dots,X^{n-1}$ in $\ensuremath{U(\lalg{b_+})}/(\k 1+M)$. The dual basis of $L$ is then \[g^{1-n}-1,X g^{1-n}, \dots,\frac{1}{(n-1)!}X^{n-1} g^{1-n}\] \end{proof} In contrast to the $q$-deformed setting and to the usual classical setting, the many freedoms in choosing a calculus leave us with many $2$-dimensional calculi.
It is not obvious which one we should consider to be the ``natural'' one. Let us first look at the $2$-dimensional calculus coming from the $q$-deformed setting as described in (b). The relations become \begin{gather*} [\diff H, a]=\diff a\qquad [\diff X, a]=0\qquad\forall a\in\ensuremath{U(\lalg{b_+})}\\ \diff\no{f} =\diff H \no{\fdiff_{1,H} f} + \diff X \no{\frac{\partial}{\partial X} f} \end{gather*} for $f\in\k[X,H]$. We might want to consider calculi which are closer to the classical theory in the sense that derivatives are not finite differences but usual derivatives. Let us therefore demand \[\diff P(H)=\diff H \frac{\partial}{\partial H} P(H)\qquad \text{and}\qquad \diff P(X)=\diff X \frac{\partial}{\partial X} P(X)\] for polynomials $P$ and ${\diff X}\neq 0$ and ${\diff H}\neq 0$. \begin{prop} \label{prop:nat_bp} There is precisely one differential calculus of dimension $2$ meeting these conditions. It obeys the relations \begin{gather*} [a,\diff H]=0\qquad [X,\diff X]=0\qquad [H,\diff X]=\diff X\\ \diff \no{f} =\diff H \no{\frac{\partial}{\partial H} f} +\diff X \no{\frac{\partial}{\partial X} f} \end{gather*} where the normal ordering $\k[X,H]\to \ensuremath{U(\lalg{b_+})}$ is given by $X^n H^m\mapsto X^n H^m$. \end{prop} \begin{proof} Let $M$ be the left ideal corresponding to the calculus. It is easy to see that for a primitive element $a$ the classical derivation condition corresponds to $a^2\in M$ and $a\notin M$. In our case $X^2,H^2\in M$. If we take the ideal generated from these two elements we obtain an ideal of $\ker\cou$ of codimension $3$. Now, it is sufficient without loss of generality to add a generator of the form $\alpha H+\beta X+\gamma XH$. $\alpha$ and $\beta$ must then be zero in order not to generate $X$ or $H$ in $M$. I.e.\ $M$ is generated by $H^2, XH, X^2$. The relations stated follow. 
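The role of the condition $a^2\in M$ for a primitive generator $a$ can be illustrated heuristically (a sketch, not part of the proof): treating $\diff a$ as a formal increment, discarding all terms of order $(\diff a)^2$ from $f(a+\diff a)-f(a)$ leaves exactly $f'(a)\,\diff a$, which is the classical derivation condition demanded above.

```python
from math import comb

# Heuristic sketch: with a primitive and (da)^2-terms killed by the
# ideal M, only the part of f(a + da) - f(a) linear in da survives,
# and that part is f'(a) da.
def increment_in_da(coeffs, a):
    """f(a+da) - f(a) expanded as [c_0, c_1, c_2, ...] in powers of da."""
    out = [0] * len(coeffs)
    for n, c in enumerate(coeffs):
        for i in range(1, n + 1):       # the i = 0 term cancels against -f(a)
            out[i] += c * comb(n, i) * a ** (n - i)
    return out

def derivative_at(coeffs, a):
    return sum(n * c * a ** (n - 1) for n, c in enumerate(coeffs) if n)

f = [5, 2, 0, 1]                         # f(a) = 5 + 2a + a^3
a = 3
inc = increment_in_da(f, a)
assert inc[1] == derivative_at(f, a)     # linear part equals f'(a)
assert inc[0] == 0                       # constant part cancels
```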
\end{proof} \section{Remarks on $\kappa$-Minkowski Space and Integration} \label{sec:kappa} There is a straightforward generalisation of \ensuremath{U(\lalg{b_+})}. Let us define the Lie algebra $\lalg b_{n+}$ as generated by $x_0,\dots, x_{n-1}$ with relations \[ [x_0,x_i]=x_i\qquad [x_i,x_j]=0\qquad\forall i,j\ge 1\] Its enveloping algebra \ensuremath{U(\lalg{b}_{n+})}{} is nothing but (rescaled) $\kappa$-Minkowski space as introduced in \cite{MaRu}. In this section we make some remarks about its intrinsic geometry. We have a surjective Lie algebra homomorphism $b_{n+}\to b_+$ given by $x_0\mapsto H$ and $x_i\mapsto X$. This is an isomorphism for $n=2$. The surjective Lie algebra homomorphism extends to a surjective homomorphism of enveloping algebras $\ensuremath{U(\lalg{b}_{n+})}\to \ensuremath{U(\lalg{b_+})}$ in the obvious way. This gives rise to an injective map from the set of submodules of \ensuremath{U(\lalg{b_+})}{} to the set of submodules of \ensuremath{U(\lalg{b}_{n+})}{} by taking the pre-image. In particular this induces an injective map from the set of differential calculi on \ensuremath{U(\lalg{b_+})}{} to the set of differential calculi on \ensuremath{U(\lalg{b}_{n+})}{} which are invariant under permutations of the $x_i$, $i\ge 1$. \begin{cor} \label{cor:nat_bnp} There is a natural $n$-dimensional differential calculus on \ensuremath{U(\lalg{b}_{n+})}{} induced from the one considered in proposition \ref{prop:nat_bp}.
It obeys the relations \begin{gather*} [a,\diff x_0]=0\quad\forall a\in \ensuremath{U(\lalg{b}_{n+})}\qquad [x_i,\diff x_j]=0 \quad [x_0,\diff x_i]=\diff x_i\qquad\forall i,j\ge 1\\ \diff \no{f} =\sum_{\mu=0}^{n-1}\diff x_{\mu} \no{\frac{\partial}{\partial x_{\mu}} f} \end{gather*} where the normal ordering is given by \[\k[x_0,\dots,x_{n-1}]\to \ensuremath{U(\lalg{b}_{n+})}\quad\text{via}\quad x_{n-1}^{m_{n-1}}\cdots x_0^{m_0}\mapsto x_{n-1}^{m_{n-1}}\cdots x_0^{m_0}\] \end{cor} \begin{proof} The calculus is obtained from the ideal generated by \[x_0^2,x_i x_j, x_i x_0\qquad\forall i,j\ge 1\] being the pre-image of $H^2,X^2,XH$ in \ensuremath{U(\lalg{b_+})}{}. \end{proof} Let us try to push the analogy with the commutative case further and take a look at the notion of integration. The natural way to encode the condition of translation invariance from the classical context in the quantum group context is given by the condition \[(\int\otimes\id)\circ\cop a=1 \int a\qquad\forall a\in A\] which defines a right integral on a quantum group $A$ \cite{Sweedler}. (Correspondingly, we have the notion of a left integral.) Let us formulate a slightly weaker version of this equation in the context of a Hopf algebra $H$ dually paired with $A$. We write \[\int (h-\cou(h))\triangleright a = 0\qquad \forall h\in H, a\in A\] where the action of $H$ on $A$ is the coregular action $h\triangleright a = a_{(1)}\langle a_{(2)}, h\rangle$ given by the pairing. In the present context we set $A=\ensuremath{U(\lalg{b}_{n+})}$ and $H=\ensuremath{C(B_{n+})}$. We define the latter as a generalisation of \ensuremath{C(B_+)}{} with commuting generators $g,p_1,\dots,p_{n-1}$ and coproducts \[\cop p_i=p_i\otimes 1+g\otimes p_i\qquad \cop g=g\otimes g\] This can be identified (upon rescaling) as the momentum sector of the full $\kappa$-Poincar\'e algebra (with $g=e^{p_0}$).
The pairing is the natural extension of (\ref{eq:pair_class}): \[\langle x_{n-1}^{m_{n-1}}\cdots x_1^{m_1} x_0^{k}, p_{n-1}^{r_{n-1}}\cdots p_1^{r_1} g^s\rangle = \delta_{m_{n-1},r_{n-1}}\cdots\delta_{m_1,r_1} m_{n-1}!\cdots m_1! s^k\] The resulting coregular action is conveniently expressed as (see also \cite{MaRu}) \[p_i\triangleright\no{f}=\no{\frac{\partial}{\partial x_i} f}\qquad g\triangleright\no{f}=\no{T_{1,x_0} f}\] with $f\in\k[x_0,\dots,x_{n-1}]$. Due to cocommutativity, the notions of left and right integral coincide. The invariance conditions for integration become \[\int \no{\frac{\partial}{\partial x_i} f}=0\quad \forall i\in\{1,\dots,n-1\} \qquad\text{and}\qquad \int \no{\fdiff_{1,x_0} f}=0\] The condition on the left is familiar and states the invariance under infinitesimal translations in the $x_i$. The condition on the right states the invariance under integer translations in $x_0$. However, we should remember that we use a certain algebraic model of \ensuremath{C(B_{n+})}{}. We might add, for example, a generator $p_0$ to \ensuremath{C(B_{n+})}{} that is dual to $x_0$ and behaves as the ``logarithm'' of $g$, i.e.\ acts as an infinitesimal translation in $x_0$. We then have the condition of infinitesimal translation invariance \[\int \no{\frac{\partial}{\partial x_{\mu}} f}=0\] for all $\mu\in\{0,1,\dots,{n-1}\}$. In the present purely algebraic context these conditions do not make much sense. In fact they would force the integral to be zero on the whole algebra. This is not surprising, since we are dealing only with polynomial functions, which would not be integrable in the classical case either. In contrast, if we had for example the algebra of smooth functions in two real variables, the conditions would just characterise the usual Lebesgue integral (up to normalisation).
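The coregular action formulas can be mimicked directly on normal-ordered polynomials; a sketch for the case $n=2$, encoding monomials $x_0^{k_0}x_1^{k_1}$ as dictionary keys (the encoding is ours, purely for illustration): $p_1$ acts as $\partial/\partial x_1$ and $g$ as the unit shift $T_{1,x_0}\colon f\mapsto f(x_0+1,x_1)$.

```python
from math import comb

# Sketch of the coregular action on normal-ordered polynomials (n = 2),
# with monomials x0^k0 x1^k1 encoded as {(k0, k1): coeff}:
# p_1 acts as d/dx1 and g as the shift x0 -> x0 + 1.
def act_p1(f):
    return {(k0, k1 - 1): c * k1 for (k0, k1), c in f.items() if k1 > 0}

def act_g(f):
    out = {}
    for (k0, k1), c in f.items():
        for j in range(k0 + 1):          # expand (x0 + 1)^k0 binomially
            key = (j, k1)
            out[key] = out.get(key, 0) + c * comb(k0, j)
    return out

f = {(2, 1): 1}                                          # f = x0^2 x1
assert act_p1(f) == {(2, 0): 1}                          # d/dx1 f = x0^2
assert act_g(f) == {(2, 1): 1, (1, 1): 2, (0, 1): 1}     # (x0+1)^2 x1
```

The combination $g-\cou(g)=g-1$ then acts as the finite difference $\fdiff_{1,x_0}$ appearing in the invariance condition above.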
Let us assume $\k=\mathbb{R}$ and suppose that we have extended the normal ordering vector space isomorphism $\mathbb{R}[x_0,\dots,x_{n-1}]\cong \ensuremath{U(\lalg{b}_{n+})}$ to a vector space isomorphism of some sufficiently large class of functions on $\mathbb{R}^n$ with a suitable completion $\hat{U}(\lalg{b_{n+}})$ in a functional analytic framework (embedding \ensuremath{U(\lalg{b}_{n+})}{} in some operator algebra on a Hilbert space). It is then natural to define the integration on $\hat{U}(\lalg{b_{n+}})$ by \[\int \no{f}=\int_{\mathbb{R}^n} f\ dx_0\cdots dx_{n-1}\] where the right hand side is just the usual Lebesgue integral in $n$ real variables $x_0,\dots,x_{n-1}$. This integral is unique (up to normalisation) in satisfying the covariance conditions since, as we have seen, these correspond just to the usual translation invariance in the classical case via normal ordering, for which the Lebesgue integral is the unique solution. It is also the $q\to 1$ limit of the translation invariant integral on \ensuremath{U_q(\lalg{b_+})}{} obtained in \cite{Majid_qreg}. We see that the natural differential calculus in corollary \ref{cor:nat_bnp} is compatible with this integration in that the braided derivations that appear are exactly the actions of the translation generators $p_{\mu}$. However, we should stress that this calculus is not covariant under the full $\kappa$-Poincar\'e algebra, since it was shown in \cite{GoKoMa} that for $n=4$ there is no such calculus of dimension $4$. Our results therefore indicate a new intrinsic approach to $\kappa$-Minkowski space that allows a bicovariant differential calculus of dimension $4$ and a unique translation invariant integral by normal ordering and Lebesgue integration. \section*{Acknowledgements} I would like to thank S.~Majid for proposing this project, and for fruitful discussions during the preparation of this paper.
\section{Introduction} The galaxy luminosity function (hereafter, LF) represents the number density of galaxies of a given luminosity. It is a robust observable, extensively used in the past to study the properties of galaxy populations \citep[e.g.][and references therein]{blanton2003}. The comparison between the behaviour of the LF and the halo mass function at faint magnitudes has been proposed as a crucial test to understand galaxy formation processes. Indeed, the cold dark matter theory predicts halo mass functions with a slope of $\sim-1.9$ \citep[e.g.][and references therein]{springel2008}, steeper than the one observed in deep LFs of nearby galaxy clusters, or in the field \citep[$\sim -1.1:-1.5$][]{trenthamt2002,depropris2003,blanton2005}. This is the so-called missing satellite problem, well reported in censuses of galaxies around the Milky Way and M31 \citep[e.g.][]{klypin1999}. Attempts to solve these discrepancies invoke several physical mechanisms that halt the star formation and darken or destroy dwarf galaxies, including high gas cooling times \citep{wr1978}, and suppression of low-mass galaxies due to a combination of feedback, photoionization and/or dynamical processes \citep{benson2002,benson2003,brooks2013}. Claims of the environmental dependence of the LF are numerous \citep[e.g.][]{tully2002,trentham2002,trentham2005,infante2003}, but the exact shape and significance of this dependence is still a matter of debate. Many previous studies suggest that the most striking differences between low-density and high-density environments concern the faint-end slope of the LF, with cluster galaxies showing a higher abundance of low-luminosity galaxies than the field \citep[e.g.][and references therein]{blanton2005, popesso2006}. This has important implications, as it suggests that, whatever the mechanisms preventing the formation of galaxies within low-mass satellites are, they depend on the host halo mass.
\cite{tully2002} suggest that perhaps reionization inhibited the collapse of gas in late-forming dwarfs in low-density environments, whereas a larger fraction of low-mass haloes formed before the photoionization of the intergalactic medium in overdensities that ultimately become clusters. Interestingly, this mechanism could be at the heart of the discrepancies existing in the literature about the faint end of the LF in clusters. While some authors observe a marked upturn at faint magnitudes \citep[][]{yagi2002,popesso2006,barkhouse2007}, others find a more regular behaviour \citep[e.g.][]{sanchez2005,andreon2006}. When observed, this upturn is usually due to early-type dwarfs located in the outer regions of clusters, suggesting that cluster-related mechanisms may be responsible for the paucity of dwarfs observed in the denser inner regions \citep{pfeffer2013}. Moreover, the slopes derived in the upturn region are rather steep, so much so as to be fully consistent with that of the halo mass function. Unfortunately, there is no evidence for the existence of such a dramatic steepening in the few cluster LFs where spectroscopic or visual membership has been determined \citep[][]{hilker2003,rines2003,mieske2007,misgeld2008,misgeld2009}. More extensive spectroscopic cluster surveys are needed to derive robust results on the faint-end slope of the LF, and in this way put constraints on the role played by the environment in the formation of baryonic structures within low-mass dark matter halos. We have undertaken a project to obtain spectroscopy of galaxies in nearby clusters down to the dwarf regime ($M > M^*+6$). This dataset will allow us to infer accurate cluster membership and analyze several properties of the dwarf galaxy population in nearby clusters, minimizing the contribution of background sources.
In the present work, we present a study of the spectroscopic LF of A85, a nearby ($z = 0.055$) and massive cluster \citep[$M_{200} = 2.5 \times 10^{14} \; M_{\odot}$ and $R_{200} = 1.02 \, h^{-1} \,$Mpc,][]{rines2006}. This cluster is not completely virialized, since several substructures and infalling groups have been identified \citep{durret1999,aguerri2010,cava2010}. A85 is an ideal target for a deep study of the LF, as spectroscopy within the virial radius is almost complete down to $m_r \sim 18$, resulting in 273 confirmed members \citep[][]{aguerri2007}. The new dataset presented here reaches three mag fainter, and almost doubles the number of cluster members. Throughout this work we have used the cosmological parameters $H_0 = 75 \; \mathrm{km} \, \mathrm{s}^{-1} \, \mathrm{Mpc}^{-1}$, $\Omega _m = 0.3$ and $\Omega _{\Lambda} = 0.7$. \section{The observational data on A85} \subsection{Deep VLT/VIMOS Spectroscopy} Our parent photometric catalogue contains all galaxies brighter than $m_r = 22$ mag\footnote{The apparent magnitudes used are the dereddened SDSS-DR6 $r$-band magnitudes.} from the SDSS-DR6 \citep[][]{adelman2008}, and within $R_{200}$\footnote{The cluster center is assumed to be at the brightest cluster galaxy (BCG, $\alpha$(J2000): $00^h \, 41^m \, 50.448^s$, $\delta$(J2000): $-9^{\circ} \, 18' \, 11.45''$). This is a sensible assumption because the peak of X-ray emission lies at only 7 kpc from the BCG \citep{popesso2004}.}. Figure \ref{cmd} shows the colour-magnitude diagram of A85 for the galaxies included in this catalogue. The target galaxies for our spectroscopic observations were selected among those with no existing redshift in the literature and bluer than $g-r = 1.0$ (see Fig. \ref{cmd}). This is the colour of a 12 Gyr old stellar population with a supersolar metallicity of [Fe/H] = +0.25 \citep{worthey1994}, typical of very luminous early-type galaxies.
As a result, this colour selection should minimize the contamination by background sources, while matching at the same time the colour distribution of galaxies in the nearby Universe \citep[e.g.][]{hogg2004}. The observations were carried out using the multi-object spectroscopy (MOS) mode of VLT/VIMOS, in combination with the LR-blue+OS-blue grisms and filters (Program 083.A-0962(B), PI R. S\'anchez-Janssen). To maximize the number of targets, and to avoid the gaps between the instrument CCDs, we designed 25 masks with large overlaps covering an area of 3.0 $\times$ 2.6 Mpc$^2$ around the central galaxy in A85 -- i.e. extending out to more than 1\,R$_{200}$. This observational strategy allowed us to obtain 2861 low-resolution spectra (R=180) of galaxies down to $m_r= 22$ mag. We exposed for 1000 s to reach a signal-to-noise ratio ($S/N$) in the range $6 - 10$ down to the limiting magnitude. The data were reduced using GASGANO and the provided pipeline \citep{izzo2004}. The spectra were wavelength calibrated using the HeNe lamp, which yields a wavelength accuracy of $\sim 0.5$ $\rm \AA$ pixel$^{-1}$ over the full spectral range ($3700 - 6700 \, \rm \AA$). \subsection{Redshift Determination and Cluster Membership} The recessional velocities of the observed galaxies were determined using the \textit{rvsao.xcsao} \textit{IRAF} task \citep{kurtz1992}. This task cross-correlates a template spectrum \citep[in this work][]{kennicutt1992} with the observed galaxy spectrum. This technique allowed us to determine the recessional velocity for 2070 spectra. The remaining spectra had too low a $S/N$ to estimate reliable redshifts. The formal errors of \textit{xcsao} are smaller than the true intrinsic errors \citep[e.g.][]{bardelli1994}, and reliable errors can only be estimated by observing galaxies more than once. Our observational strategy allowed us to obtain 676 repeated spectra, which result in a one-sigma velocity uncertainty of $\sim 500$ km s$^{-1}$.
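The repeat-based error estimate works as follows: for a galaxy observed twice, the variance of the difference of the two independent measurements is twice the single-measurement variance. A minimal sketch (the velocity pairs below are hypothetical, for illustration only):

```python
import math

def velocity_uncertainty(pairs):
    """Estimate the one-sigma single-measurement velocity error (km/s)
    from repeated observations of the same galaxies.

    Each pair (v1, v2) holds two independent measurements of one galaxy;
    Var(v1 - v2) = 2 * Var(single measurement), hence the division by 2.
    """
    diffs = [v1 - v2 for v1, v2 in pairs]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)
    return math.sqrt(var / 2.0)

# Hypothetical repeat measurements (km/s), for illustration only.
pairs = [(16500, 16980), (15800, 15350), (17100, 16450),
         (16020, 16700), (15550, 15100)]
print(f"sigma_v ~ {velocity_uncertainty(pairs):.0f} km/s")
```

With the 676 actual repeat pairs the same estimator yields the quoted $\sim 500$ km s$^{-1}$.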
The redshifts from the literature (SDSS-DR6 and NED catalogues), together with our new data, result in a total number of 1593 galaxy velocities in the direction of A85 within R$_{200}$ and $14 < m_r < 22$. \begin{figure} \centering \includegraphics[width=1\linewidth]{cmd} \caption{Lower panel: colour-magnitude diagram of the galaxies in the direction of A85. Grey points are the target galaxies. Red and blue symbols show red and blue cluster members, respectively. The solid line represents the red sequence of the cluster. Upper panel: spectroscopic completeness ($C$, diamonds) and cluster member fraction ($f_m$, black triangles) as a function of $r$-band magnitude. The dashed vertical line represents our limiting magnitude for the spectroscopic LF.} \label{cmd} \end{figure} The caustic method \citep{diaferio1997,diaferio1999,serra2011} estimates the escape velocity and the mass profile of galaxy clusters in both their virial and infall regions, where the assumption of dynamical equilibrium does not necessarily hold. A by-product of the caustic technique is the identification of the members of the cluster, with an interloper contamination of only 2\% within R$_{200}$ on average \citep{serra2013}. The application of the caustic technique to our spectroscopic catalogue resulted in a sample of $434$ cluster members within $R_{200}$, 284 of which are new data. We define the completeness of our data as $C = N_z / N_{phot}$, with $N_z$ being the number of measured redshifts and $N_{phot}$ the number of photometric targets. Figure \ref{cmd} shows that $C$ is higher than 90 $\%$ for galaxies with $M_r < -19$ and decreases to around $40 \, \%$ at $M_r \sim -16$. We also define the member fraction as $f_m = N_m / N_z$, where $N_m$ is the number of members. The member fraction also strongly depends on luminosity: $f_m$ is higher than 80 $\%$ for $M_r < -19$ and then rapidly decreases to $\sim$ 20 $\%$ at $M_r = -16$ (see Fig. \ref{cmd}).
\section{The spectroscopic LF of A85} The A85 LF is computed using all cluster members with m$_r \leq 21$ mag and $\langle \mu_{e,r} \rangle \leq 24$ mag arcsec$^{-2}$. These limits correspond to the values where the galaxy counts stop increasing, and thus determine our completeness limits. We note that our uniform spectroscopic selection function (cf. Sect. 2.1) does not introduce any bias in magnitude, $\langle \mu_{e,r} \rangle $, or colour. Figure \ref{LF} shows the $r$-band spectroscopic LF of A85. It is computed as $\phi(M_r) = N_{phot}(M_r) \times f_m(M_r) / (0.5 \times A)$, where $A$ is the observed area and 0.5 is the magnitude bin width. The Pearson test shows that the observed LF cannot be modelled by a single Schechter function at the 99$\%$ confidence level, due to the presence of a statistically significant upturn at $M_r > -18$ (see Fig. \ref{LF}). The observed LF is better parameterized using two Schechter functions. Because of the degeneracy in fitting a double Schechter profile, we followed a two-step process \citep[see][]{barkhouse2007}. First, we fitted the bright part ($b: -22.5<$ $M_{r}<-19.0$) of the LF, obtaining $M^*_b$ and $\alpha_b$. Then, these parameters were kept fixed when a double Schechter fit was performed to the total LF. Table \ref{tabella_data} shows the M$_r^*$ and $\alpha$ values of the faint ($f: -19.0<$ $M_{r}<-16.0$) and bright parts of the best-fit LF. \begin{figure} \includegraphics[width=1\linewidth]{lf} \caption{Black points are the observed spectroscopic LF of A85. The solid blue line shows the fit of the LF by a double Schechter function. The bright ($b$) and faint ($f$) components of the fit are represented by dashed and dotted lines, respectively. The 68$\%$ and 99$\%$ c.l. for the fitted parameters are shown in the insets.} \label{LF} \end{figure} In order to better understand the nature of the galaxies responsible for the upturn at the faint end of the LF, we classified galaxies as blue or red according to their $(g-r)$ colours.
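The double Schechter parameterization used above can be sketched numerically. In magnitude units, $\phi(M) = 0.4 \ln(10)\, \phi^* \, x^{\alpha+1} e^{-x}$ with $x = 10^{0.4(M^*-M)}$; the shape parameters below are the best-fit values from Table \ref{tabella_data}, while the normalisations $\phi^*$ are hypothetical, chosen only to make the example concrete:

```python
import math

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function in absolute-magnitude units:
    phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x), x = 10^(0.4 (M* - M))."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)

def double_schechter(M, bright, faint):
    """Sum of a bright and a faint Schechter component."""
    return schechter_mag(M, *bright) + schechter_mag(M, *faint)

# Shape parameters (M*, alpha) from the total-LF fit; phi* is illustrative.
bright = (1.0, -20.85, -0.79)   # (phi*, M*, alpha), bright component
faint = (0.5, -18.36, -1.58)    # (phi*, M*, alpha), faint component

for M in (-22.0, -20.0, -18.0, -16.0):
    print(f"M_r = {M:5.1f}  phi = {double_schechter(M, bright, faint):.3f}")
```

With these parameters the summed profile reproduces the qualitative shape of Fig. \ref{LF}: a dip at intermediate magnitudes followed by a rise faintward of $M_r \sim -18$.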
Thus, blue galaxies are those with $(g-r) < (g-r)_{RS} - 3 \sigma_{RS}$, and red ones are the remaining cluster members, where $(g-r)_{RS}$ is the colour of the red sequence of A85 and $\sigma_{RS}$ represents its dispersion\footnote{The red sequence of the cluster and its dispersion were measured in the magnitude range $-22.5 < $ M$_r < -19.0$.} (see Fig. \ref{cmd}). Figure \ref{LF_rb} shows the spectroscopic LF of the blue and red populations of A85. Naturally, red galaxies completely dominate in number. The red LF departs from the single Schechter shape, showing the characteristic flattening at intermediate luminosities followed by a (mild) upturn at the faint end. The blue LF, however, is well fitted by a single Schechter function. This is in qualitative agreement with previous work \citep[e.g.][P06 hereafter]{popesso2006}. Nevertheless, it is remarkable that the faint-end slopes of both the red and blue populations are virtually identical (see Table \ref{tabella_data}). \begin{table} \begin{center} \caption{Schechter Function Parameters.\label{tabella_data}} \begin{tabular}{ccc} \hline\hline mag interval &$M_{r}^* $ [mag] &$\alpha $\\ \hline -22.5 $<$ M$_{r}$ $<$ -19.0 &$-20.85\; ^{+0.14}_{-0.14}$ &$-0.79\; ^{+0.08}_{-0.09}$ \\ -19.0 $<$ M$_{r}$ $<$ -16.0 &$-18.36\; ^{+0.41}_{-0.40}$ &$-1.58\; ^{+0.19}_{-0.15}$ \\ \hline & red &\\ \hline -22.5 $<$ M$_{r}$ $<$ -19.0 &$-20.71\; ^{+0.13}_{-0.15}$ &$-0.63\; ^{+0.09}_{-0.08}$ \\ -19.0 $<$ M$_{r}$ $<$ -16.0 &$-17.90\; ^{+0.35}_{-0.19}$ &$-1.46\; ^{+0.18}_{-0.17}$ \\ \hline & blue &\\ \hline -22.5 $<$ M$_{r}$ $<$ -16.0 &$-21.29 \; ^{+0.36}_{-0.44}$ &$-1.43 \; ^{+0.06}_{-0.05}$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \includegraphics[width=1\linewidth]{lf_b_r} \caption{The spectroscopic LF of A85 (black points); blue and red diamonds show the LFs of the blue and red galaxies of A85. The solid lines correspond to the Schechter function fits.
The histograms are the LFs of field red and blue galaxies (Blanton et al. 2005).} \label{LF_rb} \end{figure} \section{Discussion} In Fig.~\ref{LF_rb} we additionally show the LFs of the blue and red field populations from \cite{blanton2005}\footnote{Throughout this work, all the literature LFs are normalized so that they have the same number of counts as our A85 LF in the magnitude range $-22 < M_{r} < -19$.}. Their field LF is best described by a double Schechter function with a faint-end slope of $-1.5$. Figure \ref{LF_cfr} shows the comparison between our LF and other cluster and field LFs from the literature. We compare with the LFs of the Virgo and Abell\,2199 clusters \citep{rines2008} because they are the two deepest spectroscopic LFs of nearby clusters covering a significant fraction of their respective virial radii (out to $\sim 0.7\,R_{200}$, very similar to our coverage). In addition, we include the LF from P06 because it is a photometric LF obtained by stacking a large sample of 69 clusters, and it exhibits the most statistically meaningful example of an upturn. Finally, we compare with the field LF from \cite{blanton2005}. We note that all LFs in Fig.~\ref{LF_cfr} have been derived using SDSS photometry. The very steep upturn presented by the stacked photometric LF ($\alpha \sim -2.2$) is not observed in any of the nearby clusters. While A85 shows a slightly steeper slope than Virgo and A2199, it is not nearly as steep as the photometric LF. The discrepancy between the LF of A85 and the composite cluster LF from P06 can be traced back to the different methodologies applied to derive them. First, we introduced a stringent colour cut in the selection of our spectroscopic targets (see Sect.2.1). Second, our spectroscopic LF is based on accurate cluster membership using the galaxy recessional velocity, while photometric LFs necessarily rely on a statistical background subtraction. The open circles in Fig.
\ref{LF_cfr} show the photometric LF of A85, derived from the number counts of candidate cluster galaxies with $(g-r) < 1.0$, and after performing a statistical background subtraction using galaxies (with the same colour cut) from 50 SDSS random fields of the same area as our A85 survey. It is clear that the photometric LF of A85 closely matches the spectroscopic LF, except for a steeper faint-end slope, which moves towards the P06 value. We quantify this difference by fitting a power-law function to these LFs in the $ -18 \leq M_{r} \leq -16$ magnitude range. We prefer this simple approach to minimize the degeneracies present when fitting a double Schechter function (see Sect.3). We derive power-law faint-end slopes $\alpha_{f} = -1.5, -1.8,$ and $-2.1$ for the spectroscopic LF of A85, the photometric LF of A85, and the stacked photometric LF from P06, respectively. We note that cosmic variance of the cluster LF, or any mass dependence, cannot explain the discrepancy between the composite cluster LF and that of A85. P06 show that 90 per cent of their nearby cluster sample have individual LFs consistent with the composite one shown in Fig.\ref{LF_cfr}. Moreover, the photometric LF for A85 itself from P06 is fully consistent with having a very steep faint end (see their Figure 5). This all suggests that the composite photometric LF from P06 suffers from contamination at the faint end, resulting in much steeper slopes than what is found using spectroscopic data. \begin{figure} \includegraphics[width=1\linewidth]{lf_cfr} \caption{Comparison between our LF and others from the literature: the stacked photometric LF of 69 clusters from Popesso et al. (2006), the spectroscopic LFs of A2199 and Virgo from Rines \& Geller (2008), and the field galaxy LF from Blanton et al. (2005).
The photometric LF of A85 is also shown with open circles.} \label{LF_cfr} \end{figure} The field spectroscopic LF presented by \cite{blanton2005} shows an upturn and a faint-end slope ($\alpha=-1.5$) similar to that of A85. Contrary to some claims from photometrically derived cluster LFs, Fig. \ref{LF_cfr} shows that the faint-end slopes in A85 and the field are consistent with each other -- i.e. clusters of these masses do not seem to contain a significant excess of dwarf galaxies with respect to the field. This, in turn, suggests that the environment may not play a major role in determining the abundance of low-mass galaxies \citep[cf.][]{tully2002}, but only acts to modify their star formation activity. While the overall abundance of dwarfs in the field and in clusters like A85 is the same, blue dwarfs dominate in the former environment, and red dwarfs in the latter. Environmental processes involving the loss of the galaxy gas reservoirs \citep[e.g.][]{quilis2000, bekki2002}, followed by the subsequent halt of the star formation and the reddening of their stellar populations, are the obvious mechanisms invoked to explain the colour transformation of cluster dwarfs. It is, however, not yet clear whether the efficiency of these processes depends on the halo mass or not. \cite{lisker2013} propose that quiescent cluster dwarfs originate as the result of early \citep[see also][]{sanchez2012} \emph{and} prolonged environmental influence in \emph{both} group- and cluster-size haloes. Along these lines, \cite{wetzel2013} find that group preprocessing is responsible for up to half of the quenched satellite population in massive clusters. On the other hand, P06 suggest that the excess of red dwarfs in clusters is a threshold process that occurs within the cluster virial radius \citep[see also][]{sanchez2008}, but their exact abundance is nevertheless a function of clustercentric radius: the upturn becomes steeper as the distance from the centre increases.
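The power-law characterization of the faint end used in the comparison above can be sketched as follows. Since $\phi \propto L^{\alpha}$ implies $\log_{10}\phi = -0.4(\alpha+1)M + \mathrm{const}$, a straight-line fit to the logarithmic counts yields $\alpha$ directly (synthetic counts with illustrative normalisation; not our actual data):

```python
import math

def powerlaw_alpha(mags, counts):
    """Least-squares fit of log10(counts) = a*M + b; the slope relates
    to the faint-end exponent via a = -0.4 * (alpha + 1)."""
    ys = [math.log10(c) for c in counts]
    n = len(mags)
    mx = sum(mags) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(mags, ys)) / \
        sum((x - mx) ** 2 for x in mags)
    return -a / 0.4 - 1.0

# Synthetic galaxy counts drawn from a pure power law with alpha = -1.5,
# normalised to 50 counts in the faintest bin (illustrative numbers only).
alpha_true = -1.5
mags = [-18.0, -17.5, -17.0, -16.5, -16.0]
counts = [50.0 * 10.0 ** (-0.4 * (alpha_true + 1.0) * (M + 16.0)) for M in mags]
print(round(powerlaw_alpha(mags, counts), 3))  # recovers -1.5
```

Applied to the observed counts in $-18 \leq M_r \leq -16$, this is the estimator behind the quoted slopes $-1.5$, $-1.8$ and $-2.1$.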
Our dataset calls for a study of the LF as a function of radial distance from the cluster centre, and an investigation of the properties of A85 dwarfs in substructures and infalling groups. This will be the subject of future papers in this series. \section{Conclusions} We obtained 2861 low-resolution spectra of galaxies down to $m_{r}=22$ mag within the virial radius of the nearby ($z=0.055$) galaxy cluster A85. This unique dataset allowed us to identify 438 galaxy cluster members, and build the spectroscopic LF of A85 down to $M^* + 6$. The resulting LF is best modelled by a double Schechter function due to the presence of a statistically significant upturn at the faint end. The amplitude of this upturn ($\alpha = -1.58^{+0.19}_{-0.15}$), however, is much smaller than that of most photometric cluster LFs previously reported in the literature. The faint-end slope of the LF in A85 is consistent, within the uncertainties, with that of the field. We investigate the nature of the galaxy population dominating the faint end of the LF of A85 by dividing the galaxies according to their $(g-r)$ colour. The red population dominates at low luminosities, and is mainly responsible for the upturn. This is different from the field LF: while a relatively steep upturn is also present in the red population, blue galaxies are the prevalent population. The fact that the slopes of the spectroscopic LFs in the field and in a cluster as massive as A85 are similar suggests that the cluster environment does not play a major role in determining the abundance of low-mass galaxies, but it does influence the star formation history of dwarfs. \textbf{\textit{Acknowledgements.}} IA, AD and ALS acknowledge partial support from the INFN grant InDark and from the grant Progetti di Ateneo TO Call 2012 0011 `Marco Polo' of the University of Torino.
\section{Background on Homomorphic Encryption} \label{sec:HE} {\em Homomorphic Encryption} schemes allow computations on encrypted data such that the decrypted results are equal to the result of a mathematical or logical operation applied to the corresponding plaintext data. Therefore, calculations on encrypted data can be performed without decrypting the data first. This property makes it possible to outsource not only the storage of encrypted data, but also the computation on sensitive data, to untrusted third parties (e.g., the cloud). The computation on cloud servers has two major advantages. If only the result and not the whole dataset needs to be transferred to the client, a lower network bandwidth is required. Furthermore, the client can be a thin client like a tablet with limited computing and storage resources, because the computationally expensive rendering is done on the server. Homomorphic encryption schemes are classified into three categories: \begin{itemize} \itemsep0em \item {\em partially homomorphic encryption (PHE)}: homomorphic with regard to only one type of operation (addition or multiplication). \item {\em somewhat homomorphic encryption (SHE)}: can perform more general calculations than PHE, but only a limited number of them. \item {\em fully homomorphic encryption (FHE)}: can perform any computation on encrypted data. \end{itemize} Rivest et al. \cite{pdf:FirstHE.Rivest1978} introduced the idea of Homomorphic Encryption in 1978. They articulated the need for a secure Homomorphic Encryption scheme that supports a large set of operations. However, it took more than 30 years until the first proposal of such an FHE scheme was made by C. Gentry in 2009 \cite{phdthesis:homencGentry}. While the first FHE schemes were just concepts, due to an ongoing development and optimization process, they are currently at least efficient enough for functional implementations.
However, FHE schemes are still not practical for real applications, because the storage and computational costs are too high \cite{phdthesis:Hu2013ImprovingHE, article:MartinsFheSurvey2018, article:AbbasHeSurvey2018}. Therefore, in our approach, we propose to take advantage of the Paillier PHE scheme, whose homomorphic properties that are relevant for our encrypted volume rendering will be introduced next. \subsection{The Paillier Cryptosystem} \label{sec:PaillierCryptosystem} Paillier's cryptosystem \cite{pdf:Paillier} is an additive PHE scheme. The important property of this scheme is that a multiplication of two encrypted numbers is equivalent to an addition in the plaintext domain. This means that it is possible to calculate the sum of plaintext numbers that have been encrypted by multiplying the encrypted numbers. This relation is stated in \autoref{equ:paillierAddition}. For \autoref{equ:paillierAddition} and \autoref{equ:paillierMultiplication}, we adopt the notation by Ziad et al. \cite{pdf:CryptoImg-IEEE}. The $\oplus$ symbol is used for the operation on encrypted numbers that is equivalent to an addition on plaintext numbers. The encrypted version of the value $m$ is denoted as $\llbracket m \rrbracket$, and $\textrm{Dec}(\llbracket m \rrbracket) = m$ means decrypting $\llbracket m \rrbracket$ to $m$ by the decryption function of Paillier's cryptosystem (see: \autoref{alg:PaillierDecrypt}). Paillier works with modular arithmetic; performing a modulo (mod $N^2$) reduction after each multiplication ensures that the numbers stay within the residue ring and does not change the decrypted result, because the corresponding plaintext numbers need to be less than the modulus $N$ for a correct decryption anyway. Moreover, the modulo operation makes further calculations more efficient because it prevents unnecessarily large numbers ($n$ multiplications would blow up the number length by about $n$ times).
\begin{align} \begin{split} \textrm{Dec}(\llbracket m_1 \rrbracket \oplus \llbracket m_2 \rrbracket) &= \textrm{Dec}((\llbracket m_1 \rrbracket \times \llbracket m_2 \rrbracket) \mod N^2)\\ &= (m_1 + m_2) \mod N \end{split} \label{equ:paillierAddition} \end{align} Since it is possible to add encrypted numbers to each other, in a \texttt{for-loop}, an encrypted number can be added to itself $d$ times to simulate a multiplication with $d$. However, note that the value $d$ is in plaintext. Since addition is performed by doing multiplication on the encrypted numbers, we can, instead of a \texttt{for-loop}, take the encrypted value to the power of $d$ to get this result, which can be implemented more efficiently than a \texttt{for-loop}. Furthermore, it has the advantage that it also works for $d < 0$. The case $d = -1$ is of special interest, because this makes subtraction of two encrypted numbers possible ($\textrm{Dec}(\llbracket m_1 - m_2 \rrbracket) = \textrm{Dec}( \llbracket m_1 \rrbracket \times \llbracket m_2 \rrbracket^{-1} \: \mod \: N^2)$). The symbol $\otimes$ is used for such an operation on one plaintext number $d$ and one encrypted number $\llbracket m_1 \rrbracket$. \autoref{equ:paillierMultiplication} shows how to calculate this multiplication with one encrypted number. \begin{align} \begin{split} \textrm{Dec}(\llbracket m_1 \rrbracket \otimes d) &= \textrm{Dec}(\llbracket m_1 \rrbracket^d \mod N^2)\\ &= (m_1 \times d) \mod N \end{split} \label{equ:paillierMultiplication} \end{align} While the Paillier PHE supports an efficient method to multiply an encrypted and a plaintext number, it does not support the multiplication of two encrypted numbers and is, therefore, not a fully homomorphic encryption scheme. \input{algorithm/paillierCreateKeys-short} \input{algorithm/paillierEncrypt} \input{algorithm/paillierDecrypt} The Paillier HE is a probabilistic asymmetric encryption scheme like the well-known RSA scheme \cite{article:rsa}. 
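Both homomorphic operations, $\oplus$ and $\otimes$, can be exercised with a toy implementation (hypothetical, insecure parameters chosen for readability; real keys use primes of 1024 bits or more):

```python
import math
import random

# Toy Paillier parameters -- far too small to be secure.  g = N + 1 as
# in the key-generation algorithm, so g^m = 1 + m*N (mod N^2).
p, q = 17, 19
N = p * q                      # public modulus
N2 = N * N
lam = math.lcm(p - 1, q - 1)   # secret: lambda = lcm(p - 1, q - 1)
mu = pow(lam, -1, N)           # secret: mu = lambda^(-1) mod N (for g = N + 1)

def encrypt(m):
    """Enc(m) = g^m * r^N mod N^2 with a random obfuscation factor r."""
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return ((1 + m * N) * pow(r, N, N2)) % N2

def decrypt(c):
    """Dec(c) = L(c^lambda mod N^2) * mu mod N, where L(x) = (x - 1) // N."""
    return ((pow(c, lam, N2) - 1) // N) * mu % N

m1, m2, d = 42, 77, 3
c1, c2 = encrypt(m1), encrypt(m2)
assert decrypt((c1 * c2) % N2) == (m1 + m2) % N               # oplus
assert decrypt(pow(c1, d, N2)) == (m1 * d) % N                # otimes
assert decrypt((c1 * pow(c2, -1, N2)) % N2) == (m1 - m2) % N  # d = -1 case
```

Python's three-argument `pow` performs the modular exponentiations, and `pow(x, -1, n)` computes the modular inverse needed for the subtraction case $d = -1$.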
That means that an encryption can be performed by using the public key, which is derived from the secure (or private) key. However, for the task of decryption, the secure key is required, which cannot (efficiently) be calculated from the public key, because this would require factoring a product of two large prime numbers. It is essential for the security of Paillier's cryptosystem (and also for RSA) that there is no known fast method for integer factorization of a product of two large prime numbers. \autoref{alg:PaillierCreateKeys} shows an example of a key generation function for the Paillier cryptosystem, which always sets the generator $g$ to $N+1$, as this allows a more efficient encryption function (see: \cite{inproc:Fazio:2018:paillierOptimizingGenerator}). Furthermore, the algorithm clearly shows that the secure key, which needs to be kept secret, contains the two large prime numbers $p$ and $q$ (e.g., 1024 bit long), and the public key contains the product $N$ (e.g., 2048 bit long) of these two prime numbers. \autoref{alg:PaillierEncrypt} and \autoref{alg:PaillierDecrypt} show the pseudocode for the encrypt and decrypt routines of the Paillier HE. The encryption algorithm also contains the obfuscation of an encrypted number with a random number $r$, which qualifies the Paillier HE as a probabilistic encryption scheme. This means that a specific plaintext message $m$ can be represented by many possible ciphertexts $ \llbracket m \rrbracket_1, \llbracket m \rrbracket_2, \llbracket m \rrbracket_3 \dots, \llbracket m \rrbracket_r$. The decryption with the right secure key will return the original message $m$ for all the possible ciphertext representations. While this is not required for correct homomorphic calculations with Paillier (imagine $r = 1$), it is important for the semantic security against chosen-plaintext attacks (IND-CPA) that Paillier's cryptosystem provides \cite{pdf:Paillier, inbook:homoEncApps:2014:paillier}.
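The role of the random factor $r$ can be illustrated with a toy dictionary attack against a deterministic variant in which $r$ is fixed to $1$ (hypothetical, insecure parameters, for illustration only):

```python
# Toy Paillier parameters (insecure; illustration only), with g = N + 1
# so that encryption is simply (1 + m*N) * r^N mod N^2.
p, q = 17, 19
N = p * q
N2 = N * N

def encrypt(m, r):
    """Paillier encryption with an explicit obfuscation factor r."""
    return ((1 + m * N) * pow(r, N, N2)) % N2

# Deterministic variant (r = 1): knowing only the public key, an attacker
# tabulates all 2^8 possible values and inverts any intercepted ciphertext.
table = {encrypt(m, 1): m for m in range(256)}
intercepted = encrypt(200, 1)
assert table[intercepted] == 200   # plaintext recovered without the secure key

# Probabilistic encryption: different r values give different ciphertexts
# for the same plaintext, so the precomputed table is useless.
assert encrypt(200, 5) != encrypt(200, 7)
assert encrypt(200, 5) not in table
```

This is exactly the probing attack that the obfuscation factor prevents: without a fixed $r$, one plaintext corresponds to many ciphertexts and the attacker's lookup table no longer matches.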
Without this obfuscation, it would be possible to decrypt datasets without knowing the secure key, because an attacker would only need to encrypt all possible plaintext values with the public key, store the plaintext and the corresponding ciphertext values in pairs, and compare the values of the encrypted dataset with the self-encrypted values for which the correct decryption is known. For datasets with a limited number of possible values, like the voxel values of a volume, which usually contains no more than $2^{10} = 1024$ different values, this would be a trivial task. \section{Related Work} We have found only two works that address the topic of privacy-preserving rendering of volumetric data. The most similar work to ours is that of Mohanty et al.~\cite{inproc:3DCrypt}. They present a cryptosystem for privacy-preserving volume rendering in the cloud. Unlike our approach, they achieve correct alpha compositing. However, to attain this goal, they end up with a solution that cannot be considered secure, that has a fixed transfer function, and that requires that the volume is sent from one server to another server for each rendered frame. Their approach requires two servers for rendering: a Public Cloud Server and a Private Cloud Server. The first step of their rendering approach is to apply a color and opacity to each voxel before encrypting the volume. This means that the transfer function is pre-calculated and cannot be changed by a user without performing a time-consuming reencryption and uploading of the volume. In the next step, the encrypted data is uploaded to the Public Cloud Server, which stores the volume data. When the Public Cloud Server receives an authorized rendering request from a client, the server calculates all sample positions for the requested ray casting and interpolates the encrypted color and opacity values for each sample position.
All interpolated sample values then need to be individually sent to the Private Cloud Server, which decrypts the opacity value of each sample in order to perform the alpha blending along the viewing rays. For alpha compositing, the opacity values of samples represent object structures in the volume; therefore, anyone who can gain access to the Private Cloud Server, such as an administrator or a hacker, will be able to observe these structures in the volume dataset. If an unauthorized person has access to this server, the whole approach collapses. For the task of encrypting and decrypting parts of the volume data on the servers, their approach requires a central Key Management Authority (KMA). While this brings the advantage that an organization can centrally control which users have access to a specific volume, it enlarges the attack surface of their system considerably, because the KMA holds all keys required for decrypting all volume data. Therefore, the confidentiality of the KMA is essential for the privacy of all datasets, no matter who they belong to. Another weakness of their approach is the required network bandwidth between the Public and the Private Cloud Server, because all sample values of a ray-casting frame need to be transferred from the Public to the Private Cloud Server (more than 1\,GB). With our approach, the privacy of the volume data and the rendered image depends only on a single secure key. Also, our approach should scale linearly with the computing power of the hardware it is running on. Chou and Yang \cite{ObfuscatedVolRend} present a volume rendering approach that attempts to make it difficult for an unintended observer to make sense of the volume dataset that resides on a server. This is done by, on the client's side, subdividing the original data into equally sized blocks. The blocks are rearranged in a random order and then sent to the server as a volume.
The server then performs volume rendering on each block and sends the result back to the client, which reorders the individual block renderings and composites them to create a correct rendering. To obfuscate the data further, the client changes the data values in each block using one of three possible monotonic operations: flipping, scaling, and translating. Monotonic operations are used as they are invertible and associative under the volume rendering integration. Therefore, applying the inverse operations to the resulting rendering gives the same result as applying them to the data values before performing the rendering. This algorithm cannot be considered secure, and the authors acknowledge this, as they state that the goal is only to not trivially reveal the volume to unauthorized viewers. A possible attack would be to consider the gradient magnitude of the obfuscated volume. This should reveal the block borders. The gradient magnitude can further be used inside each block to reveal structures in the data that can be used for aligning the blocks correctly. \deleted{ Our goal is to develop an approach that is open and secure by design (Kerckhoffs's principle \cite{article:Kerckhoffs83}) and not {\em secure through obscurity} \cite{article:hoepman2008securityThrough}, nor one that relies on assumptions outside the algorithm itself, such as the assumption that the internal memory of a cloud server is not accessible to an intruder. The former is the case for Chou and Yang's approach \cite{ObfuscatedVolRend} and the latter is the case for the approach by Mohanty et al. \cite{inproc:3DCrypt}. } \replaced{ To attain our goal of developing an approach that is open and secure by design, we use the cryptosystem developed by Paillier in 1999 \cite{pdf:Paillier}. This cryptosystem is an asymmetric encryption scheme, where the secure key contains two large prime numbers $p$ and $q$, and the public key contains the product $N$ (modulus) of $p$ and $q$.
The cryptosystem supports an additive homomorphic operation ($\oplus$). If this operation is applied to two encrypted values $\llbracket m_1 \rrbracket, \llbracket m_2 \rrbracket$ ($\llbracket m \rrbracket$ means encrypted $m$), the decrypted result is the sum of $m_1$ and $m_2$ ($\textrm{Dec}(\llbracket m_1 \rrbracket \oplus \llbracket m_2 \rrbracket) = (m_1 + m_2) \mod N$). Furthermore, a homomorphic multiplication ($\otimes$) between an encrypted value and a plaintext value $d$ is supported ($\textrm{Dec}(\llbracket m_1 \rrbracket \otimes d) = (m_1 \times d) \mod N$). Since Paillier's cryptosystem does not support a multiplication of two encrypted values that carries over to the plaintexts, it is classified as a partially homomorphic encryption (PHE) scheme. Paillier can securely encrypt many values (e.g., $512^3$ voxels of a volume) from a small number space (e.g., $2^{10}$ possible density values) because it is {\em probabilistic}: during the encryption, the {\em obfuscation} maps a single plaintext value randomly to one of a large number of possible encrypted values. This makes simple \enquote{probing} to find out the number correspondence impossible. Further details about Paillier's cryptosystem, such as the encryption and decryption algorithms, are provided in the Supplementary Material document. We are limited to the arithmetic operations supported by Paillier for creating a volume rendering that captures as much structure as possible from the data. } { To attain this goal, we use the Paillier cryptosystem, and are limited in the types of arithmetic operations we can use for creating a volume rendering that captures as much structure as possible from the data. } This forces us to think unconventionally and creatively when designing the volume renderer. For homomorphic image processing, the work by Ziad et al. \cite{pdf:CryptoImg-IEEE} makes use of the additive homomorphic property of Paillier's cryptosystem.
They demonstrate that they are able to implement many image processing filters using the limited operations allowed with Paillier. They implement filters for negation, brightness adjustment, low-pass filtering, Sobel filtering, sharpening, erosion, dilation, and equalization. \added{ While most of these filters are computed entirely on the server side, erosion, dilation, and equalization require the client for parts of the computation. There are various works that make use of such a {\em trusted client protocol} approach to overcome the limitations of a PHE scheme and enable operations such as addition, multiplication, and comparisons on the encrypted data~\cite{inProc:SecureMR, inProc:Crypsis, inProc:JCrypt}. A {\em trusted client} knows the secure key and can, therefore, perform any computation on the data or {\em convert / re-encrypt} it from one encryption scheme to another (e.g., from an additive to a multiplicative {\em homomorphic encryption}). These client-side computations introduce latency because the data needs to be transferred back and forth between the server and the client. Furthermore, the client needs to have enough computational power to avoid becoming the bottleneck of the system. To mitigate this problem, automated code conversions can be used that minimize the required client-side {\em re-encryptions} \cite{inProc:JCrypt,inProc:SecureMR}. While a {\em trusted client} approach could theoretically solve many of the problems we face with our untrusted server-only approach, it is not practical for volume rendering. The most demanding problems of volume rendering, such as transferring a voxel value and advanced compositing (alpha blending, maximum intensity projection, ...), need to be handled per voxel. Hence, every voxel that could contribute to the image synthesis (all voxels of a volume for many rendering cases) needs to be transferred to the {\em trusted client} and processed there for every rendered frame.
The encryption and decryption on the client side are more expensive than the operations required for classical sample compositing due to the size of encrypted values (e.g., 1000 bits per voxel). If an amount of data in the range of the volume itself needs to be transferred from the server to the client, where the data would need to be encrypted and decrypted, it is pointless to perform any calculations on the server, because the client then has more work to do than with classical volume rendering performed entirely on the client. Moreover, it does not save any network traffic as compared to a simple download, decrypt, and process use case. Therefore, we argue that {\em trusted client} approaches are not suitable for our work. Furthermore, a {\em trusted client} approach will not work with thin clients, which contradicts our third requirement. Our second requirement is also contradicted because, in real-world use cases, the network connection between a client such as a tablet computer and a cloud server will not provide enough bandwidth (e.g., more than 1~Gbit/s) to support interactive frame rates. } \section{Encrypted Rendering Overview} The first step of the introduced privacy-preserving rendering system is the encryption of the volume dataset \added{(\autoref{fig:setup} Acquisition Device)}. During the encryption stage, every single scalar voxel value of a volume dataset needs to be encrypted with Paillier's approach (see \autoref{alg:PaillierEncrypt} in the Supplementary Material document). Metadata of the volume, such as width, height, depth, and the storage order of voxels, will not be encrypted. The next step is to upload the encrypted volume dataset to a server \added{(\autoref{fig:setup} arrow from Acquisition Device to Cloud Server)}. For our approach, the device that encrypts the volume and uploads it to a server does not even need the secure key, because for encryption, only the public key is required.
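The key setup, the probabilistic encryption of a voxel value, and the two homomorphic operations used throughout the pipeline can be sketched with a toy implementation (our own illustration with insecure, tiny primes; a real deployment uses primes of about $1024$ bits each):

```python
import math
import random

# Toy Paillier key pair (insecure, for illustration only)
p, q = 17, 19
n = p * q                       # public modulus N
n2 = n * n
g = n + 1                       # standard generator choice
lam = math.lcm(p - 1, q - 1)    # part of the secure key
mu = pow(lam, -1, n)            # precomputed decryption constant (valid for g = N + 1)

def enc(m):
    """Probabilistic encryption: a fresh random r obfuscates every call."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Decryption with the secure key (lam, mu)."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

c1, c2 = enc(7), enc(35)
assert dec((c1 * c2) % n2) == 42        # homomorphic add: ciphertext product
assert dec(pow(c1, 6, n2)) == 42        # multiply with plaintext: exponentiation
```

Because the encryption is probabilistic, encrypting the same voxel value twice almost always yields different ciphertexts, which is exactly what defeats the dictionary attack described above.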
When a rendered image is requested to be shown on a client, the client sends a rendering request to the server, which holds the encrypted volume dataset \added{(\autoref{fig:setup} arrow from Client to Cloud Server)}. The rendering request contains further information about the settings of the rendering pipeline, such as the camera position, view projection, and (depending on the selected rendering type) also information about the transfer function that should be used. After the server receives such a rendering request, it uses the included pipeline settings and the already stored encrypted volume dataset to render the requested image \added{(\autoref{fig:setup} the rendering pipeline stages of the Cloud Server)}. To preserve privacy, the server does not have the secure key and therefore cannot decrypt the volume data. The operations that are used for rendering an image from an encrypted volume dataset are limited to the homomorphic operations add ($\oplus$\deleted{, \autoref{equ:paillierAddition}}) and multiply with plaintext ($\otimes$\deleted{, \autoref{equ:paillierMultiplication}}), which are defined for Paillier's encryption scheme. When the rendering is finished, the server sends the calculated image data to the client \added{(\autoref{fig:setup} arrow from Cloud Server to Client)}. The resulting image that the client receives is still encrypted. Decrypting such an image is only possible for a client that knows the correct secure key. For everyone else, the image will be random noise (shown in the Supplementary Video Material). Since every single pixel value is an encrypted number, every pixel can be decrypted independently of the other pixels. For a gray-scale image, that means one number per pixel. An RGB-colored image requires three values to be decrypted per pixel. In \autoref{sec:xray}, we explain how the homomorphic operations of Paillier's HE can be used for X-ray sample integration.
Furthermore, we will show how to use Paillier's cryptosystem with floating-point numbers, which allows us to perform trilinear interpolation. \autoref{sec:transferFunction} explains a more advanced approach that allows emphasizing different density ranges in the rendered images. \section{Encrypted X-Ray Rendering} \label{sec:xray} \added{ Ray casting \cite{proc:Krueger:2003:ATGV} is the most frequently used approach for volume rendering. Furthermore, ray-casting-based algorithms can be easily and efficiently parallelized and can be implemented with fewer memory reads than slicing-based algorithms. Memory access is time-consuming, especially if every number that needs to be read is thousands of bits long. Therefore, we implement our privacy-preserving volume rendering approach with ray casting. } \replaced{ However, other direct volume rendering approaches developed for unencrypted data, such as slicing, can be used as well. Slicing on the server can be built from the same encrypted rendering pipeline components (sampling / interpolation, color mapping, compositing), which we explain below. Slicing could also be used to perform only the sampling on the server, transfer the slices to the client, and perform the compositing there. However, this would not fulfill our requirements because of the required network bandwidth and the high computational requirements on the client. }{ } \added{ The ray casting algorithm first calculates a viewing ray for every pixel of the final image (\autoref{fig:setup} Ray Traversal - stage of the Server). These viewing rays are calculated based on the camera position, up vector, opening angle, image resolution, and pixel index. At discrete and equidistant steps along the ray, the data of the volume is sampled (\autoref{fig:setup} Sampling - stage of the Server).
The last step is the compositing, where the final pixel value is calculated based on the sample values of a viewing ray (\autoref{fig:setup} Compositing - stage of the Server). } \replaced{ X-ray rendering is a volume rendering approach where the sample value is mapped to a white color with monotonically increasing opacity, and the compositing is a summation followed by a normalization at the end of the ray traversal. }{ } If the sampling of the voxel values is done by nearest-neighbor filtering, the sum along a viewing ray can be calculated by only using the homomorphic add operation ($\oplus$), which is already defined for Paillier's cryptosystem\deleted{ (see \autoref{equ:paillierAddition})}. \replaced{ The final normalization of all samples along a viewing ray cannot be done directly by the homomorphic operations of Paillier's encryption scheme, because it requires a division whose result may be a non-integer value, which is not supported. }{ } However, the server \replaced{could}{} send the encrypted sum together with the sample count to the client, which can perform the division after decrypting the sum. \begin{figure}[tb] \centering \subfigure[Nearest Neighbor]{ \includegraphics[width=0.478\columnwidth,trim=0 3mm 0 3mm,clip]{figures/enc-xray-nearestNeighbor-x300.png} \label{fig:xrayTrilinear:nn} } \subfigure[Trilinear Interpolation]{ \includegraphics[width=0.478\columnwidth,trim=0 3mm 0 3mm,clip]{figures/enc-xray-trilinearInterpolation-x300.png} \label{fig:xrayTrilinear:ti} } \caption{Results from encrypted X-ray rendering with nearest-neighbor sampling (a) and trilinear interpolation (b), which our approach also supports.} \label{fig:xrayTrilinear} \end{figure} \added{ To improve the nearest-neighbor sampling with trilinear interpolation, a mechanism is required that allows the summing and normalization of encrypted values ($\llbracket m_1 \rrbracket$, $\llbracket m_2 \rrbracket$), which are scaled by some plaintext weights ($\alpha_1$, $\alpha_2$).
For plaintext integers, the interpolation could be implemented with the integer arithmetic operations add, multiply, and divide (1D example: $(m_1 \cdot \alpha_1 + m_2 \cdot \alpha_2) / (\alpha_1 + \alpha_2)$). Since an arbitrary division is not supported by Paillier's cryptosystem, this is not directly feasible on encrypted data. A possible solution could be to use fraction types, which have an encrypted numerator and a plaintext denominator, for storage and calculations. After the image, which contains such fractions as pixel values, is rendered, the client can download it, decrypt the numerators, and perform the deferred divisions}\footnote{ \added{ If the rendering pipeline is designed in a very static way, it is theoretically possible to know the final denominator upfront and let the client perform the required division without explicitly specifying the denominator. However, this is very inflexible, error prone, and requires an update of the client whenever a change on the server leads to a change of the final denominator. } }. \added{ However, we decided to use a floating-point encoding, which is easier to implement and allows shader code development as is usual for hardware-accelerated rendering. With a floating-point representation of encrypted values, it is possible to multiply the eight neighboring voxels of a sample position with the distances between the sample position and the voxel positions. These distances, which have a sum of $1.0$, are the weights of the interpolation (1D example: $m_1 \cdot \alpha_1 + m_2 \cdot \alpha_2$). } \deleted{ To improve the nearest-neighbor sampling with trilinear interpolation, a floating-point number encoding for encrypted values is required because the distances between the sample position and the eight neighboring voxels, which are used as weights for the interpolation, are fractions. } A floating-point encoding also makes the final division of the sample sum for X-ray rendering possible on the server side.
While a floating-point encoding does not directly enable divisions in the encrypted domain, it can be used to approximate a division by a multiplication with the reciprocal of the divisor, as shown in \mbox{\autoref{equ:fpDeviation}}. \begin{equation} \begin{split} \frac{\sum}{n} \approx \textrm{Dec}\left( \left\llbracket\sum\right\rrbracket \otimes \left\lfloor \frac{1}{n} \cdot 10^\gamma \right\rceil \right) \cdot 10^{-\gamma} \end{split} \label{equ:fpDeviation} \end{equation} The sum of samples along a viewing ray is denoted as $\sum$, and $n$ is the number of samples. The precision of the approximation is defined by the number of decimal digits $\gamma$ (e.g., $\gamma = 3$ for thousandths). Before the reciprocal of $n$ is multiplied with $\sum$, its decimal point is moved $\gamma$ digits to the right ($\cdot 10^\gamma$) and the result is rounded ($\lfloor \rceil$). The multiplication with $10^{-\gamma}$, which moves the decimal point back to the correct position, can be achieved by subtracting $\gamma$ from the exponent of the floating-point encoded result. Since the Paillier cryptosystem is defined over $\mathbb{Z}_N$, the result is only correct if no intermediate result is greater than $N-1$. We discuss the used floating-point encoding in \autoref{sec:FPNumbers}. \autoref{fig:xrayTrilinear} shows two images that were rendered from an encrypted, floating-point encoded dataset. For the rendering of the left image, nearest-neighbor sampling was used, and for the right image, trilinear interpolation. The used dataset contains three objects with different densities: a solid cube in the center wrapped inside a sphere, and another sphere at the top left front corner. The same dataset is also used for the renderings shown in \autoref{fig:densityEmphasising} and \autoref{fig:generic4dimColored}. \subsection{Encrypted Floating-Point Numbers} \label{sec:FPNumbers} A floating-point number is defined as $m \cdot b^e$, where $m$ is called the mantissa.
The exponent $e$ defines the position of the decimal point in the final number. The base $b$ is a constant that is defined upfront (e.g., during the compilation of the application). We used a decimal system for convenience; therefore, our prototype uses $b = 10$. However, $b$ can be any positive integer that is greater than or equal to $2$. To calculate with floating-point arithmetic in the encrypted domain, we have chosen to use the approach developed for Google’s Encrypted BigQuery Client \cite{web:GoogleEncryptedBigqueryClientGit}. The idea is to store the mantissa $m$ and the exponent $e$ of a floating-point number in two different integer variables. During the encryption of the floating-point number $(m, e)$, only the mantissa $m$ is encrypted using Paillier's cryptosystem. The exponent $e$ remains unencrypted, which results in the floating-point number $(\llbracket m \rrbracket, e)$. This floating-point number representation is also used by the {\em python-paillier} library \cite{web:PythonPaillierGit}, the Java library {\em javallier} \cite{web:JavallierGit}, and in the work by Ziad et al. \cite{pdf:CryptoImg-IEEE}. For an addition of two such encrypted floating-point numbers, both need to have the same exponent. Therefore, the exponents of both numbers must be made equal before the actual addition, if they are not already equal. However, it is not possible to increase the exponent if the mantissa is encrypted, because that would require a homomorphic division of the encrypted mantissa, which is not possible. Therefore, the floating-point number with the greater exponent needs to be changed. Decreasing the exponent of a floating-point number, on the other hand, is not a problem, because it requires only a homomorphic multiplication of the encrypted mantissa with a plaintext number, which is possible with Paillier.
\autoref{equ:decreaseExponentTo} shows how to calculate the new mantissa $\llbracket m_n \rrbracket$ that is required for decreasing the exponent of the floating-point number ($\llbracket m_o \rrbracket$, $e_o$) to the lower exponent $e_n$. The new floating-point number is defined as ($\llbracket m_n \rrbracket$, $e_n$), which represents exactly the same number as ($\llbracket m_o \rrbracket$, $e_o$); it is just another way to store it. \begin{align} \begin{split} \llbracket m_n \rrbracket = \llbracket m_o \rrbracket \otimes b^{e_o - e_n} \end{split} \label{equ:decreaseExponentTo} \end{align} When both floating-point numbers ($\llbracket m_1 \rrbracket$, $e_1$) and ($\llbracket m_2 \rrbracket$, $e_2$) have the same exponent $e_1 = e_2 = e_n$, the homomorphic sum $\llbracket m_s \rrbracket$ of both mantissas can be calculated by the add operation defined for Paillier\deleted{ (see \autoref{equ:paillierAddition}) }, which results in the final floating-point number ($\llbracket m_s \rrbracket$, $e_n$). \autoref{alg:PaillierFpAdd} shows this approach for summing two floating-point numbers with encrypted mantissas. Lines 2 to 10 bring the exponents of both floating-point numbers to the same value ($e_n$), and line 11 contains the addition of the encrypted mantissas. \input{algorithm/paillierFpAdd} A multiplication of a floating-point number that contains an encrypted mantissa ($\llbracket m_1 \rrbracket$, $e_1$) with a floating-point number that has a plaintext mantissa ($m_2$, $e_2$) can be achieved by multiplying the mantissas with the multiplication operation defined for Paillier ($\llbracket m_n \rrbracket = \llbracket m_1 \rrbracket \otimes m_2 \;\;$\deleted{~ , \autoref{equ:paillierMultiplication}}) and adding the exponents in plaintext ($e_n = e_1 + e_2 $). This is also stated in lines 10 and 11 of \autoref{alg:PaillierFpMultiply}, which are sufficient for a correct result.
Lines 2 to 9 contain a performance optimization, which prevents the intermediate result of $\llbracket m_e \rrbracket^{m_d}$, which is computed before $\mathrm{mod}\, N^2$ is applied in line 10, from becoming unnecessarily large\deleted{ (see \autoref{equ:paillierMultiplication})}. This optimization is also used by the Python library {\em python-paillier} \cite{web:PythonPaillierGit} in \texttt{paillier.py} and the Java library {\em javallier} \cite{web:JavallierGit} in \texttt{PaillierContext.java}. \input{algorithm/paillierFpMultiply} Signed numbers can be represented by using a two's complement representation for the mantissa $m$. The exponent $e$ does not change. If $v$ is a negative integer, its two's complement in the integers modulo $N$ can be calculated by $m = v + N$. In the encrypted domain, the additive inverse $-m$ of $m$ is given by the multiplicative inverse $\llbracket m \rrbracket ^{-1} = \llbracket i \rrbracket$ of $\llbracket m \rrbracket$ in the integers modulo $N^2$ ($\llbracket i \rrbracket$ is defined by $\llbracket m \rrbracket \cdot \llbracket i \rrbracket = 1 \mod N^2$ and can be computed from $\llbracket m \rrbracket$ and $N^2$ by the {\em extended Euclidean algorithm} \cite{book:KnuthArtV2}). This complement representation for encrypted numbers can also be used for a subtraction of two encrypted numbers\deleted{, as already stated in \autoref{sec:PaillierCryptosystem}}, since the first operand of a subtraction can be added to the additive inverse of the second operand ($\textrm{Dec}(\llbracket m_1 - m_2 \rrbracket) = \textrm{Dec}( \llbracket m_1 \rrbracket \times \llbracket m_2 \rrbracket^{-1} \: \mod \: N^2)$). With the floating-point encoding explained in this section, it is possible to perform a trilinear interpolation of voxel values, because the encrypted voxel values can be multiplied by the fractional distances between a sample position on a viewing ray and the actual voxel positions.
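The floating-point mechanics of this section (exponent alignment for addition, mantissa multiplication with exponent addition, and the reciprocal-based division of \autoref{equ:fpDeviation}) can be sketched with plaintext mantissas standing in for $\llbracket m \rrbracket$; the function names below are ours, and homomorphically, $+$ on mantissas corresponds to $\oplus$ and $\times$ to $\otimes$:

```python
B = 10  # base b of the floating-point encoding

def fp_align(m, e, e_new):
    # Decrease the exponent from e to e_new <= e; homomorphically this is
    # [[m]] ⊗ B**(e - e_new), cf. equ:decreaseExponentTo.
    return m * B ** (e - e_new)

def fp_add(x, y):
    (m1, e1), (m2, e2) = x, y
    e_n = min(e1, e2)                   # exponents can only be decreased
    return fp_align(m1, e1, e_n) + fp_align(m2, e2, e_n), e_n

def fp_mul(x, y):
    (m1, e1), (m2, e2) = x, y           # y must hold a plaintext mantissa
    return m1 * m2, e1 + e2

def fp_div(x, d, gamma=3):
    # Approximate division by a plaintext d via the rounded reciprocal.
    return fp_mul(x, (round(10 ** gamma / d), -gamma))

# 3.5 + 0.25 = 3.75 is represented as (375, -2)
assert fp_add((35, -1), (25, -2)) == (375, -2)
# 12 / 8 = 1.5 is approximated as (1500, -3)
assert fp_div((12, 0), 8) == (1500, -3)
```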
Furthermore, divisions of an encrypted number $(\llbracket m \rrbracket, e)$ by a plaintext number $d$ can be approximated by a multiplication of the encrypted number $(\llbracket m \rrbracket, e)$ with the reciprocal $(\left\lfloor 1/d \cdot 10^\gamma \right\rceil , -\gamma)$ of $d$, where $\gamma$ defines the precision (compare \autoref{equ:fpDeviation}). \section{Transfer Function} \label{sec:transferFunction} In this section, we discuss the challenges of building a transfer function approach that works for a probabilistic PHE scheme, and we show a novel and practical solution for a simplified transfer function. It is not possible to use the transferred values for alpha-blending sample compositing, because this would require a multiplication of two encrypted values, which is not possible with Paillier's cryptosystem. However, the transfer function can be used to highlight specific density ranges in X-ray rendering, which helps an observer to distinguish between different objects inside a volume. A transfer function for non-encrypted voxel values can be implemented as an array with the possible voxel values as indices and the assigned colors as values of the array. The evaluation of such a transfer function is as simple as reading the value from the array at the index that is equal to the voxel value that should be mapped. However, this cannot be efficiently implemented for encrypted data. For non-encrypted voxel values, such a transfer function array will have a length that is equal to the number of possible voxel values, which is only $2^8 = 256$ for $8$-bit voxels or $2^{10} = 1024$ for $10$-bit voxels. An encrypted volume dataset will probably not contain two equal voxel values because of the obfuscation during the encryption. That means an encrypted dataset will probably have as many different voxel values as it has voxels. Therefore, an array as transfer function will not work, because it would be at least as big as the volume itself.
Another approach for non-encrypted data is to store just some supporting points that contain a density and a color. The evaluation of this transfer function approach is achieved by interpolating the color between the values of the next lower and next greater supporting points. To find the neighboring supporting points of the voxel value that should be transferred, comparison operators such as less than ($<$) or greater than ($>$) are required. However, comparison operators cannot exist for probabilistic PHE schemes like Paillier, because that would break their security (see \autoref{sec:ComparisonOperators}). Therefore, the question is how to implement a function $f:X \to Y$ that can map a finite set of numbers $X$ to another set of numbers $Y$ by just using the operations {\em add} ($\oplus$) and {\em multiply with constant} ($\otimes$). The result of this function is again an encrypted number. A promising approach that can achieve this was presented by Wamser et al. \cite{pdf:ObliviousLookupTables} in their work on \enquote{oblivious lookup-tables}. \subsection{Oblivious Lookup Tables} \label{sec:olut} Let $X = \{x_1, x_2,...,x_n\}$ be an enumeration of values that should be mapped to $Y = \{y_1, y_2,...,y_n\}$ by the lookup function $f(x_i) = y_i$. The idea is to create a vector $\vec{v_i}$ for every $x_i \in X$ with length equal to the cardinality of $X$ ($|\vec{v_i}|=|X|$) and to define the evaluation of a lookup by the dot product $\vec{v_i} \cdot \vec{l} = y_i$. The scalar value $y_i$ is the result of the lookup. For a transfer function, this would be the value of one color channel. The vector $\vec{l}$ can be calculated from the linear equation $V \cdot \vec{l} = \vec{y}$. $V$ is a full-rank square matrix of size $n = |X|$ that uses all vectors $\vec{v_i}$ as rows. However, this linear equation needs to be solved only once. Therefore, the client can calculate $\vec{l}$ upfront based on unencrypted numbers.
The equation $V \cdot \vec{l} = \vec{y}$ has a unique solution if all vectors $\vec{v_i}$ are linearly independent. Hence, the crucial part is to find an approach to extrapolate every vector $\vec{v_i}$ from one single $x_i$ only, so that the vectors $\vec{v_i}$ are linearly independent from each other. Wamser et al. \cite{pdf:ObliviousLookupTables} suggest using a Vandermonde matrix as $V$ (\autoref{equ:olutVandermonde}) because it fulfills these requirements. \begin{align} \begin{split} V = \begin{pmatrix} 1 & x_1^1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2^1 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n^1 & x_n^2 & \cdots & x_n^{n-1} \end{pmatrix} \end{split} \label{equ:olutVandermonde} \end{align} From the creation rule of the Vandermonde matrix, it follows that $\vec{v_i}$, which is equal to the $i$-th row of the matrix $V$, is defined as $\vec{v_i} = (1, x_i^1, x_i^2, \cdots, x_i^{n-1})$. The lookup function $f(x_i)$ can, therefore, be stated as: \begin{align} \begin{split} f(x_i) = (1, x_i^1, x_i^2, \cdots, x_i^{n-1}) \cdot \vec{l} = y_i \end{split} \label{equ:olutEvaluationVandermonde} \end{align} The dot product in \autoref{equ:olutEvaluationVandermonde} can be calculated even if $\vec{v_i} = (1, x_i^1, x_i^2, \cdots, x_i^{n-1})$ is encrypted, because only the operations {\em add} ($\oplus$) and {\em multiply} ($\otimes$) that are defined for the Paillier HE are required for calculating a dot product. However, it is not possible to calculate the vector $\vec{v_i}$ from an encrypted $\llbracket x_i \rrbracket$, because this would involve multiplications of two encrypted numbers, which is not possible with Paillier. A theoretical solution for this could be to store the vector $\vec{v_i}$ instead of the scalar $x_i$ as the value of a voxel. For a volume dataset where the voxel values have a resolution of only $8$ bits, this would lead to a vector length of $n = 2^8 = 256$.
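The lookup mechanism itself can be sketched in plaintext (the sets $X$ and $Y$ are made-up example values; on the encrypted side, only the final dot product would run on ciphertexts via $\oplus$ and $\otimes$):

```python
from fractions import Fraction

# Made-up example lookup table
X = [2, 5, 7]
Y = [10, 40, 90]
n = len(X)

# Vandermonde matrix V: row i is (1, x_i, x_i^2, ...)
V = [[Fraction(x) ** j for j in range(n)] for x in X]

def solve(A, b):
    """Gaussian elimination over exact rationals; solves A·l = b."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = next(r for r in range(i, n) if A[r][i] != 0)
        A[i], A[piv] = A[piv], A[i]
        A[i] = [a / A[i][i] for a in A[i]]
        for r in range(n):
            if r != i:
                A[r] = [a - A[r][i] * ai for a, ai in zip(A[r], A[i])]
    return [row[-1] for row in A]

# The client solves V·l = y once, on unencrypted numbers
l = solve(V, [Fraction(y) for y in Y])

def lookup(x):
    # Dot product (1, x, x^2, ...)·l; with an encrypted power vector this
    # needs only homomorphic ⊕ between terms and ⊗ with the plaintext l_j.
    return sum(Fraction(x) ** j * lj for j, lj in enumerate(l))

assert [lookup(x) for x in X] == Y
```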
Therefore, the required storage size of the volume will increase $256$ times. A volume with $512 \times 512 \times 512$ voxels and a resolution of $8$ bits per voxel requires $512^3 \cdot 8~\mathrm{bits} / 8~\mathrm{bits} = 134,217,728~\mathrm{Bytes} = 128~\mathrm{MB}$. The same volume encrypted by the Paillier HE with a public key length that can be considered secure ($2048$ bits) requires $512^3 \cdot 2 \cdot 2048~\mathrm{bits} / 8~\mathrm{bits} = 64~\mathrm{GB}$. If the scalar voxel values $x_i$ are replaced by the vectors $\vec{v_i}$ with a length of $256$, the volume will require $64~\mathrm{GB} \cdot 256 = 16~\mathrm{TB}$. While a $16$-terabyte volume dataset is probably better than a transfer function that is at least as big as the encrypted volume, the overhead in terms of storage and computation is still too big to be practical. Therefore, we develop a simplified and novel transfer function approach with a considerably lower storage overhead, which we discuss in the next two sections. \subsection{Density Range Emphasizing} Our simplified transfer function approach is based on the observation that it is possible to compute the dot product of a vector with encrypted values and a vector with plaintext values. Furthermore, the dot product can be used to calculate an encrypted scalar value indicating the similarity of an encrypted vector and a plaintext vector. This works if both vectors have length $1$. Therefore, our approach is to encode the density value of each voxel as a vector and encrypt each component of this vector with the Paillier encryption algorithm (Supplementary Material \autoref{alg:PaillierEncrypt}). In order to highlight a user-defined density range, the density value at the center of this range needs to be encoded as a vector as well. Note that this vector is not encrypted. The encrypted volume rendering engine can now compute the dot product between this vector and the encrypted vector of a sample position.
Then the ray-casting algorithm needs to sum up the results of the dot products along a ray instead of the density values. This approach allows a user to emphasize a selectable density range in the rendered image. \autoref{fig:densityEmphasising} contains images that were created using this approach. The top left subfigure shows the result of an X-ray rendering for comparison. All other subfigures show results for different emphasized density ranges. The density that was encoded as a vector and used for the dot-product calculation is specified in the caption of each subfigure. \begin{figure}[tb] \centering \subfigure[X-ray]{ \includegraphics[width=0.478\columnwidth,trim=0 1mm 0 1mm,clip]{figures/c-01-x-ray-smaller.png} } \subfigure[emphasized density: 0.653]{ \includegraphics[width=0.478\columnwidth,trim=0 1mm 0 1mm,clip]{figures/c-02-colored-1-1-smaller.png} } \\ \vspace*{-1mm} \subfigure[emphasized density: 0.331]{ \includegraphics[width=0.478\columnwidth,trim=0 1mm 0 1mm,clip]{figures/c-04-colored-1-3-smaller.png} } \subfigure[emphasized density: 0.781]{ \includegraphics[width=0.478\columnwidth,trim=0 1mm 0 1mm,clip]{figures/c-05-colored-1-4-smaller.png} } \caption{The first image shows an X-ray rendering result for comparison with the other three images, which were created by our encrypted density emphasizing approach. The volume density values are encoded as 4-dimensional vectors.} \label{fig:densityEmphasising} \end{figure} \begin{figure}[tb] \centering \subfigure{ \includegraphics[width=0.478\columnwidth]{figures/voxelEncoding3dNormalized-dot_045and085-c.pdf} } \subfigure{ \includegraphics[width=0.478\columnwidth]{figures/voxelEncoding6dNormalized-dot_045and085-c.pdf} } \caption{ Visualization of densities encoded as 3-dimensional (left) and 6-dimensional (right) vectors. The scalar value (density) of the voxel is represented on the x-axis. The magnitude of each vector component at a specific density is represented by the curves.
The components are drawn, in order, in red, green, blue, purple, olive, and light blue. The dashed curves show the result of the dot product between the encoded voxel value and a {\em TF-Node} vector for a density of 0.45 in cyan and a density of 0.85 in orange. } \label{fig:densityEncodingPlots} \end{figure} \input{algorithm/extendedHsvMapping-short} The density-to-vector encoding scheme we use is based on an HSV-to-RGB color conversion. The exact encoding scheme is stated in \autoref{alg:encodeDensity}. \autoref{fig:densityEncodingPlots} illustrates the magnitude of the vector components for all possible density values. Furthermore, the response intensities for user-defined emphasizing densities at 0.45 and 0.85 are shown. In the last line of \autoref{alg:encodeDensity}, the calculated vector is normalized. This is important to make sure that the result of the dot product is always between $0$ and $1$ and to ensure that the highest possible dot product result ($1$) occurs at the user-defined emphasizing density. There are other and possibly better density-to-vector encoding schemes. However, the HSV-based encoding leads to results that feel natural, especially while smoothly increasing or decreasing the emphasizing density. The encoding scheme should in any case be chosen in such a way that the curve created by the dot product is steep and narrow (see dashed lines in \autoref{fig:densityEncodingPlots}), so that the density selected by the user can be seen as clearly as possible in the resulting image. \autoref{alg:encodeDensity} takes not only the density that should be encoded as a parameter, but also the number of dimensions of the returned vector. Increasing the number of dimensions not only makes the dot-product response curve steeper (see \autoref{fig:densityEncodingPlots} and compare the dashed lines in the left and right plots), but also increases the required storage size of the encoded and encrypted volume dataset.
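A minimal plaintext sketch can illustrate the idea. The hat-function encoding below is a hypothetical stand-in for the paper's HSV-based scheme (the exact procedure is in \autoref{alg:encodeDensity}); what matters is that the vectors are normalized to unit length, so the dot product peaks at $1$ exactly when the sample density matches the user-selected density and falls off for distant densities.

```java
public class DensityEncoding {
    // Hypothetical hat-function encoding: component k responds to densities
    // near k/(dims-1). The final normalization matches the paper's requirement
    // that encoded vectors have unit length.
    static double[] encode(double density, int dims) {
        double[] v = new double[dims];
        double x = density * (dims - 1);
        double norm = 0;
        for (int k = 0; k < dims; k++) {
            v[k] = Math.max(0, 1 - Math.abs(x - k));
            norm += v[k] * v[k];
        }
        norm = Math.sqrt(norm);
        for (int k = 0; k < dims; k++) v[k] /= norm;
        return v;
    }

    // Similarity between an encoded voxel value and an encoded TF density.
    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    public static void main(String[] args) {
        double[] a = encode(0.45, 4), b = encode(0.85, 4);
        System.out.println(dot(a, a)); // maximal response (1.0) at the selected density
        System.out.println(dot(a, b)); // weaker response for a distant density
    }
}
```

In the encrypted setting, `a` would be component-wise Paillier-encrypted at volume-encryption time, while `b` stays in plaintext, so the dot product can be evaluated homomorphically.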
Note that the count of dimensions must be the same during the encryption of the volume and for the encoding of the user-defined emphasizing density. This also means that the amount of computation required for the volume rendering depends on the number of dimensions used for encoding the volume. \subsection{Simplified Transfer Function} It is possible to add RGB colors to the rendered images based on the density range emphasizing described in the last section. This is useful because RGB colors allow a user to emphasize different densities in the same image while keeping the densities distinguishable (see \autoref{fig:generic4dimColored}). Since the dot product between an encoded and encrypted voxel value and a user-defined encoded density is an encrypted scalar value, a multiplication with another plaintext number is possible. For our simplified transfer function approach, the dot product result needs to be multiplied by a user-defined RGB color vector. As the dot product expresses the similarity between the voxel value and the user-defined density, the intensity of the resulting RGB color will be high if the densities are similar, and low otherwise. Since the RGB color vector is not encrypted, the multiplication between the encrypted dot product result and the RGB color vector can be achieved by three separate homomorphic multiplications ($\otimes$) of one encrypted and one plaintext number\deleted{ (see \autoref{equ:paillierMultiplication})}. The result of such a multiplication is an encrypted RGB color. This calculation can be performed not only for one density-RGB-color pair, but also for multiple such pairs. For a better understanding, we will call such a pair consisting of a density and an RGB color a {\em transfer function node (TF-Node)}. \autoref{equ:encVoxel2Rgb} shows the transformation of one encoded and encrypted voxel value $\llbracket \vec{v} \rrbracket$ into an encrypted RGB color $\llbracket \vec{c_v} \rrbracket$.
The symbol $\bigoplus$ is used instead of $\sum$, because the sum of encrypted vectors needs to be calculated. The variable $n$ denotes the count of user-defined {\em TF-Nodes}. The vectors $\vec{d_i}$ and $\vec{c_i}$ are the encoded density and RGB color of the {\em TF-Node} with index $i$. The symbol $\odot$ is used as the operator for a dot product between one encrypted vector and one plaintext vector. \begin{align} \begin{split} \llbracket \vec{c_v} \rrbracket = \bigoplus_{i=1}^{n}\left( \llbracket \vec{v} \rrbracket \odot \vec{d_i} \right) \otimes \vec{c_i} \end{split} \label{equ:encVoxel2Rgb} \end{align} To obtain the final encrypted RGB color of a pixel, the sum of all encrypted RGB sample values $\llbracket \vec{c_v} \rrbracket$ along a viewing ray needs to be calculated. The total RGB vector needs to be divided by the sample count, as usual for averaging, and, furthermore, by the count of {\em TF-Nodes}. This can be achieved by dividing each component of the total RGB vector by the product of the sample count and the count of {\em TF-Nodes}. The method to approximate a division of an encrypted number is stated in \autoref{equ:fpDeviation}. After calculating this for every image pixel, the entire encrypted image is sent to the client. A client that knows the right secure key can now decrypt each RGB component of each pixel and display the colored image. Example images rendered with this approach are shown in \autoref{fig:teaser} and \autoref{fig:generic4dimColored}.
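In plaintext terms (ignoring encryption), \autoref{equ:encVoxel2Rgb} is a sum of TF-Node colors weighted by each node's dot-product response. The sketch below uses hypothetical example vectors; in the real pipeline the voxel vector is encrypted and the loop runs on ciphertexts via $\oplus$, $\otimes$, and $\odot$.

```java
public class TfNodeMapping {
    // Plaintext analogue of the encrypted mapping: each TF-Node contributes
    // its RGB color weighted by the similarity (dot product) between the
    // encoded voxel vector and the node's encoded density vector, averaged
    // over the TF-Node count as described in the text.
    static double[] voxelToRgb(double[] v, double[][] nodeDensities, double[][] nodeColors) {
        double[] rgb = new double[3];
        for (int i = 0; i < nodeDensities.length; i++) {
            double w = 0; // dot-product response of TF-Node i
            for (int k = 0; k < v.length; k++) w += v[k] * nodeDensities[i][k];
            for (int c = 0; c < 3; c++) rgb[c] += w * nodeColors[i][c];
        }
        for (int c = 0; c < 3; c++) rgb[c] /= nodeDensities.length;
        return rgb;
    }

    public static void main(String[] args) {
        double[] voxel = {0.0, 0.65, 0.35, 0.0};                      // hypothetical encoded voxel
        double[][] d = {{0, 1, 0, 0}, {0, 0, 0, 1}};                  // two TF-Node density vectors
        double[][] c = {{0, 0, 1}, {1, 0, 0}};                        // blue and red
        double[] rgb = voxelToRgb(voxel, d, c);
        System.out.printf("%.3f %.3f %.3f%n", rgb[0], rgb[1], rgb[2]); // mostly blue: voxel matches node 1
    }
}
```

The per-ray averaging then divides the accumulated RGB vector once more by the sample count, which on ciphertexts is the approximate division of \autoref{equ:fpDeviation}.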
\begin{figure}[tb] \centering \subfigure[blue at 0.279, red at 0.797]{ \includegraphics[width=0.478\columnwidth,trim=0 1mm 0 1mm,clip]{figures/c-06-colored-2-1-smaller.png} \label{fig:generic4dimColored_xray} } \subfigure[blue at 0.000, red at 1.000]{ \includegraphics[width=0.478\columnwidth,trim=0 1mm 0 1mm,clip]{figures/c-07-colored-2-2-smaller.png} } \\ \vspace*{-1mm} \subfigure[green at 0.076, blue at 0.651, red at 1.000]{ \includegraphics[width=0.478\columnwidth,trim=0 1mm 0 1mm,clip]{figures/c-08-colored-3-1-smaller.png} } \subfigure[blue at 0.000, yellow at 0.293, green at 0.664, purple at 1.000]{ \includegraphics[width=0.478\columnwidth,trim=0 1mm 0 1mm,clip]{figures/c-09-colored-4-1-smaller.png} } \caption{The images are created by our simplified transfer function approach. The volume data voxel values are encoded by four-dimensional vectors. The subfigures show results of different transfer functions applied to the same encrypted dataset.} \label{fig:generic4dimColored} \end{figure} \section{Results} \label{sec:Results} All performance tests are executed on a MacBook Pro (15-inch, 2016) with a 2.9 GHz Intel Core i7. All algorithms are implemented in Java \added{and are only single-threaded}. The purpose of the implementation is to prove the concept and, in its current form, it is not performance-optimized. All runtimes shown in \autoref{table:performanceXray} and \autoref{table:performanceColor} are measured with a volume size of $100 \times 100 \times 100$ voxels. The rendered image always has a size of $150 \times 150$ pixels. \autoref{table:performanceXray} shows the runtime performance required for encrypting a volume with scalar voxel values, X-ray rendering, and image decryption with different public key modulus lengths. The table is divided into four groups of rows. The first two groups show the required time for rendering with nearest-neighbor sampling. Groups three and four show the resulting performance for trilinear interpolation.
The numbers in groups one and three of \autoref{table:performanceXray} are measured without obfuscation during the encryption; therefore, the encrypted volume is not secure. While this type of \enquote{encryption} does not have any practical relevance, it is interesting to compare these runtime numbers with those in groups two and four, which are measured for a secure encryption with obfuscation. It can be seen that the obfuscation takes a significant amount of time. Therefore, the random number generation ($r$) that is required for the obfuscation and the calculation of $r^N$ (see Supplementary Material \autoref{alg:PaillierEncrypt}) have a substantial impact on the time required for encrypting the volume dataset. We use the \texttt{java.security.SecureRandom} class from the Java standard library as the random number generator for the obfuscation. \input{tables/javaPerformanseXrayTables} \input{tables/volumeStorageSize} \autoref{table:volumeStorageSize} shows the required memory size for this volume with a single scalar value per voxel and also for encodings in multiple dimensions at different modulus lengths. \autoref{table:performanceColor} shows the runtime required for encrypting a volume with different voxel encodings (two, three, and four dimensions), rendering with our simplified transfer function approach at different counts of {\em TF-Nodes} (one, two, ... colors), and image decryption. The resulting performance for all these operations is provided for different public key modulus lengths. \input{tables/javaPerformanseColorTable} \added{ The rendering results of \autoref{fig:teaser} show what can be done with our simplified transfer function. The right image demonstrates the utilization in nuclear medicine. During the diagnosis, these datasets are usually investigated either by showing single slices or by X-ray renderings, where the depth cues are provided by rotating the dataset around an axis.
This is possible with our homomorphic-encrypted volume rendering with the added privacy, which is useful for diagnosing from such a highly sensitive type of modality and associated pathologies. } \section{Discussion} \added{ First, we discuss possible performance improvements of our prototype and how the approach could scale to interactive frame rates for larger real-world datasets. Then, starting with general noteworthy considerations, we discuss security-related aspects of our volume rendering approach, followed by an explanation of why comparison operators cannot exist for encrypted values. In \autoref{sec:FPNumbersSecurity}, we will show that the used floating-point encoding with an encrypted mantissa and a plaintext exponent does not weaken the privacy of the encrypted volume data. } \subsection{\added{Performance}}% \label{sec:PerformanceImprovements}% Our prototype is implemented as a single-threaded application; however, a major strength of our approach is that it is highly parallelizable and should scale linearly with the processing power. There are obvious opportunities to improve the performance: a multi-threaded implementation, and the avoidance of repeated memory allocations (\texttt{new} statements) during the rendering. During the encryption, every voxel can be processed independently. Therefore, it should be relatively easy to use as many processing units (e.g., CPU cores or shader hardware on GPUs) as there are voxels in the volume for the encryption. In the rendering and decryption stage, every pixel of the image can be processed independently. Therefore, the number of processing units that can be used efficiently in parallel is equal to the number of pixels in the final image. Furthermore, a better storage order of the voxel values, such as Morton order \cite{tech:mortonOder} (a recursive Z curve) extended to three dimensions, could lead to better cache usage, which would further improve the performance.
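As an illustration of this storage order (not part of the paper's prototype, which uses a plain 3D array), a 3D Morton index interleaves the bits of the x, y, and z coordinates, so spatially close voxels land close together in memory:

```java
public class Morton3D {
    // Interleave the low 21 bits of x, y, and z into a 63-bit Morton index.
    // Voxels that are close in 3D space map to nearby indices, improving
    // cache locality when rays traverse the volume.
    static long morton3(int x, int y, int z) {
        long m = 0;
        for (int i = 0; i < 21; i++) {
            m |= ((long) ((x >> i) & 1) << (3 * i))
               | ((long) ((y >> i) & 1) << (3 * i + 1))
               | ((long) ((z >> i) & 1) << (3 * i + 2));
        }
        return m;
    }

    public static void main(String[] args) {
        System.out.println(morton3(1, 0, 0)); // 1
        System.out.println(morton3(0, 1, 0)); // 2
        System.out.println(morton3(0, 0, 1)); // 4
        System.out.println(morton3(3, 3, 3)); // 63: a full 2x2x2 block is contiguous
    }
}
```

Production implementations usually replace the loop with bit-spreading magic constants, but the loop form shows the interleaving directly.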
The implementation used for all shown results is based only on a naive three-dimensional \texttt{BigInteger} array as volume storage. \added{ If we consider a real-world dataset with a resolution of $512 \times 512 \times 512$ voxels encrypted with a 2048-bit key, which is considered secure, for the purpose of X-ray rendering with a single value per voxel, the encrypted dataset will have a size of $64$~GB ($=(512^3 \cdot 2048 \cdot 2) / (8 \cdot 1024^3)$). While this is a considerable data amplification compared to the 16-bit plaintext representation of the dataset with $256$~MB ($=(512^3 \cdot 16) / (8 \cdot 1024^2)$), it will nevertheless fit comfortably in the video memory of two NVIDIA Quadro RTX 8000 cards that have $48$~GB of memory each. An encrypted volume with a four-dimensional encoding for our simplified transfer function approach will be four times bigger and will, therefore, have a size of $256$~GB. Consequently, at least six GPUs with $48$~GB of memory each will be required. While six GPUs in one server is absolutely possible, our privacy-preserving volume rendering approach should scale much further. It should be possible to use our proposed encrypted voxel compositing scheme as the mapper for the MapReduce implementation proposed by Stuart et al. \cite{inProc:GPUMapReduce}, which can make use of a GPU-accelerated distributed memory system for volume rendering. } \subsection{\added{Security Considerations}} The data privacy of our approach depends entirely on the security of Paillier's cryptosystem. Our approach does not store any voxel value or any information that is computed from a voxel value without an encryption by Paillier's cryptosystem. The Paillier cryptosystem is semantically secure against chosen-plaintext attacks (IND-CPA) \cite{inbook:homoEncApps:2014:paillier}. Therefore, we conclude that the data that our approach provides to the storage and rendering server are protected in a semantically secure way.
The computational complexity required for breaking a secure key of Paillier's cryptosystem depends on the length of the modulus $N$. The larger the modulus $N$ is, the harder it is to factorize, which would be required for data decryption. For the required length of the modulus, the same conditions as for the RSA cryptosystem \cite{article:rsa} should hold. From 2018 until 2022, a modulus $N$ with a length of at least 2048 bits is considered to be secure \cite{web:cryptographicKeyLength, NIST:800_56B_Rev.2}. \subsection{Encrypted Comparison Operators} \label{sec:ComparisonOperators} It is not possible to compare encrypted numbers with each other. During the encryption of a number, the obfuscation is performed, \deleted{(see \autoref{sec:PaillierCryptosystem})} which randomly distributes the encrypted values between $0$ and $N^2-1$. Therefore, the order of the encrypted values $\llbracket M \rrbracket$ has nothing to do with the order of the underlying numbers $M$ that were encrypted. Consequently, operators such as lower than ($<$) or greater than ($>$) cannot provide a result that is meaningful for the numbers $M$ if they are applied to encrypted values $\llbracket M \rrbracket$. We can also argue that comparison operators cannot exist if the Paillier cryptosystem is secure, since the existence of a comparison operator would break the security of the cryptosystem. Consider a less-than comparison, for example: if such a comparator could be implemented, every value could be decrypted within $\log_2(N)$ comparisons by a binary search. For a modulus $N$ with a length of $2048$ bits, an attacker would need to encrypt and then compare only $\log_2(2^{2048}) = 2048$ numbers with the encrypted value $\llbracket m \rrbracket$ in order to find the decrypted number $m$. This would effectively break the security of the encryption scheme.
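The binary-search argument can be made concrete. The sketch below assumes a hypothetical oracle that answers "is the hidden value less than $t$?" for an encrypted value (simulated here with a plaintext comparison, since no such oracle can exist for a secure scheme); with it, any value in $[0, N)$ is recovered in at most $\log_2(N)$ oracle calls.

```java
import java.math.BigInteger;
import java.util.function.Predicate;

public class ComparisonAttack {
    // Recover a hidden value m in [0, bound) using only a hypothetical
    // "is m < t?" oracle, via binary search. The oracle stands in for an
    // (impossible) comparison operator on Paillier ciphertexts.
    static BigInteger crack(Predicate<BigInteger> isLessThan, BigInteger bound) {
        BigInteger lo = BigInteger.ZERO, hi = bound; // invariant: m in [lo, hi)
        while (hi.subtract(lo).compareTo(BigInteger.ONE) > 0) {
            BigInteger mid = lo.add(hi).shiftRight(1);
            if (isLessThan.test(mid)) hi = mid;
            else lo = mid;
        }
        return lo;
    }

    public static void main(String[] args) {
        BigInteger secret = BigInteger.valueOf(123456789);
        BigInteger bound = BigInteger.ONE.shiftLeft(64); // at most 64 oracle calls
        // Plaintext stand-in for the impossible encrypted comparison oracle.
        BigInteger recovered = crack(t -> secret.compareTo(t) < 0, bound);
        System.out.println(recovered); // 123456789
    }
}
```

For a 2048-bit modulus, `bound` would be $N$ and the loop would run at most 2048 times, which is why a working encrypted comparator would break the scheme outright.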
\subsection{Plaintext Exponent Does Not Leak Private Data} \label{sec:FPNumbersSecurity} At first glance, it may seem that the floating-point representation (encrypted mantissa, plaintext exponent) we used allows an attacker to obtain more important information than an encoding where all number components are encrypted. However, if it is implemented correctly, an attacker cannot gain any advantage from this number representation. \added{ First, we will discuss this for the data in the server memory and, in the last paragraph, we will show how the exponent can be protected during the data transfer from the server to the client. } For the following, we will suppose a secure system with an at least 2048-bit long modulus $N$ and, therefore, a mantissa $\llbracket m \rrbracket$ with at least $600$ decimal digits usable in the plaintext domain. Voxel values that are stored as 10-bit values are probably precise enough for most volume-rendering use cases. To store numbers between $0$ and $2^{10}-1 = 1023$, the exponent $e$ is not required at all, because the voxel information can be stored in the mantissa $m$ alone. Therefore, the exponent $e$ can be $1$ for all voxels. This means that the exponent does not even have to be transferred to the server, because the server can implicitly assume that the exponents of all numbers are $1$. An addition of any of these numbers that have an exponent of $1$ does not change the exponent, because for an addition, the exponent needs to be taken into account only if the summands have different exponents (see \autoref{alg:PaillierFpAdd}). Therefore, only a multiplication (e.g., an interpolation between voxel values) can change the exponent to anything other than $1$. However, the Paillier cryptosystem only supports the multiplication of an encrypted number with an unencrypted number. Consequently, the number $d$ that changes an exponent has to be unencrypted.
Furthermore, this number $d$ can only depend on unencrypted data, because Paillier does not support comparison operators (see \autoref{sec:ComparisonOperators}), which are required for flow control statements like \texttt{if} or \texttt{for-loops}, and arithmetic operations with an encrypted number will result in useless random noise, except for the add ($\oplus$) and multiply ($\otimes$) operations that are defined for the Paillier cryptosystem. Therefore, the number $d$ can only be the result of some computation with other unencrypted variables. This implies that $d$ does not need to be encrypted, because anyone can calculate $d$ on their own. In other words, if the variable $d$ can be computed from variables that must be considered publicly available, because they are unencrypted, it is pointless to encrypt $d$. If $d$, which is unencrypted and can only depend on unencrypted data, influences an exponent, the exponent exposes only information that is already publicly available. The important observation here is that an unencrypted value (e.g., an exponent) can influence an encrypted value (e.g., a mantissa), but an encrypted value (e.g., a mantissa) cannot influence an unencrypted value (e.g., an exponent). This means that no information that is only available as encrypted data can ever be exposed in unencrypted values like the exponent. In our rendering system, a number $d$ that changes an exponent can either be the result of a computation with a constant or with an unencrypted number that is provided in unencrypted form to the rendering system, such as the camera properties (eye position, opening angle, view direction, ...). Therefore, an attacker could possibly learn the constants used in our program code and data, such as the camera properties that are provided in unencrypted form, from the exponents of the rendering result (the image).
However, we want to develop an approach that is open and semantically secure \added{by design and not {\em secure through obscurity}} (compare: \cite{article:hoepman2008securityThrough, NIST:800_123, web:BruceSchneierTheInsecurity2, web:ChadPerrinSecurity101}). Therefore, we have to treat the source code of the application as publicly available, which means that a constant cannot be considered to be private. Furthermore, for our approach, the camera properties need to be provided in an unencrypted form to the rendering system. Therefore, we cannot consider them private anyway. It should be noted that the camera properties could possibly provide interesting information to an attacker, because it could be possible to learn something about the volume data by tracking the camera properties over time. For instance, if a user rotates the camera around a specific region for a considerable amount of time, an attacker could guess that the region contains some interesting data. During the transfer of the camera properties from the client to the server over the network, the camera properties could be secured by using an encrypted tunnel, such as {\em IPsec} \cite{tech:RFC4301.IPsec} or {\em TLS} \cite{tech:RFC8446.TLS}. However, our basic assumption is that we cannot trust the server that hosts our rendering program. This means that an attacker has access to the entire memory of the server and, therefore, can read the camera properties directly from the memory of the server, regardless of the network transfer method used. While the unencrypted camera properties could \replaced{indirectly expose some information}{be a security problem}, we will not discuss this further because it is beyond the scope of this work.
Based on the arguments stated in this section, we can conclude that using plaintext exponents for the rendering process on an untrusted computer system does not provide more information to a third party than using encryption for all components of a floating-point number. The only remaining part that needs to be considered is the transfer of the final image from the server back to the client across a network. Operations like trilinear interpolation will change the exponents during the rendering. Therefore, the final image will contain floating-point numbers with exponents unequal to $1$ and, because the interpolation weights that change the exponents depend on the camera properties, the exponents of the final image will provide some information about the camera properties. The privacy of the information that is stored in the exponents is only important if it can be assumed that the server is trustworthy, which contradicts the basic assumption of this work. Therefore, this is somewhat beyond the scope of this work, but we nonetheless discuss it here for the sake of completeness. In order to encrypt as much information as possible during the image transfer from the server to the client, ideally all information should be stored in the encrypted mantissa. While it is not possible to divide an encrypted number, it is possible to multiply an encrypted number. Furthermore, the encrypted mantissa can store numbers in the range from $0$ to $2^{2047}$. Therefore, it is possible to bring all exponents to the value of the smallest exponent of any pixel of the final image. This can be achieved by the calculation shown in \autoref{equ:decreaseExponentTo}. For the new exponent $e_n$, the value of the smallest exponent of any pixel must be used. 
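The exponent-decrease operation itself is a scalar multiplication: lowering a number's exponent from $e$ to $e_n \le e$ multiplies the mantissa by $2^{e-e_n}$, which Paillier supports as a plaintext-scalar multiplication on the encrypted mantissa. A plaintext sketch of this idea (the actual operation is defined in \autoref{equ:decreaseExponentTo}; class and method names here are illustrative):

```java
import java.math.BigInteger;

public class ExponentNormalize {
    // A floating-point value m * 2^e; in the real system only m is encrypted.
    final BigInteger mantissa;
    final int exponent;

    ExponentNormalize(BigInteger m, int e) { mantissa = m; exponent = e; }

    // Lower the exponent to eNew (eNew <= exponent) without changing the
    // represented value: m * 2^e == (m * 2^(e-eNew)) * 2^eNew.
    // On ciphertexts this is Enc(m) ⊗ 2^(e-eNew), a plaintext-scalar multiply.
    ExponentNormalize decreaseExponentTo(int eNew) {
        return new ExponentNormalize(mantissa.shiftLeft(exponent - eNew), eNew);
    }

    public static void main(String[] args) {
        ExponentNormalize a = new ExponentNormalize(BigInteger.valueOf(5), 3); // 5 * 2^3 = 40
        ExponentNormalize b = a.decreaseExponentTo(0);                          // same value, exponent 0
        System.out.println(b.mantissa + " * 2^" + b.exponent); // 40 * 2^0
    }
}
```

Applying this with $e_n$ set to the smallest exponent of any pixel makes every exponent in the image identical, so the transferred exponents carry no per-pixel information.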
If this exponent-decrease operation is applied to all image values on the server before transferring the image to the client, the exponent should not contain any important information during the transfer, because all exponents then contain the same value. However, if there is concern that even this might contain something useful, it is possible to encrypt this exponent with the public key, because the client that has the secure key can decrypt it anyway. Since it is the same value for every number that is sent back to the client, this exponent needs to be sent and decrypted only once. \section{Conclusions} While the expressiveness of our renderings is far from what is possible with state-of-the-art algorithms for non-encrypted data, we have presented a highly parallelizable direct volume rendering approach that allows not only the outsourcing of the storage of the volume data, but also the outsourcing of the whole rendering pipeline, without compromising the privacy of the data. The approach we propose does not leak any voxel values or any information computed from a voxel value after the volume encryption. Since we encrypt every single bit of voxel data with Paillier's cryptosystem, which is provably semantically secure (see \cite{pdf:Paillier, inbook:homoEncApps:2014:paillier}), it is rather obvious that with our approach, the confidentiality of the volume data (densities, shapes, structures, ...) and the colors of the rendered image depends only on the privacy of the secure key. If we trust all devices that have seen the volume data before encryption (e.g., the MRI/CT scanner and the computer that performs the encryption) to safely delete the data after encryption, only the owner of the secure key is able to obtain any useful information from the encrypted volume or rendered images. This is a significant advantage compared to all previous works to date. This security naturally comes with associated costs.
The storage and computation overheads are between four and five orders of magnitude compared to plaintext data. Compared to our prototype, an optimized implementation of our approach could reduce the computation time by an order of magnitude. While we hope that further improvements of our approach will lead to rendering results with better expressiveness, it will be a non-trivial task, because the security aspect needs to be considered for even the slightest change. Many of the ideas we considered in the algorithmic design eventually led to a leak of sensitive information, which is, in our opinion, intolerable, no matter how small it may be. \added{Future work definitely needs to improve the rendering performance. We see that the performance can be tremendously accelerated, as ray-casting is an \emph{embarrassingly parallel} workload. For practical utilization of our privacy-preserving volume rendering, an efficient GPU-based implementation would be necessary. A single server full of GPUs should be able to provide five orders of magnitude more computational power than a single CPU core can. Based on the measured performance of our non-optimized, single-threaded implementation, such a server could achieve interactive frame rates for datasets that are small enough to fit into the memory of the graphics cards. Therefore, as a next step, we see porting the rendering onto GPUs, where the necessary technological piece will be the design of efficient big-integer arithmetic. Another possible improvement within the scope of Paillier HE will be the visual quality of compositing. This can be done with gradient-magnitude opacity modulation, where the gradient magnitude is pre-calculated and encrypted along with the data values. Such a representation can already lead to a substantial visual quality improvement, although it will still not reach the outcome of compositing using Porter and Duff's over operator~\cite{Porter1984}.
For the Paillier HE scheme, we do not see a way to implement over-operator compositing, as it requires a multiplication of encrypted numbers. To support alpha blending, new research should be oriented toward investigating other homomorphic encryption schemes, or a combination of schemes that, unlike Paillier, would support the desired secure alpha blending functionality.} \acknowledgments{ The authors wish to thank Michal Hojsík for his fruitful discussions on cryptography. The authors would like to thank Michael Cusack from Publication Services at KAUST for proofreading. The research was supported by King Abdullah University of Science and Technology (KAUST) under award number BAS/1/1680-01-01.} \bibliographystyle{abbrv-doi-hyperref}
\section{Introduction} Solar prominences or filaments are cool and dense plasmas embedded in the million-Kelvin corona \citep{mac10}. The plasmas originate from the direct injection of chromospheric materials into a preexisting filament channel, levitation of chromospheric mass into the corona, or condensation of hot plasmas from the chromospheric evaporation due to the thermal instability \citep{xia11,xia12,rony14,zhou14}. Prominences are generally believed to be supported by the magnetic tension force of the dips in sheared arcades \citep{guo10b,ter15} or twisted magnetic flux ropes \citep[MFRs;][]{su12,sun12a,zhang12a,cx12,cx14a,xia14a,xia14b}. They can keep stable for several weeks or even months, but may get unstable after being disturbed. Large-amplitude and long-term filament oscillations before eruption have been observed by spaceborne telescopes \citep{chen08,li12,zhang12b,bi14,shen14} and reproduced by numerical simulations \citep{zhang13}, which makes filament oscillation another precursor for coronal mass ejections \citep[CMEs;][]{chen11} and the accompanying flares. When the twist of a flux rope supporting a filament exceeds the threshold value (2.5$\pi$$-$3.5$\pi$), it will also become unstable and erupt due to the ideal kink instability \citep[KI;][]{hood81,kli04,tor04,tor10,fan05,sri10,asch11,kur12}. However, whether the eruption of the kink-unstable flux rope becomes failed or ejective depends on how fast the overlying magnetic field declines with height \citep{tor05,liu08a,kur10}. When the decay rate of the background field exceeds a critical value, the flux rope will lose equilibrium and erupt via the so-called torus instability \citep[TI;][]{kli06,jiang14,ama14}. On the other hand, if the confinement from the background field is strong enough, the filament will decelerate to reach the maximum height before falling back to the solar surface, which means the eruption is failed \citep{ji03,liu09,guo10a,kur11,song14,jos13,jos14}. 
In addition to the successful and failed eruptions, there are partial filament eruptions \citep{gil07,liu07}. After examining 54 H$\alpha$ prominence activities, \citet{gil00} found that a majority of the eruptive prominences show separation of escaping material from the bulk of the prominence; the latter initially lifted away from and then fell back to the solar surface. To explain the partial filament eruptions, the authors proposed a cartoon model in which magnetic reconnection occurs inside an inverse-polarity flux rope, leading to the separation of the escaping portion of the prominence and the formation of a second X-type neutral line in the upper portion of the prominence. The inner splitting and subsequent partial prominence eruption was also observed by \citet{shen12}. \citet{gil01} interpreted an active prominence with the process of vertical reconnection between an inverse-polarity flux rope and an underlying magnetic arcade. \citet{liu08b} reported a partial filament eruption characterised by a quasi-static, slow phase and a rapid kinking phase showing a bifurcation of the filament. The separation of the filament, the extreme-ultraviolet (EUV) brightening at the separation location, and the surviving sigmoidal structure provide convincing evidence that magnetic reconnection occurs within the body of the filament \citep{tri13}. \citet{gib06a,gib06b} carried out three-dimensional (3D) numerical simulations to model the partial expulsion of a MFR. After multiple reconnections at current sheets that form during the eruption, the rope breaks into an upper, escaping rope and a lower, surviving rope. The ``partially-expelled flux rope'' (PEFR) model has been justified observationally \citep{tri09}. \citet{tri06} observed a distinct coronal downflow following a curved path at a speed of $<$150 km s$^{-1}$ during a CME-associated prominence eruption.
Their observation provides support for the pinching off of the field lines drawn out by the erupting prominences and the contraction of the arcade formed by the reconnection. A similar multithermal downflow, at a speed of $\sim$380 km s$^{-1}$, starting at the cusp-shaped structures where magnetic reconnection occurred inside the erupting flux rope and led to its bifurcation, was reported by \citet{tri07}. \citet{liu12} studied a flare-associated partial eruption of a double-decker filament. \citet{cx14b} found that a stable double-decker MFR system existed for hours prior to the eruption on 2012 July 12. After entering the domain of instability, the high-lying MFR impulsively erupted to generate a fast CME and \textit{GOES} X1.4 class flare, while the low-lying MFR remained behind and continuously maintained the sigmoidicity of the active region (AR). From the previous literature, we can conclude that magnetic reconnection and the release of free energy are involved in most of the partial filament eruptions. However, the exact mechanism of partial eruptions, which is of great importance to understanding the origin of solar eruptions and forecasting space weather, remains unclear and controversial. In this paper, we report multiwavelength observations of a partial filament eruption and the associated CME and M6.7 flare in NOAA AR 11283 on 2011 September 8. The AR emerged from the eastern solar limb on 2011 August 30 and lasted for 14 days. Owing to its extreme complexity, it produced a couple of giant flares and CMEs during its lifetime \citep{feng13,dai13,jiang14,liu14,li14,ruan14}.
In Section~\ref{s-data}, we describe the data analysis using observations from the Big Bear Solar Observatory (BBSO), \textit{SOHO}, the \textit{Solar Dynamics Observatory} (\textit{SDO}), the \textit{Solar Terrestrial Relations Observatory} \citep[\textit{STEREO};][]{kai05}, \textit{GOES}, the \textit{Reuven Ramaty High Energy Solar Spectroscopic Imager} \citep[\textit{RHESSI};][]{lin02}, and \textit{WIND}. Results and discussions are presented in Section~\ref{s-result} and Section~\ref{s-disc}. Finally, we draw our conclusions in Section~\ref{s-sum}. \section{Instruments and data analysis} \label{s-data} \subsection{BBSO and \textit{SOHO} observations} \label{s-ha} On September 8, the dark filament residing in the AR was most clearly observed at the H$\alpha$ line center ($\sim$6563 {\AA}) by the ground-based telescope in BBSO. During 15:30$-$16:30 UT, the filament rose and split into two parts. The major part lifted away and returned to the solar surface, while the runaway part separated from the major part and escaped, resulting in a very faint CME recorded in the \textit{SOHO} Large Angle Spectroscopic Coronagraph \citep[LASCO;][]{bru95} CME catalog\footnote{http://cdaw.gsfc.nasa.gov/CME\_list/}. The white light (WL) images observed by LASCO/C2, with a field of view (FOV) of 2$-$6 solar radii ($R_{\sun}$), were calibrated using the \textit{c2\_calibrate.pro} routine in the \textit{Solar Software} (\textit{SSW}) package. \subsection{\textit{SDO} observations} \label{s-euv} The partial filament eruption was clearly observed by the Atmospheric Imaging Assembly \citep[AIA;][]{lem12} aboard \textit{SDO} with high cadence and resolution. AIA has seven EUV filters (94, 131, 171, 193, 211, 304, and 335 {\AA}) and two UV filters (1600 {\AA} and 1700 {\AA}) to achieve a wide temperature coverage ($4.5\le \log T \le7.5$). The AIA level-1 FITS data were calibrated using the standard program \textit{aia\_prep.pro}.
The images observed in different wavelengths were coaligned carefully using the cross-correlation method. To investigate the 3D magnetic configurations before and after the eruption, we employed the line-of-sight (LOS) and vector magnetograms from the Helioseismic and Magnetic Imager \citep[HMI;][]{sch12} aboard \textit{SDO}. The 180$^{\circ}$ ambiguity of the transverse field was removed by assuming that the field changes smoothly at the photosphere \citep{guo13}. We also performed magnetic potential field and nonlinear force-free field (NLFFF) extrapolations using the optimization method proposed by \citet{wht00} and implemented by \citet{wig04}. The FOV for the extrapolation was 558$\farcs$5$\times$466$\farcs$2 to cover the whole AR and ensure that the magnetic flux was balanced, and the data were binned by 2$\times$2 so that the resolution became 2$\arcsec$. \subsection{\textit{STEREO} and \textit{WIND} observations} The eruption was also captured from different perspectives by the Extreme-Ultraviolet Imager (EUVI) and the COR1\footnote{http://cor1.gsfc.nasa.gov/catalog/cme/2011/} coronagraph of the Sun Earth Connection Coronal and Heliospheric Investigation \citep[SECCHI;][]{how08} instrument aboard the ahead (\textit{STA} hereafter) and behind (\textit{STB} hereafter) satellites of \textit{STEREO}. COR1 has a smaller FOV (1.3$-$4.0 $R_{\sun}$) than LASCO/C2, which is favorable for detecting the early propagation of CMEs. On September 8, the twin satellites (\textit{STA} and \textit{STB}) had separation angles of 103$^{\circ}$ and 95$^{\circ}$ with respect to the Earth. The presence of open magnetic field lines within the AR was confirmed indirectly by the evidence of a type \Rmnum{3} burst in the radio dynamic spectra. The spectra were obtained by the S/WAVES instrument \citep{bou08} on board \textit{STEREO} and the WAVES instrument \citep{bou95} on board the \textit{WIND} spacecraft. The frequency of S/WAVES ranges from 2.5 kHz to 16.025 MHz.
The WAVES instrument has two radio detectors: RAD1 (0.02$-$1.04 MHz) and RAD2 (1.075$-$13.825 MHz). \subsection{\textit{GOES} and \textit{RHESSI} observations} \label{s-xray} The accompanying M6.7 flare was clearly identified in the \textit{GOES} soft X-ray (SXR) light curves in 0.5$-$4.0 {\AA} and 1$-$8 {\AA}. To figure out where the accelerated nonthermal particles precipitated, we also made hard X-ray (HXR) images and light curves at different energy bands (3$-$6, 6$-$12, 12$-$25, 25$-$50, and 50$-$100 keV) using the observations of \textit{RHESSI}. The HXR images were generated using the CLEAN method with an integration time of 10 s. The observing parameters are summarized in Table~\ref{tbl-1}. \section{Results} \label{s-result} Figure~\ref{fig1} shows eight snapshots of the H$\alpha$ images to illustrate the whole evolution of the filament (see also the online movie Animation1.mpg). Figure~\ref{fig1}(a) displays the H$\alpha$ image at 15:30:54 UT before the eruption. It is overlaid with the contours of the LOS magnetic field, where green (blue) lines stand for positive (negative) polarities. The dark filament, which is $\sim$39 Mm long, resides along the polarity inversion line (PIL). The top panels of Figure~\ref{fig2} show the top view of the 3D magnetic configuration above the AR before and after the eruption, with the LOS magnetograms located at the bottom boundary. Using the same method described in \citet{zhang12c}, we found a magnetic null point and the corresponding spine and separatrix surface. The normal magnetic field lines are denoted by green lines. The magnetic field lines around the outer/inner spine and the separatrix surface (or arcade) are represented by red/blue lines. Beneath the null point, the sheared arcades supporting the filament are represented by orange lines. The spine is rooted in the positive polarity (P1) that is surrounded by the negative polarities (N1 and PB).
It extends in the northeast direction and connects the null point with a remote place on the solar surface. Such a magnetic configuration is quite similar to those reported by \citet{sun12b}, \citet{jiang13}, and \citet{man14}. As time went on, the filament rose and expanded slowly (Figure~\ref{fig1}(b)). The initiation process is clearly revealed by the AIA 304 {\AA} observation (see the online movie Animation2.mpg). Figure~\ref{fig3} shows eight snapshots of the 304 {\AA} images. Initial brightenings (IB1, IB2, and IB3) appeared near the ends and center of the sigmoidal filament, implying that magnetic reconnection took place and the filament became unstable (Figure~\ref{fig3}(b)-(d)). Such initial brightenings were evident in all the EUV wavelengths. As the intensities of the brightenings increased, the dark filament rose and expanded slowly, squeezing the overlying arcade field lines. Null-point magnetic reconnection might have been triggered when the filament reached the initial height of the null point ($\sim$15 Mm), leading to impulsive brightenings in H$\alpha$ (Figure~\ref{fig1}(c)-(d)) and EUV (Figure~\ref{fig3}(e)-(h)) wavelengths and increases in the SXR and HXR fluxes (Figure~\ref{fig4}). The M6.7 flare entered the impulsive phase. The bright and compact flare kernel indicated by the white arrow in Figure~\ref{fig1}(c) extended first westward and then northward, forming a quasi-circular ribbon at $\sim$15:42 UT (Figure~\ref{fig1}(d)), with the intensity contours of the HXR emissions at 12$-$25 keV superposed. There was only one HXR source associated with the flare, and the source was located along the flare ribbon with the strongest H$\alpha$ emission, which is compatible with the fact that the footpoint HXR emissions come from the nonthermal bremsstrahlung of the accelerated high-energy electrons after they penetrate into the chromosphere.
The flare manifested itself not only around the filament but also at the point-like brightening (PB hereafter) and the V-shape ribbon to the left of the quasi-circular ribbon. Since the separatrix surface intersects the photosphere at PB to the north and the outer spine intersects the photosphere to the east (Figure~\ref{fig2}(a)), it is believed that nonthermal electrons accelerated by the null-point magnetic reconnection penetrated into the lower atmosphere not only at the quasi-circular ribbon, but also at PB and the V-shape ribbon. Figure~\ref{fig4} shows the SXR (black solid and dashed lines) and HXR (colored solid lines) light curves of the flare. The SXR fluxes started to rise rapidly at $\sim$15:32 UT and peaked at 15:45:53 UT for 1$-$8 {\AA} and 15:44:21 UT for 0.5$-$4.0 {\AA}. The HXR fluxes below 25 keV varied smoothly like the SXR fluxes, except for earlier peak times at $\sim$15:43:10 UT. The HXR fluxes above 25 keV, however, experienced two small peaks, at $\sim$15:38:36 UT and $\sim$15:41:24 UT, that imply precursor release of magnetic energy and particle acceleration, and a major peak at $\sim$15:43:10 UT. The time delay between the SXR and HXR peak times implies a possible Neupert effect for this event \citep{ning10}. The main phase of the flare lasted until $\sim$17:00 UT, indicating that the flare was a long-duration event. During the flare, the filament continued to rise and split into two branches at the eastern leg around 15:46 UT (Figure~\ref{fig1}(e)), the right branch being thicker and darker than the left one. Such a process is most clearly revealed by the AIA 335 {\AA} observation (see the online movie Animation3.mpg). Figure~\ref{fig5} displays eight snapshots of the 335 {\AA} images. It is seen that the dark filament broadened from $\sim$15:42:30 UT and completely split into two branches around 15:45:51 UT. We define the left and right branches as the runaway part and the major part of the filament.
The two intertwined parts also underwent rotation (panels (d)-(h)). Meanwhile, the plasma of the runaway part moved in the northwest direction and escaped. To illustrate the rotation, we derived the time-slice diagrams of the two slices (S4 and S5 in panel (f)) that are plotted in Figure~\ref{fig6}. The upper (lower) panels represent the diagrams of S4 (S5), and the left (right) panels represent the diagrams for 211 {\AA} (335 {\AA}). $s=0$ in the diagrams stands for the southwest endpoints of the slices. The filament began to split into two parts around 15:42:30 UT, with the runaway part rotating around the eastern leg of the major part for $\sim$1 turn until $\sim$15:55 UT. During the eruption, the runaway branch of the filament disappeared (Figure~\ref{fig1}(f)). The major part, however, fell back to the solar surface after reaching its maximum height around 15:51 UT, suggesting that the eruption of the major part of the filament failed. The remaining filament after the flare was evident in the H$\alpha$ image (Figure~\ref{fig1}(h)). NLFFF modelling shows that the magnetic topology was analogous to that before the flare, with the height of the null point slightly increased by $\sim$0.4 Mm (Figure~\ref{fig2}(b)). Figure~\ref{fig7} shows six snapshots of the 171 {\AA} images. The rising and expanding filament triggered the M-class flare and the kink-mode oscillation of the adjacent large-scale coronal loops within the same AR (see the online movie Animation4.mpg). As the filament increased in height, part of its material was ejected in the northwest direction along the slice labelled ``S1'' in panel (c). After reaching its maximum height at $\sim$15:51:12 UT, the major part of the filament returned to the solar surface. The bright cusp-like post-flare loops (PFLs) in the main phase of the flare are clearly observed in all the EUV filters (see also Figure~\ref{fig7}(f)). To illustrate the eruption and loop oscillation more clearly, we extracted four slices.
The first slice, S0 in Figure~\ref{fig1}(f) and Figure~\ref{fig7}(d), is 170 Mm in length. It starts from the flare site and passes through the apex of the major part of the filament. The time-slice diagram of S0 in H$\alpha$ is displayed in Figure~\ref{fig8}(a). The filament started to rise rapidly at $\sim$15:34:30 UT with a constant speed of $\sim$117 km s$^{-1}$. After reaching the peak height ($z_{max}$) of $\sim$115 Mm at $\sim$15:51 UT, it fell back to the solar surface in a bumpy way until $\sim$16:30 UT. Using a linear fitting, we derived the average falling speed ($\sim$22 km s$^{-1}$) of the filament in the H$\alpha$ wavelength. The time-slice diagrams of S0 in the UV and EUV passbands are presented in Figure~\ref{fig9}. We selected two relatively hot filters (335 {\AA} and 211 {\AA} in the top panels), two warm filters (171 {\AA} and 304 {\AA} in the middle panels), and two cool filters (1600 {\AA} and 1700 {\AA} in the bottom panels). Similar to the time-slice diagram in H$\alpha$ (Figure~\ref{fig8}(a)), the filament rose at apparent speeds of 92$-$151 km s$^{-1}$ before falling back in an oscillatory way at speeds of 34$-$46 km s$^{-1}$ during 15:51$-$16:10 UT and $\sim$71 km s$^{-1}$ during 16:18$-$16:30 UT (Figure~\ref{fig9}(a)-(d)). The falling speeds in the UV wavelengths are $\sim$78 km s$^{-1}$ during 15:51$-$16:10 UT (Figure~\ref{fig9}(e)-(f)). The times when the major part of the filament reached its maximum height in the UV and EUV passbands, $\sim$15:51 UT, are consistent with that in H$\alpha$. The later falling phase during 16:18$-$16:30 UT is most obvious in the warm filters. A similar oscillatory downflow of a surviving filament was also observed during a sympathetic filament eruption \citep{shen12}. Owing to the lower time cadence of BBSO compared with AIA, the escaping process of the runaway part of the filament in Figure~\ref{fig1}(e)-(f) was traced using the AIA observations.
We extracted another slice, S1, which is 177 Mm in length along the direction of ejection (Figure~\ref{fig7}(c)). $s=0$ Mm and $s=177$ Mm represent the southeast and northwest endpoints of the slice. The time-slice diagram of S1 in 171 {\AA} is displayed in Figure~\ref{fig8}(b). Contrary to the major part, the runaway part of the filament escaped successfully from the corona at speeds of 125$-$255 km s$^{-1}$ without returning to the solar surface. The intermittent runaway process during 15:45$-$16:05 UT was clearly observed in most of the EUV filters. We extracted another slice, S2, which also starts from the flare site and passes through both parts of the filament (Figure~\ref{fig7}(d)). The time-slice diagram of S2 in 171 {\AA} is drawn in Figure~\ref{fig8}(c). As expected, the diagram features the bifurcation of the filament as indicated by the white arrow, i.e., the runaway part escaped for good, while the major part moved on after the bifurcation and finally fell back. The eruption of the filament triggered a transverse kink oscillation of the adjacent coronal loops (OL in Figure~\ref{fig7}(a)). The direction of oscillation is perpendicular to the initial loop plane (see the online movie Animation4.mpg). We extracted another slice, S3, which is 80 Mm in length across the oscillating loops (Figure~\ref{fig7}(b)). $s=0$ Mm and $s=80$ Mm represent the northwest and southeast endpoints of the slice. The time-slice diagram of S3 in 171 {\AA} is shown in Figure~\ref{fig8}(d), where the oscillation pattern during 15:38$-$15:47 UT is clearly demonstrated. The OL moved away from the flare site during 15:38$-$15:41 UT before returning to the initial position and oscillating back and forth for $\sim$2 cycles. By fitting the pattern with a sinusoidal function, as marked by the white dashed line, we found that the amplitude and period of the kink oscillation were $\sim$1.6 Mm and $\sim$225 s.
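The sinusoidal fitting described above can be sketched as follows. The data below are synthetic, not the actual S3 time-slice measurements; only the amplitude ($\sim$1.6 Mm), period ($\sim$225 s), and the 12 s AIA cadence are taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def kink_model(t, A, P, phi, y0):
    """Sinusoidal displacement model for the kink oscillation."""
    return A * np.sin(2 * np.pi * t / P + phi) + y0

# Synthetic loop-displacement samples mimicking ~2 cycles of oscillation
# (amplitude 1.6 Mm, period 225 s, sampled at the 12 s AIA cadence).
rng = np.random.default_rng(0)
t = np.arange(0.0, 450.0, 12.0)
y_obs = kink_model(t, 1.6, 225.0, 0.3, 0.0) + rng.normal(0.0, 0.05, t.size)

# Least-squares fit recovers the amplitude and period
popt, _ = curve_fit(kink_model, t, y_obs, p0=[1.0, 200.0, 0.0, 0.0])
A_fit, P_fit = popt[0], popt[1]
```

In practice the displacement samples would be read off the S3 time-slice diagram rather than generated synthetically.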
We also extracted several slices across the OL and derived the time-slice diagrams, finding that the coronal loops oscillated in phase and that the mode was fundamental. The initial velocity amplitude of the oscillation was $\sim$44.7 km s$^{-1}$. The phase speed of the kink mode is $C_K=2L/P=\sqrt{2/(1+\rho_o/\rho_i)}\,v_A$, where $L$ is the loop length, $P$ is the period, $v_A$ is the Alfv\'{e}n speed, and $\rho_i$ and $\rho_o$ are the plasma densities inside and outside the loop \citep{nak99,nak01,whi12}. In Figure~\ref{fig7}(a), we denote the footpoints of the OL with black crosses that are separated by 106.1 Mm. Assuming a semi-circular shape, the length of the loop is $L=166.7$ Mm and $C_{K}=1482$ km s$^{-1}$. Using the commonly adopted value $\rho_o/\rho_{i}=0.1$, we derived $v_{A}=1100$ km s$^{-1}$. In addition, we estimated the electron number density of the OL to be $\sim$2.5$\times10^{10}$ cm$^{-3}$ based on the results of the NLFFF extrapolation in Figure~\ref{fig2}(a). The kink-mode oscillation of the loops was best observed in 171 {\AA}, indicating that the temperature of the loops was $\sim$0.8 MK. The escaping part of the filament was also clearly observed by \textit{STA}/EUVI. Figure~\ref{fig10} shows six snapshots of the 304 {\AA} images, where the white arrows point to the escaping filament. During 15:46$-$16:30 UT, the material moved outwards in the northeast direction without returning to the solar surface. The bright M6.7 flare indicated by the black arrows is also quite clear. The runaway part of the filament resulted in a very faint CME observed by the WL coronagraphs. Figure~\ref{fig11}(a)-(d) show the running-difference images of \textit{STA}/COR1 during 16:00$-$16:15 UT. As indicated by the arrows, the CME first appeared in the FOV of \textit{STA}/COR1 at $\sim$16:00 UT and propagated outwards at a nearly constant speed, with the contrast between the CME and the background decreasing as time went on.
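The coronal seismology estimate above follows directly from the quoted footpoint separation, period, and density ratio; a minimal numerical check of the arithmetic (all inputs are values stated in the text):

```python
import math

d = 106.1     # footpoint separation of the OL, Mm
P = 225.0     # kink-mode period, s
ratio = 0.1   # adopted external/internal density ratio rho_o / rho_i

L = math.pi * d / 2.0                        # semi-circular loop length, Mm
C_K = 2.0 * L * 1e3 / P                      # kink speed C_K = 2L/P, km/s
v_A = C_K / math.sqrt(2.0 / (1.0 + ratio))   # Alfven speed, km/s
```

This reproduces $L\approx166.7$ Mm, $C_K\approx1482$ km s$^{-1}$, and $v_A\approx1100$ km s$^{-1}$.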
The propagation direction of the CME is consistent with that of the runaway filament in Figure~\ref{fig10}. Figure~\ref{fig11}(e)-(f) show the running-difference images of LASCO/C2 during 16:36$-$16:48 UT. The faint blob-like CME first appeared in the FOV of C2 at $\sim$16:36 UT and propagated in the same direction as the escaping filament observed by AIA in Figure~\ref{fig7}(c). The central position angle and angular width of the CME observed by C2 are 311$^{\circ}$ and 37$^{\circ}$, respectively. The linear velocity of the CME is $\sim$214 km s$^{-1}$. The time-height profiles of the runaway filament observed by \textit{STA}/EUVI (\textit{boxes}) and the corresponding CME observed by \textit{STA}/COR1 (\textit{diamonds}) and LASCO/C2 (\textit{stars}) are displayed in Figure~\ref{fig12}. The apparent propagation velocities represented by the slopes of the lines are 60, 358, and 214 km s$^{-1}$, respectively. Taking the projection effect into account, the start times of the filament eruption and of the CME observed by LASCO/C2 and \textit{STA}/COR1 from the lower corona ($\approx1.0 R_{\sun}$) are approximately coincident with each other. In the CDAW catalog, the preceding and succeeding CMEs occurred at 06:12 UT and 18:36 UT on September 8. In the COR1 CME catalog, the preceding and succeeding CMEs occurred slightly earlier, at 05:45 UT and 18:05 UT on the same day, owing to the smaller FOV of COR1 compared with LASCO/C2. Therefore, the runaway part of the filament was uniquely associated with the CME during 16:00$-$18:00 UT. This raises a question: how could the runaway part of the filament successfully escape from the corona and give rise to a CME? We speculate that open magnetic field lines provided a channel. To test this speculation, we turn to the large-scale magnetic field calculated by the potential field source surface \citep[PFSS;][]{sch69,sch03} modelling and the radio dynamic spectra from the S/WAVES and WAVES instruments.
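The apparent velocities above are the slopes of linear fits to the time-height profiles. A schematic sketch with made-up sample points (not the measured EUVI/COR1/C2 heights), assuming a constant $\sim$214 km s$^{-1}$ front:

```python
import numpy as np

R_SUN_KM = 6.957e5  # solar radius in km

# Hypothetical time-height samples (s, solar radii) mimicking a CME front
# moving at a constant ~214 km/s; not the actual catalog measurements.
t = np.array([0.0, 720.0, 1440.0, 2160.0])   # time since first detection, s
h = 2.0 + 214.0 * t / R_SUN_KM               # height, R_sun

# Slope of the linear fit, converted back to km/s
slope_rsun_per_s, _ = np.polyfit(t, h, 1)
v_kms = slope_rsun_per_s * R_SUN_KM
```

With real data the scatter of the height measurements would propagate into an uncertainty on the fitted velocity.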
In Figure~\ref{fig13}, we show the magnetic field lines whose footpoints are located in AR 11283 at 12:04 UT, before the onset of the flare/CME event. The open and closed field lines are represented by the purple and white lines, respectively. It is clear that open field lines do exist in the AR, and their configuration accords with the directions of the escaping part of the filament observed by AIA and of the CME observed by C2. The radio dynamic spectra from S/WAVES and WAVES are displayed in panels (a)$-$(b) and (c)$-$(d) of Figure~\ref{fig14}, respectively. There are clear signatures of a type \Rmnum{3} radio burst in the spectra. For \textit{STA}, the burst started at $\sim$15:38:30 UT and ended at $\sim$16:00 UT, during which the frequency drifted rapidly from 16 MHz to $\sim$0.3 MHz. For \textit{STB}, which was $\sim$0.07 AU farther from the Sun than \textit{STA}, the burst started $\sim$2 minutes later, with the frequency drifting from $\sim$4.1 MHz to $\sim$0.3 MHz, since the early propagation of the filament was occulted by the Sun. For WAVES, the burst started at $\sim$15:39:30 UT and ended at $\sim$16:00 UT, with the frequency drifting from 13.8 MHz to $\sim$0.03 MHz. The starting times of the radio burst were consistent with the HXR peak times of the flare. Since the type \Rmnum{3} radio emissions result from the cyclotron maser instability of the nonthermal electron beams that are accelerated and ejected into the interplanetary space along open magnetic field lines during the flare \citep{tang13}, the type \Rmnum{3} radio burst observed by \textit{STEREO} and \textit{WIND} provides indirect and supplementary evidence that open magnetic field lines existed near the flare site. \section{Discussions} \label{s-disc} \subsection{How is the energy accumulated?} \label{s-eng} It is widely accepted that solar eruptions result from the release of magnetic free energy.
For this event, we studied how the energy was accumulated by investigating the magnetic evolution of the AR using the HMI LOS magnetograms (see the online movie Animation5.mpg). Figure~\ref{fig15} displays four snapshots of the magnetograms, where the AR is dominated by negative polarity (N1). A preexisting positive polarity (P1) is located to the northeast. From the movie, we found continuous shearing motion along the highly fragmented and complex PIL between N1 and P1. For example, the small negative region N2 at the boundary of the sunspot was dragged westward and became elongated (Figure~\ref{fig15}(b)-(d)). To better illustrate the motion, we derived the transverse velocity field ($v_x$, $v_y$) at the photosphere using the differential affine velocity estimator (DAVE) method \citep{sch05}. The cadence of the HMI LOS magnetograms was lowered from 45 s to 180 s. Figure~\ref{fig16} displays six snapshots of the magnetograms overlaid with the transverse velocity field represented by the white arrows. The velocity field is clearly characterized by the shearing motions along the PIL. The regions within the green and blue elliptical lines are dominated by eastward and westward motions at speeds of $\sim$1.5 km s$^{-1}$. From the online movie (Animation6.mpg), we can see that the continuous shearing motions were evident before the flare, implying that magnetic free energy and helicity were accumulated and stored before the impulsive release. \subsection{How is the eruption triggered?} \label{s-tri} Once the free energy of the AR accumulates to a critical value, the filament constrained by the overlying magnetic field lines may undergo an eruption. Several types of triggering mechanisms have been proposed.
One type involves magnetic reconnection and includes the flux emergence model \citep{chen00}, the catastrophic model \citep{lin00}, the tether-cutting model \citep{moo01,chen14}, and the breakout model \citep{ant99}, to name a few. Another type comprises ideal magnetohydrodynamic (MHD) processes resulting from the kink instability (KI) \citep{kli04} and/or the torus instability (TI) \citep{kli06}. From Figure~\ref{fig15} and the movie (Animation5.mpg), we can see that before the flare there was continuous magnetic flux emergence (P2, P3, and P4) and subsequent magnetic cancellation along the fragmented PIL. We extracted a large region within the white dashed box of Figure~\ref{fig15}(d) and calculated the total positive ($\Phi_{P}$) and negative ($\Phi_{N}$) magnetic fluxes within the box. In Figure~\ref{fig17}, the temporal evolution of the fluxes during 11:00$-$16:30 UT is plotted, with the evolution of $\Phi_{P}$ divided into five phases (I$-$V) separated by the dotted lines. The first four phases, before the onset of the flare at 15:32 UT, are characterized by quasi-periodic, small-amplitude magnetic flux emergence and cancellation, implying that the large-scale magnetic field was undergoing rearrangement before the flare. The intensity contours of the 304 {\AA} images in Figure~\ref{fig3}(b) and (d) are overlaid on the magnetograms in Figure~\ref{fig15}(b) and (c), respectively. It is clear that the initial brightenings IB1 and IB2 are very close to the small positive polarities P4 and P3. There is no significant magnetic flux emergence around IB3. In the emerging-flux-induced-eruption model \citep{chen00}, when a reconnection-favorable magnetic bipole emerges from beneath the photosphere into the filament channel, it reconnects with the preexisting magnetic field lines that compress the inverse-polarity MFR. The small-scale magnetic reconnection and flux cancellation serve as a precursor for the upcoming filament eruption and flare.
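Computing $\Phi_{P}$ and $\Phi_{N}$ from an LOS magnetogram amounts to summing the signed field over the cutout times the pixel area. A minimal sketch on a synthetic array; the only observational number assumed is the approximate HMI pixel scale ($0\farcs5\approx0.366$ Mm):

```python
import numpy as np

def signed_fluxes(blos_gauss, pixel_mm=0.366):
    """Total positive and negative LOS magnetic fluxes (Mx) in a cutout.

    blos_gauss : 2D array of LOS field strengths in Gauss
    pixel_mm   : pixel size in Mm (HMI: ~0.5 arcsec ~ 0.366 Mm, assumed)
    """
    area_cm2 = (pixel_mm * 1e8) ** 2                      # pixel area, cm^2
    phi_p = blos_gauss[blos_gauss > 0].sum() * area_cm2   # positive flux
    phi_n = blos_gauss[blos_gauss < 0].sum() * area_cm2   # negative flux
    return phi_p, phi_n

# Synthetic balanced bipole: +100 G over 4 pixels, -100 G over 4 pixels
b = np.zeros((4, 4))
b[:2, :2] = 100.0
b[2:, 2:] = -100.0
phi_p, phi_n = signed_fluxes(b)
```

Tracking `phi_p` and `phi_n` frame by frame over the dashed box yields flux-evolution curves of the kind shown in Figure 17.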
During the flare, when magnetic reconnection occurred between 15:32 UT and 16:10 UT, both the positive and negative magnetic fields experienced impulsive and irreversible changes. Although flux emergence plausibly interprets the triggering mechanism, there is another possibility. In the tether-cutting model \citep{moo01}, a pair of $J$-shaped sheared arcades that comprise a sigmoid reconnect when the two elbows come into contact, forming a short loop and a long MFR. Whether the MFR experiences a failed or ejective eruption depends on the strength of the compression from the large-scale background field. The initial brightenings (IB1, IB2, and IB3) around the sigmoidal filament might be precursor brightenings resulting from internal tether-cutting reconnection due to the continuous shearing motion along the PIL. After onset, the whole flux system erupted and produced the M-class flare. Considering that the magnetic configuration could not be modelled during the flare, we are not sure whether a coherent MFR was formed after the initiation \citep{chen14}. Compared with flux emergence, the internal tether-cutting reconnection seems more plausible as the trigger of the filament eruption for the following reasons. Firstly, the filament was supported by sheared arcades. Secondly, there were continuous shearing motions along the PIL, and their directions were favorable for tether-cutting reconnection. Finally, the initial brightenings (IB1, IB2, and IB3) around the filament in Figure~\ref{fig3} fairly match the internal tether-cutting reconnection, with multiple bright patches of flare emission in the chromosphere at the feet of the reconnected field lines, while there was no flux emergence around IB3. NLFFF modelling shows that the twist number ($\sim$1) of the sheared arcades supporting the filament is less than the threshold value ($\sim$1.5), implying that the filament eruption may not have been triggered by the ideal KI.
The photospheric magnetic field of the AR features a bipole (P1 and N1) and a couple of mini-polarities (e.g., P2, P3, P4, and N2). Therefore, the filament eruption could not be explained by the breakout model, which requires a quadrupolar magnetic field, although null-point magnetic reconnection took place above the filament during the eruption. After the onset of the eruption, the filament split into two parts as described in Section~\ref{s-result}. How the filament split is still unclear. In the previous literature, magnetic reconnection is involved in the split in most cases \citep{gil01,gib06a,liu08b}. In this study, the split occurred during the impulsive phase of the flare at the eastern leg, which was closer to the flare site than the western one, implying that the split was associated with the release of magnetic energy. The subsequent rotation or unwinding motion implies the release of the magnetic helicity stored in the filament before the flare, presumably due to the shearing motion in the photosphere. Nevertheless, it is still elusive whether the filament existed as a whole or was composed of two intertwined parts before splitting. The way of splitting seems difficult to explain with any of the previous models and requires in-depth investigation. Though the runaway part escaped out of the corona, the major part failed. It returned to the solar surface after reaching the apex. Such failed eruptions have been frequently observed and explained by the strapping effect of the overlying arcade \citep{ji03,guo10a,song14,jos14} or the asymmetry of the background magnetic fields with respect to the location of the filament \citep{liu09}. In order to figure out the cause of the failed eruption of the major part, we turn to the large-scale magnetic configurations displayed in the bottom panels of Figure~\ref{fig2}.
It is revealed that the overlying magnetic arcades above AR 11283 are asymmetric to a great extent, i.e., the magnetic field to the west of the AR is much stronger than that to the east, which is similar to the case of \citet{liu09}. According to the analysis of \citet{liu09}, the confinement of the large-scale arcade acting on the filament is strong enough to prevent it from escaping. We also performed a magnetic potential-field extrapolation using the same boundary and derived the distribution of $|\mathbf{B}|$ above the PIL. It is found that the maximum height of the major part considerably exceeds the critical height ($\sim$80$\arcsec$) of TI, where the decay index ($-d\ln|\mathbf{B}|/d\ln z$) of the background potential field reaches $\sim$1.5. The major part would have escaped from the corona successfully after entering the instability domain if TI had worked. Therefore, the asymmetry with respect to the filament location, rather than the TI of the overlying arcades, seems reasonable and convincing in interpreting why the major part of the filament underwent a failed eruption. In this study, both successful and failed eruptions occurred in a partially eruptive event, which provides more constraints on the theoretical models of solar eruptions. \subsection{How is the coronal loop oscillation triggered?} \label{s-loop} Since the first discovery of coronal loop oscillations during flares \citep{asch99,nak99}, such oscillations have been found to be ubiquitous and useful for the diagnostics of the coronal magnetic field \citep{guo15}. Owing to the complex interconnections of the magnetic field lines, blast waves and/or EUV waves induced by a filament eruption may disturb the adjacent coronal loops in the same AR or remote loops in another AR, resulting in transverse kink-mode oscillations. \citet{nis13} observed decaying and decayless transverse oscillations of a coronal loop on 2012 May 30.
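The decay index $n(z)=-d\ln|\mathbf{B}|/d\ln z$ can be evaluated numerically from any model field profile. A sketch with an illustrative analytic field $B(z)=B_0\,[z_0/(z+z_0)]^2$ (chosen for simplicity, not the extrapolated AR field), for which $n=2z/(z+z_0)$ reaches the TI threshold of 1.5 at exactly $z=3z_0$:

```python
import numpy as np

def decay_index(z, B):
    """n(z) = -d ln|B| / d ln z, evaluated numerically on a height grid."""
    return -np.gradient(np.log(B), np.log(z))

z0 = 20.0                        # arbitrary scale height, Mm (illustrative)
z = np.linspace(1.0, 200.0, 4000)
B = (z0 / (z + z0)) ** 2         # toy field profile with n = 2z/(z + z0)

n = decay_index(z, B)
# Height where the decay index first reaches the TI threshold of 1.5
z_crit = z[np.argmax(n >= 1.5)]
```

In the actual analysis, `B` would be $|\mathbf{B}|$ of the extrapolated potential field sampled above the PIL, and `z_crit` corresponds to the quoted critical height of $\sim$80$\arcsec$.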
The loops experience small-amplitude decayless oscillations, which are driven by an external non-resonant harmonic driver before and after the flare \citep{mur14}. The flare, as an impulsive driver, triggers large-amplitude decaying loop oscillations. In our study, the decayless loop oscillation with moderate amplitude ($\sim$1.6 Mm) occurred during the flare and lasted for only two cycles, which makes it quite difficult to measure the decay timescale precisely, if the oscillation was indeed decaying. The loop may cool down and become invisible in 171 {\AA} while oscillating. Considering that the distance between the flare and the OL is $\sim$50 Mm and the time delay between the flare onset and the loop oscillation is $\sim$6 minutes, the speed of propagation of the disturbances from the flare to the OL is estimated to be $\sim$140 km s$^{-1}$, which is close to the local sound speed of plasma with a temperature of $\sim$0.8 MK. Hence, we suppose that the coronal loop oscillation was triggered by the external disturbances resulting from the rising and expanding motions of the filament. \subsection{Significance for space weather prediction} \label{s-swp} Flares and CMEs play a very important role in the generation of space weather. Accurate prediction of space weather is of great significance. Successful eruptions have been extensively observed and deeply investigated. Partial filament eruptions that produce flares and CMEs, however, are rarely detected and poorly explored. For the type of partial eruption in this study, i.e., one part undergoes a failed eruption while the other part escapes out of the corona, it would be misleading and confusing to assess and predict the space weather effects based on information from the solar surface alone, since the escaping part may carry or produce solar energetic particles that have potential geoeffectiveness. Complete observations are necessary for accurate predictions.
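The propagation-speed estimate and its comparison with the sound speed can be checked directly. The 50 Mm distance, 6 minute delay, and 0.8 MK temperature are the values quoted above; the constants are standard ($\gamma=5/3$ and a mean molecular weight $\mu\approx0.6$ for coronal plasma):

```python
import math

# Disturbance travel speed from the flare site to the oscillating loops
distance_km = 50e3              # ~50 Mm
delay_s = 6 * 60.0              # ~6 minutes
v_prop = distance_km / delay_s  # km/s

# Local sound speed c_s = sqrt(gamma * k_B * T / (mu * m_p)) at T ~ 0.8 MK
k_B = 1.380649e-23              # Boltzmann constant, J/K
m_p = 1.67262e-27               # proton mass, kg
gamma, mu, T = 5.0 / 3.0, 0.6, 0.8e6
c_s = math.sqrt(gamma * k_B * T / (mu * m_p)) / 1e3   # km/s
```

Both numbers come out near 140 km s$^{-1}$, consistent with the interpretation in the text.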
\section{Summary} \label{s-sum} Using multiwavelength observations from both spaceborne and ground-based telescopes, we studied in detail a partial filament eruption event in AR 11283 on 2011 September 8. The main results are summarized as follows: \begin{enumerate} \item{A magnetic null point was found above the preexisting positive polarity surrounded by negative polarities in the AR. A spine passed through the null and intersected the photosphere to the left. A weakly twisted sheared arcade supporting the filament was located under the null point, whose height increased slightly, by $\sim$0.4 Mm, after the eruption.} \item{The filament rose and expanded, which was probably triggered by internal tether-cutting reconnection or by continuous magnetic flux emergence and cancellation along the highly complex and fragmented PIL, the former of which seems more convincing. During its eruption, it triggered the null-point magnetic reconnection and the M6.7 flare with a single HXR source at different energy bands. The flare produced a quasi-circular ribbon and a V-shape ribbon where the outer spine intersects the photosphere.} \item{During the expansion, the filament split into two parts at the eastern leg, which is closer to the flare site. The major part of the filament rose at speeds of 90$-$150 km s$^{-1}$ before reaching the maximum apparent height of $\sim$115 Mm. Afterwards, it fell back to the solar surface intermittently at speeds of 20$-$80 km s$^{-1}$. The rising and falling motions of the filament were clearly observed in the UV, EUV, and H$\alpha$ wavelengths. 
The failed eruption of the major part was most probably caused by the asymmetry of the overlying magnetic arcades with respect to the filament location.} \item{The runaway part, however, separated from and rotated around the major part for $\sim$1 turn before escaping outward from the corona at the speeds of 125$-$255 km s$^{-1}$, probably along the large-scale open magnetic field lines as evidenced by the PFSS modelling and the type \Rmnum{3} radio burst. The ejected part of the filament led to a faint CME. The angular width and apparent speed of the CME in the FOV of C2 are 37$^{\circ}$ and 214 km s$^{-1}$. The propagation directions of the escaping filament observed by SDO/AIA and \textit{STA}/EUVI are consistent with those of the CME observed by LASCO/C2 and \textit{STA}/COR1, respectively.} \item{The partial filament eruption also triggered transverse oscillation of the neighbouring coronal loops in the same AR. The amplitude and period of the kink-mode oscillation were 1.6 Mm and 225 s. We also performed diagnostics of the plasma density and temperature of the oscillating loops.} \end{enumerate} \acknowledgements The authors thank the referee for valuable suggestions and comments to improve the quality of this article. We gratefully acknowledge Y. N. Su, P. F. Chen, J. Zhang, B. Kliem, R. Liu, S. Gibson, H. Gilbert, M. D. Ding, and H. N. Wang for inspiring and constructive discussions. \textit{SDO} is a mission of NASA\rq{}s Living With a Star Program. AIA and HMI data are courtesy of the NASA/\textit{SDO} science teams. \textit{STEREO}/SECCHI data are provided by a consortium of US, UK, Germany, Belgium, and France. QMZ is supported by Youth Fund of JiangSu BK20141043, by 973 program under grant 2011CB811402, and by NSFC 11303101, 11333009, 11173062, 11473071, and 11221063. H. Ji is supported by the Strategic Priority Research Program$-$The Emergence of Cosmological Structures of the Chinese Academy of Sciences, Grant No. XDB09000000. 
YG is supported by NSFC 11203014. Li Feng is supported by NSFC grants 11473070 and 11233008 and by grant BK2012889. Li Feng also thanks the Youth Innovation Promotion Association, CAS, for financial support.
\section*{Abstract} We show that the Backus (1962) equivalent-medium average, which is an average over a spatial variable, and the Gazis et al. (1963) effective-medium average, which is an average over a symmetry group, do not commute, in general. They commute in special cases, which we exemplify. \section{Introduction} Hookean solids are defined by their mechanical property relating linearly the stress tensor,~$\sigma$\,, and the strain tensor,~$\varepsilon$\,, \begin{equation*} \sigma_{ij}=\sum_{k=1}^3\sum_{\ell=1}^3c_{ijk\ell}\varepsilon_{k\ell}\,,\qquad i,j=1,2,3 \,. \end{equation*} The elasticity tensor,~$c$\,, belongs to one of eight material-symmetry classes shown in Figure~\ref{fig:orderrelation}. \begin{figure} \begin{center} \includegraphics[scale=0.7]{FigPartialOrder.pdf} \end{center} \caption{\small{Order relation of material-symmetry classes of elasticity tensors: Arrows indicate subgroups in this partial ordering. For instance, monoclinic is a subgroup of all nontrivial symmetries, in particular, of both orthotropic and trigonal, but orthotropic is not a subgroup of trigonal or {\it vice-versa}.}} \label{fig:orderrelation} \end{figure} The Backus (1962) moving average allows us to quantify the response of a wave propagating through a series of parallel layers whose thicknesses are much smaller than the wavelength. Each layer is a Hookean solid exhibiting a given material symmetry with given elasticity parameters. The average is a Hookean solid whose elasticity parameters---and, hence, its material symmetry---allow us to model a long-wavelength response. This material symmetry of the resulting medium, to which we refer as {\sl equivalent}, is a consequence of symmetries exhibited by the averaged layers. 
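As a concrete check of the stress--strain relation above, the following sketch evaluates $\sigma_{ij}=\sum_{k,\ell}c_{ijk\ell}\varepsilon_{k\ell}$ for an isotropic elasticity tensor built from Lam\'e parameters (the parameter values and the isotropic choice are ours, for illustration only):

```python
import numpy as np

lam, mu = 2.0, 1.0        # hypothetical Lamé parameters (isotropic solid)
d = np.eye(3)             # Kronecker delta
# isotropic tensor: c_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)
c = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

eps = np.array([[1e-3, 2e-4, 0.0],
                [2e-4, -5e-4, 0.0],
                [0.0,  0.0,  3e-4]])        # a symmetric strain tensor

sigma = np.einsum('ijkl,kl->ij', c, eps)    # Hooke's law, summed over k, l
# isotropic closed form: sigma = lam*tr(eps)*I + 2*mu*eps
assert np.allclose(sigma, lam * np.trace(eps) * d + 2 * mu * eps)
```

The double contraction reproduces the familiar isotropic closed form, confirming the index bookkeeping.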
The long-wave-equivalent medium to a stack of isotropic or transversely isotropic layers with thicknesses much less than the signal wavelength was shown by Backus (1962) to be a homogeneous or nearly homogeneous transversely isotropic medium, where a {\it nearly\/} homogeneous medium is a consequence of a {\it moving\/} average. Backus (1962) formulation is reviewed by Slawinski (2016) and Bos et al.\ (2016), where formulations for generally anisotropic, monoclinic, and orthotropic thin layers are also derived. Bos et al.\ (2016) examine the underlying assumptions and approximations behind the Backus (1962) formulation, which is derived by expressing rapidly varying stresses and strains in terms of products of algebraic combinations of rapidly varying elasticity parameters with slowly varying stresses and strains. The only mathematical approximation in the formulation is that the average of a product of a rapidly varying function and a slowly varying function is approximately equal to the product of the averages of the two functions. According to Backus (1962), the average of $f(x_3)$ of ``width''~$\ell'$ is \begin{equation} \label{eq:BackusOne} \overline f(x_3):=\int\limits_{-\infty}^\infty w(\zeta-x_3)f(\zeta)\,{\rm d}\zeta \,, \end{equation} where $w(x_3)$ is the weight function with the following properties: \begin{equation*} w(x_3)\geqslant0\,, \quad w(\pm\infty)=0\,, \quad \int\limits_{-\infty}^\infty w(x_3)\,{\rm d}x_3=1\,, \quad \int\limits_{-\infty}^\infty x_3w(x_3)\,{\rm d}x_3=0\,, \quad \int\limits_{-\infty}^\infty x_3^2w(x_3)\,{\rm d}x_3=(\ell')^2\,. \end{equation*} These properties define $w(x_3)$ as a probability-density function with mean~$0$ and standard deviation~$\ell'$\,, explaining the use of the term ``width'' for $\ell'$\,. Gazis et al.~(1963) average allows us to obtain the closest symmetric counterpart---in the Frobenius sense---of a chosen material symmetry to a generally anisotropic Hookean solid. 
The average is a Hookean solid, to which we refer as {\sl effective}, whose elasticity parameters correspond to the symmetry chosen {\it a priori}. Gazis average is a projection given by \begin{equation} \widetilde c^{\,\,\rm sym}:=\intop_{G^{\rm sym}}(g\circ c)\,\mathrm{d}\mu(g) \,, \label{eq:proj} \end{equation} where the integration is over the symmetry group, $G^{\rm sym}$\,, whose elements are $g$\,, with respect to the invariant measure, $\mu$\,, normalized so that $\mu(G^{\rm sym})=1$\,; $\widetilde c^{\,\,\rm sym}$ is the orthogonal projection of $c$\,, in the sense of the Frobenius norm, on the linear space containing all tensors of that symmetry, which are $ c^{\,\,\rm sym}$\,. Integral~(\ref{eq:proj}) reduces to a finite sum for the classes whose symmetry groups are finite, which are all classes except isotropy and transverse isotropy. The Gazis et al.\ (1963) approach is reviewed and extended by Danek et al.\ (2013, 2015) in the context of random errors. Therein, elasticity tensors are not constrained to the same---or even different but known---orientation of the coordinate system. Concluding this introduction, let us emphasize that the fundamental distinction between the two averages is their domain of operation. The Gazis et al.\ (1963) average is an average over symmetry groups at a point and the Backus (1962) average is a spatial average over a distance. Both averages can be used, separately or together, in quantitative seismology. Hence, an examination of their commutativity might provide us with an insight into their physical meaning and into allowable mathematical operations. \section{Generally anisotropic layers and monoclinic medium} Let us consider a stack of generally anisotropic layers to obtain a monoclinic medium. 
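As a numerical sketch of projection~(\ref{eq:proj}) in the monoclinic case (our construction, with a random Kelvin-notation tensor): the two-element monoclinic group acts by the $6\times6$ matrices $\tilde{A}_1^{\rm mono}$ (identity) and $\tilde{A}_2^{\rm mono}$ (the image of ${\rm diag}(-1,-1,1)$), and averaging over them zeroes exactly the entries that vanish for a monoclinic tensor in its natural coordinate system:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((6, 6))
C = (C + C.T) / 2                 # a generic anisotropic tensor in Kelvin notation

A1 = np.eye(6)                                   # image of the identity
A2 = np.diag([1.0, 1.0, 1.0, -1.0, -1.0, 1.0])   # image of diag(-1,-1,1)
C_mono = (A1 @ C @ A1.T + A2 @ C @ A2.T) / 2     # two-element group average

# entries with exactly one index among rows/columns 4,5 (0-indexed 3,4) must vanish
zeroed = [(i, j) for i in range(6) for j in range(6)
          if (i in (3, 4)) != (j in (3, 4))]
assert all(abs(C_mono[i, j]) < 1e-15 for i, j in zeroed)
# every other entry is inherited unchanged (cf. Corollary col:Mono)
kept = [(i, j) for i in range(6) for j in range(6) if (i, j) not in zeroed]
assert all(np.isclose(C_mono[i, j], C[i, j]) for i, j in kept)
```

This is the content of Lemma~\ref{lem:Mono}: the projection simply keeps the monoclinic pattern of entries and discards the rest.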
To examine the commutativity between the Backus and Gazis averages, let us study the following diagram, \begin{equation} \label{eq:CD2} \begin{CD} \rm{aniso}@>\rm{B}>>\rm{aniso}\\ @V\mathrm{G}VV @VV\rm{G}V\\ \rm{mono}@>>\rm{B}>\rm{mono} \end{CD} \end{equation} and Proposition~\ref{thm:One}, below, \begin{prop} \label{thm:One} In general, the Backus and Gazis averages do not commute. \end{prop} \begin{proof} To prove this proposition and in view of Diagram~\ref{eq:CD2}, let us begin with the following corollary. \begin{corollary} For the generally anisotropic and monoclinic symmetries, the Backus and Gazis averages do not commute. \end{corollary} \noindent To understand this corollary, we invoke the following lemma, whose proof is in \ref{AppOne1}. \begin{lemma} \label{lem:Mono} For the effective monoclinic symmetry, the result of the Gazis average is tantamount to replacing each $c_{ijk\ell}$\,, in a generally anisotropic tensor, by its corresponding $c_{ijk\ell}$ of the monoclinic tensor, expressed in the natural coordinate system, including replacements of the anisotropic-tensor components by the zeros of the corresponding monoclinic components. \end{lemma} \noindent Let us first examine the counterclockwise path of Diagram~\ref{eq:CD2}. Lemma~\ref{lem:Mono} entails a corollary. \begin{corollary} \label{col:Mono} For the effective monoclinic symmetry, given a generally anisotropic tensor,~$C$\,, \begin{equation} \label{eq:GazisMono} \widetilde{C}^{\,\rm mono}=C^{\,\rm mono} \,; \end{equation} where $\widetilde{C}^{\,\rm mono}$ is the Gazis average of~$C$\,, and $C^{\,\rm mono}$ is a monoclinic tensor whose nonzero entries are the same as for~$C$\,. \end{corollary} \noindent According to Corollary~\ref{col:Mono}, the effective monoclinic tensor is obtained simply by setting to zero---in the generally anisotropic tensor---the components that are zero for the monoclinic tensor. Then, the second counterclockwise branch of Diagram~\ref{eq:CD2} is performed as follows. 
Applying the Backus average, we obtain (Bos et al., 2015) \begin{equation*} \langle c_{3333}\rangle=\overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1}\,, \qquad \langle c_{2323}\rangle=\frac{\overline{\left(\frac{c_{2323}}{D}\right)}}{2D_2}\,, \end{equation*} \begin{equation*} \langle c_{1313}\rangle=\frac{\overline{\left(\frac{c_{1313}}{D}\right)}}{2D_2}\,, \qquad \langle c_{2313}\rangle=\frac{\overline{\left(\frac{c_{2313}}{D}\right)}}{2D_2}\,, \end{equation*} where $D\equiv 2(c_{2323}c_{1313}-c_{2313}^2)$ and $D_2\equiv (\overline{c_{1313}/D})(\overline{c_{2323}/D})-(\overline{c_{2313}/D})^2$\,. We also obtain \begin{equation*} \langle c_{1133}\rangle= \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \overline{\left(\frac{c_{1133}}{c_{3333}}\right)}\,, \quad \langle c_{2233}\rangle= \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \overline{\left(\frac{c_{2233}}{c_{3333}}\right)}\,, \quad \langle c_{3312}\rangle= \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \overline{\left(\frac{c_{3312}}{c_{3333}}\right)}\,, \end{equation*} \begin{equation*} \langle c_{1111}\rangle= \overline{c_{1111}}-\overline{\left(\frac{c_{1133}^2}{c_{3333}}\right)}+ \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \overline{\left(\frac{c_{1133}}{c_{3333}}\right)}^{\,2}\,, \end{equation*} \begin{equation*} \langle c_{1122}\rangle= \overline{c_{1122}}-\overline{\left(\frac{c_{1133}\,c_{2233}}{c_{3333}}\right)}+ \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \overline{\left(\frac{c_{1133}}{c_{3333}}\right)}\,\, \overline{\left(\frac{c_{2233}}{c_{3333}}\right)}\,, \end{equation*} \begin{equation*} \langle c_{2222}\rangle= \overline{c_{2222}}-\overline{\left(\frac{c_{2233}^2}{c_{3333}}\right)}+ \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \overline{\left(\frac{c_{2233}}{c_{3333}}\right)}^{\,2}\,, \end{equation*} \begin{equation*} \langle c_{1212}\rangle= \overline{c_{1212}}-\overline{\left(\frac{c_{3312}^2}{c_{3333}}\right)}+ 
\overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \overline{\left(\frac{c_{3312}}{c_{3333}}\right)}^{\,2}\,, \end{equation*} \begin{equation*} \langle c_{1112}\rangle= \overline{c_{1112}}-\overline{\left(\frac{c_{3312}\,c_{1133}}{c_{3333}}\right)}+ \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \overline{\left(\frac{c_{1133}}{c_{3333}}\right)}\,\, \overline{\left(\frac{c_{3312}}{c_{3333}}\right)} \end{equation*} and \begin{equation*} \langle c_{2212}\rangle= \overline{c_{2212}}-\overline{\left(\frac{c_{3312}\,c_{2233}}{c_{3333}}\right)}+ \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \overline{\left(\frac{c_{2233}}{c_{3333}}\right)}\,\, \overline{\left(\frac{c_{3312}}{c_{3333}}\right)}\,, \end{equation*} where angle brackets denote the equivalent-medium elasticity parameters. The other equivalent-medium elasticity parameters are zero. Following the clockwise path of Diagram~\ref{eq:CD2}, the upper branch is derived in matrix form in Bos et al.\ (2015). Then, from Bos et al. (2015) the result of the right-hand branch is derived by setting entries in the generally anisotropic tensor that are zero for the monoclinic tensor to zero. The nonzero entries, which are too complicated to display explicitly, are---in general---not the same as the result of the counterclockwise path. Hence, for generally anisotropic and monoclinic symmetries, the Backus and Gazis averages do not commute. \end{proof} \section{Higher symmetries} \subsection{Monoclinic layers and orthotropic medium} \label{sec:mono} Proposition~\ref{thm:One} remains valid for layers exhibiting higher material symmetries, and simpler expressions of the corresponding elasticity tensors allow us to examine special cases that result in commutativity. Let us consider the following corollary of Proposition~\ref{thm:One}. \begin{corollary} \label{thm:Two} For the monoclinic and orthotropic symmetries, the Backus and Gazis averages do not commute. 
\end{corollary} \noindent To study this corollary, let us consider the following diagram, \begin{equation} \label{eq:CD} \begin{CD} \rm{mono}@>\rm{B}>>\rm{mono}\\ @V\mathrm{G}VV @VV\rm{G}V\\ \rm{ortho}@>>\rm{B}>\rm{ortho} \end{CD} \end{equation} and the lemma, whose proof is in \ref{AppOne2}. \begin{lemma} \label{lem:Ortho} For the effective orthotropic symmetry, the result of the Gazis average is tantamount to replacing each $c_{ijk\ell}$\,, in a generally anisotropic---or monoclinic---tensor, by its corresponding $c_{ijk\ell}$ of the orthotropic tensor, expressed in the natural coordinate system, including the replacements by the corresponding zeros. \end{lemma} \noindent Lemma~\ref{lem:Ortho} entails a corollary. \begin{corollary} \label{col:Ortho} For the effective orthotropic symmetry, given a generally anisotropic---or monoclinic---tensor,~$C$\,, \begin{equation} \label{eq:GazisOrtho} \widetilde{C}^{\,\rm ortho}=C^{\,\rm ortho} \,. \end{equation} where $\widetilde{C}^{\,\rm ortho}$ is the Gazis average of~$C$\,, and $C^{\,\rm ortho}$ is an orthotropic tensor whose nonzero entries are the same as for~$C$\,. \end{corollary} \noindent Let us consider a monoclinic tensor and proceed counterclockwise along the first branch of Diagram~\ref{eq:CD}. Using the fact that the monoclinic symmetry is a special case of general anisotropy, we invoke Corollary~\ref{col:Ortho} to conclude that $\widetilde{C}^{\,\rm ortho}=C^{\,\rm ortho}$\,, which is equivalent to setting $c_{1112}$\,, $c_{2212}$\,, $c_{3312}$ and $c_{2313}$ to zero in the monoclinic tensor. We perform the upper branch of Diagram~\ref{eq:CD}, which is the averaging of a stack of monoclinic layers to get a monoclinic equivalent medium, as in the case of the lower branch of Diagram~\ref{eq:CD2}. 
Thus, following the clockwise path, we obtain \begin{equation*} c_{1212}^\circlearrowright= \overline{c_{1212}}-\overline{\left(\frac{c_{3312}^2}{c_{3333}}\right)}+ \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \overline{\left(\frac{c_{3312}}{c_{3333}}\right)}^{\,2}\,, \end{equation*} \begin{equation*} c_{1313}^\circlearrowright=\overline{\left(\frac{c_{1313}}{D}\right)}/(2D_2)\,,\qquad c_{2323}^\circlearrowright=\overline{\left(\frac{c_{2323}}{D}\right)}/(2D_2)\,. \end{equation*} Following the counterclockwise path, we obtain \begin{equation*} c_{1212}^\circlearrowleft=\overline{c_{1212}}\,,\quad c_{1313}^\circlearrowleft=\overline{\left(\frac{1}{c_{1313}}\right)}^{\,\,-1}\,,\quad c_{2323}^\circlearrowleft=\overline{\left(\frac{1}{c_{2323}}\right)}^{\,\,-1}\,. \end{equation*} The other entries are the same for both paths. In conclusion, the results of the clockwise and counterclockwise paths are the same if $c_{2313}=c_{3312}=0$\,, which is a special case of monoclinic symmetry. Thus, the Backus average and Gazis average commute for that case, but not in general. \subsection{Orthotropic layers and tetragonal medium} \label{sec:ortho} In a manner analogous to Diagram~\ref{eq:CD}, but proceeding from the upper-left-hand-corner orthotropic tensor to the lower-right-hand-corner tetragonal tensor by the counterclockwise path, \begin{equation} \label{eq:CD3} \begin{CD} \rm{ortho}@>\rm{B}>>\rm{ortho}\\ @V\mathrm{G}VV @VV\rm{G}V\\ \rm{tetra}@>>\rm{B}>\rm{tetra} \end{CD} \end{equation} we obtain \begin{equation*} c_{1111}^\circlearrowleft=\overline{\frac{c_{1111}+c_{2222}}{2}- \frac{\left(\frac{c_{1133}+c_{2233}}{2}\right)^2}{c_{3333}}}+ \overline{\left(\frac{c_{1133}+c_{2233}}{2c_{3333}}\right)}^2 \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1} \,. 
\end{equation*} Following the clockwise path, we obtain \begin{equation*} c_{1111}^\circlearrowright=\overline{\frac{c_{1111}+c_{2222}}{2}- \frac{c_{1133}^2+c_{2233}^2}{2c_{3333}}}+ \frac{1}{2}\left[\overline{\left(\frac{c_{1133}}{c_{3333}}\right)}^2+ \overline{\left(\frac{c_{2233}}{c_{3333}}\right)}^2\right] \overline{\left(\frac{1}{c_{3333}}\right)}^{\,\,-1}\,. \end{equation*} These results are not equal to one another, unless $c_{1133}=c_{2233}$\,, which is a special case of orthotropic symmetry. Also $c_{2323}$ must equal $c_{1313}$ for $c_{2323}^\circlearrowright=c_{2323}^\circlearrowleft$. The other entries are the same for both paths. Thus, the Backus average and Gazis average do commute for $c_{1133}=c_{2233}$ and $c_{2323}=c_{1313}$\,, which is a special case of orthotropic symmetry, but not in general. Let us also consider the case of monoclinic layers and a tetragonal medium to examine the process of combining the Gazis averages, which is tantamount to combining Diagrams~(\ref{eq:CD}) and~(\ref{eq:CD3}), \begin{equation} \begin{CD} \label{eq:CD4} \rm{mono}@>\rm{B}>>\rm{mono}\\ @V\mathrm{G}VV @VV\rm{G}V\\ \rm{ortho}@>>\rm{B}>\rm{ortho}\\ @V\mathrm{G}VV @VV\rm{G}V\\ \rm{tetra}@>>\rm{B}>\rm{tetra} \end{CD} \end{equation} In accordance with Proposition~\ref{thm:One}, there is---in general---no commutativity. However, the outcomes are the same as for the corresponding steps in Sections~\ref{sec:mono} and \ref{sec:ortho}. In general, for the Gazis average, proceeding directly, $\rm{aniso}\xrightarrow{\rm{G}}\rm{iso}$\,, is tantamount to proceeding along arrows in Figure~\ref{fig:orderrelation}, $\rm{aniso}\xrightarrow{\rm{G}}\cdots\xrightarrow{\rm{G}}\rm{iso}$\,. No such combining of the Backus averages is possible, since, for each step, layers become a homogeneous medium. \subsection{Transversely isotropic layers} Lack of commutativity can also be exemplified by the case of transversely isotropic layers. 
Following the clockwise path of Diagram~\ref{eq:CD}, the Backus average results in a transversely isotropic medium, whose Gazis average---in accordance with Figure~\ref{fig:orderrelation}---is isotropic. Following the counterclockwise path, the Gazis average results in an isotropic medium, whose Backus average, however, is transversely isotropic. Thus, not only the elasticity parameters, but even the resulting material-symmetry classes differ. Also, we could---in a manner analogous to the one illustrated in Diagram~\ref{eq:CD4}\,---begin with generally anisotropic layers and obtain isotropy by the clockwise path and transverse isotropy by the counterclockwise path, which again illustrates noncommutativity. \section{Discussion} Herein, we assume that all tensors are expressed in the same orientation of their coordinate systems. Otherwise, the process of averaging becomes more complicated, as discussed---for the Gazis average---by Kochetov and Slawinski (2009a, 2009b) and as mentioned---for the Backus average---by Bos et al. (2016). Mathematically, the noncommutativity of two distinct averages is shown by Proposition~\ref{thm:One}, and exemplified for several material symmetries. We do not see a physical justification for the special cases in which---given the same orientation of coordinate systems---these averages commute. This behaviour might support the view that a mathematical realm, which allows for fruitful analogies with the physical world, has no causal connection with it. \section*{Acknowledgments} We wish to acknowledge discussions with Theodore Stanoev. This research was performed in the context of The Geomechanics Project supported by Husky Energy. Also, this research was partially supported by the Natural Sciences and Engineering Research Council of Canada, grant 238416-2013. 
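The special case found in Section~\ref{sec:mono} can be checked numerically. A minimal sketch (our construction, with random layer values), assuming equal-thickness layers so that the Backus bar reduces to an arithmetic mean over layers, compares $c_{1313}$ along the two paths of Diagram~\ref{eq:CD}:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                     # number of layers (equal thickness)
c2323 = rng.uniform(1.0, 2.0, n)
c1313 = rng.uniform(1.0, 2.0, n)

def c1313_two_paths(c2323, c1313, c2313):
    """Clockwise (Backus then Gazis) vs counterclockwise (Gazis then Backus)."""
    D = 2.0 * (c2323 * c1313 - c2313 ** 2)
    D2 = np.mean(c1313 / D) * np.mean(c2323 / D) - np.mean(c2313 / D) ** 2
    cw = np.mean(c1313 / D) / (2.0 * D2)  # Backus of monoclinic layers, then Gazis
    ccw = 1.0 / np.mean(1.0 / c1313)      # Gazis (c2313 -> 0), then Backus
    return cw, ccw

cw, ccw = c1313_two_paths(c2323, c1313, np.zeros(n))
assert np.isclose(cw, ccw)                # the averages commute when c2313 = 0

cw, ccw = c1313_two_paths(c2323, c1313, rng.uniform(0.2, 0.4, n))
assert not np.isclose(cw, ccw)            # and, in general, they do not
```

With $c_{2313}=0$ the clockwise expression collapses algebraically to the harmonic mean of $c_{1313}$, in agreement with the counterclockwise path.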
\section*{References} \frenchspacing \par\noindent\hangindent=0.4in\hangafter=1 Backus, G.E., Long-wave elastic anisotropy produced by horizontal layering, {\it J. Geophys. Res.\/}, {\bf 67}, 11, 4427--4440, 1962. \setlength{\parskip}{4pt} \par\noindent\hangindent=0.4in\hangafter=1 B\'{o}na, A., I. Bucataru and M.A. Slawinski, Space of $SO(3)$-orbits of elasticity tensors, {\it Archives of Mechanics\/}, {\bf 60}, 2, 121--136, 2008. \par\noindent\hangindent=0.4in\hangafter=1 Bos, L., D.R. Dalton, M.A. Slawinski and T. Stanoev, On Backus average for generally anisotropic layers, {\it arXiv\/}, 2016. \par\noindent\hangindent=0.4in\hangafter=1 Chapman, C. H., {\it Fundamentals of seismic wave propagation\/}, Cambridge University Press, 2004. \par\noindent\hangindent=0.4in\hangafter=1 Danek, T., M. Kochetov and M.A. Slawinski, Uncertainty analysis of effective elasticity tensors using quaternion-based global optimization and Monte-Carlo method, {\it The Quarterly Journal of Mechanics and Applied Mathematics\/}, {\bf 66}, 2, pp. 253--272, 2013. \par\noindent\hangindent=0.4in\hangafter=1 Danek, T., M. Kochetov and M.A. Slawinski, Effective elasticity tensors in the context of random errors, {\it Journal of Elasticity\/}, 2015. \par\noindent\hangindent=0.4in\hangafter=1 Gazis, D.C., I. Tadjbakhsh and R.A. Toupin, The elastic tensor of given symmetry nearest to an anisotropic elastic tensor, {\it Acta Crystallographica\/}, {\bf 16}, 9, 917--922, 1963. \par\noindent\hangindent=0.4in\hangafter=1 Kochetov, M. and M.A. Slawinski, On obtaining effective orthotropic elasticity tensors, {\it The Quarterly Journal of Mechanics and Applied Mathematics\/}, {\bf 62}, 2, pp. 149--166, 2009a. \par\noindent\hangindent=0.4in\hangafter=1 Kochetov, M. and M.A. Slawinski, On obtaining effective transversely isotropic elasticity tensors, {\it Journal of Elasticity\/}, {\bf 94}, 1--13, 2009b. 
\par\noindent\hangindent=0.4in\hangafter=1 Slawinski, M.A., {\it Wavefronts and rays in seismology: Answers to unasked questions\/}, World Scientific, 2016. \par\noindent\hangindent=0.4in\hangafter=1 Slawinski, M.A., {\it Waves and rays in elastic continua\/}, World Scientific, 2015. \par\noindent\hangindent=0.4in\hangafter=1 Thomson, W., {\it Mathematical and physical papers: Elasticity, heat, electromagnetism\/}, Cambridge University Press, 1890. \setcounter{section}{0} \setlength{\parskip}{0pt} \renewcommand{\thesection}{Appendix~\Alph{section}} \section{} \subsection{}\label{AppOne1} Let us prove Lemma~\ref{lem:Mono}. \begin{proof} For discrete symmetries, we can write integral~(\ref{eq:proj}) as a sum, \begin{equation} \label{eq:AverageDisc} \widetilde C^{\,\rm sym}=\frac{1}{n}\left(\tilde{A}_1^{\rm sym}\,C\,\tilde{A}_1^{\rm sym}\,{}^{^T}+\ldots+\tilde{A}_n^{\rm sym}\,C\,\tilde{A}_n^{\rm sym}\,{}^{^T}\right) \,, \end{equation} where $\widetilde C^{\rm sym}$ is expressed in Kelvin's notation, in view of Thomson (1890, p.~110) as discussed in Chapman (2004, Section~4.4.2). To write the elements of the monoclinic symmetry group as $6\times 6$ matrices, we must consider orthogonal transformations in $\mathbb{R}^3$\,. 
Transformation $A\in SO(3)$ of $c_{ijk\ell}$ corresponds to transformation of $C$ given by \begin{equation} {\footnotesize \tilde{A}=\left[\begin{array}{cccccc} A_{11}^{2} & A_{12}^{2} & A_{13}^{2} & \sqrt{2}A_{12}A_{13} & \sqrt{2}A_{11}A_{13} & \sqrt{2}A_{11}A_{12}\\ A_{21}^{2} & A_{22}^{2} & A_{23}^{2} & \sqrt{2}A_{22}A_{23} & \sqrt{2}A_{21}A_{23} & \sqrt{2}A_{21}A_{22}\\ A_{31}^{2} & A_{32}^{2} & A_{33}^{2} & \sqrt{2}A_{32}A_{33} & \sqrt{2}A_{31}A_{33} & \sqrt{2}A_{31}A_{32}\\ \sqrt{2}A_{21}A_{31} & \sqrt{2}A_{22}A_{32} & \sqrt{2}A_{23}A_{33} & A_{23}A_{32}+A_{22}A_{33} & A_{23}A_{31}+A_{21}A_{33} & A_{22}A_{31}+A_{21}A_{32}\\ \sqrt{2}A_{11}A_{31} & \sqrt{2}A_{12}A_{32} & \sqrt{2}A_{13}A_{33} & A_{13}A_{32}+A_{12}A_{33} & A_{13}A_{31}+A_{11}A_{33} & A_{12}A_{31}+A_{11}A_{32}\\ \sqrt{2}A_{11}A_{21} & \sqrt{2}A_{12}A_{22} & \sqrt{2}A_{13}A_{23} & A_{13}A_{22}+A_{12}A_{23} & A_{13}A_{21}+A_{11}A_{23} & A_{12}A_{21}+A_{11}A_{22}\end{array}\right]} \,, \label{eq:ATildeQ} \end{equation} which is an orthogonal matrix, $\tilde{A}\in SO(6)$ (Slawinski (2015), Section~5.2.5).\footnote{Readers interested in formulation of matrix~(\ref{eq:ATildeQ}) might refer to B\'ona et al. (2008).} The required symmetry-group elements are \begin{equation*} A_1^{\rm mono}= \left[ \begin{array}{ccc} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{array}\right] \mapsto \left[\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\end{array}\right] =\tilde{A}_1^{\rm mono} \end{equation*} \begin{equation*} A_2^{\rm mono}= \left[ \begin{array}{ccc} -1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 1\end{array}\right] \mapsto \left[\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & -1 & 0 & 0\\ 0 & 0 & 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\end{array}\right] =\tilde{A}_2^{\rm mono} \,. 
\end{equation*} For the monoclinic\index{material symmetry!monoclinic} case, expression~(\ref{eq:AverageDisc}) can be stated explicitly as \begin{equation*} \widetilde C^{\rm mono}= \frac{\left(\tilde{A}_1^{\rm mono}\right)\,C\,\left(\tilde{A}_1^{\rm mono}\right)^T+\left(\tilde{A}_2^{\rm mono}\right)\,C\,\left(\tilde{A}_2^{\rm mono}\right)^T}{2} \,. \end{equation*} Performing matrix operations, we obtain \begin{equation} \widetilde C^{\rm mono} =\left[\begin{array}{cccccc} c_{1111} & c_{1122} & c_{1133} & 0 & 0 & \sqrt{2}c_{1112}\\ c_{1122} & c_{2222} & c_{2233} & 0 & 0 & \sqrt{2}c_{2212}\\ c_{1133} & c_{2233} & c_{3333} & 0 & 0 & \sqrt{2}c_{3312}\\ 0 & 0 & 0 & 2c_{2323} & 2c_{2313} & 0\\ 0 & 0 & 0 & 2c_{2313} & 2c_{1313} & 0\\ \sqrt{2}c_{1112} & \sqrt{2}c_{2212} & \sqrt{2}c_{3312} & 0 & 0 & 2c_{1212} \end{array}\right] \,, \label{eq:MonoExplicitRef} \end{equation} which exhibits the form of the monoclinic tensor in its natural coordinate system. In other words, $\widetilde{C}^{\rm mono}=C^{\rm mono}$\,, in accordance with Corollary~\ref{col:Mono}. \end{proof} \subsection{}\label{AppOne2} Let us prove Lemma~\ref{lem:Ortho}. \begin{proof} For orthotropic symmetry, $\tilde{A}_1^{\rm ortho}=\tilde{A}_1^{\rm mono}$\,, $\tilde{A}_2^{\rm ortho}=\tilde{A}_2^{\rm mono}$\,, and \begin{equation*} A_3^{\rm ortho}= \left[ \begin{array}{ccc} -1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & -1\end{array}\right] \mapsto \left[\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & -1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & -1\end{array}\right] =\tilde{A}_3^{\rm ortho} \,, \end{equation*} \begin{equation*} A_4^{\rm ortho}= \left[ \begin{array}{ccc} 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & -1\end{array}\right] \mapsto \left[\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 0 & 0 & -1\end{array}\right] =\tilde{A}_4^{\rm ortho} \,. 
\end{equation*} For the orthotropic\index{material symmetry!orthotropic} case, expression~(\ref{eq:AverageDisc}) can be stated explicitly as {\footnotesize \begin{equation*} \widetilde C^{\rm ortho}= \frac{\left(\tilde{A}_1^{\rm ortho}\right)\,C\,\left(\tilde{A}_1^{\rm ortho}\right)^T+\left(\tilde{A}_2^{\rm ortho}\right)\,C\,\left(\tilde{A}_2^{\rm ortho}\right)^T +\left(\tilde{A}_3^{\rm ortho}\right)\,C\,\left(\tilde{A}_3^{\rm ortho}\right)^T+\left(\tilde{A}_4^{\rm ortho}\right)\,C\,\left(\tilde{A}_4^{\rm ortho}\right)^T } {4} \,. \end{equation*}} Performing matrix operations, we obtain \begin{equation} \widetilde C^{\rm ortho} =\left[\begin{array}{cccccc} c_{1111} & c_{1122} & c_{1133} & 0 & 0 & 0\\ c_{1122} & c_{2222} & c_{2233} & 0 & 0 & 0\\ c_{1133} & c_{2233} & c_{3333} & 0 & 0 &0\\ 0 & 0 & 0 & 2c_{2323} & 0 & 0\\ 0 & 0 & 0 & 0 & 2c_{1313} & 0\\ 0& 0 & 0 & 0 & 0 & 2c_{1212} \end{array}\right] \,, \label{eq:OrthoExplicitRef} \end{equation} which exhibits the form of the orthotropic tensor in its natural coordinate system. In other words, $\widetilde{C}^{\rm ortho}=C^{\rm ortho}$\,, in accordance with Corollary~\ref{col:Ortho}. \end{proof} \end{document}
\section{Model} We consider a collection of $N$ three-level V-type atoms located at the same position. We label the ground state as $\ket{1}$ and the two excited states as $\ket{2}$ and $\ket{3}$, and the transition frequency from level $j$ to level $i$ as $\omega_{ij}$. A weak drive field, resonantly tuned to $\omega_{21}$, prepares the atomic system in a timed-Dicke state. As the drive field is turned off, we detect the photons emitted from the cloud in the forward direction. In the experiment, the atomic cloud has a finite size, but for theoretical simplicity we can assume it to be a point-like ensemble whose atoms interact with each other through the vacuum field modes. This is because we are measuring the forward scattering, where any phases of the emitted photons due to the atomic position distribution are exactly compensated by the phases initially imprinted on the atoms by the drive field \cite{Scully_2006}. Additionally, the transitions $\ket{1}\leftrightarrow\ket{2}$ and $\ket{1}\leftrightarrow\ket{3}$ interact with the field effectively with the same phase, considering that the atomic cloud size is much smaller than $2\pi c/\omega_{23}$. We note that while the forward-scattered field is collectively enhanced, the decay rate of the atoms arising from interaction with the rest of the modes is not cooperative \cite{Bienaime_2011}. The atomic Hamiltonian $H_A$ and the vacuum field Hamiltonian $H_F$ are \eqn{\begin{split} H_A &= \sum_{m=1}^{N}\sum_{j=2,3} \hbar \omega_{j1} \hat{\sigma}_{m,j}^+ \hat{\sigma}_{m,j}^-,\\ H_F &= \sum_{k} \hbar \omega_{k} \hat{a}_{k}^{\dagger} \hat{a}_{k}, \label{eq:H-0} \end{split}} where $\hat{\sigma}_{m,j}^{\pm}$ is the raising/lowering operator acting on the $j^\mr{th}$ level of the $m^\mr{th}$ atom, $\hat{a}_k^{\dagger}$ and $\hat{a}_k$ are the field creation/annihilation operators of the corresponding frequency mode $\omega_{k}$, and $N$ refers to the effective number of atoms acting cooperatively in the forward direction. 
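The multi-atom operators entering $H_A$ can be represented explicitly with Kronecker products; a minimal sketch for $N=2$ atoms (the frequency values are placeholders), where $\hat{\sigma}_{m,j}^+\hat{\sigma}_{m,j}^-$ is the projector onto level $\ket{j}$ of atom $m$:

```python
import numpy as np

def sigma_minus(j):
    """Lowering operator |1><j| for one 3-level atom, basis (|1>, |2>, |3>)."""
    s = np.zeros((3, 3))
    s[0, j - 1] = 1.0
    return s

def embed(op, m, N):
    """Embed a single-atom operator on atom m (0-indexed) into the N-atom space."""
    out = np.array([[1.0]])
    for i in range(N):
        out = np.kron(out, op if i == m else np.eye(3))
    return out

N = 2
w21, w31 = 1.0, 1.2     # placeholder transition frequencies (units of hbar = 1)
H_A = sum(w * embed(sigma_minus(j).T @ sigma_minus(j), m, N)
          for m in range(N) for j, w in ((2, w21), (3, w31)))

assert H_A.shape == (9, 9)
assert np.allclose(H_A, np.diag(np.diag(H_A)))   # H_A is diagonal in the bare basis
```

The same `embed` helper (a name we introduce here) generalizes to the interaction terms, at the cost of exponential growth of the Hilbert space with $N$.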
First, we prepare the atomic system with a weak drive field. The atom-drive field interaction Hamiltonian is \eqn{H_{\text{AD}}=-\sum_{m=1}^N\sum_{j=2,3} \hbar \Omega_j^m \bkt{ \hat{\sigma}_{m,j}^+ e^{-i \omega_D t} + \hat{\sigma}_{m,j}^- e^{i \omega_D t} }.\label{eq:H-AD}} Here, $\omega_D$ is the drive frequency and $\Omega_j^m \equiv \vec{d}_{j1}^m\cdot\vec{\epsilon}_D\,E_D$ is the Rabi frequency of the $j^\mr{th}$ level, where $\vec{d}_{j1}^{m}$ is the dipole moment of the $\ket{j}\leftrightarrow\ket{1}$ transition of the $m^\mr{th}$ atom, $\vec{\epsilon}_D$ is the polarization unit vector of the drive field, and $E_D$ is the electric field of the drive. Given that the atomic ensemble is driven with a common field in our experiment, we will assume that the atomic dipoles are aligned with the drive and with each other. We can thus omit the atomic labels and write $\Omega_j$. The interaction Hamiltonian describing the atom-vacuum field interaction, under the rotating wave approximation, is given as \eqn{ H_{\text{AV}} = -\sum_{m=1}^N\sum_{j=2,3}\sum_{k} \hbar g_{m,j}(\omega_k) \bkt{ \hat{\sigma}_{m,j}^+\hat{a}_{k} + \hat{\sigma}_{m,j}^-\hat{a}_k^{\dagger}}. } Here, the atom-field coupling strength is $g_{m,j}(\omega_k) \equiv \vec{d}_{j1}^{m} \cdot \vec{\epsilon}_k\sqrt{\frac{\omega_k}{2\hbar\varepsilon_0 V}}$, where $\vec{\epsilon}_k$ is the polarization unit vector of the field mode, $\varepsilon_0$ is the vacuum permittivity, and $V$ is the field mode volume. As justified previously, the atomic dipoles are aligned with each other, and we write $g_{j}(\omega_k)$. Also, note that the sum over $k$ only refers to the forward-scattered modes. The spontaneous emission arising from the rest of the modes is considered separately later. 
\section{Driven dynamics} \begin{figure*}[t] \centering \includegraphics[width = 3.5 in]{fig_laser_extinction.eps} \caption{\textbf{(a)} The drive field intensity (red circles) at the turn-off edge, characterized as the truncated $\cos^4\bkt{\frac{\pi}{2}\frac{t-t_0}{\tau}}$ function (red solid line) bridging the on and off states of the intensity. Here, $t_0 = -4$ ns and the fall time $\tau=3.5$ ns are assumed. While the intensity of the drive field turns off mostly within $\approx$ 3.5 ns, an additional 0.5-ns waiting time is provided before the data analysis of the collective emission begins at $t=0$ as shown in Fig.\,\ref{fig_decay}\,(b), to further remove the residual drive intensity and the transient effect from our measurement.} \label{fig_laser_extinction} \end{figure*} We consider here the driven dynamics of the atoms. Moving to the rotating frame with respect to the drive frequency, and tracing out the vacuum field modes, we can write the following Born-Markov master equation for the atomic density matrix: \eqn{ \der{\hat{\rho}_A}{t} = -\frac{i}{\hbar} \sbkt{\widehat{H}_A + \widehat H_{AD}, \hat {\rho}_A} - \sum_{m,n = 1}^N\sum_{i,j = 2,3} \frac{\Gamma_{ij,mn}^{(D)}}{2} \sbkt{ \hat \rho_A \widehat{\sigma}_{m,i} ^+ \widehat{\sigma}_{n,j} ^- + \widehat{\sigma}_{m,i} ^+ \widehat{\sigma}_{n,j} ^- \hat \rho_A - 2\widehat{\sigma}_{n,j} ^- \hat \rho_A \widehat{\sigma}_{m,i} ^+ }, } where $\widehat H_A = - \sum_{m=1}^{N}\sum_{j=2,3} \hbar \Delta_{j} \widehat{\sigma}_{m,j}^+ \widehat{\sigma}_{m,j}^-$ is the free atomic Hamiltonian and $\widehat H_{AD} = -\sum_{m=1}^N\sum_{j=2,3} \hbar \Omega_j^m \bkt{ \widehat{\sigma}_{m,j}^+ + \widehat{\sigma}_{m,j}^- }$ is the atom-drive interaction Hamiltonian in the rotating frame, with $ \Delta_j \equiv \omega_{j1} - \omega_D$. 
The driven damping rates are defined as $ \Gamma_{ij,mn}^{(D) } \equiv \frac{\vec{d}^m_{i1} \cdot \vec{d}^n_{j1}\omega_{D}^3}{3\pi \varepsilon_0 \hbar c^3}$, with the indices $i,j$ referring to the atomic levels, and $m,n$ to different atoms. Using the above master equation, one can obtain the following optical Bloch equations for the case of a single atom: \begin{subequations} \eqn{\label{eq:optical-Bloch-eqa} \partial_t \rho_{33} &= i\Omega_3(\rho_{13}-\rho_{31}) - \Gamma_{33}^{(D)}\rho_{33} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{23} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{32} \\ \partial_t \rho_{22} &= i\Omega_2(\rho_{12}-\rho_{21}) - \Gamma_{22}^{(D)}\rho_{22}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{23} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{32}\\ \partial_t \rho_{11} &= -i\Omega_3(\rho_{13}-\rho_{31}) -i\Omega_2(\rho_{12}-\rho_{21}) + \Gamma^{(D)}_{33}\rho_{33}+ \Gamma^{(D)}_{22}\rho_{22} + \Gamma^{(D)}_{23}\bkt{ \rho_{23} + \rho_{32}} \\ \partial_t \rho_{31} &= -i \Omega_2 \rho_{32} - i \Omega_3(\rho_{33}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{33}}{2}-i\Delta_3}\rho_{31}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{21}\\ \partial_t \rho_{13} &= i \Omega_2 \rho_{23} + i \Omega_3(\rho_{33}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{33}}{2}+i\Delta_3}\rho_{13}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{12}\\ \partial_t \rho_{21} &= -i \Omega_3 \rho_{23} - i \Omega_2(\rho_{22}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{22}}{2}-i\Delta_2}\rho_{21} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{31}\\ \partial_t \rho_{12} &= i \Omega_3 \rho_{32} + i \Omega_2(\rho_{22}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{22}}{2}+i\Delta_2}\rho_{12}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{13}\\ \partial_t \rho_{32} &= -i \Omega_2 \rho_{31} + i\Omega_3\rho_{12}-\bkt{\frac{\Gamma^{(D)}_{22}+\Gamma^{(D)}_{33}}{2} -i\omega_{23}}\rho_{32} - \frac{\Gamma^{(D)}_{23}}{2} \bkt{\rho_{22} + \rho_{33}}\\ \partial_t \rho_{23} &= i \Omega_2 \rho_{13} - i\Omega_3\rho_{21}-\bkt{\frac{\Gamma^{(D)}_{22}+\Gamma^{(D)}_{33}}{2}+i\omega_{23}}\rho_{23} - 
\frac{\Gamma^{(D)}_{23}}{2} \bkt{\rho_{22} + \rho_{33}}, \label{eq:optical-Bloch-eqi}} \end{subequations} where we have defined the single-atom driven damping rate as $\Gamma_{ij}^{(D) }\equiv \frac{\vec{d}_{i1} \cdot \vec{d}_{j1}\omega_{D}^3}{3\pi \varepsilon_0 \hbar c^3}$. Numerically solving Eq.\,\eqref{eq:optical-Bloch-eqa}--Eq.\,\eqref{eq:optical-Bloch-eqi} along with the normalization condition $\rho_{33}+\rho_{22}+\rho_{11}=1$ gives us the steady-state density matrix $\rho_S$ for the atom. Substituting our experimental parameters, we get the populations $\rho_{S,33}\approx 0$, $\rho_{S,22}\approx 10^{-10}$, and $\rho_{S,11}\approx 1$. The absolute values of the coherences are $|\rho_{S,23}|\approx 0$, $|\rho_{S,21}|\approx 10^{-5}$, and $|\rho_{S,31}|\approx 0$. These estimates are made for $N \approx1 - 10$, assuming the collective driven damping rate to be $ \Gamma_{ij}^{(D)} (N) \approx (1 + Nf) \Gamma_{ij}^{(D) }$ with the phenomenological value $f=1$, and the collective Rabi frequency to be $\sqrt{N} \Omega_{j}$. Thus we can conclude that the atomic ensemble is well within the single-excitation regime in $\ket{2}$. The 3.5-ns time window of laser extinction has a broad spectral component and may excite extra population to $\ket{2}$ and $\ket{3}$. We numerically simulate the optical Bloch equations for this time window to find the density matrix after the laser turn-off. We model the laser turn-off shape as $\cos^4$ (see Fig. \ref{fig_laser_extinction}) and vary the Rabi frequency accordingly. Note that this calculation is for estimation purposes and may not convey the full dynamics in the laser extinction period. 
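As a sanity check on the steady-state estimates quoted above, the single-atom optical Bloch equations can be integrated directly. The sketch below uses illustrative parameter values for $\Omega_j$, $\Gamma_{ij}^{(D)}$, and $\Delta_j$ (hypothetical, not the experimental ones), chosen so that the drive is weak and the excited levels are spectroscopically well separated:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (all hypothetical, in 1/ns and rad/ns).
O2, O3 = 1e-3, 1e-3        # Rabi frequencies Omega_2, Omega_3 (weak drive)
G22, G33 = 0.04, 0.04      # driven damping rates Gamma_22^(D), Gamma_33^(D)
G23 = np.sqrt(G22 * G33)   # cross-damping Gamma_23^(D) for aligned dipoles
D2, D3 = 0.0, 6.0          # detunings Delta_2 (resonant), Delta_3
w23 = D3 - D2              # excited-state splitting omega_23

def bloch(t, y):
    # y = [rho33, rho22, rho31, rho13, rho21, rho12, rho32, rho23]
    r33, r22, r31, r13, r21, r12, r32, r23 = y
    r11 = 1.0 - r22 - r33  # normalization rho11 + rho22 + rho33 = 1
    return [
        1j*O3*(r13 - r31) - G33*r33 - 0.5*G23*(r23 + r32),
        1j*O2*(r12 - r21) - G22*r22 - 0.5*G23*(r23 + r32),
        -1j*O2*r32 - 1j*O3*(r33 - r11) - (0.5*G33 - 1j*D3)*r31 - 0.5*G23*r21,
         1j*O2*r23 + 1j*O3*(r33 - r11) - (0.5*G33 + 1j*D3)*r13 - 0.5*G23*r12,
        -1j*O3*r23 - 1j*O2*(r22 - r11) - (0.5*G22 - 1j*D2)*r21 - 0.5*G23*r31,
         1j*O3*r32 + 1j*O2*(r22 - r11) - (0.5*G22 + 1j*D2)*r12 - 0.5*G23*r13,
        -1j*O2*r31 + 1j*O3*r12 - (0.5*(G22 + G33) - 1j*w23)*r32 - 0.5*G23*(r22 + r33),
         1j*O2*r13 - 1j*O3*r21 - (0.5*(G22 + G33) + 1j*w23)*r23 - 0.5*G23*(r22 + r33),
    ]

y0 = np.zeros(8, dtype=complex)  # atom starts in the ground state |1>
sol = solve_ivp(bloch, (0.0, 500.0), y0, rtol=1e-8, atol=1e-10)
rho33, rho22 = sol.y[0, -1].real, sol.y[1, -1].real
```

With a weak resonant drive the steady-state excited populations come out small, consistent with the single-excitation-regime conclusion drawn above.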
Within the numerical precision limit, which is set by the evolution time step ($10^{-5}$ ns) multiplied by $\Gamma_{ij} \approx 0.01$ GHz, we obtain the following density matrix values after the turn-off: $\rho_{33}\approx 0$, $\rho_{22}\approx 0$, $\rho_{11}\approx 1$, $\rho_{23}\approx 0$, $\rho_{12}\approx 10^{-5}$, and $\rho_{13}\approx 10^{-7}-10^{-6}$. Thus the laser turn-off edge does not produce any significant excitation in $\ket{3}$. \section{Quantum beat dynamics} As the drive field is turned off, the system evolves under the atom-vacuum field interaction Hamiltonian. Moving to the interaction picture with respect to $H_A+H_F$, the interaction Hamiltonian becomes \eqn{ \tilde{H}_{\text{AV}} = -\sum_{m=1}^N\sum_{j=2,3}\sum_{k} \hbar g_{j}(\omega_k) \bkt{ \hat{\sigma}_{m,j}^+\hat{a}_{k}e^{i(\omega_{j1}-\omega_{k})t } + \hat{\sigma}_{m,j}^-\hat{a}_k^{\dagger}e^{-i(\omega_{j1}-\omega_{k})t}}. \label{eq:H-AV} } Initially the system shares one excitation in $\ket{2}$ symmetrically, and the EM field is in the vacuum state, such that \eqn{ \ket{\Psi(0)} = \frac{1}{\sqrt{N}}\sum_{m=1}^{N}\hat{\sigma}_{m,2}^{+}\ket{11\cdots 1}\ket{\{0\}}. \label{eq:psi-initial} } As the system evolves due to the atom-vacuum field interaction, it remains in the single-excitation manifold of the total atom + field Hilbert space, as one can see from the interaction Hamiltonian (Eq.\,\eqref{eq:H-AV}): \eqn{ \ket{\Psi(t)}=\bkt{\sum_{m=1}^{N}\sum_{j=2,3}c_{m,j}(t)\hat{\sigma}_{m,j}^{+}+\sum_{k}c_{k}(t)\hat{a}_k^{\dagger}}\ket{11\cdots 1}\ket{\{0\}}. 
\label{eq:psi-evolved} } Now we solve the Schr\"odinger equation to find the time evolution of the atom + field system under the atom-field interaction, using Eqs.\,\eqref{eq:psi-evolved} and \eqref{eq:H-AV}, to obtain \begin{subequations} \begin{align} & \partial_t c_{m,j}(t) = i\sum_{k} g_j(\omega_k) e^{i(\omega_{j1}-\omega_{k})t} c_{\omega_k}(t), \\ & \partial_t c_{\omega_k}(t) = i\sum_{m=1}^{N}\sum_{j=2,3}g_j(\omega_k)e^{-i(\omega_{j1}-\omega_{k})t}c_{m,j}(t). \end{align} \label{eq:de-1} \end{subequations} Formally integrating Eq.\,\eqref{eq:de-1}(b) and plugging it into Eq.\,\eqref{eq:de-1}(a), we have \eqn{ \partial_t c_{m,j}(t) = -\sum_{k}g_j(\omega_k) e^{i(\omega_{j1}-\omega_{k})t}\int_0^t \mathrm{d}{\tau}\sum_{n=1}^{N}\sum_{l=2,3}g_l(\omega_k) e^{-i(\omega_{l1}-\omega_k)\tau} c_{n,l}(\tau). } We observe that the $c_{m,2}(t)$'s (the $c_{m,3}(t)$'s) have the same initial conditions and the same evolution equation; thus we can justifiably define $c_2(t) \equiv c_{m,2}(t)$ ($c_3(t)\equiv c_{m,3}(t)$). Assuming a flat spectral density of the field and making the Born-Markov approximation, we get \begin{subequations} \begin{align} \partial_t c_2(t) &= -\frac{\Gamma_{22}^{(N)}}{2} c_2(t)-\frac{\Gamma_{23}^{(N)}}{2} e^{i\omega_{23}t}c_3(t),\\ \partial_t c_3(t) &= -\frac{\Gamma_{33}^{(N)}}{2} c_3(t)-\frac{\Gamma_{32}^{(N)}}{2} e^{-i\omega_{23}t}c_2(t), \end{align} \label{eq:de-2} \end{subequations} where we have defined $\Gamma_{jl}^{(N)}\equiv \Gamma_{jl} + Nf\Gamma_{jl}$, with $\Gamma_{jl} = \frac{\vec{d}_{j1}\cdot \vec{d}_{l1}\omega_{l1}^3}{3\pi \varepsilon_0 \hbar c^3}$ the generalized decay rate into the quasi-isotropic modes and $Nf\Gamma_{jl}$ the collective decay rate in the forward direction \cite{Bienaime_2011, Araujo_2016}. The factor $f$ is the geometrical factor coming from restricting the emission to the forward-scattered modes. 
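Equation \eqref{eq:de-2} is also straightforward to integrate numerically, which provides a useful check on the closed-form solution obtained by Laplace transform. The sketch below uses illustrative placeholder values (not the measured rates) satisfying $\Gamma^{(N)}_{jl}\ll\omega_{23}$, and confirms that $|c_2|^2$ decays at essentially the collective rate $\Gamma_{22}^{(N)}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values (hypothetical): single-atom rates (1/ns), splitting
# omega_23 (rad/ns) with Gamma^(N) << omega_23, and N f = 10.
G22, G33 = 0.04, 0.03
G23 = np.sqrt(G22 * G33)        # Gamma_23 = sqrt(Gamma_22 Gamma_33), aligned dipoles
w23 = 6.0
N, f = 10, 1.0
GN = lambda g: (1 + N * f) * g  # collective rate Gamma^(N) = Gamma + N f Gamma

def rhs(t, c):
    c2, c3 = c
    return [-0.5*GN(G22)*c2 - 0.5*GN(G23)*np.exp( 1j*w23*t)*c3,
            -0.5*GN(G33)*c3 - 0.5*GN(G23)*np.exp(-1j*w23*t)*c2]

c0 = [1/np.sqrt(N) + 0j, 0j]    # single excitation shared symmetrically in |2>
sol = solve_ivp(rhs, (0.0, 10.0), c0, t_eval=[10.0], rtol=1e-10, atol=1e-12)

# |c2|^2 versus pure collective decay at Gamma_22^(N); the ratio differs
# from 1 only at O((Gamma^(N)/omega_23)^2).
ratio = abs(sol.y[0, 0])**2 / ((1/N) * np.exp(-GN(G22) * 10.0))
```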
We emphasize here that the emission into all the modes (not specifically the forward direction), denoted by $\Gamma_{jl}$, is added phenomenologically and is not collective. Considering that the atomic dipole moments induced by the drive field are oriented along the polarization of the driving field, we obtain $\Gamma_{23}=\sqrt{ \Gamma_{22}\Gamma_{33}}$, which can be extended to $\Gamma_{23}^{(N)}=\sqrt{ \Gamma_{22}^{(N)}\Gamma_{33}^{(N)}}$. To solve the coupled differential equations, we take the Laplace transform of Eq.\,\eqref{eq:de-2}(a) and (b): \begin{subequations} \begin{align} s\tilde{c}_2(s)&=c_2(0)-\frac{\Gamma_{22}^{(N)}}{2}\tilde{c}_2(s)-\frac{\Gamma_{23}^{(N)}}{2}\tilde{c}_3(s-i\omega_{23}),\\ s\tilde{c}_3(s)&=c_3(0)-\frac{\Gamma_{33}^{(N)}}{2}\tilde{c}_3(s)-\frac{\Gamma_{32}^{(N)}}{2}\tilde{c}_2(s+i\omega_{23}), \end{align} \end{subequations} where we have defined $\tilde{c}_j(s) \equiv \int_0^{\infty} c_j(t) e^{-st} \mathrm{d}t$ as the Laplace transform of $c_j \bkt{t}$. Substituting the initial conditions, we obtain the Laplace coefficients as \begin{subequations}\begin{align} \tilde{c}_2(s)&=\,\frac{1}{\sqrt{N}}\frac{s+\frac{\Gamma_{33}^{(N)}}{2}-i\omega_{23}}{s^2+(\Gamma_{\text{avg}}^{(N)}-i\omega_{23})s-i\omega_{23}\frac{\Gamma_{22}^{(N)}}{2}},\\ \tilde{c}_3(s)&=-\frac{\Gamma^{(N)}_{32}}{2\sqrt{N}}\,\frac{1}{s^2+(\Gamma^{(N)}_{\text{avg}}+i\omega_{23})s+i\omega_{23}\frac{\Gamma^{(N)}_{33}}{2}}. 
\end{align}\end{subequations} The poles of the denominators are, respectively, \begin{subequations}\begin{align} s_{\pm}^{(2)}=&-\frac{\Gamma^{(N)}_{\text{avg}}}{2} + \frac{i\omega_{23}}{2} \pm \frac{i\delta}{2}, \\ s_{\pm}^{(3)}=&-\frac{\Gamma^{(N)}_{\text{avg}}}{2} - \frac{i\omega_{23}}{2} \pm \frac{i\delta}{2}, \end{align}\end{subequations} where we have defined $\Gamma_{\text{avg}}^{(N)}=\frac{\Gamma_{33}^{(N)}+\Gamma_{22}^{(N)}}{2}$, $\Gamma_{\text{d}}^{(N)}=\frac{\Gamma_{33}^{(N)}-\Gamma_{22}^{(N)}}{2}$, and $\delta = \sqrt{\omega_{23}^2-\bkt{\Gamma^{(N)}_{\text{avg}}}^2+2i\omega_{23}\Gamma^{(N)}_{\text{d}}}$. The real parts of the above roots correspond to the collective decay rates of the excited states, while the imaginary parts correspond to the frequencies. The fact that $\delta$ is generally a complex number unless $\Gamma_{22}=\Gamma_{33}$ means that both the decay rates and the frequencies are modified. To see this more clearly, we can expand $\delta$ up to second order in $\Gamma_{jl}^{(N)}/\omega_{23}$, considering that we are working in a spectroscopically well-separated regime ($\Gamma_{jl}^{(N)}\ll\omega_{23}$): \eqn{ \delta \approx \omega_{23}\sbkt{1-\frac{1}{2}\bkt{\frac{\Gamma^{(N)}_{23}}{\omega_{23}}}^2}+i\Gamma^{(N)}_{d}\sbkt{1+\frac{1}{2}\bkt{\frac{\Gamma^{(N)}_{23}}{\omega_{23}}}^2}, } and the above poles become \begin{subequations}\begin{align} s_{+}^{(2)}=&-\frac{\Gamma^{(N)}_{33}}{2}\bkt{1+\frac{\Gamma^{(N)}_{\text{d}}\Gamma_{22}^{(N)}}{2\omega_{23}^2}} + i\omega_{23}\sbkt{1-\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2}, \\ s_{-}^{(2)}=&-\frac{\Gamma^{(N)}_{22}}{2}\bkt{1-\frac{\Gamma^{(N)}_{\text{d}}\Gamma_{33}^{(N)}}{2{\omega_{23}^2}}} + i\omega_{23}\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2, \\ s_{+}^{(3)}=&-\frac{\Gamma^{(N)}_{33}}{2}\bkt{1+\frac{\Gamma^{(N)}_{\text{d}}\Gamma^{(N)}_{22}}{2\omega_{23}^2}} - i\omega_{23}\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2, \\ 
s_{-}^{(3)}=&-\frac{\Gamma^{(N)}_{22}}{2}\bkt{1-\frac{\Gamma^{(N)}_{\text{d}}\Gamma^{(N)}_{33}}{2\omega_{23}^2}} - i\omega_{23}\sbkt{1-\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2}. \end{align}\end{subequations} The atomic state coefficients in the time domain are \begin{subequations} \begin{align} c_2(t) &= \frac{1}{2\sqrt{N}\delta} e^{- \Gamma^{(N)}_{\text{avg}}t/2} e^{i\omega_{23}t/2} \sbkt{(-i\Gamma^{(N)}_{d}-\omega_{23}+\delta)e^{i\delta t/2} + (i\Gamma^{(N)}_{d}+\omega_{23}+\delta)e^{-i\delta t/2}},\\ c_3(t) & =\frac{i\Gamma^{(N)}_{32}}{2\sqrt{N}\delta} e^{-\Gamma^{(N)}_{\text{avg}}t/2} e^{-i\omega_{23}t/2} \sbkt{e^{i\delta t/2}-e^{-i\delta t/2}}. \end{align} \label{eq:atom-coefficients} \end{subequations} Again, expanding $\delta$ under the condition $\Gamma_{jl}^{(N)}\ll\omega_{23}$, we get \begin{subequations} \begin{align} c_2(t) &= \frac{1}{\sqrt{N}}\sbkt{ e^{- \Gamma^{(N)}_{22}t/2} - \bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2\frac{\delta^*}{\delta\,}e^{-\Gamma^{(N)}_{33}t/2}e^{i\omega_{23}t}},\\ c_3(t) & = -\frac{i\Gamma^{(N)}_{32}}{2\sqrt{N}\delta} \sbkt{e^{-\Gamma^{(N)}_{22}t/2}e^{-i\omega_{23}t}-e^{-\Gamma^{(N)}_{33}t/2}}. \end{align} \label{eq:atom-coefficients-2} \end{subequations} Note that the collection of $N$ atoms behaves like one ``super-atom'' which decays, in the forward direction, at a rate $N$ times that of an individual atom. We note that the system is superradiant not only with respect to the transition involving the initially excited level, but also with respect to the other transition, as a result of the vacuum-induced coupling between the levels. Most of the population in $\ket{2}$ decays at the rate $\Gamma_{22}^{(N)}$, while a small amount of it decays at $\Gamma_{33}^{(N)}$ with the corresponding frequency shift $\omega_{23}$. In $\ket{3}$, there are components of equal magnitude decaying at $\Gamma^{(N)}_{22}$ (frequency-shifted by $-\omega_{23}$) and at $\Gamma^{(N)}_{33}$. 
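The expansion of $\delta$ and of the poles can be checked numerically. The sketch below compares the exact expressions with their second-order forms for illustrative rate values (hypothetical, chosen with $\Gamma^{(N)}\ll\omega_{23}$), using $s_-^{(2)}$ as the example pole:

```python
import numpy as np

# Illustrative collective rates (1/ns) and splitting (rad/ns); hypothetical.
G22N, G33N = 0.44, 0.33
G23N = np.sqrt(G22N * G33N)
w23 = 6.0
Gavg = 0.5 * (G33N + G22N)
Gd   = 0.5 * (G33N - G22N)

# Exact delta versus its second-order expansion in Gamma^(N)/omega_23.
delta_exact  = np.sqrt(w23**2 - Gavg**2 + 2j * w23 * Gd)
delta_approx = (w23 * (1 - 0.5 * (G23N / w23)**2)
                + 1j * Gd * (1 + 0.5 * (G23N / w23)**2))

# Exact pole s_-^(2) = -Gavg/2 + i w23/2 - i delta/2 versus its expanded form.
s_exact  = -0.5 * Gavg + 0.5j * w23 - 0.5j * delta_exact
s_approx = (-0.5 * G22N * (1 - Gd * G33N / (2 * w23**2))
            + 1j * w23 * (G23N / (2 * w23))**2)
```

For these values the exact and expanded quantities agree to well within the neglected higher orders.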
The small but nonzero amplitude in $\ket{3}$ produces a beat at a frequency of about $\omega_{23}$. \section{Field Intensity} The light intensity at position $x$ and time $t$ (assuming the atoms are at position $x=0$ and start to evolve at time $t=0$) is \eqn{ I(x,t) = \frac{\epsilon_0 c}{2}\bra{\Psi(t)}\hat{E}^{\dagger}(x,t) \hat{E}(x,t) \ket{\Psi(t)}, } where the electric field operator is \eqn{ \hat{E}(x,t) = \int_{-\infty}^{\infty} \mathrm{d} k \, E_k \hat{a}_k e^{ikx}e^{-i\omega_k t}. } Plugging in the electric field operator and the single-excitation ansatz (Eq.\,\eqref{eq:psi-evolved}), we obtain the intensity up to a constant factor: \eqn{ I(x,t) \simeq N^2 \abs{e^{-i\omega_{23}\tau}c_2(\tau) + \frac{\Gamma_{23}}{\Gamma_{22}}c_3(\tau) }^2 \Theta(\tau), \label{eq:field-intensity} } where $\tau = t-\abs{x/v}$. Substituting Eqs.\,\eqref{eq:atom-coefficients}(a) and (b) into the above and approximating $\delta$ in the regime $\Gamma_{jl}^{(N)}\ll\omega_{23}$, we get \eqn{ \frac{I(\tau)}{I_0} = e^{-\Gamma^{(N)}_{22}\tau}+\bkt{\frac{ \Gamma^{(N)}_{33}}{2\omega_{23}}}^2 e^{-\Gamma^{(N)}_{33}\tau} + \frac{\Gamma^{(N)}_{33}}{\omega_{23}} e^{-\Gamma^{(N)}_{\text{avg}}\tau} \sin(\omega_{23}\tau+\phi), } where $I_0$ is a normalization factor which increases as the number of atoms increases. Neglecting the small second term on the right-hand side, we get the relative beat intensity normalized to the main decay amplitude, \eqn{ \text{beat amp.} = \frac{\Gamma^{(N)}_{33}}{\omega_{23}}, } and the beat phase \eqn{ \phi = \arctan\bkt{\frac{\Gamma^{(N)}_{22}}{\omega_{23}}}. } We see that even though there was no population in level $\ket{3}$ at the beginning, the vacuum field builds up a coherence between levels $\ket{2}$ and $\ket{3}$ to produce a quantum beat. This is in line with the quantum trajectory calculation for the single-atom case \cite{Hegerfeldt_1994}, with the individual decay rates replaced by the collective decay rates. 
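The structure of the normalized intensity can be made concrete by evaluating the closed form above for illustrative (not fitted) rates: after subtracting the main decay and the small second term and undoing the envelope $e^{-\Gamma^{(N)}_{\text{avg}}\tau}$, the residual oscillates with amplitude $\Gamma^{(N)}_{33}/\omega_{23}$:

```python
import numpy as np

# Hypothetical collective rates (1/ns) and splitting (rad/ns); not fitted values.
G22N, G33N = 0.44, 0.33
Gavg = 0.5 * (G22N + G33N)
w23  = 6.0
phi  = np.arctan(G22N / w23)          # beat phase

tau = np.linspace(0.0, 10.0, 4001)
I = (np.exp(-G22N * tau)
     + (G33N / (2 * w23))**2 * np.exp(-G33N * tau)
     + (G33N / w23) * np.exp(-Gavg * tau) * np.sin(w23 * tau + phi))

# Isolate the beat term and undo its envelope: the residual oscillation has
# amplitude Gamma_33^(N)/omega_23, the relative beat intensity quoted above.
beat = (I - np.exp(-G22N * tau)
          - (G33N / (2 * w23))**2 * np.exp(-G33N * tau)) * np.exp(Gavg * tau)
beat_amp = beat.max()
```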
We can verify that the collective effect manifests in the beat size and the beat phase. \section{Data analysis in Fig. \ref{fig_decay} (b)} The modulated decay profiles of the flash after the peak are magnified in Fig.\,\ref{fig_decay} (b). The purpose of the figure is to visually compare the decay rate and the relative beat intensity $I_\mathrm{b}$, so we normalize each curve by the exponential decay amplitude such that the normalized intensity starts to decay from $\approx1$ at $t=0$. In practice, we fit the $I(t)$ shown in Fig.\,\ref{fig_decay} (a) after $t=0$ using Eq.\,\eqref{eq_intensity} to obtain $I_0$ for each curve, yielding the $I(t)/I_0$ curves shown in Fig.\,\ref{fig_decay} (b). Note that, more precisely, it is the fitting curve that decays from $I(t)/I_0\approx1$, not the experimental data. In fact, the plotted data tend to lie below the fitting curves near $t=0$, due to the transient behavior around the flash peak. The inset displays the FFT of the beat signal shown in the main figure. We first subtract from the $I(t)/I_0$ data the exponential decay profile (the first term of the fitting function, Eq.\,\eqref{eq_intensity}) as well as the dc offset. The residual, which is a sinusoidal oscillation with an exponentially decaying envelope, is the beat signal represented by the second term of Eq.\,\eqref{eq_intensity}. The FFT of the beat signal has a lower background at $\omega = 0$ owing to the prior removal of the exponential decay and the offset. The linewidth of each spectrum is limited by the finite lifetime of the beat signal, which corresponds to $\Gamma^{(N)}_\mathrm{avg}$ as in Eq.\,\eqref{eq_intensity}. \vspace{2cm} \end{document}
\section{Conclusion $\&$ Future Work} \vspace{-0.3cm} In this work, we propose the novel co-motion pattern, a second-order local motion descriptor, to detect whether a video is deep-faked. Our method is fully interpretable and highly robust to slight variations such as video compression and noise. We achieve superior performance on the latest datasets under both classification and anomaly detection settings, and comprehensively evaluate various characteristics of our method, including robustness and generalizability. In the future, an interesting direction is to investigate whether more accurate motion estimation can be achieved, as well as how temporal information can be integrated into our method. \clearpage \bibliographystyle{splncs04} \section{Experiments} \label{sect:Exp} In this section, extensive experiments are conducted to empirically demonstrate the feasibility of our co-motion pattern, together with its advantages over other methods. We first describe the experiment protocol, followed by the choice of hyperparameters. The quantitative performance of our method on different datasets is reported and analyzed in Sect.~\ref{sec:quantitative}. Subsequently, we interpret the composition of the co-motion pattern, showing how it can be used to determine the genuineness of any given sequence or even an individual estimated motion set. Finally, we demonstrate the transferability and robustness of our method under different scenarios. \subsubsection{Dataset} We evaluate our method on the FaceForensics++~\cite{FaceForensics} dataset, which consists of four sub-databases that produce face forgery via different methods, i.e. Deepfake~\cite{deepfake}, FaceSwap~\cite{faceswap}, Face2Face~\cite{F2F} and NeuralTexture~\cite{NeuralTexture}. In addition, we utilize the real set from~\cite{Google_dataset} to demonstrate the similarity of co-motion patterns across real videos. 
Since each sub-database contains 1,000 videos, we form 2,000 co-motion patterns, each composed of $N$ picked $\rho$ matrices, for training and testing respectively. We use c23 and c40 to indicate the quality of the datasets, which are compressed by H.264~\cite{H264} with 23 and 40 as constant rate quantization parameters. Unless otherwise stated, all of the performance we report is achieved on c23. The validation set and testing set are split before any experiments to ensure that no overlap would interfere with the results. \subsubsection{Implementation} In this section, we specify the hyperparameters and other detailed settings needed to reproduce our method. The local motion estimation procedure is accomplished by integrating \cite{opticalflow} as the estimator and \cite{Landmark} as the landmark detector, both with the default parameter settings reported in the original papers. For the facial landmarks, we keep only the last 51 landmarks out of 68 in total, as the first 17 denote the face boundary, which is usually not manipulated. During the calculation of co-motion, we constrain $K$ to be at most 8, as there are only 8 facial components, thus avoiding unnecessary computation. Since a certain portion of frames do not contain sufficient motion, we only preserve co-motion patterns with $p\%$ of motion features having greater magnitude than the total $p\%$ of others, i.e. $p = 0.5$ with magnitude $\geq 0.85$, where the threshold is acquired by randomly sampling a set of 100 videos. An AdaBoost~\cite{AdaBoost} classifier is employed for all supervised classification tasks. For Gaussian smoothing, we set $\hat{k} = 3$ for all experiments. \subsection{Quantitative Results} \label{sec:quantitative} \begin{table}[t!] \caption{Accuracy of our method on all four forgery databases, with each treated as a binary classification task against the real videos. Performance of \cite{OpenWorld} is estimated from figures in the paper. 
} \begin{center} \begin{tabular}{l|c|c|c|c|c} \hline Method/Dataset & Deepfakes & FaceSwap & Face2Face & NeuralTexture & Combined \\ \hline Xception~\cite{FaceForensics} & 93.46\% & 92.72\% & 89.80\% & N/A & \textbf{95.73\%} \\ R-CNN~\cite{RCNN} & 96.90\% & 96.30\% & \textbf{94.35\%} & N/A & N/A \\ Optical Flow + CNN~\cite{OFCNN} & N/A & N/A & 81.61\% & N/A & N/A \\ FacenetLSTM~\cite{OpenWorld} & 89\% & 90\% & 87\% & N/A & N/A \\ \hline $N$ = 1 (Ours) & 63.65\% & 61.90\% & 56.50\% & 56.65\% & 57.05\% \\ $N$ = 10 (Ours) & 82.80\% & 81.95\% & 72.30\% & 68.50\% & 71.30\% \\ $N$ = 35 (Ours) & 95.95\% & 93.60\% & 85.35\% & 83.00\% & 88.25\% \\ $N$ = 70 (Ours) & \textbf{99.10\%} & \textbf{98.30\%} & 93.25\% & \textbf{90.45\%} & 94.55\% \\ \hline \end{tabular} \end{center} \end{table} In this section, we demonstrate the quantitative results of our method under different settings. First, we show that the co-motion pattern can adequately separate forged and real videos in classification tasks, as shown in Tab.~1. Compared with other state-of-the-art forensic methods in terms of classification accuracy, we achieve competitive performance and outperform them by a large margin on Deepfakes~\cite{deepfake} and FaceSwap~\cite{faceswap}, with $99.10\%$ and $98.30\%$ respectively. While the authors of \cite{OFCNN} similarly attempted to establish a forensic pipeline on top of motion features, our method outperforms theirs by approx. 12$\%$. It is noteworthy that \cite{RCNN,OpenWorld,FaceForensics} all exploit deep features that are learned in an end-to-end manner and consequently cannot be properly explained. By contrast, as interpretability is one of the principal factors in media forensics, our attention lies on proposing a method whose decisions can be justified, rather than on deliberately outperforming deep-learning-based methods. 
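For a concrete picture of what is being classified, the sketch below builds one $\rho$ matrix from synthetic landmark motion. Both the random motion data and the Pearson-correlation reading of $\rho$ are simplifying stand-ins for the actual motion estimator \cite{opticalflow} and the method's exact definition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for estimated local motion: T frames x 51 landmarks x 2
# components (dx, dy). A shared "base" motion mimics facial components that
# move together, as they do on real faces.
T, L = 120, 51
base = rng.standard_normal((T, 1, 2))
motion = 0.8 * base + 0.2 * rng.standard_normal((T, L, 2))

def rho_matrix(motion):
    """One rho matrix: pairwise Pearson correlation of the per-landmark
    motion features across frames (a simplified reading of co-motion)."""
    flat = motion.reshape(motion.shape[0], -1)   # T x (51*2)
    return np.corrcoef(flat, rowvar=False)       # (51*2) x (51*2)

rho = rho_matrix(motion)
```

Landmarks sharing the base motion show up as strongly correlated entries, mirroring the component-wise consistency discussed in the interpretation section.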
Equally importantly, as forgery methods are diverse and targeting each individually is expensive, we demonstrate that the proposed co-motion pattern can also be employed for anomaly detection tasks, where only the behavior of real videos needs to be modeled, and forged videos can be separated once an appropriate threshold is selected. As presented in Fig.~\ref{fig:ROCs}, we show receiver operating characteristic (ROC) curves on each forgery database with increasing $N$. The real co-motion template is constructed from 3,000 randomly selected $\rho$ matrices, against which each co-motion pattern (real or fake) is compared during evaluation. In general, our method can be used for authenticating videos even without supervision. In the next section, we show that the co-motion pattern is also robust to random noise and data compression. \begin{figure}[t] \centering \begin{subfigure}[b]{0.485\textwidth} \centering \includegraphics[width=\textwidth]{eccv2020kit/DF.jpg} \end{subfigure} \hfill \begin{subfigure}[b]{0.485\textwidth} \centering \includegraphics[width=\textwidth]{eccv2020kit/FS.jpg} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.485\textwidth} \centering \includegraphics[width=\textwidth]{eccv2020kit/F2F.jpg} \end{subfigure} \hfill \begin{subfigure}[b]{0.485\textwidth} \centering \includegraphics[width=\textwidth]{eccv2020kit/NT.jpg} \end{subfigure} \caption{Anomaly detection performance of our co-motion patterns. } \label{fig:ROCs} \end{figure} \vspace{-0.3cm} \subsection{Robustness Analysis} \label{sec:robustness} In this section, we demonstrate the robustness of our proposed method against noise and data compression, as well as the generalizability of co-motion patterns. Experiments on whether the compression rate of the video and noise affect the effectiveness of co-motion patterns are conducted, and the results are shown in Tab. 2. Empirically, co-motion has demonstrated great robustness against heavy compression (c40) and random noise, i.e. 
$N(\mu,\sigma^2)$ with $\mu = 0$ and $\sigma = 1$. Such results verify that our proposed co-motion patterns, which exploit high-level temporal information, are much less sensitive to pixel-level variation, a property that the statistics-based methods reviewed in Sect.~2.2 do not possess. \vspace{-0.7cm} \begin{table} \caption{Robustness experiment demonstrating that co-motion maintains its characteristics under different scenarios. All experiments are conducted on Deepfake~\cite{deepfake} with $N = 35$. Classification accuracy and area under curve (AUC) are reported respectively. } \begin{center} \begin{tabular}{l|c|c|c|c} \hline Setting / Dataset & Original &c23&c40& c23+noise \\ \hline Binary classification & 97.80\% & 95.95\% & 91.60\% & 91.95\% \\ \hline Anomaly detection & 98.57 & 96.14 & 93.76 & 92.60 \\ \hline \end{tabular} \end{center} \end{table} \vspace{-0.5cm} In addition to demonstrating robustness, we also investigate whether the modeled co-motion patterns are generalizable, as recorded in Tab.~3. It turns out that co-motion patterns constructed on relatively high-quality forgery databases such as NeuralTextures~\cite{NeuralTexture} and Face2Face~\cite{F2F} can easily be generalized for classifying other, lower-quality databases, while the opposite direction results in inferior accuracy. This phenomenon arises because videos forged by NeuralTextures are generally more consistent, so the learned inconsistency is more narrow and specific, whereas the types of inconsistency in low-quality databases vary greatly and are harder to model. \vspace{-0.5cm} \begin{table} \caption{Experiments demonstrating the generalizability of co-motion patterns. The same experiment setting was employed as in Tab. 1. 
} \begin{center} \begin{tabular}{l|c|c|c|c} \hline Test on / Train on & Deepfakes & FaceSwap & Face2Face & NeuralTexture \\ \hline Deepfakes & N/A & 92.15\% & 93.45\% & 95.85\% \\ FaceSwap & 84.25\% & N/A & 76.75\% & 84.95\% \\ Face2Face & 70.30\% & 64.85\% & N/A & 81.65\% \\ NeuralTexture & 76.20\% & 65.15\% & 77.85\% & N/A \\ \hline \end{tabular} \end{center} \end{table} \vspace{-1cm} \subsection{Abnormality Reasoning} \label{sec:reasoning} In this section, we explicitly interpret the implications of each co-motion pattern for an intuitive understanding. A co-motion example from real videos can be found in Fig.~6. As illustrated, the local motion at 51 facial landmarks is estimated as features, where the order of the landmarks is deliberately kept identical everywhere for better visual understanding. It is noteworthy that the order of the landmarks does not affect the performance as long as they are aligned during the experiments. Consequently, each co-motion pattern describes the relationship of any pair of local motion features, where features from the same or highly correlated facial components naturally have greater correlation. For instance, it is apparent that the two eyes generally move in the same direction, as in the center area highlighted in Fig.~6. Similarly, a weak yet stable positive correlation among the first 31 features is consistently observed in all real co-motion patterns, which conforms to the concordant movement of facial components in the upper and middle face area. We also observe a strong negative correlation, indicating opposite movements, between the upper lip and the lower lip. This is attributable to the dataset containing a large volume of videos of people talking, while in forged videos such a negative correlation is undermined, usually because the videos are synthesized in a frame-by-frame manner, so the temporal relationship is not well preserved. 
Moreover, the co-motion is normalized to the range $[0, 1]$ for visualization purposes, which weakens the visible difference between real and fake co-motion patterns; in the original scale the difference is more pronounced, as verified by the experiments. \begin{figure}[t!] \centering \includegraphics[width=0.55\textwidth, height=0.48\textwidth]{eccv2020kit/Interpret.png} \caption{An example of interpreting co-motion patterns. } \label{fig:interpret} \end{figure} For an explicit comparison, we also average 1,000 $\rho$ matrices from each source to illustrate the distinction and to show which specific motion pattern was not well learned, as in Fig.~7. Evidently, co-motion patterns from forged videos fail to model the negative correlation between the upper lip and the lower lip. Moreover, in Deepfake and FaceSwap, the positive correlation between homogeneous components (e.g. eyes and eyebrows) is also diluted, while in reality it would be difficult to force them to have uncorrelated motion. We also construct co-motion patterns on another set of real videos~\cite{Google_dataset} to illustrate the commonality of co-motion patterns across all real videos. Additionally, we show that, visually, the structure of the co-motion pattern quickly converges, as illustrated in Fig. 8, which supports our choice of building a second-order pattern, as it is less sensitive to intra-instance variation. 
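The convergence behavior in Fig. 8 can be reproduced qualitatively with synthetic data: averages of more independent noisy $\rho$ estimates agree increasingly well with each other. Everything below is a toy stand-in (random correlation estimates), not the actual video data:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_rho(dim=102, frames=60):
    """A noisy correlation-matrix estimate standing in for one rho matrix."""
    x = rng.standard_normal((frames, dim))
    return np.corrcoef(x, rowvar=False)

def averaged_pattern(N):
    """Co-motion pattern as the mean of N rho estimates."""
    return np.mean([noisy_rho() for _ in range(N)], axis=0)

# The Frobenius distance between two independently averaged patterns shrinks
# roughly like 1/sqrt(N): larger N gives a more stable co-motion pattern.
dists = [np.linalg.norm(averaged_pattern(N) - averaged_pattern(N))
         for N in (1, 10, 35)]
```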
\begin{figure*}[t] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.83\textwidth]{eccv2020kit/real_cooccurrence.jpg} {{\small Real videos}} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.83\textwidth]{eccv2020kit/deepfakes_cooccurrence.jpg} {{\small Deepfakes}} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.83\textwidth]{eccv2020kit/faceswap_cooccurrence.jpg} {{\small FaceSwap}} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.83\textwidth]{eccv2020kit/actor_cooccurrence.jpg} {{\small Real videos from \cite{Google_dataset}}} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.83\textwidth]{eccv2020kit/face2face_cooccurrence.jpg} {{\small Face2Face}} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.83\textwidth]{eccv2020kit/neuraltextures_cooccurrence.jpg} {{\small NeuralTexture}} \end{subfigure} \caption{Averaged co-motion pattern from different sources. Two real co-motion patterns (leftmost column) collectively present component-wise motion consistency while forged videos fail to maintain that property. } \label{fig:Cooccurrences} \end{figure*} \begin{figure}[t!] \centering \includegraphics[width=0.68\textwidth, height=0.3\textwidth]{eccv2020kit/Convergence.png} \caption{Co-motion pattern comparison on the same video (original and deep-faked based on the original one). As $N$ increases, both co-motion patterns gradually converge to the same structure. 
} \label{fig:convergence} \end{figure} \section{Introduction} Media forensics, which refers to judging the authenticity of given images/videos, detecting potentially manipulated regions, and reasoning about such decisions, plays an important role in preventing media data from being edited and exploited for malicious purposes, e.g., spreading fake news~\cite{FakeNews,WorldLeader}. Unlike traditional forgery methods (e.g., copy-move and splicing), which can falsify the original content at low cost but are also easily detectable, the development of deep generative models such as the generative adversarial network (GAN)~\cite{GAN} has blurred the boundary between realness and forgery more than ever, as deep models are capable of learning the distribution of real-world data remarkably well. In this paper, among all forensic-related tasks, we focus on exposing forged videos produced by face swapping and manipulation applications~\cite{FastFaceSwap,DVP,F2F,FSGAN,MakeAFace,NeuralTexture}. These methods, while initially designed for entertainment purposes, have gradually become uncontrollable, in particular because the faces of celebrities with great social impact, such as Obama~\cite{obama}, can be misused at no cost, leading to pernicious influence. \begin{figure}[t!] \centering \includegraphics[width=0.95\textwidth, height=0.51\textwidth]{eccv2020kit/clear_comparison.png} \caption{Example of motion analysis results by our method. \textbf{Landmarks} with the same color are considered to have analogous motion patterns, which are consistent with the facial structure in real videos but not in deep-faked videos. 
We compactly model such patterns and utilize them to determine the authenticity of given videos.} \label{fig:clear_comparison} \end{figure} Traditional forensic methods, which focus on detecting specific traces inevitably left during editing (e.g., inconsistency in re-sampling~\cite{Resampling}, shadowing~\cite{shadow}, reflection~\cite{Reflection}, compression quality~\cite{CompressionQuality} and noise patterns~\cite{Noise}), fail to tackle indistinguishable DNN-generated images/videos owing to the powerful generative ability of existing deep models. Therefore, the demand for forensic approaches explicitly targeting deep-faked videos is increasing. Existing deep forensic models can be readily categorized into three branches: real-forged binary classification-based methods~\cite{XRay,TwoStep,RCNN,MesoNet}, approaches detecting anomalous image statistics~\cite{ColorComponent,FaceArtifict,PRNU,Unmasking,AttributeGAN}, and high-level-information-driven methods~\cite{headpose,exposelandmark,blinking}. However, regardless of the category, their success relies heavily on a high-quality, uncompressed, and well-labeled forensic dataset to facilitate learning. Once the given data are compressed or in low resolution, their performance is inevitably affected. More importantly, these end-to-end deep forensic methods are completely unexplainable: no explicit reason can be provided to justify on what grounds a real or fake decision is made. To overcome the aforementioned issues, we propose a video forensic method based on motion features that explicitly targets deep-faked videos. Our method aims to model the conjoint patterns of local motion features from real videos, and consequently to spot abnormality in forged videos by comparing the extracted motion pattern against the real ones. To do so, we first estimate motion features at keypoints that are commonly shared across deep-faked videos. 
In order to enhance the generalizability of the obtained motion features and to eliminate noise introduced by inaccurate estimation results, we divide the motion features into groups, which are further reformed into a correlation matrix as a more compact frame-wise representation. A sequence of correlation matrices is then calculated from each video, each weighted by its grouping performance, to form the co-motion pattern, which describes the local motion consistency and correlation of the whole video. In general, co-motion patterns collected from real videos obey the movement pattern of facial structures and are homogeneous with each other regardless of content variation, while they are far less consistent across fake videos. To sum up, our contributions are four-fold: (1) We propose the co-motion pattern, a descriptor over consecutive image pairs that effectively describes local motion consistency and correlation. (2) The proposed co-motion pattern is entirely explainable, robust to video compression and pixel noise, and generalizes well. (3) We conduct experiments under both classification and anomaly detection settings, showing that the co-motion pattern accurately reveals the motion-consistency level of given videos. (4) We also evaluate our method on datasets with different quality and forgery methods to demonstrate its robustness and transferability. \begin{figure}[t!] \centering \includegraphics[width=\textwidth, height=0.45\textwidth]{eccv2020kit/framework.png} \caption{The pipeline of our proposed co-motion pattern extraction method. As illustrated, we first estimate the motion of corresponding keypoints, which are then grouped for analysis. On top of that, we construct the co-motion pattern as a compact representation describing the relationships between motion features. 
} \label{fig:framework} \end{figure} \section{Related Work} \vspace{-0.25cm} \subsection{Face Forgery by Media Manipulation} \vspace{-0.15cm} First of all, we review relevant human face forgery methods. Traditionally, methods such as copy-move and splicing, if employed for face swapping tasks, can hardly produce convincing results due to the inconsistencies in image quality~\cite{Resampling,quantization,jpeg_ghosts}, lighting~\cite{lighting,complex_lighting} and noise patterns~\cite{Noise,estimate_noise} between the tampered face region and other regions. With the rapid development of deep generative models~\cite{GAN}, the quality of generated images has significantly improved. The success of ProGAN~\cite{pggan} makes visually determining the authenticity of generated images quite challenging when focusing only on the face region. Furthermore, the artifacts remaining in boundary regions, whose corresponding distributions in the training datasets are relatively dispersed, are also progressively eliminated by \cite{StyleGANV1,StyleGANV2,glow,BigGAN}. Although these methods have demonstrated appealing generative capability, they do not target a certain identity but generate faces from random input. Currently, the capability of deep neural networks has also been exploited for human-related tasks such as face swapping~\cite{deepfake,faceswap,FastFaceSwap,F2F,NeuralTexture,FSNET,FSGAN,DeformAE}, facial expression manipulation~\cite{MakeAFace,F2F,x2face,NFE} and facial attribute editing~\cite{NFE,AttGAN,DA_Face_M,SMIT,MulGAN}, initially developed mainly for entertainment purposes (samples of deep-faked face data are shown in Fig.~\ref{fig:deepfake_samples}). However, since face swapping methods in particular have already been misused for commercial purposes, homologous techniques should be studied and devised as prevention measures before they cause irreparable adverse influence. 
\vspace{-15pt} \begin{figure} \centering \includegraphics[width=0.8\textwidth, height=0.45\textwidth]{eccv2020kit/fake_samples.png} \caption{Samples to illustrate what ``Deepfake'' is. Top left~\cite{StyleGANV2}: high-fidelity generated faces. Top right~\cite{jim}: face swapping. Bottom left~\cite{MakeAFace}: face expression manipulation, original image on top and expression-manipulated on bottom. Bottom right~\cite{MulGAN}: face attribute editing, original images on top and edited on bottom. } \label{fig:deepfake_samples} \end{figure} \vspace{-15pt} \subsection{Deep-faked Manipulation Detection} While media forensics is a long-existing field, countermeasures against deep-faked images and videos are scarce. As mentioned earlier, existing methods can be categorized into three genres: utilizing a deep neural network~\cite{XRay,FaceForensics,RCNN,MesoNet,TwoStep,OFCNN,Incremental,DetectF2F,OpenWorld}, exploiting unnatural low-level statistics, and detecting abnormality in high-level information. In the first category, the task is usually considered a binary classification problem, where a classifier is constructed to learn the boundary between original and manipulated data via hand-crafted or deep features. As one of the earliest works in this branch, \cite{MesoNet} employs an Inception-style network~\cite{Inception} with appropriate architectural improvements to directly classify each original or edited frame. Later, in order to exploit inter-frame correlation, \cite{RCNN} constructed a recurrent convolutional neural network that learns from temporal sequences. Due to the variety of video content and the characteristics of neural networks, a sufficiently large dataset is required. To alleviate this problem, \cite{OFCNN} attempted to use optical flow as input to train a neural network. 
While high classification accuracy is achieved, the features learned directly by neural networks are yet to be fully comprehended, so the decision of whether the input data have been manipulated cannot be appropriately elucidated. Regarding the second category, \cite{Unmasking,PRNU,AttributeGAN,CameraFingerprint} have all exploited the fact that current deep generative models can barely reproduce the natural noise carried by untampered images, hence using the noise pattern for authentication. In \cite{ColorComponent}, the diminutive difference in color components between original and manipulated images is used for classification. While effective, these methods are also exceedingly susceptible to the quality of the dataset. Our method lies in the third category and is constructed upon high-level information~\cite{headpose,exposelandmark}, which is generally more explainable and robust to the minute pixel changes introduced by compression or noise. Furthermore, as the co-motion pattern is derived from second-order statistics, it is more robust than~\cite{headpose,exposelandmark} to instance-wise variation. \section{Methodology} In this section, we elaborate on the details of our proposed video forensic method based on co-motion pattern extraction; the overall pipeline is illustrated in Fig.~2. First, we obtain aligned local motion features describing the movement of specific keypoints from the input videos (Sect.~\ref{sect:LME}). To eliminate instance-wise deviation, we then design high-order patterns among the extracted local motion features. Subsequently, we demonstrate how to construct co-motion patterns that describe the motion consistency of each video, as well as their usage, in Sect.~\ref{sect:CMP}. \subsection{Local Motion Estimation} \label{sect:LME} The foundation of constructing co-motion patterns is to first extract local motion features. 
Since each co-motion pattern is composed of multiple independent correlation matrices (explained in Sect.~\ref{sect:CMP}), we first expound on how to obtain local motion features from two consecutive frames. Denote a pixel on image $I$ with coordinate $(x, y)$ at time $t$ as $I(x, y, t)$; according to the brightness constancy assumption, we have~\cite{HS,opticalflow}: \begin{equation} I(x, y, t) = I(x + \Delta x, y + \Delta y, t + \Delta t) \end{equation} where $\Delta x, \Delta y$ and $\Delta t$ denote the displacements along the two spatial axes and the temporal axis, respectively. $\Delta t$ is usually set to 1 to denote two consecutive frames. This leads to the optical flow constraint: \begin{equation} \frac{\partial I}{\partial x} \Delta x + \frac{\partial I}{\partial y} \Delta y + \frac{\partial I}{\partial t} = 0 \end{equation} However, such a hard constraint makes the motion estimation sensitive to even slight changes in brightness, and therefore the gradient constancy assumption was proposed~\cite{gradient,opticalflow}: \begin{equation} \nabla I(x, y, t) = \nabla I(x + \Delta x, y + \Delta y, t + 1) \end{equation} where \begin{equation} \nabla = (\partial_x, \partial_y)^\intercal \end{equation} Based on the above constraints, the objective function can be formulated as: \begin{equation} \underset{\Delta x, \Delta y}{\min} E_{total}(\Delta x, \Delta y) = E_{brightness} + \alpha E_{smoothness} \end{equation} where: \begin{equation} \begin{split} E_{brightness} = \iint & \psi(I(x, y, t) - I(x + \Delta x, y + \Delta y, t + 1)) ~ + \\ & \psi(\nabla I(x, y, t) - \nabla I(x + \Delta x, y + \Delta y, t + 1)) dxdy \end{split} \end{equation} Here $\alpha$ denotes a weighting parameter and $\psi$ a concave cost function, and the penalization term $E_{smoothness}$ is introduced to avoid overly large motion displacements: \begin{equation} E_{smoothness} = \iint \psi(|\nabla \Delta x|^2 + |\nabla \Delta y|^2) dxdy \end{equation} In our approach, we utilize Liu's~\cite{celiu} dense optical flow to estimate 
motion over frame pairs. However, while the inter-frame movement is estimable, it cannot be used directly as motion features, because the content of each video varies considerably, which makes comparison between the estimated motion of different videos unreasonable~\cite{OFCNN}. Moreover, the estimated motion cannot be pixel-wise accurate due to the influence of noise and non-linear displacements. \begin{figure}[t!] \centering \includegraphics[width=0.8\textwidth, height=0.38\textwidth]{eccv2020kit/LME.png} \caption{Illustration of local motion estimation step.} \label{fig:lme} \end{figure} To overcome the above problems, we propose to narrow the regions of interest for comparison by locating facial landmarks. By employing an arbitrary facial landmark detector $f_{D}$, we are able to obtain a set of spatial coordinates $L$ as: \begin{equation} f_D(I) = L_I = \{l^i_I | l_I^i \in \mathbb{R}^2, 1 \leq i \leq n \} \end{equation} so that the local motion features $M_I$ can be denoted as: \begin{equation} M_I = \{m_I^i | m_I^i = I_{\Delta x, \Delta y} \oplus \mathcal{N}(l_I^i \pm \hat{k}), l_I^i \in L_I\} \end{equation} representing the Gaussian-weighted average of the estimated motion map $I_{\Delta x, \Delta y}$ over a window of radius $\hat{k}$ centered on $l_I^i$. The Gaussian smoothing is introduced to further mitigate the negative impact of inaccurate estimation results. By doing so, we align the motion features extracted from each video for equitable comparison. An intuitive illustration of this step is presented in Fig.~\ref{fig:lme}. Since some $I_{\Delta x, \Delta y}$ lack sufficient motion, we discard those with trivial magnitude via a threshold hyperparameter whose choice is discussed in Sect.~\ref{sect:Exp}. 
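To make the landmark-pooling step concrete, the following is a minimal NumPy sketch of the Gaussian-weighted averaging described above. The function name, the window radius \texttt{k}, and \texttt{sigma} are our own illustrative choices, and the dense flow field is assumed to be precomputed by any dense optical-flow estimator (the paper uses Liu's implementation):

```python
import numpy as np

def local_motion_features(flow, landmarks, k=5, sigma=2.0):
    """Gaussian-weighted average of a dense flow field around each landmark.

    flow:      (H, W, 2) array of per-pixel (dx, dy) displacements, assumed
               precomputed by any dense optical-flow estimator.
    landmarks: (n, 2) array of (x, y) landmark coordinates.
    Returns an (n, 2) array of local motion features, one per landmark.
    """
    h, w, _ = flow.shape
    # Build a (2k+1) x (2k+1) Gaussian window.
    ax = np.arange(-k, k + 1)
    gx, gy = np.meshgrid(ax, ax)
    g = np.exp(-(gx ** 2 + gy ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    feats = np.zeros((len(landmarks), 2))
    for i, (x, y) in enumerate(np.round(landmarks).astype(int)):
        # Clip the window at the image border.
        x0, x1 = max(x - k, 0), min(x + k + 1, w)
        y0, y1 = max(y - k, 0), min(y + k + 1, h)
        patch = flow[y0:y1, x0:x1]                          # (hh, ww, 2)
        win = g[(y0 - y + k):(y1 - y + k), (x0 - x + k):(x1 - x + k)]
        # Renormalize the (possibly clipped) window so weights sum to 1.
        feats[i] = (patch * win[..., None]).sum(axis=(0, 1)) / win.sum()
    return feats
```

For a constant flow field, every landmark feature recovers the constant displacement, which is a quick sanity check of the weighting logic.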
\subsection{Co-motion Patterns} \label{sect:CMP} Relying merely on the local motion features obtained above would require an incredibly large-scale dataset to cover as many scenarios as possible, which is redundant and costly. Based on the observation that a human face is an articulated structure, the intra-component correlation can also depict the motion in an efficient manner. Inspired by the co-occurrence feature~\cite{Cooccurrence}, which has been frequently employed in texture analysis, we propose to further calculate second-order statistics from the extracted local motion features. \subsubsection{Grouping Intra-Correlated Motion Features} \hfill \break \noindent In this step, we group analogous $m_I^i \in M_I$ to estimate the articulated facial structure from motion features, since motion features collected from the same facial component are more likely to share consistent movement. Meanwhile, negative correlation can also be represented, since motion features with opposite directions (e.g., upper lip and lower lip) are assigned to disjoint groups. As $m_I^i \in \mathbb{R}^2$ denotes motion in two orthogonal directions, we construct the affinity matrix $A_I$ on $M_I$ such that: \begin{equation} A_I^{i, j} = m_I^i \cdot m_I^j \end{equation} We choose the inner product over other metrics such as cosine and Euclidean distance because we wish both to emphasize correlation rather than difference and to lessen the impact of noise within $M_I$. Specifically, the inner product ensures that two highly correlated motions that both possess a certain magnitude are highlighted, while noise with trivial magnitude has relatively little effect. 
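The argument for the inner product can be illustrated with a toy example; the motion values below are invented for illustration only. Two strongly correlated motions with real magnitude dominate the affinity, while a near-static noise vector contributes almost nothing (cosine similarity would instead inflate it to a magnitude-independent value near 1):

```python
import numpy as np

# Toy motion features: two nearly identical motions and a near-static
# noise vector (values are illustrative, not from the paper).
M = np.array([
    [2.0, 0.0],     # e.g. left-eye motion
    [1.8, 0.1],     # e.g. right-eye motion, almost the same direction
    [0.01, -0.02],  # near-static landmark (estimation noise)
])

# Inner-product affinity: A[i, j] = m_i . m_j
A = M @ M.T

# The correlated pair yields a large entry; rows involving the noise
# vector stay near zero, so noise barely influences the grouping.
```

The symmetry of `A` and the dominance of the correlated pair are exactly what the spectral grouping in the next step relies on.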
The normalized spectral clustering~\cite{spectral,tutorial} is then performed, where we calculate the degree matrix $D$ such that: \begin{equation} D_I^{i, j} = \begin{cases} \sum^n_{k=1} A_I^{i, k} & \text{if $i = j$}\\ 0 & \text{if $i \neq j$}\\ \end{cases} \end{equation} and the normalized Laplacian matrix $\mathcal{L}$ as: \begin{equation} \mathcal{L} = (D_I)^{-\frac{1}{2}}(D_I - A_I)(D_I)^{-\frac{1}{2}} \end{equation} In order to split $M_I$ into $K$ disjoint groups, the first $K$ eigenvectors of $\mathcal{L}$, denoted as $\textbf{V} = \{\nu_k | k \in [1, K]\}$, are extracted to form the matrix $F \in \mathbb{R}^{n \times K}$. After normalizing each row of $F$ by its L2-norm, a K-Means clustering is used to separate $P = \{p_i | p_i = F^i \in \mathbb{R}^{K}, i \in [1, n]\}$ into $K$ clusters $C_1, \ldots, C_K$, where $C_k = \{i | p_i \in C_k\}$. However, since $K$ is not directly available in our case, we demonstrate how to determine the optimal $K$ in the next step. \subsubsection{Constructing Co-motion Patterns} \hfill \break As previously stated, determining a proper $K$ can also assist in describing the motion pattern more accurately. A straightforward approach is to iterate through all possible $K$ such that the Calinski-Harabasz index~\cite{CH} is maximized: \begin{equation} \operatorname*{arg\,max}_{K \in [2, n]} ~ f_{CH}(\{C_k | k \in [1, K]\}, K) \end{equation} where \begin{equation} f_{CH}(\{C_k | k \in [1, K]\}, K) = \frac{tr(\sum^K_y |C_y| (C_y^{\mu} - M_I^{\mu})(C_y^{\mu} - M_I^{\mu})^\intercal)}{tr(\sum^K_y \sum_{p_i \in C_y} (p_i - C_y^{\mu})(p_i - C_y^{\mu})^\intercal)} \times \frac{n - K}{K - 1} \end{equation} where $C_y^{\mu}$ is the centroid of $C_y$, $M_I^{\mu}$ is the center of all local motion features, and $tr$ denotes the trace of the corresponding matrix. 
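The grouping step can be sketched as a small NumPy-only pipeline. This is a hedged re-implementation under our own simplifications: a deterministic farthest-point k-means initialization and small numerical guards, neither of which the paper specifies:

```python
import numpy as np

def kmeans(P, K, iters=50):
    """Lloyd's k-means with deterministic farthest-point initialization."""
    C = [P[0]]
    for _ in range(1, K):
        d = np.min([np.sum((P - c) ** 2, axis=1) for c in C], axis=0)
        C.append(P[np.argmax(d)])  # seed next center far from existing ones
    C = np.array(C, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((P[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                C[k] = P[labels == k].mean(axis=0)
    return labels

def spectral_labels(A, K):
    """Normalized spectral clustering: labels plus the row-normalized
    spectral embedding F built from the K smallest eigenvectors."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = Dm @ (np.diag(d) - A) @ Dm          # symmetric normalized Laplacian
    _, V = np.linalg.eigh(L)                # eigenvalues in ascending order
    F = V[:, :K]
    F = F / np.maximum(np.linalg.norm(F, axis=1, keepdims=True), 1e-12)
    return kmeans(F, K), F

def ch_index(P, labels):
    """Calinski-Harabasz score: between- over within-cluster dispersion."""
    P, labels = np.asarray(P, float), np.asarray(labels)
    n, mu, ks = len(P), P.mean(axis=0), np.unique(labels)
    between = sum((labels == k).sum() * np.sum((P[labels == k].mean(0) - mu) ** 2)
                  for k in ks)
    within = sum(np.sum((P[labels == k] - P[labels == k].mean(0)) ** 2)
                 for k in ks)
    return (between / max(within, 1e-12)) * (n - len(ks)) / (len(ks) - 1)
```

Iterating `spectral_labels` over candidate `K` and keeping the labeling with the largest `ch_index` on the embedding rows reproduces the model-selection loop described above.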
With the optimal $K$ determined, the motion correlation matrix $\rho_{I_t, I_{t+1}}$ of two consecutive frames $I_t$ and $I_{t+1}$ can be calculated as: \begin{equation} \rho_{I_t, I_{t+1}}^{i, j} = \begin{cases} 1 & \text{if $\exists C_k$ such that $m_i \in C_k ~\&~ m_j \in C_k$}\\ 0 & \text{otherwise}\\ \end{cases} \end{equation} and consequently, the co-motion pattern of a sequence $S = \{I_1, ..., I_T\}$ is calculated as the weighted average of all correlation matrices: \begin{equation} f_{CP}(S) = \sum^{T-1}_t k_{I_t, I_{t+1}} \times f_{CH}(\{C_k | k \in [1, K]\}, k_{I_t, I_{t+1}}) \times \rho_{I_t, I_{t+1}} \end{equation} where $k_{I_t, I_{t+1}}$ denotes the optimal $K$ selected for the frame pair $(I_t, I_{t+1})$, and the weighting procedure again reduces the impact of noise: the greater the $f_{CH}$ score, the more consistent the motions are; simultaneously, a co-motion pattern constructed from noisily estimated local motion scatters more sparsely and should be weighted as less important. \subsubsection{Usage of Co-motion Patterns} \hfill \break The co-motion pattern can be utilized as a statistical feature for comparison purposes. When used for supervised classification, each co-motion pattern is normalized by its L1 norm: \begin{equation} \dot f_{CP}(S) = \frac{f_{CP}(S)}{\sum |f_{CP}(S)|} \end{equation} and $\dot f_{CP}(S)$ can then be used as a feature for arbitrary objectives. In order to illustrate that our co-motion pattern can effectively distinguish all forgery types while only being modeled on real videos, we also conduct anomaly detection experiments where a real co-motion pattern is first built as a template. Then, co-motion patterns from the real and forged databases are all compared against the template, and the naturalness is determined by a threshold. 
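The aggregation and normalization above reduce to a few array operations. The sketch below is a simplified reading in which each frame pair's weight is a single scalar (e.g., its CH score); function names are our own:

```python
import numpy as np

def correlation_matrix(labels):
    """rho[i, j] = 1 iff motion features i and j fall in the same cluster."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def co_motion(rhos, weights):
    """Weighted sum of per-frame-pair correlation matrices, with each
    weight taken as that frame pair's grouping score (e.g., CH index)."""
    rhos, weights = np.asarray(rhos, float), np.asarray(weights, float)
    return (weights[:, None, None] * rhos).sum(axis=0)

def l1_normalize(cp):
    """L1 normalization used in the supervised-classification setting."""
    return cp / np.abs(cp).sum()
```

A noisy frame pair (low grouping score) thus contributes little to the video-level pattern, which is the intended denoising effect of the weighting.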
We employ the Jensen–Shannon divergence as the distance measure between any two co-motion patterns: \begin{equation} d_{KL}(f_{CP}(S_1), f_{CP}(S_2)) = \sum_i \sum_{j=1}^{i-1} f_{CP}(S_1)^{i, j} \log\left(\frac{f_{CP}(S_1)^{i, j}}{f_{CP}(S_2)^{i, j}}\right) \end{equation} \begin{equation} d_{JS}(f_{CP}(S_1), f_{CP}(S_2)) = \frac{1}{2} d_{KL}(f_{CP}(S_1), \overline{f_{CP}}_{S_1, S_2}) + \frac{1}{2} d_{KL}(f_{CP}(S_2), \overline{f_{CP}}_{S_1, S_2}) \end{equation} where $\overline{f_{CP}}_{S_1, S_2} = \frac{f_{CP}(S_1) + f_{CP}(S_2)}{2}$ and $S_1, S_2$ denote two sequences.
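A minimal implementation of this distance, treating the L1-normalized co-motion patterns as discrete distributions over landmark pairs, might look as follows; the small `eps` guard against zero entries is our own addition:

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two co-motion patterns, treated
    as discrete distributions over landmark pairs after L1 normalization."""
    p = np.asarray(p, float).ravel()
    q = np.asarray(q, float).ravel()
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)  # the mixture distribution from the equation above
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

The divergence is 0 for identical patterns and bounded above by $\ln 2$ for disjoint ones, which makes it convenient to threshold in the anomaly-detection setting.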